Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
I put 1500 images in the regularization folder and SD still tried to create more. Later I found out that some of my images were not 512x512, so I resized them and everything worked fine. Also, don't forget to put "/" before your directory path if you are using RunPod (such as: /workspace/stable-diffusion-webui/test)
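If anyone else hits this, here's a minimal sketch of the resize fix using Pillow. The `conform`/`conform_folder` names and the 512x512 target are my own; a plain `resize` stretches non-square images, so a center crop might serve you better, but this matches what the comment describes:

```python
from PIL import Image
from pathlib import Path

TARGET = (512, 512)  # the size the training script expects in this tutorial

def conform(img: Image.Image, size=TARGET) -> Image.Image:
    """Return an RGB copy stretched to `size` (note: does not preserve aspect ratio)."""
    return img.convert("RGB").resize(size, Image.LANCZOS)

def conform_folder(folder: str, size=TARGET) -> None:
    # Hypothetical batch helper: overwrite every png/jpg in `folder` in place.
    for p in Path(folder).glob("*"):
        if p.suffix.lower() in {".png", ".jpg", ".jpeg"}:
            conform(Image.open(p), size).save(p)
```

Run `conform_folder("/workspace/reg_images")` once before training so every regularization image is exactly 512x512.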
Is there a way to set up regular DreamBooth training on an 8GB VRAM machine? I know there are hardware limitations, but maybe there's a command-line arg I don't know about.
It depends on many factors, like how many training images you have, your goal, parameters, etc. Dr. Furkan has a very detailed and useful comparison video; you can watch it here when you have free time: https://www.youtube.com/watch?v=sRdtVanSRl4
Hi, can anybody help me? I just watched one of your videos (Realistic Photos By Kohya LoRA Stable Diffusion Training) and it says to just type "man" or "woman" to generate regularization images. When I type "woman", 90% of the images are nudes. However, I noticed that all your "man" images had clothes. Will this affect my outcome?
Hey, I had the same issue yesterday after watching the tutorial. Have you looked at the prompt of the generated images? I only noticed later, by chance, that there were words in my prompt that I did not enter. This was due to the Unprompted extension, which I had installed at some point. It added keywords like "nude" and "modelshoot".
What kind of progress are you making with temporal coherence? Is this a holy grail, or do you see smoothness on the horizon? Just wondering if it's readily achievable provided time is invested. But what is the timeframe: a couple of days, weeks? Mad-scientist dedication?
I'm getting okay results. Really hit and miss depending on how complex the generation is. I'm by no means an expert at this stuff, so it's a lot of trial and error. As for when we'll achieve true temporal coherence? Could be tomorrow, could be next year. But I'm sure there are people much smarter than me working on it right now. I'm trying to learn it because I'm setting goals for myself with Stable Diffusion. Rather than just endlessly generating images, I want to progress with the tech.
CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 10.76 GiB total capacity; 4.29 GiB already allocated; 10.12 MiB free; 4.46 GiB reserved in total by PyTorch)
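That error pattern (only 20 MiB requested, 10 MiB free, but ~4.5 GiB reserved by PyTorch) often points to allocator fragmentation rather than the model genuinely not fitting. One commonly suggested mitigation, which I believe works on PyTorch 1.10+ but have not verified for every version, is to cap the allocator's split size via an environment variable before CUDA initializes; the 128 MiB value here is just an illustrative starting point:

```python
import os

# Must be set before `import torch` so the CUDA caching allocator picks it up.
# max_split_size_mb limits how large a cached block can be before it is split,
# which can reduce fragmentation-style OOMs (assumption: PyTorch >= 1.10).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```

If that doesn't help, the usual fallbacks are lowering batch size, enabling gradient checkpointing, or using fp16/8-bit optimizers.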
I have an 8GB 3070 Ti; is there any reason I wouldn't be able to use Kohya correctly?
Yeah. I followed one of your tutorials that used Torch 2.0. I got it set up, and then a git pull reverted me right back. Lol. I didn't have the heart to go through that again. I'll update in the morning when my render is done.
Our Discord: https://discord.gg/HbqgGaZVmr. How to do Stable Diffusion DreamBooth training on Google Colab for free. If I have been of assistance to you and you would like to show your support for my work, please consider becoming a patron at https://www.patreon.com/SECourses
Playlist of Stable Diffusion Tutorials, Automatic1111 and Goo...