Hello everyone. I am Dr. Furkan Gözükara, a PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
I put 1500 images in the regularization folder and SD still tried to create more. Later I found out that some of my images were not 512x512, so I resized them and everything worked fine. Also, don't forget to put "/" before your directory path if you are using RunPod (for example: /workspace/stable-diffusion-webui/test).
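In case it helps anyone, here is a minimal sketch of one way to batch-resize a regularization folder to exactly 512x512 with ImageMagick. The path and the .png extension are placeholders for your own setup, and the trailing "!" forces the exact size while ignoring aspect ratio, so crop first if distortion matters:

    # placeholder path, adjust to your regularization folder
    cd /workspace/regularization_images
    # overwrites each PNG in place with an exact 512x512 copy
    mogrify -resize '512x512!' *.png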
Is there a way to set up regular DreamBooth training on an 8 GB VRAM machine? I know there are hardware limitations, but maybe there's a command line arg I don't know about.
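Not an authoritative answer, but as a sketch: the memory savers people usually combine for the diffusers-based DreamBooth scripts are fp16, gradient checkpointing, and 8-bit Adam. Flag names can differ between script versions, the model name, paths, and prompt below are placeholders, and even with all of these, full DreamBooth at 512x512 can still be tight on 8 GB:

    # assumed flags of a diffusers-style train_dreambooth.py; check your script's --help first
    # (--use_8bit_adam needs the bitsandbytes package installed)
    python train_dreambooth.py \
      --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
      --instance_data_dir="/workspace/instance_images" \
      --output_dir="/workspace/output" \
      --instance_prompt="photo of xyz person" \
      --resolution=512 \
      --train_batch_size=1 \
      --gradient_accumulation_steps=1 \
      --gradient_checkpointing \
      --use_8bit_adam \
      --mixed_precision=fp16 \
      --max_train_steps=1000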
It depends on many factors: how many training images you have, your goal, the parameters, etc. Dr. Furkan has a very detailed and useful comparison video; you can watch it here when you have free time: https://www.youtube.com/watch?v=sRdtVanSRl4
Hi, can anybody help me? I just watched one of your videos (Realistic Photos By Kohya LoRA Stable Diffusion Training) and it says to just type "man" or "woman" to generate regularization images. When I type "woman", 90% of the images are nudes. However, I noticed that all your "man" images had clothes. Will this affect my outcome?
Hey, I had the same thing yesterday after watching the tutorial. Have you looked at the prompt of the generated images? I only noticed by chance later that there were words in my prompt that I did not enter. This was due to the Unprompted extension, which I had installed at some point; it added keywords like "nude" and "modelshoot".
What kind of progress are you making with temporal coherence? Is this a holy grail, or do you see smoothness on the horizon? Just wondering if it's readily achievable provided time is invested. But what is the timeframe: a couple of days, weeks, mad-scientist dedication?
I'm getting okay results. It's really hit and miss depending on how complex the generation is. I'm by no means an expert at this stuff, so it's a lot of trial and error. As for when we'll achieve true temporal coherence: could be tomorrow, could be next year. But I'm sure there are people much smarter than me working on it right now. I'm trying to learn it because I'm setting goals for myself with Stable Diffusion; rather than just endlessly generating images, I want to progress with the tech.
CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 10.76 GiB total capacity; 4.29 GiB already allocated; 10.12 MiB free; 4.46 GiB reserved in total by PyTorch)
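For what it's worth, that traceback shows only about 10 MiB free on a ~10.8 GiB card while PyTorch itself has reserved ~4.5 GiB, so something else is probably holding the rest of the VRAM. A quick troubleshooting sketch (the allocator variable is optional and only helps with fragmentation):

    # list every process currently using GPU memory; close other SD instances, extra webui tabs, etc.
    nvidia-smi
    # optional PyTorch allocator hint that can reduce fragmentation; set it before relaunching
    export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128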
I have an 8 GB 3070 Ti; is there any reason I wouldn't be able to use Kohya correctly?
Yeah. I followed one of your tutorials that used Torch 2.0. I got it set up, and then a git pull reverted me right back, lol. I didn't have the heart to go through that again. I'll update in the morning when my render is done.
Our Discord: https://discord.gg/HbqgGaZVmr. How to do Stable Diffusion DreamBooth training on Google Colab for free. If I have been of assistance to you and you would like to show your support for my work, please consider becoming a patron at https://www.patreon.com/SECourses
Playlist of Stable Diffusion Tutorials, Automatic1111 and Goo...
I tried to get ChatGPT to explain dreambooth_training.py to me. So far that worked well. Then I asked what values I should use, starting from my 15 images:
After it analyzed all the code, I thought it would know what the calculation is based on, but apparently it doesn't.
According to the calculation in the video it should be 15 * 12 = 180, but ChatGPT says 15...
Furthermore, it says:
The number of class images --num_class_images should be set to 12, because this is the number of training images you have for your class.
The --sample_batch_size depends on how much GPU memory you have available. A batch size of 4 is relatively small and should be fine for most GPUs, but it depends on your specific configuration.
The --max_train_steps should be at least large enough to go through all the training images once. Since you have 12 training images and use a batch size of 1, you should set at least 12 steps, e.g. --max_train_steps=12.
The --save_interval depends on how often you want to save the model weights. A good rule is to save every few epochs. Since you only have 12 training images, you might not want to train too many epochs to avoid overfitting. For example, if you want to train 3 epochs, you can use --max_train_steps=36 and --save_interval=12 to save the model after each epoch.
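Just to spell out where the 180 above comes from (this is the video's rule of thumb as described in the comment, not ChatGPT's answer): the number of instance images times the multiplier used in the video.

    # 15 instance images x 12 (the multiplier used in the video) = 180
    num_instance_images=15
    multiplier=12
    # prints 180, the --num_class_images value the commenter expected from the video
    echo $(( num_instance_images * multiplier ))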