Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
Is there a necessary correlation between the base model used for training the LoRA and the generated class dataset? Do the class images have to be generated by that same base model? Are we attempting to pass the model's weights (biases?) into the trained LoRA? Trying to understand the underlying logic @Furkan Gözükara SECourses
So one should try to use class images generated by the same base model used for the LoRA, in order not to mess up the model's own fine-tuning. I'm going to test this out.
Another question: how exactly does batch size affect the learning rate? I know one should divide total steps by batch size to get the actual number of optimizer steps. So for a training of 12 instance images × 10 epochs × 80 repeats (9,600 total steps) / batch size 8 = 1,200 steps. I use batch size 8 to speed up the process, but does this degrade the learning rate or the loss? And I guess 1,200 steps is on the low side; I should rather be between 1,500 and 3,000 steps after dividing by batch size, correct?
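The step arithmetic in the question above can be sketched in a few lines. This is just a sanity check of the numbers quoted in the comment; the variable names are mine, and nothing here queries Kohya SS itself. A common heuristic (not a guarantee) is that a larger batch averages the gradient over more images per optimizer step, which is why some people scale the learning rate up alongside batch size rather than leaving it unchanged.

```python
# Rough training-step math for the run described in the comment above.
instance_images = 12
repeats = 80
epochs = 10

# Each epoch shows every instance image `repeats` times.
total_image_passes = instance_images * repeats * epochs  # 9600

# With batching, several images share one optimizer step.
batch_size = 8
optimizer_steps = total_image_passes // batch_size       # 1200

print(f"image passes: {total_image_passes}, optimizer steps: {optimizer_steps}")
```

So the commenter's numbers check out: 9,600 image passes collapse into 1,200 optimizer steps at batch size 8.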
Hey guys. Thank you, Furkan, for your contributions! I have a question. I'm running DreamBooth trainings with the A1111 SD extension on an Nvidia GeForce RTX 3060 12 GB. From what I've seen in your videos, the card should be able to reach 5-7 it/s, right? I don't know if anyone here has the same card and does trainings. I'm stuck at 2.5 it/s and can't go any faster, and I've tried a lot of things. Does anyone know what I should test or check to see whether there's an issue, or is that the expected speed for this card?
#Kohya SS web GUI DreamBooth #LoRA training full tutorial. You don't need technical knowledge to follow it. In this tutorial I explain how to generate professional photo-studio-quality portraits / self-images for free with Stable Diffusion training.
Hey! Love the tutorials. I'm getting stuck when generating images in Automatic1111 with "OutOfMemoryError: CUDA out of memory." I've tried a bunch of things but haven't been able to solve it yet. Any ideas?
Get a bigger/better card, or use RunPod / Google Colab. @Furkan Gözükara SECourses shows many techniques to lower VRAM usage in his videos; make sure to watch closely.
Could somebody help me with my Vladmandic install? I was having issues with diffusion using too much VRAM, so I deleted the venv to reset it, but now I'm getting an issue with it not recognizing CLIP.
Is it possible to batch SD-upscale a folder of images, but with a different prompt per image? As if you were to load each image into PNG Info, send it to img2img, and run SD Upscale using the prompt from PNG Info, but as a single batch process?
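One building block for this is that Automatic1111 stores the generation settings inside each PNG's `parameters` text chunk (that is what the PNG Info tab reads). Below is a minimal sketch that collects the per-image prompts from a folder; the folder path, the function name, and the idea of then feeding these prompts to an upscale script or the webui API are my assumptions, not a built-in feature.

```python
# Sketch: read the prompt A1111 embeds in each PNG's "parameters" text chunk,
# so a custom batch img2img / SD Upscale run could reuse per-image prompts.
from pathlib import Path

from PIL import Image  # Pillow


def read_prompts(folder):
    """Map filename -> positive prompt for every PNG in `folder`."""
    prompts = {}
    for png in sorted(Path(folder).glob("*.png")):
        with Image.open(png) as im:
            # A1111 writes its settings under the "parameters" key;
            # images from other tools may simply lack it.
            info = im.info.get("parameters", "")
        # The positive prompt is the first line, before
        # "Negative prompt:" and the settings line.
        prompts[png.name] = info.split("\n")[0]
    return prompts
```

With the prompts collected, each image/prompt pair could be sent one at a time to img2img (e.g. via the webui's API), which effectively reproduces "PNG Info → send to img2img" as a loop.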