Hello everyone. I am Dr. Furkan Gözükara, a PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
If you want to train FLUX with the maximum possible quality, this is the tutorial you are looking for. In this comprehensive tutorial, you will learn how to install Kohya GUI and use it to fully Fine-Tune / DreamBooth the FLUX model. After that, you will learn how to use SwarmUI to compare the generated checkpoints / models and find the very best one for generating the most amazing images...
I went through and changed all the settings I could find, but I wasn't sure how to change the network_dim or network_args. I'm also not clear on the epochs, so I just set it to 16 because I'm not sure what the distinction is between epochs and max_train_epochs.
I believe network dim is the LoRA rank and alpha, which should be changed from 128 to what you see there. An epoch is like one turn over your dataset; max epoch is the maximum number of turns you want. The kohya_ss GitHub wiki explains the settings in a bit more detail.
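For reference, here is a minimal sketch (not from the video) of how the epoch/step relationship usually works in kohya-style training, assuming the common formula steps per epoch = images × repeats ÷ batch size; the function name and numbers are just for illustration.

```python
# Minimal sketch: how epochs translate into total training steps,
# assuming steps per epoch = (num_images * repeats) / batch_size.
def total_steps(num_images: int, repeats: int, epochs: int, batch_size: int) -> int:
    steps_per_epoch = (num_images * repeats) // batch_size
    return steps_per_epoch * epochs

# Example: 28 images, 1 repeat, 16 epochs, batch size 1 -> 448 steps.
print(total_steps(num_images=28, repeats=1, epochs=16, batch_size=1))
```

So setting max_train_epochs just caps how many of those turns are run before training stops.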
I see what it was: I was matching the settings to the DreamBooth tab, but network dim is in the LoRA tab. Is there a reason (that I'm too naive to know) why these settings would not work well in DreamBooth?
Hey everyone, when I am fine-tuning Stable Diffusion XL 1.0 base on pictures of a woman, why does the resulting model always lean toward that Indian clothing style? The problem is that after tuning, the model only performs well when I generate a picture of the trained character in Indian clothes. Need help here.
@Furkan Gözükara SECourses Just saw your latest post with the sample Dwayne Johnson training set; it is very useful!
I have a question: is there a particular number/proportion of Head Shot, Close Shot, Mid Shot and Full Body Shot you would use? All equal? (e.g. 7 each?). I am not sure whether some of your images are Mid Shots, Close Shots, or Full Body Shots.
Also, what number of training images would you recommend? (balancing between quality and speed - obviously, 256 images is not practical for many of us on Batch size 1!). Would you say 28 is a good amount? Or would you go with 84 like you did recently for your client?
@Furkan Gözükara SECourses I am considering training a LoRA Flux model for product photography (still life photography). I want to train a LoRA that specializes in representing still lifes, including props, lighting, and other details. I have a dataset of 200 images. What Kohya parameters would you recommend? Thanks in advance!
@Furkan Gözükara SECourses Master, I'd like to do a quick training on 10 photos. My sister wants to see how it looks LOL. It doesn't need to be high quality, so what's the best way to do it? LoRA training is probably faster than DreamBooth, right? If so, what configuration should I choose for my RTX 4090? And how many epochs for 10 photos?
LOL. I chose Rank_3_18950MB_9_05_Second_IT, 200 epochs, so 2000 steps. I expected it to be much faster on a 4090 LOL. It's showing me over 220 hours, which is not what I expected. So is there any way to train on a local PC in a reasonable amount of time?
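As a rough sanity check (assuming the config name means about 9.05 seconds per iteration and the 2000 steps come from 200 epochs × 10 images at batch size 1), the expected time would be far below 220 hours; the sketch below only does that arithmetic, so a 220 h estimate suggests the effective step count is much higher (e.g. image repeats multiplying the steps).

```python
# Rough estimate of training wall time from steps and seconds per iteration.
seconds_per_it = 9.05          # from the config name, assumed
steps = 200 * 10               # 200 epochs * 10 images, batch size 1, 1 repeat
hours = steps * seconds_per_it / 3600
print(f"{hours:.1f} hours")    # ~5.0 hours, far below the reported 220 h
```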
I am registered and I have the JSONs included in the LoRA_Tab_LoRA_Training_Best_FLUX_Configs folder, but I think they are more oriented toward training human figures or characters. Which one could I use for my purpose of creating a LoRA for still life?
Read more carefully; he already shows how to train a person and how to train a style. There is no shortcut to it. There are no perfect settings: it depends on your dataset and on your own experiments. He has already shown how to run experiments and how to check quality. The perfect settings are judged by you, not by him; he only shows the way to do it. Get to work, you lazy people.
Over at OneTrainer we have optimized the VRAM usage of Prodigy a little bit. It's now basically at Adafactor levels. Might be interesting for kohya too (I don't have a contact there).
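For anyone unfamiliar with Prodigy: here is a minimal sketch of using the optimizer via the standard prodigyopt package (which trainers like OneTrainer and kohya wrap internally); the toy model and hyperparameters below are placeholders for illustration, not the OneTrainer or kohya defaults.

```python
# Minimal sketch: Prodigy is a drop-in torch optimizer that adapts its own
# step size, so lr is conventionally left at 1.0.
import torch
from prodigyopt import Prodigy  # pip install prodigyopt

model = torch.nn.Linear(16, 16)            # stand-in for the network being trained
optimizer = Prodigy(model.parameters(),
                    lr=1.0,
                    weight_decay=0.01)

x = torch.randn(4, 16)
loss = model(x).pow(2).mean()              # dummy loss for illustration
loss.backward()
optimizer.step()
optimizer.zero_grad()
```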
Hello! Has anyone worked on "LoRA extracted from Full Fine Tune, then LoRA at different network dims"? I'm looking for someone who can help me with this for a small fee! Please DM me. I already have a prepared dataset for the character Conan (Arnold Schwarzenegger), around 100 images, and my own GPU, an RTX 3090, ready for testing.
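In case it helps get started, here is a hedged sketch of automating "extract LoRA from a full fine-tune at several network dims". The script name and flags follow kohya sd-scripts' SD/SDXL extractor (networks/extract_lora_from_models.py) as I remember them, so verify them against your installed version; FLUX checkpoints may need a different extraction script, and the file paths are placeholders.

```python
# Sketch: loop over network dims and extract a LoRA from a full fine-tune
# using kohya sd-scripts. Flag names are assumptions; check your version.
import subprocess

BASE = "base_model.safetensors"        # original base model (placeholder path)
TUNED = "conan_finetune.safetensors"   # full fine-tuned checkpoint (placeholder path)

for dim in (16, 32, 64, 128):
    subprocess.run([
        "python", "networks/extract_lora_from_models.py",
        "--model_org", BASE,
        "--model_tuned", TUNED,
        "--save_to", f"conan_lora_dim{dim}.safetensors",
        "--dim", str(dim),
    ], check=True)
```

Each resulting LoRA can then be compared in SwarmUI the same way the checkpoints are compared in the tutorial.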