Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
@Furkan Gözükara SECourses Just saw your latest post with the sample Dwayne Johnson training set; it is very useful!
I have a question: is there a particular number/proportion of Head Shot, Close Shot, Mid Shot and Full Body Shot you would use? All equal? (e.g. 7 each?). I am not sure whether some of your images are Mid Shots, Close Shots, or Full Body Shots.
Also, what number of training images would you recommend? (balancing quality and speed - obviously, 256 images is not practical for many of us at batch size 1!). Would you say 28 is a good amount? Or would you go with 84, like you did recently for your client?
@Furkan Gözükara SECourses I am considering training a LoRA Flux model for product photography (still life photography). I want to train a LoRA that specializes in representing still lifes, including props, lighting, and other details. I have a dataset of 200 images. What Kohya parameters would you recommend? Thanks in advance!
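For anyone in a similar situation, Kohya sd-scripts can take the dataset layout from a `--dataset_config` TOML file. This is only a hedged sketch of that file for a 200-image still-life set; the image path and repeat count are hypothetical placeholders, and the actual optimizer/learning-rate settings should still come from the shared configs mentioned in the thread.

```toml
# Sketch of a Kohya sd-scripts dataset_config TOML (assumptions marked below).
[general]
shuffle_caption = true
caption_extension = ".txt"   # assumes one caption .txt per image

[[datasets]]
resolution = 1024            # FLUX is commonly trained at 1024
batch_size = 1

  [[datasets.subsets]]
  image_dir = "/path/to/still_life_dataset"  # hypothetical path
  num_repeats = 1            # with 200 images, 1 repeat is usually enough
```

With 200 images at 1 repeat and batch size 1, one epoch is 200 steps, so total steps scale linearly with the epoch count you choose.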
@Furkan Gözükara SECourses Master, I'd like to do a quick training on 10 photos. My sister wants to see how it looks LOL. It doesn't need to be high quality, so what's the best way to do it? LoRA training is probably faster than DreamBooth, right? If so, which configuration should I choose for my RTX 4090? And how many epochs for 10 photos?
LOL. I chose Rank_3_18950MB_9_05_Second_IT with 200 epochs, so 2000 steps. I expected it to be much faster on a 4090 LOL. It's showing me over 220h, which is not what I expected. Is there any way to train on a local PC in a reasonable amount of time?
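As a sanity check on those numbers: a minimal sketch of the step/time arithmetic, assuming the ~9.05 s/it quoted in the preset name and the 10 images × 200 epochs from the comment above. At that speed, 2000 steps should take roughly 5 hours, so a 220h estimate suggests something else is off (e.g. a high repeat count per image, or the trainer reporting seconds-per-iteration differently).

```python
def training_eta(images: int, epochs: int, secs_per_it: float,
                 batch_size: int = 1, repeats: int = 1) -> tuple[int, float]:
    """Return (total_steps, estimated_hours) for a simple training run."""
    steps = images * repeats * epochs // batch_size
    return steps, steps * secs_per_it / 3600

# Figures taken from the comment: 10 photos, 200 epochs, ~9.05 s/it.
steps, hours = training_eta(images=10, epochs=200, secs_per_it=9.05)
print(steps, round(hours, 2))  # 2000 steps, ~5.03 hours
```

If the same formula is fed a repeats value of 40+, the estimate balloons toward the 220h figure, which is one plausible cause worth checking in the config.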
I am registered and I have the JSONs included in the LoRA_Tab_LoRA_Training_Best_FLUX_Configs folder, but I think they are more oriented toward training human figures or characters. Which one could I use for my purpose of creating a LoRA for still life?
Read more carefully: he already shows how to train a person and how to train a style. There is no shortcut to it. There are no perfect settings; it depends on your dataset and on experiments. He has already shown how to experiment and how to check quality. The perfect settings are judged by you, not him; he only shows the way to do it. Get to work, you lazy people.
Over at OneTrainer we have optimized the VRAM usage of Prodigy a little bit. It's now basically at Adafactor levels. Might be interesting for Kohya too (I don't have contact).
Hello! Has anyone worked on "LoRA extracted from Full Fine Tune, then LoRA at different network dims"? I'm looking for someone who can help me with this for a small fee! Please DM me. I already have a prepared dataset for the character Conan (Arnold Schwarzenegger), around 100 images, and my own GPU, an RTX 3090, ready for testing.
If you want to train FLUX with the maximum possible quality, this is the tutorial you are looking for. In this comprehensive tutorial, you will learn how to install Kohya GUI and use it to fully Fine-Tune / DreamBooth the FLUX model. After that, you will learn how to use SwarmUI to compare the generated checkpoints / models and find the very best one to generate the most amazing images...
Currently trying to train a character LoRA (anime, Pony checkpoint), but it also picks up that anime screencap style (the usual anime style). I wonder if there is a way to train the character without those styles and effects.
Hi @Furkan Gözükara SECourses, I saw you commented on Boris Noll's LinkedIn post. I'm curious whether you have explored these methods of erasing concepts from the Flux base model and training it for higher resolution output. I would love to learn more about this workflow (possibly also his object and concept LoRA training methods). Thank you.
Thanks for your insight. I was also very surprised by this post, since I have never seen anything like that before. Regarding the concept LoRAs, I assume he trained 3 style LoRAs for additional "automotive" looks?