Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
If you want to train FLUX with the maximum possible quality, this is the tutorial you are looking for. In this comprehensive tutorial, you will learn how to install Kohya GUI and use it to fully Fine-Tune / DreamBooth the FLUX model. After that, you will learn how to use SwarmUI to compare the generated checkpoints / models and find the very best one for generating the most amazing images...
@Furkan Gözükara SECourses in the FLUX fine-tuning video I saw you used 100 epochs for 256 images. So if you have, for example, 350 images, how many epochs should you choose? Or should you reduce the image count to 256?
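For what it's worth, a common rule of thumb (my assumption, not necessarily what the tutorial prescribes) is to keep the total number of optimizer steps roughly constant: steps ≈ images × epochs / batch size. A minimal sketch of the arithmetic, using the 256-image / 100-epoch baseline from the comment above:

```python
# Rule of thumb: hold the total optimizer steps roughly constant when the
# dataset size changes. Baseline values come from the comment above; the
# exact recipe in the tutorial may differ.
baseline_images = 256
baseline_epochs = 100
batch_size = 1  # assumption; substitute your actual batch size

target_steps = baseline_images * baseline_epochs // batch_size  # 25,600

new_images = 350
new_epochs = round(target_steps * batch_size / new_images)
print(new_epochs)  # ~73 epochs keeps the same step budget for 350 images
```

By this heuristic there is no need to cut the dataset down to 256 images; you would just lower the epoch count instead.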
2 days? What all is involved in such a process, and on which steps is most of the time spent? What do you think about this tool, or about gen AI in general? (I ask because I saw on your channel that you make movies.)
@Furkan Gözükara SECourses in the video you say epoch 100 is good for 256 images... but later you mention that epoch 50 is best?
What I don't understand yet: while training, can you already move the generated checkpoints somewhere else? They are essentially finished states of the trained model and are only needed for the training itself in case it fails, which would mean you could then use a checkpoint as a new starting point. Is this correct?
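As far as I understand, yes: each saved .safetensors checkpoint is a complete snapshot of the model at that epoch, so it can be archived mid-training and later used either for inference or as the base model of a new run. A minimal housekeeping sketch, with hypothetical folder names (this is my own script, not part of Kohya GUI):

```python
import shutil
from pathlib import Path

# Hypothetical paths; adjust to your own training output and archive folders.
output_dir = Path("outputs/flux_finetune")
archive_dir = Path("checkpoint_archive")
archive_dir.mkdir(parents=True, exist_ok=True)

# Per-epoch snapshots, oldest first by modification time.
ckpts = sorted(output_dir.glob("*.safetensors"), key=lambda p: p.stat().st_mtime)

# Keep the newest snapshot in place (handy if you need to restart from the
# most recent finished state) and move the rest off the training disk.
for ckpt in ckpts[:-1]:
    shutil.move(str(ckpt), archive_dir / ckpt.name)
    print(f"archived {ckpt.name}")
```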
I've read some comments about using a CLIP interrogator to filter out similar images from a dataset. Has anyone used this technique, or does anyone know of a tool that could do this?
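A CLIP interrogator itself produces captions; what usually does the deduplication is the CLIP image embedding: embed every image, then drop any image whose cosine similarity to an already-kept image exceeds a threshold. A minimal sketch using Hugging Face transformers (the folder name and the 0.96 threshold are assumptions to tune per dataset):

```python
import torch
from pathlib import Path
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

paths = sorted(Path("dataset").glob("*.jpg"))  # hypothetical dataset folder

# Embed every image and L2-normalize so a dot product equals cosine similarity.
embs = []
with torch.no_grad():
    for p in paths:
        inputs = processor(images=Image.open(p).convert("RGB"), return_tensors="pt")
        emb = model.get_image_features(**inputs)
        embs.append(emb / emb.norm(dim=-1, keepdim=True))
embs = torch.cat(embs)

# Greedy filter: keep an image only if it is not too similar to any kept one.
keep, threshold = [], 0.96  # threshold is a guess; tune per dataset
for i in range(len(paths)):
    if all(float(embs[i] @ embs[j]) < threshold for j in keep):
        keep.append(i)

print(f"kept {len(keep)} of {len(paths)} images")
```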
Is anyone aware of a platform that offers the ability to train a FLUX LoRA and then use something like adetailer/udetailer to fix the face at inference, either out of the box or with minimal setup?
I'm not looking to set up my own thing from scratch on something like RunPod; I'm hoping for something more plug-and-play, like Replicate.
Thank you for the FLUX workflow, it looks promising. But I have a question: I keep getting a "mat1 and mat2 shapes cannot be multiplied" error for CLIP-G. It looks like I'm using the wrong version; can you share the link to the CLIP file you're using? Also, it would be nice if you could share the generation data for one of your images, just so I can check whether everything is working correctly.
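On the error itself: "mat1 and mat2 shapes cannot be multiplied" is PyTorch's way of saying a tensor with the wrong hidden size hit a linear layer, which is exactly what happens if a CLIP-L checkpoint (768-dim text embeddings) is loaded where the workflow expects CLIP-G (1280-dim), or vice versa. A toy reproduction, with an arbitrary 2048 output width chosen just for illustration:

```python
import torch
import torch.nn as nn

# A projection layer sized for CLIP-G embeddings (hidden size 1280).
proj = nn.Linear(1280, 2048)

# Feeding it CLIP-L-sized embeddings (hidden size 768) reproduces the error:
# "mat1 and mat2 shapes cannot be multiplied (1x768 and 1280x2048)"
wrong = torch.randn(1, 768)
try:
    proj(wrong)
except RuntimeError as e:
    print(e)

# Matching the checkpoint to the encoder the workflow expects fixes it.
right = torch.randn(1, 1280)
print(proj(right).shape)  # torch.Size([1, 2048])
```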
I'm getting two additional options in Kohya that don't appear in the example image in the guide on Patreon: "Conv Dimension" and "Minimum Difference". Should I leave them at their default values?