Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
When fine-tuning FLUX, any suggestions on lora_rank? The default on the Replicate trainer is 16, and it says higher numbers capture more features but take longer to train. Any guidance on this?
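For context on why higher ranks train slower: in LoRA, each adapted weight matrix gets two low-rank factors whose size grows linearly with the rank. A minimal sketch of the per-layer trainable parameter count (the 3072x3072 layer size here is illustrative, not a claim about FLUX's actual shapes):

```python
def lora_params(out_features: int, in_features: int, rank: int) -> int:
    """Trainable parameters LoRA adds to one linear layer.

    LoRA freezes the original weight W (out_features x in_features) and
    learns two factors: A (out_features x rank) and B (rank x in_features),
    so the added parameter count is rank * (out_features + in_features).
    """
    return rank * (out_features + in_features)


# Illustrative only: a hypothetical 3072x3072 linear layer at a few ranks.
for r in (16, 32, 64):
    print(f"rank {r}: {lora_params(3072, 3072, r):,} trainable params")
```

Doubling the rank doubles the trainable parameters (and thus memory and step time) per adapted layer, which is why rank 16 is a common starting point before trying higher values.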
I'm a long way from understanding the logic of Kohya training. Too many parameters. I can't for the life of me understand why there is no better way to do it than the folder preparation. And I have watched your video several times. It's a looooong video.
In the video you say "You need to select LORA. Because we are currently training LORA"... which we aren't, since we're now talking about fine-tuning, so it's a little bit confusing.
Just out of curiosity, approximately how many steps was the checkpoint used for this image trained for? I ask because even at around 2000 steps I'm still getting PlayStation 2 CGI-style looks, even though I'm using real photos as my dataset (50 images). It only starts to improve at much higher step counts.