Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
I did a finetune of FLUX dev and the results are amazing. I used the Extract FLUX LoRA tab to extract a LoRA from the finetuned model, but applying the extracted LoRA makes no difference compared to not using a LoRA at all. @Dr. Furkan Gözükara, have you tried the Extract FLUX LoRA feature?
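For context on what an extraction step like this does conceptually: it takes the weight difference between the finetuned and base checkpoints and keeps only a low-rank approximation of it. The sketch below is a minimal, hypothetical illustration of that idea in plain PyTorch (one layer, SVD truncation), not Kohya's actual implementation; `extract_lora` and the rank value are made up for the example.

```python
# Hypothetical sketch of LoRA extraction for a single weight matrix:
# factor the finetune delta (w_tuned - w_base) into two low-rank matrices
# via truncated SVD. Not the Kohya sd-scripts code path.
import torch

def extract_lora(w_base: torch.Tensor, w_tuned: torch.Tensor, rank: int = 16):
    """Return factors (up, down) such that up @ down ≈ w_tuned - w_base."""
    delta = (w_tuned - w_base).float()
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    up = u[:, :rank] * s[:rank]   # shape (out_features, rank)
    down = vh[:rank, :]           # shape (rank, in_features)
    return up, down

# Toy check: a delta that is truly rank 2 is recovered almost exactly.
torch.manual_seed(0)
base = torch.randn(64, 64)
true_delta = torch.randn(64, 2) @ torch.randn(2, 64)
up, down = extract_lora(base, base + true_delta, rank=2)
err = (up @ down - true_delta).norm() / true_delta.norm()
print(f"relative error: {err:.2e}")
```

If the real finetune delta has much higher effective rank than the extraction rank, the factors can only capture part of the change, which would be consistent with the extracted LoRA looking weak.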
I’m confused about what fine tuning is. Do we basically do the same training as in this video https://youtu.be/nySGu12Y05k?feature=shared but with the configuration from the new update on your Patreon?
Ultimate Kohya GUI FLUX LoRA training tutorial. This tutorial is the product of 9 days of non-stop research and training. I have trained over 73 FLUX LoRA models and analyzed them all to prepare this tutorial video. The research is still ongoing, and hopefully the results will be significantly improved; the latest configs and findings will be shared. Please wat...
2 issues: sometimes it works, but it looks nothing like the finetuned model. Other times it doesn't work and I get the error below: During handling of the above exception, another exception occurred: Traceback (mos...
Well, for me it is. The extracted LoRA does not look like my finetuned checkpoint. At 1.5 strength it starts to look more like it, but that creates a lot of other issues and graininess, so it is definitely not the best approach.
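There is a plausible reason strength 1.5 behaves this way. Applying an extracted LoRA at strength s gives roughly W_base + s·(up @ down); if the extraction rank was too low to capture the full finetune delta, s > 1 amplifies the part that was captured but never recovers what was truncated, and the reconstruction error grows. A small numerical sketch of that effect (plain PyTorch, illustrative only; the rank-8 "true delta" is an assumption):

```python
# Why strength > 1.0 is a band-aid: scaling a truncated low-rank
# approximation moves it *away* from the true delta, not toward it.
import torch

torch.manual_seed(0)
# Pretend the true finetune change for one layer has rank 8.
delta = torch.randn(128, 8) @ torch.randn(8, 128)

# Extract at a deliberately too-small rank, as a low-rank extraction might.
u, s, vh = torch.linalg.svd(delta, full_matrices=False)
rank = 2
approx = (u[:, :rank] * s[:rank]) @ vh[:rank]

err_10 = (1.0 * approx - delta).norm() / delta.norm()
err_15 = (1.5 * approx - delta).norm() / delta.norm()
print(f"strength 1.0: relative error {err_10:.3f}")
print(f"strength 1.5: relative error {err_15:.3f}")
```

Because the truncated part is orthogonal to the kept part, overscaling always increases the error, which may be why higher strength brings the look closer in some respects while adding graininess.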
Has anyone gotten the train-only-specific-layers option to actually show a speed increase? I'm getting the same speed training only 2 layers as I do when I train them all.
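One possible explanation, sketched below in plain PyTorch (not Kohya's implementation; the 12-layer toy model is an assumption): freezing layers with `requires_grad=False` skips their weight-gradient computation and optimizer state, but the forward pass still runs every layer, and the backward pass must still traverse any frozen layers sitting above a trainable one. So wall-clock speedup from training only 2 layers can be small.

```python
# Sketch: train only the first 2 of 12 blocks. The forward pass still runs
# all 12, and backward still flows through the 10 frozen blocks above the
# trainable ones to reach them.
import torch
import torch.nn as nn

model = nn.Sequential(*[nn.Linear(256, 256) for _ in range(12)])

trainable = {0, 1}  # train only the first two layers
for i, layer in enumerate(model):
    for p in layer.parameters():
        p.requires_grad = i in trainable

n_train = sum(p.numel() for p in model.parameters() if p.requires_grad)
n_total = sum(p.numel() for p in model.parameters())
print(f"trainable params: {n_train}/{n_total}")

loss = model(torch.randn(4, 256)).sum()
loss.backward()  # traverses the whole graph; frozen weights get no .grad
```

After `backward()`, frozen layers have `weight.grad is None`, so their gradient math is skipped, but the activation backprop through them is not.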