Hello everyone. I am Dr. Furkan Gözükara, a PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
Unlock the power of FLUX LoRA training, even if you're short on GPUs or looking to boost speed and scale! This comprehensive guide takes you from novice to expert, showing you how to use Kohya GUI for creating top-notch FLUX LoRAs in the cloud. We'll cover everything: maximizing quality, optimizing speed, and finding the best deals. With our exc...
Dr. Gözükara, do you have any teaching/posts about using NF4 models in SwarmUI? (Sorry, I'm coming from Automatic1111 and have no idea how to set up SwarmUI for FLUX with NF4 models.) Also, are the original Dev models better than the NF4 models? The NF4 models are smaller, but I am worried about the quality.
Any tips on getting more realistic results? You've replicated the face very well, but these all look like "AI images," and I think folks can recognize that now and disregard them. With FLUX training I think we've moved past the challenge of replicating faces; the new challenge is to train and prompt for more convincing results.
ComfyUI? Do you have a link to the face detailer you use? I am trying to stick with Forge since it seems to be so much faster and simpler than a ComfyUI workflow
Dr. Gözükara, can you shed light on the choice of samplers and their respective schedulers? I usually generate Asian subjects, and I have found (not necessarily true in general) that DPM-2 looks better than Euler. Can you explain how sampling works? (Or link the video if you already have one.)
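For readers wondering what actually differs between samplers like Euler and DPM-2: they are different numerical integrators for the same denoising trajectory from high noise down to zero. Below is a toy, self-contained sketch (not the actual ComfyUI/Forge code) where the "denoiser" is a hypothetical stand-in that always predicts a clean value of zero, just to show the step structure: Euler uses one derivative evaluation per step, while second-order methods (DPM-2, Heun) add a correction evaluation, which is why they cost roughly twice as much per step but can look cleaner at low step counts.

```python
# Toy sketch: samplers as ODE integrators over a noise schedule (sigmas).
# Assumption: denoise() stands in for the real model and always predicts 0.0.

def denoise(x, sigma):
    # Hypothetical denoiser: pretends the clean sample is always 0.0.
    return 0.0

def euler_step(x, sigma, sigma_next):
    # 1st-order step: one derivative evaluation.
    d = (x - denoise(x, sigma)) / sigma          # dx/dsigma estimate
    return x + d * (sigma_next - sigma)

def heun_step(x, sigma, sigma_next):
    # 2nd-order step (Heun; similar in spirit to DPM-2's extra evaluation):
    # re-estimate the derivative at the predicted point and average.
    d = (x - denoise(x, sigma)) / sigma
    x_pred = x + d * (sigma_next - sigma)
    if sigma_next == 0:
        return x_pred
    d2 = (x_pred - denoise(x_pred, sigma_next)) / sigma_next
    return x + 0.5 * (d + d2) * (sigma_next - sigma)

def sample(step_fn, sigmas, x0):
    # Walk the schedule from high noise to 0, one step per sigma pair.
    x = x0
    for s, s_next in zip(sigmas, sigmas[1:]):
        x = step_fn(x, s, s_next)
    return x

sigmas = [10.0, 5.0, 2.0, 1.0, 0.0]   # a tiny hand-picked schedule
print(sample(euler_step, sigmas, 8.0))  # → 0.0 (converges to the "clean" value)
```

The scheduler is the `sigmas` list: it decides *where* along the noise range the steps land, while the sampler decides *how* each step is taken. That is why the same sampler can look different under different schedulers.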
I have a question about LoRA training. My LoRA files are usually very large after training, while the LoRA files I download from the internet are much smaller. How can I reduce the size of my LoRA files after training?
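A note on why trained LoRAs come out large: a LoRA stores two low-rank factors per adapted weight, A (rank × in) and B (out × rank), so file size scales roughly linearly with the network rank (`network_dim` in Kohya GUI) and with bytes per parameter (fp16 saves are half the size of fp32). The sketch below is a back-of-envelope estimate, not exact trainer output, and the layer list is hypothetical; the kohya sd-scripts repo also ships a `networks/resize_lora.py` utility (check your version) for shrinking an already-trained LoRA.

```python
# Back-of-envelope LoRA size estimate (a sketch, not exact kohya output).

def lora_bytes(layers, rank, bytes_per_param=2):
    # layers: list of (in_features, out_features) per adapted weight.
    # Each adapted weight stores rank*(in + out) parameters (A and B factors).
    total_params = sum(rank * (fin + fout) for fin, fout in layers)
    return total_params * bytes_per_param

# Hypothetical set of 100 adapted 3072x3072 attention weights, saved in fp16:
layers = [(3072, 3072)] * 100

mib = lambda b: b / 1024**2
print(f"rank 128: {mib(lora_bytes(layers, rank=128)):.0f} MiB")  # big file
print(f"rank  16: {mib(lora_bytes(layers, rank=16)):.2f} MiB")   # ~8x smaller
```

So the short answer is usually: train at a lower `network_dim`, save in fp16, or resize the finished LoRA down to a lower rank afterwards.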
Bro, I followed the video. I see that with 4 GPUs you do not change your batch size to 4; you leave it at 1. Also, when I start training (I have 10,000 images, set to 8 epochs, saving every 1 epoch), I do see 2,500 steps at startup instead of 10,000. But in nvitop I only see 1 GPU being used; the other 3 are idle. Why? I thought I would see all 4 GPUs being used. I also changed the learning rate to 0.0001 in both places.
(Loaded your 4xGPU-FAST config.) Now I see only 1 batch running at 10,000 steps, not 2,500, but I do see all 4 GPUs being used in nvitop, with GPU 0 fluctuating between 25% and 100%. Training ends quickly and no model comes out. It's not working at all when loading your config. Your config loads with a 0.00005 learning rate, not 0.0001 like your video shows.
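For context on the 2,500 vs 10,000 step counts in the two comments above: under data-parallel training (the accelerate/DDP setup used by kohya-based trainers), each of the N GPUs processes its own batch per optimizer step, so the effective batch is `batch_size × num_gpus` and the displayed steps per epoch shrink accordingly. The helper below is a hypothetical sketch of that arithmetic, not trainer code; exact displayed counts can also depend on dataset repeats and gradient accumulation.

```python
# Sketch of data-parallel step arithmetic (assumption: each of num_gpus
# processes one batch per optimizer step, so effective batch = batch * gpus).

def steps_per_epoch(num_images, batch_size, num_gpus, repeats=1):
    effective_batch = batch_size * num_gpus
    return (num_images * repeats) // effective_batch

# 10,000 images, batch size 1:
print(steps_per_epoch(10_000, 1, 1))  # single GPU: 10000 steps per epoch
print(steps_per_epoch(10_000, 1, 4))  # 4 GPUs:      2500 steps per epoch
```

So seeing 2,500 steps with 4 GPUs active is the expected behavior, not a bug; seeing 10,000 steps with 4 GPUs busy suggests the launcher did not actually shard the work across processes.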