Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
My Proposed Parameters: Repeats: 4, Epochs: 24, Batch Size: 2, Total Steps: ~5040 (approx. 51 steps per image), Unet LR: 1e-4, Text Encoder LR: 5e-5, Scheduler: cosine, Save every N epochs: 2
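Side note on the step math: total steps should equal images × repeats × epochs ÷ batch size, which back-calculates to a dataset of roughly 105 images for the numbers above (that image count is my inference, not something stated). A quick sketch to sanity-check, ignoring gradient accumulation:

```python
# Sanity-check LoRA training step arithmetic.
# The 105-image dataset size is a back-calculated assumption, not a stated number.
def total_steps(num_images: int, repeats: int, epochs: int, batch_size: int) -> int:
    """Optimizer steps = (images * repeats * epochs) / batch_size."""
    samples_seen = num_images * repeats * epochs
    return samples_seen // batch_size

steps = total_steps(num_images=105, repeats=4, epochs=24, batch_size=2)
print(steps)  # 5040 total steps with these parameters
```

Note that with 105 images this works out to 5040 / 105 = 48 optimizer steps per image rather than ~51, so the exact per-image figure depends on the true dataset size.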
Does this recipe look solid for a high-quality Flux character LoRA? Any red flags or suggestions? Thanks!
Your point about style bleeding is exactly why I set up the style_A and style_B tags in my captions—I'm hoping that gives me the control to call them separately.
You mentioned your researched training config. Is that publicly available anywhere? I'd love to read up on it and compare it with my setup to make sure I'm on the right track.
If you want to train FLUX with maximum possible quality, this is the tutorial you are looking for. In this comprehensive tutorial, you will learn how to install Kohya GUI and use it to fully Fine-Tune / DreamBooth the FLUX model. After that, you will learn how to use SwarmUI to compare generated checkpoints / models and find the very best one to generate the most amazing image...
Based on your warning, I'm now planning to train two separate LoRAs as the safest path.
Before I separate the dataset, I had one last quick question about the "style bleed." Was your experience with this happening on Flux even when the underlying facial identity was identical across both styles?
I ask because my dataset uses the exact same face model for both my "styles," with the only changes being things like hair color and subtle makeup. I'm trying to understand if FLUX is so powerful that it still blends even those subtle, correlated changes, or if the bleeding happens mostly when training on two more distinct concepts (like different characters or art styles).
Hi Dr. I want to try a full finetune / DreamBooth with FLUX SCHNELL. Do you have configs to use in Kohya SS for this model, or do the configs for FLUX DEV work the same way?
Anyone know what would cause my training to just stop randomly? This has happened numerous times since I upgraded to a 5090 a few weeks back. No errors are displayed, and there seems to be nothing in particular triggering the event. I reinstalled everything and it runs smoothly...until it just stops and GPU utilization drops to zero. Once this occurs I'm unable to create a backup and have to restart OneTrainer and resume from backup. Very frustrating, because I have to keep restarting the training and can't leave and let it cook.
Wattage is fluctuating between approximately 350-500 W, if I'm reading it right. I'm doing an SDXL training btw. All 1024x1024, batch size 2, accumulation steps 3, just experimenting. But ever since I installed the 5090, OneTrainer just randomly stops responding.
Yea, I do use Kohya for FLUX, but recently got back into SDXL. And yes, your new config for SDXL is what I meant. I'm wondering if one of my settings is wonky.
If you are interested in using AI, generative AI applications, and open source applications on your computer, then this is the most fundamental and important tutorial that you need. In this tutorial I show and explain how to properly install the appropriate Python versions, how to switch between different Python versions, how to install diffe...