Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
@Dr. Furkan Gözükara I noticed all your generations have very clear backgrounds — how is that possible? Mine always come out with a shallow depth of field and a soft, blurry background.
The LoRA training is what makes that difference. As you train more epochs, that annoying shallow depth-of-field effect with its soft, blurry background goes away. But more training also causes overfitting, so you need a very good config and you have to decide which checkpoint to use to balance both.
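That trade-off can be sketched as a simple checkpoint-selection step: train past the point where the blur disappears, then pick the checkpoint with the best held-out score before overfitting kicks in. The epoch names and validation-loss values below are purely hypothetical, not from an actual run.

```python
# Hypothetical per-epoch validation losses from a LoRA training run.
# Loss keeps dropping as the depth-of-field blur fades, then rises
# again once the LoRA starts overfitting the training set.
checkpoints = {
    "epoch_10": 0.142,
    "epoch_20": 0.118,
    "epoch_30": 0.109,  # sweet spot: sharp backgrounds, not yet overfit
    "epoch_40": 0.121,  # overfitting: validation loss rising again
}

# Pick the checkpoint with the lowest validation loss.
best = min(checkpoints, key=checkpoints.get)
print(best)  # epoch_30
```

In practice you would also eyeball sample generations from each checkpoint, since validation loss alone does not always track visual quality.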
@linaqruf_ @oron1208 Wait for the 8B. It's basically Flux without distillation and heavy-handed DPO. This should make it easy to finetune (and DPO). We're also trying a new scaling-down mechanism for MMDiT — the new 2B is gonna work much better.
If you specify `fp8_base` for LoRA training, the FLUX model will be cast from bf16 to fp8, so VRAM usage will be the same even when you load the full (bf16) base model.
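The reason the cast halves the weight footprint is simple byte arithmetic: fp8 stores one byte per parameter versus two for bf16. A minimal back-of-the-envelope sketch, assuming a hypothetical ~12B-parameter FLUX-style transformer (the exact count depends on the checkpoint, and this estimate covers weights only, not activations or optimizer state):

```python
# Assumed parameter count for a FLUX-like model; illustrative only.
PARAMS = 12_000_000_000

# Bytes per parameter for each storage dtype.
BYTES_PER_PARAM = {"fp32": 4, "bf16": 2, "fp8": 1}

def weight_vram_gib(n_params: int, dtype: str) -> float:
    """Estimated VRAM for the model weights alone, in GiB."""
    return n_params * BYTES_PER_PARAM[dtype] / 1024**3

for dt in ("bf16", "fp8"):
    print(f"{dt}: {weight_vram_gib(PARAMS, dt):.1f} GiB")
# bf16: 22.4 GiB
# fp8: 11.2 GiB
```

So the fp8 cast frees roughly half of the base model's weight memory, which is what lets the full checkpoint fit alongside the LoRA training buffers.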