Hello everyone. I am Dr. Furkan Gözükara, a PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
@linaqruf_ @oron1208 Wait for 8B. It's basically Flux without distillation and heavy-handed DPO. This should make it easy to finetune (and DPO). We're also trying a new scaling-down mechanism for MMDiT; the new 2B is going to work much better.
If you specify fp8_base for LoRA training, FLUX will be cast from bf16 to fp8, so VRAM usage will be the same even with the full (bf16) base model.
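For intuition on what that cast saves, here is a minimal PyTorch sketch (assuming PyTorch 2.1+ with float8 support; the layer size is an illustrative placeholder, not an actual FLUX shape) comparing the per-layer weight footprint in bf16 versus fp8:

```python
import torch

# One large linear weight in bf16 (2 bytes/element) vs the same weight cast to fp8 (1 byte/element).
weight_bf16 = torch.randn(4096, 4096, dtype=torch.bfloat16)
weight_fp8 = weight_bf16.to(torch.float8_e4m3fn)  # roughly what an fp8 base-model cast does

mib = lambda t: t.element_size() * t.numel() / 2**20
print(f"bf16: {mib(weight_bf16):.0f} MiB")  # ~32 MiB
print(f"fp8:  {mib(weight_fp8):.0f} MiB")   # ~16 MiB
```

Cast this way, the base model's weights take roughly half the memory, which is why loading the full bf16 checkpoint doesn't raise VRAM usage once fp8_base is enabled.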
Has anyone done tests on interactions between multiple LoRAs, or is there documentation somewhere on how it works in detail (assuming it doesn't get so technical that it goes over my head)?
For instance, does LoRA load order matter? Is it better to train one model with two characters (like the one I did above with a character + mascot), or to train two separate LoRAs and use them both at the same time?
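On the load-order part specifically: in most implementations each LoRA is applied as an additive low-rank update to the same base weights, so when the adapters are simply summed onto the model the order shouldn't change the result; any interaction comes from the two deltas overlapping, not from which one loads first. A minimal sketch of that additive math (shapes, ranks, and strengths are made-up placeholders):

```python
import torch

torch.manual_seed(0)
W = torch.randn(64, 64)                            # one base weight matrix
A1, B1 = torch.randn(8, 64), torch.randn(64, 8)    # LoRA 1 (rank 8)
A2, B2 = torch.randn(8, 64), torch.randn(64, 8)    # LoRA 2 (rank 8)
s1, s2 = 0.8, 0.6                                  # per-LoRA strengths

merged_12 = W + s1 * (B1 @ A1) + s2 * (B2 @ A2)    # apply LoRA 1 then LoRA 2
merged_21 = W + s2 * (B2 @ A2) + s1 * (B1 @ A1)    # apply LoRA 2 then LoRA 1
print(torch.allclose(merged_12, merged_21))        # True: addition is order-independent
```

Whether one combined LoRA beats two stacked ones is a separate question: training both concepts together lets the model learn how they interact, rather than summing two independently learned deltas.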
Is there any consensus on what is causing the grid lines/scan lines in some Flux LoRAs? I've seen explanations ranging from a VAE issue, to a CUDA version issue, to a bug in kohya, to anti-AI watermarks in the training dataset. Has anyone successfully narrowed down the cause?