Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion.
What you are asking is currently experimental and involves some extra steps. In the end, you may or may not get the results you're looking for. If you are ready to fail a few times (like I have been for the past 3 days trying to do something experimental), I can tell you how, lol.
I am trying to do a full Dreambooth finetune on Massed Compute with 2x A6000. I am using the config provided on Patreon for 48 GB GPUs. I also set the necessary parameters in the Accelerate tab, but it keeps crashing due to OOM. With one A6000 it worked just fine. Also, is bucketing currently supported for Flux Dreambooth?
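Worth knowing about why this can happen: accelerate's default multi-GPU mode is plain data parallelism (DDP), which replicates the full model, gradients, and optimizer states on every GPU and only splits the batch between them, so per-GPU memory is not halved. A minimal sketch of that pattern with a toy stand-in model (this is not the Patreon config or the actual trainer, just the mechanism):

```python
# Minimal data-parallel sketch with Hugging Face accelerate.
# Each process loads a FULL copy of the model; prepare() only
# shards the batches -- which is why a config tuned to fill one
# 48 GB card can still OOM when launched on 2x A6000.
import torch
from accelerate import Accelerator

accelerator = Accelerator()

model = torch.nn.Linear(1024, 1024)            # stand-in for the diffusion model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
dataset = torch.utils.data.TensorDataset(torch.randn(64, 1024), torch.randn(64, 1024))
loader = torch.utils.data.DataLoader(dataset, batch_size=4)

# Wraps the model in DDP and splits only the dataloader across processes
model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

for x, y in loader:
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    accelerator.backward(loss)                 # handles gradient sync
    optimizer.step()
```

Launched with `accelerate launch --num_processes 2 train.py`, each card holds everything the single-card run held plus DDP's communication buffers, so the usual fixes are a smaller per-GPU batch, gradient checkpointing, or a sharding backend (DeepSpeed/FSDP) instead of plain DDP.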
Hello community, I am getting a 2 GB LoRA on SD; it used to be only about 147 MB and trained in about 40 minutes. What am I doing wrong now? The file is bigger, training takes about 3 hours, and the LoRA does not even look like the subject. Heeeelp!
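A LoRA file's size is driven almost entirely by the network rank (dim) and the save precision, so a jump from ~147 MB to 2 GB usually means the dim was raised a lot (e.g. 8 to 128) and/or the file is being saved in fp32 instead of fp16. A hedged sketch for checking, assuming a Kohya-style .safetensors LoRA (the filename is a placeholder):

```python
# Inspect a LoRA file: print the rank and dtype of each down-projection
# weight, plus the total on-disk tensor size.
from safetensors import safe_open

total_bytes = 0
with safe_open("my_lora.safetensors", framework="pt") as f:
    for key in f.keys():
        t = f.get_tensor(key)
        total_bytes += t.numel() * t.element_size()
        if "lora_down" in key:
            # lora_down weights are shaped (rank, in_features, ...)
            print(key, "rank:", t.shape[0], "dtype:", t.dtype)
print(f"total tensor data: {total_bytes / 1e6:.0f} MB")
```

If the ranks print as 128+ or the dtype is float32, that alone explains the file size and much of the longer training time; the poor resemblance more likely comes from whatever other settings changed alongside them.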
@Dr. Furkan Gözükara Update on my https://huggingface.co/nyanko7/flux-dev-de-distill training tests. Some bad news and some good news.

Good news: I can train many subjects at the same time without bleeding between them: 11 people in one LoRA, with simple "Name class" captions. Note: the token must be different for each subject. I saw a little bleeding with similar names like "Diego man" and "Dani man"; it works best with "NameLastname class" captions, so the tokens end up being very different.

Bad news: there is still some class bleeding. That may be my fault, because I was using a higher learning rate than recommended to get faster results. I was also using the faster presets rather than the quality presets ("Apply T5 Attention Mask" disabled, "Train T5-XXL" disabled); now I'm testing with those enabled. Other bad news: regularization images still reduce resemblance. I will try this again with "Apply T5 Attention Mask" and "Train T5-XXL" enabled and with the recommended learning rate.

The other option I didn't try is "Guidance scale for Flux1": flux-dev-de-distill has a default CFG of 3.5, but I'm leaving that option at 1 for now because I'm using regular Flux-Dev for inference, which is another thing I have to test. I will report my updates.
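To make the caption scheme above concrete, here is a minimal sketch that writes one "NameLastname class" caption file per image; the subject names and the dataset folder layout are invented for illustration:

```python
# Write simple "NameLastname class" captions, one unique token per
# subject, so identities stay separated in a multi-subject LoRA.
from pathlib import Path

subjects = {"DiegoRamos": "man", "DaniTorres": "man", "AnaSilva": "woman"}

for name, cls in subjects.items():
    folder = Path("dataset") / name            # e.g. dataset/DiegoRamos/*.jpg
    for img in folder.glob("*.jpg"):
        img.with_suffix(".txt").write_text(f"{name} {cls}")
```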