Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
Can you make a video to educate us on Wan 2.1 LoRA training, covering both local computer training and Massed Compute training? For now, can you tell me if I can train a Wan 2.1 LoRA with Kohya SS?
I did one training on Flux DeDistilled, because negative prompts were important to me, and I am very satisfied with the result. I didn't see any objective benefit from the base model over the de-distilled one, while negative prompts are a huge benefit on the other hand. That said, I guess it all depends on your personal goals and needs.
I had some small luck with a piercing: https://civitai.com/user/Aikage. You can see my Sanne character has black ear clips, but they're not always consistent.
You are correct. The best way I have found is to take a lot of pictures of JUST the tattoo and tag it with a specific trigger word, then take pictures of the character both with and without the tattoo, and when the tattoo is present, tag the image with that same tattoo tag, which links it to the close-up pictures. Still, it mostly just gets the rough shape correct.
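To make the captioning step concrete, here is a minimal sketch (not the poster's actual workflow) of appending a shared trigger tag to the caption .txt files of every image that shows the tattoo, so the close-up shots and the character shots end up sharing the same tag. The folder name and the tag "sknk_tattoo" are made-up placeholders:

```python
# Minimal sketch: add a shared tattoo trigger tag to kohya-style comma-separated
# caption files for images where the tattoo is visible. Folder and tag are placeholders.
from pathlib import Path

TATTOO_TAG = "sknk_tattoo"                  # hypothetical trigger word
dataset_dir = Path("dataset/with_tattoo")   # hypothetical folder of images that show the tattoo

for caption_file in dataset_dir.glob("*.txt"):
    text = caption_file.read_text(encoding="utf-8").strip()
    tags = [t.strip() for t in text.split(",") if t.strip()]
    if TATTOO_TAG not in tags:
        tags.insert(0, TATTOO_TAG)          # put the trigger first, as is common with tag-style captions
    caption_file.write_text(", ".join(tags), encoding="utf-8")
```

Run it once over the folder of character shots where the tattoo is visible; the close-up-only images can simply use the same tag as their whole caption.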
What epoch was your best checkpoint for the batch size 7 config? I have 252 images, so I've been training for 111 epochs to reach about 4K steps, but I never went higher than that.
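(For reference, assuming batch size 7 and no gradient accumulation, that works out to 252 / 7 = 36 steps per epoch, so 111 epochs is 111 × 36 = 3,996 steps, roughly the 4K mentioned.)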
Hey, I just joined to try to get better settings for a Kohya finetune of Flux and saw the Discord pop up, so I thought I'd ask here. A 48 GB RTX 6000 Ada should be enough to finetune, right? I'm going mad here, it keeps hitting CUDA out of memory even with 512x512, batch size 1, fp16, AdamW8bit, gradient_checkpointing, and everything else I can find to reduce memory. I feel I must be doing something wrong, it's really odd.
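For context, here is a sketch of the memory-saving options people typically combine for a full Flux finetune on a single 48 GB card with kohya's sd-scripts flux branch (which the GUI wraps). This is not the poster's command and not a verified recipe: all paths are placeholders, the values are illustrative, and the exact flag names (e.g. --blocks_to_swap, --fused_backward_pass) should be checked against your sd-scripts version:

```python
# Hedged sketch, NOT a verified recipe: launch a flux_train.py run with the options
# that usually decide whether a full Flux finetune fits in 48 GB
# (Adafactor + fused backward pass + block swapping + bf16 + gradient checkpointing).
# Paths are placeholders; double-check flag names against your sd-scripts version.
import subprocess

cmd = [
    "accelerate", "launch", "flux_train.py",
    "--pretrained_model_name_or_path", "flux1-dev.safetensors",   # placeholder paths
    "--clip_l", "clip_l.safetensors",
    "--t5xxl", "t5xxl_fp16.safetensors",
    "--ae", "ae.safetensors",
    "--dataset_config", "dataset.toml",
    "--output_dir", "output", "--output_name", "flux_finetune",
    "--mixed_precision", "bf16", "--full_bf16", "--save_precision", "bf16",
    "--gradient_checkpointing",
    "--optimizer_type", "adafactor",
    "--optimizer_args", "relative_step=False", "scale_parameter=False", "warmup_init=False",
    "--fused_backward_pass",          # pairs with Adafactor, cuts optimizer memory
    "--blocks_to_swap", "8",          # swap transformer blocks to CPU RAM; raise if still OOM
    "--cache_latents_to_disk", "--cache_text_encoder_outputs",
    "--learning_rate", "1e-5", "--max_train_epochs", "10",
]
subprocess.run(cmd, check=True)
```

Roughly: bf16 weights for a ~12B-parameter model are already ~24 GB, so AdamW8bit's optimizer states plus gradients push well past 48 GB, which is why Adafactor with a fused backward pass (and block swapping) is what usually makes full finetunes fit on a single card.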
Hi, I've been trying to merge 2 Flux LoRAs together via Kohya SS and I keep failing. Say, if I normally set these 2 LoRAs to strengths of 1 and 0.6, what ratios should I put in there? And then, sometimes it merges to a LoRA of 24 GB, or 40 GB+ if I switch the save precision, so I'm not really sure which settings would be best in general.
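As a point of reference (not an answer from the thread), here is a minimal sketch of what merging two kohya-format LoRAs at given ratios means mathematically: the ratios are just the strengths you would otherwise use at inference, here 1.0 and 0.6, and the merged file should stay roughly the combined size of the two inputs. A 24 GB or 40 GB+ output is about the size of the Flux base transformer in bf16/fp32, which suggests the LoRAs are being merged into the base checkpoint rather than into a standalone LoRA. This is not the Kohya SS GUI's actual merge code; file names are placeholders and modules present in only one of the two LoRAs are skipped for brevity:

```python
# Sketch of a rank-concatenation merge: the merged weights reproduce
# ratio_a * deltaW_a + ratio_b * deltaW_b as a single LoRA (ratios assumed positive).
import math
import torch
from safetensors.torch import load_file, save_file

def merge_two_loras(path_a, path_b, ratio_a, ratio_b, out_path):
    a, b = load_file(path_a), load_file(path_b)
    merged = {}
    modules = {k.rsplit(".lora_down.weight", 1)[0] for k in a if k.endswith(".lora_down.weight")}
    modules &= {k.rsplit(".lora_down.weight", 1)[0] for k in b if k.endswith(".lora_down.weight")}
    for m in sorted(modules):
        down_a, up_a = a[f"{m}.lora_down.weight"].float(), a[f"{m}.lora_up.weight"].float()
        down_b, up_b = b[f"{m}.lora_down.weight"].float(), b[f"{m}.lora_up.weight"].float()
        rank_a, rank_b = down_a.shape[0], down_b.shape[0]
        # effective per-LoRA scale = ratio * alpha / rank
        s_a = ratio_a * float(a.get(f"{m}.alpha", rank_a)) / rank_a
        s_b = ratio_b * float(b.get(f"{m}.alpha", rank_b)) / rank_b
        # concatenate along the rank dimension so up_new @ down_new = s_a*dW_a + s_b*dW_b
        merged[f"{m}.lora_down.weight"] = torch.cat(
            [math.sqrt(s_a) * down_a, math.sqrt(s_b) * down_b], dim=0).to(torch.bfloat16)
        merged[f"{m}.lora_up.weight"] = torch.cat(
            [math.sqrt(s_a) * up_a, math.sqrt(s_b) * up_b], dim=1).to(torch.bfloat16)
        merged[f"{m}.alpha"] = torch.tensor(float(rank_a + rank_b))  # alpha == new rank -> scale of 1
    save_file(merged, out_path)

merge_two_loras("lora_character.safetensors", "lora_style.safetensors", 1.0, 0.6, "merged_lora.safetensors")
```

The merged LoRA's rank is the sum of the two input ranks, so expect the output to be about the size of both LoRAs together, not gigabytes.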