Hello everyone. I am Dr. Furkan Gözükara, a PhD in Computer Engineering. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
It's just... I trained the same 10-image LoRA of myself once without captions and once with minimal captions, and it's so hard to say which is better side by side... it's subjective. I need some more XY grids.
@Dr. Furkan Gözükara Hello everyone, I have a question about custom LoRA training on Kohya using Flux. I trained my face with 21 photos and 142 epochs, following the equation you mention in your video: 3000 / (number of photos) = number of epochs. When I write the prompt in SwarmUI using Flux.1-dev-fp8 at 1024x1024 resolution, the picture looks low quality, and at sizes above 1024 some vertical artifact lines appear in the picture (see the image). I'm thinking of training a new LoRA from zero with more images, or of using fine-tuning. What do you recommend?
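(For reference, that rule of thumb works out like this. A minimal sketch; `epochs_for_photos` is just an illustrative name, not anything from Kohya:)

```python
def epochs_for_photos(num_photos: int, target: int = 3000) -> int:
    """Rule of thumb from the video: epochs ~ 3000 / number of photos."""
    return target // num_photos

print(epochs_for_photos(21))  # 3000 // 21 = 142 epochs
```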
Hi everyone, I'm not sure which chat to post this in, but can someone explain to me how to properly use wildcards in SwarmUI? I have a weird issue with it: I have a wildcard file with 10 lines for shirt color, but when I try to generate 10 images, it chooses one color from those 10 and generates all 10 images with that color.
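(For anyone following along, here is a sketch of the usual layout. The file name `shirt_color.txt` and the example colors are assumptions, as is the exact wildcards folder location; the `<wildcard:...>` prompt syntax is from SwarmUI's docs. Also worth checking: if the seed is locked across the batch, the wildcard pick can end up deterministic, so letting the seed vary per image is worth a try:)

```text
# Wildcards/shirt_color.txt -- one option per line (assumed file name)
red
blue
green
yellow
black

# in the prompt:
a photo of a man wearing a <wildcard:shirt_color> shirt
```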
Update on https://huggingface.co/nyanko7/flux-dev-de-distill training. LR 0.0001 vs 0.00003: much better results with the lower LR, though it may be a little undertrained. Less class bleeding, but having the class in the training captions seems to decrease resemblance in non-de-distilled models. I think adding the class is the problem, so I will remove the class from the captions; I think that will eliminate class bleeding completely. You can always add the class at inference if you wish, but removing it from the captions will protect the class from bleeding. flux-dev-de-distill learns very differently from regular flux-dev: it learns the caption tokens much better. I will update tomorrow.
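(A minimal sketch of the caption change described above, i.e. stripping the class token from the training captions. It assumes captions are plain .txt files next to the images and the class is a single literal word; the path and the token "man" are hypothetical, not from the original post:)

```python
from pathlib import Path

CAPTION_DIR = Path("dataset/captions")  # hypothetical caption folder
CLASS_TOKEN = "man"                     # class word to strip (assumption)

for txt in CAPTION_DIR.glob("*.txt"):
    caption = txt.read_text(encoding="utf-8")
    # Drop the class token as a whole word, then tidy whitespace
    words = [w for w in caption.split() if w.lower() != CLASS_TOKEN]
    txt.write_text(" ".join(words), encoding="utf-8")
```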
Not from my training, because it's of family and friends that I can't share. The quality at inference is identical; the advantage is that you can train multiple people in the same LoRA. I trained 11 people in one LoRA. I'm perfecting the technique to avoid concept bleeding completely: not only between the people I'm training, which is fixed, but also keeping class bleeding to a minimum.