@Dr. Furkan Gözükara Update on https://huggingface.co/nyanko7/flux-dev-de-distill LoRA training: three people of the same class in one LoRA with regularization, regularization captions containing just the class word, LR 0.00005 for both the UNet and the text encoder. Very interesting results: high diversity for the class at inference on regular flux-dev, while still keeping high resemblance for the trained subjects. One curious thing: a "." in the prompt produces a totally different image. For example, with the same seed, "Name class" gives a different image than "Name class." It is as if the "." works like a separator in the prompt; more to test. If you use regularization, use only the class word for the captions. In previous tests I used captioned regularization, and it produced unpredictable and inconsistent results. More things to test.
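In case it helps anyone reproduce the "class-only regularization captions" part: a minimal sketch that writes one caption .txt per regularization image, containing only the class word. This assumes a dataset layout where each image has a sibling .txt caption file; the directory name and class word here are hypothetical placeholders, not from the post above.

```python
from pathlib import Path

def write_class_captions(reg_dir: str, class_word: str) -> int:
    """Write a sibling .txt caption containing only the class word
    for every image in reg_dir. Returns the number of captions written."""
    image_exts = {".png", ".jpg", ".jpeg", ".webp"}
    count = 0
    # Snapshot the listing first so the .txt files we create are not re-visited.
    for img in sorted(Path(reg_dir).iterdir()):
        if img.suffix.lower() in image_exts:
            # e.g. reg_images/0001.png -> reg_images/0001.txt containing "man"
            img.with_suffix(".txt").write_text(class_word + "\n")
            count += 1
    return count

if __name__ == "__main__":
    # "reg_images" and "man" are example values; substitute your own.
    n = write_class_captions("reg_images", "man")
    print(f"wrote {n} captions")
```

This keeps every regularization caption identical to the class word, which is the setup that gave consistent results above, as opposed to per-image captions on the regularization set.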