@Furkan Gözükara SECourses not sure if kohya and/or you still see this on a closed PR, so posting here too:
@bmaltais can we add this to the GUI? Many people want to test it.
All my proof-of-concept samples were at weight 1.
Yes, the loss of the reg steps is then very low compared to the train steps, but that's a function of the reg-step prediction already being very close to the target.
You can try weight 10 or even 1000, but I'd expect the regularization to then overwhelm the training steps, so the model doesn't learn anything anymore.
- Specify a large value for the --prior_loss_weight option (not in the dataset config). We recommend 10-1000.
- The goal is to bring the loss of training without regularization images close to the loss of training using DOP.
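To make the weighting concrete, here is a minimal sketch of how a DreamBooth-style combined loss typically applies a prior loss weight, assuming the common convention of stacking instance and regularization samples along the batch dimension. The function name `dreambooth_loss` and the batch layout are illustrative assumptions, not kohya's actual implementation; `prior_loss_weight` corresponds to the `--prior_loss_weight` option discussed above.

```python
import torch
import torch.nn.functional as F


def dreambooth_loss(model_pred: torch.Tensor,
                    target: torch.Tensor,
                    prior_loss_weight: float = 1.0) -> torch.Tensor:
    """Combine instance and prior (regularization) loss.

    Assumes the batch is stacked as [instance, prior] along dim 0,
    a common DreamBooth-style convention (an assumption here, not
    necessarily sd-scripts' exact layout).
    """
    # Split the batch into instance (train) and prior (reg image) halves.
    inst_pred, prior_pred = model_pred.chunk(2, dim=0)
    inst_tgt, prior_tgt = target.chunk(2, dim=0)

    instance_loss = F.mse_loss(inst_pred, inst_tgt)
    prior_loss = F.mse_loss(prior_pred, prior_tgt)

    # With a weight of 10-1000, the (usually very small) prior loss is
    # scaled up so the reg steps contribute comparably to the train steps.
    return instance_loss + prior_loss_weight * prior_loss
```

This also shows why a very large weight can dominate: the total gradient is the instance gradient plus `prior_loss_weight` times the prior gradient, so past some point the regularization term swamps the training signal.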