Did I mention that I love Prodigy for SD 1.5 Lora training?
I just tried it with a very poor-quality image set. I extracted frames from a phone video, and CompreFace could only select 13 of them, almost all of which were a bit blurry. I started training with 8x repeats at 64/64, using the usual Prodigy script with d_coef 0.6.
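For anyone wanting to reproduce a setup like this, here is a minimal sketch of what the run above might look like. This assumes kohya-ss sd-scripts (the post doesn't name the training tool), reads "64/64" as network_dim / network_alpha, and passes d_coef through to the Prodigy optimizer; the model path is a placeholder:

```shell
# Hypothetical kohya-ss sd-scripts invocation approximating the settings described.
# With Prodigy, learning_rate stays at ~1.0 -- the optimizer adapts the step size itself,
# and d_coef scales that adapted step.
accelerate launch train_network.py \
  --pretrained_model_name_or_path="sd-v1-5.safetensors" \
  --network_module="networks.lora" \
  --network_dim=64 --network_alpha=64 \
  --optimizer_type="Prodigy" \
  --optimizer_args "d_coef=0.6" \
  --learning_rate=1.0
```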
The end result could be called good in that the subject was recognizably similar, but the generated images came out as blurry as the training samples. In other words, it captured the "style" of the sample photos well too, which would normally be a plus, but in this case it's a drawback.
I tried alpha 1, but by about the third epoch it was overcooked. That would be fine if I wanted a Lora in a few minutes, but that wasn't the goal. So I set the alpha back to 32 and took the network dim down to 32 as well. To slow the overtraining further, I also halved d_coef to 0.3.
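The alpha/dim interplay above has a simple arithmetic core: LoRA multiplies its learned weight delta by alpha / rank. A tiny sketch (the function name is mine, not from any library) shows why 64/64 and 32/32 behave the same at inference scale, while alpha 1 at dim 64 shrinks the delta drastically; with an adaptive optimizer like Prodigy, that tiny scale plausibly pushes the estimated step size way up to compensate, which would explain the fast overcooking by epoch 3:

```python
def lora_scale(alpha: float, rank: int) -> float:
    """Effective multiplier LoRA applies to its weight delta: alpha / rank.
    (Hypothetical helper for illustration; this ratio is the standard LoRA scaling.)"""
    return alpha / rank

print(lora_scale(64, 64))  # 1.0  -> full-strength delta
print(lora_scale(32, 32))  # 1.0  -> same effective scale at half the rank
print(lora_scale(1, 64))   # 0.015625 -> delta shrunk ~64x; adaptive LR may overcompensate
```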
And the miracle happened! The result yielded images scoring above 0.9 (it hit 0.97 on the first try!), and they weren't blurry! Fantastic! So for anyone interested in this version of the script, I've attached the file, plus a sample prompt for you to experiment with.
This is not 100% "style"-free, but the quality is much better, and at 32/32!
So even after nearly 100 Loras, Prodigy continues to surprise and challenge.