Maybe that was the difference then, as I trained the LoRAs with AdamW but the DreamBooth finetune with Adafactor (I could not fit AdamW on my GPU, or even on a rented 48GB one).
Do you think this could explain it?
I still believe full finetuning should be better, so I'm just confused about why LoRA yields better results for me.
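For reference, this is roughly how the two optimizer setups differ. It is only a minimal sketch, not my actual training script, and the parameter lists are placeholders standing in for the LoRA adapter weights and the full UNet weights:

```python
import torch
from transformers.optimization import Adafactor

# Placeholder parameter groups; in a real run these would be the small LoRA
# adapter parameters and the full set of UNet parameters respectively.
lora_params = [torch.nn.Parameter(torch.zeros(4, 4))]
unet_params = [torch.nn.Parameter(torch.zeros(4, 4))]

# LoRA run: AdamW keeps two full moment tensors per parameter, which is
# affordable because only the small adapter weights are trainable.
lora_optimizer = torch.optim.AdamW(lora_params, lr=1e-4, weight_decay=1e-2)

# Full DreamBooth finetune: Adafactor keeps factored (row/column) second-moment
# statistics instead of full tensors, so its optimizer state is much smaller,
# which is why it fits in VRAM when AdamW does not.
finetune_optimizer = Adafactor(
    unet_params,
    lr=1e-5,
    scale_parameter=False,
    relative_step=False,
    warmup_init=False,
)
```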