Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
Hi Furkan, are you referring to a flux fine-tune in Kohya, and not a dreambooth training? If so, what is the main difference between dreambooth and fine-tuning? Is fine-tuning more for style/concept, versus dreambooth being better for specific subjects? Thanks
Hi Furkan, I've been experimenting with Flux dreambooth and LoRA training for several months now, using a high-quality dataset of 3D renders and photographs of a specific car (which is not in the base model).
After extensive testing, I am actually getting better results with Flux LoRAs than with a dreambooth fine-tune. The fine-tuned model does not follow the car's body well, while the LoRA replicates the car's shape a lot better.
For the LoRA, I found that AdamW with network dimension 128 and alpha 64 worked best for me. For the dreambooth, I've used your configs and left them untouched.
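In case it's useful, this is roughly what those LoRA settings look like in a kohya-style TOML config (just a sketch, not my exact file; the path and learning rate are placeholders, and the key names are from the sd-scripts flux branch as I understand it, so double-check them against your version):
```toml
# Sketch of the LoRA settings mentioned above (kohya sd-scripts flux branch; verify key names for your version).
pretrained_model_name_or_path = "/path/to/flux1-dev.safetensors"  # placeholder path
network_module = "networks.lora_flux"    # flux LoRA module in the sd-scripts flux/sd3 branch
network_dim = 128                        # network dimension (rank) used in my runs
network_alpha = 64                       # alpha used in my runs
optimizer_type = "AdamW"                 # AdamW, as in the LoRA runs above
learning_rate = 1e-4                     # placeholder; not the point of the comparison
max_train_steps = 6000                   # roughly where my LoRA results looked good
```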
I am surprised that this is the case, as you would expect the opposite. Even the LoRAs extracted from the trained checkpoint performed worse than the directly trained LoRA.
I know this is very little information to go on; I can send you more grid tests if you want, but I am wondering if you have an idea where to look. The only thing I can think of is that the LoRA uses AdamW and the dreambooth uses Adafactor. I could not use AdamW for dreambooth, not even on a 48GB GPU.
My goal and quality metric is product (car) accuracy, so perhaps my configs should differ from yours, which are more geared towards people?
PS. I've kept all the common settings the same, like image repeats, dataset/captions, noise offset, and SNR. And I have tested multiple epochs to find the best-performing one.
I would be happy to test your LoRA configuration on a character/person if you're willing to share it. I have fine-tuned models and created LoRAs for a couple of months and my findings match Furkan's results, so it would be nice to see if there is a better configuration for LoRAs.
I'm happy to share my json or toml file in a message (maybe dropping it here is against the community guidelines?). Keep in mind that I am absolutely not an ML expert; I am a 3D product visualization artist dabbling with AI. However, I have a beefy PC (4090 + 3090 + 3080), so I've been able to do a lot of testing, running 2 trainings at the same time. Most of my "knowledge" comes from using custom Claude & Perplexity projects with documentation about LoRA, kohya, etc. In any case, I am also very surprised by these results, as I am sure Furkan has found the best configs
Maybe that was the difference then, as I trained the LoRAs with AdamW but the dreambooth fine-tune with Adafactor (I could not fit AdamW on my GPU or on a rented 48GB one). Do you think this could be the difference? I still believe fine-tuning should be better, so I'm just confused about why LoRA yields better results for me.
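For comparison, the Adafactor side of my dreambooth config is roughly along these lines (again just a sketch, not the exact file; the optimizer_args are the ones commonly recommended for Adafactor in kohya, so please double-check them against your setup):
```toml
# Sketch of an Adafactor fine-tune setup (what fits in 24-48GB VRAM for me, where AdamW did not).
optimizer_type = "Adafactor"
optimizer_args = ["scale_parameter=False", "relative_step=False", "warmup_init=False"]
learning_rate = 1e-5            # placeholder; fine-tune LRs are typically much lower than LoRA LRs
lr_scheduler = "constant"       # commonly paired with Adafactor when relative_step=False
full_bf16 = true                # assumption: bf16 everywhere to reduce VRAM; check your own config
```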
Hey there! Has anyone out here deployed flux lora training as an API? Would be happy to chat, or I'd be grateful if you can share something useful on how to deploy to runpod/other compute providers. P.S. If you can share some useful presets for training a person lora, beer's on me haha!
Does training lora for Noobai v-pred require different settings? I'm using kohya_ss gui (dev branch that is up to date with the latest script). Someone somewhere said that it needs
Recently, when people train a lora with kohya and also train the CLIP text encoder together with it, they claim that the lora trained together with the CLIP text encoder gives much better results. I wonder whether your latest lora training json config has the CLIP text encoder trained along with the lora or not?
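For context, these are the options I mean (as far as I understand the kohya flux scripts; names may differ between versions, so treat this as a sketch, not a recipe):
```toml
# Text-encoder options for a flux LoRA in kohya sd-scripts (sketch; verify in your version).
# If network_train_unet_only is set, the text encoders are NOT trained.
network_train_unet_only = false          # false = also train the CLIP text encoder LoRA
text_encoder_lr = 5e-5                   # placeholder; often set lower than the unet LR
network_args = ["train_t5xxl=False"]     # assumption: T5-XXL training is a separate switch
```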
I'll keep testing and let you know. In general the dreambooth fine-tune does preserve small details better (logos, brake calipers, etc.), but the LoRA follows the car's shape a lot better. I need to test it further with a ControlNet and a depth map (taken from the 3D model) to see if this car body issue can be resolved that way. And maybe as a final test I can rent an 80GB+ GPU to do an AdamW dreambooth training run.
Are you sure the fine-tune checkpoint is not just undertrained? How many images do you have in the training dataset, and how many training steps did you do for the lora and the fine-tuning?
The thing is, lora training is just much faster in terms of steps to resemblance compared to a full checkpoint, but the checkpoint should yield better generalization
This chat is full of friendly people sharing their knowledge of training flux. For starters, just follow Dr. Furkan's basic tutorials on the subject you are interested in; they cover 99% of your questions
Good point, thanks for your suggestion. The dataset has 45 images. LoRA produces good results at around 6000 steps. For dreambooth I am trying around 9 - 10K steps. Do you think I should train longer still?
Yes, continue training if you don't see model sanity degradation. You can get better results compared to the lora at 18-20k steps or more, if your use case requires perfect resemblance at the cost of lower model sanity
Great, thanks, I will give that a try. Cost and speed are not a priority; obtaining the best product accuracy/quality is, so I will give this a go by training a lot longer and comparing the epochs in a grid. Cheers
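For the grid I'll just save intermediate checkpoints along the way, something like this (sketch; the values are examples, not a recommendation):
```toml
# Save intermediate checkpoints so the epochs/steps can be compared in an XY grid afterwards.
max_train_steps = 20000        # the 18-20k range suggested above
save_every_n_steps = 1000      # one checkpoint per 1000 steps to compare
save_last_n_steps = 20000      # keep all of them; lower this to limit disk usage
```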
Would it help to also train the text encoder (at 50% unet LR for example) and use T5 attention mask for dreambooth training just as we do with LoRA training? Has anyone seen any benefits?
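To be concrete, I mean options along these lines (kohya flux-branch names as I understand them; whether the full fine-tune script actually exposes text encoder training may depend on the script version, so this is something to verify rather than a recipe):
```toml
# Options the question above refers to (sketch; verify against your kohya version).
apply_t5_attn_mask = true        # apply the T5 attention mask, as in the LoRA configs
learning_rate = 1e-5             # placeholder unet/DiT LR
text_encoder_lr = 5e-6           # placeholder: 50% of the unet LR above, as suggested
```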