Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
No, the fp8 base model used to train the LoRA does affect the LoRA. For example, I trained 2 LoRAs on the same dataset, one on the fp16 base model and one on fp8. The fp16 LoRA has better prompt understanding than the fp8 one. My training is a person LoRA. While the person quality is still good with both LoRAs, the prompt understanding of every subject outside of the person is reduced with the fp8 LoRA. For example, my prompt has something like "little angels playing in the background". The fp16 LoRA shows angels with human bodies and wings, but the fp8 one shows some birds.
I am currently training a LoRA with FLUX using 20 photos and 200 epochs. Is it normal that it takes 4 to 5 hours on an RTX 4090 locally, or did I miss an optimization setting or script?
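That runtime can be sanity-checked with back-of-the-envelope arithmetic. A minimal sketch, assuming one repeat per image, batch size 1 (so steps per epoch equals the image count), and roughly 4 seconds per iteration on a 4090 (the s/it figure is an assumption, not a measured value):

```python
# Rough LoRA training-time estimate.
# Assumptions: 1 repeat per image, batch size 1, a guessed seconds-per-iteration.
def estimated_hours(num_images, epochs, seconds_per_it, repeats=1, batch_size=1):
    steps = num_images * repeats * epochs // batch_size
    return steps * seconds_per_it / 3600

# 20 photos x 200 epochs = 4000 steps; at ~4 s/it that is
print(round(estimated_hours(20, 200, 4.0), 1))  # prints 4.4
```

So roughly 4.4 hours, which lands in the 4 to 5 hour range reported in the comment, suggesting nothing is misconfigured.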
I used a SaaS solution where I trained a FLUX LoRA on the same 20 photos, and it took me only 30 minutes. Any idea what could cause the difference in speed?
I'm really happy with how well FLUX can be trained; the rank 3 config is perfect for the 4090. Thanks, Doctor!
Funny how, despite all its capabilities, if FLUX ever needs to show a nipple, even accidentally, it is so lost that it just puts flesh-colored gummy candies in its place.
Dr. Furkan, could you make a video on how to train multiple people, objects, or styles in one training? Is it possible to train a person, a style, and an object at the same time?
We can get good person LoRAs in around 2k steps, but what about concepts? Does that require more training? Like unique physical characteristics and such.
Maybe I overlooked the video, but do you also have a good tutorial on training with DreamBooth? When using 20 images, can I use 200 epochs like in the LoRA training? Is there a general rule of thumb regarding the number of pictures vs. epochs?
In my last LoRA training I used 20 pictures, 200 epochs, and the Rank_3_18950MB_9_05_Second_IT config; it took almost 5 hours to complete. When I use 40, 60, 80, or 100 pictures, how many epochs would you recommend? Is there a rule of thumb?
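One common rule of thumb (an assumption here, not the channel's official recommendation) is to keep the total step count roughly constant: with batch size 1 and one repeat per image, more images means proportionally fewer epochs. A minimal sketch, taking the 20-picture / 200-epoch run above (4000 total steps) as the baseline:

```python
# Scale epochs down as the image count grows, keeping total steps ~constant.
# target_steps=4000 mirrors the 20-image x 200-epoch baseline above (an assumption).
def epochs_for(num_images, target_steps=4000, repeats=1):
    return max(1, target_steps // (num_images * repeats))

for n in (20, 40, 60, 80, 100):
    print(n, epochs_for(n))
```

Under those assumptions this suggests about 100 epochs for 40 pictures, roughly 66 for 60, 50 for 80, and 40 for 100, so each run stays near the same wall-clock time.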
@Furkan Gözükara SECourses Hello! I'm using an RTX 4090 (24 GB). For full FLUX fine-tuning I used your config "Rank_1_15500MB_39_Second_IT.json", and it took me 16.5 hours to train a 15-image dataset. Then I tried the config "Quality_1_23100MB_14_12_Second_IT.json", and it took 23.5 hours to train the same dataset. Why does training go slower on the second config even though it consumes more VRAM?