Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
I'm training a LoRA on three people (two in the same class, one in a different class) with regularization images on flux-dev-de-distill. Let's see if regularization images work with flux-dev-de-distill.
@Dr. Furkan Gözükara With flux-dev-de-distill, what I saw was that at 30 epochs the resemblance was reduced with regularization images; let's see if a full 160 epochs fixes that.
Just TensorBoard on Kohya. 10 or 11 pictures per subject, and I have a folder with real people as regularization: 2,000 images of different ages, genders, and races.
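For anyone wanting to reproduce a setup like this: Kohya-style DreamBooth datasets encode the per-image repeat count as a numeric prefix on each folder name. A sketch of roughly how this run might be laid out (the subject names are hypothetical, and your repeat counts and class names may differ):

```
dataset/
  img/
    10_subject1 person/   # ~10 photos of subject 1, class "person"
    10_subject2 person/   # subject 2, same class as subject 1
    10_subject3 man/      # subject 3, different class
  reg/
    1_person/             # ~2000 real-people regularization images
```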
In theory the loss should go down until the model starts overtraining, and at that point it should go up again. When training just starts, the loss jumps up and down; that is normal in the first epochs, but overall the graph should trend down during the entire training.
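If you want to check that trend outside the TensorBoard UI, here is a minimal sketch that reads the loss scalars straight from the event files and smooths them. The log directory and the tag name ("loss/average") are assumptions; print the available tags to see what your Kohya run actually logged.

```python
# Minimal sketch: read the raw loss curve from a TensorBoard event directory
# and smooth it the way the TensorBoard UI slider does (exponential moving average).
# "logs/flux_lora" and the tag "loss/average" are assumptions -- check
# acc.Tags()["scalars"] for the tags your run really produced.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("logs/flux_lora")  # hypothetical log directory
acc.Reload()
print(acc.Tags()["scalars"])  # list the scalar tags that actually exist

events = acc.Scalars("loss/average")  # assumed tag name
smoothed, last = [], events[0].value
for e in events:
    last = 0.9 * last + 0.1 * e.value  # EMA with smoothing factor 0.9
    smoothed.append((e.step, last))

# Print ~10 evenly spaced points to see whether the trend is still going down
for step, value in smoothed[:: max(1, len(smoothed) // 10)]:
    print(f"step {step}: smoothed loss {value:.4f}")
```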
The training is finished. Resemblance is almost perfect, but it still looks a little undertrained; with regularization images it needs many more steps, or a higher learning rate, to get resemblance. I will resume the training with 160 repetitions per image and report back. There was no bleeding between the trained subjects.
Nice. If you have to train more than one subject, it is your only choice; in my tests it is superior in everything. The only problem is that if you do a full finetune, inference with the resulting model will be slower, because you have to set the CFG scale to 3.5. To get around that, you can do the finetune, extract a LoRA from it, and then use the resulting LoRA on regular flux-dev, which lets you use CFG scale 1 in conjunction with the distilled CFG.
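For anyone curious what "extract the LoRA" means mechanically: you take the difference between the finetuned weights and the original weights and compress that difference with a truncated SVD into low-rank up/down matrices. A minimal sketch, assuming both checkpoints are safetensors files with matching keys; the filenames, rank, and output key naming here are hypothetical, and real tools (such as Kohya's extraction scripts) handle key mapping, non-2D tensors, and quantized weights much more carefully:

```python
# Minimal sketch of LoRA extraction via truncated SVD of weight deltas.
import torch
from safetensors.torch import load_file, save_file

RANK = 32  # assumed LoRA rank

base = load_file("flux-dev-de-distill.safetensors")  # hypothetical filename
tuned = load_file("flux-finetune.safetensors")       # hypothetical filename

lora = {}
for key, w_base in base.items():
    w_tuned = tuned.get(key)
    if w_tuned is None or w_base.dim() != 2:
        continue  # this sketch only decomposes 2D linear weights
    delta = w_tuned.float() - w_base.float()
    # Truncated SVD: delta ~= (U * S) @ Vh, keeping the top RANK components
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    rank = min(RANK, S.numel())
    lora[f"{key}.lora_up"] = (U[:, :rank] * S[:rank]).contiguous()
    lora[f"{key}.lora_down"] = Vh[:rank, :].contiguous()

save_file(lora, "extracted_lora.safetensors")
```

The design point is that the delta between finetune and base is close to low rank, so a rank-32 approximation keeps most of the learned subject while staying small enough to apply on top of regular flux-dev at inference time.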