The great thing about Prodigy is that you can control the learning speed with two knobs: d_coef is the fine adjustment, and the network alpha is the coarse one. The lower the alpha, the faster it learns, so even 5 epochs might be enough if alpha is set to half weight. This is just a theory, I haven't tested it, but it could make LoRA training even faster, down to a few minutes (currently about 10-15 minutes on an RTX 3090).
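For reference, here is a minimal sketch of where those two knobs live in code, assuming the prodigyopt package and a kohya-style LoRA setup; the dummy parameter and the dim/alpha values are illustrative, not recommendations:

```python
# pip install prodigyopt
import torch
from prodigyopt import Prodigy

# Stand-in for the injected LoRA weights a real trainer would collect.
lora_params = [torch.nn.Parameter(torch.zeros(16, 320))]

# With Prodigy the base lr stays at 1.0; the optimizer adapts the actual
# step size on its own, and d_coef scales that adaptation up or down.
optimizer = Prodigy(
    lora_params,
    lr=1.0,            # leave at 1.0, Prodigy estimates the step size
    d_coef=1.0,        # the fine knob: >1 learns faster, <1 slower
    weight_decay=0.01,
    safeguard_warmup=True,
)

# The coarse knob is the LoRA network alpha: updates are applied with a
# weight of alpha / dim, so dim=32 with alpha=16 means "half weight".
network_dim, network_alpha = 32, 16
effective_weight = network_alpha / network_dim  # 0.5
```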
I like 10 epochs with the 100 divisor because it's more transparent, and based on previous tests it's really the right amount, no more, no less. With that fixed, you can already tune d_coef properly.
I haven't switched to RealisticVision 6.x for training yet; I think 5.1 is good enough to get LoRA results that work well on other models, and I get nice results on RV 6.x too.
So with these results I think it's unnecessary to train on SDXL for now; maybe I'll consider SDXL-Turbo if I can train it to get even better results at a small size.
Someone recently argued for SD 2.1, saying it could also be trained nicely, but many people quickly gave up on it. It's 768px, so it might be worth a try with Prodigy.
I have a LoRA for lineart style and an image pair (simple lineart + colored lineart). How can we change the style of only the lineart, using ControlNet or IP-Adapter? Or change the style of the plain colored image with some anime-style LoRA?
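One possible route, sketched with diffusers below; the checkpoint names, the LoRA file path, and the prompt are placeholders, and you may need a lineart preprocessor so the conditioning image matches what the ControlNet expects:

```python
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# The lineart ControlNet pins down the line structure; the style LoRA
# (or an IP-Adapter reference image) then decides the look.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_lineart", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical anime-style LoRA file.
pipe.load_lora_weights("path/to/anime_style_lora.safetensors")

lineart = load_image("simple_lineart.png")  # the conditioning image
image = pipe(
    "anime style, colored illustration",
    image=lineart,
    num_inference_steps=25,
    controlnet_conditioning_scale=1.0,
).images[0]
image.save("restyled.png")
```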
Description: Some trainers (like hcp-diffusion, naifu) have implemented a training method called "pivotal tuning", which basically trains an embedding and a LoRA (DreamBooth) at the same time.
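A minimal sketch of the idea, assuming a CLIP text encoder from transformers and a hand-rolled LoRA layer (the token, ranks, and learning rates are illustrative): a new placeholder token embedding and the LoRA weights go into one optimizer and are trained together.

```python
import torch
from transformers import CLIPTextModel, CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

# 1) The "pivot": a new placeholder token whose embedding will be learned.
tokenizer.add_tokens(["<sks>"])
text_encoder.resize_token_embeddings(len(tokenizer))
token_id = tokenizer.convert_tokens_to_ids("<sks>")
embeddings = text_encoder.get_input_embeddings()

# 2) A hypothetical low-rank adapter wrapping one linear projection.
class LoRALinear(torch.nn.Module):
    def __init__(self, base: torch.nn.Linear, rank: int = 4, alpha: int = 4):
        super().__init__()
        self.base, self.scale = base, alpha / rank
        self.down = torch.nn.Linear(base.in_features, rank, bias=False)
        self.up = torch.nn.Linear(rank, base.out_features, bias=False)
        torch.nn.init.zeros_(self.up.weight)  # adapter starts as a no-op

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.down(x))

# 3) One optimizer over both groups = pivotal tuning. Real trainers also
# mask the embedding gradient so only the new token's row gets updated.
for p in text_encoder.parameters():
    p.requires_grad_(False)
embeddings.weight.requires_grad_(True)

lora = LoRALinear(torch.nn.Linear(768, 768))
optimizer = torch.optim.AdamW([
    {"params": [embeddings.weight], "lr": 5e-4},
    {"params": list(lora.down.parameters()) + list(lora.up.parameters()), "lr": 1e-4},
])
```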
Does anyone know why generated photos look perfect when the face is close up, but bad when the subject is shown from further away, like a full-body shot?