Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
@Dr. Furkan Gözükara You could try FLUX fine-tuning with regularization and then remove the regularization in the last epochs; maybe the model gains resemblance at the end while staying more flexible.
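The suggested schedule can be sketched as a training loop that feeds regularization (prior-preservation) batches for most epochs and drops them for the final ones. This is a minimal illustration under assumptions: `train_step`, the batch lists, and the cutoff value are all placeholders, not any real trainer's API.

```python
# Hypothetical two-phase schedule: regularization data is mixed in until
# REG_CUTOFF, then the last epochs train on subject data only so
# resemblance can lock in.
TOTAL_EPOCHS = 10
REG_CUTOFF = 8  # epochs >= this index run without regularization

subject_batches = [f"subject_{i}" for i in range(4)]
reg_batches = [f"reg_{i}" for i in range(4)]

def train_step(epoch, batch):
    # A real trainer would compute the diffusion loss here; this stub
    # just records which data the schedule would feed the model.
    return (epoch, batch)

log = []
for epoch in range(TOTAL_EPOCHS):
    batches = subject_batches + (reg_batches if epoch < REG_CUTOFF else [])
    for batch in batches:
        log.append(train_step(epoch, batch))

# Regularization batches appear only before the cutoff epoch;
# subject batches appear in every epoch.
```

The idea is that the regularization set keeps the model's prior broad early on, while the final unregularized epochs let it commit to the subject.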
@crystalwizard You keep claiming that flux-dev can't be trained, but I have been doing many tests with flux-dev-de-distill and I think it can work. Its behavior at inference is completely different from regular flux-dev: it works perfectly without distilled CFG, so whatever they did to the model probably works. Its behavior is similar to SDXL: if you prompt randomly you get random noise, but if you prompt properly you get a perfect image, whereas regular flux-dev with distilled CFG always gives you a coherent image. Did you test it?
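For anyone unsure what "without distilled CFG" means here: true classifier-free guidance runs two forward passes (conditional and unconditional) and blends them at inference time, which is what a de-distilled model relies on, whereas distilled flux-dev takes a single pass with the guidance strength as an input, so the combination is baked into the weights. A minimal sketch of the true-CFG combination, with a toy stand-in for the model (not a real diffusion network):

```python
import numpy as np

def model(latent, prompt_embedding):
    # Toy "denoiser": pulls the latent toward the prompt embedding.
    # A real model would be a full neural network forward pass.
    return prompt_embedding - latent

def cfg_step(latent, cond, uncond, scale):
    # Standard classifier-free guidance: two passes, blended.
    eps_cond = model(latent, cond)
    eps_uncond = model(latent, uncond)
    return eps_uncond + scale * (eps_cond - eps_uncond)

latent = np.zeros(4)
cond = np.ones(4)       # stand-in conditional (prompt) embedding
uncond = np.full(4, 0.5)  # stand-in unconditional (empty prompt) embedding

guided = cfg_step(latent, cond, uncond, scale=3.0)
```

With scale=1.0 this reduces to the purely conditional prediction; larger scales push the output further toward the prompt, which is why a de-distilled model is so sensitive to how you prompt it.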
If you have a bag of chocolate chips and I take them out, you have an empty bag. If I then put potato chips in it, do you have a bag of chocolate chips again?
You'd be better off just using SDXL. What was done to flux was not only the distillation process but other things as well, done to try to fix an issue the neural network has. That has resulted in flux being rigid, untrainable, and giving very strange results outside of a narrow window.
Flux by itself, if prompted correctly, doesn't need LoRAs or fine-tuning for any reason other than the very specific case where you want to make images of something it can't possibly know about, anyway.
There's no reason to train LoRAs or make fine-tunes of flux other than needing to generate images of something it has no data about at all. The community, for some odd reason, thinks they can't generate without all those add-ons. All it takes is learning how the model thinks, and then prompting it correctly.