Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
What did you tag your photos with? Did you tag them as "taken on smartphone" or something like "amateur photography"?
Also, have you tried adding back in generated photos of yourself (the ones with good likeness) that don't have that standard style? Or are you only experimenting with how well a model can do with a very basic dataset?
Finally, another thing I found that fixes style is generating the first few steps with 0.4 LoRA weight, then generating the remaining steps with 1.0 weight. Since the first few steps dictate colors and style, this gives a much better style, closer to what the original model would have produced.
All of the above has helped my LoRAs be much more versatile, able to generate much more natural-looking images while retaining likeness.
I'm using ComfyUI, and the advanced sampler lets you choose start and end steps, so I have two samplers in series: the first uses the lower LoRA weight and passes its output latent to the second sampler, which uses the higher weight.
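The two-samplers-in-series idea can be sketched in plain Python. This is a toy illustration, not ComfyUI code: `sample_step` stands in for one denoising step of whatever sampler you use, and `lora_weight_schedule` encodes the commenter's split (low weight for the early steps that set color/composition, full weight afterwards for likeness). All function names here are hypothetical.

```python
def lora_weight_schedule(total_steps, switch_step, low=0.4, high=1.0):
    """Per-step LoRA weight: low for the early steps that dictate
    colors and style, high for the remaining steps (likeness)."""
    return [low if s < switch_step else high for s in range(total_steps)]

def split_sample(latent, total_steps, switch_step, sample_step):
    """Sketch of two samplers in series: steps [0, switch_step) run
    at the low LoRA weight, then the same latent continues through
    steps [switch_step, total_steps) at the high weight."""
    for step, w in enumerate(lora_weight_schedule(total_steps, switch_step)):
        latent = sample_step(latent, step, lora_weight=w)
    return latent
```

In ComfyUI itself the equivalent is two KSampler (Advanced) nodes: the first with `start_at_step=0`, `end_at_step=switch_step`, and the LoRA loaded at 0.4, feeding its latent into the second with `start_at_step=switch_step` and the LoRA at 1.0, with `add_noise` disabled on the second so it resumes the same denoise instead of starting over.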
Expected Behavior: Subsequent 16-bit images generated with a LoRA in Flux dev should not degrade into fuzz.
Actual Behavior: When using any LoRA, the first image generated after switching to Default ...