Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
Update on https://huggingface.co/nyanko7/flux-dev-de-distill LoRA training: removing the class did not help with class bleeding. The result was almost the same, and maybe worse, because it made prompting at inference harder. I'm running another test with just one class and some regularization images. I don't know if Kohya's new code has been implemented yet; I hope that will help too.
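For anyone who wants to try the same regularization-image test, here is a minimal sketch of the kohya-ss sd-scripts dataset layout that setup implies. The folder names, tokens, and repeat counts below are placeholders for illustration, not the exact values used above:

```
dataset/
  img/
    20_ohwx man/     <- 20 repeats, "ohwx" instance token, "man" class token
      photo_001.jpg
      ...
  reg/
    1_man/           <- regularization images captioned only with the class
      reg_001.jpg
      ...
```

With kohya's `train_network.py`, `--train_data_dir` points at `img/` and `--reg_data_dir` at `reg/`. The reg set is what should pull the class concept back toward its prior and reduce bleeding into the trained subject.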
This feature is part of a new webui under development (Qwen, FLUX, Wan, and such). When it is finished, Dr. Furkan will make a tutorial on how to use it. For now I'm in the final stages and presenting some previews here. This feature only uses the image-to-image diffusion pipeline and a ControlNet to refine.
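The webui's actual models and code are not public yet, so as a rough illustration of the img2img + ControlNet refinement pattern described above, here is a minimal sketch using diffusers. The SD 1.5 and tile-ControlNet checkpoints are stand-ins chosen because their pipeline class is well documented; the real webui presumably uses different (FLUX/Qwen/Wan) models:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
from diffusers.utils import load_image

# Stand-in checkpoints for illustration; the webui's actual models are unknown.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1e_sd15_tile", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

init_image = load_image("input.png")  # the image to refine

# Low strength keeps the composition; the (tile) ControlNet preserves structure
# while the img2img pass re-adds fine detail.
refined = pipe(
    prompt="high quality, detailed photo",
    image=init_image,            # img2img source
    control_image=init_image,    # conditioning image for the ControlNet
    strength=0.4,                # how much of the original gets re-noised
    num_inference_steps=30,
).images[0]
refined.save("refined.png")
```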
1) It adds sharpness to the image. The images are noisier, with better realism and a harsh-reality feel. TBH, that is not always a good thing; whether you like an image or not is subjective.
2) The ability to modify CFG can give you vastly varying results for the same seed. Its CFG scale is much, much tamer than dev2pro and produces linear effects (as you would expect) as you increase or decrease it.
3) The additional time it adds to inference is very frustrating. The original dev said 60+ steps, and he is right: I got good results at step 70. You can get a generation much faster on flux_dev with 25 to 42 steps, so adding steps adds time to an already slow generation.
4) You need extra settings during generation, like thresholding, which adds complexity on top of the traditional step-CFG system; see the sketch after this list.
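Two of the points above come down to the de-distilled model using true classifier-free guidance (two forward passes per step) instead of flux-dev's baked-in guidance embedding. I'm assuming "thresholding" here means Imagen-style dynamic thresholding, which is the usual way to tame high-CFG artifacts; the de-distill repo's own inference script is the reference, and this is only a sketch of the idea:

```python
import torch

def true_cfg_with_dynamic_threshold(cond, uncond, cfg_scale=3.5,
                                    percentile=0.995, max_val=1.0):
    """True CFG followed by Imagen-style dynamic thresholding.

    `cond` / `uncond` are the model's conditional and unconditional
    predictions for one denoising step, shape (B, C, H, W).
    """
    # True CFG: both passes are real forward passes, unlike distilled
    # flux-dev where guidance is a learned embedding input.
    pred = uncond + cfg_scale * (cond - uncond)

    # Dynamic thresholding: clamp each sample to its own high-percentile
    # magnitude, then rescale, which reins in oversaturation at high CFG.
    flat = pred.reshape(pred.shape[0], -1).abs()
    s = torch.quantile(flat, percentile, dim=1).clamp(min=max_val)
    s = s.view(-1, *([1] * (pred.dim() - 1)))
    return pred.clamp(-s, s) / s
```

In the original Imagen formulation this is applied to the predicted clean sample rather than the raw noise prediction, and adapting it to a flow-matching latent model like FLUX involves extra details this sketch skips.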