Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion.
Hi, does anyone know the difference between the de-distilled and standard FLUX models? When I tried it on an RTX 3090, I usually get an output from standard FLUX in about 30 seconds, but with de-distilled it took 120 seconds, and the image quality felt lower than standard FLUX. I'm referring to the FLUX dev version.
Ohh, this is great. Is it for LoRA training or fine-tuning? I've been away from home for about a week, and I was finally excited to train a FLUX LoRA tonight. Should I train my LoRA with the de-distilled version? By the way, I will be training a style LoRA.
I've previously trained a few Stable Diffusion LoRAs for my own work, but this will be my first time training a FLUX LoRA, so I have some background in training.
I don't know anything about the de-distilled or distilled versions; I just heard about them on Reddit a few days ago, and due to work I haven't had much time to research it. However, I plan to retrain my existing SDXL LoRA as a FLUX LoRA.
Idk, I use Forge and it supports it, but you can use regular flux-dev for inference. De-distilled is slower because it runs real CFG (two model forward passes per sampling step, conditional and unconditional), while in the distilled model CFG is disabled.
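To make the cost difference concrete, here is a minimal sketch of the two kinds of sampling step. This is illustrative Python only, not real FLUX or Forge code: `model`, the embedding shapes, and `cfg_scale` are hypothetical stand-ins. The point is that real classifier-free guidance evaluates the model twice per step and blends the results, while distilled guidance needs a single pass.

```python
import torch

torch.manual_seed(0)

# Hypothetical stand-in for the FLUX transformer: any callable mapping
# (latents, text embedding) -> predicted noise. Not the real model.
def model(latents, text_emb):
    return latents * 0.9 + text_emb.mean() * 0.1

latents = torch.randn(1, 16, 64, 64)
cond_emb = torch.randn(1, 512)    # embedding of the prompt (toy shape)
uncond_emb = torch.zeros(1, 512)  # embedding of the empty prompt

# Distilled FLUX dev style: guidance is baked into the model,
# so each sampling step costs one forward pass.
def distilled_step(latents):
    return model(latents, cond_emb)

# De-distilled style: real classifier-free guidance, two forward
# passes per step (conditional + unconditional), then a blend --
# roughly 2x the per-step compute of the distilled model.
def real_cfg_step(latents, cfg_scale=3.5):
    noise_cond = model(latents, cond_emb)
    noise_uncond = model(latents, uncond_emb)
    return noise_uncond + cfg_scale * (noise_cond - noise_uncond)
```

Each call to `real_cfg_step` does two `model` calls where `distilled_step` does one, so a full sampling loop roughly doubles in time from the CFG alone; any slowdown beyond that comes from other settings, such as step count.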