Hello everyone. I am Dr. Furkan Gözükara, a PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion.
I don’t know anything about the de-distilled or distilled versions; I only heard about them on Reddit a few days ago, and due to my work I haven’t had much time to research them. However, I plan to retrain my existing SDXL LoRA as a FLUX LoRA.
I don’t know; I use Forge and it supports it, but you can use regular flux-dev for inference. The de-distilled model is slower because it uses real CFG, while the distilled guidance is disabled.
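Roughly, the speed difference comes from how guidance is applied at each denoising step. Here is a minimal conceptual sketch of that difference; the `model` argument and its call signature are hypothetical stand-ins, not any specific library's API:

```python
def step_distilled(model, latents, t, cond, guidance_scale):
    # Distilled flux-dev: guidance is baked into the model, so each step
    # needs only ONE forward pass, with the guidance value fed as an input.
    return model(latents, t, cond, guidance=guidance_scale)

def step_real_cfg(model, latents, t, cond, uncond, cfg_scale):
    # De-distilled model: classic classifier-free guidance needs TWO
    # forward passes per step (conditional + unconditional), which is
    # why inference is roughly twice as slow.
    noise_cond = model(latents, t, cond)
    noise_uncond = model(latents, t, uncond)
    return noise_uncond + cfg_scale * (noise_cond - noise_uncond)
```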
Alright, thanks man! I’ll train a few different LoRAs with different settings and share my experiences here with you. Maybe after doing enough research and gaining some knowledge, I’ll look into the distilled and de-distilled versions.
Training a full FLUX checkpoint requires much better hardware than training a LoRA, so training a checkpoint and then extracting a LoRA from it is not an efficient way to get a LoRA. I’d rather train a LoRA directly.
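To illustrate why full checkpoint training needs so much more VRAM: with a LoRA, only two small low-rank matrices per adapted layer carry gradients and optimizer state, while the base weights stay frozen. A rough PyTorch sketch with made-up layer sizes (not FLUX's actual architecture):

```python
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # base weights stay frozen
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)   # adapter starts as a no-op

    def forward(self, x):
        return self.base(x) + self.lora_b(self.lora_a(x))

base = nn.Linear(3072, 3072)                 # example layer size only
lora = LoRALinear(base, rank=16)

full = sum(p.numel() for p in base.parameters())
trainable = sum(p.numel() for p in lora.parameters() if p.requires_grad)
print(f"full fine-tune params: {full:,}, LoRA trainable params: {trainable:,}")
# full fine-tune params: 9,440,256, LoRA trainable params: 98,304
```

The gap per layer (here roughly 100x fewer trainable parameters) is what keeps gradient and optimizer memory low enough to train a LoRA on consumer GPUs, whereas a full checkpoint fine-tune has to back-propagate through and update every weight.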