Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
"Unfortunately, I don't have any ideas either. The lack of publicly available technical details about Flux is making the problem more difficult. I think we have no choice but to wait for the community's research." (translated from Japanese)
Speeds are very resolution dependent. I am getting 1.1 it/s at 512 but around 5 it/s at 1024 when training a Flux LoRA. Would you expect parameters like the learning rate (LR) to be the same for both resolutions, or have you seen that higher resolutions need different parameters?
Hey guys, I have a question. I trained a LoRA with my dataset and it went great. I saw there are some LoRAs out there, like photorealistic LoRAs that people made. Can I train my dataset images again on top of those LoRAs, or can't you do that? If I understand correctly, a LoRA is for training specific things into the model, right?
You could, by merging the LoRA into the Flux checkpoint first and then training on that. Whether this improves the result for a Flux realistic LoRA is open to debate, but your theory is correct. I have done this in the past with an SDXL checkpoint merged like that, because I knew I wanted to use my LoRA together with another LoRA that negatively affected mine unless I trained that way.
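The merge described above is mathematically simple: a LoRA stores a low-rank update (B·A) for each adapted weight matrix, and merging adds the scaled update into the checkpoint weight so later training sees it as part of the base model. Here is a minimal NumPy sketch of that idea for a single linear layer; the shapes, `alpha`, and rank are illustrative placeholders, not Flux's real dimensions, and real tools (e.g. ComfyUI merge nodes or kohya scripts) do this per-layer across the whole checkpoint.

```python
import numpy as np

# Hypothetical shapes for illustration; real Flux layers are much larger.
d_out, d_in, rank = 64, 64, 8
alpha = 8.0  # LoRA scaling numerator (effective scale = alpha / rank)

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in)).astype(np.float32)  # base checkpoint weight
A = rng.normal(size=(rank, d_in)).astype(np.float32)   # LoRA down-projection
B = rng.normal(size=(d_out, rank)).astype(np.float32)  # LoRA up-projection

# Merging "bakes" the LoRA delta into the checkpoint weight:
# W_merged = W + (alpha / rank) * B @ A
W_merged = W + (alpha / rank) * (B @ A)

# A forward pass through the merged weight matches base + LoRA side branch.
x = rng.normal(size=(d_in,)).astype(np.float32)
y_merged = W_merged @ x
y_branch = W @ x + (alpha / rank) * (B @ (A @ x))
print(np.allclose(y_merged, y_branch, atol=1e-3))  # True
```

Once every adapted layer is merged like this, the result is a plain checkpoint, which is why you can then train a new LoRA on top of it as if it were the original base model.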
I still see no reason to do a full fine-tuning and get a 23 GB model compared to a few MB for a LoRA. I already have around 20 LoRAs and the results are very good. But will it be possible to extract a LoRA from the fine-tuned model? Because that could be interesting.
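Extracting a LoRA from a fine-tuned checkpoint is a known technique: community tools typically take the weight difference between the tuned and base checkpoints and compress it with a truncated SVD. A minimal NumPy sketch of that idea for one layer, using toy matrices where the delta is exactly low-rank (an assumption; real fine-tuning deltas are only approximately low-rank, so extraction is lossy in practice):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 64, 64, 8

# Toy weights: a base checkpoint and a "fine-tuned" checkpoint whose
# difference is exactly rank-8, so the extraction below is lossless here.
W_base = rng.normal(size=(d_out, d_in))
true_B = rng.normal(size=(d_out, rank))
true_A = rng.normal(size=(rank, d_in))
W_tuned = W_base + true_B @ true_A

# Extract a LoRA: truncated SVD of the weight delta, keeping `rank` terms.
delta = W_tuned - W_base
U, S, Vt = np.linalg.svd(delta, full_matrices=False)
B = U[:, :rank] * S[:rank]  # up-projection with singular values folded in
A = Vt[:rank, :]            # down-projection

# Relative reconstruction error is ~0 because delta was exactly rank-8.
err = np.linalg.norm(delta - B @ A) / np.linalg.norm(delta)
print(err < 1e-10)  # True
```

With a real fine-tune the truncated SVD discards the smaller singular values, so the extracted LoRA approximates, rather than exactly reproduces, the full fine-tuning result; the chosen rank controls that trade-off between file size and fidelity.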