Hello everyone. I am Dr. Furkan Gözükara, a PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
Get more from SECourses: Tutorials, Guides, Resources, Training, FLUX, MidJourney, Voice Clone, TTS, ChatGPT, GPT, LLM, Scripts by Furkan Gözükara on Patreon
@Dr. Furkan Gözükara Thanks for all your hard work. Following your lesson, I've trained my own model based on FLUX dev, and by running the GGUF text encoder and models I made this image.
I still don't get the fuss about GGUF models. Isn't SwarmUI also capable of running the 23GB model on slower GPUs? Is this to increase generation speed, or to be able to use more pinpointed ComfyUI nodes?
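The main draw of GGUF is memory, not speed: the quantized weights take fewer bits each, so the model fits into less VRAM and less has to be offloaded. Here is a rough back-of-the-envelope sketch, assuming the FLUX dev transformer is on the order of 12 billion parameters and using approximate average bits-per-weight for each format (real GGUF files add a little overhead for scales and metadata).

```python
# Rough VRAM estimate for a ~12B-parameter transformer (assumed size for FLUX dev)
# at different weight precisions. Bits-per-weight values are approximate averages.
PARAMS = 12e9

FORMATS = {
    "FP16 / BF16": 16.0,  # full-precision checkpoint (~the 23GB class file)
    "FP8":          8.0,
    "GGUF Q8_0":    8.5,  # 8-bit blocks plus per-block scale
    "GGUF Q5_K":    5.5,
    "GGUF Q4_K":    4.5,
}

for name, bits in FORMATS.items():
    gib = PARAMS * bits / 8 / 1024**3  # bits -> bytes -> GiB
    print(f"{name:<12} ~{gib:5.1f} GiB of weights")
```

Under these assumptions the FP16 weights land around 22 GiB, which matches the 23GB file size, while a Q4/Q5 GGUF quant drops to roughly 7-8 GiB, which is why it fits on smaller GPUs without constant offloading.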
That might be true. I'll have to check the settings on Massed Compute, but perhaps I chose "automatic" there, and the results were somewhat better than what I produce using FP8 on my home PC.