Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion.
I mean, if I use "sample prompts" in Kohya SS, it will assume this is a regular Flux model, but since I am training a de-distilled model, doesn't that need a different CFG strategy? It's like in ComfyUI, where I need a different workflow if I use a de-distilled model, right?
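Something like this is what I have in mind, assuming the usual kohya sample-prompt syntax where `--l` sets the CFG scale; the flag values and the file name here are my guesses, not a confirmed de-distilled recipe:
```
# write a sample prompt file that requests real CFG (> 1), which a de-distilled model is supposed to need
cat > sample_prompts.txt << 'EOF'
a photo of a cat --w 1024 --h 1024 --s 20 --d 42 --l 3.5
EOF
# then point the trainer at it, e.g. --sample_prompts sample_prompts.txt
```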
I just like to sneak a peek early on at how it's going, but I guess I can just stop training at checkpoints, render previews manually, and resume from state if needed.
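Roughly this is what I mean, assuming kohya's sd-scripts `--save_state` / `--resume` flags; the script name, config file, epoch interval, and state directory are placeholders from my own setup:
```
# save the full training state alongside each checkpoint so the run can be resumed later
accelerate launch flux_train_network.py --config_file my_training_config.toml \
  --save_every_n_epochs 1 --save_state

# later, after rendering previews manually, resume from the saved state directory
accelerate launch flux_train_network.py --config_file my_training_config.toml \
  --resume "output/my-lora-000004-state"
```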
Hello everyone! Owners of an RTX 3090 and a CPU WITHOUT integrated graphics: has anyone managed to reach the 7.3 s/it training speed stated in the config by @Furkan Gözükara SECourses? Even when I close everything, about 1000 MB of VRAM is still in use on Windows 11, even with absolutely no third-party programs running. How is that possible? I only get 11.16 s/it with the 24GB config file...
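If anyone wants to compare baselines, this is how I check how much VRAM Windows itself is holding before training starts (plain nvidia-smi, nothing specific to the config):
```
# show per-GPU memory already in use before training starts
nvidia-smi --query-gpu=name,memory.used,memory.total --format=csv
```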
About VRAM, I noticed one thing. I have an RTX 4090; on the 24GB config I get 6 s/it. Then I switch to block swapping with the 12GB config and get 6.7 s/it, almost no difference, but now I have lots of free VRAM and can use my PC normally alongside training. I have no idea why that is.
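For anyone curious, the only thing that changes for me is the block-swap count; a rough sketch assuming kohya's `--blocks_to_swap` option (the value and config file name are just illustrations, not what the actual 12GB config sets):
```
# swap some transformer blocks to system RAM during training:
# more swapped blocks = less VRAM used, slightly slower steps
accelerate launch flux_train_network.py --config_file my_training_config.toml \
  --blocks_to_swap 20
```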
Ah yes, I tried downloading juggernaut11. First from Hugging Face: didn't get it to work, even using the API token and all. From Civitai it worked with wget, but I forgot the end part with -O "filename.safetensors", so the model wasn't found in SUPIR; the filename became the download link... haha
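For the record, this is roughly what I should have run (the model version ID and token are placeholders, not the real values):
```
# -O names the local file; without it wget derives the file name from the full URL, query string and all
wget "https://civitai.com/api/download/models/MODEL_VERSION_ID?token=YOUR_CIVITAI_API_TOKEN" \
  -O "juggernaut11.safetensors"
```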