Hello everyone. I am Dr. Furkan Gözükara, a PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
Thanks! I've just upgraded my setup from a 4090 to a 5090, and I'm noticing that I have problems running pretty much everything that worked before. Thanks again! Now I think I'm getting it.
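For anyone hitting the same wall: the RTX 5090 is a Blackwell card with CUDA compute capability 12.0, and PyTorch builds older than the CUDA 12.8 (cu128) wheels do not ship kernels for it, which breaks most existing setups. A minimal sketch to check what your environment actually sees, assuming PyTorch is installed:

```python
# Sanity check for a new RTX 5090 (Blackwell, compute capability 12.0).
# Older PyTorch wheels lack sm_120 kernels; this makes the mismatch visible.
import torch

print("PyTorch:", torch.__version__, "| CUDA build:", torch.version.cuda)
print("GPU:", torch.cuda.get_device_name(0))
print("Compute capability:", torch.cuda.get_device_capability(0))

# A tiny GPU op; on an unsupported build this typically fails with
# "no kernel image is available for execution on the device".
x = torch.ones(8, device="cuda")
print((x * 2).sum().item())
```

If the capability line shows (12, 0) but the CUDA build is older than 12.8, reinstalling PyTorch from the cu128 wheels is the usual fix.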
Hey, I wonder: has anyone tried full checkpoint fine-tuning on Wan, or have an article or any other content about it? I really want to get my hands dirty with it, but it would be great to have some references to start from.
What are the optimal SDXL fine-tune and LoRA configs? I'm having a hard time finding them. I'm using MassedCompute. OneTrainer or DreamBooth? I was thinking of using Big Love XL 2.5 as the base checkpoint; I like the realism and quality of its images.
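For orientation, a common starting point for an SDXL LoRA in kohya-style trainers looks roughly like the sketch below. Every value is an illustrative default rather than a tuned "optimal" config, the checkpoint filename is hypothetical, and the key names follow kohya's sd-scripts arguments:

```python
# Illustrative SDXL LoRA starting hyperparameters (kohya sd-scripts argument
# names). All values are generic assumptions, not tuned settings.
sdxl_lora_args = {
    "pretrained_model_name_or_path": "bigLoveXL_v25.safetensors",  # hypothetical filename for Big Love XL 2.5
    "network_module": "networks.lora",
    "network_dim": 32,            # LoRA rank: more capacity, more VRAM
    "network_alpha": 16,          # commonly dim/2 or equal to dim
    "learning_rate": 1e-4,        # typical LoRA starting LR
    "optimizer_type": "AdamW8bit",
    "train_batch_size": 1,
    "max_train_epochs": 10,
    "resolution": "1024,1024",    # SDXL's native resolution
    "mixed_precision": "bf16",
}
```

Full fine-tuning (DreamBooth-style) usually wants a much lower learning rate than a LoRA run, on the order of 1e-6 to 1e-5.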
Hi everyone! Great to be here—thanks for all the amazing tutorials and shared resources.
I'm currently running into an issue with my first Flux LoRA training on Kohya. I'm on an RTX 3080 (16GB VRAM) and followed the install/setup steps as advised. All required models are downloaded and everything seems in place, but when I try to run the training, Kohya starts up and seems to run OK, then without warning the screen goes black, the machine crashes, and it restarts. It's happened several times now.
Not sure what I'm missing or doing wrong. Any advice would be hugely appreciated. Thanks in advance!
Oh, just one more thing: is it possible to 'pause' a LoRA training run at checkpoint 50 and then resume from that point to continue training up to 100 or even 200 epochs?
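Yes. In kohya's sd-scripts this is what --save_state and --resume are for: train with --save_state so the optimizer and step state are written next to each saved checkpoint, then relaunch the same command with --resume pointing at the saved state folder and a higher epoch target. A minimal sketch, with hypothetical paths and a generic script name (the Flux branch has its own training script with extra required model arguments):

```python
# Resuming a kohya sd-scripts LoRA run from a saved state (paths hypothetical).
# The original run must have used --save_state for the "...-state" folder to exist.
import subprocess

subprocess.run([
    "accelerate", "launch", "train_network.py",
    "--pretrained_model_name_or_path", "base_model.safetensors",  # hypothetical
    "--network_module", "networks.lora",
    "--train_data_dir", "dataset",                                # hypothetical
    "--output_dir", "output",
    "--output_name", "my_lora",
    "--save_every_n_epochs", "1",
    "--save_state",                               # keep saving state so you can pause again
    "--max_train_epochs", "200",                  # raise the target to continue past epoch 50
    "--resume", "output/my_lora-000050-state",    # hypothetical state folder saved at epoch 50
], check=True)
```

Note that --resume restores the full training state (optimizer, scheduler, step count), which is different from merely loading a saved .safetensors file as initial weights.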
Hello, it's an eGPU RTX 3080 with 16GB of VRAM; Task Manager says it's all available. I noticed my HDD had dropped to 50GB of free space, so I'll free more up and try again. Should I try one of your best 12GB LoRA workflows instead?
VRAM is a type of RAM used only by the GPU. It has much higher bandwidth and is optimized for certain types of data and high-volume transfers. But it has higher latency than system RAM and is too slow for random-access workloads; that's why we have both RAM and VRAM.
I ran a DreamBooth training on my machine, and when I left home and came back I saw the training had crashed with no explanation. I have a 10-epoch checkpoint (I think I was near epoch 15). How can I resume the training?
Hey, thanks for sharing the info on VRAM. I feel that I understand it now, but I'm still not entirely sure how to measure how much VRAM I have! I've attached a screenshot from my Task Manager; maybe someone can take a look. Knowing how much VRAM I have will help me choose the correct Kohya Flux workflow. Thanks in advance!
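On Windows, Task Manager shows this under Performance > GPU as "Dedicated GPU memory"; that figure (not "Shared GPU memory") is your VRAM. If you have PyTorch installed, you can also query it directly; a minimal sketch assuming a CUDA-capable GPU:

```python
# Report how much VRAM (dedicated GPU memory) the GPU has and how much is free.
import torch

props = torch.cuda.get_device_properties(0)
free_bytes, total_bytes = torch.cuda.mem_get_info(0)

print(f"GPU: {props.name}")
print(f"Total VRAM: {props.total_memory / 1024**3:.1f} GiB")
print(f"Currently free: {free_bytes / 1024**3:.1f} GiB")
```

The nvidia-smi command-line tool reports the same totals.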
Thanks for checking that for me! I'll probably take the good Doc's advice and upgrade soon, but in the meantime, out of the LoRA training workflows available, which one would you recommend I try with my current setup?