Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
Hi everyone! Great to be here, and thanks for all the amazing tutorials and shared resources.
I'm currently running into an issue with my first Flux LoRA training on Kohya. I'm on an RTX 3080 (16GB VRAM) and followed the install/setup steps as advised. All required models are downloaded and everything seems in place, but when I try to run the training, Kohya starts up and seems to run OK, then without warning it goes black, crashes, and restarts. It's happened several times now.
Not sure what I'm missing or doing wrong. Any advice would be hugely appreciated. Thanks in advance!
Oh, just one more thing: is it possible, for example, to 'pause' a LoRA at checkpoint 50 and then resume from that point to continue training up to 100 or even 200?
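For what it's worth, kohya-ss/sd-scripts does support this via its `--save_state` and `--resume` options. A minimal sketch of the two-step workflow; the script name, paths, and state-folder name below are placeholders, and your usual dataset/network arguments are elided:

```shell
# Hypothetical sketch using kohya-ss/sd-scripts flags (paths are placeholders).
# 1) During the original run, save the full training state (optimizer, step
#    count) alongside the .safetensors checkpoints:
accelerate launch train_network.py \
  --output_dir /training/output \
  --save_every_n_epochs 10 \
  --save_state \
  ...  # your usual dataset/network arguments

# 2) To resume later, point --resume at a saved state folder; training
#    continues from the recorded step instead of starting over:
accelerate launch train_network.py \
  --resume /training/output/my-lora-000050-state \
  ...  # same arguments as the original run
```

The key point is that `--save_state` must be enabled during the original run; a plain .safetensors checkpoint alone does not carry the optimizer state needed for a true resume.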
Hello, it's an eGPU RTX 3080 with 16GB of VRAM, and Task Manager says it's all available. I noticed my HDD had dropped to 50 GB of free space, so I'll free up more and try again. Should I try one of your best 12GB LoRA workflows instead?
VRAM is a type of RAM dedicated to the GPU. It has much higher bandwidth than system RAM and is optimized for certain data types and large, sustained transfers. However, it also has higher latency than system RAM and is a poor fit for random access and small, scattered reads; that's why we have both RAM and VRAM.
I ran a DreamBooth training on my machine, and when I left home and came back I found the training had crashed with no explanation. I have a 10-epoch checkpoint (I think I was near epoch 15). How can I resume the training?
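Assuming this is kohya-ss/sd-scripts, there are two cases, sketched below with placeholder script and file names. If the crashed run had `--save_state` enabled, `--resume` restores everything; if all you have is the epoch-10 .safetensors file, `--network_weights` can at least initialize a new run from those weights (the optimizer state is lost):

```shell
# Case 1: the run saved state folders (--save_state was on). Resume fully:
accelerate launch train_network.py \
  --resume /training/output/my-lora-000010-state \
  ...  # same arguments as the original run

# Case 2: only the epoch-10 .safetensors checkpoint exists. Start a new run
# initialized from those LoRA weights (optimizer/step state is not recovered):
accelerate launch train_network.py \
  --network_weights /training/output/my-lora-000010.safetensors \
  ...
```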
Hey, thanks for sharing the info on VRAM. I feel that I understand it now, but I'm still not entirely sure how to measure what VRAM I have! I've attached a screenshot from my Task Manager; maybe someone can take a look. Knowing what VRAM I have will help me choose the correct Kohya Flux workflow. Thanks in advance!
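On Windows, Task Manager's Performance tab lists it as "Dedicated GPU memory" under your GPU; from a command line, `nvidia-smi` reports the same figure. A minimal Python sketch, assuming an NVIDIA GPU with the driver installed; the helper `parse_vram_mib` is hypothetical, just for illustration:

```python
import subprocess  # only needed for the optional live query below


def parse_vram_mib(csv_line: str) -> int:
    """Parse a line like '10240 MiB' from nvidia-smi's CSV output."""
    return int(csv_line.strip().split()[0])


# Live query (uncomment on a machine with an NVIDIA GPU and driver):
# out = subprocess.check_output(
#     ["nvidia-smi", "--query-gpu=memory.total", "--format=csv,noheader"],
#     text=True,
# )
# total_mib = parse_vram_mib(out.splitlines()[0])
# print(f"{total_mib / 1024:.0f} GB of VRAM")
```

Whatever number this reports is the one to match against the VRAM requirement listed for each Kohya workflow.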
Thanks for checking that for me! I'll probably take the good Doc's advice and upgrade soon, but in the meantime, out of the LoRA training workflows available, which one would you recommend I try with my current setup?
Thanks Doc! Rather than struggle along, I feel it will be more productive to spend my time learning how to set up Massed Compute for LoRA training for now. I'll give it a whirl!