Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
Will the base model definitely work on my Nvidia RTX 3060 with 12 GB VRAM and 32 GB system RAM? Because I can't do generation with the base model; there isn't enough memory for generation.
You tricked me. The 23 GB FLUX dev FP16 model with the preset Rank_5_11498MB_Slow does not fit my 12 GB configuration. As before, this model is not suitable for training or generating on my setup.
I tried all the config files (with my train_data_dir set): 8 GB GPUs: Rank_9_7514MB.json, 10 GB GPUs: Rank_7_9502MB.json, 12 GB GPUs: Rank_5_11498MB_Slow.json. All of them give memory errors.
I do this by accident all the time, but make sure you are using the LoRA tab and not the Dreambooth tab in Kohya. Otherwise it will fail with out-of-memory errors.
Hmm, I can see the 12 GB config is failing pretty early. Do you have a game running in the background, or another Stable Diffusion UI running in the background?
It looks like it is failing because there is only 5 GB free on the card and it needs 6 GB.
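To confirm how much VRAM is actually free before launching training, you can query `nvidia-smi` from Python. This is a minimal sketch, not part of the Kohya tooling: the `query_gpu_memory` helper name is my own, but the `nvidia-smi` query flags used here (`--query-gpu=memory.free,memory.total --format=csv,noheader,nounits`) are real options of the NVIDIA driver utility.

```python
import subprocess


def parse_gpu_memory_mb(smi_output: str) -> tuple[int, int]:
    """Parse one line of 'memory.free, memory.total' CSV output (MiB values)."""
    free, total = (int(field) for field in smi_output.strip().split(","))
    return free, total


def query_gpu_memory() -> tuple[int, int]:
    """Run nvidia-smi and return (free_mib, total_mib) for GPU 0.

    Requires the NVIDIA driver to be installed so that nvidia-smi is on PATH.
    """
    out = subprocess.check_output(
        [
            "nvidia-smi",
            "--query-gpu=memory.free,memory.total",
            "--format=csv,noheader,nounits",
            "--id=0",
        ],
        text=True,
    )
    return parse_gpu_memory_mb(out)


if __name__ == "__main__":
    free_mib, total_mib = query_gpu_memory()
    print(f"{free_mib} MiB free of {total_mib} MiB")
```

If this reports that several gigabytes are already in use on an idle desktop, a browser, game, or another Stable Diffusion UI is likely holding VRAM; closing it should free enough memory for the 12 GB preset.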