Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
I just figured it out. For those who want to try it: to get it working, you need to manually edit the model's metadata and select the architecture type. All GGUF models go into the unet folder, and after a restart it automatically downloads the Comfy GGUF Workflow.
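A minimal sketch of the file placement step, assuming a standard ComfyUI folder layout (the install path and the example filename are assumptions; adjust them to your own setup):

```shell
# Sketch, assuming a default ComfyUI layout. GGUF unet files are loaded
# from models/unet, not models/checkpoints. Restart ComfyUI afterwards.
COMFY_DIR="${COMFY_DIR:-$HOME/ComfyUI}"   # adjust to your install path
mkdir -p "$COMFY_DIR/models/unet"
# Move the downloaded GGUF into place (filename below is an example):
# mv ~/Downloads/flux1-dev-Q8_0.gguf "$COMFY_DIR/models/unet/"
ls "$COMFY_DIR/models/unet"
```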
Ultimate Kohya GUI FLUX LoRA training tutorial. This tutorial is the product of 9 days of non-stop research and training. I have trained over 73 FLUX LoRA models and analyzed them all to prepare this tutorial video. The research is still ongoing, and hopefully the results will be significantly improved; the latest configs and findings will be shared. Please wat...
Will the base model definitely work on my Nvidia RTX 3060 (12 GB VRAM, 32 GB system RAM)? Because I can't do generation with the base model; there isn't enough memory for generation.
You tricked me. The 23 GB FLUX dev FP16 with the Rank_5_11498MB_Slow preset does not fit my 12 GB configuration. As before, this model is not suitable for training or generating.
I tried every config file (with train_data_dir set). 8 GB GPUs: Rank_9_7514MB.json, 10 GB GPUs: Rank_7_9502MB.json, 12 GB GPUs: Rank_5_11498MB_Slow.json; all give memory errors.
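The preset names above come from the tutorial; the selection logic can be sketched as a tiny helper (this function is illustrative, not part of Kohya GUI, and assumes the three presets listed in the comment are the only options):

```python
# Illustrative helper (not part of Kohya GUI): pick the training preset
# from the tutorial's table based on total VRAM in GB. Preset filenames
# are the ones listed in the comment above.
def pick_preset(vram_gb: float) -> str:
    if vram_gb >= 12:
        return "Rank_5_11498MB_Slow.json"
    if vram_gb >= 10:
        return "Rank_7_9502MB.json"
    if vram_gb >= 8:
        return "Rank_9_7514MB.json"
    raise ValueError("Less than 8 GB VRAM is not covered by these presets")

print(pick_preset(12))  # Rank_5_11498MB_Slow.json
```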
I do this by accident all the time, but make sure you are using the LoRA tab and not the Dreambooth tab in Kohya. Otherwise it will fail with out-of-memory errors.
Hmm, with 12 GB it looks like it is failing pretty early. Do you have a game or another Stable Diffusion UI running in the background?
It looks like it is failing because there is only 5 GB free on the card and it needs 6 GB.
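A pre-flight check like the one described here can be sketched in a few lines. The preset filename encodes its expected VRAM use in MB (e.g. Rank_5_11498MB_Slow.json expects about 11498 MB), and comparing that against the free VRAM reported by nvidia-smi catches exactly the case above, where a background game or another UI already holds part of the card. The helper names below are my own, not Kohya's:

```python
# Sketch of a VRAM pre-flight check (assumed helpers, not part of Kohya):
# compare the MB figure embedded in the preset filename against the free
# VRAM on the card before starting training.
import re
import subprocess

def preset_mb(preset_name: str) -> int:
    """Extract the expected-VRAM figure (in MB) from the preset filename."""
    m = re.search(r"_(\d+)MB", preset_name)
    if not m:
        raise ValueError(f"no MB figure found in {preset_name!r}")
    return int(m.group(1))

def free_vram_mb() -> int:
    """Query free VRAM via nvidia-smi (requires an NVIDIA driver)."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.free",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return int(out.splitlines()[0])

def fits(preset_name: str, free_mb: int) -> bool:
    """True if the preset's expected VRAM fits in the free VRAM given."""
    return free_mb >= preset_mb(preset_name)

# With only ~5000 MB free, the 11498 MB preset clearly cannot fit:
print(fits("Rank_5_11498MB_Slow.json", 5000))  # False
```

In the situation described above you would call `fits(preset, free_vram_mb())` before launching training and close background apps if it returns False.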