Hello everyone. I am Dr. Furkan Gözükara, a PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion.
If you are interested in using AI, generative AI applications, and open-source applications on your computer, then this is the most fundamental and important tutorial you need. In this tutorial I show and explain how to properly install the appropriate Python versions, how to switch between different Python versions, how to install diffe...
Hi @Dr. Furkan Gözükara, did you try training OpenFLUX / Schnell with kohya's latest commits on the sd3-flux.1 branch? I always get a NotImplementedError. Tried on torch 2.4.1 and 2.5.0:
NotImplementedError: Cannot copy out of meta tensor; no data! Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() when moving module from meta to a different device.
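For context, this is the PyTorch behavior the message describes, independent of kohya: a module built on the "meta" device has parameter shapes but no data, so .to() has nothing to copy. A minimal sketch (the Linear module and checkpoint name here are illustrative, not kohya's actual code):

    import torch
    import torch.nn as nn

    # Build a module on the meta device: shapes are tracked, no data allocated.
    with torch.device("meta"):
        model = nn.Linear(4096, 4096)

    # model.to("cpu") would raise the NotImplementedError above, because there
    # is no tensor data to copy. to_empty() instead allocates fresh,
    # uninitialized storage on the target device:
    model = model.to_empty(device="cpu")

    # The parameters now hold uninitialized memory, so real weights must be
    # loaded afterwards, e.g.:
    # model.load_state_dict(torch.load("flux_checkpoint.pt"))

So the error usually means some code path in the training script moved a meta-initialized module with .to() instead of .to_empty(); the fix belongs in the script, not in user configuration.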
Dr., how do I set Kohya SS, in a DreamBooth FLUX training, to save the training every "x" steps, so that if I need to stop the training I can restart it from exactly where I stopped?
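A sketch of what this maps to on the command line. The script name and flag names below assume the standard sd-scripts options (--save_every_n_steps, --save_state, --resume) that the Kohya GUI exposes; verify the exact names against your kohya_ss version:

    import subprocess

    # Hypothetical invocation of kohya's FLUX fine-tuning script.
    subprocess.run([
        "python", "flux_train.py",
        "--save_every_n_steps", "250",  # write a checkpoint every 250 steps
        "--save_state",                 # also save optimizer/scheduler state
        # To restart from exactly where you stopped, point --resume at the
        # state folder saved by the previous run:
        # "--resume", "output/my-run-step00000250-state",
    ], check=True)

The key detail is that --save_every_n_steps alone only saves model weights; resuming mid-run also needs the optimizer state, which is what --save_state and --resume handle.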
When fine-tuning FLUX, any suggestions on lora_rank? The default on the Replicate trainer is 16, and it says higher numbers capture more features but take longer to train. Any guidance on this?
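For intuition on the trade-off mentioned above: LoRA replaces the update to each adapted weight matrix W (out x in) with two low-rank factors of rank r, so trainable parameters, and with them capacity and training cost, grow linearly with the rank. An illustrative calculation (the 3072x3072 projection size is an assumption about a FLUX-scale layer, not a measured figure):

    # Trainable parameters per adapted layer: A (out x r) plus B (r x in).
    def lora_params(out_features: int, in_features: int, rank: int) -> int:
        return rank * (out_features + in_features)

    for rank in (8, 16, 32, 64):
        print(rank, lora_params(3072, 3072, rank))

Doubling the rank doubles the adapter's parameters per layer, which is why higher ranks can capture more detail but train more slowly.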
I'm a long way from understanding the logic of Kohya training. Too many parameters. I can't for the life of me understand why there is no better way to do it than the folder preparation, and I have watched your video several times. It's a looooong video.
In the video you say "You need to select LORA. Because we are currently training LORA"... which we aren't, since now we're talking about fine-tuning, so it's a little bit confusing.