Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
@Furkan Gözükara SECourses This doesn't work for me. I extracted a LoRA from the model with the conversion parameters set to FP16 quality, then converted that LoRA to FP8, and I still got a black screen.
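For reference, this is roughly what my FP8 conversion step looks like; it's only a sketch, the file names are placeholders for my setup, and torch.float8_e4m3fn plus saving it through safetensors needs fairly recent torch/safetensors versions:

import torch
from safetensors.torch import load_file, save_file

src = "extracted_lora_fp16.safetensors"  # LoRA extracted with FP16 quality (placeholder name)
dst = "extracted_lora_fp8.safetensors"   # same LoRA cast down to FP8 (placeholder name)

state = load_file(src)
# cast only floating-point tensors; leave any non-float tensors untouched
converted = {k: (v.to(torch.float8_e4m3fn) if v.is_floating_point() else v) for k, v in state.items()}
save_file(converted, dst)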
I know there are much better ways to train a face, but I've got some credits I need to burn through on dreamlook.ai. Can anyone suggest the best settings to use with their expert mode?
On a multi-GPU system Kohya cannot train; it gives me an error saying the GPU id is not set. I tried to specify the GPU id in the Windows_start_Kohya_SS.bat file with something like set CUDA_VISIBLE_DEVICES=5, changing the device ID, but it didn't start.
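Just to show what I expect that variable to do, here is a minimal check (the GPU id 5 is only an example for my machine):

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "5"  # must be set before CUDA is initialized

import torch
print(torch.cuda.device_count())      # should print 1 if only GPU 5 is visible
print(torch.cuda.get_device_name(0))  # the single visible device maps to index 0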
Running from the Install.bat menu worked for me, but it only uses the first GPU. It would be great to run jobs in parallel if a single run cannot span multiple GPUs.
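Something like this is what I have in mind for one training job per GPU in parallel; only a sketch, the script name and config files are placeholders for whatever Kohya actually launches under the hood:

import os
import subprocess

jobs = [("0", "lora_config_a.toml"), ("1", "lora_config_b.toml")]  # (GPU id, config) pairs, placeholders
procs = []
for gpu_id, config in jobs:
    env = {**os.environ, "CUDA_VISIBLE_DEVICES": gpu_id}  # each child process only sees its own GPU
    procs.append(subprocess.Popen(["python", "train_network.py", "--config_file", config], env=env))

for p in procs:
    p.wait()  # wait for all trainings to finish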
@Furkan Gözükara SECourses Can a rank 256 FP16 LoRA work well if I'm using the FP8 version of my FLUX dev fine-tune, or does the LoRA have to be FP8 as well?
In the Accelerate settings you can set the GPU and multi-GPU options, but for fine-tuning with multiple GPUs you need 80 GB GPUs, if I remember correctly. There is no need to edit the bat file.
Hi Doctor! @Furkan Gözükara SECourses Do you have a tutorial about training SD 3.5? I prefer it to FLUX for some kinds of generations; it feels more creative to me when the subjects are not real humans.
As a separate issue, my Swarm creates another folder named StableSwarm, so the model installer for Windows does not work well: it creates a parallel Swarm folder with its own new models folder. I think something is not quite right, or it doesn't match my folder configuration.
Hey, quick question: I'm trying to train a LoRA on FLUX. I have a 3060 with 12 GB, so I'm using your tier 2 config and following the YouTube tutorial. I disabled all my startup programs, and Task Manager shows about 300-400 MB of GPU memory in use when I start training. I've reduced my resolution to 512x512, increased block swaps, and tried the 10 GB config, but every time I get a CUDA out of memory error and the training fails.
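For reference, a quick way to check how much VRAM is actually free right before training starts (just a sketch; the 300-400 MB that Task Manager shows already counts against the 12 GB):

import torch

free, total = torch.cuda.mem_get_info(0)  # bytes free / total on GPU 0
print(f"free: {free / 1024**3:.2f} GiB of {total / 1024**3:.2f} GiB")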
Does anybody know if the free Kaggle notebook is still good (I'd rather say "working") for SDXL training? And, if so, is there a tutorial for using it with the new Kohya interface?
Guys, is the current commit of the kohya_ss sd3-flux.1 branch working for you? I swear the branch worked perfectly yesterday, but I'm getting this annoying error.
@Timson Hi, I also used the sd3-flux.1 branch but am getting this error. You probably fixed it, but I don't understand your solution; could you explain? I'm doing DreamBooth training following Furkan's tutorial. LoRA training works for me, but on full model training I get this error.
Also, I'm thinking about doing it on Massed Compute instead. In the tutorial he said that Kohya there is not on the main branch. Can I use/switch to this sd3-flux.1 branch on Massed Compute?
I would say training in the cloud is the best option, unless you can't afford to spend a few bucks on this project. It requires more effort, though, and some basic Linux administration skills.