Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
I'm looking to do a fine-tune with fp8, but the Dr.'s setting for it uses shared memory and is slow, at around 10 s/it. Speaking to Kohya, though, it seems using shared memory is intended behavior.
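For context on why fp8 matters here, below is a rough sketch of the weight-storage math. The parameter count is an assumption for illustration, not the exact model being trained; the point is that once the base weights no longer fit in a 3090's 24 GB, the driver spills into shared system memory and speed collapses to seconds per iteration.

```python
import torch

# Rough, hypothetical illustration of why fp8 base weights help a 24 GB card:
# halving bytes-per-parameter can be the difference between staying in VRAM
# and spilling into shared (system) memory.
n_params = 12_000_000_000  # assumed ~12B-parameter model, not an exact figure

for dtype in (torch.bfloat16, torch.float8_e4m3fn):
    bytes_per_param = torch.tensor([], dtype=dtype).element_size()
    gb = n_params * bytes_per_param / 1024**3
    print(f"{dtype}: ~{gb:.1f} GB just to hold the base weights")
```

As I understand it, trainers that offer an fp8 base-weight option keep the stored weights in fp8 and upcast per layer at compute time, which is why VRAM drops while training math stays in bf16/fp16.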
When training on my local PC, I need 11 hours of the 3090 running full throttle; my poor PC fans roar the hell out of my room and the temperature goes up a lot.
I'm training this model now. It works much better for training: you can train multiple people in one LoRA without concept mixing. It behaves very much like SDXL and is much more flexible.
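The usual way to keep multiple people from bleeding into each other is to give each subject its own folder and a unique trigger token in every caption. Here is a minimal sketch of that layout; the folder names, tokens, and paths are made up for illustration, and the real trainer's dataset config would simply point at these folders.

```python
from pathlib import Path

# Hypothetical multi-subject dataset: one folder per person, each caption
# starting with that person's unique trigger token so concepts stay separate.
subjects = {
    "person_a": "ohwxa woman",
    "person_b": "ohwxb man",
}

root = Path("dataset")
for folder, trigger in subjects.items():
    subject_dir = root / folder
    subject_dir.mkdir(parents=True, exist_ok=True)
    # One caption file per image, led by that subject's trigger token.
    for image in subject_dir.glob("*.jpg"):
        caption = f"{trigger}, photo of a person"
        image.with_suffix(".txt").write_text(caption)
```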
Is it possible to create a fine-tune and then extract the LoRA from it? I've had good luck with that for single subjects, and sometimes it's also nice to have a checkpoint as well.
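For reference, LoRA extraction from a fine-tune boils down to a low-rank (SVD) approximation of the difference between the tuned and base weights, done per targeted layer. Below is a rough sketch of that idea on a single toy matrix; the sizes and rank are arbitrary, and real extraction scripts add per-layer scaling and clamping details on top.

```python
import torch

def extract_lora(w_base: torch.Tensor, w_tuned: torch.Tensor, rank: int = 32):
    """Approximate the fine-tune delta W_tuned - W_base with two low-rank
    factors, i.e. the lora_up and lora_down weights of a single layer."""
    delta = (w_tuned - w_base).float()
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    u, s, vh = u[:, :rank], s[:rank], vh[:rank, :]
    up = u * s.sqrt()               # (out_features, rank)
    down = s.sqrt()[:, None] * vh   # (rank, in_features)
    return up, down

# Toy check: add a genuinely low-rank "fine-tune" delta to a base weight,
# then verify the extracted factors reconstruct that delta almost exactly.
base = torch.randn(1024, 1024)
tuned = base + 0.01 * (torch.randn(1024, 64) @ torch.randn(64, 1024))
up, down = extract_lora(base, tuned, rank=64)
rel_err = (tuned - base - up @ down).norm() / (tuned - base).norm()
print(f"relative reconstruction error: {rel_err:.2e}")  # should be ~0
```

Extraction at a fixed rank is lossy, so the LoRA usually gets close to the fine-tune but not all the way there, which is exactly why keeping the full checkpoint around as well is handy.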