Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
On the new FLUX LoRA test updates, I see speeds in s/it for each config JSON. Are those the speeds we should be getting? I'm running config 5, which is the one recommended for 12 GB VRAM cards like my 3060. It lists 12.2 s/it, but I'm getting 25.5 s/it. Not sure what I'm doing wrong if those are the expected rates.
For multi-GPU training, do the GPUs need to be equal in VRAM? I have an 8 GB card I'm not using and a 12 GB card that is installed. Would the 8 GB card work for multi-GPU training alongside the 12 GB one?
You need to add a LoRA loader after the CLIP loader: connect the MODEL output from the Flux loader to the model input of the LoRA loader, connect the CLIP output to the LoRA loader's clip input, and then route the LoRA loader's model output to the sampler/scheduler, I believe.
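In ComfyUI API-format JSON terms, that rewiring might look roughly like the sketch below. This is just an illustration under assumptions: the node IDs, the LoRA filename, and the upstream loader nodes are placeholders I made up, and I'm assuming the standard `LoraLoader` node with its usual inputs (`lora_name`, `strength_model`, `strength_clip`, `model`, `clip`).

```python
import json

# Hypothetical ComfyUI API-format fragment. Node "1" is assumed to be
# the Flux model loader and node "2" the CLIP loader; IDs and the
# LoRA filename are placeholders, not from the original comment.
workflow = {
    "3": {
        "class_type": "LoraLoader",
        "inputs": {
            "lora_name": "my_flux_lora.safetensors",  # placeholder name
            "strength_model": 1.0,
            "strength_clip": 1.0,
            "model": ["1", 0],  # MODEL output of the Flux loader node
            "clip": ["2", 0],   # CLIP output of the CLIP loader node
        },
    },
    # The downstream sampler/scheduler would then take its MODEL from
    # ["3", 0] and its CLIP from ["3", 1] instead of the raw loaders.
}

print(json.dumps(workflow, indent=2))
```

The key point is that the LoRA loader sits between the loaders and the sampler, so both the model and CLIP weights get patched before sampling.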