Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
@Furkan Gözükara SECourses One question for my understanding: why would the 16GB config be slower than the 12GB config? Isn't more VRAM better? I am comparing the configs and the only difference is blocks to swap. Shouldn't training slow down if the 12GB config has more blocks to swap?
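For context on the tradeoff itself: block swapping keeps some transformer blocks in system RAM and copies them to the GPU only when they are needed, so more swapped blocks means lower VRAM use but extra transfer time every step. Here is a rough, self-contained sketch in plain PyTorch (not the actual kohya code; the block count and sizes are invented for illustration):

```python
import time
import torch
import torch.nn as nn

# Hypothetical stand-in blocks; real FLUX/Wan transformer blocks are far larger.
def make_block():
    return nn.Sequential(nn.Linear(2048, 8192), nn.GELU(), nn.Linear(8192, 2048))

device = "cuda" if torch.cuda.is_available() else "cpu"
blocks = [make_block() for _ in range(8)]
x_in = torch.randn(4, 2048)

@torch.no_grad()
def forward_pass(blocks_to_swap):
    x = x_in.to(device)
    for i, block in enumerate(blocks):
        block.to(device)          # no-op if the block already lives on the GPU
        x = block(x)
        if i < blocks_to_swap:
            block.to("cpu")       # offload to free VRAM before the next block
    return x

for n_swap in (0, 4, 8):          # e.g. a "16GB" config vs a "12GB" config
    if device == "cuda":
        torch.cuda.synchronize()
    t0 = time.time()
    forward_pass(n_swap)
    if device == "cuda":
        torch.cuda.synchronize()
    print(f"blocks_to_swap={n_swap}: {time.time() - t0:.3f}s "
          "(lower VRAM, but extra CPU<->GPU copies each step)")
```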
L40S and A100 PCIe are my first choices. The L40S is very versatile and has good speed. I use the A100 PCIe when I don't want to lose time searching for parameter limits and running into Out Of Memory problems.
@Furkan Gözükara SECourses Can we train FP8 models as well using the same configs, or do I need to select FP8 in the training parameters? At the moment I can train a LoRA on FP8 using your FP8 config.
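For reference, a hedged sketch of what selecting FP8 in the training parameters typically amounts to with kohya sd-scripts' FLUX LoRA trainer: the fp8_base flag keeps the base model in FP8 during training. The checkpoint path is a placeholder and most required arguments (dataset config, text encoders, optimizer, network rank, etc.) are omitted, so treat this as an illustration rather than a working command.

```python
# Hypothetical launch arguments; --fp8_base exists in recent kohya sd-scripts
# for FLUX LoRA training, but verify the flag names against your installed version.
import subprocess

cmd = [
    "accelerate", "launch", "flux_train_network.py",
    "--pretrained_model_name_or_path", "flux1-dev.safetensors",  # placeholder path
    "--mixed_precision", "bf16",
    "--fp8_base",                      # hold the base model in FP8 to cut VRAM
    "--network_module", "networks.lora_flux",
    "--output_dir", "output",
    # ... dataset config and the other required arguments omitted here ...
]
subprocess.run(cmd, check=True)
```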
@Furkan Gözükara SECourses Do you know what is used for FP8 fine-tuning? Do other tools like OneTrainer support it, or is the only option to do a full training and then save as FP8?
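If a trainer only fine-tunes in BF16/FP16, one fallback is to cast the finished checkpoint to FP8 afterwards. Below is a minimal sketch using safetensors plus PyTorch's float8_e4m3fn dtype; the file names are placeholders, and I can't confirm whether OneTrainer supports native FP8 fine-tuning, so this only illustrates the train-then-convert route mentioned above.

```python
import torch
from safetensors.torch import load_file, save_file

# Placeholder paths; point these at your own finished fine-tune.
src = "finetuned_model_bf16.safetensors"
dst = "finetuned_model_fp8.safetensors"

state = load_file(src)
converted = {}
for name, tensor in state.items():
    # Only cast large floating-point weights; keep norms/biases and non-float
    # tensors at their original precision to limit quality loss.
    if tensor.dtype in (torch.float16, torch.bfloat16, torch.float32) and tensor.ndim >= 2:
        converted[name] = tensor.to(torch.float8_e4m3fn)
    else:
        converted[name] = tensor

save_file(converted, dst)
print(f"saved FP8 checkpoint to {dst}")
```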
@Furkan Gözükara SECourses I did the updates as suggested and played with the repeat and epoch numbers. Still it won't go past 1600 steps and 10 epochs (I've set it to 200). Is this normal, or am I missing something?
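For a sanity check on the step math: kohya computes steps from image count × repeats × epochs ÷ batch size, but it also stops at max_train_steps, and as far as I know the default for that cap is 1600 unless the config raises it. The numbers below are illustrative and happen to reproduce the 1600-step / 10-epoch ceiling described above.

```python
# Illustrative numbers only; plug in your own dataset size and settings.
num_images = 20
repeats = 8            # the repeat count in the dataset folder name, e.g. "8_mychar"
batch_size = 1
requested_epochs = 200

steps_per_epoch = num_images * repeats // batch_size
requested_steps = steps_per_epoch * requested_epochs

# kohya sd-scripts stops at max_train_steps; if the config leaves it at the
# default (1600, as far as I know), the run ends early no matter how many
# epochs you request.
max_train_steps = 1600
actual_steps = min(requested_steps, max_train_steps)
actual_epochs = actual_steps // steps_per_epoch

print(f"requested: {requested_steps} steps / {requested_epochs} epochs")
print(f"capped at:  {actual_steps} steps / {actual_epochs} epochs")
```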
Here's a challenge I want to complete. I have a fine-tuned checkpoint for my character, but I want my character's body to be taken from another character. So I trained a LoRA using pics of the other character, but the LoRA also includes that character's face. What do you guys suggest I do to get the best result? I don't want my face changed because of this LoRA. Is using the <segment...> syntax in SwarmUI the way to go, or should I change the training images so that the head is cut off? Appreciate this community so much, I learn something every day!
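If the cropped-training-images route is chosen, here is a rough sketch using OpenCV's bundled Haar face detector to keep only the region below the detected face. The folder names and margin are made up, detection will miss some images, and every output should still be reviewed by hand.

```python
import os
import cv2

# Hypothetical folders; adjust to your dataset layout.
src_dir = "body_character_raw"
dst_dir = "body_character_headless"
os.makedirs(dst_dir, exist_ok=True)

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

for name in os.listdir(src_dir):
    img = cv2.imread(os.path.join(src_dir, name))
    if img is None:
        continue
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        continue  # no face found; review this image manually
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest detection
    cut = min(img.shape[0], y + h + h // 4)  # keep a small margin below the chin
    cropped = img[cut:, :]
    if cropped.shape[0] > 0:
        cv2.imwrite(os.path.join(dst_dir, name), cropped)
```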
Now when I install the v44 or v14 scripts, it does update Kohya and I get the correct step count.
In your LoRA configs you are using the additional parameters field for block swap. This throws an exit 0 error and training exits. The LoRA tab in the latest Kohya UI now has a block swap option. If I manually set it to the value from your config and remove the additional parameter, training works, though I am not sure about the UI note saying to use it with fused backward pass, which is not available in the UI. Either way, training works with block swap if I set it in the UI and remove your additional parameters setting. Can you fix the configs so they work for LoRA? I have not tested the finetune config with the new version, but please check whether that also needs an update.
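A hedged sketch of the workaround described above, applied to a saved kohya GUI config treated as JSON: move the block swap value out of the extra-arguments string and into a dedicated field. The key names "additional_parameters" and "blocks_to_swap" are assumptions about the GUI's config format, so verify them against an actual saved config before relying on this.

```python
import json
import re

# Placeholder filename; point at the LoRA config you downloaded.
path = "lora_config.json"

with open(path) as f:
    cfg = json.load(f)

extra = cfg.get("additional_parameters", "")           # key name is an assumption
match = re.search(r"--blocks_to_swap[= ](\d+)", extra)
if match:
    # Move the value into the dedicated field and drop it from the extra args.
    cfg["blocks_to_swap"] = int(match.group(1))         # key name is an assumption
    cfg["additional_parameters"] = re.sub(
        r"--blocks_to_swap[= ]\d+\s*", "", extra
    ).strip()

with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
print("moved the block swap setting into the dedicated field")
```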