Hello everyone. I am Dr. Furkan Gözükara, a PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
L40S and A100 PCIe are my first choices. The L40S is very versatile and has good speed. I use the A100 PCIe when I don't want to lose time searching for parameter limits and running into Out Of Memory problems.
@Furkan Gözükara SECourses can we train FP8 models as well using the same configs, or do I need to select FP8 in the training parameters? At the moment I can train a LoRA on FP8 using your FP8 config.
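A rough pointer here, not something confirmed in the original configs: in the FLUX branch of kohya sd-scripts, holding the base model in FP8 during training is toggled by the --fp8_base flag, which either maps to an FP8 base checkbox in the GUI (where available) or can be pasted into the Additional parameters field. A minimal sketch:

```
--fp8_base    # run the base FLUX model in FP8 while training; output precision is set separately via save_precision
```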
@Furkan Gözükara SECourses do you know what is used for FP8 fine-tuning? Do other tools like OneTrainer support it, or is the only option to do full training and then save as FP8?
@Furkan Gözükara SECourses I did the updates as suggested and played with the repeat and epoch numbers. Still it won't go past 1600 steps and 10 epochs (I've set it to 200). Is this normal or am I missing something?
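One thing worth checking, offered as a guess rather than a confirmed diagnosis: sd-scripts caps runs with max_train_steps, and its built-in default is 1600, which matches exactly where this run stops. A hedged sketch of the value to look at (name as in sd-scripts; whether the GUI exposes it directly varies by version, and 32000 is only an illustrative number):

```
max_train_steps = 32000    # sd-scripts' own default is 1600; raise it above (images x repeats x epochs) / batch_size
```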
Here's a challenge I want to complete. I have a fine-tuned checkpoint for my character, but I want my character's body to be taken from another character. So I trained a LoRA using pics of the other character, but the LoRA also includes that character's face. What do you guys suggest I do to get the best result? I don't want my face changed because of this LoRA. So is using the <segment...> syntax in SwarmUI the way to go, or should I change the training images so the head is cropped out? Appreciate this community so much, I learn something every day!
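If it helps, here is a minimal sketch of the <segment:...> route in SwarmUI. The trigger word ohwx and the LoRA name are made-up placeholders, and whether LoRAs in the main prompt also apply inside the segment's refinement pass is worth double-checking in the SwarmUI docs:

```
photo of ohwx man, full body <lora:other_character_body:0.7> <segment:face> photo of ohwx man's face, detailed skin
```

Cropping the heads out of the training images is the other route; the segment approach keeps the dataset as-is and only re-generates the face region after the main pass.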
Now when I install the v44 or v14 scripts it does update Kohya and I am getting the correct step count.
In your LoRA configs you are using additional parameters for block swap. This throws an exit 0 error and training exits. The LoRA tab in the latest Kohya now has a block swap option in the UI. If I manually set it to what your config has and remove the additional parameter, the training works, but I'm not sure about the note next to the UI option that says "use with fused back pass", since fused back pass is not available in the UI. In any case, training does work with block swap if I set it in the UI and remove your additional parameters setting. Can you fix the configs so that this works for LoRA? I have not tested the fine-tune with the new config, but please check whether that also needs an update.
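For reference, a hedged sketch of the two routes described above. The flag name comes from the FLUX branch of sd-scripts, the value 36 is purely illustrative (not taken from the original config), and the note about fused backward pass is my assumption about why the option is missing from the LoRA UI:

```
# Route 1 - Additional parameters field (what the configs currently do):
--blocks_to_swap 36

# Route 2 - leave Additional parameters empty and set the new block swap field in the LoRA UI to the same number.
# --fused_backward_pass exists for the full fine-tune script (Adafactor), not for LoRA training, which would explain
# why the LoRA UI only mentions it without offering it.
```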
And I also have a question about training a full DreamBooth checkpoint @Furkan Gözükara SECourses. You said training a person with DreamBooth should be separated into 2 parts: part 1, pick the epoch that is still flexible to styles; part 2, train new epochs using that flexible-style model from part 1 as the base. What is the difference between simply continuing training and splitting it into 2 parts like this, since we use a constant learning rate? For example: part 1 - I choose epoch 30 to use as the base model to retrain. Part 2 - I train 100 more epochs on the base model from part 1. But that is just the same as training 130 epochs straight through from part 1, right? Because we use a constant learning rate.
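Spelling out the reasoning in this question (a sketch of the commenter's own logic, not a claim about what the trainer does internally): with a constant learning rate the per-step update rule never changes, so restarting from the saved epoch-30 model differs from 130 straight epochs mainly in the reset optimizer state (e.g. Adam moments) and a fresh data-shuffling order.

```latex
% constant-LR update rule, the same whether you stop/resume or train straight through
\theta_{t+1} = \theta_t - \eta \, g_t
```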
So as a general rule, the more training images you have of one character, the fewer epochs you need to get a good result. Am I getting the logic right? With 300 pics you might get a good result with 10 or 20 epochs, but with 15 pics you will need more epochs, like 100-200.
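That intuition amounts to keeping the total number of optimizer steps roughly constant; here is a tiny sanity-check script, where the 3000-step target is only an illustrative assumption and not a number from the videos:

```python
# Heuristic: keep total optimizer steps roughly constant, so fewer images means more epochs.
def epochs_for_target_steps(num_images, repeats=1, batch_size=1, target_steps=3000):
    steps_per_epoch = (num_images * repeats) // batch_size
    return max(1, round(target_steps / steps_per_epoch))

print(epochs_for_target_steps(300))  # -> 10 epochs for a 300-image dataset
print(epochs_for_target_steps(15))   # -> 200 epochs for a 15-image dataset
```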
@Furkan Gözükara SECourses are you in contact with anyone who is doing large finetunes like RealVis/Juggernaut, and is attempting to do the same for Flux? I might have something that helps.
@Furkan Gözükara SECourses ss_network_args: {"loraplus_lr_ratio": "16", "loraplus_text_encoder_lr_ratio": "4", "dropout": null} - do you know what this is, and if so, how do I set it up?
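As I understand it, those are LoRA+ settings recorded in the LoRA's training metadata: the ratios give the LoRA up-weights (and the text encoder LoRA) a learning rate that is a multiple of the base LR. In kohya sd-scripts they are passed through network_args; a minimal sketch with the values from the metadata above, rest of the command omitted:

```
--network_args "loraplus_lr_ratio=16" "loraplus_text_encoder_lr_ratio=4"
```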