Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
@Furkan Gözükara SECourses Extracting a LoRA from the finetune only works with the base model I created the finetune from. Any image generated with a different model using this LoRA is just static.
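For context, here is a minimal conceptual sketch of what LoRA extraction does (hypothetical tensors and rank; real tools such as sd-scripts' networks/extract_lora_from_models.py do this layer by layer). It shows why the extracted LoRA is tied to the exact base model: it only encodes the difference between the finetune and that base.

import torch

# Hypothetical weights standing in for one layer of the models.
W_base  = torch.randn(320, 320)                   # base model used for the finetune
W_tuned = W_base + 0.01 * torch.randn(320, 320)   # corresponding finetuned weight

delta = W_tuned - W_base                          # the LoRA encodes only this difference
U, S, Vh = torch.linalg.svd(delta)
rank = 16                                         # network dim of the extracted LoRA
down = Vh[:rank, :]                               # lora_down
up = U[:, :rank] * S[:rank]                       # lora_up

# W_base + up @ down approximately reconstructs W_tuned. Added to a
# different base, the same delta lands on weights it was never trained
# against, which is why the outputs degrade to static.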
@Furkan Gözükara SECourses I asked bmaltais and he added a feature to the GUI that was already in the code: the ability to set the maximum number of save-state folders to whatever we want. If set to 1, it will be overwritten with the newest save-state made.
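For anyone on the command line, a hedged sketch of what I assume are the equivalent sd-scripts flags (flag names per the sd-scripts docs; the GUI field presumably maps onto --save_last_n_epochs_state):

import subprocess

subprocess.run([
    "accelerate", "launch", "flux_train_network.py",
    "--save_state",                     # write optimizer/scheduler state alongside checkpoints
    "--save_last_n_epochs_state", "1",  # keep only the most recent state folder
    # ... your other training arguments ...
])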
Trying to train on a Flux model but getting this error with it...
  File "D:\kohya_ss\sd-scripts\library\flux_utils.py", line 81, in analyze_checkpoint_state
    max_double_block_index = max(
ValueError: max() arg is an empty sequence
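That ValueError means analyze_checkpoint_state found no double_blocks.* tensors to take a max over, which usually indicates the file is not a raw single-file Flux checkpoint (for example, a diffusers-format or otherwise incompatible model). A quick diagnostic sketch, assuming the model path below is a placeholder:

from safetensors import safe_open

path = "D:/models/flux1-dev.safetensors"  # replace with the model you passed to the trainer
with safe_open(path, framework="pt") as f:
    keys = list(f.keys())
double = [k for k in keys if "double_blocks." in k]
print(f"{len(keys)} tensors, {len(double)} double_blocks keys")
# 0 double_blocks keys -> kohya's Flux loader cannot parse this file;
# point the trainer at the original single-file Flux checkpoint instead.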
Does this mean we can now resume training where we left off? And if I want to train again with more epochs, can I just add them and start from where it last trained?
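A hedged sketch of how resuming looks with sd-scripts on the command line: --resume points at a state folder written by a previous run that had --save_state enabled, and raising --max_train_epochs lets the run continue past its original length (the folder name below is hypothetical):

import subprocess

subprocess.run([
    "accelerate", "launch", "flux_train_network.py",
    "--resume", "output/my_lora-000010-state",  # hypothetical state folder from the earlier run
    "--max_train_epochs", "20",                 # e.g. the original run stopped at 10
    # ... repeat the rest of the original training arguments ...
])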
@bmaltais can we add this to the GUI? Many people want to test it.
Specify a large value for the --prior_loss_weight option (not in the dataset config). We recommend 10-1000.
The idea is to set the weight so that the loss of training without regularization images is close to the loss of training using DOP.
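For reference, a minimal sketch of how a prior-loss weight of this kind typically enters the objective in sd-scripts-style trainers (function and tensor names here are illustrative, not the library's API): reg/DOP samples get their per-sample loss scaled by the weight before averaging.

import torch

def weighted_loss(pred, target, is_reg, prior_loss_weight):
    # Per-sample MSE, averaged over all non-batch dimensions.
    per_sample = torch.nn.functional.mse_loss(pred, target, reduction="none")
    per_sample = per_sample.mean(dim=list(range(1, per_sample.dim())))
    # Reg/DOP samples are scaled by prior_loss_weight, normal samples by 1.
    weights = torch.ones_like(per_sample)
    weights[is_reg] = prior_loss_weight
    return (per_sample * weights).mean()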
I doubt that this is a good config recommendation.
All my proof-of-concept samples were done at weight 1.
Yes, the loss of the reg steps is then very low compared to the train steps, but that is a function of the reg-step prediction already being very close to the target.
You can try weight 10 or even 1000, but I'd expect the regularization to then overwhelm the training steps, so the model doesn't learn anything anymore.
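To make the "overwhelms" point concrete, a quick back-of-the-envelope with illustrative numbers (not measured values):

train_loss = 0.10   # typical training-step loss, illustrative
reg_loss = 0.001    # reg/DOP loss, already near its target

for w in (1, 10, 1000):
    total = train_loss + w * reg_loss
    print(f"weight={w:4d}: reg share of the combined loss ~ {w * reg_loss / total:.0%}")
# weight=   1 -> ~1%: regularization barely nudges training
# weight=  10 -> ~9%
# weight=1000 -> ~91%: regularization dominates and training stalls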