Hello everyone. I am Dr. Furkan Gözükara, a PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
Hey, just wanted to drop this here as an FYI in case you're looking for STAR alternatives within your app. You can check it out if you want; they claim to be better than STAR, but I haven't been able to test it: https://github.com/yjsunnn/DLoRAL?tab=readme-ov-file
I'm using the configuration you have been using for 48 GB RAM. I rented an H100 GPU and trained on a 20-image dataset, but the images generated after training don't even match my training data. Where do you think I might be going wrong? All the parameters are loaded from the 15-image dataset config file. Can you please advise?
Yes, I'm loading them in the DreamBooth tab. I am also loading the training images properly, but just to be sure: is there a way for me to know whether the training images are being loaded properly or not?
/kaggle/working/kohya_ss/training_data/img/1_ohwx man...
00:37:48-667704 INFO Regularization images directory is missing... not copying regularisation images...
00:37:48-670635 INFO Done creating kohya_ss training folder structure at
And when I start training, you can see them in the message.txt file.
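If it helps, one quick way to sanity-check what Kohya will actually train on is to list the image folders yourself. This is just a sketch, not a built-in Kohya feature; the root path is the Kaggle one from the log above, and the `N_` folder prefix is the repeat count Kohya parses from the folder name:

```python
from pathlib import Path

# Path taken from the log above; adjust if your setup differs.
img_root = Path("/kaggle/working/kohya_ss/training_data/img")

for folder in sorted(p for p in img_root.iterdir() if p.is_dir()):
    # Kohya folder names look like "1_ohwx man": repeats, underscore, concept.
    repeats, _, concept = folder.name.partition("_")
    images = [p for p in folder.iterdir()
              if p.suffix.lower() in {".jpg", ".jpeg", ".png", ".webp"}]
    print(f"{folder.name}: {len(images)} images, {repeats} repeat(s), concept {concept!r}")
```

If the image count or the concept name printed here doesn't match what you expect, the training data isn't being picked up the way you intended.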
I tried other images, including mine, but it didn't work, so I tried taking a few screenshots from your 15-image dataset, and at the 1000th step the sample image I get is not even close to the training images. I will try with Dwayne Johnson images now and give you an update.
I'm not sure either, because the interface I'm using is JupyterLab, but the folders being created say Kaggle. Let me try using Massed Compute and I will update you with the results I get. Thanks for your advice.
Update: Dwayne Johnson training works fine. I get good results around the 3000th step. I will try again with other characters to see whether the output matches the training data or not. Thanks for your help. I have one more question: is there a way to programmatically select the best checkpoint instead of manually checking the checkpoints to see which one is best?
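One way to do this programmatically (not a built-in Kohya feature, just a common approach) is to score each checkpoint's sample images against the training set and pick the highest scorer. Below is a minimal sketch using CLIP image similarity via Hugging Face transformers; the folder paths and the sample filename pattern (step number as the second underscore-separated field) are assumptions you would adapt to your run:

```python
from collections import defaultdict
from pathlib import Path

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(paths):
    """Return L2-normalized CLIP image embeddings for a list of image paths."""
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

# Training images (assumed path -- point this at your dataset folder).
train = embed(sorted(Path("training_data/img").rglob("*.jpg")))

# Group sample images by training step, assuming filenames carry the step
# number as the second underscore-separated field (adjust to your naming).
by_step = defaultdict(list)
for p in Path("outputs/sample").glob("*.png"):
    step = int(p.stem.split("_")[1])
    by_step[step].append(p)

# Score each step's samples by mean cosine similarity to the training set.
for step, paths in sorted(by_step.items()):
    samples = embed(paths)
    score = (samples @ train.T).mean().item()
    print(f"step {step:>6}: mean CLIP similarity {score:.4f}")
```

One caveat: the highest CLIP similarity can belong to an overfit checkpoint, so this is better for shortlisting candidates than as the final word.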
Another question: in the dataset Dwayne Johnson does not have hair, but in a few of the images I generate, hair is being added. Do you know why, and how can we keep it true to the dataset? I can add "hair" to the negative prompt, but is there a different way to keep it generic without hardcoding?
Hi! I have a question about LoRA training and captioning. A friend of mine suggested that when you caption your object, you're not supposed to mention the object in the image at all, only everything around it, like a semantic mask, so that the object tag works as the caption for the LoRA object. In the YouTube tutorials I've seen, on the other hand, everyone says you should describe the object in detail and consistently, and only briefly mention the rest of the image.
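For concreteness, here is how the two approaches might caption the same training photo (illustrative captions only, assuming `ohwx` as the trigger token; these are not from any specific tutorial):
Friend's approach (never describe the subject, so the trigger token absorbs it): `ohwx, standing in a park at golden hour, shallow depth of field`
Tutorial approach (describe the subject consistently and in detail, the rest briefly): `ohwx man with a shaved head and short beard, wearing a blue shirt, in a park`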