Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
If I increase the text encoder learning rate and train again, could that improve the results? I used the low VRAM settings from your tutorial; I just increased the repetitions from 40 to 100 on the first screen.
Hi Boss @Furkan Gözükara SECourses, for SDXL training can I more or less use the same settings as for FLUX, just changing things at the beginning? Or do you recommend a different tutorial?
Just trained a LoRA on a recent SDXL model (NoobAI V-Pred 0.5), and it turned out like this. But some LoRAs trained on epsilon versions do work with v-pred checkpoints. This is just speculation, since the checkpoint doesn't use the epsilon structure (I don't know what epsilon or v-pred mean) — can Kohya not train it or something?
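For context on the epsilon vs. v-pred question above: epsilon-prediction models are trained to predict the added noise, while v-prediction models are trained to predict a "velocity" target that mixes the noise and the clean image (Salimans & Ho, 2022). A LoRA trained against one objective can therefore misbehave on a checkpoint using the other. A minimal sketch of the v target, with illustrative names:

```python
import math

def v_prediction_target(x0, eps, alpha_bar_t):
    """v-prediction training target:
    v = sqrt(alpha_bar) * eps - sqrt(1 - alpha_bar) * x0
    where x0 is the clean sample, eps the added noise, and
    alpha_bar_t the cumulative noise-schedule coefficient at step t.
    (Scalar version for clarity; in practice these are tensors.)"""
    a = math.sqrt(alpha_bar_t)
    s = math.sqrt(1.0 - alpha_bar_t)
    return a * eps - s * x0
```

An epsilon-objective model would instead be trained to output `eps` directly, which is why the two loss targets, and LoRAs tuned for them, are not interchangeable.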
Hello again, how do I resume a LoRA training in Kohya? I know how to pick up a training I left off when fine-tuning a checkpoint, by selecting the latest saved safetensors in the "Pretrained model name or path" field, but I don't know how to resume a previous LoRA training. Thanks for the help.
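For reference on the resume question above, a hedged sketch of how this typically works with the kohya-ss sd-scripts CLI (which the GUI wraps): pass `--save_state` during the original run so optimizer/scheduler state is written out, then point `--resume` at that saved state folder. Paths and the trailing arguments below are placeholders, not a full working config:

```shell
# First run: save full training state alongside the LoRA checkpoints
accelerate launch train_network.py \
  --save_state \
  --output_dir /path/to/output \
  ...  # your usual LoRA arguments

# Resume: same arguments as before, plus --resume pointing at the state folder
accelerate launch train_network.py \
  --resume /path/to/output/at-step00001000-state \
  ...  # your usual LoRA arguments
```

Note this differs from loading a saved `.safetensors` into "Pretrained model name or path": `--resume` restores the optimizer state too, so the learning-rate schedule continues where it stopped rather than restarting.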
I trained a model and merged it with the Hyper or Turbo 8-step LoRA. The 8-step result for normal generation is OK, but the upscale quality is worse than normal.
If my pics are JPG, should they be converted to PNG? And which tool should I use if I have a small JPG, say 300 × 300 px, to make it a good training image?
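On the JPG question above: converting to PNG avoids further lossy re-compression, and 300 × 300 is small for training, so upscaling helps — though plain resampling adds no real detail (an AI upscaler does better). A minimal Pillow sketch, with illustrative names and a hypothetical 1024 px target:

```python
from PIL import Image

def jpg_to_png(jpg_path, png_path, min_side=1024):
    """Convert a JPG to lossless PNG and, if the image is small,
    upscale with Lanczos so its short side reaches min_side.
    Note: resampling only interpolates pixels; for real added
    detail, run an AI upscaler on the image first."""
    img = Image.open(jpg_path).convert("RGB")
    w, h = img.size
    short = min(w, h)
    if short < min_side:
        scale = min_side / short
        img = img.resize((round(w * scale), round(h * scale)), Image.LANCZOS)
    img.save(png_path)
    return img.size
```

Usage: `jpg_to_png("photo.jpg", "photo.png")` would turn a 300 × 300 JPG into a 1024 × 1024 PNG.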
OK, thank you. @Furkan Gözükara SECourses I'm just about to start DreamBooth training, and I want the best quality. I heard you note that batch size 1 on 48 GB (using an A6000) actually yields better quality than batch size 7, so the only advantage of batch size 7 is speed. Would you recommend batch size 1 then if I want the best quality?
OK, trying out batch size 7 first as you suggested; if the quality isn't good enough, I might run batch size 1 later. By the way, why either 15 or 256 images — are those some sort of magic numbers for fine-tuning?