Hello everyone. I am Dr. Furkan Gözükara, PhD in Computer Engineering. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
Yes, 1024,1024, and the training images are 1280x1536. Now it works with stabilityai/stable-diffusion-xl-base-1.0.
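Roughly, the relevant flags look like the sketch below — a minimal example assuming Kohya sd-scripts, where the script choice, paths, and bucket values are illustrative rather than the exact config:

```bash
# Sketch of an SDXL run at 1024,1024 with bucketing enabled, so the
# 1280x1536 training images are resized into buckets instead of cropped.
accelerate launch sdxl_train.py \
  --pretrained_model_name_or_path="stabilityai/stable-diffusion-xl-base-1.0" \
  --train_data_dir="./train/img" \
  --resolution="1024,1024" \
  --enable_bucket \
  --max_bucket_reso=1536
```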
Anyone know why I always get bad results from ADetailer? Does it only work well on small, far-away faces? On my images where my face is pretty large (think portrait), ADetailer always makes my face look worse, as if it's set to a higher CFG Scale (it's not). I followed the same settings @Dr. Furkan Gözükara used in the SDXL DreamBooth training video. For some reason this has always been my result with ADetailer, and I'm not sure what I could be doing wrong.
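For concreteness, this is the kind of setting I mean — a hedged sketch of lowering ADetailer's inpaint denoising strength via the A1111 API. The args layout (including the two leading booleans) and key names differ across ADetailer versions, so treat them as assumptions, not a verified call:

```bash
# Hypothetical A1111 API call with ADetailer enabled; ad_denoising_strength
# lowered from the usual default so large portrait faces are altered less.
# The two leading booleans (enable, skip img2img) are only expected by
# newer ADetailer versions — check your installed version's API schema.
curl -s http://127.0.0.1:7860/sdapi/v1/txt2img \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "closeup photo of ohwx man",
    "steps": 30,
    "alwayson_scripts": {
      "ADetailer": {
        "args": [
          true,
          false,
          {
            "ad_model": "face_yolov8n.pt",
            "ad_denoising_strength": 0.25,
            "ad_inpaint_only_masked": true
          }
        ]
      }
    }
  }'
```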
BTW, do you think teaching an SDXL style on a RealVisXL model would be better? It seems the custom SDXL models are still more flexible with prompts than the 1.5 models were.
Can someone share or show us what a high-quality or ultra-high-quality dataset for SDXL DreamBooth training looks like? For example, how many images are enough, and what ratio of close-up face, upper-body, and full-body shots is needed? If a photographer takes shots of the subject, what should they pay attention to, etc.?
Hello, new to the Patreon. Looking at the config files, I notice I can't find the network dim or alpha. Is 128/1 still the default we are using here, or is it 128/128?
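For what it's worth, my understanding is that dim/alpha only exist for LoRA-style training; in raw Kohya sd-scripts flags it would look roughly like this (values illustrative, not necessarily what these configs use):

```bash
# Sketch: LoRA network settings in Kohya sd-scripts. A full DreamBooth
# fine-tune has no network_dim/network_alpha at all, which may be why
# the config files don't show them.
accelerate launch sdxl_train_network.py \
  --network_module=networks.lora \
  --network_dim=128 \
  --network_alpha=128
```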
Hi @Dr. Furkan Gözükara and everyone. My 10-year-old nephew wants a cool illustration of himself for Xmas, so I want to do a LoRA training of him. Can I use the images from the download_man_reg_imgs.sh file? Or is there a child regularization images dataset somewhere? Lastly, will the class prompt "man" work in this case, or does it have to be "child"? Thanks!
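In case it matters, here's how I understand the folder layout would change for a "child" class prompt under the Kohya convention — the "ohwx" token, the repeat counts, and the existence of a child reg set are assumptions on my part:

```bash
# Hypothetical Kohya dataset layout; folder format: <repeats>_<instance token> <class>
mkdir -p "train/img/20_ohwx child"   # training photos of my nephew
mkdir -p "train/reg/1_child"         # regularization images for the class "child"
```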