Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
Yes, 1024x1024, and the training images are 1280x1536. It now works with stabilityai/stable-diffusion-xl-base-1.0
Does anyone know why I always get bad results from Adetailer? Does it only work well on small, distant faces? On images where my face is fairly large (think portrait), Adetailer always makes my face look worse, as if it were set to a higher CFG scale (it isn't). I followed the same settings @Dr. Furkan Gözükara used in the SDXL Dreambooth training video. For some reason this has always been my result with Adetailer, and I'm not sure what I could be doing wrong
BTW, do you think teaching an SDXL style on a RealVisXL model would be better? The custom SDXL models still seem more flexible with prompts than the 1.5 models were.
Can someone share or show us what a high-quality or ultra-high-quality dataset for SDXL Dreambooth training looks like? For example, how many images are enough, and what ratio of close-up face, upper-body, and full-body shots is needed? If a photographer takes the shots of the subject, what should they take care of, etc.?
Hello, new to the Patreon. Looking at the config files, I notice I can't find the network dim or alpha. Is 128/1 still the default we are using here, or is it 128/128?
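For context, in Kohya's sd-scripts the dim/alpha pair is usually set via the `--network_dim` and `--network_alpha` flags (or the matching keys in a config TOML). A minimal sketch of where they go is below; the 128/1 values simply mirror the question, and the paths are placeholders, not the actual Patreon config:

```shell
# Hedged sketch: a Kohya sd-scripts SDXL LoRA launch showing where
# network dim and alpha are set. Values 128/1 echo the question above;
# the dataset/output paths are placeholders, not the real config.
accelerate launch sdxl_train_network.py \
  --pretrained_model_name_or_path="stabilityai/stable-diffusion-xl-base-1.0" \
  --network_module=networks.lora \
  --network_dim=128 \
  --network_alpha=1 \
  --train_data_dir="/path/to/dataset" \
  --output_dir="/path/to/output"
```

Note that alpha scales the effective LoRA learning rate (roughly by alpha/dim), so 128/1 and 128/128 behave quite differently at the same learning rate.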
Hi @Dr. Furkan Gözükara and everyone. My 10-year-old nephew wants a cool illustration of himself for Xmas, so I want to do a LoRA training of him. Can I use the images from the download_man_reg_imgs.sh file? Or is there a child reg-images dataset somewhere? Lastly, will the class prompt "man" work in this case, or does it have to be "child"? Thanks!
@Dr. Furkan Gözükara have you tried multi-GPU LoRA training with kohya? ddp_bucket_view and the other parameter don't seem to exist even on the dev branch
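For anyone else hitting this: multi-GPU runs with sd-scripts are normally launched through Hugging Face accelerate rather than script-level flags. A minimal sketch, assuming two GPUs (the script name and config path are placeholders, and whether the DDP flags in question exist depends on your sd-scripts version):

```shell
# Hedged sketch: launching Kohya sd-scripts training across 2 GPUs via
# accelerate. --multi_gpu and --num_processes are standard accelerate
# launch options; the script and config path are placeholders.
accelerate launch --multi_gpu --num_processes=2 \
  sdxl_train_network.py --config_file="/path/to/config.toml"
```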
So I'm using all the latest instructions and settings, and I'm running training on a 4090 on Runpod. In the video, Dr. Furkan gets a total training time of 50m-2h using a 3090 on Runpod. For some reason I'm getting a training time of ~3 hours, despite being on a 4090.
Did something change about the training between the video's publication and now, making it more intensive? Or am I doing something wrong?
In the vast expanse of the whimsical world of Widdendream, where every creature was as unique as a snowflake in a winter flurry, there stood two friends: little Lyla Longbeak and the towering Tara Tattercloak. Lyla, with her inquisitive eyes and a beak sharp enough to pick the seeds of wisdom from the fruits of knowledge, gazed up in awe at her friend. Tara, draped in a cloak of midnight blue, tattered at the edges from embracing every thorn and rose life offered, towered above like a gentle giant, her eyes kind pools of understanding.
Together, they were a testament to the beauty of difference, a duo that danced in harmony despite their contrasting tunes. The room they shared was filled with oddities and ends, with hanging cages that held not birds but blooming ideas, and a television that was old and wise, flickering with the stories of yore.
"See, Lyla," Tara's voice rustled like the leaves of an ancient tree, "we are like these cages, different in size and shape, yet home to ideas just as bright and beautiful." Lyla nodded, her heart swelling with the warmth of acceptance.
Their friendship was a mosaic, a splendid tapestry woven from threads of myriad textures and hues, teaching all who visited Widdendream that diversity was not just to be tolerated, but celebrated, for it was the very essence of life's enchanting tapestry.