Hello everyone. I am Dr. Furkan Gözükara, a PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
@Dr. Furkan Gözükara Sorry for the delay. Here are screenshots of my ADetailer settings. I have also used the "photo of ohwx man" prompt. I could also DM photos if you want to see examples with/without ADetailer.
If you're referring to training, download the .json config file and upload it again the next time you need it. If you're referring to generating images on Kaggle, you can upload a PNG that already has the desired settings (such as the last image you generated) to "PNG Info" and then send those settings to txt2img.
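To add a bit of detail on the PNG route: as far as I know, A1111 embeds the generation settings in the PNG's "parameters" text chunk, and the PNG Info tab essentially just reads that chunk back. A minimal sketch of what gets read (the file name here is only a placeholder, not a file from this thread):

```python
# Sketch only: reads the text chunk A1111 normally embeds in generated PNGs.
# "last_generation.png" is an example path, not a real file from the thread.
from PIL import Image

img = Image.open("last_generation.png")
settings = img.info.get("parameters", "")  # prompt, negative prompt, steps, sampler, seed, ...
print(settings)
```

If that chunk comes back empty, the image was probably saved or re-encoded without metadata, which is also why PNG Info sometimes shows nothing.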
Master Stable Diffusion XL Training on Kaggle for Free! Welcome to this comprehensive tutorial where I'll be guiding you through the exciting world of setting up and training Stable Diffusion XL (SDXL) with Kohya on a free Kaggle account. This video is your one-stop resource for learning everything from initiating a Kaggle session with dual ...
I've tested a few trainings with your SDXL DreamBooth then LoRA extraction method, with no captions and using reg images, but when I generate images using the LoRA, it doesn't resemble the subject's face.
Hi there, I get an error in Dreambooth when saving the models. It shows a massive error list that starts with: Exception training model: ' Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'encoder.conv_in.weight', 'decoder.up_blocks.1.upsamplers.0.conv.weight', 'decoder.up_blocks.3.resnets.0.conv_shortcut.weight', 'decoder.up_blocks.3.resnets.0.norm1.weight', 'encoder.mid_block.resnets.1.norm1.weight',
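For what it's worth, that message is the one safetensors raises when two entries in a state dict point at the same memory. Below is a minimal sketch of the failure class (not the Dreambooth extension's own code) plus the library's save_model helper, which de-duplicates shared tensors before writing; whether the extension exposes a way to save through that path is a separate question.

```python
# Minimal reproduction of the error class, assuming it originates in safetensors.
import torch
from safetensors.torch import save_file, save_model

w = torch.randn(4, 4)
shared = {"encoder.weight": w, "decoder.weight": w}  # two names, one underlying storage

try:
    save_file(shared, "shared.safetensors")
except RuntimeError as e:
    print(e)  # "Some tensors share memory, this will lead to duplicate memory on disk ..."

# For a full nn.Module, save_model() detects shared tensors and de-duplicates them first.
tied = torch.nn.Linear(4, 4)
model = torch.nn.ModuleDict({"enc": tied, "dec": tied})  # deliberately tied weights
save_model(model, "model.safetensors")  # succeeds
```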
Dreambooth is the best training method for Stable Diffusion. In this tutorial, I show how to install the Dreambooth extension of Automatic1111 Web UI from scratch. Additionally, I demonstrate my months of work on the realism workflow, which enables you to produce studio-quality images of yourself through #Dreambooth training. Furthermore, I shar...
The same happened for me. Turn sample creation off by setting it down to 0. I also turned xformers off (back to default) and turned off cache latents. DreamBooth runs pretty fast on my 4070 laptop: around 1.35 it/s and 8.7 GB VRAM with 512x512 images. Is that OK? Does the default scheduler affect quality? With xformers on it was very slow with much higher VRAM usage.
I was thinking about training a model on people wearing my t-shirt design and then using inpainting to put the t-shirts on other people. Would that work?
Update (2023-10-31): This issue should now be entirely resolved. NVIDIA has published a help article on how to disable the system memory fallback behavior. Please upgrade to the latest driver (546.01) and follo...
I recently upgraded from a 2060 to a 3090 (24 GB VRAM). Is there anything I'm supposed to do to make sure A1111 and ComfyUI run as fast as possible (maybe I need to reinstall, if they do some kind of optimization during install)?
I have 16GB RAM - will this be a bottleneck of some kind?
When using accelerate and sdxl_train.py from the command line, instead of using the Kohya GUI, how do you specify the "epoch" or "max train epoch" parameters? I don't see any command-line argument being provided that seems related to these, and I can't seem to find any reference to this by googling.
Hello lads! It's always me, an error generator. I was following this video https://www.youtube.com/watch?v=16-b1AjvyBE&t=1883s, and everything was fine until I got to the step calculation. Everything matches except the last factor in the multiplication. For him it's 1300 / 1 / 1 * 1 * 2, while for me, instead of 2, there's always 1. Any solutions?
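If it helps, that log line is kohya's step estimate, and my reading (an assumption on my part, not something confirmed in the video) is that the trailing factor is 2 only when regularization images are configured and 1 otherwise, so a missing or empty reg folder would explain the difference. A rough sketch of the arithmetic:

```python
# Rough sketch of the step formula behind that log line (names are mine, not kohya's):
# images * repeats, divided by batch size and gradient accumulation, times epochs,
# times 2 when regularization images are used.
def estimated_max_train_steps(image_steps, batch_size, grad_accum, epochs, uses_reg_images):
    reg_factor = 2 if uses_reg_images else 1
    return image_steps // batch_size // grad_accum * epochs * reg_factor

print(estimated_max_train_steps(1300, 1, 1, 1, uses_reg_images=True))   # 2600 (the video's case)
print(estimated_max_train_steps(1300, 1, 1, 1, uses_reg_images=False))  # 1300 (no reg folder)
```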
Not sure about max, but you can find this by setting it to a number that doesn't occur anywhere else in the command line, then printing the command; you'll see which argument got the number you set.
Sorry guys, I was trying to train a checkpoint using a custom model (GHArt/Realistic_Stock_Photo_V1.0_xl_fp16), but it gives me the following error. Can you help me, please? Thank you!
Hello... This might not be the right place to post this, but if I have 160 images of a character, what epochs and repeats are the best place to start for a LoRA? (using Kohya)
Hey there community. A quick random question for you. Besides perhaps training SDXL via DreamBooth, have you ever seen a need for a GPU with more than 24 GB of VRAM in your projects or adventures? I'm trying to cut cloud computing down to a minimum, so I'm figuring out what is and isn't logical to buy.
Boy... Training DB on a 3060 with 12 GB VRAM works with your "best" settings, but it takes 8 hours for 10 instance pics and 2000 ref pics. Holy sh... But I tried your "12GB VRAM" suggestions, which take only about 1 hour, and the results are... meh.