Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
Hi guys, I encountered an error when trying to train with the adam8bit optimizer. Does anyone have any suggestions on how to solve it? I have Kohya 21.8.9, Python 3.10.9
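One common cause of adam8bit errors in Kohya is that the bitsandbytes package (which provides the 8-bit optimizers) fails to import. A minimal sketch of checking for it before picking the optimizer string; the helper name `pick_optimizer_name` is made up for illustration and is not part of Kohya:

```python
import importlib.util

def pick_optimizer_name():
    # AdamW8bit requires the bitsandbytes package; fall back to plain
    # AdamW if it is missing, instead of crashing at training start.
    if importlib.util.find_spec("bitsandbytes") is not None:
        return "AdamW8bit"
    return "AdamW"

print(pick_optimizer_name())
```

If this prints `AdamW`, reinstalling bitsandbytes in the same environment Kohya runs in is the first thing to try.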
Hey @mikemenders , thanks very much for sharing! I'm going to try to test these different options out this week. On that note, it's actually really interesting. Adetailer can be really good if your prompts are a bit simpler, and with certain shape faces and skin tones. If you look at some of my pictures in the "strictly-sfw-image-share channel", with me, it worked very well in most cases. But the more complicated prompts get, or if the person trained has certain features, it starts faltering. Still not terrible, it's definitely better having it than not, but you do start seeing some cracks. I'd be curious to see if the other solutions solve for that.
If you don't have a strong GPU for Stable Diffusion XL training, then this is the tutorial you are looking for. We will use a free Kaggle notebook to do Kohya SDXL LoRA training. In this tutorial you will master Kohya SDXL with Kaggle! Curious about training Kohya SDXL? Learn why Kaggle outshines Google Colab! We will uncover the power of free Ka...
If you don't have a GPU, or don't have a strong one, or you are using a Mac and your computer does not support Stable Diffusion or SDXL training, then this is the tutorial you are looking for. In this tutorial, we will use the cheap cloud GPU service provider RunPod to run both the Stable Diffusion Web UI Automatic1111 and the Stable Diffusion trainer Kohya ...
No, I typed it wrong. I mean, how can I adjust the details of the enhanced image to make it sharper without losing detail? I'm referring to my face shape used for training.
I liked the hires fix better. For the face-trained LoRAs, it definitely worked better with the hires fix than with ADetailer. I tried the other ADetailer models, but most of the time they made the original result worse. True, I just turned it on with default settings, not trying to tune for a better result, but I would expect it to work fine by default. I'll run a proper test later to check.
@Dr. Furkan Gözükara hi, I'm trying to do DreamBooth training using RunPod. When I put in the path of the source directory, like "/workspace/768x1024", where my images to be trained are supposed to be, and then click on Preprocess, I get an error --- Error completing request
files = listfiles(src)
  File "/workspace/stable-diffusion-webui/modules/textual_inversion/preprocess.py", line 30, in listfiles
    return os.listdir(dirname)
FileNotFoundError: [Errno 2] No such file or directory: '/workspace/768x1024'
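That traceback means `os.listdir` was called on a path that does not exist on the pod. A small illustrative guard (the `list_training_images` helper is hypothetical, not part of the webui) that verifies the directory before listing it:

```python
import os

def list_training_images(src):
    # Guard against the FileNotFoundError in the traceback above:
    # confirm the directory exists before calling os.listdir on it.
    if not os.path.isdir(src):
        raise SystemExit(f"Source directory not found: {src!r} - "
                         "check the path and that the volume is mounted.")
    return sorted(os.listdir(src))
```

Running this with "/workspace/768x1024" on the pod would tell you immediately whether the folder was actually created where you think it was.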
Hi @Dr. Furkan Gözükara, I followed your RunPod tutorial for training a portrait photo model on a dataset of a woman, and the output starts generating nude half-body pictures. Is there any way to avoid that?
Hi, could someone please point me to the right resources for incorporating two safetensors into SD 1.5 or SDXL? Is it possible to use one that's completely different from the other (one that tells the model what my dog looks like, and the other that is a pixel art theme)?
Very useful & interesting, thank you very much. How is your flexibility test going? CivitAI also uses batch size 2 to train their LoRAs, so I guess batch 2 could be a sweet spot.
Should I add negative prompts before starting the training? Also, I'm getting doubled people and half bodies instead of portrait shots. I'm training at 768x1024; would 1024x1024 be better?
Hi guys, how do you save a prompt as a text file with ComfyUI, please? I found this node by pythongosssss, but I'm not a coder, so I don't know how to change "$output/*/.txt" to save the txt file into /workspace/ComfyUI/output/ComfyUI_00471.txt (next to ComfyUI_00471.png, so I can download both of them; I'm using RunPod). https://github.com/pythongosssss/ComfyUI-Custom-Scripts
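As a fallback, the prompt can also be written next to the image with a few lines of plain Python; `save_prompt` below is a hypothetical helper, not part of the ComfyUI-Custom-Scripts node, and just derives the .txt name from the .png name:

```python
import os

def prompt_txt_path(image_path):
    # Turn ".../ComfyUI_00471.png" into ".../ComfyUI_00471.txt" so the
    # prompt file sits next to the image in the output folder.
    base, _ext = os.path.splitext(image_path)
    return base + ".txt"

def save_prompt(image_path, prompt):
    # Write the prompt text alongside the image and return the txt path.
    txt = prompt_txt_path(image_path)
    with open(txt, "w", encoding="utf-8") as f:
        f.write(prompt)
    return txt
```

For example, `save_prompt("/workspace/ComfyUI/output/ComfyUI_00471.png", prompt)` would create the matching .txt in the same output folder so both files can be downloaded together.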
Initializing bucket counter!
Steps: 1%| | 10/1100 [01:38<1:59:03, 6.55s/it, inst_loss=0.0501, loss=0.0501, lr=0.0001, prior_loss=0, vram=6.8]
Guys, is this normal? I just started using the brand new Stable Diffusion v1.6. If you have advice on how I can increase this speed, please share. Sorry for my bad English.
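The remaining time in that progress bar follows directly from the numbers it shows: steps left multiplied by seconds per iteration. A quick sanity-check sketch (the `eta_seconds` helper is just for illustration):

```python
def eta_seconds(done_steps, total_steps, sec_per_it):
    # Remaining wall time implied by the tqdm line above:
    # (total - done) steps at the current seconds-per-iteration rate.
    return (total_steps - done_steps) * sec_per_it

remaining = eta_seconds(10, 1100, 6.55)  # ~7139.5 s, roughly 1h59m
```

That matches the 1:59:03 estimate in the bar, so the display itself is consistent; whether ~6.5 s/it is normal depends on your GPU, batch size, and resolution.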