Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
This is my TensorBoard so far... the black run was trained for only 1 epoch, the teal one for 2 epochs, and the pink one is in the middle of its 9th epoch right now. Up to the 2000-step mark all three followed the exact same path on the loss/average chart. The first two used the default batch size, gradient accumulation, and learning rate; the pink one uses an LR of 0.001 (10x the default). I only used 4 repeats in my dataset because I put in 138 images, which comes to 552 — close to the 14 images x 40 repeats you were using in your tutorial (560). I used 552 regularization images as well. Am I just using too many images? Maybe I should just pick 14 high-quality face-only images like you did.
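For anyone following along, the repeat-count arithmetic above can be sketched quickly — this is just the convention kohya-style trainers use, where each source image is shown `repeats` times per epoch:

```python
# Sketch of how repeat counts scale the effective dataset size per epoch.
# Numbers come from the comment: 138 images x 4 repeats vs 14 images x 40 repeats.
def images_per_epoch(num_images: int, repeats: int) -> int:
    """Each source image is seen `repeats` times per epoch."""
    return num_images * repeats

print(images_per_epoch(138, 4))   # 552
print(images_per_epoch(14, 40))   # 560
```

The two setups land within ~1% of each other in images-seen-per-epoch, which is why matching 552 regularization images to the 552 training samples is the natural pairing.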
Is there an existing issue for this? I have searched the existing issues and checked the recent builds/commits of both this extension and the webui. What happened? Runtime exception pop-up on model ...
I use the State extension to save my settings so I don't have to enter all parameters and prompts again and again. It can't save XYZ plot settings, but it's really useful if I have to refresh the page or restart SD: https://github.com/ilian6806/stable-diffusion-webui-state
Thank you very much. Works perfectly now. I have tested 4 models: Facenet512, VGG-Face, SFace, and ArcFace. Only SFace gave an error (attached image); the other three performed well.
I noticed that all 3 models could detect faces in 25 out of my 32 images. However, there are 7 images in which none of the 3 models could detect a face. Should I exclude these 7 images from my training dataset as well? I think there might be something about these 7 images that makes face detection challenging for all 3 models.
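The filtering step described above — keeping only images where a detector finds a face — can be sketched as a simple partition. The `detects_face` callback here is a hypothetical stand-in for a real detector call (e.g. wrapping a DeepFace detection in try/except and returning True/False); the partition logic itself is the point:

```python
# Sketch: split a training set into images where a face was detected vs. not,
# so the undetected ones can be reviewed or excluded before training.
# `detects_face` is a hypothetical callback standing in for a real detector.
def split_by_detection(paths, detects_face):
    detected, undetected = [], []
    for path in paths:
        (detected if detects_face(path) else undetected).append(path)
    return detected, undetected

# Toy usage with a dummy detector that "fails" on one image:
images = [f"img_{i}.png" for i in range(5)]
ok, failed = split_by_detection(images, lambda p: p != "img_3.png")
print(failed)  # ['img_3.png']
```

In practice, images a detector rejects are often blurry, heavily occluded, or at extreme angles — reviewing the `failed` list by eye before deleting anything is worthwhile.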
Ok, I am getting 99% there on the RunPod stuff, but when I go to reload, it won't close Gradio nor reconnect to the 3001 port :3 I am taking a break for the moment, and if I can try again later I will :3 I am not giving up, I'm just exhausted XD (I was up late trying to learn how to train an SDXL LoRA; I didn't quite manage it until today, and then had success today) -- Can't wait to do it because I have a LoRA I need to upload to Tensor for something xD
Ahh, xD free AI therapy from people in the stable diffusion discords
But all that aside, I did need to hear that - I have an idea how to get around it later; I have a storage pod on pause for kohya-ss, and I can always use Furkan's manual install instructions
This is my first time training with tcmalloc, and it didn't work as it's supposed to. Previously I tried training without installing tcmalloc on my Linux VM and it worked, so I figured why not turn it off.
You can edit these lines in webui-user.sh:

# Uncomment to disable TCMalloc
#export NO_TCMALLOC="True"
I am currently toying around with Kohya for SDXL LoRA training (based on the tutorial) on systems with up to 8x 3090 GPUs. Using just one of them works as expected. When I configure accelerate to use multi-GPU, memory utilization goes up significantly, resulting in an OOM.
Is this expected? Besides using GPUs with more VRAM, are any workarounds known that would allow utilizing more GPUs in parallel?
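For reference, a multi-GPU run is typically launched through accelerate. This is only a sketch, assuming kohya's train_network.py entry point and a machine already configured via `accelerate config` — flag values will differ per setup:

```shell
# Sketch: launching kohya's trainer across 2 GPUs with accelerate.
# Each process holds a full model copy plus gradient/communication buffers,
# so per-GPU memory is higher than in a single-GPU run; a smaller per-device
# batch size and gradient checkpointing are common mitigations.
accelerate launch --multi_gpu --num_processes 2 train_network.py \
  --train_batch_size 1 \
  --gradient_checkpointing
```

Note that the effective batch size is the per-device batch size multiplied by the number of processes, so learning-rate settings tuned for a single GPU may also need adjusting.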
I think when I copy/pasted the optimizer args I made a mistake, and it got saved into the .json, so the error repeated every time I loaded that .json. Now with your help I understand new things (venv + pasting the command...). It works now. Thank you very much.
Hello, I am a beginner watching your video "Transform Your Selfie into a Stunning AI Avatar with Stable Diffusion - Better than Lensa for Free". I tried to run Stable Diffusion on Google Colab, but I encountered a problem when running the code. Can I ask what I did wrong in the steps?
When I did SDXL training on an Nvidia L4, I had to disable both latent cache options and lower the Network Rank from 128 to avoid CUDA out-of-memory errors. Hoping this is helpful for other people.
Trying to create an infinite zoom video using AUTOMATIC1111 and the Infinite Zoom extension, but it doesn't seamlessly transition between the last frame and the first frame, and I'm not sure why.