Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
But I uploaded 1700 reg images for 13 instance images, with the slider at 100, yet it generated another 400-something more class images. Is it because I set the number of samples to generate to "2"?
Does anyone know if there's a fast way to install a trained model into Jupyter? It's currently estimated to take hours for me to load it in. Just wondering if there's a faster way?
Sign up for RunPod: https://bit.ly/RunPodIO. Our Discord: https://discord.gg/HbqgGaZVmr. This is the Grand Master tutorial for running Stable Diffusion via Web UI on RunPod cloud services. If I have been of assistance to you and you would like to show your support for my work, please consider becoming a patron at https://www.patreon.com/SECourses...
Thank you. I'm checking now, and after you download CTL you install a model via a URL. If I'm trying to upload a model from my desktop, do I simply upload it straight into models without using the terminal? And will it automatically upload through CTL?
@Furkan Gözükara SECourses, I have a question. I was training a model in DreamBooth using v1-5-pruned.ckpt as the source checkpoint. After 150 epochs I stopped it and wanted to test it out, so I clicked on "generate ckpt file", and it gave me a safetensors file. When I use that model in txt2img it is only able to create the thing I trained on; it is not combined with v1-5-pruned.ckpt anymore. Is that right?
Yes, I did that, but none of my checkpoints are able to generate anything other than what they were trained on. And I did use v1-5-pruned as the base model in the training.
I trained a LoRA model with v1-5-pruned as the base model.
As class images I used "photo of man", and I had 40 images of myself as training images, named "quirijn". My sample prompt was "photo of quirijn man". After 150 epochs the sample images looked really good. I stopped the training and went to txt2img. As the model I selected the most recent safetensors model from my training. When I generate an image with the prompt "quirijn man" it looks really good, but the prompt "an apple on a table" is not working; it only remembers quirijn man.
How can I fix this? Should I merge the model with v1-5-pruned?
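One common way to dial back this kind of overfitting is to blend the fine-tuned checkpoint back toward the base model (e.g. with the web UI's Checkpoint Merger tab). The idea is just a weighted average of matching weights. Below is a minimal sketch of that averaging, assuming plain Python floats stand in for the real torch tensors; the function name and key names are illustrative, not from any library.

```python
# Hedged sketch of a weighted checkpoint merge. In practice the values
# would be torch tensors loaded from .safetensors files; plain floats
# stand in here so the idea stays self-contained.
def merge_state_dicts(finetuned, base, alpha=0.5):
    """Blend fine-tuned weights back toward the base model.

    alpha = 1.0 keeps only the fine-tuned weights;
    alpha = 0.0 restores the base model entirely.
    """
    merged = {}
    for key, ft_weight in finetuned.items():
        if key in base:
            merged[key] = alpha * ft_weight + (1 - alpha) * base[key]
        else:
            # Keys unique to the fine-tuned model are kept as-is.
            merged[key] = ft_weight
    return merged

# Toy example: one shared weight, blended 50/50.
ft = {"unet.conv.weight": 2.0}
bs = {"unet.conv.weight": 0.0}
print(merge_state_dicts(ft, bs, alpha=0.5))  # {'unet.conv.weight': 1.0}
```

A lower alpha (say 0.5-0.7) usually restores more of the base model's general knowledge while keeping most of the learned subject.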
However I get: RuntimeError: Error(s) in loading state_dict for LatentDiffusion: size mismatch for model.diffusion_model.input_blocks.0.0.weight: copying a param with shape torch.Size([320, 9, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 4, 3, 3]).
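That size mismatch usually means the checkpoint and the expected architecture disagree on the UNet's first-conv input channels: 4 channels is a standard SD 1.x txt2img model, while 9 channels indicates an inpainting checkpoint (4 latent + 4 masked-image latent + 1 mask channel), so the two cannot be loaded into each other. A small sketch that classifies a checkpoint by that shape; the function name is illustrative, and in practice you would read the shape from the file with torch or safetensors and pass it in.

```python
# Hedged sketch: classify an SD 1.x checkpoint by the shape of
# model.diffusion_model.input_blocks.0.0.weight.
def classify_by_input_conv(shape):
    in_ch = shape[1]  # input channels of the first UNet conv
    if in_ch == 4:
        return "standard txt2img checkpoint"
    if in_ch == 9:
        # 4 latent + 4 masked-image latent + 1 mask channel
        return "inpainting checkpoint"
    return "unknown ({} input channels)".format(in_ch)

# The error above: the checkpoint has 9 channels, the model expects 4.
print(classify_by_input_conv((320, 9, 3, 3)))  # inpainting checkpoint
print(classify_by_input_conv((320, 4, 3, 3)))  # standard txt2img checkpoint
```

If you hit this while training or merging, the fix is to pair like with like: use a standard base for a standard checkpoint, or an inpainting base for an inpainting checkpoint.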