Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
Why don't I research LoRA training? Because if you need a LoRA, do a full DreamBooth / fine-tuning and extract a LoRA from it. You will get better quality. This is for…
In this tutorial, I am going to show you how to install OneTrainer from scratch on your computer and train Stable Diffusion SDXL (full fine-tuning, 10.3 GB VRAM) and SD 1.5 (full fine-tuning, 7 GB VRAM) based models on your computer, and also how to do the same training on a very cheap cloud machine from MassedCompute if you don't have that kind of compute...
With the right parameters and number of steps, a full checkpoint is better, but depending on your dataset, number of images, etc., it may need more steps or other adjustments.
I'm not an expert, but I'm speaking from my own experience. I've had very good results with LoRA too, I'm not gonna lie, but in my opinion Dr. Furkan Gözükara is right on this one.
It seems curious to me: I spent a lot of time and money training full DreamBooth and got good results, but the one time I did LoRA training, it gave me very good results too.
LoRA is faster and easier, and it works fine for a single purpose, but if you want to train a bigger dataset or many people at the same time, full training is better. I've had mixed results with both methods, but I kept trying different settings, trainers, and trainer versions; some gave good results and some were disappointing.
Hey, does anyone know how to tell when a file has finished uploading to JupyterLab? I've deployed a pod with ComfyUI and uploaded a LoRA into the lora folder, but in the web UI the LoRA step shows an error. Is there any way to know when the LoRA has fully uploaded to JupyterLab?
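One way to check is from a JupyterLab terminal on the pod: an upload is complete when the file size stops growing and matches the size (or better, the checksum) of your local copy. A minimal sketch, assuming a typical ComfyUI layout under `/workspace` and a hypothetical file name:

```shell
# Run in a JupyterLab terminal on the pod. The path and file name are
# assumptions for a typical ComfyUI install -- adjust to your own layout.
FILE=/workspace/ComfyUI/models/loras/my_lora.safetensors

# The upload is complete when the size stops growing and matches
# the size of the file on your local machine.
ls -l "$FILE"

# Stronger check: run sha256sum here and on your local copy;
# the two hashes must be identical.
sha256sum "$FILE"
```

If the size is smaller than your local file or the hashes differ, the upload was cut off and the truncated safetensors will fail to load in ComfyUI, which would explain the error you're seeing.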
I was training with your new parameters: 5 people of different genders and ages, no regularization images. At 30 repetitions the resemblance of the people started to get good with a simple, photorealistic prompt. But when I tried other prompts in other styles, some cyberpunk and some of your prompts, the outputs started to look baked, worse and worse as training went on. I was using the base model. I've now stopped the training, disabled the SDPA optimizer, and started again; let's see, maybe that's it... any idea? I used the base model for better compatibility with LoRAs, but if the problem continues I will try RealVis XL 4. I'm attaching some images: they look OK at the beginning of the training, and baked at 30 repetitions.
I've deployed runpod/stable-diffusion:comfy-ui-5.0.0, but it won't let me open the checkpoints folder. I'd like to load in my DreamBooth safetensors. Any idea how to do this?
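If the file browser won't open the folder, you can usually move the model from the pod's web/JupyterLab terminal instead. A minimal sketch; the paths and the file name are assumptions for a typical ComfyUI layout, so verify them on your own pod first:

```shell
# In the pod's terminal. Paths are assumptions for a typical ComfyUI
# install -- locate yours with: find / -type d -name checkpoints 2>/dev/null
CKPT_DIR=/workspace/ComfyUI/models/checkpoints

mkdir -p "$CKPT_DIR"                                   # create it if missing
mv /workspace/my_dreambooth.safetensors "$CKPT_DIR/"   # hypothetical file name
ls -lh "$CKPT_DIR"                                     # confirm it's in place
```

After moving the file, refresh the ComfyUI page so the checkpoint loader node picks up the new model.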
It was updated yesterday; the new update is just getting transferred to all the servers. Still transferring this morning.
Basically, when you rent and start the notebook, in the terminal you will see a URL containing the IP address of your VM. Copy the whole thing, including the token part of the URL, and then you can paste it into the browser on your own computer.
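If that URL has already scrolled out of view, Jupyter can reprint it for you; which command works depends on the version installed on the VM, so this is a sketch under that assumption:

```shell
# Reprint the running server URL(s), including the ?token=... part.
jupyter server list    # newer Jupyter installs
jupyter notebook list  # older installs use this command instead

# The URL looks like http://<vm-ip>:8888/?token=<long-hex-string>.
# If you only need the token, it can be stripped out of the URL:
URL="http://203.0.113.5:8888/?token=abc123"  # illustrative example, not real
TOKEN="${URL#*token=}"
echo "$TOKEN"
```

Paste the full URL (with the token) into your local browser; the token is what authenticates you to the remote Jupyter server.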
@Dr. Furkan Gözükara hey again, I haven't downloaded a safetensors file from Hugging Face to runpod before and I don't think I'm doing it correctly. I've uploaded the safetensors to Hugging Face and typed wget with the Hugging Face link. However, when this lands in the lora folder, it downloads the whole Hugging Face page, when I only need the safetensors file. Do you know how I can install just the safetensors?
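That usually happens when the copied link is the repo's "blob" page, which is HTML, rather than the raw file. Hugging Face serves the actual file behind a `resolve` URL. A sketch with placeholder repo and file names (swap in your own):

```shell
# Repo id and file name are placeholders -- use your own.
REPO="your-username/your-repo"
FILE="my_lora.safetensors"

# A "blob" link (what the website shows in the address bar) serves HTML:
#   https://huggingface.co/$REPO/blob/main/$FILE
# Swap "blob" for "resolve" to download the raw file itself;
# -O names the output file instead of the server's default.
wget "https://huggingface.co/$REPO/resolve/main/$FILE" -O "$FILE"
```

Run it from inside the lora folder (or add the folder path after `-O`), and you'll get just the single safetensors file instead of the whole page.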