Hello everyone. I am Dr. Furkan Gözükara, a PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
By the way, what is your understanding of max steps? If batch size 1 is good at 8000 steps, then at batch size 2 the same data gives you only 4000 steps in total. Do you already have a formula for how many extra steps to give the higher batch size to compensate for the loss in quality? Generalisation does seem to improve up to a certain point as the batch size increases, though.
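I don't have an official formula, but a common heuristic is to keep the total number of samples seen constant (steps × batch size) and optionally scale the learning rate to compensate for the coarser updates. A minimal sketch of that arithmetic, with the 8000-step / batch-1 numbers from above and square-root LR scaling as an assumed rule of thumb:
```python
# Heuristic sketch (an assumption, not a confirmed formula): keep the total
# number of training samples seen constant when the batch size changes,
# and optionally scale the learning rate with the square root of the batch.

def adjusted_steps(base_steps: int, base_batch: int, new_batch: int) -> int:
    """Steps so that new_batch * steps == base_batch * base_steps."""
    return round(base_steps * base_batch / new_batch)

def scaled_lr(base_lr: float, base_batch: int, new_batch: int) -> float:
    """Square-root learning-rate scaling, a common rule of thumb."""
    return base_lr * (new_batch / base_batch) ** 0.5

# 8000 steps at batch 1 -> 4000 steps at batch 2 sees the same amount of data,
# so anything you add beyond 4000 is the "extra" compensation being asked about.
print(adjusted_steps(8000, 1, 2))   # 4000
print(scaled_lr(1e-4, 1, 2))        # ~1.41e-4
```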
Hey gang, does anyone have experience downloading safetensors files from RunPod to your local machine? A single safetensors file of 5-7 GB can take half an hour for me, depending on the connection.
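One workaround (not the only route) is to push the file to a Hugging Face repo from the pod and pull it back down locally, which is often faster than a direct transfer from RunPod. A minimal sketch, assuming you are logged in with a write token and that "your-username/my-loras" and the file paths are hypothetical placeholders:
```python
# Sketch: move a .safetensors file off a RunPod pod via the Hugging Face Hub.
# Assumptions: huggingface_hub is installed, you have run `huggingface-cli login`,
# and "your-username/my-loras" is a (private) repo you created beforehand.
from huggingface_hub import HfApi, hf_hub_download

# On the pod: upload the trained file.
api = HfApi()
api.upload_file(
    path_or_fileobj="/workspace/model.safetensors",   # hypothetical path on the pod
    path_in_repo="model.safetensors",
    repo_id="your-username/my-loras",
    repo_type="model",
)

# On your local machine: download it back.
local_path = hf_hub_download(
    repo_id="your-username/my-loras",
    filename="model.safetensors",
)
print(local_path)
```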
Guys, my RTX 3060 12GB is very limited. I can't really train SDXL models on it, and SD 1.5 DreamBooth training takes all night. Should I try some cloud services? How much VRAM does Kaggle give? Is it enough for SDXL model training? What do you guys use?
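If you want to see for yourself what a Kaggle (or any cloud) session actually gives you, a couple of lines of PyTorch in the notebook will report the GPU name and VRAM; a minimal sketch:
```python
# Quick check of which GPU(s) and how much VRAM a notebook session exposes.
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.1f} GB VRAM")
else:
    print("No CUDA GPU visible in this session.")
```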
I'm training an SDXL LoRA on my 3060 with 12GB VRAM at 768x768 resolution, and it's showing 2 hours and 30 minutes of training time. I've used 15 images with 20 repeats. Is this configuration fine?
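For context, the step count behind a time estimate like that follows directly from the numbers. A minimal sketch of the arithmetic; only the 15 images and 20 repeats come from the question, while the batch size and epoch count are hypothetical placeholders:
```python
# Rough step-count arithmetic for a kohya-style LoRA run.
images, repeats = 15, 20
batch_size, epochs = 1, 10           # hypothetical values, not from the question

steps_per_epoch = images * repeats // batch_size
total_steps = steps_per_epoch * epochs
print(steps_per_epoch, total_steps)  # 300 steps/epoch, 3000 total
```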
I would use 7 repeats with classification (regularization) images, 10 epochs, no captions, and 128/64 network rank/alpha. For Adafactor, use the classic parameters: LR and U-Net LR 0.0001, TE LR 5e-05, token length 225, model: RealisticVision 5.1, optimizer args: scale_parameter=False relative_step=False warmup_init=False weight_decay=0.01
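For anyone who wants to see how those settings would look as actual arguments, here is a sketch of how they map onto kohya sd-scripts flags. All paths and folder names are hypothetical placeholders, and the flag names should be double-checked against your installed kohya version before running:
```python
# Sketch: the settings above expressed as a kohya sd-scripts (train_network.py) launch.
# Paths, folder names, and the model filename are placeholders, not real files.
import subprocess

cmd = [
    "accelerate", "launch", "train_network.py",
    "--pretrained_model_name_or_path", "realisticVision_v51.safetensors",  # placeholder
    "--train_data_dir", "./train",   # 7 repeats are set via the folder name, e.g. 7_subject
    "--reg_data_dir", "./reg",       # classification (regularization) images
    "--output_dir", "./output",
    "--network_module", "networks.lora",
    "--network_dim", "128",
    "--network_alpha", "64",
    "--optimizer_type", "Adafactor",
    "--optimizer_args", "scale_parameter=False", "relative_step=False",
    "warmup_init=False", "weight_decay=0.01",
    "--learning_rate", "1e-4",
    "--unet_lr", "1e-4",
    "--text_encoder_lr", "5e-5",
    "--max_token_length", "225",
    "--max_train_epochs", "10",
]
subprocess.run(cmd, check=True)
```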