Hello everyone. I am Dr. Furkan Gözükara, a PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
@Dr. Furkan Gözükara have you tested xformers and gradient checkpointing as well as memory-efficient attention? I am running without gradient checkpointing and xformers but still have memory-efficient attention on. Not sure if it decreases quality.
Also, with an A6000 on RunPod, running a 14-image DreamBooth LoRA at batch size 4, I run into OOM without gradient checkpointing and xformers. I am using class images. Any thoughts on whether there are memory optimizations better than these two? What do you recommend @Dr. Furkan Gözükara?
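For anyone else hitting OOM: here is a minimal sketch of how those memory savers are usually toggled in a diffusers-based DreamBooth/LoRA script. The model ID and where you place these calls are assumptions, not the exact trainer from the video:

```python
# Hypothetical diffusers-based setup; adapt to your own training script.
import torch
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet", torch_dtype=torch.float16
)

# Gradient checkpointing trades compute for memory:
# activations are recomputed during the backward pass instead of being stored.
unet.enable_gradient_checkpointing()

# Memory-efficient attention via xformers (requires the xformers package).
unet.enable_xformers_memory_efficient_attention()

# If VRAM is still tight, dropping to batch size 1 with 4 gradient accumulation
# steps keeps the effective batch at 4 while lowering peak memory.
```

Gradient checkpointing and xformers should not change the training math, only speed and memory, so quality loss from enabling them is generally not expected.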
Our Discord: https://discord.gg/HbqgGaZVmr. How to do Stable Diffusion DreamBooth training on Google Colab for free. If I have been of assistance to you and you would like to show your support for my work, please consider becoming a patron on https://www.patreon.com/SECourses
Playlist of Stable Diffusion Tutorials, Automatic1111 and Goo...
By the way, what is your understanding of max steps? If batch size 1 is good at 8000 steps, then going to batch size 2 gives you the same total image count in only 4000 steps. Do you already have a formula for how many extra steps to give the higher batch size to compensate for the loss in quality? Generalisation, though, seems to improve up to a certain batch size.
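Not a formula from the video, but one rough heuristic I have seen is to keep the total number of image presentations (steps × batch size) constant and then add a small margin, since larger batches average gradients over more images per step. The margin value here is an assumption:

```python
# Rough heuristic, not an official formula: keep steps * batch_size constant,
# then add a margin (here 10%) for the larger batch.
def scaled_steps(base_steps: int, base_batch: int, new_batch: int,
                 margin: float = 1.1) -> int:
    """E.g. 8000 steps at batch 1 -> ~4400 steps at batch 2 with a 10% margin."""
    images_seen = base_steps * base_batch
    return int(round(images_seen / new_batch * margin))

print(scaled_steps(8000, 1, 2))  # 4400
```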
Hey gang, does anyone have experience downloading safetensors files from RunPod to your local machine? A single safetensors file, which can be 5-7 GB, can take half an hour for me depending on the connection.
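One workaround that has worked for me is to push the file to a private Hugging Face repo from inside the pod and then pull it down locally. This assumes you are logged in with a Hugging Face token (e.g. via huggingface-cli login); the repo name and file paths below are placeholders:

```python
# Run inside the RunPod pod. Repo name and paths are placeholders.
from huggingface_hub import HfApi, create_repo

repo_id = "your-username/dreambooth-checkpoints"  # hypothetical private repo
create_repo(repo_id, private=True, exist_ok=True)

api = HfApi()
api.upload_file(
    path_or_fileobj="/workspace/model.safetensors",  # placeholder path on the pod
    path_in_repo="model.safetensors",
    repo_id=repo_id,
)
```

On the local machine, hf_hub_download(repo_id, "model.safetensors") then fetches it over Hugging Face's CDN, which in my experience is faster and resumable compared to pulling directly from the pod.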
Guys, my RTX 3060 12 GB is very limited. I can't really train SDXL models on it, and SD 1.5 DreamBooth training takes overnight. Should I try some cloud services? How much VRAM does Kaggle give? Is it enough for SDXL model training? What do you guys use?