Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
@ashleyk quick question about class instances per image: is that based on the number of images I'm using, or the number of steps I'm taking? What would be a good number of class instances per image if I'm using 50 images?
yeah so yesterday's dataset was 15 images and it used my prior preservation images. Today I used a dataset of 50 with 50 class instances per image, and it wanted to generate 1750 images; as soon as I dropped that to 10, it said my number of class instances per image is sufficient
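For reference, the numbers above are consistent with the class-image target being dataset size times class instances per image. A minimal sketch of that arithmetic (the helper name and the assumption of roughly 750 class images already on disk, inferred from 2500 − 1750, are mine, not the extension's API):

```python
def required_class_images(instance_images: int, class_per_instance: int,
                          already_generated: int = 0) -> int:
    """Class images still to generate, assuming (illustration only) the
    target is instance_images * class_per_instance minus what exists."""
    target = instance_images * class_per_instance
    return max(target - already_generated, 0)

# 50 instance images at 50 class images each is a 2500-image target;
# with ~750 already generated, 1750 remain, matching the report above.
print(required_class_images(50, 50, already_generated=750))  # 1750
# At 10 per instance the target is only 500, so 750 existing suffices.
print(required_class_images(50, 10, already_generated=750))  # 0
```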
Which D-Adapt optimizer specifically? In my testing Adan D-Adapt used 32GB of VRAM for regular Dreambooth training vs less than 24GB for any other optimizer.
1) The extension's implementation of the D-Adaptation optimizers for full Dreambooth training is not correct. 2) There is a bug in the D-Adaptation optimizers on Linux.
The Dreambooth extension has too many bugs, has too much code committed to the dev branch that isn't even tested at all, and is just generally bad quality, with the worst SDLC of any project in the entire world, so I have totally given up on it.
well I mean it KINDA works. I only used 15 images and it somewhat resembled my wife, but not 100%, so I upped the sample images to 51 and am retesting
what's weird is even though I set the LoRA U-Net learning rate to 1 according to the article, it trains at 1.10e-6, but his article also says that Adan D-Adaptation ignores the LR, so I'm assuming that's why
The D-Adaptation optimizers adapt the learning rate constantly, hence their name. If you install TensorBoard, you will be able to view the graph of the LR and you will see that it's not constant.
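To make the "LR is adapted constantly" point concrete, here is a toy, heavily simplified 1-D sketch loosely following the published D-Adapt SGD recipe. It is purely an illustration, not the dadaptation package's code (which adds momentum, parameter groups, and other safeguards): the optimizer grows an internal lower-bound estimate d of the distance to the solution and derives the step size from it, which is why the user-supplied lr can just be left at 1.0.

```python
def dadapt_sgd_1d(grad, x0, lr=1.0, steps=200, d0=1e-6):
    """Toy 1-D sketch of the D-Adaptation idea (NOT the library code):
    keep a distance estimate d, derive the step from it, and treat the
    user-supplied lr as a plain multiplier that can stay at 1.0."""
    x = x0
    g0 = abs(grad(x0)) or 1.0      # scale: magnitude of the first gradient
    d, s, sum_sq = d0, 0.0, 0.0    # distance estimate and running sums
    d_history = []
    for _ in range(steps):
        g = grad(x)
        lam = lr * d / g0          # effective step size changes every step
        s += lam * g               # weighted gradient sum
        sum_sq += (lam * g) ** 2
        x -= lam * g
        if abs(s) > 0:
            d_hat = (s * s - sum_sq) / (2 * abs(s))
            d = max(d, d_hat)      # the estimate only ever grows
        d_history.append(d)
    return x, d, d_history

# Minimize f(x) = 0.5 * (x - 3)^2 with lr=1.0: d climbs from 1e-6
# toward the true distance |x0 - x*| = 3 while x converges to 3.
x, d, hist = dadapt_sgd_1d(lambda x: x - 3.0, x0=0.0)
```

Plotting `d_history` here is the 1-D analogue of watching the LR curve in TensorBoard: the effective step `lr * d / g0` is anything but constant, even though lr itself was set to 1.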