Hello everyone. I am Dr. Furkan Gözükara, a PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
Which D-Adapt optimizer specifically? In my testing, Adan D-Adapt used 32GB of VRAM for regular Dreambooth training vs. less than 24GB for any other optimizer.
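(For anyone reproducing that comparison: here is a minimal sketch of measuring peak VRAM around a training step with stock PyTorch. The training step itself is a placeholder; only the memory-stat calls matter.)

    import torch

    torch.cuda.reset_peak_memory_stats()
    # ... run one training step of your Dreambooth setup here (placeholder) ...
    peak_gb = torch.cuda.max_memory_allocated() / 1024**3
    print(f"Peak VRAM allocated: {peak_gb:.2f} GB")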
1) The extension's implementation of the D-Adaptation optimizers for full Dreambooth training is not correct. 2) There is a bug in the D-Adaptation optimizers on Linux.
The Dreambooth extension has too many bugs, has too much completely untested code committed to its dev branch, and is just generally poor quality. It has the worst SDLC of any project in the entire world, so I have totally given up on it.
Well, I mean it KINDA works. I only used 15 images, and the result somewhat resembled my wife, but not 100%, so I upped the sample images to 51 and am retesting.
What's weird is that even though I set the LoRA UNet learning rate to 1 per the article, it trains at 1.10e-6. But his article also says that Adan D-Adaptation ignores the LR, so I'm assuming that's why.
The D-Adaptation optimizers adapt the learning rate continuously, hence their name. If you install TensorBoard, you can view the LR graph and see that it's not constant.
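Here's a minimal, self-contained sketch of that behavior using the dadaptation package and TensorBoard. The model and loss are dummies; lr=1.0 is the recommended setting, and the optimizer scales the effective step size by its internal d estimate. Where d is stored can vary between library versions, so the param-group lookup below is an assumption; check your installed version.

    import torch
    import dadaptation
    from torch.utils.tensorboard import SummaryWriter

    model = torch.nn.Linear(10, 1)  # dummy model standing in for the UNet/LoRA
    opt = dadaptation.DAdaptAdam(model.parameters(), lr=1.0)  # lr=1.0 as recommended
    writer = SummaryWriter("runs/dadapt_lr")  # hypothetical log dir

    for step in range(1000):
        loss = model(torch.randn(8, 10)).pow(2).mean()  # dummy loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Effective step size is d * lr; 'd' living in the param group is an
        # assumption about the library's internals and may differ by version.
        d = opt.param_groups[0].get("d", float("nan"))
        writer.add_scalar("lr/effective", d * opt.param_groups[0]["lr"], step)
    writer.close()

Run "tensorboard --logdir runs" and the lr/effective curve should move over training instead of sitting flat, which is exactly why a configured LR of 1 doesn't mean the model actually steps at 1.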
DL-Art-School\experiments\voicecloning_archived_230526-145552\models\2155_gpt.pth ----- Is this the right path? Training is still starting from 0.gpt. I installed xformers as well.
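Not sure of your exact setup, but in DL-Art-School the checkpoint to load is normally set in the path: section of the training YAML, not picked up automatically from the models folder. A sketch, assuming the key names from the stock tortoise fine-tuning config (they may differ in yours):

    path:
      strict_load: true
      # Load the weights you trained previously (your path from above):
      pretrain_model_gpt: 'DL-Art-School/experiments/voicecloning_archived_230526-145552/models/2155_gpt.pth'
      # To also resume optimizer/iteration state, point resume_state at the
      # matching file under training_state/ instead (hypothetical example):
      # resume_state: 'DL-Art-School/experiments/voicecloning_archived_230526-145552/training_state/2155.state'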
Only 0.0.16 caused the Dreambooth issue, for some 3060s and every 40-series card, I think. It was fixed in the next release. I'm up to 0.0.20 today and it works fine.
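If anyone wants to confirm which build they're actually running before or after upgrading, xformers ships an info module; something like:

    python -m xformers.info      # prints the installed xformers version and available components
    pip install xformers==0.0.20 # pin the known-good release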