Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
Hey all, please help. I've been trying to use AUTOMATIC1111's Dreambooth extension and I've been getting new errors I'd never seen before, like this: Missing model directory, removing model: /workspace/stable-diffusion-webui/models/dreambooth/test1/working/unet
Looks like you did something wrong when configuring your model. Try again by creating a new model, and also click the "Unfreeze" button when you create your new model in Dreambooth.
@ashleyk I used the Adan method posted in the article: I did 1600 steps and followed what he did, but my subject had no likeness to my dataset, and I'm not sure why
@ashleyk the dataset looks great; I know it does because I did the regular constant-with-warmup schedule with 8bit Adam and it worked great, but then I tried Adan D-Adaptation with the D-Adapt scheduler and got nothing
Why don't you just stick to 8bit Adam if that is working for you? Why do you specifically want to use the D-Adaptation optimizers? They are slower. I can't help you with LoRA training unfortunately, I haven't trained any LoRA yet. Maybe that Russian guy was just lucky with the specific dataset he was training on using Facebook's D-Adaptation optimizers, because I have not been successful in getting them to work with regular Dreambooth training either. I guess we need to wait for the good Dr. to test them out and make a video for us.
@ashleyk quick question about class instances per image: is that based on the number of images I'm using, or the number of steps I'm taking? What would be a good number of class instances per image if I'm using 50 images?
Yeah, so yesterday's dataset was 15 images and it used my prior-preservation images. Today I used a dataset of 50 images with 50 class instances per image and it wanted to generate 1750 images; as soon as I dropped that to 10, it said my number of class instances per image was sufficient
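The behavior described above is consistent with the class-image count being derived from the dataset size, not the step count. A minimal sketch of that arithmetic follows; the function names are hypothetical illustrations, not the extension's actual API, and the 750 existing-image figure is an assumption chosen to match the numbers in the message.

```python
# Hypothetical helpers illustrating how a Dreambooth trainer could decide
# whether more regularization (class) images must be generated.
# These names are assumptions, not the extension's real API.

def required_class_images(num_instance_images: int,
                          class_images_per_instance: int) -> int:
    """Total class images expected for prior preservation:
    one pool of class images per instance image."""
    return num_instance_images * class_images_per_instance

def images_to_generate(num_instance_images: int,
                       class_images_per_instance: int,
                       existing_class_images: int) -> int:
    """How many class images still need to be generated (0 if enough exist)."""
    needed = required_class_images(num_instance_images, class_images_per_instance)
    return max(0, needed - existing_class_images)

# 50 instance images at 50 class images each -> 2500 needed. Assuming roughly
# 750 class images already on disk, 1750 would need generating; dropping the
# setting to 10 needs only 500, so the existing set is already sufficient.
print(images_to_generate(50, 50, 750))  # 1750
print(images_to_generate(50, 10, 750))  # 0
```

Under this reading, the setting scales with the number of instance images, which is why shrinking either the dataset or the per-image multiplier makes the "sufficient" message appear.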
Which D-Adapt optimizer specifically? In my testing Adan D-Adapt used 32GB of VRAM for regular Dreambooth training vs less than 24GB for any other optimizer.
1) The extension's implementation of the D-Adaptation optimizers for full Dreambooth training is not correct. 2) There is a bug in the D-Adaptation optimizers on Linux.
The Dreambooth extension has too many bugs, has too much completely untested code committed to the dev branch, and is just generally bad quality with the worst SDLC of any project in the entire world, so I have totally given up on it.
Well, I mean it KINDA works. I only used 15 images and it somewhat resembled my wife, but not 100%, so I upped the sample images to 51 and I'm retesting
What's weird is, even though I set the LoRA UNet learning rate to 1 according to the article, it trains at 1.10e-6. But his article also says that Adan D-Adaptation ignores the LR, so I'm assuming that's why
The D-Adaptation optimizers adapt the learning rate constantly, hence their name. If you install TensorBoard, you will be able to view the graph of the LR and you will see that it's not constant.