I tried to get ChatGPT to explain dreambooth_training.py to me. So far this worked well. Then I asked what values I should use for the following parameters, given my 15 training images:

lr_warmup_steps, num_class_images, sample_batch_size, max_train_steps, save_interval

Since it had analyzed all the code, I thought it would know what the calculation is based on. But apparently it doesn't 🙂

According to the calculation in the video, the value should be 15 * 12 = 180, but ChatGPT says 15...
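For reference, here is the video's rule of thumb as a tiny sketch. The per-image step count of 12 is my reading of the 15 * 12 = 180 calculation; it is not taken from the script itself:

```python
# Video's rule of thumb (assumption: ~12 training steps per instance image,
# inferred from the 15 * 12 = 180 calculation; not from dreambooth_training.py).
num_instance_images = 15
steps_per_image = 12

max_train_steps = num_instance_images * steps_per_image
print(max_train_steps)  # 180
```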

Furthermore, it says:

The number of class images --num_class_images should be set to 12, because this is the number of training images you have for your class.

The --sample_batch_size depends on how much GPU memory you have available. A batch size of 4 is relatively small and should be fine for most GPUs, but it depends on your specific configuration.

The --max_train_steps should be at least large enough to go through all the training frames once. Since you have 12 training frames and use a batch size of 1, you should set at least 12 steps, e.g. --max_train_steps=12.

The --save_interval depends on how often you want to save the model weights. A good rule is to save every few epochs. Since you only have 12 training frames, you might not want to train too many epochs to avoid overfitting. For example, if you want to train 3 epochs, you can use --max_train_steps=36 and --save_interval=12 to save the model after each epoch.
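For comparison, here is what ChatGPT's own epoch-based rule would give when applied to the actual 15 images. This is a minimal sketch under ChatGPT's stated assumptions (batch size 1, 3 epochs); the variable names are mine, and only the --flags come from the script:

```python
# Sketch of ChatGPT's suggested calculation, applied to 15 images instead of 12.
# Assumptions: train batch size of 1 and 3 epochs, as in ChatGPT's example.
num_train_images = 15
train_batch_size = 1
num_epochs = 3

steps_per_epoch = num_train_images // train_batch_size  # 15
max_train_steps = num_epochs * steps_per_epoch          # 45
save_interval = steps_per_epoch                         # save once per epoch

print(f"--max_train_steps={max_train_steps} --save_interval={save_interval}")
# --max_train_steps=45 --save_interval=15
```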