Running training
Num batches each epoch = 0
Num Epochs = 200
Batch Size Per Device = 1
Gradient Accumulation steps = 1
Total train batch size (w. parallel, distributed & accumulation) = 1
Text Encoder Epochs: 150
Total optimization steps = 0
Total training steps = 0
Resuming from checkpoint: False
First resume epoch: 0
First resume step: 0
Lora: False, Optimizer: Torch AdamW, Prec: fp16
Gradient Checkpointing: False
EMA: True
UNET: True
Freeze CLIP Normalization Layers: False
LR: 1e-06
V2: False
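The zero totals above follow directly from the batch count: with 0 batches per epoch, any number of epochs still yields 0 optimization steps. A sketch of the presumed arithmetic (variable names are illustrative, not the extension's actual code):

```python
# Values taken from the log above.
num_batches_per_epoch = 0
num_epochs = 200
grad_accum_steps = 1

# Optimization steps = batches per epoch (per accumulation window) x epochs.
total_steps = (num_batches_per_epoch // grad_accum_steps) * num_epochs
# Matches "Total optimization steps = 0" / "Total training steps = 0".
```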
Steps: : 0it [00:00, ?it/s]
Traceback (most recent call last):
File "/workspace/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/ui_functions.py", line 727, in start_training
result = main(class_gen_method=class_gen_method)
File "/workspace/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/train_dreambooth.py", line 1448, in main
return inner_loop()
File "/workspace/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/memory.py", line 119, in decorator
return function(batch_size, grad_size, prof, *args, **kwargs)
File "/workspace/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/train_dreambooth.py", line 1200, in inner_loop
latents = batch["images"].to(accelerator.device)
TypeError: 'NoneType' object is not subscriptable
Steps: : 0it [00:00, ?it/s]
Restored system models.
Duration: 00:00:08
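The traceback is a direct consequence of the empty dataset: with "Num batches each epoch = 0", the training loop receives `None` instead of a batch dict, so `batch["images"]` raises `TypeError: 'NoneType' object is not subscriptable`. A minimal sketch of the failure mode and a guard (hypothetical code, not the extension's actual `inner_loop`; the name `train_epoch` is an assumption):

```python
def train_epoch(dataloader):
    """Iterate training batches, failing early on an empty dataset
    instead of letting batch["images"] raise
    TypeError: 'NoneType' object is not subscriptable."""
    if len(dataloader) == 0:
        # Mirrors "Num batches each epoch = 0" in the log:
        # raise a clear error instead of crashing mid-loop.
        raise ValueError(
            "Dataset produced 0 batches; check the instance images "
            "directory and image/caption pairing before training."
        )
    steps = 0
    for batch in dataloader:
        if batch is None:
            # Some collate functions return None for fully filtered batches.
            continue
        latents = batch["images"]  # safe here: batch is a dict
        steps += 1
    return steps
```

With an empty loader the guard raises immediately, matching the `0it` progress bar and the 0 total steps reported above; the actual remedy is fixing the dataset path or image set so batches are produced at all.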