First Ever SDXL Training With Kohya LoRA...
CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 10.00 GiB total capacity; 9.15 GiB already
allocated; 0 bytes free; 9.26 GiB reserved in total by PyTorch).
I'm a bit of a noob, but why doesn't torch use the already-allocated memory to train my LoRA? Any suggestions are more than welcome. I'm on a Windows machine with an RTX 3080.
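(For anyone else reading this thread: in PyTorch, "allocated" memory is held by live tensors — model weights, activations, gradients, optimizer state — so it is already in use and cannot be handed out again; "reserved" is what PyTorch's caching allocator has claimed from the driver, including cached blocks it keeps for reuse. A small sanity-check sketch using the numbers from the error above, just plain arithmetic, no GPU needed:)

```python
GIB = 1024 ** 3  # bytes per GiB

# Figures copied from the OOM message above
total_capacity = 10.00 * GIB   # GPU 0 total capacity
allocated = 9.15 * GIB         # bytes held by live tensors (in use, not reusable)
reserved = 9.26 * GIB          # bytes PyTorch's caching allocator owns
requested = 2 * 1024 ** 2      # the failed 2.00 MiB allocation

# "0 bytes free" means the driver has nothing left to give,
# so a new request must fit in the cached-but-unused slack:
slack_mib = (reserved - allocated) / 1024 ** 2
print(f"cached but unused: ~{slack_mib:.0f} MiB")

# There is slack, but it is fragmented into blocks too small for
# the request, hence the OOM despite ~113 MiB of nominal headroom.
print(f"request fits numerically: {requested <= reserved - allocated}")
```

The usual fixes in this situation are reducing batch size, enabling gradient checkpointing, and using mixed precision / an 8-bit optimizer if your trainer supports them, since those shrink the "allocated" part rather than fighting fragmentation.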
