I set up 4x L40S, but when starting the training I got torch.OutOfMemoryError for each of them

I set up 4x L40S, but when starting the training I got this for each of them:

torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 22.17 GiB. GPU 0 has a total capacity of 44.40 GiB of which 21.58 GiB is free. Process 271586 has 22.81 GiB memory in use. Of the allocated memory 22.18 GiB is allocated by PyTorch, and 5.70 MiB is reserved by PyTorch but unallocated.
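The numbers in the traceback already show why the allocation fails: the 22.17 GiB request is larger than the 21.58 GiB still free, because an earlier ~22 GiB allocation by the same process is already resident. A minimal sketch of that arithmetic, using the figures quoted verbatim from the error above:

```python
# Figures taken verbatim from the traceback above (GiB).
total = 44.40     # total capacity of GPU 0
free = 21.58      # reported free memory
in_use = 22.81    # held by process 271586
request = 22.17   # size of the failed allocation

# Sanity check: free + in-use should roughly equal capacity.
assert abs((free + in_use) - total) < 0.05

# The GPU is already about half full, so a second ~22 GiB
# allocation cannot fit.
fits = request <= free
print(f"request fits in free memory: {fits}")  # → False
```

In other words, each GPU is being asked to hold roughly two 22 GiB allocations at once, which a 44.40 GiB L40S cannot do; the fix is to reduce per-GPU memory demand rather than to free a few megabytes.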

I used the 48GB_GPUz_4x_GPU_Quality_Tier_1_42000MB_10.6_Second_IT_Trains_T5_and_T5_Attention file, then tried the Quality Tier 3 one, and I still get the same error. Any clue?