A100 GPU VRAM being used
I have a pod running, but one of my assigned GPUs has its VRAM taken up, and I can't clear it even after restarting the pod or calling torch.cuda.empty_cache().
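Worth noting for anyone hitting this: `torch.cuda.empty_cache()` only releases memory cached by the *current* PyTorch process, so it can't free VRAM held by another (possibly orphaned) process on the machine. You can see which processes own the memory with `nvidia-smi --query-compute-apps=pid,used_memory --format=csv`. A minimal sketch that parses that CSV output (the sample string below is illustrative, not real output from this pod):

```python
# Parse `nvidia-smi --query-compute-apps=pid,used_memory --format=csv`
# to list which processes are holding VRAM.

def parse_compute_apps(csv_text):
    """Return a list of (pid, used_mib) tuples from nvidia-smi CSV output."""
    procs = []
    for line in csv_text.strip().splitlines()[1:]:  # skip the CSV header row
        pid_str, mem_str = [field.strip() for field in line.split(",")]
        procs.append((int(pid_str), int(mem_str.split()[0])))  # "39321 MiB" -> 39321
    return procs

# Illustrative sample only (not output from the pod in this thread)
sample = """pid, used_gpu_memory [MiB]
1234, 39321 MiB"""

for pid, mib in parse_compute_apps(sample):
    print(f"PID {pid} is holding {mib} MiB of VRAM")
```

If a listed PID belongs to a process inside your pod, killing that process frees the memory; if no PID shows up at all, the VRAM is held outside your container and only the host side (i.e. support) can clear it.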

7 Replies
@Hello
Escalated To Zendesk
The thread has been escalated to Zendesk!
Unknown User•10mo ago (message not public)
I have the same issue
Same issue. I don't want to change pods; I have tons of data on here.
Unknown User•7mo ago (message not public)
This issue keeps happening and it's super annoying. I'm STARTING my pod and my VRAM is already in use. Could you please investigate these leaks?
Unknown User•4w ago (message not public)