getting occasional OOM errors in serverless
I'm running a small service using RunPod serverless + ComfyUI, and once in a while I get this error:
"error": "Traceback (most recent call last):\n File \"/handler.py\", line 708,
'in handler\n raise RuntimeError(f'{node_type}: {exception_message}')\
nRuntimeError: WanVideoSampler: Allocation on device \nThis error means you ran
out of memory on your GPU.\n\nTIPS: If the workflow worked before you might have
accidentally set the batch_size to a large number.\n",
The weird thing is that I always set my GPU to the 32 GB PRO tier (RTX 5090), so I'd expect this error to happen either on every request or never. Do you have any ideas what the reason could be? Thanks!
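In case it helps with diagnosing, this is roughly the logging I'm planning to add at the top of my handler to record which GPU each worker actually got and how much VRAM is free when a job starts. The function name and where I call it are just from my own setup, so treat it as a sketch:

```python
# Sketch of per-request GPU logging for handler.py.
# Assumes a standard PyTorch environment inside the worker (the one ComfyUI already uses).
import torch

def log_gpu_state(tag: str) -> None:
    """Print the device name and free/total VRAM so I can compare OK runs vs. OOM runs."""
    if not torch.cuda.is_available():
        print(f"[{tag}] CUDA not available")
        return
    device = torch.cuda.current_device()
    name = torch.cuda.get_device_name(device)
    free_bytes, total_bytes = torch.cuda.mem_get_info(device)
    print(
        f"[{tag}] gpu={name} "
        f"free={free_bytes / 1024**3:.2f} GiB total={total_bytes / 1024**3:.2f} GiB"
    )

# I'd call it at the start of the handler, e.g.:
# def handler(job):
#     log_gpu_state("job_start")
#     ...
```

My (unconfirmed) thinking is that if the free VRAM at job start is noticeably lower on the failing requests, it would point at leftover allocations on a warm worker rather than a smaller GPU being assigned.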