Serverless - LoRA from network storage
Stuck in queue
Costs
Hey, we have serverless endpoints but no workers for more than 12 hours now!
[Solved] EU-CZ Datacenter not visible in UI
Do RunPod serverless GPUs support NVIDIA MIG?
My serverless worker is downloading models to `/runpod-volume/.cache/huggingface` by itself
…`/runpod-volume` exists at all, but I also have an `HF_HOME` env var that points somewhere else, and it seems Hugging Face is targeting `/runpod-volume` without explanation. Did I miss something? Is that related to the new caching feature I was told about a few weeks ago?
GitHub Serverless building takes too much
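On the cache question above: Hugging Face libraries resolve their cache directory from several environment variables, so a base image that exports `HF_HOME` (or `HF_HUB_CACHE`) can silently redirect downloads to `/runpod-volume`. A minimal sketch of that precedence, assuming the order documented for `huggingface_hub` (`HF_HUB_CACHE` wins over `HF_HOME/hub`, which wins over the `~/.cache/huggingface/hub` default); the concrete paths are illustrative:

```python
import os

# Sketch of how the Hugging Face cache root is resolved from env vars
# (the real resolution lives in huggingface_hub; order assumed here:
#  HF_HUB_CACHE > HF_HOME/hub > ~/.cache/huggingface/hub).
def resolve_hf_cache(env: dict) -> str:
    if env.get("HF_HUB_CACHE"):
        return env["HF_HUB_CACHE"]
    if env.get("HF_HOME"):
        return os.path.join(env["HF_HOME"], "hub")
    return os.path.join(os.path.expanduser("~"), ".cache", "huggingface", "hub")

# If the worker image exports HF_HOME=/runpod-volume/.cache/huggingface,
# that wins over the default even if you never set it yourself:
print(resolve_hf_cache({"HF_HOME": "/runpod-volume/.cache/huggingface"}))
# -> /runpod-volume/.cache/huggingface/hub

# Setting HF_HUB_CACHE explicitly overrides it (hypothetical path):
print(resolve_hf_cache({"HF_HOME": "/runpod-volume/.cache/huggingface",
                        "HF_HUB_CACHE": "/models/hf-cache"}))
# -> /models/hf-cache
```

So if downloads land under `/runpod-volume` despite an `HF_HOME` pointing elsewhere, it is worth checking whether the image or platform re-exports one of these variables later in startup.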
WebSocket Connection to Serverless Failing
wss://<pod_id>-<port>.proxy.runpod.net/ws and expect this to be translated to wss://localhost:<port>/ws, and the websocket server is run in a thread just before the HTTP server is run. The latter works fine, as I am able to communicate with it via the regular https://api.runpod.ai/v2/<pod_id> URL. The expected port is exposed in the Docker config, as per https://docs.runpod.io/pods/configuration/expose-ports. Any ideas what the issue is?
Pulling from the wrong cache when multiple Dockerfiles in same GitHub repo
Serverless confusion
How to pass parameters to deepseek r1

Job stuck in queue and workers are sitting idle

Endpoint/webhook to automatically update docker image tags?
What is expected continuous delivery (CD) setup for serverless endpoints for private models?
InvokeAI to Runpod serverless
Comfyui From pod to serverless
Is serverless Network Volume MASSIVE lag fixed? Is it now usable as a model store?
Serverless with network storage
Workers keep respawning and requests queue indefinitely