Endpoint stuck in init

Bug in cancellation

Where is the "input" field on the webhooks?
Issue loading a heavy-ish (HuggingFaceM4/idefics2-8b) model on serverless (slow network?)
Network bandwidth changes?
GGUF in serverless vLLM
hanging after 500 concurrent requests
Is anyone experiencing massive delay times when sending jobs to GPUs on serverless?
Urgent! All our workers are down! Any network issues?
Send Binary Image with RunPod Serverless
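For the binary-image question above: serverless job inputs travel as JSON, so raw bytes are typically base64-encoded into the payload before sending. A minimal sketch, assuming a hypothetical `image_base64` field name (not a documented RunPod contract):

```python
import base64
import json

def build_payload(image_bytes: bytes) -> str:
    # JSON can't carry raw bytes; base64-encode them first.
    # "image_base64" is an illustrative field name, not a RunPod API field.
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return json.dumps({"input": {"image_base64": encoded}})

def handler_decode(payload: str) -> bytes:
    # Inside the worker, reverse the encoding to recover the original bytes.
    job = json.loads(payload)
    return base64.b64decode(job["input"]["image_base64"])
```

The worker-side handler then decodes the field back to bytes before processing.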
New release will re-pull the entire image.
Requests stuck in IN_QUEUE status
Our requests are stuck in IN_QUEUE status. Any suggestions for what we should look at to start debugging this?
We've previously been successful deploying LLaVA-v1.5-13b, but again, grateful for suggestions...
"Failed to return job results" and 400 bad request with known good code
With --rp_serve_api or --test_input, it works perfectly fine. I can also use the same functions in Jupyter or a bare Python script and it works as expected. But when I deploy the same code to serverless, I get (...) {"requestId": "(...)", "message": "Failed to return job results. | 400, message='Bad Request', url=URL('(...)')", "level": "ERROR"} with...

How to schedule active workers?
CUDA env error
Failed to return job results
Clone endpoint failing in UI
Is there any limit on how many environment variables can be added per container?
how to host 20gb models + fastapi code on serverless
Need help putting a 23 GB .pt file in a serverless environment
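Both 20+ GB model questions above usually come down to where the weights live: baking them into the container image makes every release re-pull tens of gigabytes, while persistent storage lets workers download once and reuse. A sketch of a download-once helper, assuming a network volume mounted at `/runpod-volume` (the mount path and URL below are placeholders, not verified RunPod specifics):

```python
import os
import shutil
import urllib.request

MODEL_PATH = "/runpod-volume/model.pt"       # assumed network-volume mount
MODEL_URL = "https://example.com/model.pt"   # placeholder checkpoint URL

def ensure_model(path: str = MODEL_PATH, url: str = MODEL_URL) -> str:
    # Fetch the large checkpoint once to persistent storage so cold
    # starts don't re-download 20+ GB on every worker boot.
    if not os.path.exists(path):
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with urllib.request.urlopen(url) as resp, open(path, "wb") as out:
            shutil.copyfileobj(resp, out)
    return path
```

The handler would call `ensure_model()` at import time, so only the first worker on a fresh volume pays the download cost.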