Docker Image EXTREMELY Slow to load on endpoint but blazing locally
Constantly getting "Failed to return job results."
Why are my serverless endpoint requests waiting in the queue when there are free workers?
GitHub integration
Is vLLM Automatic Prefix Caching enabled by default?
vllm worker OpenAI stream timeout
vLLM model loading, TTFT unhappy path
Can't pull image from Docker Hub
Serverless socket.io support
Running Llama 3.3 70B using vLLM and a 160 GB network volume
I don't know why my serverless balance goes down
Structure of "job" JSON
The job JSON contains job["id"] and job["input"], and we utilize them. It would help me a great deal if I could send additional information, like job["source"] or other metadata, to the handler function. It seems like no matter how I structure the JSON, only id and input end up in the job JSON passed to the handler. Is this indeed the case?

Automatic1111 UI with serverless stable diffusion
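Regarding the "Structure of job JSON" question above: a common workaround, assuming only id and input survive into the job dict the handler receives, is to nest any extra metadata inside input yourself. This is a minimal sketch; the "_meta" key is a hypothetical convention of this example, not part of any SDK.

```python
# Sketch of a serverless-style handler, assuming (as the question suggests)
# that only "id" and "input" from the request reach the handler.
# Extra metadata is carried inside "input" under a caller-chosen key;
# "_meta" here is a hypothetical convention, not an SDK feature.

def handler(job):
    payload = job["input"]
    meta = payload.get("_meta", {})        # caller-supplied metadata, if any
    source = meta.get("source", "unknown")
    return {"job_id": job["id"], "source": source}

# Local smoke test with a fake job dict shaped like the expected payload
fake_job = {
    "id": "abc-123",
    "input": {"prompt": "hello", "_meta": {"source": "webapp"}},
}
print(handler(fake_job))  # {'job_id': 'abc-123', 'source': 'webapp'}
```

Since the handler only ever sees id and input, anything the caller controls has to travel inside input; the handler then unpacks it itself.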
Serverless GitHub endpoint stuck at uploading phase
Best practices for SaaS
Serverless Workers redis client?
Serverless request returns None from python client but web status says completed successfully
Template id missing in serverless dashboard
Disk size when building a GitHub repository as an image on Serverless