Serverless vLLM batching
Hey, so every hour I have around 10k prompts I want to send to my serverless endpoint. I'm using vLLM, and my question is whether the batching vLLM does out of the box still works on the serverless instance, since I send each prompt as its own request rather than all prompts in one request. I couldn't find anything about this in the docs or in this chat. Any help would be really appreciated, thanks.
And am I doing it the right way by sending each job as a single request? With other APIs I send all the prompts in one request, like normal batching.
So I'd prefer to send all the prompts in one batched request, because it's 10k prompts and otherwise I run into RunPod's rate limits.
This works fine with plain text prompts, but with prompts for a multimodal model (InternVL3-14B) it sums up the tokens from all prompts in the batch and fails because it exceeds the context length. That makes no sense to me, since they are separate conversations.
The token count per prompt is also really high, around 4k, and the image is not that big.
Do you have any idea why this might be happening?
In that thread, I also gave an example of how you can overcome the RunPod API limits if you have huge amounts of data.
Yeah, I read that, but I'd still prefer to send it all in one request. It works perfectly fine with plain text prompts, but with image prompts it sums up the tokens across the conversations for some reason.
This is a batch request with 6 conversations, each with images that are about 600x600.
If I use the normal /run endpoint the token count is much lower, about 400 for one image.
How are you even sending the images via the standard text completion endpoint? As far as I know it's supposed to be just an array of strings.
I convert them to base64
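Roughly like this, just as a sketch (the endpoint URL and key here are placeholders, not the exact values I use):

```python
import base64
import requests

# Placeholders - substitute your own endpoint ID and API key.
URL = "https://api.runpod.ai/v2/<endpoint_id>/openai/v1/chat/completions"
HEADERS = {"Authorization": "Bearer <runpod_api_key>"}

with open("image.jpg", "rb") as f:
    b64 = base64.b64encode(f.read()).decode()

payload = {
    "model": "OpenGVLab/InternVL3-14B",
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            # Image passed inline as a base64 data URL.
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ],
    }],
}

print(requests.post(URL, json=payload, headers=HEADERS).json())
```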
It's also weird that the token count is so much higher via the OpenAI route than via the /run one.
This is one single request:
{
"delayTime": 792,
"executionTime": 9456,
"id": "sync-0f22391b-2fcb-4589-99e8-01125d164425-e2",
"output": [
{
"choices": [
{
"finish_reason": "stop",
"index": 0,
"logprobs": null,
"message": {
"content": "",
"reasoning_content": null,
"role": "assistant",
"tool_calls": []
},
"stop_reason": null
}
],
"created": 1749619224,
"id": "chatcmpl-bddf2e19a3204e8fbae98389292ee924",
"kv_transfer_params": null,
"model": "OpenGVLab/InternVL3-14B",
"object": "chat.completion",
"prompt_logprobs": null,
"usage": {
"completion_tokens": 36,
"prompt_tokens": 3835,
"prompt_tokens_details": null,
"total_tokens": 3871
}
}
],
"status": "COMPLETED",
"workerId": "htcn0ja4fv4i18"
}
This is the chat completion format, which you use for single individual requests. Standard text completion can be used to send a batch in one request, but it doesn't support multimodal input. So processing 10k individual requests is what you want to do.
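A rough sketch of firing them as individual jobs via the asynchronous /run endpoint, with a concurrency cap so submission stays under the rate limits (endpoint ID, API key and the exact input schema are placeholders/assumptions, check your worker's handler):

```python
import asyncio
import aiohttp

# Placeholders - substitute your own endpoint ID and API key.
RUN_URL = "https://api.runpod.ai/v2/<endpoint_id>/run"
HEADERS = {"Authorization": "Bearer <runpod_api_key>"}

async def submit(session, sem, prompt):
    # The semaphore caps in-flight submissions so we don't hammer the rate limit.
    async with sem:
        payload = {"input": {"messages": [{"role": "user", "content": prompt}]}}
        async with session.post(RUN_URL, json=payload, headers=HEADERS) as resp:
            data = await resp.json()
            return data.get("id")  # job id, to poll /status/<id> with later

async def submit_all(prompts, max_in_flight=50):
    sem = asyncio.Semaphore(max_in_flight)
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(submit(session, sem, p) for p in prompts))

# job_ids = asyncio.run(submit_all(list_of_10k_prompts))
```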
Or you have to customize or write your own handler that uses offline inference. From the vLLM docs:
For multimodal batch inference, you must use offline inference where you can pass a list of multimodal prompts to llm.generate. The online OpenAI-compatible server only supports multimodal input via the Chat Completions API, and only one prompt per request is allowed for multimodal data.
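So the offline route inside a custom handler would look roughly like this (a sketch; the `<image>` placeholder and the image loading are simplified and depend on the model's chat template):

```python
from vllm import LLM, SamplingParams
from PIL import Image

# Offline engine: you can hand llm.generate a whole list of multimodal prompts
# and vLLM batches/schedules them itself, one image per conversation.
llm = LLM(model="OpenGVLab/InternVL3-14B", trust_remote_code=True, max_model_len=8192)
params = SamplingParams(max_tokens=256)

conversations = [("Describe this image.", "a.jpg"), ("What is shown here?", "b.jpg")]

batch = [
    {
        # The exact prompt format depends on the model's chat template;
        # "<image>" here is a simplification.
        "prompt": f"<image>\n{question}",
        "multi_modal_data": {"image": Image.open(path)},
    }
    for question, path in conversations
]

for out in llm.generate(batch, params):
    print(out.outputs[0].text)
```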
Ohh I see, that makes sense.
I just don't like the send-a-single-request-per-job logic.
For 10k requests it's already hitting the rate limit, and that's not even counting the polling that comes afterwards.
And I want to keep the queue filled all the time so the GPUs are used efficiently, but that's also not that easy to do.
I tried tracking the queue with the health endpoint and then sending new prompts when it's getting emptier.
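Basically something like this (a sketch; I'm assuming the /health response exposes the queue depth under jobs.inQueue, the field names might differ):

```python
import time
import requests

# Placeholders - substitute your own endpoint ID and API key.
HEALTH_URL = "https://api.runpod.ai/v2/<endpoint_id>/health"
HEADERS = {"Authorization": "Bearer <runpod_api_key>"}
TARGET_QUEUE_DEPTH = 200  # keep roughly this many jobs queued so the GPUs stay busy

def keep_queue_filled(pending_prompts, submit_fn):
    while pending_prompts:
        health = requests.get(HEALTH_URL, headers=HEADERS).json()
        in_queue = health.get("jobs", {}).get("inQueue", 0)
        # Top up only the difference, so the queue hovers around the target depth.
        for _ in range(min(max(0, TARGET_QUEUE_DEPTH - in_queue), len(pending_prompts))):
            submit_fn(pending_prompts.pop())
        time.sleep(5)
```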
You're ignoring my recommendation to bypass the RunPod job API. You can make the engine requests internally from the handler and fetch the data from somewhere other than the RunPod job input. In fact, even normally the serverless handler in the worker just constructs the requests and passes them to the engine, like a proxy.
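What I mean is roughly this: the job input only carries a pointer to the data, and the handler fetches and processes the whole batch itself. A sketch (prompts_url and run_batch are made-up names standing in for your own data source and the actual engine calls):

```python
import requests
import runpod

def run_batch(prompts):
    # Stand-in: in a real worker this would call the vLLM engine,
    # e.g. offline llm.generate as sketched above.
    return [{"prompt": p, "output": ""} for p in prompts]

def handler(job):
    # The job input only carries a URL (made-up field name), not the 10k prompts,
    # so one RunPod job covers the whole batch and the job API limits don't matter.
    prompts = requests.get(job["input"]["prompts_url"]).json()
    results = run_batch(prompts)
    # Upload results somewhere (S3, etc.) instead of returning a huge payload.
    return {"num_processed": len(results)}

runpod.serverless.start({"handler": handler})
```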
I appreciate your help, but I want to avoid using my own Docker image because the RunPod vLLM worker has faster cold-start times, since it's cached on all instances:
„Worker vLLM is now cached on all RunPod machines, resulting in near-instant deployment! Previously, downloading and extracting the image took 3-5 minutes on average.“
And I believe it is more likely to still have FlashBoot available even after some time.
Haven’t thought about that
On /run
I run burst tasks, so I need a lot of prompts processed every few hours. In my current setup it takes about 60 seconds when warm, but when it's cold it takes an extra 60 seconds or more to start, and that makes it not that attractive.
It should only give you faster initialization (the image download to the worker). Cold starts are not affected, or are actually slower than with my image. FlashBoot availability is not affected either.
Unfortunately, there's no secret magic that makes the official vLLM template faster than a custom image, even though one could expect it.
Hmm, but what about downloading the Docker image? Can it be saved on network storage like the LLM weights, so I can at least avoid the download time?
Does it support the batching I need for multimodal input?
For the fastest loading speeds, you want to bake the model (the LLM weights) into the image itself. Cold-start time is what to worry about, not the worker initialization stage - that isn't billed, and if you choose enough max workers on the endpoint, it most likely won't bother you. Container images are hosted on a container registry like Docker Hub, or you can build them right from the GitHub repo.
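Baking it in is usually just a small script run from the Dockerfile at build time, e.g. something like this (the target directory is an assumption; it has to match wherever your worker expects the weights):

```python
# download_model.py - run at image build time, e.g. `RUN python download_model.py` in the Dockerfile.
from huggingface_hub import snapshot_download

# Pulls the ~50 GB of weights into an image layer so workers skip the download at runtime.
snapshot_download(
    repo_id="OpenGVLab/InternVL3-14B",
    local_dir="/models/OpenGVLab/InternVL3-14B",  # assumption: must match the path the worker loads from
)
```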
What time difference does baking the LLM into the image actually make? I don't like baking it in because building and pushing takes forever.
I'll keep working for RunPod for free after they stop ignoring the issues. Sounds fair? 😀
Because there's no reason to work on reducing the cold starts to the minimum if the queue delay on the platform is 10s.
River just told me to use a different GPU and then ghosted me 😕 Both on Discord and via email. The ticket is still open.
Meanwhile users even DM me that they have the same problem and that they're leaving the platform because of it.
The model files are almost 50 GB; how do I deal with such a big Docker image layer?
Building on RunPod fails: when uploading the layer with the model it restarts over and over again, and at some point it says the build failed.
How should I deal with it? Build it on my machine and push it to Docker Hub? Or Google Cloud Build? Any suggestions?
I want to bake in the model since it's faster.