Runpod5mo ago
Foopop

Serverless VLLM batching

Hey, so every hour I have around 10k prompts I want to send to my serverless instance. I'm using vLLM, and my question is: does the batching that vLLM does out of the box still work on the serverless instance, given that I send each prompt as its own request rather than all of them in one request? I couldn't find anything about this in the docs or in this chat. Any help would be really appreciated, thanks.
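For context, this is roughly the submission pattern being described: each prompt goes out as its own asynchronous /run job. A minimal sketch; the endpoint ID, API key handling, and the exact input schema of the vLLM worker are assumptions here, not confirmed values.

# Sketch: submitting each prompt as its own /run job. The endpoint ID and the
# input payload shape for the vLLM worker are placeholders/assumptions.
import os
import requests

ENDPOINT_ID = "your-endpoint-id"          # hypothetical
API_KEY = os.environ["RUNPOD_API_KEY"]
RUN_URL = f"https://api.runpod.ai/v2/{ENDPOINT_ID}/run"

def submit_prompt(prompt: str) -> str:
    """Queue one prompt as a single serverless job and return its job ID."""
    resp = requests.post(
        RUN_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"input": {"prompt": prompt, "sampling_params": {"max_tokens": 256}}},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]

job_ids = [submit_prompt(p) for p in prompts]  # `prompts` is the hourly batch of ~10k strings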
24 Replies
Foopop
FoopopOP5mo ago
And am I doing it the right way by sending each job as a single request? With other APIs I send all the prompts in one request, like normal batching.
Foopop
FoopopOP5mo ago
So I'd prefer to send all prompts as one batch in a single request, because it's 10k prompts and I would otherwise be limited by RunPod's rate limits. This works fine with plain text prompts, but with prompts for a multimodal model (InternVL3 14B) it sums up the tokens from all prompts in the batch and fails because it exceeds the context length. It makes no sense that it sums them up, since they are separate conversations. The token count per prompt is also really high, around 4k, and the image is not that big. Do you have any idea why this might be happening?
3WaD
3WaD5mo ago
In that thread, I also gave an example of how you can overcome the RunPod API limits if you have huge amounts of data.
Foopop
FoopopOP5mo ago
Yeah, I read that, but I'd still prefer to send it all in one request. It works perfectly fine with just text prompts, but with image prompts it sums up the tokens across the conversations for some reason.
This is a batch request with 6 conversations, one image each; the images are around 600x600:
{
  "delayTime": 1078,
  "executionTime": 436,
  "id": "sync-109571ec-5527-4d2c-aa0a-b902c7d88df2-e1",
  "output": [
    {
      "code": 400,
      "message": "This model's maximum context length is 8192 tokens. However, you requested 81637 tokens (81381 in the messages, 256 in the completion). Please reduce the length of the messages or completion.",
      "object": "error",
      "param": null,
      "type": "BadRequestError"
    }
  ],
  "status": "COMPLETED",
  "workerId": "htcn0ja4fv4i18"
}
If I use the normal /run endpoint, the token count is much lower; it's about 400 for one image.
3WaD
3WaD5mo ago
How are you even sending the images via the standard text completion endpoint? As far as I know it's supposed to be just an array of strings.
Foopop
FoopopOP5mo ago
I convert them to base64:
import base64
from typing import Any, Dict, List


def create_message_with_image(image_data: Dict[str, Any]) -> List[Dict[str, Any]]:
    """Create OpenAI message format with image for chat completions"""
    # Validate image data
    if not image_data.get('content'):
        raise Exception("No image content provided")

    # Check image size
    image_size = len(image_data['content'])
    if image_size > 20 * 1024 * 1024:  # 20MB limit
        raise Exception(f"Image too large: {image_size} bytes")

    image_base64 = base64.b64encode(image_data['content']).decode('utf-8')

    # Determine media type from the file's magic bytes (defaults to JPEG)
    media_type = "image/jpeg"
    if image_data['content'].startswith(b'\x89PNG'):
        media_type = "image/png"
    elif image_data['content'].startswith(b'GIF'):
        media_type = "image/gif"

    # Create OpenAI message format: system prompt plus one user turn that
    # carries the text prompt and the image as an inline base64 data URL.
    # get_system_prompt() and get_user_prompt() are defined elsewhere.
    messages = [
        {
            "role": "system",
            "content": get_system_prompt()
        },
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": get_user_prompt()
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:{media_type};base64,{image_base64}"
                    }
                }
            ]
        }
    ]

    return messages
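For reference, a usage sketch of how a message built this way might be sent to the worker's OpenAI-compatible Chat Completions route, one conversation per request. The endpoint ID, base URL path, and the load_image_data() helper are assumptions/placeholders, not confirmed values.

# Sketch: one conversation (with its image) per request against the
# OpenAI-compatible route of the serverless vLLM worker. The endpoint ID,
# base URL path, and `load_image_data()` are hypothetical placeholders.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["RUNPOD_API_KEY"],
    base_url="https://api.runpod.ai/v2/your-endpoint-id/openai/v1",  # placeholder endpoint ID
)

image_data = load_image_data("phone.jpg")  # hypothetical helper returning {"content": <bytes>}
response = client.chat.completions.create(
    model="OpenGVLab/InternVL3-14B",
    messages=create_message_with_image(image_data),
    max_tokens=256,
)
print(response.choices[0].message.content)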
It's also weird that the token count is so much higher via the OpenAI route than via the /run thing. This is one single request:
{
  "delayTime": 792,
  "executionTime": 9456,
  "id": "sync-0f22391b-2fcb-4589-99e8-01125d164425-e2",
  "output": [
    {
      "choices": [
        {
          "finish_reason": "stop",
          "index": 0,
          "logprobs": null,
          "message": {
            "content": "```json\n{\n  \"cracked\": true,\n  \"battery_health\": null,\n  \"color\": null,\n  \"condition_score\": 30\n}\n```",
            "reasoning_content": null,
            "role": "assistant",
            "tool_calls": []
          },
          "stop_reason": null
        }
      ],
      "created": 1749619224,
      "id": "chatcmpl-bddf2e19a3204e8fbae98389292ee924",
      "kv_transfer_params": null,
      "model": "OpenGVLab/InternVL3-14B",
      "object": "chat.completion",
      "prompt_logprobs": null,
      "usage": {
        "completion_tokens": 36,
        "prompt_tokens": 3835,
        "prompt_tokens_details": null,
        "total_tokens": 3871
      }
    }
  ],
  "status": "COMPLETED",
  "workerId": "htcn0ja4fv4i18"
}
3WaD
3WaD5mo ago
This is a chat completion format. You use that for single individual requests. Standard text completion can be used for sending a batch in one request but doesn't support multimodal input. So processing 10k individual requests is what you want to do. Or you have to customize/make your own handler with offline inference. From vLLM docs:
For multimodal batch inference, you must use offline inference where you can pass a list of multimodal prompts to llm.generate. The online OpenAI-compatible server only supports multimodal input via the Chat Completions API, and only one prompt per request is allowed for multimodal data.
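For illustration, an offline-inference handler along those lines could look like the sketch below. This is a minimal sketch assuming a recent vLLM version; the image placeholder token and prompt template are model-specific assumptions, and image_paths is a hypothetical input list.

# Sketch: offline multimodal batch inference with vLLM inside a custom worker.
# The prompt template / image placeholder token are model-specific assumptions,
# and `image_paths` is a hypothetical input.
from vllm import LLM, SamplingParams
from PIL import Image

llm = LLM(model="OpenGVLab/InternVL3-14B", trust_remote_code=True, max_model_len=8192)
sampling = SamplingParams(max_tokens=256)

# One entry per conversation, each carrying its own image, so the 8192-token
# limit applies per prompt rather than to the whole batch.
batch = [
    {
        "prompt": "<image>\nDescribe the condition of this phone.",
        "multi_modal_data": {"image": Image.open(path)},
    }
    for path in image_paths
]

outputs = llm.generate(batch, sampling)
for out in outputs:
    print(out.outputs[0].text)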
Foopop
FoopopOP5mo ago
Ohh, I see, that makes sense. I just don't like the one-request-per-job logic. For 10k requests it's already hitting the rate limit, and that's not even counting the polling that comes after. I also want to keep the queue filled all the time so the GPUs are used efficiently, but that's not that easy to do either. I tried tracking the queue with the health endpoint and then sending new prompts when it's getting emptier.
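For illustration, a feeder loop along those lines might look like the following. A minimal sketch: the /health response field names, the target queue depth, and the submit_prompt() helper are assumptions, not confirmed values.

# Sketch: keep the serverless queue topped up by polling /health and submitting
# more jobs when it drains. Field names in the /health response, the target
# queue depth, and `submit_prompt()` are assumptions/placeholders.
import os
import time
import requests

ENDPOINT_ID = "your-endpoint-id"  # hypothetical
API_KEY = os.environ["RUNPOD_API_KEY"]
HEALTH_URL = f"https://api.runpod.ai/v2/{ENDPOINT_ID}/health"
TARGET_QUEUE_DEPTH = 200  # arbitrary example value

def feed_queue(prompts):
    pending = list(prompts)
    while pending:
        health = requests.get(
            HEALTH_URL, headers={"Authorization": f"Bearer {API_KEY}"}, timeout=30
        ).json()
        in_queue = health.get("jobs", {}).get("inQueue", 0)
        # Top the queue back up to the target depth
        for _ in range(max(0, TARGET_QUEUE_DEPTH - in_queue)):
            if not pending:
                break
            submit_prompt(pending.pop())  # hypothetical helper that POSTs to /run
        time.sleep(5)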
Unknown User
Unknown User5mo ago
Message Not Public
3WaD
3WaD5mo ago
You're ignoring my recommendation to bypass the RunPod job API. You can make the engine requests internally from the handler and fetch the data from somewhere other than the RunPod job input. In fact, even in the normal flow the serverless handler in the worker just constructs the requests and passes them to the engine, like a proxy.
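A rough sketch of that idea: one job carries only a pointer to a batch, and the handler fetches and processes the prompts itself. The fetch_batch() and run_inference() helpers and the batch_url input field are hypothetical.

# Sketch: one RunPod job carries only a pointer to a batch; the handler fetches
# the actual prompts itself and loops them through the engine internally.
# `fetch_batch()`, `run_inference()` and the `batch_url` field are hypothetical.
import runpod

def handler(job):
    batch_url = job["input"]["batch_url"]          # e.g. a presigned object-storage URL (assumption)
    prompts = fetch_batch(batch_url)               # download the 10k prompts yourself
    results = [run_inference(p) for p in prompts]  # call the engine directly, no job API per prompt
    return {"results": results}

runpod.serverless.start({"handler": handler})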
Foopop
FoopopOP5mo ago
I appreciate your help, but I want to avoid using my own Docker image because the RunPod vLLM worker has faster cold-start times, since it's cached on all instances: "Worker vLLM is now cached on all RunPod machines, resulting in near-instant deployment! Previously, downloading and extracting the image took 3-5 minutes on average." And I believe it's more likely to have FlashBoot even after some time. I hadn't thought about that. With /run I run burst tasks, so I need a lot of prompts processed every few hours. In my current setup it takes about 60 seconds when warm, but when it's cold it takes an extra 60 seconds or more to start, and that makes it not that attractive.
3WaD
3WaD5mo ago
It should only have faster initialization (the image download to the worker). Cold starts are not affected, or are actually slower than with my image. FlashBoot availability is not affected either. Unfortunately, there's no secret magic that makes the official vLLM template faster than a custom image, even though one could expect it.
Foopop
FoopopOP5mo ago
Hmm, but what about downloading the Docker image? Can it be saved on network storage like the LLM weights, so I can at least avoid the download time? And does it support the batching I need for multimodal input?
3WaD
3WaD5mo ago
For the fastest loading speeds, you want to bake the model (the LLM weights) into the image itself. Cold-start time is what to worry about, not the worker initialization stage; that stage is not billed, and if you choose enough max workers on the endpoint, it's most likely not something that will bother you. Container images are hosted on a container registry like Docker Hub, or you can build the image right from a GitHub repo.
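For illustration, baking the weights in usually comes down to downloading them at image build time instead of at startup. A minimal sketch of such a build-time step, assuming huggingface_hub is available; the target directory is an arbitrary example.

# Sketch: build-time download script so the weights ship inside the image
# (run via something like `RUN python download_model.py` in the Dockerfile).
# The target directory is an arbitrary example path.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="OpenGVLab/InternVL3-14B",
    local_dir="/models/InternVL3-14B",
)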
Foopop
FoopopOP5mo ago
What time difference does baking the LLM into the image make? I don't like baking it in because building and pushing takes forever.
Unknown User
Unknown User5mo ago
Message Not Public
3WaD
3WaD5mo ago
I'll keep working for RunPod for free after they stop ignoring the issues. Sounds fair? 😀 Because there's no reason to work on reducing the cold starts to the minimum if the queue delay on the platform is 10s.
Unknown User
Unknown User5mo ago
Message Not Public
3WaD
3WaD5mo ago
River just told me to use a different GPU and then ghosted me 😕, both on Discord and email. The ticket is still open. Meanwhile, users even DM me saying they have the same problem and are leaving the platform because of it.
Foopop
FoopopOP5mo ago
The model files are almost 50GB; how do I deal with such a big Docker image layer? Building fails on RunPod: when uploading the layer with the model, it restarts over and over again, and at some point it says the build failed.
Unknown User
Unknown User5mo ago
Message Not Public
Foopop
FoopopOP5mo ago
How should I deal with it? Build it on my machine and push it to Docker Hub? Or Google Cloud Build? Any suggestions? I want to bake in the model since it's faster.
Unknown User
Unknown User5mo ago
Message Not Public