Serverless vLLM concurrency issue
Hello everyone, I deployed a serverless vLLM endpoint (Gemma 12B model) through the RunPod UI, with 2 workers of A100 80GB VRAM.
If I send two requests at the same time, they both become IN PROGRESS, but I receive the output stream of only one first; the second always waits for the first to finish before I start receiving its token stream. Why is it behaving like this?
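For context, this is roughly how I'm sending the two requests. It's a minimal sketch, assuming the worker's OpenAI-compatible /openai/v1 route; the endpoint ID, API key, and model name are placeholders, not my exact setup:
```python
# Minimal sketch: fire two streaming chat requests at the same time and tag
# each chunk so it's visible whether the token streams interleave or not.
# ENDPOINT_ID, RUNPOD_API_KEY and the model id are placeholders.
import os
import threading

from openai import OpenAI

client = OpenAI(
    base_url=f"https://api.runpod.ai/v2/{os.environ['ENDPOINT_ID']}/openai/v1",
    api_key=os.environ["RUNPOD_API_KEY"],
)

def stream(tag: str, prompt: str) -> None:
    # Stream tokens and prefix each chunk with the request tag.
    resp = client.chat.completions.create(
        model="google/gemma-3-12b-it",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    for chunk in resp:
        delta = chunk.choices[0].delta.content or ""
        print(f"[{tag}] {delta}", end="", flush=True)

t1 = threading.Thread(target=stream, args=("req1", "Write a short poem about GPUs."))
t2 = threading.Thread(target=stream, args=("req2", "Explain KV cache in one paragraph."))
t1.start(); t2.start()
t1.join(); t2.join()
```
With this, I only ever see [req2] chunks after all the [req1] chunks have finished.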
47 Replies
@Jason it is the same worker. Is there any way I can make it respond to both of them at the same time?
You can't without changing code
because of the nature of LLMs
if you give them the same input, the output may vary in length, which affects generation time
I am using an A100 with 80GB VRAM and it is supposed to be very fast!
Before, I used to deploy the same model on an A100 40GB VRAM on GCP with vLLM, and it had no problem handling concurrent requests
DEFAULT_BATCH_SIZE or BATCH_SIZE ?
yes same everything
My issue is not really the speed; the speed is decent when there is no cold start. My issue is handling more than one request at the same time
yes
The first request starts streaming; the second request from another client always starts after the first one finishes
with two workers?
I'll do some benchmarks and provide you with the numbers
2 and 3
tried both
can you check vllm logs
it should say metrics like
current running req, waiting req
etc
and tok/s
do we need to set batch size with vllm workers?
vLLM intelligently does batching until its KV cache is full
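As a quick local illustration (this uses vLLM's offline API, not the RunPod worker, and the model id is just a placeholder), a single engine will schedule several prompts together on its own:
```python
# Quick local illustration of vLLM batching several prompts on one engine.
# Continuous batching keeps adding/removing sequences from the running batch
# as long as the KV cache has room.
from vllm import LLM, SamplingParams

llm = LLM(model="google/gemma-3-12b-it")  # placeholder model id
params = SamplingParams(max_tokens=128, temperature=0.7)

prompts = [
    "Summarize the plot of Hamlet.",
    "Write a haiku about autumn.",
]
# Both prompts are scheduled together; no per-request serialization here.
for out in llm.generate(prompts, params):
    print(out.outputs[0].text)
```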
No, I mean I was configuring the endpoint to scale up to multiple workers if needed
logs when sending 2 requests
right now it is configured to only have one worker
Try setting the default batch size to 10
I am setting the default batch size to 1 because I noticed streaming used to send very big chunks of tokens
lol
i tried it with 50 and 256
That setting means only 1 request should be processed concurrently
Same behavior (not handling multiple requests) with the default batch size set to 50 and 256
No no
sorry for the misinformation
but both requests' statuses appear as IN PROGRESS
it's the batch size for streaming tokens
This is the real one, but you didn't set it, so it should be fine
Are you sure? I tried it with 5, 10, 50, and 256 and got the same behaviour
but let me try it one more time to confirm
uhh I mean it doesn't matter if you set it to 5 / 10 / etc.
because it is related to token streaming, not the actual requests
@Abdelrhman Nile can you maybe try spamming requests? Like 50+?
set the max workers to 1 and then
spam requests
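Something like this rough sketch should do for the spamming test, assuming the standard async /run route; the endpoint ID, API key, and input payload shape are placeholders:
```python
# Rough sketch: submit N jobs at once via the async /run route so the single
# worker is forced to queue or batch them. ENDPOINT_ID, RUNPOD_API_KEY and
# the payload fields are placeholders.
import os
from concurrent.futures import ThreadPoolExecutor

import requests

ENDPOINT_ID = os.environ["ENDPOINT_ID"]
URL = f"https://api.runpod.ai/v2/{ENDPOINT_ID}/run"
HEADERS = {"Authorization": f"Bearer {os.environ['RUNPOD_API_KEY']}"}

def submit(i: int) -> str:
    payload = {"input": {"prompt": f"Request {i}: say hello.", "max_tokens": 64}}
    r = requests.post(URL, headers=HEADERS, json=payload, timeout=30)
    r.raise_for_status()
    return r.json()["id"]  # job id; poll /status/<id> afterwards to see outcomes

with ThreadPoolExecutor(max_workers=50) as pool:
    job_ids = list(pool.map(submit, range(50)))

print(f"submitted {len(job_ids)} jobs")
```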
I kinda did that with the vLLM benchmark serving script, let me share the results with you
configuration was max workers = 3
and I was NOT setting the default batch size; it was left on the default, which I believe is 50
also the script sent 1000 requests
only 857 were successful
Same model, same benchmark, but on a GCP A100 40GB VRAM machine
will test that
When you initialize the vLLM engine (on cold start) you should see a log similar to this:
"Maximum concurrency for 32768 tokens per request: 5.42x" as part of vLLM's memory profiling. Make sure that the engine reports concurrency > 2.
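If I understand that line correctly, the multiplier is roughly the total KV-cache capacity in tokens divided by the tokens one max-length request could need. A back-of-the-envelope sketch (the block numbers are assumptions, not measured values; vLLM's exact accounting may differ):
```python
# Back-of-the-envelope version of the "Maximum concurrency" log line.
max_model_len = 32_768          # tokens per request (from the log above)
num_gpu_blocks = 11_100         # assumed value from the engine's memory profiling
block_size = 16                 # assumed default KV-cache block size

kv_cache_tokens = num_gpu_blocks * block_size
max_concurrency = kv_cache_tokens / max_model_len
print(f"Maximum concurrency for {max_model_len} tokens per request: {max_concurrency:.2f}x")
# -> ~5.42x with these assumed numbers, i.e. more than 2 full-length requests fit in cache
```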
That being said, the official RunPod vLLM image unfortunately does not handle concurrency dynamically (it's hardcoded to 300 or a static value), which will result in bottlenecking the jobs anyway. But it's definitely possible to stream multiple responses concurrently from a single serverless worker. Or at least it's working in my implementation.
Actually, that doesn't matter: you can batch even if you have less than 2x concurrency, as long as the requests fit in the KV cache
Anyways, he has enough cache (the requests don't even use 5 percent of the cache)
idk why it doesn't work either, because everything is right
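For reference, here's a minimal sketch of what a custom worker that accepts several jobs at once could look like. This assumes the RunPod Python SDK's concurrency_modifier hook and vLLM's AsyncLLMEngine; it is not the official worker's code, and the model id and concurrency ceiling are placeholders:
```python
# Sketch of a custom serverless worker that serves several jobs from one
# vLLM engine. Not the official runpod vllm-worker; model id and the
# concurrency ceiling are assumptions to be tuned.
import runpod
from vllm import AsyncEngineArgs, AsyncLLMEngine, SamplingParams

engine = AsyncLLMEngine.from_engine_args(
    AsyncEngineArgs(model="google/gemma-3-12b-it")  # placeholder model id
)

async def handler(job):
    prompt = job["input"]["prompt"]
    params = SamplingParams(max_tokens=job["input"].get("max_tokens", 256))
    previous = ""
    # vLLM yields cumulative RequestOutputs; stream only the newly generated text.
    async for output in engine.generate(prompt, params, job["id"]):
        text = output.outputs[0].text
        yield text[len(previous):]
        previous = text

def adjust_concurrency(current_concurrency: int) -> int:
    # Let one worker take several jobs at once instead of a single static value.
    return 8  # assumed ceiling; tune against KV-cache headroom

runpod.serverless.start({
    "handler": handler,
    "concurrency_modifier": adjust_concurrency,
    "return_aggregate_stream": True,
})
```
With an async-generator handler plus a concurrency value above 1, a single worker can keep several token streams going at once as long as the engine's KV cache has room.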