How to optimize batch processing performance?

I'm using serverless to deploy the Qwen/Qwen2-7B model.

GPU: NVIDIA A40 (48 GB)

Environment variables:
MODEL_NAME=Qwen/Qwen2-7B
HF_TOKEN=xxx
ENABLE_LORA=True
LORA_MODULES={"name": "cn_writer", "path": "{huggingface_model_name}", "base_model_name": "Qwen/Qwen2-7B"}
MAX_LORA_RANK=64
MIN_BATCH_SIZE=384
ENABLE_PREFIX_CACHING=1

My problem: batch processing takes too long, 3-4 times as long as a single request. How can I reduce the time a batch takes? My code is in the attachment.

Symptom: a batch of 64 requests takes 4 times as long as a single request. What I want to know is how to bring the time for a batch of 64 close to the time for a single request.
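For context, here is a rough sketch of what those environment variables would map to in the vLLM Python API. The engine-arg names below are assumptions about how the serverless worker wires them up, and the LoRA path is a placeholder, not the actual adapter location:

```python
# Hypothetical sketch: roughly what the worker env vars correspond to in the
# vLLM Python API. Not the worker's actual code; the LoRA path is a placeholder.
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

llm = LLM(
    model="Qwen/Qwen2-7B",
    enable_lora=True,           # ENABLE_LORA=True
    max_lora_rank=64,           # MAX_LORA_RANK=64
    enable_prefix_caching=True, # ENABLE_PREFIX_CACHING=1
)

# One LoRA adapter, analogous to the "cn_writer" entry in LORA_MODULES.
lora = LoRARequest("cn_writer", 1, "/path/to/cn_writer")  # placeholder path

outputs = llm.generate(
    ["prompt 1", "prompt 2"],
    SamplingParams(max_tokens=256),
    lora_request=lora,
)
```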
31 Replies
Unknown User · 8mo ago
Message Not Public
柠檬板烧鸡 (OP) · 8mo ago
I didn't use the OpenAI SDK; my request code was generated by Postman. Does the OpenAI SDK offer any optimization in this regard?

【try 32 batch only, what happens with the time】
There is not much fluctuation. The time consumption increases with the batch size.

【do you get "Sequence group X is preempted due to insufficient KV cache space" too?】
I don't quite understand this sentence.

I added ENABLE_CHUNKED_PREFILL=true and MAX_NUM_BATCHED_TOKENS=4000. The time for 64 and 32 batches is still much higher than for a single request.

ENABLE_CHUNKED_PREFILL=true, MAX_NUM_BATCHED_TOKENS=4000:
1 sample: 11.15 s
16 samples: 22.64 s
32 samples: 21.78 s
64 samples: 35.29 s

I am trying to keep increasing MAX_NUM_BATCHED_TOKENS.

MAX_NUM_BATCHED_TOKENS=8000:
1 sample: 11.02 s
16 samples: 22.84 s
32 samples: 23.52 s
64 samples: 45.85 s
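If the 64 requests end up being sent one after another, the server never sees them as a batch and vLLM cannot schedule them together. Below is a minimal benchmark sketch using the OpenAI SDK with asyncio, assuming an OpenAI-compatible endpoint; base_url, api_key, and the prompt are placeholders, and this is not the attached Postman code:

```python
# Sketch: fire N requests concurrently so the server can batch them, and time
# the wall clock per batch size. Endpoint details are placeholders.
import asyncio
import time

from openai import AsyncOpenAI  # pip install openai

async def one_request(client: AsyncOpenAI, prompt: str) -> str:
    resp = await client.chat.completions.create(
        model="Qwen/Qwen2-7B",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=256,
    )
    return resp.choices[0].message.content

async def bench(n: int) -> float:
    # base_url and api_key are placeholders for the serverless endpoint.
    client = AsyncOpenAI(base_url="https://<your-endpoint>/v1", api_key="<key>")
    start = time.perf_counter()
    await asyncio.gather(*(one_request(client, "hello") for _ in range(n)))
    return time.perf_counter() - start

if __name__ == "__main__":
    for n in (1, 16, 32, 64):
        print(f"{n} concurrent requests: {asyncio.run(bench(n)):.2f} s")
```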
Unknown User · 8mo ago
Message Not Public
柠檬板烧鸡 (OP) · 8mo ago
This is the log from the serverless worker.
Unknown User · 8mo ago
Message Not Public
柠檬板烧鸡 (OP) · 8mo ago
I think the time to process a batch of 64 should be close to the time to process a single request, but right now a batch of 64 takes far too long. If I switch to a better GPU, can I reach that goal?
Unknown User · 8mo ago
Message Not Public
柠檬板烧鸡 (OP) · 8mo ago
OK, thanks, but serverless only offers the RTX A6000 and A40, so there is no way to choose a better GPU.
Unknown User · 8mo ago
Message Not Public
柠檬板烧鸡 (OP) · 8mo ago
select this?
柠檬板烧鸡 (OP) · 8mo ago
(screenshot attached, no description)
Unknown User · 8mo ago
Message Not Public
柠檬板烧鸡 (OP) · 8mo ago
OK, thank you very much. With the 141 GB GPU and the same configuration, the test results are:
1 sample: 2.95 s
16 samples: 7.50 s
32 samples: 8.59 s
64 samples: 12.02 s
Compared with the 48 GB GPU this cuts the time a lot, but 64 samples still take much longer than 1 sample.
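Worth noting about those numbers: even though wall-clock time grows with batch size, per-sample latency (i.e. throughput) improves substantially. A quick check on the figures reported above:

```python
# Quick arithmetic on the 141 GB-GPU results quoted above: batching already
# gives a large throughput gain even though total wall-clock time increases.
times = {1: 2.95, 16: 7.50, 32: 8.59, 64: 12.02}  # seconds, from the post
for n, t in times.items():
    speedup = times[1] * n / t  # vs. sending n single requests sequentially
    print(f"batch {n:2d}: {t:5.2f} s total, {t / n:.3f} s per sample, "
          f"{speedup:.1f}x vs sequential single requests")
```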
Unknown User · 8mo ago
Message Not Public
柠檬板烧鸡 (OP) · 8mo ago
Yes, the overall speed is better, but there is still a large gap between 64 samples and 1 sample, so it seems the goal in 【How to optimize batch processing performance】 cannot be reached just by switching to a better GPU.
Unknown User · 8mo ago
Message Not Public
柠檬板烧鸡 (OP) · 8mo ago
Our technical team leader told me this; I was skeptical, so I'm verifying it. What do I mean by 【How to optimize batch processing performance】? If a single request takes 2.95 s, then I expect a batch of 64 to take roughly 2.0-3.95 s.
Unknown User · 8mo ago
Message Not Public
柠檬板烧鸡 (OP) · 8mo ago
My plan is to upgrade the GPU and adjust the RunPod and vLLM environment variables while measuring the time. If the time for 64 samplings cannot get close to the time for 1 sampling, then what my technical team leader said is wrong. He did not provide any reference material, but another colleague of mine is looking for relevant papers to verify it. If the time for 64 samplings is close to the time for 1 sampling, then we can take multiple samples and pick the best one, which would improve product quality. His claim is 【the difference between 64 samplings and 1 sampling should be on the order of milliseconds】. The tests I am running now are to verify that claim: if it holds, great; if not, I need a solid conclusion to convince him.
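If the "64 samplings" are really 64 samples of the same prompt (best-of-n selection), one thing that might be worth testing is requesting all samples in a single call via the OpenAI-style n sampling parameter, so the prompt only has to be prefilled once. Whether the serverless worker forwards n to vLLM is an assumption to verify; the endpoint details and the selection rule below are placeholders:

```python
# Sketch only: 64 samples of one prompt in a single request via the
# OpenAI-compatible `n` parameter (support on the worker is an assumption).
from openai import OpenAI

client = OpenAI(base_url="https://<your-endpoint>/v1", api_key="<key>")
resp = client.chat.completions.create(
    model="Qwen/Qwen2-7B",
    messages=[{"role": "user", "content": "write a paragraph about ..."}],
    n=64,              # 64 completions of the same prompt
    temperature=0.9,
    max_tokens=256,
)
# Placeholder selection rule; replace with your own quality metric.
best = max(resp.choices, key=lambda c: len(c.message.content))
print(best.message.content)
```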
Unknown User · 8mo ago
Message Not Public
柠檬板烧鸡 (OP) · 8mo ago
thanks
Unknown User · 8mo ago
Message Not Public
riverfog7 · 8mo ago
Low batch sizes are limited by VRAM bandwidth; high batch sizes are limited by core compute.
Low batch: low throughput, low latency.
High batch: high throughput, high latency.
You have to find the middle ground.
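A back-of-the-envelope roofline sketch of that trade-off for a 7B model in fp16, using approximate A40 peak specs (~696 GB/s memory bandwidth, ~150 TFLOPS fp16 tensor). These are peak numbers; real utilization is much lower, so in practice the crossover to compute-bound happens at smaller batch sizes than this suggests:

```python
# Rough roofline estimate for one decode step of a 7B fp16 model on an A40.
# Assumed peak specs; illustration only.
params = 7e9
weight_bytes = params * 2            # fp16 = 2 bytes per parameter
bw, flops = 696e9, 150e12            # ~A40 peak bandwidth and fp16 tensor FLOPS

# Each decode step must stream all weights once, regardless of batch size.
t_mem = weight_bytes / bw            # ~20 ms -> floor set by memory bandwidth
for batch in (1, 16, 32, 64):
    # ~2 FLOPs per parameter per generated token, scaled by batch size.
    t_compute = 2 * params * batch / flops
    bound = "memory" if t_mem > t_compute else "compute"
    print(f"batch {batch:2d}: mem {t_mem * 1e3:.1f} ms, "
          f"compute {t_compute * 1e3:.2f} ms -> {bound}-bound")
```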
Unknown User · 8mo ago
Message Not Public
riverfog7 · 8mo ago
Probably not, because the GPU still has to do prompt processing, and there is 64 times more of it at batch size 64 than at batch size 1. For reference, on 70B models with 4x A40 / A6000 GPUs I get roughly:
batch 1: about 40 tok/s
batch 70: about 500 tok/s (not accurate, I forgot the actual number)
prompt processing: ~1000 tok/s
By the way, an A40 is just an underclocked datacenter A6000.
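To put the prefill point in numbers, a tiny illustration with assumed values; the prompt length and prefill rate below are placeholders, not measurements from this thread:

```python
# Illustration only: prefill token count grows linearly with batch size, so
# the prefill share of wall-clock time cannot stay at the batch-1 level.
prompt_len, prefill_rate = 1000, 5000   # tokens, tokens/s (assumed placeholders)
for batch in (1, 16, 32, 64):
    print(f"batch {batch:2d}: {batch * prompt_len} prefill tokens "
          f"~ {batch * prompt_len / prefill_rate:.1f} s")
```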
Unknown User · 8mo ago
Message Not Public
riverfog7 · 8mo ago
already accepted 🙂
Unknown User · 8mo ago
Message Not Public
柠檬板烧鸡 (OP) · 8mo ago
The conclusion we reached is that at batch size 64 you need a mixture of different paragraphs, so that some prompts are in the prefill stage (high arithmetic intensity) while others are in the decode stage (low arithmetic intensity); in theory that lets the tensor cores' parallelism be fully utilized. We did not continue further in this direction.
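For reference, the prefill/decode mixing described here is what vLLM's chunked prefill does (the ENABLE_CHUNKED_PREFILL / MAX_NUM_BATCHED_TOKENS settings tried earlier in the thread). A minimal sketch of the corresponding Python-API knobs; the values are illustrative, not tuned:

```python
# Minimal sketch of the chunked-prefill knobs discussed above (values are
# illustrative, not tuned).
from vllm import LLM

llm = LLM(
    model="Qwen/Qwen2-7B",
    enable_chunked_prefill=True,   # mix prefill chunks with decode tokens per step
    max_num_batched_tokens=2048,   # token budget per scheduler step
)
```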
Unknown User · 8mo ago
Message Not Public
柠檬板烧鸡 (OP) · 8mo ago
I'm not sure; we didn't test it with a mix of different paragraphs.
