Model Maximum Context Length Error
Hi there, I run an AI chat site (https://www.hammerai.com). I was previously using vLLM serverless, but switched over to using dedicated Pods with the vLLM template (Container Image: vllm/vllm-openai:latest). Here is my configuration:
I then call it with:
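Roughly, it's a standard request against the pod's OpenAI-compatible endpoint - something along these lines, with placeholders instead of my real host, key, and model name:

```python
from openai import OpenAI

# Placeholders, not my real values: RunPod exposes the pod over its HTTP proxy,
# and the vLLM template serves an OpenAI-compatible API on that port.
client = OpenAI(
    base_url="https://<pod-id>-8000.proxy.runpod.net/v1",
    api_key="<whatever-key-the-pod-expects>",
)

response = client.chat.completions.create(
    model="<served-model-name>",  # the model the Pod is serving
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)
```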
But I am now running into a new error about exceeding the model's maximum context length:
I didn't see this when using the serverless endpoints. So my question:
- Is there something I can set on vLLM to automatically manage the context length for me, i.e. to delete tokens from the `prompt` or `messages` automatically? Or do I need to manage this myself?
Thanks!
Unknown User•9mo ago (message not public)
Yep, but won't it just default to something else even if I don't set those? And then we'll run into the same issue at whatever number of tokens that is?
Unknown User•9mo ago (message not public)
Yes, but when I do that (specifically setting it to 8192), I get a separate error saying that I have exceeded the maximum context length. And in general, even if I manage to set it a little higher, won't I just run into the same problem later?
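The way I understand it, the server is basically doing this check, so a bigger limit just moves the wall further out (numbers made up):

```python
# Rough sketch of the check that fails, not vLLM's actual code.
max_model_len = 8192      # server-side cap, e.g. set via --max-model-len
prompt_tokens = 7900      # tokens already in the chat history
max_tokens = 512          # tokens requested for the next reply

if prompt_tokens + max_tokens > max_model_len:
    print("Request rejected: exceeds the model's maximum context length.")
```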
Unknown User•9mo ago (message not public)
Unfortunately I didn't save it, and Runpod logs don't go back that far - but I guess it doesn't really matter as long as we have to set a max limit, right? Because in a chat application we'll eventually go past it.
Unknown User•9mo ago (message not public)
Got it - so vLLM doesn't help with truncating things? I just ask because, coming from Ollama, it automatically trims your prompt so that requests keep working even past the max context length.
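So I guess I'd need to write something like this on my side - a rough sketch, using a crude token estimate instead of the model's real tokenizer:

```python
def estimate_tokens(text: str) -> int:
    # Very rough heuristic (~4 characters per token); a real version
    # would use the served model's tokenizer.
    return max(1, len(text) // 4)

def truncate_messages(messages: list[dict], max_prompt_tokens: int) -> list[dict]:
    """Drop the oldest non-system messages until the history fits the budget."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    def total(msgs):
        return sum(estimate_tokens(m["content"]) for m in msgs)

    while rest and total(system + rest) > max_prompt_tokens:
        rest.pop(0)  # drop the oldest turn first

    return system + rest
```

That keeps the system prompt and the most recent turns, which I assume is roughly what Ollama does under the hood.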
Unknown User•9mo ago (message not public)
Got it. So do you know how other AI chat sites handle this? Does everyone just write custom code if they're using Runpod vLLM?
Unknown User•9mo ago (message not public)