Runpod · 16mo ago
vladfaust

Sticky sessions (?) for cache reuse

In my case—building an AI chat application (duh)—it'd be useful to be able to direct a subsequent request to the same node of an ever-scaling endpoint for efficient KV cache reuse. Is that currently possible with RunPod? As far as I can see, there is no way to target a specific node when making a request to an endpoint. The question applies both to the vLLM endpoint template and to custom handlers.
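For context on why routing matters here: in a chat workload, each new request repeats the entire prior conversation as its prefix, so a worker that still holds the KV cache for that prefix can skip re-computing (prefilling) it. A minimal illustration (token IDs below are made up, not from any real tokenizer):

```python
# Illustrative only: how much prefill a warm KV cache can skip in a chat.
# Each new chat request = previous prompt + model reply + new user message,
# so consecutive prompts share a long common token prefix.

def shared_prefix_len(prev_tokens: list[int], next_tokens: list[int]) -> int:
    """Number of leading tokens the two prompts have in common."""
    n = 0
    for a, b in zip(prev_tokens, next_tokens):
        if a != b:
            break
        n += 1
    return n

# Turn 1 prompt vs. turn 2 prompt (made-up token IDs).
turn1 = [1, 42, 7, 9, 13]
turn2 = [1, 42, 7, 9, 13, 88, 21, 5]

reused = shared_prefix_len(turn1, turn2)
print(f"{reused} of {len(turn2)} tokens can reuse cached KV state")
# On a cold worker, all len(turn2) tokens would need prefill instead.
```

This saving only materializes if the follow-up request actually lands on the worker that served the previous turn, which is exactly what sticky routing would provide.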
8 Replies
Unknown User · 16mo ago
Message Not Public
Encyrption · 16mo ago
When you create your Serverless endpoint you can select FlashBoot, and RunPod will attempt to reuse your image cache so it doesn't have to re-load the entire image for each request. This happens when the subsequent request is already QUEUED and ready to go when the last request is fulfilled. For this to work optimally, your model should be baked into your image (don't use a network volume, and don't load any models at run-time). The easiest way to get workers that don't reload after each request is to enable some active workers. Also, since you only pay for processing actual requests, you should always set your max workers to 30.
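The "bake the model into the image" advice above can be sketched as a Dockerfile that downloads the weights at build time rather than at request time. This is only a sketch: the base image, model name, and paths are illustrative placeholders, not RunPod-prescribed values.

```dockerfile
# Sketch: bake model weights into the image layer at build time so workers
# never pull from a network volume or the Hub at request time.
# Base image, model ID, and paths are placeholders — adjust to your stack
# (e.g. swap in a CUDA base image for GPU inference).
FROM python:3.11-slim

RUN pip install --no-cache-dir huggingface_hub

# Download weights during `docker build`, not in the handler.
RUN python -c "from huggingface_hub import snapshot_download; \
    snapshot_download('some-org/some-model', local_dir='/models/model')"

COPY handler.py /handler.py
CMD ["python", "-u", "/handler.py"]
```

The trade-off is a much larger image, but FlashBoot-style cache reuse then covers everything the worker needs, since nothing is fetched at run-time.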
vladfaust (OP) · 16mo ago
Nope, that's not what I'm asking about. The question is about when we already have a bunch of active workers: can subsequent requests use a sticky session so they are routed to a specific worker node?
Unknown User · 16mo ago
Message Not Public
yhlong00000 · 16mo ago
Yes, when requests come in, they are distributed to any available idle worker.
Unknown User · 16mo ago
Message Not Public
yhlong00000 · 16mo ago
Yes, I understand. I just wanted to confirm the current behavior. Sticky sessions are not currently available.
Unknown User · 16mo ago
Message Not Public
