How to monitor the LLM inference speed (generation token/s) with vLLM serverless endpoint?
I have gotten started with vLLM deployment; configuring it with my application was straightforward and it worked as well.
My main concern is how to monitor inference speed on the dashboard or in the "Metrics" tab. Currently I have to dig through the logs manually to find the average token-generation speed printed by vLLM.
Any neat solution to this?
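In the meantime, the workaround I'm considering is polling the Prometheus metrics that vLLM exposes and computing the rate myself. This is only a rough sketch under my assumptions: that the endpoint serves Prometheus text at `/metrics` and that the counter is named `vllm:generation_tokens_total` (both based on vLLM's metrics docs; names may differ across versions).

```python
import re
import time
import urllib.request

def sum_counter(metrics_text: str, name: str) -> float:
    """Sum all samples of a Prometheus counter across its label sets."""
    # Matches lines like: vllm:generation_tokens_total{model_name="x"} 120.0
    pattern = re.compile(
        rf"^{re.escape(name)}(\{{[^}}]*\}})?\s+([0-9.eE+-]+)\s*$"
    )
    total = 0.0
    for line in metrics_text.splitlines():
        match = pattern.match(line)
        if match:
            total += float(match.group(2))
    return total

def generation_tokens_per_sec(metrics_url: str, interval: float = 5.0) -> float:
    """Sample the generation-token counter twice and return tokens/s."""
    def snapshot() -> float:
        with urllib.request.urlopen(metrics_url) as resp:
            return sum_counter(resp.read().decode(), "vllm:generation_tokens_total")

    first = snapshot()
    time.sleep(interval)
    second = snapshot()
    return (second - first) / interval
```

It's crude (two snapshots, no sliding window), but it at least gives a number without grepping logs.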