Terrible performance - vLLM serverless for Mistral 7B
Hello,
When I serve Mistral-7B quantized with AWQ (e.g. "TheBloke/Mistral-7B-v0.1-AWQ") on RunPod's vLLM serverless instance, I get terrible performance (accuracy) compared to running Mistral 7B on my CPU with Ollama (which uses GGUF Q4_0 quantization). Could this be due to a misconfiguration on my part, even though I kept the default parameters, or is AWQ quantization known to degrade quality that much?
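For reference, here is roughly how I would expect the model to be loaded with vLLM's offline API (the sampling values are illustrative assumptions on my part, not RunPod's actual serverless defaults); this needs a GPU to run:

```python
from vllm import LLM, SamplingParams

# Load the AWQ-quantized checkpoint; passing quantization="awq" explicitly
# ensures vLLM uses the AWQ kernels rather than full-precision weights.
llm = LLM(model="TheBloke/Mistral-7B-v0.1-AWQ", quantization="awq")

# Illustrative sampling settings (assumed, not RunPod defaults):
# a high temperature alone can make outputs look like "bad accuracy".
params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=256)

outputs = llm.generate(["Explain AWQ quantization in one sentence."], params)
print(outputs[0].outputs[0].text)
```

Running something like this locally with the same sampling parameters as the serverless endpoint would at least tell me whether the quality gap comes from AWQ itself or from the endpoint's default settings.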
Thank you