Serverless vLLM deployment stuck at "Initializing" with no logs
I've been trying for hours. Initially I attempted to deploy Ollama on Serverless GPU, but it got stuck at "Initializing". Now I'm using the Serverless vLLM option directly and it's still not working. Every time I click the deploy button, it just says "Initializing" and nothing more, no logs whatsoever. Any ideas? Thanks!