OSError in vLLM worker endpoints after the new update was released
I was using vLLM worker 1.7.0 and everything was working fine until yesterday. Today I am facing issues in all of my endpoints where Hugging Face models are deployed using the vLLM worker. The RunPod logs show an OSError saying the model can't be identified.
I then deployed a new endpoint with the latest vLLM worker 1.9 configuration, and everything worked the way it used to. @Justin Merrell
RunPod should at least notify us of such changes, so they don't affect endpoints in production.
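Until that happens, one workaround (a sketch, not official RunPod guidance) is to pin the endpoint's container image to an exact worker release instead of tracking the latest tag, so an upstream worker update cannot silently change a production endpoint. The image name and tag below are illustrative assumptions based on the usual `runpod/worker-v1-vllm` naming; check the registry for the tags that actually exist.

```shell
# Pin a specific worker release in the endpoint's "Container Image" field,
# e.g. an exact version tag instead of a moving "latest"-style tag.
# (Image name and tag are assumptions; verify them against the registry.)
docker pull runpod/worker-v1-vllm:v1.9.0

# Redeploying with an explicit tag means the endpoint only changes
# when you deliberately bump the version yourself.
```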

2 Replies
@Ashique A B
Escalated To Zendesk
The thread has been escalated to Zendesk!
Unknown User•9mo ago