Serverless pod tasks stay "IN_QUEUE" forever
I have a TTS model that deploys flawlessly as a Runpod Pod, and I want to convert it to a serverless endpoint to save costs.
I made an initial attempt, but when I send a request to the deployed serverless endpoint, the task stays "IN_QUEUE" forever.
The last line of my Dockerfile is:
Contents of runpod.py:
Input:
Anyone know what might be going wrong? I am willing to pay a bounty if you can help me solve this issue.
The container logs just print the CUDA notice repeatedly, as if the worker keeps restarting. CPU utilization is generally high.
Not sure what I should do to debug.
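One generic way to get past "worker exited with code 1" with no further logs is to wrap the handler so any crash prints a full traceback before the worker dies. A minimal sketch, with a hypothetical `handler` standing in for the real TTS handler (the raise just simulates a startup crash):

```python
import traceback

def handler(job):
    # hypothetical stand-in for the real TTS handler;
    # raises to simulate the kind of crash that kills a worker
    raise RuntimeError("model failed to load")

def safe_handler(job):
    # catch anything the handler throws and surface the traceback,
    # so the container logs show more than a bare exit code
    try:
        return handler(job)
    except Exception:
        traceback.print_exc()
        return {"error": traceback.format_exc()}

result = safe_handler({"input": {"text": "hello"}})
```

With this wrapper registered as the serverless handler, a failing job returns the traceback in its output instead of silently looping back to IN_QUEUE.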
7 Replies
My A1111 worker for serverless does the same thing.
It had been working for a few months, but it's been broken since last week.
Not sure if it's a server issue or the code, because there weren't any changes to the code.
Requests just get stuck at IN_QUEUE and it keeps running.
The log says something to the effect of "server not starting up, retrying".
Figured out issue 1: a typo. I wrote `if __name__ == 'main':` when it should be `'__main__'`, not `'main'`. Checking if this works now.
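For anyone hitting the same typo: `__name__` is never just `'main'`, so the mistyped guard silently never runs and the handler is never started. A quick self-contained demonstration using `runpy` to run a throwaway script the way `python file.py` would:

```python
import os
import runpy
import tempfile
import textwrap

# a tiny script containing both the typo'd guard and the correct one
script = textwrap.dedent("""
    fired = []
    if __name__ == 'main':        # the typo: __name__ is never 'main'
        fired.append('typo guard')
    if __name__ == '__main__':    # correct guard
        fired.append('correct guard')
""")

fd, path = tempfile.mkstemp(suffix=".py")
with os.fdopen(fd, "w") as f:
    f.write(script)

# run the file as a top-level script, i.e. with __name__ == '__main__'
ns = runpy.run_path(path, run_name="__main__")
os.remove(path)
```

Only the `'__main__'` guard fires, which is why the typo leaves the serverless worker doing nothing at startup.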
Welp, no luck with that fix; it's still broken. In the logs I can now see "worker exited with code 1" a few times, but no logs beyond that.
Though in one of the workers I saw "in runpod.py..." printed a couple of times as it appeared to restart, but no "started handler!". Debugging guidance would be greatly appreciated.
from api import handle
return handle(....)
This function is probably not working.
Nope, that's not the issue. But I did find the real one:
My file is called runpod.py. When I do `import runpod`, Python imports my own file instead of the runpod package. Isn't Python wonderful? 😛
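For anyone else debugging this: a local file whose name matches a package shadows the installed package, because the script's directory sits at the front of `sys.path`. The fix is to rename the handler file (anything that isn't `runpod.py`). A minimal sketch of the shadowing mechanism, using a throwaway file named after the stdlib `json` module as a stand-in for `runpod.py`:

```python
import os
import sys
import tempfile

tmp = tempfile.mkdtemp()
# create a file that shares its name with an existing module
with open(os.path.join(tmp, "json.py"), "w") as f:
    f.write('MARKER = "local shadow"\n')

sys.path.insert(0, tmp)          # mimic the script's own dir winning the search
sys.modules.pop("json", None)    # forget any cached stdlib import
import json                      # resolves to tmp/json.py, not the stdlib

shadowed = getattr(json, "MARKER", None)

# clean up: restore the real stdlib json for anything imported later
sys.path.remove(tmp)
sys.modules.pop("json", None)
```

Exactly the same thing happens with a handler file named runpod.py: `import runpod` inside it finds the file itself, so the SDK never loads and the worker never registers a handler.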