Runpod · 15mo ago
14 replies
jhappy

Serverless pod tasks stay "IN_QUEUE" forever

I have a TTS model that I've deployed flawlessly as a Runpod Pod, and I want to convert it to a serverless endpoint to save costs.
I made an initial attempt, but when I send a request to the deployed serverless endpoint, the task just stays "IN_QUEUE" forever.

The last line of my Dockerfile is:
CMD ["python", "-u", "runpod.py"]
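One hazard worth flagging here (this demo is mine, not from the post): a worker script named runpod.py shadows the installed runpod SDK, because Python puts the script's own directory first on sys.path, so `import runpod` can resolve to the script itself rather than the package. A self-contained reproduction:

```python
# Reproduce the shadowing: a file named runpod.py that does `import runpod`
# imports itself, not the SDK, regardless of whether the SDK is installed.
import os
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "runpod.py")
    with open(path, "w") as f:
        # The self-import resolves to this very file, not site-packages.
        f.write("import runpod\nprint(runpod.__file__)\n")
    out = subprocess.run([sys.executable, path], capture_output=True, text=True)

print(out.stdout.strip())  # prints the local runpod.py path, not site-packages
```

Renaming the worker script (e.g. to handler.py, a name I'm choosing for illustration) and updating the Dockerfile CMD accordingly sidesteps this entirely.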


Contents of runpod.py:
import runpod
from api import handle

def handler(event):
    print('In handler')
    input = event['input']
    return handle(
        input.get("is_stream", False),
        input.get("clip_id"),
        input.get("refer_wav_path"),
        input.get("prompt_text"),
        input.get("prompt_language"),
        input.get("text"),
        input.get("text_language"),
        input.get("cut_punc"),
        input.get("top_k", 15),
        input.get("top_p", 1.0),
        input.get("temperature", 1.0),
        input.get("speed", 1.0),
        input.get("inp_refs", [])
    )

if __name__ == 'main':
    print('In runpod.py...')
    runpod.serverless.start({'handler': handler})
    print('started handler!')
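For reference, a script executed directly gets `__name__ == '__main__'`, never `'main'`, so the guard above likely never fires: `runpod.serverless.start` is never called, the process exits, and jobs sit in the queue. A minimal corrected sketch, with `api.handle` replaced by a stub so it runs standalone (the stub's behavior is my assumption, not the poster's real code):

```python
# Stand-in for api.handle so this sketch is self-contained.
def handle(text):
    return {"spoken": text}

def handler(event):
    inp = event["input"]  # renamed to avoid shadowing the built-in input()
    return handle(inp.get("text"))

if __name__ == "__main__":  # note: "__main__", not "main"
    # In the real worker this is where runpod.serverless.start({"handler": handler})
    # belongs; here we just exercise the handler directly.
    print(handler({"input": {"text": "Generate this text!"}}))
```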


Input:
{
  "input": {
    "clip_id": "12345",
    "is_stream": false,
    "refer_wav_path": "test_short.wav",
    "prompt_text": "Reference text here",
    "prompt_language": "en",
    "text": "Generate this text!",
    "text_language": "en"
  }
}
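To exercise the endpoint with that payload, the request can be posted to the endpoint's runsync route. The sketch below only builds the request without sending it; the endpoint ID and API key are placeholders, and the route is per my reading of Runpod's endpoint API, so treat the URL as an assumption:

```python
import json
import urllib.request

ENDPOINT_ID = "YOUR_ENDPOINT_ID"  # placeholder, not the poster's real endpoint
API_KEY = "YOUR_API_KEY"          # placeholder

payload = {
    "input": {
        "clip_id": "12345",
        "is_stream": False,
        "refer_wav_path": "test_short.wav",
        "prompt_text": "Reference text here",
        "prompt_language": "en",
        "text": "Generate this text!",
        "text_language": "en",
    }
}

# Build (but do not send) a synchronous request; the /run route would queue
# the job asynchronously instead.
req = urllib.request.Request(
    f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
print(req.full_url)
# urllib.request.urlopen(req) would submit it; omitted so the sketch stays
# runnable without credentials.
```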


Does anyone know what might be going wrong? I'm willing to pay a bounty if you can help me solve this issue.

The container logs just print the CUDA notice repeatedly (the worker appears to keep starting and stopping), and CPU utilization is generally high.

Not sure what I should do to debug.