Queued serverless workers not running and getting charged for it?
I woke up this morning to find that all the credits in my Runpod account are gone. I don't have any active pods and only have a single network volume of 100GB.
I didn't know why but noticed that there are 2 queued workers for one of my serverless endpoints.
I was testing in Postman yesterday and sent a few requests, maybe like 10 in total. I had assumed that requests that didn't get a response after some time were automatically terminated.
As you can see, these 2 requests are still in queue after over 10 hours. And I'm guessing I'm being charged the whole time for these requests.
Is this normal behavior? There are no other requests, just these 2 sitting in the queue. Why are they queued up? Why aren't they returning a result, or at least an error, instead of just being stuck in the queue?
This has been a really bad noob experience with Runpod, and I'm hesitant to put more money into my account now.

Solution
@Jack They can't tell whether your workers are actually doing work or just hung. There isn't a runtime timeout by default because you might genuinely be processing for that long, which is common for a use case like mine doing large video or audio processing.
My recommendation is to go through the process on a GPU pod first with your handler.py and make sure it works as expected there. Then you can send a request using the built-in testing endpoint on Runpod and monitor how it's going with the logs.
With a GPU pod you can at least see in a Jupyter notebook whether your handler.py logic is doing what you expect, and you can invoke it by just calling the method normally.
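For reference, a minimal handler.py is just a plain function. The input fields and return value below are made up; only the `runpod.serverless.start()` wiring is the standard pattern:

```python
import runpod  # RunPod serverless SDK (pip install runpod)


def handler(job):
    """The whole worker: a plain function that receives the request payload."""
    job_input = job["input"]  # whatever you sent under "input" in Postman
    # ... your real processing here (model inference, video/audio work, etc.)
    return {"echo": job_input}  # the returned dict becomes the job output


if __name__ == "__main__":
    # Only start the serverless loop when this file runs as the worker entrypoint,
    # so in a notebook you can `from handler import handler` and call
    # handler({"input": {"prompt": "test"}}) directly to debug the logic.
    runpod.serverless.start({"handler": handler})
```

On a GPU pod you just import that function into a notebook, call `handler({"input": {...}})`, and get the result or traceback immediately instead of a request sitting in the queue.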
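And for the endpoint side of it: once the handler behaves on a pod, you can watch what a deployed request is actually doing, and kill it if it's stuck in queue, with something like the sketch below. The endpoint ID and payload are placeholders, and it's worth double-checking the auth header and status strings against the current RunPod API docs:

```python
import os
import time

import requests

# Placeholders — use your own endpoint ID and API key.
ENDPOINT_ID = "your-endpoint-id"
API_KEY = os.environ["RUNPOD_API_KEY"]
BASE_URL = f"https://api.runpod.ai/v2/{ENDPOINT_ID}"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# Queue a job, same as POSTing to /run from Postman.
resp = requests.post(f"{BASE_URL}/run", json={"input": {"prompt": "test"}}, headers=HEADERS)
job_id = resp.json()["id"]

# Poll /status until the job leaves the queue and finishes (or fails).
while True:
    status = requests.get(f"{BASE_URL}/status/{job_id}", headers=HEADERS).json()
    print(status.get("status"))
    if status.get("status") not in ("IN_QUEUE", "IN_PROGRESS"):
        print(status)
        break
    time.sleep(5)

# If a job just sits in IN_QUEUE, cancel it rather than leaving it there:
# requests.post(f"{BASE_URL}/cancel/{job_id}", headers=HEADERS)
```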