Job Never Picked Up by a Worker, but Received an Execution Timeout Error and Was Charged
I set the execution timeout to 45 seconds (the job usually takes about 20–30 seconds) and the idle timeout to 1 second; a rough sketch of how I submit jobs is at the end of this post. I sent three requests, the last one only after the first job had completed. However, after 57 seconds, the last request timed out.
I checked the logs and no worker picked it up, yet my serverless billing shows a charge for the last request as well.
We are going live in two weeks, and it's crucial to ensure that we are not charged for requests that were never processed. Any insights on why this might be happening and how to prevent it?
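For reference, here's roughly how I'm submitting jobs. The endpoint ID, API key, and payload are placeholders, and the `policy.executionTimeout` field (in milliseconds) is how I understand the per-request execution timeout is set:

```python
import requests

API_KEY = "YOUR_RUNPOD_API_KEY"   # placeholder
ENDPOINT_ID = "your-endpoint-id"  # placeholder
BASE_URL = f"https://api.runpod.ai/v2/{ENDPOINT_ID}"

def submit_job(payload: dict) -> str:
    """Submit an async job to the serverless endpoint; returns the job ID."""
    resp = requests.post(
        f"{BASE_URL}/run",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "input": payload,
            # per-request execution policy; executionTimeout is in milliseconds
            # (my reading of the docs -- 45_000 ms = 45 s)
            "policy": {"executionTimeout": 45_000},
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["id"]
```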
8 Replies
Unknown User•9mo ago (message not public)
@vesper
Escalated To Zendesk
The thread has been escalated to Zendesk!
Ticket ID: #12683
Unknown User•9mo ago (message not public)
Is this happening for every request? In your case, what likely happened is that a job came into the queue, a worker was scaled up and took the job, and then never responded within the execution timeout window.
Right now, the way the queue and serverless logic is set up, it's not possible for a job to hit an execution timeout without a worker having picked it up.
It only happened once, but I am still in the testing phase. I want to make sure it doesn't happen when I go live in two weeks. I have a retry method for end users, but the main issue is that I am getting charged for these failed requests, which is a deal breaker. Do you provide any refunds if your worker causes this issue?
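For what it's worth, my retry path looks roughly like this, reusing `submit_job` and `BASE_URL` from the snippet in my first post. The status values (`IN_QUEUE`, `IN_PROGRESS`, `COMPLETED`, `FAILED`, `TIMED_OUT`, `CANCELLED`) are what I believe the `/status` endpoint returns, and the helper names are my own:

```python
import time
import requests

TERMINAL_FAILURES = {"FAILED", "TIMED_OUT", "CANCELLED"}

def get_status(job_id: str) -> dict:
    """Fetch the current state of a job from the endpoint's /status route."""
    resp = requests.get(
        f"{BASE_URL}/status/{job_id}",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

def run_with_retry(payload: dict, max_attempts: int = 3, poll_interval: float = 2.0):
    """Submit a job, poll it to completion, and resubmit on failure or timeout."""
    for attempt in range(1, max_attempts + 1):
        job_id = submit_job(payload)          # from the snippet above
        deadline = time.monotonic() + 60      # stop polling this attempt after 60 s
        while time.monotonic() < deadline:
            status = get_status(job_id)
            if status["status"] == "COMPLETED":
                return status.get("output")
            if status["status"] in TERMINAL_FAILURES:
                break                         # terminal failure: resubmit
            time.sleep(poll_interval)
        print(f"attempt {attempt}: job {job_id} did not complete")
    raise RuntimeError("job failed after all retry attempts")
```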
Unknown User•9mo ago (message not public)
This is the part I am struggling with. On my end, I just got a timeout after 57 seconds, but in the RunPod console there were no logs for this request at all.
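Next time I can reproduce this, I'll capture RunPod's view of the job from the API side too, so the ticket has more to go on. A minimal sketch, reusing `get_status` from above and assuming `/status` returns `delayTime`/`executionTime` fields and `/cancel` works the way I understand it:

```python
def diagnose_and_cancel(job_id: str) -> None:
    """After a client-side timeout, record what RunPod thinks the job's state
    is, then cancel it so it can't run (and bill) later."""
    status = get_status(job_id)
    print(f"job {job_id}: status={status.get('status')} "
          f"delayTime={status.get('delayTime')} "
          f"executionTime={status.get('executionTime')}")
    # If it's still IN_QUEUE, no worker ever picked it up; either way the job
    # is still live on RunPod's side even though my client gave up, so cancel.
    if status.get("status") in {"IN_QUEUE", "IN_PROGRESS"}:
        requests.post(
            f"{BASE_URL}/cancel/{job_id}",
            headers={"Authorization": f"Bearer {API_KEY}"},
            timeout=10,
        )
```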
Unknown User•9mo ago (message not public)