Is execution timeout per request or per worker execution?

https://docs.runpod.io/serverless/endpoints/send-requests#--execution-policy says: "Execution Timeout: Specifies the maximum duration that a job can run before it's automatically terminated." The endpoint edit UI says: "Maximum amount of time in seconds a request can run for." I read the first one as "max lifetime of a worker" - i.e. if it takes 5s to process a request and the execution timeout is 60s, the worker will process 12 requests and die. I read the second one as "if a single request takes 60s, the worker will die, but as long as each request takes <60s, the worker will run forever." Which one is it?
Augenbrauensenker
I would appreciate the answer. Thank you!
digigoblin
It is applied to a single request, not 12 requests. Both descriptions mean the same thing: the endpoint config sets the default for all requests, and you can override it on a per-request basis, as the docs specify. It's pretty clear in the docs, so I don't know why you are thinking it applies to multiple requests when it applies to one request at a time.
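For anyone landing here later, a hedged sketch of the per-request override based on the execution-policy section of the linked docs page. The `policy.executionTimeout` field and its millisecond units are taken from those docs (verify against the current version); `ENDPOINT_ID`, `API_KEY`, and the handler input are placeholders.

```python
import requests

# Submit one job to a RunPod serverless endpoint and cap THIS request's
# run time via the execution policy. The endpoint-level "Execution Timeout"
# is just the default applied to every request; the per-request value below
# overrides it for this single job only.
ENDPOINT_ID = "your-endpoint-id"  # placeholder
API_KEY = "your-api-key"          # placeholder

resp = requests.post(
    f"https://api.runpod.ai/v2/{ENDPOINT_ID}/run",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "input": {"prompt": "hello"},           # whatever your handler expects
        "policy": {"executionTimeout": 60_000},  # 60s cap for this job only (ms)
    },
)
print(resp.json())  # e.g. {"id": "...", "status": "IN_QUEUE"} on success
```

So a worker that handles many 5s requests in a row is fine; the timeout clock is per job, not a lifetime budget for the worker.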