Pod restarts container instead of termination
TL;DR: the Pod always restarts the Docker command and never leaves the RUNNING state.
Hi there! I'm trying to run a one-off job on RunPod using a Docker image from AWS ECR. I'd like to create a Pod from that container, run a command, and let the Pod finish once the command completes. I then poll the Pod status and terminate the Pod as soon as it finishes.
Basically, the approach described in the company blog: https://www.runpod.io/articles/guides/ai-on-a-schedule
"Design your container or job to exit when finished. If it's a one-off batch job, ensure the container's command will naturally terminate (and not linger)."
Problem: the created Pod doesn't terminate; it restarts the container command and never actually finishes. What I did:
- Created a template that points to a container image in my AWS ECR
- Started a Pod from that template
- Observed the following logs:
get_pod(...) call returns

The command I run is echo "Hello RunPod". The Pod always restarts the command and never leaves the RUNNING state, even though the company blog claims otherwise. What am I missing?
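For context, the poll-and-terminate part works on its own; below is a minimal sketch of that loop. It assumes the `runpod` Python SDK's `get_pod(...)` and `terminate_pod(...)` calls (the field name `desiredStatus` and the status strings are assumptions from my testing, not guaranteed API behavior). The status and terminate callables are injected so the loop logic can be exercised without hitting the API:

```python
import time


def wait_for_exit(get_status, terminate, poll_interval=10.0, timeout=3600.0):
    """Poll get_status() until it reports "EXITED", then call terminate().

    get_status and terminate are plain callables so this loop can be tested
    without network access; in real use they would wrap runpod.get_pod(...)
    and runpod.terminate_pod(...).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status == "EXITED":
            terminate()
            return status
        time.sleep(poll_interval)
    raise TimeoutError("Pod never left the RUNNING state")


if __name__ == "__main__":
    # Hypothetical real wiring (pod_id and the desiredStatus field are
    # assumptions; adjust to whatever get_pod actually returns for you):
    #
    # import runpod
    # runpod.api_key = "..."
    # pod_id = "..."
    # wait_for_exit(
    #     get_status=lambda: runpod.get_pod(pod_id)["desiredStatus"],
    #     terminate=lambda: runpod.terminate_pod(pod_id),
    # )
    pass
```

In my case the loop never fires the terminate branch, because the status never transitions out of RUNNING.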