Hey @Gabi | Containers and team - follow-up from my ephemeral GitHub runners discussion.
**Confirmed Understanding:**
- Container disk is ephemeral and deleted on stop (thanks for clarifying!)
- Only paying for active execution time
- Inactive containers don't consume storage

**Remaining Question:**
While the container instances clean up properly, the Durable Object instances themselves accumulate in the namespace indefinitely.
**Current Pattern:**
- Each job → unique DO created via `idFromName('job-{jobId}')`
- Container runs → stops → DO hibernates (but persists)
- At 1,000 jobs/day → 30K DOs/month → 365K DOs/year

**Dashboard shows all these inactive DOs forever** (see screenshot from last week).

**Questions:**
- Is this accumulation pattern expected/acceptable for ephemeral workloads?
- Do hibernated DOs (with empty storage via `deleteAll()`) have any cost or limits?
- Should I implement DO pooling to reuse ~100 DOs across all jobs?
- Or just call `storage.deleteAll()` + `deleteAlarm()` in `onStop()` and not worry about it?

For context: GitHub Actions runners are true ephemeral workloads - each job should leave zero trace. I want to make sure I'm architecting this correctly for a production SaaS.

Current scale: 10 concurrent, but could scale up to 100-500 as usage grows.

Thanks!
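To make the current pattern concrete, here's a minimal sketch (the `RUNNERS` binding and helper names are illustrative, not actual code from my Worker):

```typescript
// Per-job naming: every job ID maps to a brand-new DO name, so the
// namespace gains one entry per job and never shrinks.
export function jobDOName(jobId: string): string {
  return `job-${jobId}`;
}

// In the Worker (illustrative; assumes a DO binding named RUNNERS):
//   const id = env.RUNNERS.idFromName(jobDOName(jobId));
//   const stub = env.RUNNERS.get(id);
//   await stub.fetch(request); // container starts, runs the job, stops

// The accumulation math from above: 1,000 jobs/day * 30 days = 30,000 DOs.
export function dosCreated(jobsPerDay: number, days: number): number {
  return jobsPerDay * days;
}
```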
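And roughly what I mean by options 3 and 4, as a sketch (the names, pool size, and hash choice are all illustrative; the storage interface is mocked here to keep it self-contained — in a real Container subclass it would be `this.ctx.storage`):

```typescript
// Option 3 (pooling): derive a stable pool slot from the job ID so at most
// POOL_SIZE DO names ever exist. POOL_SIZE = 100 matches the "~100 DOs"
// idea above; FNV-1a is just one deterministic string hash.
const POOL_SIZE = 100;

function fnv1a(s: string): number {
  // FNV-1a 32-bit string hash: small, fast, deterministic.
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193);
  }
  return h >>> 0; // force unsigned
}

export function poolName(jobId: string): string {
  return `runner-pool-${fnv1a(jobId) % POOL_SIZE}`;
}

// Option 4 (cleanup on stop): wipe storage and cancel any alarm so the
// hibernated DO carries zero persisted state.
export interface DOStorage {
  deleteAll(): Promise<void>;
  deleteAlarm(): Promise<void>;
}

export class JobRunner {
  constructor(private storage: DOStorage) {}

  // Would be the Container onStop() lifecycle hook in the real class.
  async onStop(): Promise<void> {
    await this.storage.deleteAll();   // drop all persisted keys
    await this.storage.deleteAlarm(); // cancel any pending alarm
  }
}
```

One wrinkle with pooling: each pooled DO/container runs one job at a time, so the pool size would need to at least match peak concurrency (100-500 at the high end of my projections).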
