Optimizing Docker Image Loading Times on RunPod Serverless – Persistent Storage Options?
I'm working with a large Docker image on RunPod Serverless that contains several trained models. Even after optimizing the image size, the initial docker pull during job startup remains a bottleneck because it takes too long to complete.
Is there a way to leverage persistent storage on RunPod to cache my Docker image? Ideally, I'd like to avoid the docker pull step altogether and have the image instantly available for faster job execution.
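One workaround I've been considering is keeping the image slim and loading the models from a RunPod network volume at startup instead of baking them into the image. Here's a minimal sketch of what I mean — the mount point `/runpod-volume`, the `MODEL_VOLUME` env var, and the `/app/models` fallback path are my assumptions, not anything official:

```python
import os
from pathlib import Path

# Assumed mount point: RunPod network volumes are typically mounted
# at /runpod-volume on serverless workers. MODEL_VOLUME is a
# hypothetical override for local testing.
VOLUME_ROOT = Path(os.environ.get("MODEL_VOLUME", "/runpod-volume"))

def resolve_model_path(name: str) -> Path:
    """Return the on-volume path for a model file, falling back to a
    copy baked into the image if the volume isn't mounted."""
    on_volume = VOLUME_ROOT / "models" / name
    if on_volume.exists():
        return on_volume
    # Fallback: path inside the image (placeholder location)
    return Path("/app/models") / name

# At worker startup, each model would be read from the resolved path,
# so only small code layers need to be pulled, not the model weights.
```

This wouldn't eliminate the image pull entirely, but it would shrink the image to just code and dependencies, which I'm hoping makes the pull fast enough.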
Thanks,