serverless container disk storage size vs network volume
When I create a Serverless endpoint, it defaults to a 5 GB container disk. I tried changing it to a crazy high number like 50000 GB, and it seems to accept it??
I'm confused. Is this disk storage physically attached to the GPU machine? Is there a limit on the storage size? Does it cost extra money? My ComfyUI Docker image needs to download many different models, which can total hundreds of GB. What would happen if it exceeds the storage limit?
If I choose to attach a network volume, does that mean my Docker image (which contains many different models) will be deployed to and stored on the network volume? And the volume has to communicate with the GPU via network requests?? So it will be slower?
If I have 4 GPU workers running in parallel, can they share the same network volume?