Mounting network storage at runtime - serverless
I'm running my own Docker container, and at the moment I'm using the RunPod interface to select network storage, which is then mounted at /runpod-volume.
That works, but what I'm hoping to do instead is mount the volume programmatically at runtime.
Is this possible in any way through libraries or an API? Basically, I'd want to list the available volumes and, where a volume exists in the same region as the container/worker, mount it (rough sketch after this post).
I want to do this because I plan to create a volume in every region. By not selecting a volume in the serverless create interface, and instead mounting one at runtime, the endpoint could in theory use ANY available GPU across all regions while still having access to that region's volume.
If not, I'll need to create a serverless endpoint in every region, and I may end up routing requests to one that has no available GPU at that point in time. That is far from ideal.
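Something like this is what I'm imagining. To be clear, both the volume-listing REST endpoint and the RUNPOD_DC_ID env var below are assumptions on my part, not documented API, so treat this as pseudocode:

```python
# Sketch of the "list volumes, pick the one in my region" half.
# The REST URL and RUNPOD_DC_ID are hypothetical -- I haven't found
# either documented, so this is what I'd *like* to exist.
import os
import requests

API_KEY = os.environ["RUNPOD_API_KEY"]
VOLUMES_URL = "https://rest.runpod.io/v1/networkvolumes"  # hypothetical

def find_local_volume() -> dict | None:
    """Return the network volume in the same datacenter as this worker."""
    region = os.environ.get("RUNPOD_DC_ID")  # assumed: worker's datacenter ID
    resp = requests.get(VOLUMES_URL, headers={"Authorization": f"Bearer {API_KEY}"})
    resp.raise_for_status()
    for vol in resp.json():
        if vol.get("dataCenterId") == region:
            return vol
    return None

vol = find_local_volume()
if vol:
    # The missing piece: as far as I know there is no call to attach/mount
    # a volume into an already-running serverless worker.
    print(f"Would mount volume {vol['id']} at /runpod-volume")
```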
I have a similar issue: I want to store my LLM models on network drives, but then I'm locked to one region. It would be nice to be able to add a range of network drives (one for each region) in the serverless GUI, so multiple locations can be selected for a single endpoint.
The easiest way to deploy globally is to build all your files into the Docker image, so you don't need a network volume.
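For example, you can fetch the weights during `docker build` so they land in an image layer (assuming a Hugging Face model here; the repo ID and target dir are placeholders for your own):

```python
# download_model.py -- run at image build time, e.g. with
# "RUN python download_model.py" in your Dockerfile, so the
# weights are baked into the image rather than a network volume.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="your-org/your-model",   # placeholder: your model repo
    local_dir="/models/your-model",  # placeholder: path your handler loads from
)
```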
That works if the model is small. Otherwise it takes an age to download the image, and it's rarely cached.
Is there any plan to allow network storage to host our Docker images? Or otherwise a persistent cache?
It is something I'd happily pay to have