Hello all, we have multiple serverless endpoints that each download the model and then generate inferences. Is there a way to mount a common volume to all the serverless endpoint systems? We don't want to download the model every time an endpoint boots up.
It would be great if you could share a concrete example.
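To make concrete what we're after: on boot, each endpoint should load the model from the shared mount, and only download it if it isn't there yet. Here is a minimal sketch of that load-or-download logic; the mount path `/mnt/shared-models`, the `.complete` marker file, and the `download_fn` hook are all hypothetical placeholders, not any specific platform's API:

```python
import os
import threading

# Hypothetical shared mount point; the real path depends on the platform.
SHARED_VOLUME = "/mnt/shared-models"

_lock = threading.Lock()

def get_model_path(model_name, download_fn, volume=SHARED_VOLUME):
    """Return the local path of the model, downloading it only if it is
    not already present on the shared volume."""
    model_dir = os.path.join(volume, model_name)
    marker = os.path.join(model_dir, ".complete")
    with _lock:  # guard against concurrent cold starts on the same host
        if not os.path.exists(marker):
            os.makedirs(model_dir, exist_ok=True)
            download_fn(model_dir)     # e.g. pull weights from a model hub
            open(marker, "w").close()  # mark the download as finished
    return model_dir
```

With a setup like this, only the first endpoint to boot pays the download cost; every later boot just reads the marker and loads from the volume.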