Question about Network Volumes
Hi!
Just two quick questions about network volumes working in tandem with Serverless endpoints:
- Does it reduce the cold start time or affect the availability of the serverless GPUs? I want to store AI models inside a network volume and access them from a serverless GPU endpoint, and I'd like to know what issues I might run into, since cold starts and availability are pretty important to me.
- What's the file path to a network volume if I want to set up my serverless container to use stuff in it?
Thanks all!
8 Replies
1) It has actually been found that using a network volume increases the cold start time of serverless endpoints, and it hurts response times even with FlashBoot. In almost every case you are better off storing models, etc., directly in container storage. Network volume is the cheapest storage on RunPod, but IMHO it is not worth using for this.
2) If you attach a network volume to your endpoint, it will be mounted at /runpod-volume inside your serverless worker when it runs, so anything on the volume is readable from that path (rough sketch below).
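Here's a minimal sketch of what that looks like in a handler, assuming the RunPod Python SDK; the models/my-model.bin sub-path and the load_weights() helper are made up for illustration, so swap in whatever framework and filenames you actually use:
```python
# Minimal sketch: a serverless handler that reads model weights from an
# attached network volume. The volume is mounted at /runpod-volume inside
# the worker; the models/my-model.bin sub-path is a made-up example.
import os
import runpod

MODEL_PATH = "/runpod-volume/models/my-model.bin"  # hypothetical file on the volume
# If you bake the weights into the image instead (per point 1), this would be
# a path in container storage, e.g. "/models/my-model.bin".

def load_weights(path):
    # Placeholder for your real loader (torch.load, from_pretrained, ...).
    with open(path, "rb") as f:
        return f.read()

# Load once at import time so warm / FlashBoot workers skip the reload;
# only a true cold start pays this cost.
WEIGHTS = load_weights(MODEL_PATH) if os.path.exists(MODEL_PATH) else None

def handler(event):
    if WEIGHTS is None:
        return {"error": f"model not found at {MODEL_PATH}"}
    prompt = event["input"].get("prompt", "")
    # Run inference with WEIGHTS here and return the real result.
    return {"output": f"loaded {len(WEIGHTS)} bytes, prompt={prompt!r}"}

runpod.serverless.start({"handler": handler})
```
Loading at import time keeps the read off your request path, but a true cold start still has to pull the file from the network volume, which is where the extra cold start time in point 1 comes from.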
Unknown User•17mo ago: (message not public)
What about if it's 24GB total?
@nerdylive
Unknown User•17mo ago: (message not public)
Any idea what the issue might be here: https://discord.com/channels/912829806415085598/1258893524175159388
One more question: my container storage setting is at 20GB, but my container is 25GB.
I've had no problems, but this technically doesn't make sense lol. Does anyone want to explain?
Unknown User•17mo ago: (message not public)
makes sense