RunPod•3w ago
Sulove

RunPod GPU Availability: Volume and Serverless Endpoint Compatibility

Hey everyone! Quick question about RunPod's GPU availability across different deployment types. I'm a bit confused: I created a volume in a data center where only a few GPU types were available, but when setting up a serverless endpoint I can select configs with up to 8 GPUs, including some that weren't available when I created my volume. I've also noticed that GPU availability keeps fluctuating, sometimes showing low availability and sometimes none at all. So I'm wondering:

1. What happens if I pick a GPU type for my serverless endpoint that wasn't originally available in my volume's data center?
2. If I stick to only the GPUs that were available when I created my network volume, what happens when those GPUs suddenly show low or no availability?

Just trying to understand how RunPod handles these scenarios. Would really appreciate any insights! 🤔 Thanks in advance!
1 Reply
nerdylive•3w ago
1. So you mean it says "unavailable"? Try it out; you probably just won't get any workers up. Extra tip: it's best to choose more than one GPU type, or even copy your network storage to another data center, to ensure availability.
2. Some of your workers might get throttled if demand is high, but if you're using your workers frequently you'll still have them. If your workers sit idle for some time, they may get throttled and the capacity reused for someone else.
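The "choose more than one GPU type" tip above boils down to giving the endpoint a prioritized fallback list instead of a single type. Here's a minimal sketch of that selection logic; the function name, GPU names, and availability numbers are all made up for illustration and are not RunPod's actual API:

```python
# Hypothetical sketch of prioritized GPU fallback (not the RunPod API).

def pick_gpu(preferred, availability):
    """Return the first GPU type in priority order with capacity, else None."""
    for gpu in preferred:
        if availability.get(gpu, 0) > 0:
            return gpu
    return None

# Endpoint configured with several GPU types, in priority order.
preferred = ["NVIDIA A100 80GB", "NVIDIA A6000", "NVIDIA A40"]

# Fictional snapshot of per-type capacity in the volume's data center.
availability = {"NVIDIA A100 80GB": 0, "NVIDIA A6000": 3, "NVIDIA A40": 7}

print(pick_gpu(preferred, availability))  # A100 is exhausted, so A6000 wins
```

With only one GPU type configured, the moment that type shows no availability you get no workers at all; a longer list keeps the endpoint serving on whatever is free.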