Sulove • 14mo ago

RunPod GPU Availability: Volume and Serverless Endpoint Compatibility

Hey everyone! Quick question about RunPod's GPU availability across different deployment types. I'm a bit confused about something:

I created a network volume in a data center where only a few GPU types were available. But when setting up a serverless endpoint, I can select configurations with up to 8 GPUs - including GPU types that weren't available when I created the volume.

I've also noticed that GPU availability keeps fluctuating - sometimes showing low availability and sometimes none at all. So I'm wondering:

  1. What happens if I pick a GPU type for my serverless endpoint that wasn't originally available in my volume's data center?
  2. If I stick to only the GPU types that were available when I created my network volume, what happens when those GPUs later show low or no availability?
Just trying to understand how RunPod handles these scenarios. Would really appreciate any insights! 🤔
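To make the scenario concrete, here's roughly what I mean, sketched with the runpod Python SDK - all names and IDs are placeholders, and I'm assuming `get_gpus()` and `create_endpoint()` behave the way I understand them from the docs, so treat this as a rough sketch rather than the exact setup:

```python
import os

import runpod  # runpod-python SDK

# Authenticate via an environment variable rather than hard-coding a key.
runpod.api_key = os.environ["RUNPOD_API_KEY"]

# Snapshot of the GPU types the API currently reports. Availability
# fluctuates, so this only reflects the moment you run it.
for gpu in runpod.get_gpus():
    print(gpu.get("id"), gpu.get("displayName"), gpu.get("memoryInGb"))

# Create a serverless endpoint attached to an existing network volume.
# Every ID below is a placeholder; "AMPERE_16" is just the example GPU
# pool identifier from the SDK, not a recommendation.
endpoint = runpod.create_endpoint(
    name="my-endpoint",
    template_id="my-template-id",
    gpu_ids="AMPERE_16",
    network_volume_id="my-network-volume-id",
    workers_min=0,
    workers_max=2,
)
print(endpoint)
```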

Thanks in advance!