Hello there. I am trying to test and deploy a basic serverless endpoint. Issues that I have and don't understand:

- If I create a serverless endpoint through the web UI, I don't need to select a network volume. In this case I think I am forced to use Docker.
- Because I don't want to use Docker explicitly, I am using runpodctl as described here: https://blog.runpod.io/runpod-dockerless-cli-innovation/
```
Deploying project Pod on RunPod...
Trying to get a Pod with NVIDIA GeForce RTX 4080... Unavailable.
Trying to get a Pod with NVIDIA RTX A4000... Unavailable.
Trying to get a Pod with NVIDIA RTX A4500... Unavailable.
Trying to get a Pod with NVIDIA RTX A4500 Ada... Unavailable.
Trying to get a Pod with NVIDIA RTX A5000... Unavailable.
none of the selected GPU types were available
```
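For context, the GPU types the CLI cycles through come from the project's `runpod.toml`. A minimal sketch of broadening that list so the deploy has more pools to try (the exact field names are assumptions based on the file runpodctl generates; verify against your own project file):

```toml
# runpod.toml — sketch, assuming the layout runpodctl project create generates.
# Adding more entries to gpu_types gives the deploy more GPU pools to try
# when the originally listed cards are all unavailable.
[project]
name = "my-endpoint"              # hypothetical project name
gpu_count = 1
gpu_types = [
    "NVIDIA GeForce RTX 4080",
    "NVIDIA RTX A4000",
    "NVIDIA RTX A4500",
    "NVIDIA RTX A5000",
    "NVIDIA GeForce RTX 3090",    # added: widens availability
    "NVIDIA RTX A6000",           # added: widens availability
]
```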
Can you help me?
Is it possible to avoid having to select a region when creating a network volume? Thanks