Chanchana

Billing increased heavily over the last two days from delay time on RTX 4000 Ada

I checked my billing history and saw that my serverless bill has increased a lot, and the culprit is usage of the RTX 4000 Ada. I checked the logs, and it's caused by a CUDA runtime error: ONNX Runtime doesn't work on this specific GPU. This makes the container retry again and again, increasing the delay time. I've never hit this issue before, so I'm not sure why it's happening now without any code change on my side. Can RunPod give me a coupon to redeem the last 2 days of errors? Also, how do I prevent the use of RTX 4000 Ada from now on?
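One way to stop the retry loop described above is to catch the initialization error inside the serverless handler and return it as a job error, so the request fails once instead of the container crash-looping (and billing delay time). A minimal sketch, assuming a RunPod-style handler; `run_inference` is a hypothetical placeholder for the real ONNX Runtime session call:

```python
def handler(event):
    """RunPod-style serverless handler that converts a CUDA / ONNX Runtime
    init failure into a normal job error instead of a container crash."""
    try:
        result = run_inference(event["input"])  # hypothetical inference call
    except RuntimeError as exc:
        # Returning a dict with an "error" key marks the job as failed
        # without killing the worker, so the platform doesn't retry endlessly.
        return {"error": f"inference failed: {exc}"}
    return {"output": result}

def run_inference(payload):
    # Placeholder standing in for the real ONNX Runtime session; here it
    # simulates the failure seen in the logs on unsupported GPUs.
    raise RuntimeError("CUDA runtime error: no kernel image is available")
```

In a real worker this handler would be registered with the RunPod serverless SDK (`runpod.serverless.start({"handler": handler})`); the key point is that exceptions are converted into a returned error rather than left to crash the process.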
justin · 16d ago
You can ping @flash-singh with your account ID / endpoint details so they can verify the logs, fix it up, and give you your credit back.
flash-singh · 16d ago
You can edit the endpoint and uncheck the 4000 Ada; also set a max execution time to be safe. For refunds you can reach out to support from the web chat, but considering what happened here it would be an uphill battle — we usually give refunds depending on whether the issue is on our end or yours, i.e. something you can control.
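Unchecking the GPU in the endpoint settings is the supported route; as a belt-and-suspenders measure, a worker could also refuse to serve on a blocklisted GPU and fail fast at startup. A rough sketch — the exact device-name string reported for the RTX 4000 Ada and the availability of `nvidia-smi` in the container are assumptions:

```python
import subprocess

# Assumed device-name string as reported by nvidia-smi; verify in your logs.
UNSUPPORTED_GPUS = {"NVIDIA RTX 4000 Ada Generation"}

def is_supported_gpu(device_name, blocklist=UNSUPPORTED_GPUS):
    """Return False if the device name matches a known-incompatible GPU."""
    return not any(bad in device_name for bad in blocklist)

def current_gpu_name():
    """Query the GPU name via nvidia-smi (present in CUDA base images)."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()
```

Calling `is_supported_gpu(current_gpu_name())` once at worker startup and returning a clear error from the handler makes the incompatibility show up as a single failed job with a readable message, rather than a crash-retry loop.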
Chanchana · 16d ago
I'm not sure whether RunPod has updated anything on the 4000 Ada machines, because this kind of cost has never happened to me before and I always tick 4000 Ada. Also @flash-singh, why does web support only give me access to a chatbot rather than a real human? I can't even switch to a real human, and this chatbot is basically useless when anyone needs real support. I only see real human support available on mobile.
flash-singh · 16d ago
It always starts with the AI bot.
walmartbag · 15d ago
Say "speak to a live agent" and nothing else. Most support chatbots wait until you say that; otherwise they're not going to care.