RunPod · 6mo ago
runvnc

billing not adding up

The time listed in billing doesn't add up. Hello guys, I see a bunch of charges listed as 1 minute but billed for 1 hour? I have been testing start/stop scripts and other things. The actual total billed might be correct, I'm not sure, but the billing page makes it look completely wrong.
11 Replies
ashleyk · 6mo ago
@flash.singh I guess this is something @rutvikhp needs to look at?
flash-singh · 6mo ago
Thanks, we will look into this. It's near the holidays and most of the team are off.
runvnc · 6mo ago
OK, appreciate your help. One theory is that maybe it has something to do with a minimum charge when pods are deleted or something. But it's still an issue if that's the case, because I won't be able to tell what is going on from the report.
ashleyk · 6mo ago
I didn't even notice that Billed Time toggle, so thanks for making me aware of it @runvnc
Iemonade · 6mo ago
Hi, I have a similar cost spike in the same period. It would be nice to get some clarity on this. A4000 prices are sub-$1/hour.
runvnc · 6mo ago
Any update on this? I think the Billed Time toggle is broken, and I actually can't tell if the billing is correct. I think it is getting minutes and hours mixed up or something.
nerdylive · 6mo ago
I guess in 2024 lol
Iemonade · 6mo ago
Probably a small FE bug; the label currently says minutes instead of hours.
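(For context, the sketch below illustrates the kind of unit-label mix-up being described. It is purely hypothetical and assumes nothing about RunPod's actual frontend code; the function formatBilledTime and the billedSeconds parameter are invented for illustration only.)

```ts
// Hypothetical formatter for a "Billed Time" column.
// None of these names come from RunPod's codebase.
function formatBilledTime(billedSeconds: number): string {
  const hours = Math.floor(billedSeconds / 3600);
  const minutes = Math.floor((billedSeconds % 3600) / 60);

  // The bug described above would be the equivalent of attaching a
  // "minute" unit to the hours value, e.g. `${hours} minute(s)`.
  // The fix is simply to pair each component with the right unit:
  if (hours > 0) {
    return `${hours} hour(s) ${minutes} minute(s)`;
  }
  return `${minutes} minute(s)`;
}

// Example: 3600 billed seconds should read "1 hour(s) 0 minute(s)",
// not "1 minute" as the mislabeled column suggested.
console.log(formatBilledTime(3600));
```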
flash-singh · 6mo ago
@Rutvik is this fixed as well?
Rutvik · 6mo ago
@Iemonade can you DM me your email? @Iemonade @runvnc the issue is fixed.
runvnc · 6mo ago
Ok thanks