RunPod · 8mo ago
justin

Is runpod UI accurate when saying all workers are throttled?

To be honest, I can't tell if what I'm seeing is correct. I have two endpoints, each with a max of 3 workers, and the UI says every GPU is throttled. I can't test right now, but why would it fall into this state, and is the display accurate? Worker IDs: ugv9p9kcxlmu1c, 5snyuonk8vkisq. Hopefully it clears later when I can test, but it makes me wonder: if I send a request while it says this, will the GPUs become unthrottled? Is this expected behavior? Do my workers get throttled when idle and pushed higher in priority when I send a request? Just trying to understand this so I don't start sending requests one day and find all my GPUs throttled.
4 Replies
flash-singh · 8mo ago
When all GPUs are throttled, your request will sit in the queue, and a worker will start as soon as one becomes available.
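In other words, a fully throttled endpoint doesn't drop requests; the job just reports a queued status until a worker frees up. A minimal client-side sketch of that behavior (the function names and the status-fetcher here are stand-ins, not the official RunPod SDK; the status strings `IN_QUEUE`/`IN_PROGRESS`/`COMPLETED`/`FAILED` are assumed from the serverless status API):

```python
import time

def wait_for_result(get_status, job_id, poll_interval=2.0, timeout=300.0):
    """Poll a submitted job until it finishes.

    While every worker is throttled the job simply stays IN_QUEUE; it is
    not lost, so the right client behavior is to keep polling (with a
    timeout) rather than resubmitting.
    `get_status` is an injected callable returning a status dict, e.g.
    one that wraps GET /v2/{endpoint_id}/status/{job_id}.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status(job_id)
        if status["status"] == "COMPLETED":
            return status["output"]
        if status["status"] == "FAILED":
            raise RuntimeError(f"job {job_id} failed: {status}")
        # IN_QUEUE / IN_PROGRESS: a throttled worker may still be spinning up
        time.sleep(poll_interval)
    raise TimeoutError(f"job {job_id} still not finished after {timeout}s")
```

The status fetcher is injected so the loop is easy to test and independent of how you talk to the endpoint (SDK, raw HTTP, etc.).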
justin · 8mo ago
I see, interesting. If I keep a minimum of one worker, is that the best way to counter this in prod?
flash-singh · 8mo ago
yes
justin · 8mo ago
Thank you! Perfect.