Runpod 2y ago
Yash

0% GPU utilization and 100% CPU utilization on Faster Whisper quick deploy endpoint

I used the "Quick Deploy" option to deploy a Faster Whisper custom endpoint (https://github.com/runpod-workers/worker-faster_whisper). Then I called the endpoint to transcribe a 1-hour-long podcast with the following parameters:
{
  "input": {
    "audio": "https://www.podtrac.com/pts/redirect.mp3/pdst.fm/e/traffic.megaphone.fm/ISOSO6446456065.mp3?updated=1715037715",
    "model": "large-v3",
    "language": "en"
  }
}
The job completed in 201 seconds. I'm not sure whether it is actually using the GPU and the graphs are wrong, or whether it is only using the CPU and would have completed much faster on the GPU.
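For reference, here is a minimal sketch of how a payload like the one above can be built and submitted. The endpoint ID and API key are placeholders (not from this thread); the `/runsync` route is Runpod's synchronous serverless API, and the actual HTTP call is left commented out since it needs valid credentials:

```python
import json

# Placeholders -- substitute your own endpoint ID and Runpod API key.
ENDPOINT_ID = "YOUR_ENDPOINT_ID"
API_KEY = "YOUR_RUNPOD_API_KEY"

def build_transcription_payload(audio_url: str,
                                model: str = "large-v3",
                                language: str = "en") -> dict:
    """Build the input payload expected by the faster-whisper worker."""
    return {
        "input": {
            "audio": audio_url,
            "model": model,
            "language": language,
        }
    }

payload = build_transcription_payload(
    "https://www.podtrac.com/pts/redirect.mp3/pdst.fm/e/traffic.megaphone.fm/ISOSO6446456065.mp3?updated=1715037715"
)
print(json.dumps(payload, indent=2))

# To actually submit the job (requires the `requests` package):
# import requests
# resp = requests.post(
#     f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync",
#     headers={"Authorization": f"Bearer {API_KEY}"},
#     json=payload,
#     timeout=600,
# )
# print(resp.json())
```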
4 Replies
Unknown User 2y ago
(message not public)
Yash (OP) 2y ago
I am getting back "device": "cuda" in my output (https://github.com/runpod-workers/worker-faster_whisper/blob/main/src/predict.py#L120). Does that mean it's actually using the GPU?
Unknown User 2y ago
(message not public)
Yash (OP) 2y ago
Ok, I think you're right. I tried it on a 32-CPU instance and got a bunch of "nvidia-smi: not found" logs, plus it took longer than 200 seconds. So I guess the graph is wrong then. Thank you for your help!
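The probe behind that last reply (does the worker even have `nvidia-smi` on its PATH?) can be sketched as a small helper. This is illustrative only, not part of the worker code:

```python
import shutil
import subprocess

def gpu_visible() -> bool:
    """Return True if nvidia-smi is on PATH and reports at least one GPU.

    On a CPU-only instance, nvidia-smi is typically absent, which is what
    produces the "nvidia-smi: not found" logs mentioned above.
    """
    if shutil.which("nvidia-smi") is None:
        return False
    try:
        result = subprocess.run(
            ["nvidia-smi", "-L"], capture_output=True, text=True, timeout=10
        )
    except (OSError, subprocess.TimeoutExpired):
        return False
    return result.returncode == 0 and "GPU" in result.stdout

print("GPU visible:", gpu_visible())
```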
