Hi all, I'm currently migrating from a Faster Whisper endpoint to Serverless. What configuration would give me inference speed similar to the Faster Whisper endpoint, and what cost difference should I expect?