Multiprocessing CUDA app stuck on second request to serverless worker
Hi, I deployed an app on Runpod serverless and use multiprocessing inside the handler. The first request completes fine, but from the second request onward the worker hangs. How can I resolve this?
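A common cause of this symptom (an assumption here, not confirmed from the post) is that CUDA gets initialized in the long-lived worker process, and child processes are then created with the default `fork` start method; a forked child inherits the parent's CUDA context, which CUDA does not support, so the second request deadlocks. A minimal sketch of the usual workaround, using Python's `spawn` start method, is below. The `run_inference` and `handler` names are hypothetical stand-ins for the real serverless handler:

```python
import multiprocessing as mp

def run_inference(task_id, queue):
    # In a real handler this is where CUDA would be initialized and the
    # model run. With "spawn", each child starts from a fresh interpreter,
    # so it never inherits a stale CUDA context from the parent.
    queue.put(f"done-{task_id}")

def handler(num_tasks):
    # "spawn" (instead of the default "fork" on Linux) avoids re-using the
    # parent's CUDA state across requests; get_context keeps the choice
    # local rather than mutating the global start method.
    ctx = mp.get_context("spawn")
    queue = ctx.Queue()
    procs = [ctx.Process(target=run_inference, args=(i, queue))
             for i in range(num_tasks)]
    for p in procs:
        p.start()
    results = [queue.get() for _ in procs]
    for p in procs:
        # Always join: a zombie child left over from request N can block
        # or confuse request N+1 in a long-lived serverless worker.
        p.join()
    return sorted(results)

if __name__ == "__main__":
    print(handler(2))
```

If the framework is PyTorch, `torch.multiprocessing` with the same `spawn` method is the usual equivalent. It is also worth checking that every child is joined (or terminated) before the handler returns, since leaked processes from the first request are another frequent reason the second one stalls.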