CUDA profiling
Hey guys, how can I profile kernels on serverless GPUs?
Like, I have a CUDA kernel — how can I measure its performance on serverless GPUs like RunPod's?
Serverless workers are pods deployed with your template. It's the same hardware in the same datacenters; only a small amount of room on each node is dedicated to serverless processing.
Aha, so I can use NVIDIA Nsight Compute on them?
I think so? I believe there's some benchmarking/profiling tool in that domain that requires privileges we don't give our pods, because they're containerized. I can look into it a little more in a moment here.
It's Nsight I was thinking of; that won't work unless you buy out the whole node and ask us to give you permission. :frowning3:
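(Nsight Compute needs access to GPU performance counters, which containerized pods typically don't get. As a fallback, coarse timing with CUDA events does work inside containers without extra privileges. A minimal sketch; the `saxpy` kernel is a hypothetical stand-in for whatever kernel you're measuring:)

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel standing in for the one being profiled.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    // Warm-up launch so the timed run excludes one-time init costs.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);

    cudaEventRecord(start);
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);  // elapsed GPU time in milliseconds
    printf("kernel time: %.3f ms\n", ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

This only gives wall-clock kernel time, not the per-SM occupancy/memory metrics Nsight Compute would, but it runs anywhere CUDA runs.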
Can you offer it soon? I'm the founder of a startup and we could sign a collaboration for it.
Win-win situation :D
Or test on another service (it should be a real VM, not Docker-based) with the same type of GPU, then deploy to RunPod.
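(On a full VM with the same GPU, profiling is straightforward. A sketch, assuming Nsight Compute/Systems are installed and your binary is `./my_kernel` — both names are placeholders for your own setup:)

```shell
# Kernel-level profiling with Nsight Compute: collect a full metric set
# into a report file, then open it in the GUI.
ncu --set full -o my_kernel_report ./my_kernel
ncu-ui my_kernel_report.ncu-rep

# Timeline-level profiling (kernel launches, memcpys, CPU/GPU overlap)
# with Nsight Systems:
nsys profile -o my_kernel_timeline ./my_kernel
```

Because the profile depends mostly on the GPU model, numbers gathered on the VM should carry over to the same GPU type on RunPod.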
Nice ideas, but I've never done that before. Do you guys have any documentation explaining that?
Renting bare metal will cost a fortune tho
https://www.runpod.io/console/bare-metal
5000 bucks minimum
For a month
If you're a startup in Korea, look into this:
https://aihub.or.kr/devsport/aicomputingsport/list.do?currMenu=121&topMenu=101

Ohhh sadly
Thanks brother
You’re the best