Does the pod hardware differ a lot in the US?
Hi,
We've deployed several times in the US region (Secure Cloud) with the RunPod CLI, but inference performance/speed varies a lot between pods, and even model loading time differs significantly. What could be the reason? And how do I find out which data center I'm using? It only shows 'US'.
thanks
12 Replies
I used lscpu, and the CPUs look like the same model. Right now the only difference I can see is the NVIDIA driver version.
The GPU is a 4090 on both pods.
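For anyone comparing pods the same way, here's a rough Python sketch of the fingerprint I'd collect on each pod, assuming `lscpu` and `nvidia-smi` are available inside the container (`name` and `driver_version` are standard nvidia-smi query fields):

```python
import subprocess

def run(cmd):
    # Run a shell command and return its stdout as text
    return subprocess.run(cmd, capture_output=True, text=True).stdout

# CPU model as reported by lscpu
cpu_model = next(line for line in run(["lscpu"]).splitlines()
                 if line.startswith("Model name"))

# GPU name and driver version via nvidia-smi
gpu_info = run(["nvidia-smi", "--query-gpu=name,driver_version",
                "--format=csv,noheader"]).strip()

print(cpu_model.strip())
print(gpu_info)
```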

sysbench shows some memory-throughput difference between the pods
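In case it helps others reproduce this, here's a minimal Python stand-in for the sysbench memory test (assuming numpy is installed; the 512 MB buffer and 10 iterations are arbitrary choices):

```python
import time
import numpy as np

def copy_bandwidth_gbs(size_mb=512, iters=10):
    # Time repeated large-array copies as a rough host-memory bandwidth probe
    src = np.ones(size_mb * 1024 * 1024 // 8, dtype=np.float64)  # size_mb of data
    dst = np.empty_like(src)
    start = time.perf_counter()
    for _ in range(iters):
        np.copyto(dst, src)
    elapsed = time.perf_counter() - start
    # Each copy reads size_mb and writes size_mb, so 2 * size_mb MB move per iteration
    return 2 * size_mb * iters / 1024 / elapsed  # GB/s

print(f"approx copy bandwidth: {copy_bandwidth_gbs():.1f} GB/s")
```

Run it a few times on each pod; a consistently lower number on the slow pod points at host-memory contention rather than the GPU.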

top shows some CPU usage difference (with inference stopped)

The load average and user time show big differences even though both pods run the same processes and environment (inference program stopped).
Maybe the difference comes from CPU usage on the host VM.
I think the memory benchmark results confirm the performance difference my programs see on the two pods.
Is it possible that a busy CPU on the host instance (maybe from other containers on the same machine) causes heavy memory contention and thus slows down memory access? See the quick check below.
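One way to test that theory from inside the pod: in a container, /proc/stat usually reflects the whole VM, so steal time and load average there expose noisy neighbors. A minimal sketch, assuming the standard procfs field layout:

```python
import time

def cpu_times():
    # Aggregate 'cpu' line from /proc/stat: user nice system idle iowait irq softirq steal ...
    with open("/proc/stat") as f:
        return [int(x) for x in f.readline().split()[1:]]

def steal_fraction(interval=5.0):
    # Fraction of CPU time stolen by the hypervisor over the sampling interval
    before = cpu_times()
    time.sleep(interval)
    after = cpu_times()
    deltas = [b - a for a, b in zip(before, after)]
    total = sum(deltas)
    return deltas[7] / total if total else 0.0  # 8th field of the cpu line is steal

with open("/proc/loadavg") as f:
    print("loadavg:", f.read().strip())
print(f"steal over 5s: {steal_fraction():.1%}")
```

High steal, or a load average far above the pod's CPU count, on the slow pod would support the noisy-neighbor explanation.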
Hi RunPod, we really need your help; this is severely affecting our inference performance.
I'm not using a network volume. I just create the pod from the RunPod CLI and pass in the arg "US".
But the point now is that we see inference performance differences between different pods within US.
Yes, we have another problem too, but we still don't know the root cause; we see some memory benchmark differences between the slow pod and the fast pod in US.
Where do I create a ticket?
@lil_xiang
Escalated To Zendesk
The thread has been escalated to Zendesk!
Cool, thanks! I'll create one.