DeepSeek-R1 has been loading into VRAM for over an hour.
It seems to be related to mmap on a network drive. How do you solve this?
Are you using our vLLM with a network volume? It might be downloading the model, which could take a while.
I use my own Docker container with SGLang inside. For ROCm you only have PyTorch, no vLLM or SGLang.
I'm using a model that is already on disk. There is no space to load a second copy: 670 GB total. And it has been loading for 4 hours, not 1.
It has been loading safetensors checkpoint shards for 1 hour.
No OOM, this is 8xMI300X.
I read on GitHub that it is often related to mmap over network drives, but I'm not sure.
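One way to check whether the volume itself is the bottleneck is to time a raw sequential read of a single checkpoint shard. This is just a sketch; the shard path below is hypothetical and should be replaced with one of your actual files:

```python
import time

# Hypothetical path -- substitute one of your real .safetensors shards.
SHARD = "/workspace/deepseek-r1/model-00001.safetensors"
CHUNK = 64 * 1024 * 1024  # read in 64 MiB chunks

def read_throughput(path: str) -> float:
    """Sequentially read the whole file and return throughput in MB/s."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            total += len(chunk)
    return total / (time.perf_counter() - start) / 1e6

print(f"{read_throughput(SHARD):.1f} MB/s")
```

Running this once against the network volume and once against local disk should make it obvious whether storage throughput, rather than the loader, is what's slow.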
The function that reads the .safetensors file and loads it to the GPU takes an extremely long time. mmap maps the file as memory, so data is loaded directly from SSD to VRAM with no RAM consumption.
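For reference, a minimal sketch of that mmap-backed load path using the safetensors library (the shard path is hypothetical): safe_open memory-maps the file and copies each tensor straight to the target device.

```python
import time
from safetensors import safe_open

# Hypothetical path -- point this at one of your checkpoint shards.
SHARD = "/workspace/deepseek-r1/model-00001.safetensors"

start = time.perf_counter()
with safe_open(SHARD, framework="pt", device="cuda:0") as f:
    tensors = {name: f.get_tensor(name) for name in f.keys()}
print(f"loaded {len(tensors)} tensors in {time.perf_counter() - start:.1f}s")
```

Over a network filesystem, every page fault in that mapping turns into a network round trip, which is why this path can be orders of magnitude slower than reading from local NVMe.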
Don't use network storage to load the models; move them to the container disk or the pod volume disk instead, and see if that loads them any faster.
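If the weights do live on a network volume, that workaround is just a copy to local disk before starting the server. A rough sketch, with both paths being assumptions about the pod's layout:

```python
import shutil
import time
from pathlib import Path

# Hypothetical locations -- adjust to your pod's actual mount points.
SRC = Path("/runpod-volume/deepseek-r1")  # network volume
DST = Path("/workspace/deepseek-r1")      # local pod/container disk

start = time.perf_counter()
shutil.copytree(SRC, DST, dirs_exist_ok=True)
print(f"copied in {time.perf_counter() - start:.0f}s; load from {DST} instead")
```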
It is already on the pod volume disk.

A 6xA100 Q4_K_S GGUF setup is possible on https://koboldai.org/runpodcpp (if you adjust the container storage to 500 GB). It won't be SGLang, but it does have OpenAI API support, so it should be easy to integrate with.
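Since it exposes an OpenAI-compatible API, integrating could look roughly like this; the endpoint URL, port, and model name below are assumptions, not values confirmed in the thread:

```python
from openai import OpenAI

# Hypothetical endpoint -- KoboldCpp-style servers typically serve an
# OpenAI-compatible API under /v1; substitute your pod's actual URL.
client = OpenAI(base_url="http://<your-pod>:5001/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="deepseek-r1",  # placeholder model name
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```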