Serverless with network storage

Hi all, I am trying to set up a serverless worker for ComfyUI (currently using a customized template from https://github.com/blib-la/runpod-worker-comfy). I have several large models which I would rather not bake into the image. I see there is an option to mount network storage to a serverless worker, so I mounted it (with the required models for the workflow) to the serverless ComfyUI worker, but when I send a request with the workflow, the worker logs show that it does not see any of the models on the mounted storage. There is also no way to customize the network storage mount path for a serverless worker, so I am not even sure the paths are mounted correctly. Therefore I want to ask:
- Is this type of functionality/use case supported/feasible?
- If it is, what am I missing or doing incorrectly?
Thanks in advance!
GitHub: blib-la/runpod-worker-comfy (ComfyUI as a serverless API on RunPod)
tzushi (9mo ago)
On serverless the network volume isn't mounted at /workspace, which isn't really clear in the docs. Check /runpod-volume I think.
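As a quick sanity check (a hedged sketch, not taken from the worker repo), you can list both candidate paths from the container's start script so the actual mount point shows up in the worker logs:

# print what the serverless worker actually sees at startup
ls -la /workspace 2>/dev/null || echo "/workspace not mounted"
ls -la /runpod-volume 2>/dev/null || echo "/runpod-volume not mounted"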
kranas4755 (OP, 9mo ago)
I also assume this is probably because of incorrectly mounted paths, but I do not see an option to specify the network storage path, neither in the serverless deploy options nor in the template creation screen (which is weird, as there is an option to specify the network storage mount path for a non-serverless pod, but when you select serverless the option is no longer there), nor in the documentation. Maybe someone from the RunPod team can clarify/help with this?
kranas4755 (OP, 9mo ago)
roger, will try to add a symlink in the Dockerfile for /runpod-volume and see if it works
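For reference, the symlink can also be created at build time; a dangling link is fine in a Dockerfile because the target only needs to exist once the volume is mounted at runtime. A minimal sketch, assuming the image keeps ComfyUI under /comfyui like the blib-la worker:

# replace the baked-in models dir with a link into the (runtime-mounted) network volume
RUN rm -rf /comfyui/models && ln -s /runpod-volume/ComfyUI/models /comfyui/models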
apluka (9mo ago)
did it work @kranas4755 ?
kranas4755 (OP, 9mo ago)
yes, it seems to work when I add a symlink in the start.sh file in the Docker image:
rm -rf /comfyui/models && ln -s /runpod-volume/ComfyUI/models /comfyui/models
I needed to figure out the path to runpod-volume myself; I think it should be documented somewhere. Thanks @nerdylive @tzushi
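For anyone copying this, a slightly more defensive start.sh sketch (same paths as the snippet above; the guard avoids deleting the baked-in models when the network volume is not attached):

#!/usr/bin/env bash
# swap in the network-volume models only if the volume is actually mounted
if [ -d /runpod-volume/ComfyUI/models ]; then
    rm -rf /comfyui/models
    ln -s /runpod-volume/ComfyUI/models /comfyui/models
else
    echo "WARN: /runpod-volume/ComfyUI/models not found, keeping baked-in models"
fi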
Xqua (9mo ago)
@kranas4755 I'm interested to know whether this works with little cold start lag for you? Do you have a quick start time, or does the network storage take a minute or more to boot up?
kranas4755 (OP, 9mo ago)
Have only tested once so far; it didn't seem to be much different loading-wise than without network storage, but will test some more later.
apluka (9mo ago)
So you just deploy a normal pod with the volume and download models to the runpod-volume/ path, right? Then we can use this storage volume with serverless @kranas4755
kranas4755 (OP, 9mo ago)
Yes
apluka (9mo ago)
oh, a normal pod doesn't have the runpod-volume/ path itself, I have to create a dir for this
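For context: the same network volume is mounted at /workspace on a regular pod and at /runpod-volume on serverless workers (as noted earlier in the thread), so there is no runpod-volume/ directory to create on the pod; files placed under /workspace show up under /runpod-volume on the serverless side. A hedged sketch of preloading a model from the pod (URL and filename are placeholders):

# on the pod, the network volume appears under /workspace
mkdir -p /workspace/ComfyUI/models/checkpoints
wget -O /workspace/ComfyUI/models/checkpoints/<model>.safetensors "<model download URL>"
# the same files then appear under /runpod-volume/ComfyUI/models on serverless workers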
JohnnyAPI (9mo ago)
Endpoint configurations | RunPod Documentation
Configure your Endpoint settings to optimize performance and cost, including GPU selection, worker count, idle timeout, and advanced options like data centers, network volumes, and scaling strategies.
