Mounting network storage on a ComfyUI serverless endpoint
I have a network volume where I have downloaded all the models I need to generate images through the ComfyUI interface. All the models and custom nodes have been verified by running some workflows on a Pod instance, and the images are generated as I intended.
To avoid manual setup, I used the ComfyUI image on a serverless endpoint and generated an image with the default flux1-dev-fp8 model. Images were generated perfectly, but when I then tried to run my own workflow I got, as expected, a missing-custom-node error.
So I edited the endpoint and attached the network storage from the advanced settings, but I am still getting the same error about missing custom nodes. Can anyone guide me through solving this?
25 Replies
Something that was never explained to me until I researched it: when you're using a serverless instance, the /workspace volume doesn't exist. It gets mounted as the network volume instead, so your mount point on serverless is different from your mount point on a standard Pod. If you look at the handler file I posted in one of the other serverless threads, you can see the file-association code at the top of it that tells the system to look in the correct location for the workspace volume. Without it, even though you're attaching the volume, the serverless worker doesn't know where to find your files.
Unknown User•2w ago
Message Not Public
Yes, you're right. My bad, it was start.sh, not the handler like I said:
# Link volume if needed
[ ! -L /workspace ] && ln -s /runpod-volume /workspace
I did try this command.
The network volume that I'm using actually starts with the /workspace/ directory.
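One way to see which mount point actually exists from inside a worker is a small probe script. This is a hedged sketch, not part of the official image; the `detect_mount` function name is mine, and the two paths are the ones discussed in this thread (serverless mounts the volume at /runpod-volume, a Pod at /workspace):

```shell
# Sketch: print the first mount point that actually exists on this worker.
detect_mount() {
  # Print the first existing directory from the arguments, or "none".
  for d in "$@"; do
    if [ -d "$d" ]; then
      echo "$d"
      return 0
    fi
  done
  echo "none"
  return 1
}

detect_mount /runpod-volume /workspace || true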
Somehow my serverless endpoint has been executing for the last 10 minutes, though there were some errors in the log tab:
[info]invalid prompt: {'type': 'invalid_prompt', 'message': 'Cannot execute because node UnetLoaderGGUF does not exist.', 'details': "Node ID '#45'", 'extra_info': {}}
[info]worker-comfyui - Parsed error data: {'error': {'type': 'invalid_prompt', 'message': 'Cannot execute because node UnetLoaderGGUF does not exist.', 'details': "Node ID '#45'", 'extra_info': {}}, 'node_errors': {}}
[info]Traceback (most recent call last):
[info]  File "/handler.py", line 550, in handler
[info]    raise e
[info]  File "/handler.py", line 536, in handler
[info]    queued_workflow = queue_workflow(workflow, client_id)
[info]                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[info]  File "/handler.py", line 413, in queue_workflow
[info]    raise ValueError(f"{error_message}. Raw response: {response.text}")
[info]worker-comfyui - Closing websocket connection.
[info]Finished.
[error]worker exited with exit code 2
Right. So the /workspace directory you are using changes its name to /runpod-volume when attached to serverless. It happens on the backend, and you never see the change in the Jupyter notebook. Add the code snippet I provided near the top of your start.sh, before the call that initializes ComfyUI, push the change to Docker under a new version tag, and try running it again.
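Putting the pieces together, a start.sh preamble along those lines might look like the sketch below. Only the symlink idea comes from this thread; the `link_volume` helper name and the commented-out ComfyUI launch lines are my assumptions, not the official image's script:

```shell
#!/bin/sh
# Hedged sketch of a start.sh preamble for a serverless worker.

link_volume() {
  # Symlink the serverless mount ($1) to the path the image expects ($2),
  # skipping the link when the source is absent or the target already exists.
  [ -d "$1" ] || return 0
  [ -e "$2" ] || [ -L "$2" ] || ln -s "$1" "$2"
  return 0
}

link_volume /runpod-volume /workspace

# ...then initialize ComfyUI as the image already does, e.g. (assumed paths):
# python /comfyui/main.py --listen 127.0.0.1 --port 8188 &
# python -u /handler.py
```

The extra guards make the link idempotent, so the script can run safely on every cold start.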
I wish I could change the Docker image, but I deployed the endpoint directly from the RunPod interface. I did see the issue, so for now I have taken a sample start command from one of the threads:
sh -c "ln -s /runpod-volume/ComfyUI/custom_nodes /comfyui/custom_nodes && ln -s /runpod-volume/ComfyUI/models /comfyui/models && /start.sh"
I'm trying again with these changes to check whether this command solves my issue.
Unknown User•2w ago
Message Not Public
Isn't there a way to handle it from the RunPod interface only?
Unknown User•2w ago
Message Not Public
That's where I'm facing the issue.
I did raise a support ticket, but the reply was just to deploy the whole serverless endpoint from the Git repository:
https://github.com/runpod-workers/worker-comfyui.git
GitHub
GitHub - runpod-workers/worker-comfyui: ComfyUI as a serverless API...
ComfyUI as a serverless API on RunPod. Contribute to runpod-workers/worker-comfyui development by creating an account on GitHub.
Unknown User•2w ago
Message Not Public
Let me look into the response you got from support.
Hello, I've been stuck trying to configure RunPod and the storage. I dropped a couple of things inside /workspace/comfyui, but they won't get recognized. When I place content inside /ComfyUI it seems to be recognized, but that doesn't help, since once I turn off the Pod and run it again, everything is completely gone. I've been trying to configure it so I can use the storage for my serverless endpoint. I even created an extra_models_path.yaml and placed it inside /ComfyUI in the hope it would pick up files inside /workspace. Not really sure what I'm doing wrong, but assuming it works for everyone else, I must have something misconfigured. I created a Docker image as well, hoping that would fix the issue on the endpoint. Any help will be appreciated, thanks.
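For reference, ComfyUI ships an extra_model_paths.yaml.example in its repo, and the file it actually reads is named extra_model_paths.yaml. A hedged sketch of generating one that points at a network volume follows; the section name, base_path, and subfolders are my assumptions modeled on that example file, so adjust them to where your files actually live:

```shell
# Sketch: write an extra_model_paths.yaml pointing ComfyUI at the network volume.
CONFIG_DIR="${CONFIG_DIR:-/tmp/comfyui-config}"   # e.g. your ComfyUI root dir
mkdir -p "$CONFIG_DIR"
cat > "$CONFIG_DIR/extra_model_paths.yaml" <<'EOF'
runpod_volume:
  base_path: /runpod-volume/ComfyUI/
  checkpoints: models/checkpoints/
  loras: models/loras/
  vae: models/vae/
EOF
echo "wrote $CONFIG_DIR/extra_model_paths.yaml"
```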
Unknown User•2w ago
Message Not Public
OK, so this is what I have done:
Step 1: Cloned the https://github.com/runpod-workers/worker-comfyui repository.
Step 2: Updated the Dockerfile to include the models and custom nodes that I want.
Step 3: Ran docker push to xyz/abc:v1 (just an example); after it finished, I got something named like docker.io/xyz/abc:v1.
Step 4: Created a serverless endpoint using the same Docker image that I had pushed.
Step 5: Allocated the required resources and set the number of workers and active workers.
Step 6: Created the endpoint.
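The build-and-push part of those steps can be sketched as the commands below; "xyz/abc:v1" is the thread's own placeholder image name, so substitute your registry account and tag:

```shell
# Sketch of Steps 2-4 above as commands.
IMAGE="xyz/abc:v1"

build_and_push() {
  # Build from the cloned worker-comfyui directory and push to Docker Hub.
  docker build -t "$1" . && docker push "$1"
}

# build_and_push "$IMAGE"   # uncomment to actually build and push
echo "endpoint image reference: docker.io/$IMAGE"
```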
Am I missing something? Even after doing this, and after creating multiple serverless endpoints, I keep running into new issues.
First, the serverless instance was stuck in the initializing phase even though I had allocated 20 GB of container disk and an 80 GB GPU.
The second time, my request sat in the queue for more than 57 minutes, and even the logs showed nothing.
Unknown User•7d ago
Message Not Public
Has anyone faced this issue?
"error": "ComfyUI server (127.0.0.1:8188) not reachable after multiple retries."
I have used the network volume where all of my models are stored and mounted the network storage directory into the serverless environment using an extra_model_paths.yaml file.
The serverless endpoint launches, but it is not able to connect to the ComfyUI server. Isn't the start.sh script supposed to handle that?
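For context, here is a minimal sketch of the kind of readiness check the worker performs: poll the ComfyUI HTTP endpoint until it responds or the attempt budget runs out. The `wait_for_server` name and the small retry budget are my assumptions; the log below shows the real worker retries 500 times:

```shell
# Sketch: poll a URL until it answers or the attempts are exhausted.
wait_for_server() {
  # $1 = URL, $2 = max attempts, $3 = delay between attempts (seconds)
  url=$1; attempts=$2; delay=$3; i=0
  while [ "$i" -lt "$attempts" ]; do
    if curl -sf "$url" > /dev/null 2>&1; then
      echo "reachable"
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  echo "not reachable after $attempts attempts"
  return 1
}

wait_for_server "http://127.0.0.1:8188/" 3 1 || true
```

If this never succeeds, the problem is usually that ComfyUI itself crashed on startup, so the server is not listening at all, rather than the check being broken.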
Unknown User•6d ago
Message Not Public
zzmgrfoyc59h1s[info]Jobs in queue: 1
zzmgrfoyc59h1s[info]Jobs in progress: 1
zzmgrfoyc59h1s[info]Started.
zzmgrfoyc59h1s[info]worker-comfyui - Checking API server at http://127.0.0.1:8188/...
zzmgrfoyc59h1s[info]worker-comfyui - Failed to connect to server at http://127.0.0.1:8188/ after 500 attempts.
I got this from the logs.
And in the worker logs, there is just some info related to the container.
Unknown User•6d ago
Message Not Public
Do I have to run ComfyUI manually from the start command, or is there an issue with the start.sh script?
Because I have verified in the Dockerfile that ComfyUI is installed.
Unknown User•6d ago
Message Not Public
That's all the log I've got, and all workers exited for the same reason.

Unknown User•6d ago
Message Not Public
Exit code 127 indicates that a command was not found.
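You can confirm this quickly in any POSIX shell; 127 is the standard status a shell reports when the command it was asked to run does not exist:

```shell
# Demonstrate that "command not found" produces exit status 127.
status=0
sh -c 'definitely_not_a_real_command_xyz' 2>/dev/null || status=$?
echo "exit status: $status"   # prints: exit status: 127
```

So in this case it usually means the container's start command references a binary or script path that is missing from the image.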