value not in list on serverless

I have a network storage volume set up with ComfyUI that I use to deploy pods, and now I want to use that same storage with serverless. I followed the customization guide at https://github.com/runpod-workers/worker-comfyui/blob/main/docs/customization.md, tried method 2, created an endpoint with the runpod/comfyui-worker:5.5.0-base image, and attached the network storage to that endpoint. When I run a simple workflow (flux1-dev) on serverless that works perfectly when connected to a pod, I get a "value not in list" error for all the models.
Solution
paulHAX · 3d ago
For the record: in my network storage, the models from the ComfyUI pod setup are saved under /workspace/ComfyUI/models/..., but the serverless worker looks in /runpod-volume/models/... . Putting the models there fixed the "value not in list" error on serverless. It was a matter of not reading the docs carefully enough on my side; it's mentioned in the note at the very bottom of https://github.com/runpod-workers/worker-comfyui/blob/main/docs/customization.md: "Note: When a Network Volume is correctly attached, ComfyUI running inside the worker container will automatically detect and load models from the standard directories (/workspace/models/...) within that volume. This method is not suitable for installing custom nodes; use the Custom Dockerfile method for that." In other words, /workspace/ is effectively replaced by /runpod-volume/ for the serverless worker. I'm now trying a symlink (see the sketch below), or as a fallback switching the pod over to the actual worker template, so I can use the same storage for both pods and serverless.
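For anyone else hitting this, here is a minimal sketch of the symlink idea I'm going to try. It assumes the default layout (pod install at /workspace/ComfyUI, volume mounted at /workspace on pods and /runpod-volume on serverless); the exact paths and the move-then-symlink approach are my own assumptions, so adjust for your volume before running it from a pod terminal:

```python
"""
Restructure the network volume so the same model files are found on both
pods and serverless workers.

Assumptions (adjust to your setup):
  * pod: volume mounted at /workspace, ComfyUI installed at /workspace/ComfyUI
  * serverless: same volume mounted at /runpod-volume, worker expects models
    at /runpod-volume/models/...
"""
from pathlib import Path
import shutil

pod_models = Path("/workspace/ComfyUI/models")   # where the pod's ComfyUI keeps models
root_models = Path("/workspace/models")          # volume root = /runpod-volume/models on serverless

if pod_models.is_symlink():
    print("models folder is already a symlink, nothing to do")
elif root_models.exists():
    print(f"{root_models} already exists, merge the folders manually first")
else:
    # Move the whole models tree to the volume root so the serverless worker
    # finds it, then leave a symlink behind so the pod's ComfyUI still
    # resolves its original /workspace/ComfyUI/models path.
    shutil.move(str(pod_models), str(root_models))
    pod_models.symlink_to(root_models)
    print(f"moved models to {root_models}, linked {pod_models} -> {root_models}")
```

Moving instead of copying avoids doubling the storage used on the volume. A relative link target ("../models" instead of the absolute path) would also resolve under the serverless mount, but the worker only reads from the volume root anyway, so the absolute link is fine for the pod side.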
