Runpod • 6d ago
Snow ❄

ComfyUI + custom models & nodes

I've read this here and tried it: https://github.com/runpod-workers/worker-comfyui But I'm still not sure if I did it correctly. So I made a Dockerfile based on one of the versions and added the things I need:
# start from a clean base image
FROM runpod/worker-comfyui:5.3.0-base

# install custom nodes using comfy-cli
RUN comfy-node-install comfyui_ipadapter_plus
RUN comfy-node-install ComfyUI_yanc

# download models using comfy-cli
RUN comfy model download --url https://civitai.com/api/download/models/789646 --relative-path models/checkpoints --filename realvisxlV50_v50Bakedvae.safetensors
Is this enough? Also, when I set it up on RunPod, each run takes time, and it's not working all the time (queue mode), so I feel like I did something wrong.
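For reference, a worker built from a Dockerfile like the one above can be smoke-tested by POSTing a job to the endpoint's /runsync route. This is a sketch, not the thread's actual test: the endpoint ID is the one mentioned later in this thread, the API key is a placeholder, and the "workflow" input key follows worker-comfyui's request format.

```python
import json

# Placeholders / assumptions: API_KEY is a dummy value, and the empty
# workflow dict stands in for a real workflow exported from ComfyUI
# via "Save (API Format)".
ENDPOINT_ID = "c9q99ujckyd710"
API_KEY = "YOUR_RUNPOD_API_KEY"

url = f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync"
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
payload = json.dumps({"input": {"workflow": {}}})  # fill in a real workflow

print(url)  # POST payload here with your HTTP client of choice
```

If the build is correct, a synchronous request like this returns the generated images; if nodes or models are missing, the error surfaces in the job output.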
22 Replies
Dj • 6d ago
Each run will take a bit of time, as the workers have the overhead of downloading the image itself. After the initial ramp-up you should have the same cold start as everyone else. If you want, you can share your endpoint ID and I can take a look to see if you've tripped any errors I can view.
Snow ❄ OP • 6d ago
yea ok, write it here?
Dj • 6d ago
Yes, here or in private whichever you're more comfortable with.
Snow ❄ OP • 6d ago
c9q99ujckyd710. It needs credentials anyway
Dj • 6d ago
Yeah just some people are more paranoid.
Snow ❄ OP • 6d ago
if they don't know how it works 🤓 One of the last requests I did ran for 12 minutes in the queue and nothing happened, so I canceled it manually. Btw, if I'm changing to load balance, do I need to change something in the handler?
Dj • 6d ago
Yes, load balance is more for a standard web server than what our handler does. Also, change your allowed CUDA versions to 12.6, 12.7, 12.8, and 12.9.
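For context, a queue endpoint drives a job handler rather than an HTTP server. A minimal sketch of that handler pattern with RunPod's Python SDK follows; the "prompt" input key is just an illustration, not part of worker-comfyui's actual schema.

```python
# Queue endpoints invoke handler(job) once per queued job;
# job["input"] is whatever the client POSTed under "input".
# A load-balancing endpoint would instead expect a regular
# HTTP server (e.g. FastAPI), so this pattern would not apply.

def handler(job):
    prompt = job.get("input", {}).get("prompt", "")
    return {"echo": prompt}

def main():
    # Only runs inside the worker container, where the runpod SDK
    # is installed; it blocks and pulls jobs from the queue.
    import runpod
    runpod.serverless.start({"handler": handler})
```

So switching the endpoint type to load balancing means replacing this handler loop with a web server, not just flipping the setting.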
Solution
Dj • 6d ago
You'll stop seeing the error you had, where a worker was spawned to try to handle that job but it was throwing: requirement error: unsatisfied condition: cuda>=12.6, please update your driver to a newer version, or use an earlier cuda container: unknown
Snow ❄ OP • 6d ago
where do I do it?
Dj • 6d ago
Manage, Edit Endpoint, Advanced, "Allowed CUDA Versions"
Dj • 6d ago
(screenshot attached)
Snow ❄ OP • 6d ago
done 👌
Dj • 6d ago
You should be good :fbslightsmile:
Snow ❄ OP • 6d ago
so my endpoint is good?
Dj • 6d ago
Yes, you won't have that error anymore
Snow ❄ OP • 6d ago
ok, I'll check it now. And my problem wasn't the error lol, it's the long queue time
Dj • 6d ago
Your long queue time was because of this error, sorry
Snow ❄ OP • 6d ago
ohhh ok cool
Dj • 6d ago
I can see like 7 hours ago is when you had the long queue time? In the admin logs it's a bunch of "failed to start container"
Snow ❄ OP • 6d ago
this one probably
(screenshot attached)
Dj • 6d ago
Yes. Then I see a1b10813-3406-460e-a25b-2a2e2c8470f2-e1, and no more after that.
Snow ❄ OP • 6d ago
tested one now, looks great, thanks 🙏