ComfyUI + custom models & nodes
I've read this here, and tried it: https://github.com/runpod-workers/worker-comfyui
But I'm still not sure if I did it correctly.
So I made a Dockerfile based on one of the versions and added the things I need:
Is this enough? Also, when I set it up on RunPod, each run takes a while and it doesn't work every time (queue mode), so I feel like I did something wrong.
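(The Dockerfile itself isn't included in the thread. As a rough sketch, extending a worker-comfyui base image usually looks something like the following; the image tag, custom-node repo, and model URL are placeholders, and the /comfyui paths plus the availability of git/wget in the base image are assumptions, so check them against the repo's README.)

```dockerfile
# Placeholder tag: replace with a current base tag from the worker-comfyui releases
FROM runpod/worker-comfyui:base

# Custom nodes: clone into ComfyUI's custom_nodes directory and install their deps
# (path assumes ComfyUI is installed at /comfyui inside the base image)
RUN git clone https://github.com/example-user/example-custom-node.git \
      /comfyui/custom_nodes/example-custom-node && \
    pip install -r /comfyui/custom_nodes/example-custom-node/requirements.txt

# Models: bake them into the image so workers don't have to fetch them per job
RUN wget -q -O /comfyui/models/checkpoints/example-model.safetensors \
      "https://example.com/example-model.safetensors"
```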
Each run will take a bit of time, as the workers have the overhead of downloading the image itself. After the initial ramp-up you should see the same cold-start times as everyone else. If you want, you can share your endpoint ID and I can take a look to see if you've tripped any errors I can view.
yea ok, write it here?
Yes, here or in private whichever you're more comfortable with.
c9q99ujckyd710
it needs credentials anyway
Yeah just some people are more paranoid.
if they don't know how it works
one of the last requests I did ran for 12 minutes, just sat in the queue, and nothing happened
so I canceled it manually
btw, if I'm switching to load balancing, do I need to change something in the handler?
Yes, load balancing is more for a standard web server than what our handler does.
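(For context, a rough sketch of the difference; this is not worker-comfyui's actual code, and FastAPI is just an example of a typical web framework:)

```python
import runpod

# Queue endpoint (what the current worker does): RunPod pulls jobs off the
# queue and calls this function once per job with the job's input payload.
def handler(job):
    workflow = job["input"]  # the ComfyUI workflow JSON submitted by the client
    # ... run the workflow through ComfyUI and collect outputs (omitted here) ...
    return {"status": "ok"}

runpod.serverless.start({"handler": handler})

# A load-balancing endpoint works differently: instead of registering a queue
# handler, the container would run its own HTTP server (e.g. FastAPI + uvicorn)
# and answer requests directly, so the worker code would need to be restructured.
```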
Also also, change your allowed CUDA versions to 12.6, 12.7, 12.8, and 12.9
Solution
You'll stop seeing the error you had, where a worker was spawned to try to handle that job but it was throwing:
requirement error: unsatisfied condition: cuda>=12.6, please update your driver to a newer version, or use an earlier cuda container: unknown
where do I do it?
Manage → Edit Endpoint → Advanced → "Allowed CUDA Versions"

done
You should be good
so my endpoint is good?
Yes, you won't have that error anymore
ok, I'll check it now
and my problem wasn't the error lol
it's the long queue time
Your long queue time was because of this error, sorry
ohhh
ok cool
I can see the long queue time was around 7 hours ago?
In the admin logs it's a bunch of "failed to start container"
this one probably

Yes
Then I see a1b10813-3406-460e-a25b-2a2e2c8470f2-e1
And no more after that
tested one now, looks great
thanks