comfyui + runpod serverless

doc · 4mo ago

I'm looking to host my ComfyUI workflow via RunPod serverless, and I'm curious how the ComfyUI startup process works with serverless. For example, in my local setup, every time I restart my ComfyUI localhost it takes a while to get up and running; let's call this the "ComfyUI cold start". But once it is up, it's relatively quick to run many generations one after another. My question: if I integrate my ComfyUI setup with RunPod serverless, does every API call have to go through the "ComfyUI cold start"? (This is not to be confused with typical serverless cold starts, which I assume is what the RunPod serverless page means by "cold start in milliseconds": https://www.runpod.io/serverless-gpu)
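A minimal sketch of the pattern being asked about, assuming ComfyUI is launched once by the container entrypoint and listens on localhost:8188, and that jobs carry a workflow under a "workflow" key (both are assumptions, not the handler of any particular image). Only the first job on a freshly started worker waits for ComfyUI to come up; a warm worker reuses the already running server.

```python
# Sketch of a RunPod serverless handler wrapping an already running ComfyUI.
# Assumptions: ComfyUI was started by the container entrypoint and serves its
# HTTP API on 127.0.0.1:8188; jobs carry a workflow (API format) under "workflow".
import time

import requests
import runpod

COMFY_URL = "http://127.0.0.1:8188"


def wait_for_comfyui(timeout=120):
    """Block until ComfyUI's HTTP API responds. Only the first job on a cold
    worker waits here; on a warm worker the server is already up."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            requests.get(f"{COMFY_URL}/system_stats", timeout=2)
            return
        except requests.RequestException:
            time.sleep(0.5)
    raise RuntimeError("ComfyUI did not start in time")


def handler(job):
    wait_for_comfyui()
    workflow = job["input"]["workflow"]  # assumed input schema
    # Queue the workflow via ComfyUI's /prompt endpoint.
    resp = requests.post(f"{COMFY_URL}/prompt", json={"prompt": workflow}, timeout=30)
    resp.raise_for_status()
    return resp.json()  # returns the prompt_id; polling for output images is omitted


runpod.serverless.start({"handler": handler})
```

Under this pattern, the ComfyUI startup cost is paid once per worker cold start, not once per request.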
8 Replies
justin · 4mo ago
https://discord.com/channels/912829806415085598/1144401747100582008 - Just sharing, it might not be exactly what you want, but Rob has made a pretty comprehensive Docker image for ComfyUI.
ashleyk · 4mo ago
"This is not to be confused with typical serverless cold starts". This statement is totally incorrect, this is exactly what a typical serverless cold start is.
doc · 4mo ago
I see. So RunPod serverless would restart ComfyUI every time an API call is made? Could you share the link, please?
RobBalla · 4mo ago
This is from a simple request on a cold start
RobBalla · 4mo ago
(image attachment, no description provided)
RobBalla · 4mo ago
Startup time for ComfyUI is typically under two seconds
RobBalla · 4mo ago
GitHub - ai-dock/comfyui: ComfyUI docker images
https://github.com/ai-dock/comfyui
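Once an image like this is deployed as a serverless endpoint, a client call looks roughly like the sketch below. The endpoint ID and the "workflow" input key are placeholders; the exact input schema depends on the worker image, so check its documentation.

```python
# Sketch of a client-side call to a RunPod serverless endpoint running ComfyUI.
# ENDPOINT_ID and the "workflow" input key are placeholders, not values from
# this thread; the worker image defines the real input schema.
import os

import requests

ENDPOINT_ID = "your-endpoint-id"        # your deployed endpoint's ID
API_KEY = os.environ["RUNPOD_API_KEY"]  # RunPod API key from the console

workflow = {}  # replace with a ComfyUI workflow exported in API format

resp = requests.post(
    f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync",  # synchronous run endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"input": {"workflow": workflow}},
    timeout=600,
)
resp.raise_for_status()
print(resp.json())  # job status and output once the worker finishes
```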
doc · 4mo ago
Thanks for this. I'll take a look and get back if anything is unclear.