Runpod•15mo ago
Kazrik

ComfyUI Serverless with access to lots of models

Hi, I have a pre-sales question. I'm currently hosting a Discord bot and website for image generation using ComfyUI API endpoints on a local PC. It has around 1 TB of checkpoints and LoRAs available for use, but as the number of users grows I'm considering a serverless GPU where I pay only for compute time. With RunPod serverless, can I quickly deploy ComfyUI instances with whichever checkpoints/LoRAs a user wants for their generation? I was thinking of keeping the most popular models on RunPod storage for the fastest deployment, with rarely used ones downloaded on demand and swapped out to make room when needed. Can I do this, or something similar?
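The "popular models on persistent storage, rare models downloaded on demand and evicted when space runs out" scheme described above can be sketched as a small LRU-style cache inside a serverless worker. This is a minimal illustration, not RunPod's API: the `/runpod-volume` mount path is RunPod's documented network-volume location in serverless workers, but `ensure_model`, the cache budget, and the download URL are all hypothetical names for this sketch.

```python
import os
import urllib.request

# Assumed cache location: a RunPod network volume, which serverless
# workers see mounted at /runpod-volume. Budget is hypothetical.
CACHE_DIR = "/runpod-volume/models"
MAX_CACHE_BYTES = 900 * 1024**3  # leave headroom on a ~1 TB volume

def cache_size(cache_dir):
    """Total bytes currently used by cached model files."""
    return sum(
        os.path.getsize(os.path.join(root, f))
        for root, _, files in os.walk(cache_dir)
        for f in files
    )

def evict_lru(cache_dir, bytes_needed, max_bytes):
    """Delete least-recently-accessed files until bytes_needed fits."""
    paths = [
        os.path.join(root, f)
        for root, _, files in os.walk(cache_dir)
        for f in files
    ]
    paths.sort(key=os.path.getatime)  # oldest access time first
    used = cache_size(cache_dir)
    for path in paths:
        if used + bytes_needed <= max_bytes:
            break
        used -= os.path.getsize(path)
        os.remove(path)

def ensure_model(name, url, size_hint,
                 cache_dir=CACHE_DIR, max_bytes=MAX_CACHE_BYTES):
    """Return a local path to the model, downloading (and evicting) if needed."""
    os.makedirs(cache_dir, exist_ok=True)
    path = os.path.join(cache_dir, name)
    if os.path.exists(path):
        os.utime(path)  # touch, so LRU ordering reflects this use
        return path
    evict_lru(cache_dir, size_hint, max_bytes)
    urllib.request.urlretrieve(url, path)
    return path
```

A worker's handler would call `ensure_model(...)` for each checkpoint/LoRA named in the job before invoking ComfyUI, so hot models hit the volume instantly and cold ones pay a one-time download cost.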
4 Replies
Solution
Unknown User•15mo ago
Message Not Public
KazrikOP•15mo ago
Fantastic, do you have any documentation for this kind of setup?
Unknown User•15mo ago
Message Not Public
KazrikOP•15mo ago
Thank you 🙂
