Setting up RunPod serverless from scratch

Hello everyone, I am trying to deploy a ComfyUI serverless endpoint to execute a workflow that uses the flux1-dev model to generate images. I'm new to this, and I have tried to create a serverless endpoint using the ComfyUI template from the hub listing. I have already downloaded all the models to a network volume, along with all the custom nodes that I'll be using in the ComfyUI workflow I want to execute. I saw someone suggest making some changes in the handler.py script, but I have no idea how to set that up. Can anyone guide me on how to set up the serverless endpoint? I tried the documentation, but it didn't help me and instead just confused me.
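
For reference, a minimal handler.py sketch for this kind of setup might look like the following. This is only a sketch under assumptions not stated in the thread: that ComfyUI is already running inside the worker on 127.0.0.1:8188, and that the job input carries the API-format workflow JSON under an input key named "workflow".

```python
# Minimal sketch of a handler.py for a ComfyUI serverless worker.
# Assumptions (not from the thread): ComfyUI is already running inside the
# worker on 127.0.0.1:8188, and the job input looks like {"workflow": {...}},
# where "workflow" is the API-format JSON export of the ComfyUI graph.
import time

import requests
import runpod

COMFY_URL = "http://127.0.0.1:8188"  # assumed local ComfyUI address


def queue_workflow(workflow: dict) -> str:
    """Send the workflow to ComfyUI's /prompt endpoint and return the prompt id."""
    resp = requests.post(f"{COMFY_URL}/prompt", json={"prompt": workflow}, timeout=30)
    resp.raise_for_status()  # a 400 here usually means ComfyUI rejected the workflow JSON
    return resp.json()["prompt_id"]


def wait_for_output(prompt_id: str, timeout_s: int = 300) -> dict:
    """Poll /history until the prompt finishes, then return its outputs."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        history = requests.get(f"{COMFY_URL}/history/{prompt_id}", timeout=30).json()
        if prompt_id in history:
            return history[prompt_id]["outputs"]
        time.sleep(1)
    raise TimeoutError(f"ComfyUI did not finish prompt {prompt_id} within {timeout_s}s")


def handler(event):
    workflow = event["input"]["workflow"]  # assumed input shape
    try:
        prompt_id = queue_workflow(workflow)
        outputs = wait_for_output(prompt_id)
    except requests.HTTPError as e:
        # Surface ComfyUI's validation message instead of just "400 Bad Request".
        return {"error": f"Error queuing workflow: {e}", "detail": e.response.text}
    return {"prompt_id": prompt_id, "outputs": outputs}


runpod.serverless.start({"handler": handler})
```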
4 Replies
Unknown User — 4w ago
(Message not public.)
SourabhSingh (OP) — 3w ago
Instead of that error, I'm getting something else:

{
  "delayTime": 112303,
  "error": "Error queuing workflow: HTTP Error 400: Bad Request",
  "executionTime": 4530,
  "id": "61fe734c-137e-4401-97b1-e7c245f94a92-e1",
  "status": "FAILED",
  "workerId": "de8luwnyky7101"
}

That was the only message; no errors or warnings were logged. I just wanted to make sure that all the custom nodes and checkpoints I want to use are available to the serverless endpoint. I had made a new network volume, downloaded all the models there, and manually tested all the workflows; it took about 15 seconds on average to generate the image. During the serverless setup, I selected the 24GB, 141GB, 180GB and 80GB GPU configurations.
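
A 400 from ComfyUI's /prompt endpoint usually means ComfyUI rejected the workflow JSON before running it, for example because the workflow was exported in UI format instead of API format, or because a node class or checkpoint it names isn't available to the worker. The serverless error string hides ComfyUI's validation details, so one way to surface them is to queue the same workflow against ComfyUI directly from a pod that mounts the same network volume. A rough sketch, assuming ComfyUI listens on 127.0.0.1:8188 and the API-format export is saved as workflow_api.json (both assumptions, not from the thread):

```python
# Quick check you can run in a pod with the same network volume attached:
# post the workflow straight to ComfyUI and print its full response so the
# validation details behind the "HTTP Error 400" become visible.
import json

import requests

with open("workflow_api.json") as f:  # assumed filename of the API-format export
    workflow = json.load(f)

resp = requests.post("http://127.0.0.1:8188/prompt", json={"prompt": workflow}, timeout=30)
print(resp.status_code)
# On a 400, ComfyUI's response body normally lists the failing nodes
# (unknown node classes, missing checkpoints, bad inputs), which the
# serverless error message above does not show.
print(resp.text)
```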
Unknown User — 3w ago
(Message not public.)
SourabhSingh (OP) — 3w ago
Do you have any suggestions for tackling this issue? This is the exact workflow I have been using with the pod, and even in the logs I'm not getting anything related to custom nodes or models.
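
One thing that can be checked without extra logging is whether the ComfyUI instance the endpoint actually runs has loaded the custom node classes and can see the checkpoints on the network volume. ComfyUI's /object_info endpoint lists every registered node class, and for CheckpointLoaderSimple it also lists the checkpoint filenames it can see. A rough sketch of that sanity check follows; the address and node names are placeholders to replace with the workflow's own class_type values:

```python
# Sanity check: confirm the ComfyUI instance (on the worker, or on a pod
# mounting the same network volume) has the node classes and checkpoints the
# workflow needs. The node names below are placeholders.
import requests

COMFY_URL = "http://127.0.0.1:8188"  # assumed address
needed_nodes = ["CheckpointLoaderSimple", "KSampler"]  # replace with your workflow's class_type values

info = requests.get(f"{COMFY_URL}/object_info", timeout=30).json()
for node in needed_nodes:
    print(node, "->", "loaded" if node in info else "MISSING")

# CheckpointLoaderSimple's input spec lists the checkpoint files ComfyUI can
# see, which shows whether the network volume's models directory is being read.
ckpts = (
    info.get("CheckpointLoaderSimple", {})
    .get("input", {})
    .get("required", {})
    .get("ckpt_name", [[]])[0]
)
print("visible checkpoints:", ckpts)
```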
