RunPod · 4mo ago
Abhi

How to use the comfyui API when running it inside Runpod GPU pods

I can use the UI running on port 3000 with the template runpod/stable-diffusion:comfy-ui-5.0.0, but I am not able to call the API. Is there any documentation, or are there examples, for this scenario? I am using this example code to call the API: https://github.com/comfyanonymous/ComfyUI/blob/master/script_examples/basic_api_example.py Please help.
6 Replies
ashleyk · 4mo ago
RunPod can't document every application in existence, but you can change line 106 of that script:
req = request.Request("http://127.0.0.1:8188/prompt", data=data)
Change that URL to your pod URL + /prompt.
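A minimal sketch of that change, using urllib from the linked example. The pod ID here is hypothetical, and the `https://<pod-id>-<port>.proxy.runpod.net` URL shape is an assumption about how RunPod exposes a pod's HTTP ports; substitute whatever URL your pod's connect page shows for port 8188.

```python
import json
from urllib import request

def build_prompt_request(pod_url: str, prompt: dict) -> request.Request:
    """Build the POST request for ComfyUI's /prompt endpoint."""
    data = json.dumps({"prompt": prompt}).encode("utf-8")
    return request.Request(pod_url + "/prompt", data=data)

# Hypothetical pod ID, replace with your own.
pod_url = "https://abc123xyz-8188.proxy.runpod.net"
workflow = {}  # your ComfyUI workflow graph, exported in API (JSON) format
req = build_prompt_request(pod_url, workflow)
# request.urlopen(req)  # uncomment to actually queue the prompt
```

The rest of basic_api_example.py can stay as-is; only the target URL changes.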
Abhi · 4mo ago
Thank you for the reply. Yes, I tried that, but I am getting an error: HTTPError: HTTP Error 403: Forbidden. I haven't set up any restrictions or API tokens for this GPU pod like we do for serverless API endpoints, yet I still got this error. Is there an API token we can set up for endpoints running on GPU pods?
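One possible cause of a 403 here, an assumption on my part rather than anything confirmed in the thread, is the proxy rejecting urllib's default "Python-urllib" User-Agent. A quick way to rule that out is to send explicit headers (pod ID hypothetical):

```python
import json
from urllib import request

pod_url = "https://abc123xyz-8188.proxy.runpod.net"  # hypothetical pod ID
data = json.dumps({"prompt": {}}).encode("utf-8")

# A browser-like User-Agent and explicit Content-Type sometimes get past
# proxies that block urllib's default agent. This is a guess at the 403's
# cause, not a confirmed fix.
req = request.Request(
    pod_url + "/prompt",
    data=data,
    headers={
        "Content-Type": "application/json",
        "User-Agent": "Mozilla/5.0",
    },
)
# request.urlopen(req)  # would perform the POST
```

If the 403 persists with these headers, the block is more likely coming from the application or proxy configuration itself.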
ashleyk · 4mo ago
It's up to your application.
Abhi · 4mo ago
Okay, I was asking because I used the template runpod/stable-diffusion:comfy-ui-5.0.0 directly without any changes, so I thought there was something I missed. Do we need to configure any ingress routes other than the HTTP ports, or are they enough to handle POST requests? Thank you.
ashleyk · 4mo ago
They are sufficient for a POST request; your application handles everything.
Abhi · 4mo ago
Thank you