RunPod · 4mo ago
KJK

Stable Diffusion GPU Pod and API

Is there a way to connect a GPU Pod running the Stable Diffusion template to an API layer that is externally exposed? I have a serverless instance running @ashleyk's Docker image, which is working great and much appreciated, albeit 10x slower than the GPU Pods. I am attempting to leverage the processing power and number of GPUs on the pod side, but I need an API endpoint that I can expose to my external app... @flash-singh? @justin I appreciate your answers, but I am directing this to RunPod support, or to @ashleyk if willing to chip in.
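For context, the serverless side already gives you an externally exposed endpoint: RunPod's serverless REST API (a plain GPU Pod template has no managed public API layer of its own). A minimal sketch of calling it, assuming a `runsync` endpoint and an `{"input": {"prompt": ...}}` payload shape, which is common for SD worker templates but may differ for your handler:

```python
import json
import urllib.request

# RunPod serverless REST API base (the endpoint ID and API key below are
# placeholders; the payload shape is an assumption based on common SD workers).
API_BASE = "https://api.runpod.ai/v2"

def build_request(endpoint_id: str, api_key: str, prompt: str) -> urllib.request.Request:
    """Build a synchronous generation request for a serverless SD endpoint."""
    url = f"{API_BASE}/{endpoint_id}/runsync"
    body = json.dumps({"input": {"prompt": prompt}}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

if __name__ == "__main__":
    req = build_request("YOUR_ENDPOINT_ID", "YOUR_API_KEY", "a red fox, oil painting")
    # Uncomment to actually send the request:
    # with urllib.request.urlopen(req) as resp:
    #     print(json.load(resp))
```

From an external app you would call this endpoint directly; a pod's HTTP port can also be exposed, but as noted in the replies below it doesn't scale the way serverless does.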
4 Replies
ashleyk · 4mo ago
You can set active workers or increase your idle timeout to speed up serverless. You can't use the A1111 API with GPU Cloud in production because it doesn't scale, so either use serverless A1111 or, if you want to optimize speed, look at using diffusers instead of A1111.
KJK · 4mo ago
Thanks Ashley. Even with active workers, I am getting 10 times the response time I get on GPU Pods. I'll look at diffusers.
flash-singh · 4mo ago
you need to run the same codebase to get similar speeds, there are many packages that offer speedups and it's an optimization issue most of the time
ashleyk · 4mo ago
Yeah, should maybe consider using Forge instead of A1111.