Best tips for lowering SDXL text2image API startup latency?
I'm currently using https://github.com/ashleykleynhans/runpod-worker-a1111 with a network volume. I only use a single model with the sd text2image endpoint, and I don't need the UI. Right now I'm seeing an 80+ second cold-start delay on the first request. Do you have any suggestions for optimizing this (without keeping one constantly active worker)? Thanks in advance!