RunPod ⚡|serverless

installing and using extensions on Automatic 1111

Hi everyone,
I'm trying to figure out how to install extensions on my panel via RunPod Serverless.
Since it looks like the panel needs a restart for a newly added extension to take effect, I'm not sure how to handle that properly in a Serverless setup.
Also, for features like Refactor or using other specific options that don't seem to have direct API endpoints, is there any recommended way to interact with them?
It feels like there’s no clear method for these cases, and I'm a bit lost. ...
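Since serverless workers are stateless and there is no long-lived panel to restart between jobs, the usual pattern is to bake extensions into the Docker image at build time so the UI starts with them already installed. A minimal sketch of such a build-time helper, assuming a typical Automatic1111 install path (the directory and the example repo below are placeholders, not anything this thread confirms):

```python
# Hypothetical build-time helper: clone A1111 extensions into the image
# before the WebUI ever starts, so no restart is needed at request time.
import subprocess
from pathlib import Path

# Assumed install location; adjust to wherever your image keeps A1111.
EXTENSIONS_DIR = Path("/stable-diffusion-webui/extensions")

EXTENSION_REPOS = [
    "https://github.com/Mikubill/sd-webui-controlnet",  # example extension
]

def install_extensions() -> None:
    EXTENSIONS_DIR.mkdir(parents=True, exist_ok=True)
    for repo in EXTENSION_REPOS:
        target = EXTENSIONS_DIR / repo.rstrip("/").split("/")[-1]
        if not target.exists():
            subprocess.run(["git", "clone", "--depth=1", repo, str(target)], check=True)

if __name__ == "__main__":
    install_extensions()
```

Run it from the Dockerfile (e.g. `RUN python install_extensions.py`) so the extension is present the moment the worker boots.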

Stuck when run is triggered via API call but not on dashboard?

I have a project that lets me upload videos to Google Cloud Storage (it's very bare, and that's the only thing it does at the moment). If I trigger the request from the Serverless dashboard, the job gets completed, but if it is triggered via the API it is stuck forever. This is what the code looks like:...
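For reference, a minimal sketch of triggering a job over the REST API and polling until it finishes. Jobs submitted via `/run` are asynchronous, so a caller that never polls `/status` will look stuck forever even while the dashboard (which polls for you) shows the job completing. Endpoint ID, key, and the input shape are placeholders:

```python
# Minimal sketch: trigger a RunPod serverless job via the REST API
# and poll its status until it reaches a terminal state.
import time
import requests

API_KEY = "YOUR_RUNPOD_API_KEY"      # placeholder
ENDPOINT_ID = "YOUR_ENDPOINT_ID"     # placeholder
BASE = f"https://api.runpod.ai/v2/{ENDPOINT_ID}"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

def run_job(payload: dict, timeout_s: float = 300.0) -> dict:
    job = requests.post(f"{BASE}/run", json={"input": payload}, headers=HEADERS).json()
    job_id = job["id"]
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        status = requests.get(f"{BASE}/status/{job_id}", headers=HEADERS).json()
        if status.get("status") in ("COMPLETED", "FAILED", "CANCELLED", "TIMED_OUT"):
            return status
        time.sleep(2)  # /run is async; keep polling until the job settles
    raise TimeoutError(f"job {job_id} did not finish within {timeout_s}s")

# Example (hypothetical input): run_job({"video_url": "https://example.com/clip.mp4"})
```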

No Space Left on Device /var/lib/docker/tmp reported during Worker Initialization

I am seeing "no space left on device" failure when initializing a serverless worker, RTX 4090 / 41 GB RAM class in US-IL. Does this mean that the worker does not even have enough disk space to deploy my Docker image?
-- snip -- 69168d8a856c Extracting [==============================================> ] 1.83GB/1.96GB 69168d8a856c Extracting [===============================================> ] 1.846GB/1.96GB...

Questions About Running ComfyUI Serverless on RunPod

I set up my ComfyUI project, ComfyUI Manager, custom nodes, and models on RunPod inside the /workspace directory of my network volume. When I temporarily deploy the volume and run python main.py --listen, I can access my ComfyUI workflow through the web on RunPod and generate images without any issues. However, after spending a few days trying to figure it out, I still can’t get it working with the serverless API. I've gone through a bunch of docs and videos, but to be honest, I'm just more confused now. The workflow runs perfectly through the web but I could never get it to run through serverless. Since everything is working fine on the web version, I feel like I'm really close to getting it working through the serverless API too. I'd really appreciate any help with this. I can also send over my files via DM if needed....
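One thing that often causes exactly this gap: serverless never serves the web UI, so the image must ship a handler as its entrypoint that accepts jobs and forwards them to ComfyUI running inside the worker. A minimal sketch, assuming ComfyUI has already been launched in the container (e.g. `python main.py --listen`) and the workflow arrives in API JSON format (the input shape is an assumption):

```python
# Minimal sketch of a serverless handler that forwards a ComfyUI workflow
# (API-format JSON) to a ComfyUI server running inside the same worker.
import requests
import runpod

COMFY_URL = "http://127.0.0.1:8188"

def handler(job):
    workflow = job["input"]["workflow"]  # assumed input shape
    resp = requests.post(f"{COMFY_URL}/prompt", json={"prompt": workflow})
    resp.raise_for_status()
    return resp.json()  # contains the queued prompt_id

runpod.serverless.start({"handler": handler})
```

A real handler would also poll `/history/{prompt_id}` and return the generated images, but this handler entrypoint is the piece a working web setup is missing.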

ComfyUI: "Failed to connect to server at http://127.0.0.1:8188 after 500 attempts" on serverless

Hi everyone, help would be greatly appreciated! 🙂 We're trying to move from permanent Pods to serverless and ran into this brick wall. We're having a sales call with RunPod on Monday, so it's time-sensitive. I followed the official instructions at https://github.com/runpod-workers/worker-comfyui . I've opened the port on the serverless endpoint, but that does not solve the issue. We're using the Dockerfile from the official repo with slight modifications. Any ideas?...
Solution:
Thank you @Jason for the help! I had to tweak COMFY_API_AVAILABLE_MAX_RETRIES in rp_handler instead, but it did resolve the issue
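For anyone hitting the same wall: that retry setting corresponds to a startup poll loop that waits for the local ComfyUI server before the worker takes jobs. A sketch of that kind of loop; only the env var name comes from the thread, the delay value and URL check are assumptions:

```python
# Sketch of the kind of startup poll loop the retry setting controls:
# wait for the local ComfyUI server to respond before accepting jobs.
import os
import time
import requests

COMFY_URL = "http://127.0.0.1:8188"
MAX_RETRIES = int(os.environ.get("COMFY_API_AVAILABLE_MAX_RETRIES", "500"))
RETRY_DELAY_S = 0.5  # assumed delay between attempts

def wait_for_comfy() -> None:
    for _ in range(MAX_RETRIES):
        try:
            if requests.get(COMFY_URL, timeout=2).ok:
                return
        except requests.RequestException:
            pass  # server not up yet; try again
        time.sleep(RETRY_DELAY_S)
    raise RuntimeError(f"ComfyUI not reachable after {MAX_RETRIES} attempts")
```

If the model loads slowly on cold start, raising the retry count (as the solution above did) simply gives ComfyUI more time to come up.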

US-NC-1 Failing to pull images

Just an FYI - constantly having to kill these, as they get stuck in Initializing with:
error pulling image: Error response from daemon: Head "https://registry-1.docker.io/v2/runpod/worker-v1-vllm/manifests/v2.4.0stable-cuda12.1.0": Get "https://auth.docker.io/token?scope=repository%3Arunpod%2Fworker-v1-vllm%3Apull&service=registry.docker.io": read tcp 172.19.7.13:37010->98.85.153.80:443: read: connection reset by peer
Worker ID - 45hzf7q7kf58sy...

Billing question

Hey, the math doesn't add up here... please check the images! How can I get the same amount through the API?

stuck at waiting for build

Because the clone endpoint is not working on my end, I have to recreate the endpoint.

Serverless instances are not assigned GPUs, resulting in job errors in production. Assistance required

Error Message 1 with stack trace:
Task Failed [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization:
/onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:121 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*, const char*, int) [with ERRTYPE = cudnnStatus_t; bool THRW = true; std::conditional_t<THRW, void, onnxruntime::common::Status> = void]
/onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:114 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*, const char*, int) [with ERRTYPE = cudnnStatus_t; bool THRW = true; std::conditional_t<THRW, void, onnxruntime::common::Status> = void]
CUDNN failure 4: CUDNN_STATUS_INTERNAL_ERROR ; GPU=0 ; hostname=0220236a79a1 ; file=/onnxruntime_src/onnxruntime/core/providers/cuda/cuda_execution_provider.cc ; line=177 ; expr=cudnnCreate(&cudnn_handle_);
Error Message 2: Failed to get job. | Error Type: ClientOSError | Error Message: [Errno 104] Connection reset by peer...
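A defensive pattern that makes this failure mode cheaper to diagnose is to check for a visible CUDA device at the top of the handler, so the job fails fast with a clear message instead of crashing inside cuDNN initialization. A minimal sketch, assuming a PyTorch-based worker:

```python
# Sketch: fail fast inside the handler when the worker has no usable GPU,
# rather than letting cuDNN initialization blow up mid-inference.
import torch
import runpod

def handler(job):
    if not torch.cuda.is_available():
        # Returning an error dict fails the job with a readable message.
        return {"error": "no CUDA device visible to this worker"}
    device = torch.cuda.get_device_name(0)
    # ... run the actual ONNX/CUDA inference here (placeholder) ...
    return {"gpu": device}

runpod.serverless.start({"handler": handler})
```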

How to know which graphics card a worker ran on?

Hello! How can I tell which graphics card a job ran on? There's no info about the video card in either the callback or the /status endpoint. This is all I've got: ``` {...
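Until the platform surfaces this, one workaround is to have the worker report its own GPU in the job output. A sketch, with `do_work` standing in for the existing handler logic (placeholder):

```python
# Workaround sketch: report the worker's GPU in the job output, since
# neither the callback nor /status includes it.
import subprocess

def do_work(inp):
    ...  # your existing inference logic (placeholder)

def gpu_name() -> str:
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

def handler(job):
    result = do_work(job["input"])
    return {"output": result, "gpu": gpu_name()}
```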

Has anyone successfully deployed a serverless instance using wan2.1 to generate i2v?

I tried most of the ComfyUI + WAN templates, but they are all for regular RunPod Pods. Resources for creating a serverless instance for this purpose seem quite scarce too. Halp pls?

serverless instances having issue with caching container image?

We have been seeing increased egress costs on a GCP Artifact Registry repo since a few days ago. Two serverless instances use the container image from that repo. The repo is in the US and the serverless instances are in Europe. Access to the repo is done with Registry Credentials configured in the Docker Configuration of the serverless endpoint....

Question about Serverless V2 API Payload for Automatic1111 Inpainting

Hi, I'm trying to perform inpainting using the RunPod V2 API (/runsync) with my Serverless endpoint ID (which runs an Automatic1111-compatible image). I'm sending a JSON payload in the input object that includes prompt, init_images (as a list with one base64 string), mask (as a base64 string), denoising_strength, inpainting_fill, inpaint_full_res, and mask_blur. However, the generation ignores the init image and mask. The response info from the backend shows "is_using_inpainting_conditioning": false....
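For comparison, a hedged sketch of a `/runsync` payload using the standard Automatic1111 img2img/inpainting field names. How the worker routes `input` to the img2img API depends on the image you deployed, so the `api` routing field below is hypothetical, not a documented parameter:

```python
# Sketch of a /runsync inpainting payload with standard A1111 img2img
# field names. The "api" routing key is an assumption; check your worker image.
import base64
import requests

def b64(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "input": {
        "api": "img2img",  # hypothetical routing field
        "prompt": "a red sofa",
        "init_images": [b64("init.png")],   # placeholder file
        "mask": b64("mask.png"),            # placeholder file
        "denoising_strength": 0.75,
        "inpainting_fill": 1,   # 0=fill, 1=original, 2=latent noise, 3=latent nothing
        "inpaint_full_res": True,
        "mask_blur": 4,
    }
}
resp = requests.post(
    "https://api.runpod.ai/v2/YOUR_ENDPOINT_ID/runsync",  # placeholder endpoint
    json=payload,
    headers={"Authorization": "Bearer YOUR_RUNPOD_API_KEY"},
)
print(resp.json())
```

If `is_using_inpainting_conditioning` stays false, it usually means the request never reached the img2img path at all, which is why the routing is worth checking first.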

Issue with Websocket latency over serverless http proxy since runpod outage

We have a runpod serverless endpoint which we have been using to stream frames over direct one-to-one websocket. We have a lightweight version of this endpoint we've been using that streams simple diagnostic images, and a production version that streams AI generated frames. Frames are configured to stream at 18fps in both cases to create an animation. We now see that both versions of this endpoint fail to stream frames at a reasonable rate, hovering around 1 fps. The lightweight diagnostic frames take virtually no time to generate, and we have confirmed with logging that the AI generated frames in the production version are not generating any slower, and should still be able to meet the 18 fps demand. But we see that the time to send frames over websocket is on the order of 1s per frame, and is very unstable. See below a snippet from our logs showing fast image generation times, but slow times for sending images over websocket ```...
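For anyone instrumenting this, a minimal sketch of the kind of per-frame send timing the logs describe, using the `websockets` library (the URI and frame source are placeholders):

```python
# Sketch: stream frames over a websocket at a target fps and log
# how long each send actually takes.
import asyncio
import time
import websockets

async def stream_frames(uri: str, frames, fps: float = 18.0) -> None:
    interval = 1.0 / fps
    async with websockets.connect(uri) as ws:
        for frame in frames:  # frame: bytes or str
            t0 = time.perf_counter()
            await ws.send(frame)
            send_s = time.perf_counter() - t0
            print(f"send took {send_s * 1000:.1f} ms")
            # Sleep off the remainder of the frame budget, if any is left.
            await asyncio.sleep(max(0.0, interval - send_s))

# asyncio.run(stream_frames("wss://<endpoint-proxy-url>/ws", my_frames))
```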

Runpod down?

Getting error 400 for all our routes

On-Demand vs. Spot Pod

Hi! I read the FAQ about this, but I have one more question: is Spot Pod billing also based on actual usage, like On-Demand? With power-off as described, of course....