Help Request: ODM Container Only Using CPU
Has anyone tried to deploy an ODM processing node using a pod before?
https://github.com/OpenDroneMap/NodeODM
How do I add the --gpus all flag to the pod?...
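On RunPod you don't pass --gpus all yourself; the GPU is attached when the pod is created, and the container runtime already exposes it to the image. A minimal sketch using the Python SDK, assuming the GPU build of the NodeODM image (opendronemap/nodeodm:gpu) and its default web port 3000; adjust both to whatever you actually deploy:

```python
import runpod

runpod.api_key = "YOUR_API_KEY"

# Assumptions: the NodeODM GPU image tag and its default port 3000;
# pick whichever gpu_type_id is actually available to you.
pod = runpod.create_pod(
    name="nodeodm-gpu",
    image_name="opendronemap/nodeodm:gpu",
    gpu_type_id="NVIDIA GeForce RTX 4090",
    gpu_count=1,                 # takes the place of `--gpus all` in plain Docker
    ports="3000/http",           # expose NodeODM's API through the RunPod proxy
    container_disk_in_gb=50,
)
print(pod["id"])
```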
GraphQL Schema
Hi there,
is it possible to get RunPod's GraphQL Schema or enable introspection?
I need it for an integration I'm currently working on. 🙂...
Solution:
nope
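For anyone wondering what was being requested: a standard introspection query looks like the sketch below; sent to RunPod's documented GraphQL endpoint it appears to just come back with an error, since introspection is disabled.

```python
import requests

API_KEY = "YOUR_API_KEY"
INTROSPECTION = "{ __schema { queryType { name } mutationType { name } } }"

resp = requests.post(
    f"https://api.runpod.io/graphql?api_key={API_KEY}",
    json={"query": INTROSPECTION},
    timeout=30,
)
# Expect an error payload rather than a schema, because introspection is off.
print(resp.status_code, resp.json())
```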
How do savings plans work?
Could someone clarify how savings plans work? The documentation is quite limited.
I understand that they help reduce costs over a set period, but I'd like to know whether, when I get a savings plan for a pod, it guarantees access to the same GPU for the entire reservation duration.
If I stop my pod for some reason, do I have to rebuild it, or can I simply restart it?...
502
Hello, we are having trouble with a 502 error.
We are running ComfyUI with runpod/pytorch:2.2.0-py3.10-cuda12.1.1-devel-ubuntu22.04.
Our service on port 8188 is still running, and we can also send a GET request to port 8188...
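When the RunPod HTTP proxy returns 502 even though the process is up, a common cause is the app listening only on 127.0.0.1 instead of 0.0.0.0, or taking too long to answer. A rough check you can run from inside the pod; the proxy URL format assumed here is the usual <pod_id>-8188.proxy.runpod.net:

```python
import socket

def listening(host: str, port: int = 8188) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=3):
            return True
    except OSError:
        return False

# If only the loopback check passes, ComfyUI is bound to 127.0.0.1 and the
# proxy will keep returning 502; restart it with --listen 0.0.0.0 instead.
container_ip = socket.gethostbyname(socket.gethostname())
print("loopback 127.0.0.1:8188 ->", listening("127.0.0.1"))
print(f"container {container_ip}:8188 ->", listening(container_ip))
```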

Decommissioning on November 7th
I received this email:
"We are reaching out because you currently have serverless workers or pods running in the EUR-NO-1 data center, which is scheduled for decommissioning on November 7th. This change is part of our efforts to upgrade capacity, enhance the network, and improve other infrastructure."
What actions should I take if I'm currently running a pod with a savings plan? How do I restore a pod with the same savings plan?...
Lost my GPU
Hello,
I stopped my pod and when I came back, I had 0 GPUs available. Should I hope that this machine gets its GPU back, or will it never get it back, meaning I should switch to a new pod?...
Where are default models mounted? I can't find them under /comfy-models
```
root@054f3147d5b1:/# ls -al /comfy-models/
total 4
drwxr-xr-x 2 root root   10 Oct 25 09:17 .
drwxr-xr-x 1 root root 4096 Nov  4 10:00 ..
root@054f3147d5b1:/workspace/ComfyUI/custom_nodes/comfyui_controlnet_aux# df -h...
```
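If /comfy-models is empty on this template, the bundled checkpoints may live somewhere else or only get pulled on first start. A quick sketch to locate any model weights on the filesystem; the search roots are just guesses about where to look:

```python
import os

SEARCH_ROOTS = ["/workspace", "/comfy-models", "/opt", "/root"]
EXTENSIONS = (".safetensors", ".ckpt", ".pt")

# Walk the likely locations and print anything that looks like a model file.
for root in SEARCH_ROOTS:
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.endswith(EXTENSIONS):
                path = os.path.join(dirpath, name)
                print(f"{os.path.getsize(path) / 1e9:6.2f} GB  {path}")
```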
Port forwarding understanding
Greetings, I have been a user of Vast.ai, and there they provide a list of ports already assigned to the instance that map to exactly the same port numbers on your machine. But on RunPod they map to different ones. I have to run a miner and need to tell it two of my ports, so should I be telling it my external or internal ports, and how do they map to the internal ones?
I am also attaching a picture of Vast's ports and yours as well...
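On RunPod the internal (container) port and the external (publicly reachable) port are different numbers for TCP ports, so the miner's peers need the external IP:port while the miner itself binds to the internal port. A sketch of reading the mapping with the Python SDK; the exact shape of the runtime/ports fields is my assumption from the GraphQL pod query:

```python
import runpod

runpod.api_key = "YOUR_API_KEY"
pod = runpod.get_pod("POD_ID")  # hypothetical pod ID

# Assumption: the pod's runtime section lists each exposed port with its
# internal (privatePort) and external (ip:publicPort) endpoints.
for p in (pod.get("runtime") or {}).get("ports", []):
    print(f"internal {p['privatePort']} -> external {p['ip']}:{p['publicPort']} ({p['type']})")
```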

Problems starting my pod with and without GPU.
Container logs (ID: tb7bqtktnwh9gy)
2024-11-02T18:47:01.634671114Z [SSH] Configuring SSH to allow root login with a password...
2024-11-02T18:47:01.720536800Z * Starting periodic command scheduler cron
2024-11-02T18:47:01.809391559Z ...done.
2024-11-02T18:47:01.926771417Z * Restarting OpenBSD Secure Shell server sshd...
Is there something wrong in US-OR-1?
There seemed to be issues on Thu 10/31 that made my ComfyUI pod unusable (Comfy taking 20+ minutes to start and become available, throwing errors when it ran, and the web terminal prompting for auth and continuously rejecting valid credentials). Are these issues ongoing? I went to start a pod today (11/1) and it seemed to exhibit the same issues, so I backed away before I burned more credits.
Money on new account
Hey everyone, I'm a new user and I was trying to put money on my account, but my card got declined for no reason. Has anyone experienced this problem, and do you know how to resolve it? Thank you!
Is there a way to launch a pod and then setup cloud sync (from Google Drive) via API/SDK?
The documentation doesn't seem to have any GraphQL mutation for configuring Cloud Sync at launch of a pod. Is this not supported yet?
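As far as I can tell there is no Cloud Sync field on the pod-creation mutation, so the usual workaround is to have the pod pull the data itself when it starts. A sketch of that idea; the image, start command, and sync script below are entirely hypothetical:

```python
import runpod

runpod.api_key = "YOUR_API_KEY"

# sync_from_drive.sh is a hypothetical script baked into your own image that
# pulls files from Google Drive before launching the real workload.
pod = runpod.create_pod(
    name="pod-with-sync",
    image_name="your-registry/your-image:latest",
    gpu_type_id="NVIDIA GeForce RTX 4090",
    gpu_count=1,
    docker_args="bash -lc '/opt/sync_from_drive.sh && exec /start.sh'",
)
print(pod["id"])
```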
ComfyUI: Diagnosing errors like "Syntax error: Unexpected token '>'" by logging to file?
All of my ComfyUI workflows stopped working on all instances with a syntax error when I try to run my workflow. The System and Container logs that I can access through the RunPod UI say nothing. Is there a way for me to start ComfyUI manually so I can see errors in the console and log them to a file?
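You can start ComfyUI by hand and capture its full output to a file; a small wrapper doing that is sketched below, assuming ComfyUI lives at /workspace/ComfyUI (adjust the path to your template):

```python
import subprocess
import sys

COMFY_DIR = "/workspace/ComfyUI"     # adjust to where ComfyUI is installed
LOG_FILE = "/workspace/comfyui.log"

# Run ComfyUI in the foreground and send stdout+stderr to a log file so the
# full traceback behind the "Unexpected token" error is preserved.
with open(LOG_FILE, "w") as log:
    subprocess.run(
        [sys.executable, "main.py", "--listen", "0.0.0.0", "--port", "8188"],
        cwd=COMFY_DIR,
        stdout=log,
        stderr=subprocess.STDOUT,
        check=False,
    )
```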
Python SDK resume_pod
Hi, I'm using the Python SDK to resume a pod. However, I can't resume a pod with 0 GPUs:
runpod.resume_pod(
    pod_id=pod_id,
    gpu_count=0...
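For what it's worth, the resume mutation behind resume_pod() does not seem to accept a zero GPU count from the SDK; requesting at least one GPU should work when the machine still has one free. A sketch under that assumption:

```python
import runpod

runpod.api_key = "YOUR_API_KEY"
pod_id = "POD_ID"  # hypothetical

# Assumption: resume_pod() needs gpu_count >= 1; a CPU-only restart of a
# GPU pod appears to be possible only from the web UI.
pod = runpod.resume_pod(pod_id=pod_id, gpu_count=1)
print(pod)
```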
Network Volume as Storage for images
Hi, I am building an image generation application that will store images to a database, and for this I was thinking of using a RunPod network volume attached to a CPU pod, sending and fetching images from the volume. Will network volumes be worth it?
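If you go that route, the volume just shows up as a normal directory inside the pod (by default under /workspace when attached at creation), so the application side is plain file I/O. A tiny sketch, with the mount path and filenames as assumptions:

```python
from pathlib import Path

IMAGE_DIR = Path("/workspace/images")   # assumed network volume mount path
IMAGE_DIR.mkdir(parents=True, exist_ok=True)

def save_image(name: str, data: bytes) -> Path:
    """Write generated image bytes onto the network volume."""
    path = IMAGE_DIR / name
    path.write_bytes(data)
    return path

def load_image(name: str) -> bytes:
    """Fetch an image previously stored on the volume."""
    return (IMAGE_DIR / name).read_bytes()

path = save_image("sample.png", b"placeholder bytes, not a real PNG")
print(path, len(load_image("sample.png")), "bytes")
```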
Network Volume Integrity
Ever since last night, on every pod I deploy with my network volume (fpomddpaq0), there are certain files that I cannot open (I believe they have been corrupted). I get a 'launcher error 524' (timeout) when I try to open these specific files (.ipynb). I have tried changing images to the latest PyTorch image, but that did not help. I have cross-checked with a fresh volume in the same region and the error does not occur there. I have now confirmed the issue using the file command via web terminal but...
Stable diffusion checkpoint list empty with Better Forge template
Following the instructions here: https://blog.runpod.io/introducing-better-forge-spin-up-new-stable-diffusion-pods-quicker-than-before/ which were written just last month...
I have downloaded two different checkpoints from Civitai into the stable-diffusion-webui-forge/models/Stable-diffusion folder. However, the dropdown list of checkpoints in the WebUI is empty. I have tried clicking the refresh button, refreshing the page, and restarting the pod, but no matter what I do, the models will not show up. What is going on?...
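One thing worth ruling out before blaming the UI: a failed Civitai download (for example a login or redirect page saved under the model's filename) leaves a tiny file that Forge silently ignores. A quick check of what actually landed in the folder; the /workspace prefix is an assumption about where the template puts Forge:

```python
import os

MODELS_DIR = "/workspace/stable-diffusion-webui-forge/models/Stable-diffusion"

# Real SD checkpoints are gigabytes; anything in the kilobyte range here is
# almost certainly a failed download rather than a usable model.
for name in sorted(os.listdir(MODELS_DIR)):
    size = os.path.getsize(os.path.join(MODELS_DIR, name))
    print(f"{size / 1e9:6.2f} GB  {name}")
```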
Can't select 2x GPU for my old pod, while I could start a new pod with the same GPU setup
Might be a stupid question, but I had a pod running yesterday with 2x H100 PCIe, and now I can only start that pod with 0 or 1 GPU, which looks like an availability issue. But if I want to deploy a new pod, I can choose 2x H100 PCIe and the availability is medium.