GPU not usable

Joshcandle (4mo ago)
```
Traceback (most recent call last):
  File "/workspace/stable-diffusion-webui/launch.py", line 48, in <module>
    main()
  File "/workspace/stable-diffusion-webui/launch.py", line 39, in main
    prepare_environment()
  File "/workspace/stable-diffusion-webui/modules/launch_utils.py", line 384, in prepare_environment
    raise RuntimeError(
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
```
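For reference, the flag the error mentions can be added in `webui-user.sh`, but note that it only silences the check; the webui will then fall back to CPU. A sketch, assuming the standard AUTOMATIC1111 stable-diffusion-webui layout:

```shell
# In webui-user.sh (AUTOMATIC1111 layout). Bypassing the CUDA test only
# hides the problem; generation will then run on the CPU, very slowly.
export COMMANDLINE_ARGS="--skip-torch-cuda-test"

# Usually the better fix is to verify the pod actually has a working GPU:
nvidia-smi                                                   # should list the attached GPU
python -c "import torch; print(torch.cuda.is_available())"   # should print True
```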
ashleyk (4mo ago)
Give the pod ID so RunPod can investigate.
Axiom (4mo ago)
I'm having an issue with it running on CPU instead of GPU as well. I keep getting `"addmm_impl_cpu_" not implemented for 'Half'` and `RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'`, which apparently mean it is not using the GPU and won't load checkpoints as fp16. I am not a developer, this is my first time using RunPod, and I'm kind of lost after spending several hours trying to get up and running :/
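Those "Half" errors typically mean fp16 ops were dispatched to the CPU, which does not implement them. A minimal diagnostic sketch (assuming PyTorch, as in the errors above; the helper names are hypothetical):

```python
def is_cpu_half_error(message: str) -> bool:
    """Heuristic: errors such as
    '"addmm_impl_cpu_" not implemented for 'Half'' or
    '"LayerNormKernelImpl" not implemented for 'Half''
    indicate fp16 (Half) ops ran on the CPU, which does not support them."""
    return "not implemented for 'Half'" in message


def cuda_status() -> str:
    """Report whether PyTorch can actually see a CUDA device on this pod."""
    try:
        import torch
    except ImportError:
        return "torch not installed"
    if torch.cuda.is_available():
        return f"CUDA OK: {torch.cuda.get_device_name(0)}"
    return "CUDA unavailable: models will fall back to CPU (expect 'Half' errors)"


if __name__ == "__main__":
    print(cuda_status())
```

If `cuda_status()` reports CUDA as unavailable, the fix is on the pod/driver side; running the webui with `--no-half` merely works around the fp16 errors at the cost of running fp32 on CPU.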