RuntimeError: CUDA driver initialization failed, you might not have a CUDA gpu
I've been using RunPod serverless recently, and I can't figure out what's going on — why do I keep running into this "no CUDA GPU" error when the worker starts?
==========
== CUDA ==
==========

CUDA Version 12.4.1

Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

A copy of this license is made available in this container at
/NGC-DL-CONTAINER-LICENSE for your convenience.

runpod-worker-comfy: Starting ComfyUI
runpod-worker-comfy: Starting RunPod Handler
Checkpoint files will always be loaded safely.
Traceback (most recent call last):
  File "/comfyui/main.py", line 147, in <module>
    import execution
  File "/comfyui/execution.py", line 15, in <module>
    import comfy.model_management
  File "/comfyui/comfy/model_management.py", line 237, in <module>
    total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
  File "/comfyui/comfy/model_management.py", line 187, in get_torch_device
    return torch.device(torch.cuda.current_device())
  File "/usr/local/lib/python3.10/dist-packages/torch/cuda/__init__.py", line 1071, in current_device
    _lazy_init()
  File "/usr/local/lib/python3.10/dist-packages/torch/cuda/__init__.py", line 412, in _lazy_init
    torch._C._cuda_init()
RuntimeError: CUDA driver initialization failed, you might not have a CUDA gpu.
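For context: the traceback shows PyTorch's lazy CUDA initialization (`torch._C._cuda_init()`) failing as soon as ComfyUI imports `comfy.model_management`, which usually means the container itself cannot see a GPU (no device attached to the worker, or the host driver isn't reachable), rather than a bug in ComfyUI. A quick way to tell those apart is to check below the PyTorch layer. The sketch here is a minimal, hypothetical diagnostic I'd drop into the worker's startup — `cuda_visible` is my own helper name, not part of RunPod or ComfyUI — that looks for the NVIDIA device nodes and tries `nvidia-smi`, both of which work independently of PyTorch:

```python
import os
import shutil
import subprocess

def cuda_visible():
    """Best-effort check that this container can actually see a GPU.

    Returns a list of problems found; an empty list means the driver
    layer looks OK and the failure is more likely in the PyTorch/CUDA
    runtime pairing. (Hypothetical helper, not a RunPod API.)
    """
    problems = []

    # The NVIDIA driver exposes /dev/nvidia0, /dev/nvidia1, ... device
    # nodes. If none exist, the container was started without a GPU.
    if not any(os.path.exists(f"/dev/nvidia{i}") for i in range(8)):
        problems.append("no /dev/nvidia* device nodes in the container")

    # nvidia-smi talks to the host driver directly, bypassing PyTorch.
    if shutil.which("nvidia-smi") is None:
        problems.append("nvidia-smi not on PATH")
    else:
        result = subprocess.run(["nvidia-smi"], capture_output=True)
        if result.returncode != 0:
            problems.append("nvidia-smi failed: driver not reachable")

    return problems

if __name__ == "__main__":
    for p in cuda_visible():
        print("GPU problem:", p)
```

If this reports problems, the issue is at the scheduling/driver level (the serverless worker landed on a host without a GPU attached, or the driver is too old for the CUDA 12.4 image); if it comes back clean but `torch.cuda.is_available()` is still `False`, the mismatch is between the installed PyTorch build and the container's CUDA runtime.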