to · 4mo ago

GPU not visible in the pod.

I have a very simple Docker image with FastAPI which I pushed to my repo, then I use that image as a template to start an H100 PCIe pod. I used runpod/pytorch:2.2.0-py3.10-cuda12.1.1-devel-ubuntu22.04 as the base image, but for some reason the GPU is not available in the container: running nvidia-smi inside it complains about missing drivers. I did try terminating the pod and spinning up a new one several times.

My Dockerfile:

```dockerfile
FROM runpod/pytorch:2.2.0-py3.10-cuda12.1.1-devel-ubuntu22.04

WORKDIR /app

COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .

EXPOSE 8000

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```

My requirements.txt:

```
fastapi
uvicorn[standard]
transformers
accelerate
huggingface_hub
```
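For reference, a minimal in-container diagnostic along these lines can help tell a CPU-only torch build apart from a missing host driver. This is only a sketch, assuming torch is provided by the runpod/pytorch base image; check_gpu.py is a hypothetical helper, not code from the post:

```python
# check_gpu.py -- minimal in-container diagnostic (sketch, not part of the original post).
# Assumes torch comes from the runpod/pytorch base image rather than requirements.txt.
import shutil
import subprocess

import torch

print("torch version:        ", torch.__version__)
print("torch built with CUDA:", torch.version.cuda)      # None => CPU-only torch build
print("cuda available:       ", torch.cuda.is_available())
print("device count:         ", torch.cuda.device_count())

# nvidia-smi only works if the host driver is exposed to the container,
# which the GPU runtime on the host normally handles.
if shutil.which("nvidia-smi"):
    subprocess.run(["nvidia-smi"], check=False)
else:
    print("nvidia-smi not found on PATH")
```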
to (OP) · 4mo ago
This is my latest try; you can see "Using device: cpu" in the logs. That comes from a print in my Python code:

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using device: {device}")
```
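For context, here is a sketch of how such a check might sit inside the FastAPI app that the Dockerfile's CMD starts (uvicorn main:app). The /device route is a hypothetical debugging addition, not the OP's actual code:

```python
# main.py -- hypothetical sketch of a /device debug endpoint, not the OP's actual app.
import torch
from fastapi import FastAPI

app = FastAPI()

# Evaluated at import time, i.e. when uvicorn starts the app inside the pod.
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using device: {device}")


@app.get("/device")
def device_info() -> dict:
    # Re-check at request time so a restarted pod reports its current state.
    return {
        "cuda_available": torch.cuda.is_available(),
        "device_count": torch.cuda.device_count(),
        "torch_cuda_build": torch.version.cuda,  # None => CPU-only torch wheel
    }
```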
