Error when deploying pytorch:2.8.0-py3.11-cuda12.8.1 on an RTX 4090
Why am I getting the following error when deploying the Docker image runpod/pytorch:2.8.0-py3.11-cuda12.8.1-cudnn-devel-ubuntu22.04 on an RTX 4090?
---
error starting container: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: requirement error: unsatisfied condition: cuda>=12.8, please update your driver to a newer version, or use an earlier cuda container: unknown
Filter only for CUDA version 12.8.
What do you mean? Look for other posts about CUDA 12.8, or look for a different Docker image?
Oh, my explanation wasn't clear enough. I mean that when creating a pod, use the CUDA filter (expand the menus to see it) and filter only for 12.8.

The error happens because the machine your pod landed on has a driver that only supports a CUDA version lower than 12.8.
Or you could use another template with CUDA 12.4 instead of the template with CUDA 12.8 (unless you need 12.8 for a newer-architecture GPU model).
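If you want to confirm what the host driver actually supports, here's a minimal sketch you can run from inside a pod that does start (for example, one built from a CUDA 12.4 template). It assumes the nvidia-ml-py package is installed (pip install nvidia-ml-py); the calls come from its pynvml module, nothing RunPod-specific:

```python
# Minimal sketch: report the host driver version and the highest CUDA
# version that driver supports. Assumes nvidia-ml-py is installed:
#   pip install nvidia-ml-py
import pynvml

pynvml.nvmlInit()
driver = pynvml.nvmlSystemGetDriverVersion()
cuda = pynvml.nvmlSystemGetCudaDriverVersion()  # e.g. 12040 means CUDA 12.4
print(f"driver version: {driver}")
print(f"max CUDA supported by driver: {cuda // 1000}.{(cuda % 1000) // 10}")
pynvml.nvmlShutdown()
```

If the second line prints something below 12.8, that host will reject the cuda>=12.8 requirement exactly as in the error above.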
Yes, I need it for PyTorch > 2.5.0.
Looks like if I filter for 12.8, the RTX 4090 is not available in the data center my network storage is in. Is it possible to transfer the network storage?
Yes, but you need to do it manually using any file transfer protocol or app.
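For example, one possible sketch, assuming you start a temporary pod in each data center with the respective volume mounted, the old pod exposes SSH, and rsync is installed on both sides; the address, port, and paths below are placeholders, not real values:

```python
# Minimal sketch: pull /workspace from the old pod's network volume into the
# new pod over SSH + rsync. Everything below is a placeholder you must adjust:
# the old pod's public IP, its exposed SSH port, and the mount paths.
import subprocess

OLD_POD = "root@203.0.113.10"   # hypothetical SSH address of the old pod
SSH_PORT = "22"                 # the port the old pod exposes for SSH
SRC = "/workspace/"             # network volume mount inside the old pod
DST = "/workspace/"             # volume mount inside the new pod

subprocess.run(
    [
        "rsync", "-avz", "--progress",
        "-e", f"ssh -p {SSH_PORT}",
        f"{OLD_POD}:{SRC}", DST,
    ],
    check=True,  # raise if rsync exits with a non-zero status
)
```

Run it from the new pod once its volume is mounted; scp or sftp would work the same way if you prefer those.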