Can't get alleninstituteforai/olmocr running on any recommended pod.
Hello! A bit new here - I'm trying to run alleninstituteforai/olmocr:latest.
Requirements are:
"Recent NVIDIA GPU (tested on RTX 4090, L40S, A100, H100) with at least 15 GB of GPU RAM
30GB of free disk space"
If I choose an RTX 4090 with PyTorch 2.8.0 (all pods tried are "official" secure on-demand instances), it says:
"nvidia-container-cli: requirement error: unsatisfied condition: cuda>=12.8, please update your driver to a newer version, or use an earlier cuda container: unknown"

With olmocr as image, it says:
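For context: that error means the image was built against CUDA 12.8 but the host's NVIDIA driver advertises an older maximum CUDA version. A minimal way to check what the driver on a pod can actually serve (assuming nvidia-smi is on the path; the "CUDA Version" in its header is the driver's maximum, not what's installed in the container) is:

```shell
# Print the maximum CUDA version the host driver supports.
# If nvidia-smi is missing, the container can't see the NVIDIA driver at all.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi | grep "CUDA Version"
else
  echo "nvidia-smi not found (no NVIDIA driver visible)"
fi
```

If the reported version is below 12.8, the error above is expected for this image regardless of which GPU is selected.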
If I choose an RTX 5090 with PyTorch 2.8.0, I have to install Docker, but then I can't get Docker to start.
With olmocr as image, the log says:
Trying to ssh says:
I only need to do about 10 PDFs (which is exactly what this tool is for) - the actual work is less than an hour, but I've spent almost all my Sunday trying to get it working!
Any ideas? Thanks.
