Can't get alleninstituteforai/olmocr running on any recommended pod.
Hello! A bit new here - I'm trying to run alleninstituteforai/olmocr:latest.
The requirements are:
"Recent NVIDIA GPU (tested on RTX 4090, L40S, A100, H100) with at least 15 GB of GPU RAM and 30 GB of free disk space."
If I choose an RTX 4090 with PyTorch 2.8.0 (all pods tried are "official" secure on-demand instances), it says:
nvidia-container-cli: requirement error: unsatisfied condition: cuda>=12.8, please update your driver to a newer version, or use an earlier cuda container: unknown
With olmocr as the image, it says:
If I choose an RTX 5090 with PyTorch 2.8.0, I have to install Docker, but then I can't get Docker to start.
With olmocr as the image, the log says:
Trying to SSH in says:
I only need to process about 10 PDFs (which is what this tool does) - the actual work is less than an hour, but I've spent almost all of my Sunday trying to get it working!
Any ideas? Thanks.
unsatisfied condition: cuda>=12.8
Use the CUDA version filter when deploying the pod.
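For reference, the "unsatisfied condition: cuda>=12.8" error means the host's NVIDIA driver is older than what the CUDA 12.8-based image expects, so nvidia-container-cli refuses to start it. If you can get a shell on a pod, a quick general-purpose check (nothing olmocr-specific) is nvidia-smi - the "CUDA Version" shown in its header is the newest CUDA the installed driver supports:

    # Shows the driver version plus the maximum CUDA version that driver supports
    nvidia-smi
    # Just the driver version, in machine-readable form
    nvidia-smi --query-gpu=driver_version --format=csv,noheader

Anything below 12.8 there and the olmocr image won't start on that host, which is exactly what filtering on CUDA version at deploy time avoids.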
Thanks. Yes, I can see that would help to get the right pod initially, but even when I do, it's not running...
UPDATE: It's been a baptism of fire over the last 24 hours, but I finally ended up with success on RunPod. I struggled initially until I found that you need to add:
{ "cmd": ["3600"], "entrypoint": ["sleep"] }
to the pod startup script to actually be able to SSH in.
All working now - thanks!