Docker image using headless OpenGL (EGL, surfaceless platform) OK locally, falls back to CPU in Runpod
Hi all, I'm wondering if anyone can educate me on what would be causing this difference in behaviour when running a container locally versus in Runpod, and whether there is a solution.
In summary, I'm trying to run a headless OpenGL program in a Docker container by using EGL with the surfaceless platform (https://registry.khronos.org/EGL/extensions/MESA/EGL_MESA_platform_surfaceless.txt). I was able to get the program working as intended in a container outside of Runpod. But once deployed to Runpod, it falls back to CPU rendering.
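For reference, the surfaceless setup boils down to roughly this (a minimal sketch rather than the exact code in the repo; it assumes your eglext.h defines EGL_PLATFORM_SURFACELESS_MESA and that the Mesa extension is actually exposed by the EGL client library):

```c
// Minimal sketch: get a display from the Mesa surfaceless platform and initialise it.
// Compile with: gcc surfaceless.c -o surfaceless -lEGL
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <stdio.h>

int main(void) {
    // eglGetPlatformDisplay is core in EGL 1.5; look up the EXT entry point
    // so this also works against 1.4 client libraries.
    PFNEGLGETPLATFORMDISPLAYEXTPROC getPlatformDisplay =
        (PFNEGLGETPLATFORMDISPLAYEXTPROC)eglGetProcAddress("eglGetPlatformDisplayEXT");
    if (!getPlatformDisplay) {
        fprintf(stderr, "eglGetPlatformDisplayEXT not available\n");
        return 1;
    }

    // The surfaceless extension requires EGL_DEFAULT_DISPLAY as the native display.
    EGLDisplay dpy = getPlatformDisplay(EGL_PLATFORM_SURFACELESS_MESA,
                                        (void *)EGL_DEFAULT_DISPLAY, NULL);
    if (dpy == EGL_NO_DISPLAY) {
        fprintf(stderr, "no surfaceless display\n");
        return 1;
    }

    EGLint major, minor;
    if (!eglInitialize(dpy, &major, &minor)) {
        fprintf(stderr, "eglInitialize failed: 0x%x\n", eglGetError());
        return 1;
    }
    printf("EGL %d.%d, vendor: %s\n", major, minor, eglQueryString(dpy, EGL_VENDOR));
    eglTerminate(dpy);
    return 0;
}
```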
As a minimal testcase, it's sufficient to simply run eglinfo, a utility which tells you what EGL devices are available. Outside of Runpod multiple devices are reported, but in Runpod none are. The testcase and example outputs are available here: https://github.com/rewbs/egldockertest .
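For anyone who wants to reproduce that check without eglinfo, the device listing comes down to roughly this (a sketch assuming EGL_EXT_device_enumeration and EGL_EXT_device_query are present; EGL_DRM_DEVICE_FILE_EXT comes from EGL_EXT_device_drm and may be absent for some devices):

```c
// Enumerate EGL devices and print what each one reports.
// Compile with: gcc egldevices.c -o egldevices -lEGL
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <stdio.h>

int main(void) {
    PFNEGLQUERYDEVICESEXTPROC queryDevices =
        (PFNEGLQUERYDEVICESEXTPROC)eglGetProcAddress("eglQueryDevicesEXT");
    PFNEGLQUERYDEVICESTRINGEXTPROC queryDeviceString =
        (PFNEGLQUERYDEVICESTRINGEXTPROC)eglGetProcAddress("eglQueryDeviceStringEXT");
    if (!queryDevices || !queryDeviceString) {
        fprintf(stderr, "EGL device extensions not available\n");
        return 1;
    }

    EGLDeviceEXT devices[16];
    EGLint count = 0;
    if (!queryDevices(16, devices, &count)) {
        fprintf(stderr, "eglQueryDevicesEXT failed\n");
        return 1;
    }
    printf("found %d EGL device(s)\n", count);

    for (EGLint i = 0; i < count; i++) {
        // Per-device extension string, and the DRM node if the device exposes one.
        const char *exts = queryDeviceString(devices[i], EGL_EXTENSIONS);
        const char *drm  = queryDeviceString(devices[i], EGL_DRM_DEVICE_FILE_EXT);
        printf("  device %d: %s (drm: %s)\n", i,
               exts ? exts : "n/a", drm ? drm : "n/a");
    }
    return 0;
}
```

If this prints zero devices inside the Runpod container but several locally, that's the same behaviour eglinfo is reporting.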
Any ideas very much appreciated!
(As an aside, I should note I'm by no means an OpenGL expert, so I might be getting confused, or at the very least getting the terminology wrong.)
What kind of program are you trying to run?
My desktop image uses EGL and is derived from Selkies EGL for Kubernetes (linked in the repo). You'll need to install Nvidia display drivers because there is no /dev/dri on RunPod.
An old-school audio-visualisation renderer (I'm the author of https://vizrecord.app/ which is client side – from there you can probably guess what I'm building 🙂 ).
Thanks so much, will take a look
Wow, that looks like an impressive piece of work. Am I right in thinking your image re-installs the driver on every startup? If so I assume it's designed for a long-running pod rather than serverless tasks – and probably won't be sensible for my serverless use case, where a job execution would typically be under 30s.
Yeah, that wouldn't make much sense unfortunately. I raised an issue with the Selkies EGL repo and their feedback was that the driver install shouldn't be necessary, but in my experience I got llvmpipe (software) rendering without it. I'm still hopeful there is a solution, though.
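By the way, the quickest way to tell which path you've ended up on is to create a context and print GL_RENDERER: "llvmpipe" means software rasterisation, while an NVIDIA string means the driver is actually in use. A rough sketch (it grabs the default display for brevity and relies on EGL_KHR_surfaceless_context, so adapt the display setup to whichever platform you're targeting):

```c
// Create a surfaceless desktop-GL context and report the renderer string.
// Compile with: gcc renderer_check.c -o renderer_check -lEGL -lGL
#include <EGL/egl.h>
#include <GL/gl.h>
#include <stdio.h>

int main(void) {
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    if (dpy == EGL_NO_DISPLAY || !eglInitialize(dpy, NULL, NULL)) {
        fprintf(stderr, "failed to initialise EGL display\n");
        return 1;
    }

    static const EGLint cfg_attribs[] = {
        EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint ncfg = 0;
    if (!eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &ncfg) || ncfg == 0) {
        fprintf(stderr, "no suitable EGL config\n");
        return 1;
    }

    eglBindAPI(EGL_OPENGL_API);
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);
    if (ctx == EGL_NO_CONTEXT) {
        fprintf(stderr, "eglCreateContext failed: 0x%x\n", eglGetError());
        return 1;
    }

    // No surface at all: this relies on EGL_KHR_surfaceless_context.
    if (!eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, ctx)) {
        fprintf(stderr, "eglMakeCurrent failed: 0x%x\n", eglGetError());
        return 1;
    }

    printf("GL_VENDOR:   %s\n", (const char *)glGetString(GL_VENDOR));
    printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));

    eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
    eglDestroyContext(dpy, ctx);
    eglTerminate(dpy);
    return 0;
}
```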
Hey! Been a while, but I'm running into the same problem. Were you able to resolve it?
Hey, nope – still can't get it to run on GPU. I'm resorting to running this process in parallel to other tasks (that do use the GPU) within the same serverless invocation! 🙂
If you figure it out please report back! I wonder if it's something to do with the privileges made available to docker containers in Runpod vs locally.
Unknown User • 2y ago: (message not public)
The hardware is definitely there and supported. 🙂 My serverless endpoint kicks off 2 concurrent processes on the same serverless worker: one surfaceless EGL task (similar to the example codebase above), which fails to detect and use the Nvidia GPU, and one "standard" Python ML process, which does find and use the Nvidia GPU.
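One thing I still want to try (just an idea at this point, not something I've confirmed works on Runpod) is bypassing the surfaceless platform's default device selection and creating the display from a specific EGL device via EGL_EXT_platform_device, filtering for the NVIDIA device. A rough sketch; checking for EGL_NV_device_cuda is just one way of spotting the NVIDIA device:

```c
// Enumerate EGL devices, pick the one that looks like the NVIDIA GPU,
// and create a display for it via the device platform.
// Compile with: gcc device_platform.c -o device_platform -lEGL
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    PFNEGLQUERYDEVICESEXTPROC queryDevices =
        (PFNEGLQUERYDEVICESEXTPROC)eglGetProcAddress("eglQueryDevicesEXT");
    PFNEGLQUERYDEVICESTRINGEXTPROC queryDeviceString =
        (PFNEGLQUERYDEVICESTRINGEXTPROC)eglGetProcAddress("eglQueryDeviceStringEXT");
    PFNEGLGETPLATFORMDISPLAYEXTPROC getPlatformDisplay =
        (PFNEGLGETPLATFORMDISPLAYEXTPROC)eglGetProcAddress("eglGetPlatformDisplayEXT");
    if (!queryDevices || !queryDeviceString || !getPlatformDisplay) {
        fprintf(stderr, "required EGL extensions missing\n");
        return 1;
    }

    EGLDeviceEXT devices[16];
    EGLint count = 0;
    queryDevices(16, devices, &count);

    for (EGLint i = 0; i < count; i++) {
        // The NVIDIA device advertises EGL_NV_device_cuda in its extension string
        // when the driver is loaded; skip everything else.
        const char *exts = queryDeviceString(devices[i], EGL_EXTENSIONS);
        if (!exts || !strstr(exts, "EGL_NV_device_cuda"))
            continue;

        EGLDisplay dpy = getPlatformDisplay(EGL_PLATFORM_DEVICE_EXT, devices[i], NULL);
        EGLint major, minor;
        if (dpy != EGL_NO_DISPLAY && eglInitialize(dpy, &major, &minor)) {
            printf("initialised EGL %d.%d on device %d (vendor: %s)\n",
                   major, minor, i, eglQueryString(dpy, EGL_VENDOR));
            eglTerminate(dpy);
            return 0;
        }
    }
    fprintf(stderr, "no usable NVIDIA EGL device found\n");
    return 1;
}
```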
Unknown User • 2y ago: (message not public)
Oh. Which software are you referring to though? My code? (there are many layers of software in play here 🙂 )
Unknown User • 2y ago: (message not public)
No worries, this is not an easy problem. EGL is the interface layer that sits between OpenGL and the underlying platform, and it's what makes headless rendering possible here.
Unknown User • 2y ago: (message not public)
Finally got it, actually, by installing the right NVIDIA driver (535) on our Debian slim image. We're not doing serverless though, just pods for now.