RuntimeError: vk::PhysicalDevice::createDeviceUnique: ErrorExtensionNotPresent
27 Replies
Unknown User•4mo ago
Message Not Public
Sign In & Join Server To View
Not every pod is Vulkan-capable.
They don't test for it.
If this is AMD, it's certain not to have it; AMD is not good at supporting their cards, so their MI300X has no Vulkan driver.
If it's NVIDIA, it depends on the Docker container support the host has installed.
Btw @Dj, if you wish to know whether a host has Vulkan, to fix this issue across the NVIDIA fleet: use https://koboldai.org/runpodcpp and in KCPP_ARGS replace --usecublas with --usevulkan. If the template works, that host is good.
Hi, my pod is running 1 x RTX 4090
14 vCPU 50 GB RAM
runpod/pytorch:2.4.0-py3.11-cuda12.4.1-devel-ubuntu22.04.
I need the GPU for rendering in my simulation platform, but the system returned this error:
[2025-07-07 02:27:22.747] [svulkan2] [error] Vulkan is incompatible with your driver. You may not use the renderer to render, however, CPU resources will be still available.
scripts/bridge.sh: line 36: 609 Segmentation fault (core dumped)
Sorry, how should I check this exactly?
"https://koboldai.org/runpodcpp and in the KCPP_ARGS replace --usecublas with --usevulkan if the template works that host is good." ?
The link brings me to the GPU selection page
Yes

after this I should see KCPP_ARGS right?
Okay
Oh I see
I've created this template, but I got disconnected so many times.
I never experienced this on other templates.

My answer wasn't for him; it was so that RunPod can more easily fix it.
It's very host-bound.
Some RunPod hosts have Vulkan, some don't.
Basically, someone like Dj has to use my template to detect which hosts don't have it and fix it.
Working with our infra team to see what we can do to install Vulkan on every device that supports it :fbslightsmile:
It would be part of a bigger rollout for some infra stuff, so I can't guarantee a date or timeline
Vulkan itself is likely installed; it's specifically the NVIDIA Docker support that also passes through the Vulkan binaries.
Does the timeout indicate that the pod isn't good for Vulkan?
If you mean with my template, then probably. Mine is only meant for running KoboldCpp, not whatever you want to run; I merely posted it so that RunPod's team has an easy way of testing Vulkan.
We intend on testing Vulkan by just installing the SDK - https://docs.vulkan.org/guide/latest/checking_for_support.html#_ways_of_checking_for_vulkan
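If it helps, a quick scripted version of that "is Vulkan usable here?" check can just shell out to `vulkaninfo` (shipped with the Vulkan SDK/tools mentioned in that guide). This is only a sketch under the assumption that `vulkaninfo` is on PATH when Vulkan tooling is installed; it degrades gracefully when it isn't:

```python
import shutil
import subprocess


def check_vulkan():
    """Return vulkaninfo's summary output, or None if Vulkan is unusable here."""
    # No vulkaninfo binary at all -> Vulkan SDK/tools not installed on this host.
    if shutil.which("vulkaninfo") is None:
        return None
    try:
        result = subprocess.run(
            ["vulkaninfo", "--summary"],
            capture_output=True, text=True, timeout=30,
        )
    except subprocess.TimeoutExpired:
        return None
    # vulkaninfo exits non-zero when the loader can't find a working ICD/driver.
    return result.stdout if result.returncode == 0 else None


if __name__ == "__main__":
    summary = check_vulkan()
    print("Vulkan OK" if summary else "Vulkan unavailable on this host")
```

Running this inside each pod image would flag hosts where the loader finds no working driver, without needing a full KoboldCpp template.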
The NVIDIA ICD stuff has to be installed, and that's auto-injected.
In my template I ship the generic Mesa ICDs, but none of the RunPod GPUs support those; they need the NVIDIA driver to add its own.
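To see which ICDs a given container actually has, you can scan the standard Vulkan loader manifest directories; the NVIDIA container runtime normally injects an `nvidia_icd.json` into one of them. A minimal sketch (the directory list reflects the usual Linux loader search paths; adjust for your image):

```python
import json
from pathlib import Path

# Common Vulkan loader ICD manifest directories on Linux.
# The NVIDIA container runtime normally drops nvidia_icd.json into one of these.
ICD_DIRS = [
    Path("/usr/share/vulkan/icd.d"),
    Path("/etc/vulkan/icd.d"),
    Path("/usr/local/share/vulkan/icd.d"),
]


def find_icds():
    """Return (manifest_path, library_path) for each ICD manifest found."""
    found = []
    for d in ICD_DIRS:
        if not d.is_dir():
            continue
        for manifest in sorted(d.glob("*.json")):
            try:
                data = json.loads(manifest.read_text())
                lib = data.get("ICD", {}).get("library_path", "?")
            except (OSError, json.JSONDecodeError):
                lib = "?"
            found.append((str(manifest), lib))
    return found


if __name__ == "__main__":
    icds = find_icds()
    if not icds:
        print("No Vulkan ICD manifests found; the driver's ICD was not injected.")
    for path, lib in icds:
        print(f"{path} -> {lib}")
```

If only Mesa manifests show up (or none at all), that matches the failure mode above: the loader has no NVIDIA ICD to hand to the app, so device creation fails even though the Vulkan loader itself is present.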
GitHub: Mount Vulkan icd.d JSON file into the container · Issue #16 · NVI...
> The Vulkan loader uses JSON files to locate vendor implementations. For example, on my host, /usr/share/vulkan/icd.d/nvidia_icd.json contains { "file_format_version" : "1.0.0", ...