HW Acceleration with CUDA on NixOS

I've been trying to get hardware acceleration working on my machine for a while and haven't been able to figure it out. I'm running Immich with Docker Compose on a NixOS system with a GTX 1070. For a while I thought there was something wrong with my Nix config, but the Docker GPU examples here all work fine on my machine: https://docs.docker.com/desktop/features/gpu/. My docker-compose files are also exactly the same as the ones provided in the Immich documentation, so I'm not sure what my issue actually is. The immich_machine_learning container shows the following logs when I attempt to run an ML job:
[11/14/25 20:03:55] INFO Starting gunicorn 23.0.0
[11/14/25 20:03:55] INFO Listening at: http://[::]:3003 (3)
[11/14/25 20:03:55] INFO Using worker: immich_ml.config.CustomUvicornWorker
[11/14/25 20:03:55] INFO Booting worker with pid: 4
[11/14/25 20:03:56] INFO generated new fontManager
[11/14/25 20:03:56] INFO Started server process [4]
[11/14/25 20:03:56] INFO Waiting for application startup.
[11/14/25 20:03:56] INFO Created in-memory cache with unloading after 300s of inactivity.
[11/14/25 20:03:56] INFO Initialized request thread pool with 16 threads.
[11/14/25 20:03:56] INFO Application startup complete.
[11/14/25 20:32:13] INFO Loading visual model 'ViT-B-32__openai' to memory
[11/14/25 20:32:13] INFO Setting execution providers to ['CUDAExecutionProvider', 'CPUExecutionProvider'], in descending order of preference
[11/14/25 20:32:20] ERROR Worker (pid:4) was sent code 139!
[11/14/25 20:32:20] INFO Booting worker with pid: 47
[11/14/25 20:32:21] INFO Started server process [47]
... more errors below ...
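For what it's worth, code 139 is 128 + 11, i.e. the worker is being killed with SIGSEGV the moment the CUDA execution provider loads. GPU passthrough itself seems fine: this is roughly the smoke test I used (the plain ubuntu image works here because the NVIDIA container runtime injects nvidia-smi; on NixOS this also assumes the container toolkit is enabled, e.g. hardware.nvidia-container-toolkit.enable = true, though the option name may vary by release):

# Should print the GTX 1070 if Docker can reach the GPU
docker run --rm --gpus all ubuntu nvidia-smi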
7 Replies
Immich (7d ago)
👋 Hey @124274sashimi, thanks for reaching out to us. Please carefully read this message and follow the recommended actions. This will help us be more effective in our support effort and leave more time for building Immich.

References
- Container Logs: docker compose logs (docs)
- Container Status: docker ps -a (docs)
- Reverse Proxy: https://immich.app/docs/administration/reverse-proxy
- Code Formatting: https://support.discord.com/hc/en-us/articles/210298617-Markdown-Text-101-Chat-Formatting-Bold-Italic-Underline#h_01GY0DAKGXDEHE263BCAYEGFJA

Checklist
I have...
1. [x] verified I'm on the latest release (note that mobile app releases may take some time).
2. [x] read applicable release notes.
3. [x] reviewed the FAQs for known issues.
4. [x] reviewed GitHub for known issues.
5. [x] tried accessing Immich via local IP (without a custom reverse proxy).
6. [x] uploaded the relevant information (see below).
7. [x] tried an incognito window, disabled extensions, cleared mobile app cache, logged out and back in, different browsers, etc. as applicable.
(An item can be marked as "complete" by reacting with the appropriate number.)

Information
In order to be able to effectively help you, we need you to provide clear information to show what the problem is. The exact details needed vary per case, but here is a list of things to consider:
- Your docker-compose.yml and .env files.
- Logs from all the containers and their status (see above).
- All the troubleshooting steps you've tried so far.
- Any recent changes you've made to Immich or your system.
- Details about your system (both software/OS and hardware).
- Details about your storage (filesystems, type of disks, output of commands like fdisk -l and df -h).
- The version of the Immich server, mobile app, and other relevant pieces.
- Any other information that you think might be relevant.

Please paste files and logs with proper code formatting, and especially avoid blurry screenshots. Without the right information we can't work out what the problem is. Help us help you ;)

If this ticket can be closed you can use the /close command, and re-open it later if needed.
124274sashimi (OP, 7d ago)
docker-compose.yml
# -- snip --
  immich-machine-learning:
    container_name: immich_machine_learning
    # For hardware acceleration, add one of -[armnn, cuda, rocm, openvino, rknn] to the image tag.
    # Example tag: ${IMMICH_VERSION:-release}-cuda
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}-cuda
    extends: # uncomment this section for hardware acceleration - see https://docs.immich.app/features/ml-hardware-acceleration
      file: hwaccel.ml.yml
      service: cuda # set to one of [armnn, cuda, rocm, openvino, openvino-wsl, rknn] for accelerated inference - use the `-wsl` version for WSL2 where applicable
    volumes:
      - model-cache:/cache
    env_file:
      - .env
    restart: always
    healthcheck:
      disable: false
# -- snip --
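For reference, the cuda service that the extends block pulls in from hwaccel.ml.yml is essentially just an NVIDIA device reservation. Paraphrasing the upstream file from memory, so treat the exact fields as an approximation rather than a verbatim copy:

cuda:
  deploy:
    resources:
      reservations:
        devices:
          - driver: nvidia
            count: 1
            capabilities:
              - gpu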
hwaccel.ml.yml is unchanged from https://github.com/immich-app/immich/releases/latest/download/hwaccel.ml.yml

.env
# You can find documentation for all the supported env variables at https://docs.immich.app/install/environment-variables

# The location where your uploaded files are stored
UPLOAD_LOCATION=./library

# The location where your database files are stored. Network shares are not supported for the database
DB_DATA_LOCATION=./postgres

# To set a timezone, uncomment the next line and change Etc/UTC to a TZ identifier from this list: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List
TZ=America/New_York

# The Immich version to use. You can pin this to a specific version like "v2.1.0"
IMMICH_VERSION=v2

# Connection secret for postgres. You should change it to a random password
# Please use only the characters `A-Za-z0-9`, without special characters or spaces
DB_PASSWORD=postgres

# The values below this line do not need to be changed
###################################################################################
DB_USERNAME=postgres
DB_DATABASE_NAME=immich
I'm running Immich v2.2.3, and all Docker containers are running and healthy.
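In case it helps, this is roughly how I double-checked which image tag the variable expansion resolves to and that the ML container is actually up (standard Compose/Docker commands, nothing Immich-specific):

# Print the fully resolved compose config and pull out the image lines
docker compose config | grep 'image:'
# expected: ghcr.io/immich-app/immich-machine-learning:v2-cuda given IMMICH_VERSION=v2

# Confirm the container is running and healthy
docker ps --filter name=immich_machine_learning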
Immich (7d ago)
Successfully submitted, a tag has been added to inform contributors. ✅
124274sashimi (OP, 7d ago)
Here's the complete log from my immich_machine_learning container
124274sashimi (OP, 5d ago)
This issue seems to be the same one as reported here: https://github.com/immich-app/immich/issues/23450
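In the meantime, one quick check that might narrow things down is asking onnxruntime inside the container which execution providers it can actually see. This is a hypothetical one-liner, assuming python3 inside the container resolves to the /opt/venv interpreter that ships with the image:

docker exec -it immich_machine_learning \
  python3 -c "import onnxruntime as ort; print(ort.get_available_providers())"
# expect something like ['CUDAExecutionProvider', 'CPUExecutionProvider']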
Immich (5d ago)
[Issue] Machine Learning Errors after Update (immich-app/immich#23450)
124274sashimi (OP, 5d ago)
Running coredumpctl on my most recent core dump shows the following stack trace when the segfault occurs:
Stack trace of thread 587:
#0 0x00007f1107933564 n/a (/opt/venv/lib/python3.11/site-packages/onnxruntime/capi/libonnxruntime_providers_cuda.so + 0x333564)
#1 0x0000000000000000 n/a (n/a + 0x0)
ELF object binary architecture: AMD x86-64
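For anyone trying to reproduce this, the trace above came from the standard systemd-coredump workflow; the exact invocation is from memory, so double-check the flags on your system:

coredumpctl list        # find the entry for the crashed worker
coredumpctl info <pid>  # prints the stack trace shown above
coredumpctl debug <pid> # optionally open the core in gdb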
