How do I enable hardware acceleration for ML and transcoding?

I have a DS224+ with an Intel Celeron J4125. I read on Reddit that someone was able to enable HW acceleration on this NAS, but I forget how they did it.

Please help me set up this part of my compose.yml:

services:
  immich-server:
    container_name: immich_server
    user: 1026:100
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
    # extends:
    #   file: hwaccel.transcoding.yml
    #   service: cpu # set to one of [nvenc, quicksync, rkmpp, vaapi, vaapi-wsl] for accelerated transcoding
    volumes:
      # Do not edit the next line. If you want to change the media storage location on your system, edit the value of UPLOAD_LOCATION in the .env file
      - ${UPLOAD_LOCATION}:/data
      - /etc/localtime:/etc/localtime:ro
      - /volume1/homes/abhirham/Photos:/usr/src/app/external:ro

  immich-machine-learning:
    container_name: immich_machine_learning
    # For hardware acceleration, add one of -[armnn, cuda, rocm, openvino, rknn] to the image tag.
    # Example tag: ${IMMICH_VERSION:-release}-cuda
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}
    # extends: # uncomment this section for hardware acceleration - see https://immich.app/docs/features/ml-hardware-acceleration
    #   file: hwaccel.ml.yml
    #   service: cpu # set to one of [armnn, cuda, rocm, openvino, openvino-wsl, rknn] for accelerated inference - use the -wsl version for WSL2 where applicable
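Going by the comments in the file, my best guess at the filled-in version is below, since the J4125 has an Intel iGPU: quicksync for transcoding and openvino for ML, with the -openvino image tag added and the extends sections uncommented. I'm not sure this is right, and I assume the container also needs access to /dev/dri on the NAS, so please correct me:

```yaml
services:
  immich-server:
    container_name: immich_server
    user: 1026:100
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
    # uncommented; quicksync chosen because the J4125 has an Intel iGPU (my assumption)
    extends:
      file: hwaccel.transcoding.yml
      service: quicksync

  immich-machine-learning:
    container_name: immich_machine_learning
    # -openvino tag appended per the comment above (my assumption for Intel iGPUs)
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}-openvino
    extends:
      file: hwaccel.ml.yml
      service: openvino
```

Is that the right way to do it on DSM, and does DSM expose /dev/dri by default?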
