Unable to Utilize GPU (CUDA) for Machine Learning in Docker
Created by NappingSodion on 3/31/2025 in #help-desk-support
config:
...
  immich-machine-learning:
    container_name: immich_machine_learning
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities:
                - gpu
    volumes:
      - model-cache:/cache
    env_file:
      - .env
    restart: always
    healthcheck:
      disable: false
...
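The deploy.resources.reservations.devices block above is Compose's generic GPU request; it only works if the NVIDIA Container Toolkit on the host can actually hand a GPU to containers. As a sanity check that is independent of Immich, the same kind of device request can be issued through the Docker SDK for Python against a plain CUDA base image. A minimal sketch, assuming the docker Python package is installed on the host and using an example CUDA image tag:

# check_gpu_passthrough.py - run on the Docker host, not inside a container.
# Issues the same GPU device request as the compose snippet above
# (driver nvidia, count 1, capability "gpu") and runs nvidia-smi in a
# throwaway container. A printed GPU table means host-side passthrough works,
# so the problem is in the ML image or its configuration.
import docker

client = docker.from_env()
output = client.containers.run(
    "nvidia/cuda:12.4.1-base-ubuntu22.04",  # example tag; any CUDA base image works
    "nvidia-smi",
    device_requests=[
        docker.types.DeviceRequest(driver="nvidia", count=1, capabilities=[["gpu"]])
    ],
    remove=True,
)
print(output.decode())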
log:
Initializing Immich ML v1.130.3
INFO Starting gunicorn 23.0.0
INFO Listening at: http://[::]:3003 (9)
INFO Using worker: app.config.CustomUvicornWorker
INFO Booting worker with pid: 10
INFO Started server process [10]
INFO Waiting for application startup.
INFO Created in-memory cache with unloading after 300s of inactivity.
INFO Initialized request thread pool with 8 threads.
INFO Application startup complete.
INFO Loading detection model 'buffalo_l' to memory
INFO Setting execution providers to ['CPUExecutionProvider'], in descending order of preference
INFO Loading recognition model 'buffalo_l' to memory
INFO Setting execution providers to ['CPUExecutionProvider'], in descending order of preference
INFO Loading visual model 'ViT-B-32__openai' to memory
INFO Setting execution providers to ['CPUExecutionProvider'], in descending order of preference
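The repeated "Setting execution providers to ['CPUExecutionProvider']" lines mean ONNX Runtime inside the container never offered CUDA at all. Per the Immich hardware-acceleration docs, the CUDA build of the machine-learning service is published under the -cuda image tag suffix (e.g. ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}-cuda) rather than the default tag used in the compose snippet above. A quick way to confirm what the container's ONNX Runtime build supports is the sketch below; it assumes python3 is on PATH inside the immich_machine_learning container:

# check_providers.py - copy into the running container and execute, e.g.:
#   docker cp check_providers.py immich_machine_learning:/tmp/
#   docker exec immich_machine_learning python3 /tmp/check_providers.py
# Prints the execution providers this ONNX Runtime build ships with. If
# CUDAExecutionProvider is absent, no compose-level GPU reservation will
# make the models run on the GPU.
import onnxruntime as ort

print("device:", ort.get_device())                  # 'GPU' or 'CPU'
print("providers:", ort.get_available_providers())  # e.g. ['CPUExecutionProvider']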