Remote ML

I have no idea what I am doing wrong with remote ML. This is my compose file for the remote machine-learning host:
services:
  immich-machine-learning:
    image: ghcr.io/immich-app/immich-machine-learning:release-cuda
    container_name: immich-ml
    restart: unless-stopped
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=compute,utility
      - TRANSFORMERS_CACHE=/cache
    volumes:
      - ./cache:/cache
    ports:
      - "3003:3003"


The models load into memory, but GPU utilisation stays at 0%.
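
A quick sanity check, assuming the NVIDIA Container Toolkit is installed on the remote host, is to confirm the GPU is visible from inside the container (for example with docker exec immich-ml nvidia-smi). If the GPU is not visible there, the CUDA image will typically fall back to CPU inference, which would match models loading into memory while GPU utilisation stays at 0%.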