Hi, I set up an immich-machine-learning container on a remote machine so that my Raspberry Pi doesn't get overwhelmed.
Is this the correct output when ML is running?
```
[01/13/26 15:35:13] INFO Loading detection model 'PP-OCRv5_mobile' to memory
[01/13/26 15:35:13] INFO Setting execution providers to ['CPUExecutionProvider'], in descending order of preference
[01/13/26 15:35:13] INFO Loading visual model 'ViT-B-32__openai' to memory
[01/13/26 15:35:13] INFO Setting execution providers to ['CPUExecutionProvider'], in descending order of preference
[01/13/26 15:35:14] INFO Loading detection model 'buffalo_l' to memory
[01/13/26 15:35:14] INFO Setting execution providers to ['CPUExecutionProvider'], in descending order of preference
[01/13/26 15:35:14] INFO Loading recognition model 'buffalo_l' to memory
[01/13/26 15:35:14] INFO Setting execution providers to ['CPUExecutionProvider'], in descending order of preference
[01/13/26 15:35:15] INFO Loading recognition model 'PP-OCRv5_mobile' to memory
[01/13/26 15:35:15] INFO Setting execution providers to ['CPUExecutionProvider'], in descending order of preference
[INFO] 2026-01-13 15:35:15,640 [RapidOCR] base.py:22: Using engine_name: onnxruntime
```
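For context, here's roughly how I have it wired up. This is a minimal sketch, not my exact files: the image tag and port 3003 are what I understand to be the defaults, `REMOTE_HOST` is a placeholder, and I'm assuming `IMMICH_MACHINE_LEARNING_URL` is the right server-side setting (it can apparently also be set in the admin UI):

```yaml
# compose file on the remote machine: standalone ML container
services:
  immich-machine-learning:
    image: ghcr.io/immich-app/immich-machine-learning:release
    ports:
      - "3003:3003"        # default ML service port, exposed to the Pi
    volumes:
      - model-cache:/cache # persist downloaded models between restarts
    restart: always

volumes:
  model-cache:

# on the Raspberry Pi, the immich-server service then points at it, e.g.:
#   environment:
#     IMMICH_MACHINE_LEARNING_URL: http://REMOTE_HOST:3003
```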