Where can I find the logs?
Newbie question here.
I have been using Immich for a couple days, migrating my old pictures with the CLI.
Successful for the first 2 users, but it crashed on the third. All jobs seem to hang; only the "Extract Metadata" one pauses by itself.
I was considering looking at the logs to seek help, but can't find them 😦
If it's Docker, you can look at them with the docker logs command
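A minimal sketch of what that looks like, assuming the container name immich_server from the default compose file (adjust if yours differs):

```shell
# Tail the last 100 lines of the server container and follow new output
docker logs --tail 100 -f immich_server

# Or, with Docker Compose v2, follow logs for all Immich services at once
docker compose logs -f --tail 100
```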
thanks...
I guess this is my issue then:
Request #1682864931472: Sleeping for 4s and then retrying request...
Request #1682864931472: Request to Node 0 failed due to "undefined Request failed with HTTP code 503 | Server said: Not Ready or Lagging"
Request #1682864931472: Sleeping for 4s and then retrying request...
Request #1682864931472: Request to Node 0 failed due to "undefined Request failed with HTTP code 503 | Server said: Not Ready or Lagging"
Request #1682864931472: Sleeping for 4s and then retrying request...
Request #1682864931472: Request to Node 0 failed due to "undefined Request failed with HTTP code 503 | Server said: Not Ready or Lagging"
Request #1682864931472: Sleeping for 4s and then retrying request...
Request #1682864931472: Request to Node 0 failed due to "undefined Request failed with HTTP code 503 | Server said: Not Ready or Lagging"
Request #1682864931472: Sleeping for 4s and then retrying request...
Typesense is not up and running yet...
Anything I can do about it?
Restarting containers didn't help
Does your compose file have a depends_on section?
Yes, I basically copy-pasted the instructions
immich-server:
  container_name: immich_server
  image: ghcr.io/immich-app/immich-server:release
  entrypoint: ["/bin/sh", "./start-server.sh"]
  volumes:
    - ${UPLOAD_LOCATION}:/usr/src/app/upload
  env_file:
    - .env
  depends_on:
    - redis
    - database
    - typesense
  restart: always
You could maybe just wait a bit after the server starts?
Or start Typesense first and wait until it's ready before starting the server
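One way to automate that wait is a compose healthcheck, since a plain depends_on only waits for the container to start, not for Typesense to be ready. A sketch, assuming the typesense image ships curl (swap in wget or a TCP check if it doesn't; the image tag is only an example):

```yaml
typesense:
  image: typesense/typesense:0.24.0
  healthcheck:
    # Poll Typesense's /health endpoint until it answers OK
    test: ["CMD", "curl", "-f", "http://localhost:8108/health"]
    interval: 10s
    timeout: 5s
    retries: 5

immich-server:
  depends_on:
    typesense:
      # Start the server only after the healthcheck passes
      condition: service_healthy
```

Note that the long-form depends_on with condition requires a reasonably recent Docker Compose.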
It actually crashed halfway through import somehow.
Waiting a few hours didn't help. As the installation is pretty fresh, I'll spin up a new VM from scratch. Hopefully it was a one-off.