Immich•2w ago
Hossy

Immich Won't Start After Upgrade to v1.143.0

Today, I noticed that my immich-server instance was constantly restarting. I also noticed v1.143.0 was released yesterday and that is the version showing in the logs. I tried doing docker-compose down/up, but that didn't help. I also noticed the database instance takes a number of minutes to become fully healthy, but that might relate to the size of the db? I also tried reverting to v1.142.1 as reverting to the previous release then back to latest helped in a previous issue, but that did not resolve my problem. I've attached my docker-compose, .env, and immich_server logs.
40 Replies
Immich
Immich•2w ago
:wave: Hey @Hossy, Thanks for reaching out to us. Please carefully read this message and follow the recommended actions. This will help us be more effective in our support effort and leave more time for building Immich :immich:.
References
- Container Logs: docker compose logs docs
- Container Status: docker ps -a docs
- Reverse Proxy: https://immich.app/docs/administration/reverse-proxy
- Code Formatting: https://support.discord.com/hc/en-us/articles/210298617-Markdown-Text-101-Chat-Formatting-Bold-Italic-Underline#h_01GY0DAKGXDEHE263BCAYEGFJA
Checklist
I have...
1. :ballot_box_with_check: verified I'm on the latest release (note that mobile app releases may take some time).
2. :ballot_box_with_check: read applicable release notes.
3. :ballot_box_with_check: reviewed the FAQs for known issues.
4. :ballot_box_with_check: reviewed Github for known issues.
5. :ballot_box_with_check: tried accessing Immich via local ip (without a custom reverse proxy).
6. :ballot_box_with_check: uploaded the relevant information (see below).
7. :blue_square: tried an incognito window, disabled extensions, cleared mobile app cache, logged out and back in, different browsers, etc. as applicable
(an item can be marked as "complete" by reacting with the appropriate number)
Information
In order to be able to effectively help you, we need you to provide clear information to show what the problem is. The exact details needed vary per case, but here is a list of things to consider:
- Your docker-compose.yml and .env files.
- Logs from all the containers and their status (see above).
- All the troubleshooting steps you've tried so far.
- Any recent changes you've made to Immich or your system.
- Details about your system (both software/OS and hardware).
- Details about your storage (filesystems, type of disks, output of commands like fdisk -l and df -h).
- The version of the Immich server, mobile app, and other relevant pieces.
- Any other information that you think might be relevant.
Please paste files and logs with proper code formatting, and especially avoid blurry screenshots. Without the right information we can't work out what the problem is. Help us help you ;)
If this ticket can be closed you can use the /close command, and re-open it later if needed.
Successfully submitted, a tag has been added to inform contributors. :white_check_mark:
Alex Tran
Alex Tran•2w ago
Can you try removing the image and repulling it?
Hossy
HossyOP•2w ago
sure, one sec. that didn't resolve it either. still restarting
Alex Tran
Alex Tran•2w ago
@mertalev Any ideas about this issue?
mertalev
mertalev•2w ago
The docker compose file is very outdated. Have you been following the release notes and tracking breaking changes?
Hossy
HossyOP•2w ago
@mertalev I only just learned of those today. I don't recall seeing any UI/app notifications about them. I noticed the change to redis and postgres. Since redis has no exported volumes, I don't suspect changing that right now would be impactful, but I'm wondering if trying to change postgres right now, given that the immich-server isn't starting (with no error output), could make things worse. Thoughts?
mertalev
mertalev•2w ago
The db image change doesn't seem related to what you're seeing, at least from what I can tell. It seems to get past the database initialization and gets stuck during the storage mount checks. The next logs are supposed to be
LOG [Microservices:StorageService] Verifying system mount folder checks, current state: {"mountChecks":{"thumbs":true,"upload":true,"backups":true,"library":true,"profile":true,"encoded-video":true}}
LOG [Microservices:StorageService] Successfully verified system mount folder checks
how big is the immich library and what is it stored on?
Hossy
HossyOP•2w ago
checking size... UPLOAD_LOCATION is 1.5T, stored locally on a Synology RS3617xs+ NAS:
247G /volume1/photo/# immich/encoded-video
4.0K /volume1/photo/# immich/library
1.2T /volume1/photo/# immich/upload
4.0K /volume1/photo/# immich/profile
59G /volume1/photo/# immich/thumbs
14G /volume1/photo/# immich/backups
1.5T /volume1/photo/# immich/
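(for reference, per-directory sizes in that shape usually come from something like the following du invocation; the exact command isn't shown above, so this is just a guess, with the path quoted because of the space and # in the directory name)
du -h -d 1 "/volume1/photo/# immich/"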
mertalev
mertalev•2w ago
how much ram does the server have?
Hossy
HossyOP•2w ago
128G
mertalev
mertalev•2w ago
that has to be the beefiest synology i've seen lol
Hossy
HossyOP•2w ago
šŸ™‚
mertalev
mertalev•2w ago
can you try restarting the database?
Hossy
HossyOP•2w ago
i've dcdown'd the whole thing multiple times. actually, I changed immich-server to use conditions so it stops trying to start before the db is ready now:
depends_on:
  redis:
    condition: service_healthy
  database:
    condition: service_healthy
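(note: condition: service_healthy only takes effect if the dependency actually defines a healthcheck, either in its image or in the compose file. a minimal sketch of what that could look like, assuming the default service names, that the redis image ships redis-cli, and the DB_USERNAME/DB_DATABASE_NAME values from .env:)
  database:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USERNAME} -d ${DB_DATABASE_NAME} || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 5
  redis:
    healthcheck:
      test: ["CMD-SHELL", "redis-cli ping | grep -q PONG"]
      interval: 10s
      timeout: 5s
      retries: 5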
starting with IMMICH_LOG_LEVEL=debug added to .env. maybe it will give us more? nope šŸ™ trying verbose. that got me two more lines:
root@Hossy-NAS01:/volume1/docker/immich# docker-compose logs -ft immich-server
immich_server | 2025-09-24T01:19:15.716624159Z Initializing Immich v1.143.1
immich_server | 2025-09-24T01:19:15.731044226Z skipping libmimalloc - path not found /usr/lib/x86_64-linux-gnu/libmimalloc.so.2
immich_server | 2025-09-24T01:19:16.036029563Z Detected CPU Cores: 12
immich_server | 2025-09-24T01:19:46.038732213Z Starting api worker
immich_server | 2025-09-24T01:19:46.118995858Z Starting microservices worker
immich_server | 2025-09-24T01:19:46.595959762Z Healthcheck failed: exit code 7
immich_server | 2025-09-24T01:19:51.365963427Z [Nest] 7 - 09/23/2025, 8:19:51 PM LOG [Microservices:EventRepository] Initialized websocket server
immich_server | 2025-09-24T01:19:51.548886590Z [Nest] 7 - 09/23/2025, 8:19:51 PM WARN [Microservices:DatabaseService] DEPRECATION WARNING: The pgvecto.rs extension is deprecated and support for it will be removed very soon.
immich_server | 2025-09-24T01:19:51.549221152Z See https://immich.app/docs/install/upgrading#migrating-to-vectorchord in order to switch to the VectorChord extension instead.
immich_server | 2025-09-24T01:19:51.874522748Z [Nest] 40 - 09/23/2025, 8:19:51 PM LOG [Api:EventRepository] Initialized websocket server
immich_server | 2025-09-24T01:20:17.596234788Z Healthcheck failed: exit code 7
Healthcheck failed: exit code 7 repeated again
mertalev
mertalev•2w ago
is it terminating the server before it has enough time to start up?
Hossy
HossyOP•2w ago
then docker-compose gives me immich_server exited with code 0. lemme check the healthcheck for the image... it isn't defined in dcyml. launching with start_period: 300s, it's running longer, but still printing out Healthcheck failed: exit code 7, which I presume is dockerd calling immich-healthcheck. Now, I'm distantly and vaguely recalling an issue (and with my memory, it may not have even been with immich), but I was running into a problem starting a docker app and I set the startup delay to something crazy like an hour and I think it (whatever "it" was) took about 37 minutes to start. the fact it hasn't died yet points to dockerd killing immich-server due to hc failure and not immich itself, so there must be something immich is trying to do on startup that it isn't sharing in the logs
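(for reference, extending the grace period in compose without replacing the image's own check should look roughly like this; omitting test is meant to inherit the image's default healthcheck command, but if it doesn't on your engine version you'd have to restate it explicitly)
  immich-server:
    healthcheck:
      start_period: 300s
      interval: 30s
      timeout: 10s
      retries: 3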
mertalev
mertalev•2w ago
try changing the db image, if only because the new extension is lighter on resources
Hossy
HossyOP•2w ago
i was planning on it, once I could get a good backup šŸ™‚ damn it, dockerd killed the container. changed start_period to 3600s. interestingly, on the previous start, it took 5m19s to display:
[Nest] 7 - 09/23/2025, 8:30:45 PM LOG [Microservices:Migrations] Converting database file paths from relative to absolute (source=upload/*, target=/usr/src/app/upload/*)
mertalev
mertalev•2w ago
geez, your db is horrifically slow
Hossy
HossyOP•2w ago
I have 18 spindles on RAID 6 accelerated by read/write SSD cache. Volume Util % is not maxed either. not sure what could make it slow(er). SSD cache hit rate is 99% with ~1 TB free
mertalev
mertalev•2w ago
can you check htop? something isn't adding up
Hossy
HossyOP•2w ago
sure... nothing is standing out at me. all normal in htop. Volume Write IOPS is rather high as of 4 am, which corresponds with my scheduled task to pull and dcup immich. Normally I'm at ~230/s and now I'm at about 3k/s. I suspect that would drop to the floor if I stopped immich-server. it's postgres writing to disk like MAD
mertalev
mertalev•2w ago
the server isn't doing anything interesting at this stage of bootstrap. it's all postgres
Hossy
HossyOP•2w ago
(seen with iotop) it's walwriter and occasionally checkpointer. on the upside, the immich-server container has remained running without quitting, but no new logs other than healthcheck failures after:
[Nest] 6 - 09/23/2025, 8:38:04 PM LOG [Microservices:Migrations] Converting database file paths from relative to absolute (source=upload/*, target=/usr/src/app/upload/*)
so, an issue with immich-healthcheck not considering long-running migration/conversion operations, maybe?
mertalev
mertalev•2w ago
so i guess it's just truly that slow at doing this migration. try changing your db command to this:
command: >-
  postgres
  -c shared_preload_libraries=vectors.so
  -c 'search_path="$$user", public, vectors'
  -c logging_collector=on
  -c max_wal_size=10GB
  -c shared_buffers=2GB
  -c work_mem=32MB
  -c maintenance_work_mem=1GB
  -c wal_compression=on
then bring down the stack, start the db alone, and run the sql query vacuum (full, analyze); then start everything else (or restart the stack). i suspect your db is in a very degraded state at this point
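(roughly, and assuming the default immich_postgres container name plus the DB user/database names from .env, that sequence could look like:)
docker-compose down
docker-compose up -d database
# run the vacuum inside the db container; substitute your own DB_USERNAME / DB_DATABASE_NAME
docker exec -it immich_postgres psql -U postgres -d immich -c 'VACUUM (FULL, ANALYZE);'
docker-compose up -d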
Hossy
HossyOP•2w ago
i should be able to leave redis and ML running, ya?
mertalev
mertalev•2w ago
that's fine yeah
Hossy
HossyOP•2w ago
vacuum running
mertalev
mertalev•2w ago
that's gonna take a while to finish, but you should have fewer issues after it's done
Hossy
HossyOP•2w ago
thanks. i'll monitor it. so far, running for 4 minutes. well, write io dropped and read io went through the roof šŸ¤”
mertalev
mertalev•2w ago
did you just run vacuum instead of vacuum (full, analyze)?
Hossy
HossyOP•2w ago
no, i ran vacuum (full, analyze); read io went back down and write io is high again
mertalev
mertalev•2w ago
the table is probably pretty bloated with dead tuples so it has a lot of garbage to scan through
Hossy
HossyOP•2w ago
VACUUM

Query returned successfully in 10 min 6 secs.
mertalev
mertalev•2w ago
alright, now you can start the server
Hossy
HossyOP•2w ago
yup, starting it. had to stop postgres bc i didn't run detached and forgot the key sequence to detach from console. [Nest] 7 - 09/23/2025, 9:11:37 PM LOG [Microservices:Migrations] Converting database file paths from relative to absolute (source=upload/*, target=/usr/src/app/upload/*) appeared quickly this time. write io is high, but a third of what it was. and we're STARTING. webui is up running v1.143.1: 166,669 photos, 7,102 videos, 1.2T šŸ˜… do i maybe need to retrigger any of the jobs or would immich remember where it left off from a previous run? ok, logs got a little much... restarting with verbose logging turned off. ha, i clicked all of the Missing buttons and it queued a bunch of work. thank you so much for your help! @mertalev in the end, it was a long-running migration/conversion op that immich-healthcheck wasn't aware of, which caused dockerd to keep restarting immich-server thinking it was broken. compounded by a db in need of a vacuum
mertalev
mertalev•2w ago
nice, glad you got it working!
Sergey Katsubo
Sergey Katsubo•2w ago
What is your docker setup/engine/info? Vanilla docker engine does not restart unhealthy containers. Do you have some kind of auto-heal container / service / cron running? (Maybe you have it as a part of another stack? Or configured it in the past and forgot about it?)
Hossy
HossyOP•7d ago
That's a good question. I cannot find anything that would point to Docker restarting the container. This is standalone Docker running on Synology.
Docker version 24.0.2, build 610b8d0
Docker Compose version v2.20.1-6047-g6817716
The RestartPolicy was set to always, but that shouldn't cause a healthcheck restart. For testing, I constructed a container that would purposely fail healthcheck:
docker run -d \
--name test_unhealthy \
--health-cmd "false" \
--health-interval 3s \
--health-timeout 1s \
--health-retries 3 \
--health-start-period 1s \
alpine sleep 3600
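(One way to confirm whether the engine actually restarts it or it just sits there unhealthy is to poll docker inspect; State.Health.Status and RestartCount are standard inspect fields:)
docker inspect --format '{{.State.Status}} health={{.State.Health.Status}} restarts={{.RestartCount}}' test_unhealthy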
I see this log entry, and docker_event_watcherd is a binary at /var/packages/ContainerManager/target/tool/docker_event_watcherd, but it doesn't necessarily mean that process is the originator of the kill signal.
Sep 27 00:20:27 Hossy-NAS01 docker_event_watcherd[28229]: docker_event_watcherd.cpp:165 {"status":"kill","id":"21f0de0c1cf49d40b0428f1a223e93ae3997002f22c2bb8a037fce33f783ecea","from":"alpine","Type":"container","Action":"kill","Actor":{"ID":"21f0de0c1cf49d40b0428f1a223e93ae3997002f22c2bb8a037fce33f783ecea","Attributes":{"image":"alpine","name":"test_unhealthy","signal":"15"}},"scope":"local","time":1758950427,"timeNano":1758950427805272198}
21f0de0c1cf49d40b0428f1a223e93ae3997002f22c2bb8a037fce33f783ecea is the alpine container id running sleep. My best guess is that Synology Container Manager is doing it.
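(docker events can also show the kill and the signal in real time, e.g.:)
docker events --filter type=container --filter event=kill --format '{{.Time}} {{.Actor.Attributes.name}} signal={{.Actor.Attributes.signal}}'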
Sergey Katsubo
Sergey Katsubo•7d ago
Interesting. You can use something like https://access.redhat.com/solutions/165993 ("How to track who/what is sending SIGKILL to a process?") with auditctl:
# install
apt-get install -y auditd
# add rules
auditctl -a exit,always -F arch=b64 -S kill -k who_killed
# check rules
auditctl -l
# view logs
tail -f /var/log/audit/audit.log | grep --line-buffered who_killed | grep -v postgres

# output
# type=SYSCALL msg=audit(1758952437.445:327): arch=c000003e syscall=62 success=yes exit=0 a0=1d9cf4 a1=f a2=0 a3=7b8dd2e54ac0 items=0 ppid=1936526 pid=1936527 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts3 ses=337 comm="bash" exe="/usr/bin/bash" subj=unconfined key="who_killed"ARCH=x86_64 SYSCALL=kill AUID="sk" UID="root" GID="root" EUID="root" SUID="root" FSUID="root" EGID="root" SGID="root" FSGID="root"