Immich Won't Start After Upgrade to v1.143.0
Today, I noticed that my immich-server instance was constantly restarting. I also noticed v1.143.0 was released yesterday and that is the version showing in the logs.
I tried doing docker-compose down/up, but that didn't help. I also noticed the database instance takes a number of minutes to become fully healthy, but that might relate to the size of the db?
I also tried reverting to v1.142.1 as reverting to the previous release then back to latest helped in a previous issue, but that did not resolve my problem.
I've attached my docker-compose, .env, and immich_server logs.
:wave: Hey @Hossy,
Thanks for reaching out to us. Please carefully read this message and follow the recommended actions. This will help us be more effective in our support effort and leave more time for building Immich :immich:.
References
- Container Logs: docker compose logs (docs)
- Container Status: docker ps -a (docs)
- Reverse Proxy: https://immich.app/docs/administration/reverse-proxy
- Code Formatting: https://support.discord.com/hc/en-us/articles/210298617-Markdown-Text-101-Chat-Formatting-Bold-Italic-Underline#h_01GY0DAKGXDEHE263BCAYEGFJA
Checklist
I have...
1. :ballot_box_with_check: verified I'm on the latest release (note that mobile app releases may take some time).
2. :ballot_box_with_check: read applicable release notes.
3. :ballot_box_with_check: reviewed the FAQs for known issues.
4. :ballot_box_with_check: reviewed Github for known issues.
5. :ballot_box_with_check: tried accessing Immich via local ip (without a custom reverse proxy).
6. :ballot_box_with_check: uploaded the relevant information (see below).
7. :blue_square: tried an incognito window, disabled extensions, cleared mobile app cache, logged out and back in, different browsers, etc. as applicable
(an item can be marked as "complete" by reacting with the appropriate number)
Information
In order to be able to effectively help you, we need you to provide clear information to show what the problem is. The exact details needed vary per case, but here is a list of things to consider:
- Your docker-compose.yml and .env files.
- Logs from all the containers and their status (see above).
- All the troubleshooting steps you've tried so far.
- Any recent changes you've made to Immich or your system.
- Details about your system (both software/OS and hardware).
- Details about your storage (filesystems, type of disks, output of commands like fdisk -l and df -h).
- The version of the Immich server, mobile app, and other relevant pieces.
- Any other information that you think might be relevant.
Please paste files and logs with proper code formatting, and especially avoid blurry screenshots.
Without the right information we can't work out what the problem is. Help us help you ;)
If this ticket can be closed you can use the /close command, and re-open it later if needed.
Successfully submitted, a tag has been added to inform contributors. :white_check_mark:
Can you try removing the image and re-pulling it?
sure, one sec
that didn't resolve it either. still restarting
@mertalev Any ideas about this issue?
The docker compose file is very outdated. Have you been following the release notes and tracking breaking changes?
@mertalev I only just learned of those today. I don't recall seeing any UI/app notifications about them. I noticed the change to redis and postgres. Since redis has no exported volumes, I don't suspect changing that right now would be impactful, but I'm wondering if trying to change postgres right now, given that the immich-server isn't starting (with no error output) could make things worse. Thoughts?
The db image change doesn't seem related to what you're seeing, at least from what I can tell. It seems to get past the database initialization and gets stuck during the storage mount checks. The next logs are supposed to be
how big is the immich library and what is it stored on?
checking size...
UPLOAD_LOCATION is 1.5T
Stored locally on a Synology NAS RS3617xs+
how much ram does the server have?
128G
that has to be the beefiest synology i've seen lol
can you try restarting the database?
i've dcdown'd the whole thing multiple times
actually changed immich-server to use conditions so it stops trying to start before the db is ready now
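For anyone reading along, a minimal sketch of what that looks like in docker-compose.yml — service names (immich-server, database, redis) assume the default Immich compose file:

```yaml
# Sketch: make immich-server wait for the database healthcheck to pass
# before starting, instead of racing it on boot.
services:
  immich-server:
    depends_on:
      database:
        condition: service_healthy
      redis:
        condition: service_started
```

This only helps if the database service actually defines a healthcheck; otherwise service_healthy never resolves.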
starting with IMMICH_LOG_LEVEL=debug added to .env. maybe it will give us more?
nope
trying verbose
I got two more lines.
Healthcheck failed: exit code 7
repeated again
is it terminating the server before it has enough time to start up?
then docker-compose gives me
immich_server exited with code 0
lemme check healthcheck for the image. it isn't defined in dcyml
launching with start_period: 300s
it's running longer. still printing out Healthcheck failed: exit code 7
which I presume is dockerd calling immich-healthcheck.
Now, I'm distantly and vaguely recalling an issue (and with my memory, it may not have even been with immich), but I was running into a problem starting a docker app and I set the startup delay to something crazy like an hour and I think it (whatever "it" was) took about 37 minutes to start
the fact it hasn't died yet points to dockerd killing immich-server due to hc failure and not immich itself
so there must be something immich is trying to do on startup that it isn't sharing in the logs
try changing the db image, if only because the new extension is lighter on resources
i was planning on it, once I could get a good backup
damn it, dockerd killed the container
changed start_period to 3600s
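In compose terms, that override is a small healthcheck fragment on the server service (a sketch; when test is left unset, Docker keeps the image's built-in healthcheck command and only the listed fields are overridden):

```yaml
# Sketch: give the container a long grace period so dockerd doesn't count
# healthcheck failures (and kill the container) during a slow startup
# migration. 3600s matches the value tried above; tune to your setup.
services:
  immich-server:
    healthcheck:
      start_period: 3600s
```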
interestingly, on the previous start, it took 5m19s to display
geez, your db is horrifically slow
I have 18 spindles on RAID 6 accelerated by read/write SSD cache. Volume Util % is not maxed either.
not sure what could make it slow(er)
SSD cache hit rate is 99% with ~1 TB free
can you check htop? something isn't adding up
sure...
nothing is standing out at me. all normal in htop. Volume Write IOPS is rather high as of 4 am, which corresponds with my scheduled task to pull and dcup immich. Normally I'm at ~230/s and now I'm at ~3k/s. I suspect that would drop to the floor if I stopped immich-server.
it's postgres
writing to disk like MAD
the server isn't doing anything interesting at this stage of bootstrap. it's all postgres
(seen with iotop)
walwriter and occasionally checkpointer
on the upside, immich-server container has remained running without quitting, but no new logs other than healthcheck failure after
so, an issue with immich-healthcheck not considering long-running migration/conversion operations, maybe?
so i guess it's just truly that slow at doing this migration. try changing your db command to this:
then bring down the stack and start the db alone and run the sql query
vacuum (full, analyze);
then start everything else (or restart the stack)
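For reference, the query as it would be run against the Immich database (connection details here are assumptions based on the default compose file — adjust the container name, user, and database to your setup):

```sql
-- Run while only the database container is up, e.g. via something like:
--   docker exec -it immich_postgres psql -U postgres -d immich
-- VACUUM FULL rewrites the tables to reclaim dead-tuple space (it takes
-- an exclusive lock and needs free disk for the rewrite);
-- ANALYZE refreshes the planner statistics afterwards.
VACUUM (FULL, ANALYZE);
```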
i suspect your db is in a very degraded state at this point
i should be able to leave redis and ML running, ya?
that's fine yeah
vacuum running
that's gonna take a while to finish, but you should have fewer issues after it's done
thanks. i'll monitor it. so far, running for 4 minutes
well, write io dropped and read io went through the roof
did you just run vacuum instead of vacuum (full, analyze)?
no, i ran vacuum (full, analyze);
read io went back down and write io is high again
the table is probably pretty bloated with dead tuples so it has a lot of garbage to scan through
alright, now you can start the server
yup, starting it. had to stop postgres bc i didn't run detached and forgot the key sequence to detach from console
[Nest] 7 - 09/23/2025, 9:11:37 PM LOG [Microservices:Migrations] Converting database file paths from relative to absolute (source=upload/*, target=/usr/src/app/upload/*)
appeared quickly this time
write io is high, but a third of what it was
and we're STARTING
webui is up
running v1.143.1
166,669 photos
7,102 videos
1.2T
do i maybe need to retrigger any of the jobs or would immich remember where it left off from a previous run?
ok, logs got a little much... restarting with verbose logging turned off ha
i clicked all of the Missing buttons and it queued a bunch of work
thank you so much for your help! @mertalev
in the end, a long-running migration/conversion op that immich-healthcheck wasn't aware of caused dockerd to keep restarting immich-server thinking it was broken. compounded by a db in need of a vacuum
nice, glad you got it working!
What is your docker setup/engine/info?
Vanilla docker engine does not restart unhealthy containers.
Do you have some kind of auto-heal container / service / cron running? (Maybe you have it as a part of another stack? Or configured it in the past and forgot about it?)
That's a good question. I cannot find anything that would point to Docker restarting the container. This is standalone Docker running on Synology.
The RestartPolicy was set to always, but that shouldn't cause a healthcheck restart.
For testing, I constructed a container that would purposely fail healthcheck:
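The actual test container isn't shown above, but something along these lines reproduces a permanently unhealthy container (a sketch; names and timings are arbitrary):

```yaml
# Sketch: a container whose healthcheck always fails, useful for observing
# whether anything on this host restarts/kills unhealthy containers.
services:
  always-unhealthy:
    image: alpine
    command: sleep infinity
    healthcheck:
      test: ["CMD", "false"]   # always exits non-zero -> container goes unhealthy
      interval: 10s
      retries: 3
    restart: always
```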
I see this log entry, and docker_event_watcherd is a binary at /var/packages/ContainerManager/target/tool/docker_event_watcherd, but it doesn't necessarily mean that process is the originator of the kill signal.
21f0de0c1cf49d40b0428f1a223e93ae3997002f22c2bb8a037fce33f783ecea is the alpine container id running sleep
My best guess is that Synology Container Manager is doing it
Interesting
You can use smth like https://access.redhat.com/solutions/165993 "How to track who/what is sending SIGKILL to a process?"
auditctl
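Per that article, the idea is an audit rule on the kill syscall so you can see which process sent the signal (a sketch; run as root with auditd active, and note the key name here is arbitrary):

```shell
# Sketch: log every kill() syscall system-wide under a searchable key,
# reproduce the container kill, then search the audit log by that key.
auditctl -a exit,always -F arch=b64 -S kill -k kill_signals
# ...wait for the container to be killed again, then:
ausearch -k kill_signals -i
```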