Immich · 15h ago
cark

Immich service periodically runs out of memory and stops during thumbnail generation

Hello - I have been using Immich for a while on my home server (Radxa Rock 5B, 4GB) and it has been working mostly fine. However, I recently moved it over to an SSD, which seemed to go correctly, until I started scanning an existing external library with about 800GB worth of pictures (mixed formats), roughly 42k files. The trouble comes when I scan these directories: initially Immich would hog all the memory and freeze the system.

That took a while to figure out and fix. Forcing resource limits in Compose (2GB for the immich service, 512MB each for redis and Postgres) stopped that, which let me bring the thumbnail generation concurrency down to just 2 and disable ML. The initial issue seemed to be that I had mistakenly placed the external library directory in the same location as the 'db' folder, which had quickly turned into a 15+ million file folder. I started with a fresh database and that took care of it, while also removing and re-importing all images.

So I figured, let's start the import process again, from the right directory this time, with all memory limits removed and the queue settings reset to defaults. That seemed to work well, until after a few minutes the server froze and Immich was consuming all available memory again. To get some semblance of performance back I have had to reintroduce the aforementioned memory limits in the Compose file and lower the thumbnail generation concurrency back to 2.

If I give it more memory it'll end up hogging it all and crashing the box; if I increase the concurrency it'll hit the memory limit and the immich service will crash (showing an unknown version in the web UI), start all over again, hit the memory limit again, crash, rinse and repeat. With concurrency at just 2 it takes much longer to get into this cycle, but it still happens. Any ideas?
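For reference, the limits described above look roughly like this in Compose. This is a sketch, not the exact file: the service names are assumed from the default Immich docker-compose.yml, and mem_limit is just one way to express the cap.

```yaml
# Sketch of the limits described above; service names assumed from the
# default Immich docker-compose.yml (immich-server, redis, database).
services:
  immich-server:
    mem_limit: 2g     # cap the main Immich service at 2GB
  redis:
    mem_limit: 512m
  database:
    mem_limit: 512m   # a tight Postgres cap is risky - see discussion below
```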
14 Replies
Immich · 15h ago
:wave: Hey @cark, Thanks for reaching out to us. Please carefully read this message and follow the recommended actions. This will help us be more effective in our support effort and leave more time for building Immich :immich:.

References
- Container Logs: docker compose logs docs
- Container Status: docker ps -a docs
- Reverse Proxy: https://immich.app/docs/administration/reverse-proxy
- Code Formatting: https://support.discord.com/hc/en-us/articles/210298617-Markdown-Text-101-Chat-Formatting-Bold-Italic-Underline#h_01GY0DAKGXDEHE263BCAYEGFJA

Checklist
I have...
1. :blue_square: verified I'm on the latest release (note that mobile app releases may take some time).
2. :blue_square: read applicable release notes.
3. :blue_square: reviewed the FAQs for known issues.
4. :blue_square: reviewed GitHub for known issues.
5. :blue_square: tried accessing Immich via local IP (without a custom reverse proxy).
6. :blue_square: uploaded the relevant information (see below).
7. :blue_square: tried an incognito window, disabled extensions, cleared mobile app cache, logged out and back in, different browsers, etc. as applicable
(an item can be marked as "complete" by reacting with the appropriate number)

Information
In order to be able to effectively help you, we need you to provide clear information to show what the problem is. The exact details needed vary per case, but here is a list of things to consider:
- Your docker-compose.yml and .env files.
- Logs from all the containers and their status (see above).
- All the troubleshooting steps you've tried so far.
- Any recent changes you've made to Immich or your system.
- Details about your system (both software/OS and hardware).
- Details about your storage (filesystems, type of disks, output of commands like fdisk -l and df -h).
- The version of the Immich server, mobile app, and other relevant pieces.
- Any other information that you think might be relevant.

Please paste files and logs with proper code formatting, and especially avoid blurry screenshots. Without the right information we can't work out what the problem is. Help us help you ;)

If this ticket can be closed you can use the /close command, and re-open it later if needed.
bo0tzz · 15h ago
You're on a pretty memory-constrained system. 4GB for Immich alone is the bottom end of what we usually recommend, and for a library of this size it probably needs more than that. You can try turning the concurrency of all the queues down to 1, and pausing them to run only one queue at a time. It'll take a while though
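Concurrency and pause/resume both live in the admin UI (Administration > Settings > Job Settings, plus the Jobs page). For scripting the one-queue-at-a-time approach, here's a rough sketch against the jobs API the web UI uses; the PUT /api/jobs/{name} route, the job names, and the admin API key requirement are assumptions to verify against your Immich version's API docs.

```sh
# Sketch only: pause the heavy queues, then let one run at a time.
# Assumes a recent Immich jobs API (PUT /api/jobs/{name}) and an API key
# with admin rights - check routes and job names against your version.
IMMICH=http://localhost:2283
KEY=your-admin-api-key   # hypothetical placeholder

for q in metadataExtraction smartSearch faceDetection videoConversion; do
  curl -s -X PUT "$IMMICH/api/jobs/$q" \
    -H "x-api-key: $KEY" -H "Content-Type: application/json" \
    -d '{"command": "pause", "force": false}'
done

# Later, resume just the thumbnail queue:
curl -s -X PUT "$IMMICH/api/jobs/thumbnailGeneration" \
  -H "x-api-key: $KEY" -H "Content-Type: application/json" \
  -d '{"command": "resume", "force": false}'
```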
cark (OP) · 15h ago
Thanks for looking into it! Interestingly, Immich seems to ignore the queue settings at times - for instance, right now it's sitting at 2 concurrent thumbnail jobs, but earlier it was at 6 despite the limit being set to 2. The docker logs don't show anything other than the startup sequence every time the service crashes and restarts, and the container status shows as running, not much else
Zeus · 15h ago
With such a big library on 4GB I don’t think there is any path other than 1 concurrency and manual pausing. The 2 vs 6 thing is strange, may be a display bug of some kind. This will not be a good experience, unfortunately; the resources are just not enough
cark (OP) · 15h ago
Interesting - I figured it'd be more processor-limited than memory-limited
Zeus · 15h ago
Not really. Processes can always go slower; OOM killing can’t be “waited” for. It's similar to overprovisioning of VMs: CPU is no big deal, RAM is bad times
cark (OP) · 15h ago
Why does the library size impact memory usage of the main immich service? I'd have imagined it would impact the database service more, but I'm only a network guy, so you might have to chew that over a bit for me :p
Zeus · 15h ago
To be clear, 4GB is the bare minimum for ANY size library; it already doesn’t work well. The scanning process is probably a lot more intense. Certainly the DB is using more, which leaves less for the immich server. (Also, FYI: constraining PG to 512MB may lead to some very bad, corruption-inducing results)
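If the goal is to keep Postgres from ballooning without a hard container cap, one gentler option is bounding Postgres's own allocations. A sketch, assuming the database service from the default Immich compose file; the values are illustrative, and if your compose file already passes a command to the database (some Immich versions do, to load the vector extension), append the flags to it rather than replacing it.

```yaml
services:
  database:
    # Bound Postgres's own memory instead of relying only on a hard
    # container limit, so the kernel OOM killer is less likely to hit it.
    # Values are illustrative; tune for your workload.
    command: postgres -c shared_buffers=128MB -c work_mem=4MB -c maintenance_work_mem=64MB
```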
cark (OP) · 15h ago
I see - thanks. Reckon 8GB is enough? Postgres hasn't been hitting that limit, so I guess I've been lucky in that regard - I've removed the limit and it doesn't seem to have any impact on usage
Zeus · 15h ago
8 is much better for Immich, yes. Obviously it depends on what else you have running, but that’s good for Immich and the OS
bo0tzz · 15h ago
> Postgres hasn't been hitting that limit

Postgres will adapt its memory usage to what is available, but it'll definitely take (big) performance hits in doing so.
cark (OP) · 15h ago
Right now it seems to be ticking down the single queue and using barely 1.5GB of memory for the whole system - which is what it used to do before I screwed up the external library import as per the OP - hence my assumption that 4GB was plenty
Zeus · 15h ago
I think I’ve seen people with the 512MB limit getting PG OOM-killed, but maybe I’m remembering wrong. Maybe it was 256?
cark (OP) · 15h ago
FWIW - it seems to stay around the 1.4-1.8GB mark for a few minutes until something makes it shoot up and crash; it's not like it does it immediately after restarting the service
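One way to catch what makes it shoot up is to log per-container memory over time and see which container spikes right before the crash. A minimal sketch using the standard docker stats CLI:

```sh
# Append per-container memory usage to a log once a second, so the
# last entries before a freeze show which container spiked.
while true; do
  {
    date
    docker stats --no-stream --format 'table {{.Name}}\t{{.MemUsage}}'
  } >> mem.log
  sleep 1
done
```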
