Immich DB size seems big?
Hello,
I might be wrong, but I have around 400 GB of data.
This is around 53k pictures and 3k videos.
Yet my PG DB size seems to be around 14 GB.
Is it expected to be that significant?
Thanks
21 Replies
:wave: Hey @RNab,
Thanks for reaching out to us. Please carefully read this message and follow the recommended actions. This will help us be more effective in our support effort and leave more time for building Immich :immich:.
References
- Container Logs: docker compose logs (docs)
- Container Status: docker ps -a (docs)
- Reverse Proxy: https://immich.app/docs/administration/reverse-proxy
- Code Formatting: https://support.discord.com/hc/en-us/articles/210298617-Markdown-Text-101-Chat-Formatting-Bold-Italic-Underline#h_01GY0DAKGXDEHE263BCAYEGFJA
Checklist
I have...
1. :ballot_box_with_check: verified I'm on the latest release (note that mobile app releases may take some time).
2. :ballot_box_with_check: read applicable release notes.
3. :ballot_box_with_check: reviewed the FAQs for known issues.
4. :ballot_box_with_check: reviewed Github for known issues.
5. :ballot_box_with_check: tried accessing Immich via local ip (without a custom reverse proxy).
6. :ballot_box_with_check: uploaded the relevant information (see below).
7. :ballot_box_with_check: tried an incognito window, disabled extensions, cleared mobile app cache, logged out and back in, different browsers, etc. as applicable
(an item can be marked as "complete" by reacting with the appropriate number)
Information
In order to be able to effectively help you, we need you to provide clear information to show what the problem is. The exact details needed vary per case, but here is a list of things to consider:
- Your docker-compose.yml and .env files.
- Logs from all the containers and their status (see above).
- All the troubleshooting steps you've tried so far.
- Any recent changes you've made to Immich or your system.
- Details about your system (both software/OS and hardware).
- Details about your storage (filesystems, type of disks, output of commands like fdisk -l and df -h).
- The version of the Immich server, mobile app, and other relevant pieces.
- Any other information that you think might be relevant.
Please paste files and logs with proper code formatting, and especially avoid blurry screenshots.
Without the right information we can't work out what the problem is. Help us help you ;)
If this ticket can be closed you can use the /close command, and re-open it later if needed.
Successfully submitted, a tag has been added to inform contributors. :white_check_mark:
That's a bit much yes
Is it the database or the database + backups?
docker exec -it immich_postgres psql -U postgres
(Switch out postgres with your immich user if you changed it)
\c immich
and VACUUM FULL;
This can take a while
it should output
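The steps above can be wrapped into a small helper. This is a hedged sketch, assuming the default container name (immich_postgres), superuser (postgres) and database name (immich) from the standard docker-compose setup; adjust the names for your deployment (e.g. kubectl exec instead of docker exec on k3s):

```shell
# Hedged sketch, not official tooling: run psql inside the Immich
# Postgres container. Container/user/db names below are assumptions
# from the standard docker-compose setup -- adjust as needed.
immich_psql() {
  # -i (no TTY) so this also works from scripts
  docker exec -i immich_postgres psql -U postgres -d immich "$@"
}

# Usage (run on the Docker host):
#   immich_psql -c '\l+'          # database sizes before
#   immich_psql -c 'VACUUM FULL;' # rewrite tables to reclaim space (slow, locks tables)
#   immich_psql -c '\l+'          # database sizes after
```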
Database only indeed
Let me try your commands tonight (but backup first 🙂 )
Can you run
\l+
in Postgres?
Will do all these when I'm home (in 2/3 hours) and come back here to confirm
Do that before the vacuum stuff and again after
And post what it shows
I need to login first, right?
The commands are for the commandline of the server which hosts your docker containers
Oh sorry I meant for the \l
But yes, your commands do indeed contain the login/password
I'm in k3s so I'll have to slightly amend them, but that shouldn't impact anything
Ok so the \l+ command gives a max of 760 MB for the database
So that’s not the source of it
Where are you seeing 14?
Longhorn
Inside the container, if I do df -h, the size is very reasonable (1.3G)
That's fine, clearly it's not an Immich "issue", so I'll investigate this one on my own
Both of you, thanks for your quick help
To clarify: should I still do the vacuum job? Or is it not really required?
I don't think it will fix things :p
Maybe your postgres logs are crazy large
For reference, \l+ gave me ~1400 MB
That looks fine. Idk where the 14GB comes from
That's my postgres @Zeus 😛
oh nevermind, misunderstood there!
I think it’s how Longhorn reports volume size rather than what is really being used.
So very misleading but not at all Immich related as far as I can see
https://longhorn.io/docs/archives/1.4.1/concepts/#21-thin-provisioning-and-volume-size
A Longhorn volume itself cannot shrink in size if you’ve removed content from your volume. For example, if you create a volume of 20 GB, used 10 GB, then removed the content of 9 GB, the actual size on the disk would still be 10 GB instead of 1 GB. This happens because Longhorn operates on the block level, not the filesystem level, so Longhorn doesn’t know if the content has been removed by a user or not. That information is mostly kept at the filesystem level.
I think that's what you're seeing
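A rough local analogy (not Longhorn itself, just the same block-vs-filesystem accounting): a file's apparent size and the blocks it actually occupies can diverge the same way, which you can see by comparing ls -l with du. The hole-punching step needs a filesystem that supports it, hence the guard:

```shell
# Analogy only, runs on any Linux box: apparent size vs blocks actually used.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=20 status=none
ls -l "$f"   # apparent size: 20 MiB
du -h "$f"   # blocks actually allocated: ~20 MiB
# Punch a hole over the first 19 MiB (frees blocks, keeps apparent size);
# silently skipped if the filesystem does not support hole punching.
fallocate -p -o 0 -l 19M "$f" 2>/dev/null || true
du -h "$f"   # allocation may drop to ~1 MiB; ls -l still reports 20 MiB
rm -f "$f"
```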
What you are seeing in Longhorn is block storage, not the actual storage. It will always be larger. There is a way to reduce this: trim the volumes. You can run Longhorn recurring jobs to trim these. If you have multiple replicas, that is going to add overhead to it.
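For the recurring trim, a RecurringJob resource along these lines is the usual shape. This is a sketch, not verified against any particular Longhorn version: the filesystem-trim task and the v1beta2 API exist in recent Longhorn releases, but check the docs for your version before applying anything.

```yaml
# Hedged sketch of a weekly Longhorn filesystem-trim recurring job.
# Field names/values are assumptions based on recent Longhorn releases.
apiVersion: longhorn.io/v1beta2
kind: RecurringJob
metadata:
  name: trim-weekly
  namespace: longhorn-system
spec:
  name: trim-weekly
  task: filesystem-trim   # trims the volume's filesystem to free blocks
  cron: "0 3 * * 0"       # every Sunday at 03:00
  groups:
    - default             # apply to volumes in the default group
  retain: 0               # retain applies to snapshots/backups; unused for trim
  concurrency: 1
```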
Hey. Sorry just saw your message right now.
I didn’t know about this. Will look into it more closely
How often is it recommended to run these?
whenever you do a massive culling on the filesystem, give it a go
but I might do it as a job once a week