Migration Hell: "Clean" Docker Compose on Proxmox LXC crashes only after importing v2.2.x backup

Hello community, I am truly desperate after 30+ hours trying to migrate my Immich instance with Gemini 2.5 Pro and Flash.

My Systems:
- Source (System A): Synology NAS running Immich v2.2.x (via Portainer/Docker). This instance holds my precious backup file (.sql.gz) with 2000+ manually sorted albums and face data.
- Target (System B): Proxmox PVE. I built a new, clean LXC (privileged, nesting=1 enabled) and installed Docker using the official latest/download/docker-compose.yml and example.env files.

My Goal: A clean SSD/NAS split. I want to use the modern .env file to set:
- UPLOAD_LOCATION (originals) ➔ _NASPATH
- THUMBNAIL_PATH, CACHE_PATH, etc. (metadata) ➔ _SSDPATH

The Problem (the "Catch-22"): My new, clean setup (System B) works perfectly. I can access the web UI (http://[LXC-IP]:2283) and see the "Welcome" page. The migration fails immediately after I import my database backup from System A. As soon as the backup is restored, the immich-server container (even on the release tag) gets stuck in a crash loop (Restarting...).

The Error (the proof): The logs show that the app is ignoring my clean .env settings (like UPLOAD_LOCATION=/mnt/nas...). Instead, it enters a "legacy check mode" and crashes because it is looking for old, hardcoded paths that don't exist in my clean setup:

ERROR ... Failed to read (/data/library/.immich): ENOENT

If I "fix" this by manually adding that volume to the docker-compose.yml, it just crashes on the next legacy path:

ERROR ... Failed to read (/data/upload/.immich): ENOENT

...and then /data/backups, and so on.
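For reference, a split like the one described is typically expressed as additional bind mounts in the compose file rather than through _SSDPATH/_NASPATH variables alone. A minimal sketch of that idea, with placeholder host paths (this is not the thread's actual configuration, and the exact container-side layout under /data should be checked against the compose file shipped with the installed Immich version):

```yaml
# Hypothetical sketch: SSD/NAS split for the immich-server service.
# All host paths are placeholders.
services:
  immich-server:
    volumes:
      - /mnt/ssd/immich:/data                 # default data dir (thumbs, cache, ...) on SSD
      - /mnt/nas/immich/upload:/data/upload   # original uploads on NAS
      - /mnt/nas/immich/library:/data/library # storage-template library on NAS
      - /etc/localtime:/etc/localtime:ro
```

The trick is that a more specific mount (/data/upload) overrides the corresponding subdirectory of the broader /data mount, so only the originals land on the NAS.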
Immich
Immich2w ago
:wave: Hey @Patrick, Thanks for reaching out to us. Please carefully read this message and follow the recommended actions. This will help us be more effective in our support effort and leave more time for building Immich :immich:.

References
- Container Logs: docker compose logs docs
- Container Status: docker ps -a docs
- Reverse Proxy: https://immich.app/docs/administration/reverse-proxy
- Code Formatting: https://support.discord.com/hc/en-us/articles/210298617-Markdown-Text-101-Chat-Formatting-Bold-Italic-Underline#h_01GY0DAKGXDEHE263BCAYEGFJA

Checklist
I have...
1. :blue_square: verified I'm on the latest release (note that mobile app releases may take some time).
2. :blue_square: read applicable release notes.
3. :blue_square: reviewed the FAQs for known issues.
4. :blue_square: reviewed GitHub for known issues.
5. :blue_square: tried accessing Immich via local IP (without a custom reverse proxy).
6. :blue_square: uploaded the relevant information (see below).
7. :blue_square: tried an incognito window, disabled extensions, cleared mobile app cache, logged out and back in, different browsers, etc. as applicable
(an item can be marked as "complete" by reacting with the appropriate number)

Information
In order to effectively help you, we need you to provide clear information that shows what the problem is. The exact details needed vary per case, but here is a list of things to consider:
- Your docker-compose.yml and .env files.
- Logs from all the containers and their status (see above).
- All the troubleshooting steps you've tried so far.
- Any recent changes you've made to Immich or your system.
- Details about your system (both software/OS and hardware).
- Details about your storage (filesystems, type of disks, output of commands like fdisk -l and df -h).
- The version of the Immich server, mobile app, and other relevant pieces.
- Any other information that you think might be relevant.
Please paste files and logs with proper code formatting, and especially avoid blurry screenshots. Without the right information we can't work out what the problem is. Help us help you ;) If this ticket can be closed you can use the /close command, and re-open it later if needed.
Patrick
PatrickOP2w ago
My Question: I refuse to accept that my 2000+ albums are lost. But I also refuse to accept this "dirty" docker-compose.yml "Gebastel" (messy workaround) where I have to manually map 10 legacy paths. It seems my v2.2.x backup is forcing the new app into a "Legacy Mode" that is incompatible with the modern, clean .env-only configuration. How can I migrate my data without triggering this legacy crash-loop? Is there a way to "clean" the database backup before importing it? Or is the only solution to manually map all those legacy paths in the docker-compose.yml? Thanks for any ideas.
Finn
Finn2w ago
/data is actually the new path. Please provide your setup as mentioned in the checklist above. Compose and .env should be enough to help.
Patrick
PatrickOP2w ago
Thanks in advance for the feedback. Attached are files with the relevant settings.
bo0tzz
bo0tzz2w ago
You say you run into problems after you restore the postgres backup; are you also copying over the UPLOAD_LOCATION (/data) files?
Patrick
PatrickOP2w ago
I copied all cache, profile, thumbnails, and encoded videos to the SSD. The library is supposed to stay on the NAS and is practically empty anyway.
bo0tzz
bo0tzz2w ago
Oh, I failed to read the full .env file, sorry. You have a bunch of _PATH vars in there that you're not using in the compose file. That'll be the problem.
Patrick
PatrickOP2w ago
Does that mean I have to re-enter all the paths in docker-compose? That seems very messy to me. I'm about ready to start from scratch and abandon my backup. I'd like to start as cleanly and future-proof as possible. What would be your recommendation regarding Proxmox, integrating the external libraries from the NAS, and storing the uploaded files on the NAS?
bo0tzz
bo0tzz2w ago
If you want separate storage for thumbnails etc., then you have to set that in the compose file, yes.
Patrick
PatrickOP2w ago
So I also need to do it in docker-compose, not just in .env. What would be your recommendation regarding Proxmox, integrating the external libraries from the NAS, and storing the uploaded files on the NAS? LXC? VM?
Mraedis
Mraedis2w ago
VM and NFS mount
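Inside the VM, an NFS mount is typically made persistent via /etc/fstab. A sketch with a placeholder NAS address and export path (adjust to the actual Synology export):

```
# /etc/fstab entry (placeholder NAS IP and export path)
192.168.1.10:/volume1/immich  /mnt/nas  nfs  defaults,_netdev,noatime  0  0
```

On Debian/Ubuntu the NFS client tools need to be installed first (package nfs-common), after which `sudo mount -a` applies the entry without a reboot. The `_netdev` option tells the system to wait for the network before mounting, which avoids boot-time failures.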
Patrick
PatrickOP2w ago
Okay, because I've already tried countless times with an LXC container, whether created manually or with the helper script. It always crashes when I make changes. My goal is for all original uploads to be on the NAS using the HDD, while everything else—thumbs, cache, etc.—remains on the SSD. That's where I'm stuck. At this point, I'm not even concerned about the backup anymore. Is my plan even feasible? And is a VM with NFS the solution here?
Mraedis
Mraedis2w ago
If it always crashes in every configuration then that might simply point to your hardware being unstable at high load, no?
Patrick
PatrickOP2w ago
No, I'm simply assuming I'm always choosing the wrong settings. When I install everything "normally," whether via docker-compose in LXC or using the helper script, everything works fine. As soon as I mount it and then try to adjust the settings, problems arise. The hardware is actually more than adequate, with a 4-core i5 and 8GB of RAM. I simply think something is wrong with the NFS connection to the NAS and the settings. Another approach would be to try the LXC file system. Then I could try a VM again. The question is: Is my plan even feasible, so that only the uploads end up on the NAS and the rest (thumbs and cache) remain on the SSD, allowing Immich to load faster? If that's not possible at all, then the last 30 hours of trying will have been for nothing.
Mraedis
Mraedis2w ago
Ah sorry, I misread a bit. I thought you meant hard crashes, but really you are restoring a DB to an environment with different settings. For sure the thumbs etc. can be on your SSD, but you don't seem to actually be doing that in your compose file, @Patrick?
Patrick
PatrickOP2w ago
Actually, I've mentally given up on the backup option. I had to deviate from the official instructions for my NAS anyway. I'd just like a clean start, even if it means rescanning all the thumbnails and faces. My biggest problem is with the albums. I have five of them, each with 1,000-2,000 pictures. I hope I can solve that with the duplicate identifier. But my problem is that I can't even get a clean solution with the desired structure. Is a VM with NFS recommended? Is Debian or Ubuntu a better choice?
Zeus
Zeus2w ago
Yes, a VM would be recommended. It’s up to you how to handle the storage. If you are less experienced, a volume mount through the hypervisor may be easier.
Patrick
PatrickOP2w ago
Okay, I am starting completely clean. I am abandoning my old backup and will manually recover my 5 main albums using the "duplicate upload trick" (ITv4 guide). Could you please do a sanity check on my final, clean infrastructure plan? My goal is a stable setup with a clean SSD/NAS split (Metadata on SSD, Originals on NAS). Is this (KVM + NFS in VM + manual docker-compose.yml split) the final, correct way to achieve the SSD/NAS split? Thanks for checking.
ITv4 :: Tipps & Tricks rund um IT
From Synology Photos to Immich – My Tips for the Migration - ITv4.de
On my Synology NAS, I made the switch from Synology [...]
Patrick
PatrickOP2w ago
I've now installed a VM with Ubuntu. Then I created the mounts and the two environment and docker-compose files. And once again, I can't access the desktop environment. Here are my two files. What did I do wrong? I desperately need help; after so many hours, I'm at my wit's end.
Mraedis
Mraedis2w ago
# Do not edit the next line. If you want to change the media storage location on your system, edit the value of U>
- ${UPLOAD_LOCATION}:/data
- ./thumbs:/usr/src/app/upload/thumbs
- ./profile:/usr/src/app/upload/profile
- ./cache:/usr/src/app/upload/cache
- ./encoded-video:/usr/src/app/upload/encoded-video
- /etc/localtime:/etc/localtime:ro
Should be
# Do not edit the next line. If you want to change the media storage location on your system, edit the value of U>
- ${UPLOAD_LOCATION}:/data
- ${THUMBNAIL_PATH}:/data/thumbs
- ${PROFILE_PATH}:/data/profile
- ${ENCODED_VIDEO_PATH}:/data/encoded-video
- /etc/localtime:/etc/localtime:ro
# file: hwaccel.ml.yml
# service: cpu # set to one of [armnn, cuda, rocm, openvino, openvino-wsl, rknn] for accelerated inference - use >
volumes:
- model-cache:/cache
- ./thumbs:/usr/src/app/upload/thumbs
- ./profile:/usr/src/app/upload/profile
- ./cache:/usr/src/app/upload/cache
- ./encoded-video:/usr/src/app/upload/encoded-video
Should be
# file: hwaccel.ml.yml
# service: cpu # set to one of [armnn, cuda, rocm, openvino, openvino-wsl, rknn] for accelerated inference - use >
volumes:
- ${CACHE_PATH}:/cache
And actually you can leave out the cache_path variable and just use:
volumes:
- model-cache:/cache
Actually, scratch all that @Patrick. Simply have in your env file:
DATA_LOC=./library
UP_LOC=/mnt/nas/immich-uploads/immich/library/upload
LIB_LOC=/mnt/nas/immich-uploads/immich/library/library
And compose:
# Do not edit the next line. If you want to change the media storage location on your system, edit the value of
- ${DATA_LOC}:/data
- ${UP_LOC}:/data/upload
- ${LIB_LOC}:/data/library
- /etc/localtime:/etc/localtime:ro
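Before (re)starting the stack after a change like this, it can help to verify that every path referenced in the .env file actually exists on the host, since a missing mount source is a common cause of startup failures. A small sketch of such a check (the variable names follow the example above; everything else is an assumption, and note that docker compose resolves relative paths like ./library against the compose file's directory, while this simple check uses the current working directory):

```python
import os
import tempfile

def check_env_paths(env_file, keys=("DATA_LOC", "UP_LOC", "LIB_LOC")):
    """Parse a .env file and report, per key, whether its path exists as a directory."""
    results = {}
    with open(env_file) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and lines without an assignment.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            if key.strip() in keys:
                results[key.strip()] = os.path.isdir(value.strip())
    return results

# Demo with a throwaway .env file: one existing dir, one missing dir.
tmp = tempfile.mkdtemp()
env_path = os.path.join(tmp, ".env")
with open(env_path, "w") as f:
    f.write(f"DATA_LOC={tmp}\nUP_LOC=/nonexistent/immich/upload\n")

print(check_env_paths(env_path))  # {'DATA_LOC': True, 'UP_LOC': False}
```

Any key reported as False points at a mount or path that would make the corresponding container volume fail or silently create an empty directory.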
Patrick
PatrickOP2w ago
Thank you for the quick reply. I've now adjusted the files as follows, but unfortunately, it's still not working.
Mraedis
Mraedis2w ago
Could you post the errors?
Patrick
PatrickOP2w ago
here it is
Patrick
PatrickOP2w ago
Or is there a better command to get more information?
Mraedis
Mraedis2w ago
Are you clearing the database folder on every attempt, or moving the folders around, @Patrick? Because this error basically means "I tried to start up, but things weren't where I thought they were", which you should really only get if you moved things around after starting it up already.
Patrick
PatrickOP2w ago
A database reset did the trick!

sudo rm -rf ./postgres
sudo rm -rf ./thumbs/* ./profile/* ./cache/* ./encoded-video/*

Thank you so much for your help!!! One more quick question. I'm now adding external libraries and the jobs are running. I've disabled "Smart Search" and "OCR" for now (precisely because I don't need OCR), and I've set "Transcode Videos" to "Do Not Transcode Videos" (to save SSD space and because I don't actually need it). However, the "Transcode Videos" job is still queuing a lot of files. Can I just let it run, since they're probably just Motion Photos, or is there something wrong with my settings?
Mraedis
Mraedis2w ago
I'm not too sure whether it queues a video even if it doesn't need a transcode. Are there files appearing in the folder, @Patrick?
Patrick
PatrickOP2w ago
You mean in the encoded-video folder? Files are written there while the job briefly runs, but tagged as "Motion Photos". That's why I thought it only affected those.
Mraedis
Mraedis2w ago
Are you sure you pressed save on the settings? 😅
Patrick
PatrickOP2w ago
Yes, definitely. I went through it twice and adjusted the settings before importing the external library. That's why I'm so confused. 😅 It must have something to do with the Motion Photos feature on the Google Pixel. I've added 800 videos, and now there are 4,064 videos in the transcoding queue. Should I just let the job start?
Mraedis
Mraedis2w ago
Give it a go; if the number goes down fast, you'll know they aren't being transcoded.
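One simple way to check is to watch whether the encoded-video directory actually grows while the job runs. A runnable sketch using a stand-in directory (the real path would be the host side of the encoded-video mount, e.g. the ./encoded-video bind from the compose file):

```shell
#!/bin/sh
# Stand-in for the host-side encoded-video directory, so the sketch runs anywhere.
DIR=$(mktemp -d)
echo "derivative" > "$DIR/clip1.mp4"

# Total size of derivatives; rerun (or wrap in `watch`) against the real dir.
du -sh "$DIR"
# Files written in the last 10 minutes, i.e. active transcode output.
find "$DIR" -type f -mmin -10
```

If the size stays flat and no recent files appear while thousands of jobs complete, the queue is only being checked, not producing transcodes.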
Patrick
PatrickOP2w ago
I had considered that too. In the worst-case scenario, could clearing the folder on the SSD also delete potentially "real" videos?
Mraedis
Mraedis2w ago
No, everything in encoded-video is a derivative unless you have severely broken your mounts somehow
Patrick
PatrickOP2w ago
It was all motion photos. In the end, the folder contained 13 GB and the 4,000 videos were finished within 10 minutes. The integration worked perfectly. Thank you very much for your support.
