502 Bad Gateway from Nginx ingress after updating to v1.124.0
I just updated to the newest version and now I can't access the server at all. Everything was working fine before the update. Running on k3s.
Nothing in the logs to suggest an issue, but it seems to be stuck here for about 30 minutes. I've tried accessing directly from the service too.
12 Replies
:wave: Hey @justjon,
Thanks for reaching out to us. Please carefully read this message and follow the recommended actions. This will help us be more effective in our support effort and leave more time for building Immich :immich:.
References
- Container Logs: docker compose logs (docs)
- Container Status: docker ps -a (docs)
- Reverse Proxy: https://immich.app/docs/administration/reverse-proxy
- Code Formatting: https://support.discord.com/hc/en-us/articles/210298617-Markdown-Text-101-Chat-Formatting-Bold-Italic-Underline#h_01GY0DAKGXDEHE263BCAYEGFJA
Checklist
I have...
1. :ballot_box_with_check: verified I'm on the latest release (note that mobile app releases may take some time).
2. :ballot_box_with_check: read applicable release notes.
3. :ballot_box_with_check: reviewed the FAQs for known issues.
4. :ballot_box_with_check: reviewed Github for known issues.
5. :ballot_box_with_check: tried accessing Immich via local ip (without a custom reverse proxy).
6. :ballot_box_with_check: uploaded the relevant information (see below).
7. :ballot_box_with_check: tried an incognito window, disabled extensions, cleared mobile app cache, logged out and back in, different browsers, etc. as applicable
(an item can be marked as "complete" by reacting with the appropriate number)
Information
In order to be able to effectively help you, we need you to provide clear information to show what the problem is. The exact details needed vary per case, but here is a list of things to consider:
- Your docker-compose.yml and .env files.
- Logs from all the containers and their status (see above).
- All the troubleshooting steps you've tried so far.
- Any recent changes you've made to Immich or your system.
- Details about your system (both software/OS and hardware).
- Details about your storage (filesystems, type of disks, output of commands like fdisk -l and df -h).
- The version of the Immich server, mobile app, and other relevant pieces.
- Any other information that you think might be relevant.
Please paste files and logs with proper code formatting, and especially avoid blurry screenshots.
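Since this report is from a k3s deployment rather than Docker Compose, the equivalent information can be gathered with kubectl. A rough sketch; the "immich" namespace, deployment name, and labels below are assumptions, adjust them to your manifests or chart:
```sh
# Pod status, the Kubernetes equivalent of "docker ps -a" ("immich" namespace is an assumption)
kubectl -n immich get pods

# Server logs, the equivalent of "docker compose logs" (deployment name depends on your chart)
kubectl -n immich logs deploy/immich-server --tail=500

# Recent events for a pod that is stuck or restarting (label is an assumption)
kubectl -n immich describe pod -l app.kubernetes.io/name=immich-server
```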
Without the right information we can't work out what the problem is. Help us help you ;)
If this ticket can be closed you can use the /close command, and re-open it later if needed.
Successfully submitted, a tag has been added to inform contributors. :white_check_mark:
What version did you upgrade from?
Please include compose, env and all logs
Please test using local IP
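On k3s, a quick way to take the ingress out of the path is a port-forward straight to the service. A minimal sketch; the namespace and service name are assumptions, adjust to your setup:
```sh
# Forward the Immich server service to localhost ("immich" namespace and "immich-server" service are assumptions)
kubectl -n immich port-forward svc/immich-server 2283:2283

# In a second terminal, hit the server directly, bypassing the Nginx ingress
curl -i http://localhost:2283/
```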
Upgrading from v1.123.0
I have included all logs in the original post
Can you connect over local ip?
Please also include details about the postgres setup
It seems after some time the logs start spamming "missing 'error' handler on this Redis client". I've checked my Redis instance and it's accessible. I also restarted it and it started without any errors. I've been using the same configuration for months without issue.
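For reference, a minimal way to confirm Redis is reachable from inside the cluster looks like this; the namespace and deployment name are just examples:
```sh
# Assumes redis-cli is available in the Redis image and the "immich"/"redis" names match your setup
kubectl -n immich exec deploy/redis -- redis-cli ping
# Expected output: PONG
```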
CNPG configuration
Restarted my DB and it started working again. Not a bug with Immich, but I guess it would be useful to have some kind of troubleshooting message that it can't connect to the DB.
If it's a hard failure it does log an error. The behaviour above, where it just stops writing more logs, is something we usually see on DBs that don't have enough RAM allocated, so things are moving just incredibly slowly. Your setup doesn't have any RAM limits, but I'm guessing something else somehow caused similar behaviour.
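A quick way to sanity-check the Postgres memory situation on a CNPG setup is sketched below; the namespace, cluster name, and label are placeholders, not taken from this thread:
```sh
# Actual memory usage of the Postgres pods (needs metrics-server, which k3s ships by default)
kubectl -n immich top pods -l cnpg.io/cluster=immich-postgres

# Whether the CNPG cluster has any resource requests/limits configured
kubectl -n immich get clusters.postgresql.cnpg.io immich-postgres -o jsonpath='{.spec.resources}'
```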
My node definitely doesn't have resource pressure. I also checked my grafana dashboards and didn't notice anything abnormal. Thanks for looking, closing as solved
This thread has been closed. To re-open, use the button below.