Immich4mo ago
justjon

502 Bad Gateway from Nginx ingress after updating to v1.124.0

I just updated to the newest version and now I can't access the server at all. Everything was working fine before the update. Running on k3s. Nothing in the logs to suggest an issue, but it seems to be stuck here for about 30 minutes. I've tried accessing it directly from the service too.
Initializing Immich v1.124.0
DEBUG: cgroup v2 detected.
DEBUG: No CPU limits set.
Detected CPU Cores: 16
Starting api worker
Starting microservices worker
[Nest] 7 - 01/07/2025, 11:16:54 PM LOG [Microservices:EventRepository] Initialized websocket server
[Nest] 17 - 01/07/2025, 11:16:54 PM LOG [Api:EventRepository] Initialized websocket server
root@immich-server-94d476dfb-nvsg5:/usr/src/app# curl immich-server.cloud:2283
curl: (7) Failed to connect to immich-server.cloud port 2283 after 0 ms: Couldn't connect to server
root@immich-server-94d476dfb-nvsg5:/usr/src/app#
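For reference, a 502 from an nginx ingress usually means the backing Service has no ready endpoints, so a quick first check from outside the pod could look like this (a minimal sketch, assuming the Service and Deployment are named immich-server in the default namespace, as the pod name above suggests):
kubectl get svc immich-server -o wide
kubectl get endpoints immich-server   # empty ENDPOINTS means no pod is passing its readiness probe
kubectl describe pod immich-server-94d476dfb-nvsg5 | grep -i -A5 readiness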
Immich
Immich4mo ago
:wave: Hey @justjon, Thanks for reaching out to us. Please carefully read this message and follow the recommended actions. This will help us be more effective in our support effort and leave more time for building Immich :immich:.
References
- Container Logs: docker compose logs docs
- Container Status: docker ps -a docs
- Reverse Proxy: https://immich.app/docs/administration/reverse-proxy
- Code Formatting: https://support.discord.com/hc/en-us/articles/210298617-Markdown-Text-101-Chat-Formatting-Bold-Italic-Underline#h_01GY0DAKGXDEHE263BCAYEGFJA
Immich
Immich4mo ago
Checklist
I have...
1. :ballot_box_with_check: verified I'm on the latest release (note that mobile app releases may take some time).
2. :ballot_box_with_check: read applicable release notes.
3. :ballot_box_with_check: reviewed the FAQs for known issues.
4. :ballot_box_with_check: reviewed GitHub for known issues.
5. :ballot_box_with_check: tried accessing Immich via local IP (without a custom reverse proxy).
6. :ballot_box_with_check: uploaded the relevant information (see below).
7. :ballot_box_with_check: tried an incognito window, disabled extensions, cleared mobile app cache, logged out and back in, different browsers, etc. as applicable
(an item can be marked as "complete" by reacting with the appropriate number)
Information
In order to be able to effectively help you, we need you to provide clear information to show what the problem is. The exact details needed vary per case, but here is a list of things to consider:
- Your docker-compose.yml and .env files.
- Logs from all the containers and their status (see above).
- All the troubleshooting steps you've tried so far.
- Any recent changes you've made to Immich or your system.
- Details about your system (both software/OS and hardware).
- Details about your storage (filesystems, type of disks, output of commands like fdisk -l and df -h).
- The version of the Immich server, mobile app, and other relevant pieces.
- Any other information that you think might be relevant.
Please paste files and logs with proper code formatting, and especially avoid blurry screenshots. Without the right information we can't work out what the problem is. Help us help you ;)
If this ticket can be closed you can use the /close command, and re-open it later if needed.
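Since this deployment runs on k3s rather than Docker Compose, the rough kubectl equivalents of the commands referenced above would be (a sketch, assuming an immich-server Deployment in the default namespace):
kubectl logs deploy/immich-server --all-containers   # roughly equivalent to `docker compose logs`
kubectl get pods -o wide                             # roughly equivalent to `docker ps -a`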
Immich
Immich4mo ago
Successfully submitted, a tag has been added to inform contributors. :white_check_mark:
Zeus
Zeus4mo ago
What version did you upgrade from? Please include compose, env and all logs. Please test using the local IP.
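On k3s, testing via the local IP roughly means bypassing the nginx ingress and hitting the Service or pod directly; one way to do that (a sketch, assuming the Service is named immich-server and listens on 2283):
kubectl port-forward svc/immich-server 2283:2283
# then, from another shell on the same machine:
curl -v http://localhost:2283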
justjon
justjonOP4mo ago
root@immich-server-94d476dfb-nvsg5:/usr/src/app# curl localhost:2283
curl: (7) Failed to connect to localhost port 2283 after 0 ms: Couldn't connect to server
root@immich-server-94d476dfb-nvsg5:/usr/src/app#
Upgrading from v1.123.0
justjon
justjonOP4mo ago
I have included all logs in the original post
Alex Tran
Alex Tran4mo ago
Can you connect over local IP?
bo0tzz
bo0tzz4mo ago
Please also include details about the postgres setup
justjon
justjonOP4mo ago
It seems that after some time the logs start spamming "missing 'error' handler on this Redis client". I've checked my Redis instance and it's accessible. I also restarted it and it started without any errors. I've been using the same configuration for months without issue.
CNPG configuration:
# yaml-language-server: $schema=https://raw.githubusercontent.com/datreeio/CRDs-catalog/main/postgresql.cnpg.io/cluster_v1.json
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cloudnative-pgvecto
  namespace: default
  annotations:
    kyverno.io/ignore: "true"
spec:
  env:
    - name: TZ
      value: ${TIMEZONE}
  instances: 1
  imageName: ghcr.io/tensorchord/cloudnative-pgvecto.rs:16.2-v0.2.0
  primaryUpdateStrategy: unsupervised
  storage:
    size: 10Gi
    storageClass: local-hostpath
  superuserSecret:
    name: cloudnative-pg-superuser
  enableSuperuserAccess: true
  postgresql:
    shared_preload_libraries:
      - "vectors.so"
    parameters:
      max_connections: "600"
      max_slot_wal_keep_size: 10GB
      shared_buffers: 512MB
  resources:
    requests:
      memory: "512Mi"
    limits:
      hugepages-2Mi: "512Mi"
  monitoring:
    enablePodMonitor: true
  backup:
    retentionPolicy: 7d
    barmanObjectStore:
      wal:
        compression: bzip2
        maxParallel: 8
      destinationPath: s3://${S3_POSTGRESQL_BUCKET}/
      endpointURL: https://${S3_ENDPOINT}
      serverName: cloudnative-pgvecto
      s3Credentials:
        accessKeyId:
          name: cloudnative-pg-superuser
          key: aws-access-key-id
        secretAccessKey:
          name: cloudnative-pg-superuser
          key: aws-secret-access-key
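For what it's worth, a minimal way to double-check the two dependencies involved here, assuming a Redis Deployment named redis and a CNPG primary pod named cloudnative-pgvecto-1 (both names are guesses and may differ in this cluster):
kubectl exec deploy/redis -- redis-cli ping                              # expect: PONG
kubectl get clusters.postgresql.cnpg.io cloudnative-pgvecto              # CNPG should report a healthy state
kubectl exec cloudnative-pgvecto-1 -- psql -U postgres -c 'SELECT 1;'    # confirms Postgres is accepting queries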
Restarted my DB and it started working again. Not a bug with Immich, but I guess it would be useful to have some kind of troubleshooting message when it can't connect to the DB.
bo0tzz
bo0tzz4mo ago
If it's a hard failure it does log an error. The behaviour above where it just stops writing more logs is something we usually see on DBs that don't have enough RAM allocated, so things are moving just incredibly slowly. Your setup doesn't have any RAM limits, but I'm guessing something else somehow caused similar behaviour
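If metrics-server is available in the cluster, one quick way to sanity-check the memory-starvation scenario described above (a sketch; the pod name cloudnative-pgvecto-1 is assumed):
kubectl top pod cloudnative-pgvecto-1   # compare usage against the 512Mi request in the Cluster spec
kubectl top node                        # overall node memory pressure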
justjon
justjonOP4mo ago
My node definitely doesn't have resource pressure. I also checked my Grafana dashboards and didn't notice anything abnormal. Thanks for looking, closing as solved.
Immich
Immich4mo ago
This thread has been closed. To re-open, use the button below.
