ZITADEL
Created by Shardool on 5/22/2025 in #questions-help-bugs
Actions v2 on v3.1.0 returning [internal] An internal error occurred (QUERY-y2u7vctrha)
Upgraded recently from v2.67.2 to v3.1.0 locally on docker compose. Tried adding a target and an action. After the action was added, I started getting [internal] An internal error occurred (QUERY-y2u7vctrha) and I'm not able to see the action that was added. It's not being triggered either.
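For reference, the kind of target I'm talking about is just a plain HTTP endpoint. A simplified sketch is below (not my actual code; the port, path and payload handling are placeholders) — it only logs whatever Zitadel POSTs to it and returns 200:
package main

import (
	"io"
	"log"
	"net/http"
)

// Rough sketch of the webhook target the action points at.
// The port and path are placeholders; the handler just logs the
// JSON payload Zitadel sends and answers 200 so the call succeeds.
func main() {
	http.HandleFunc("/zitadel/events", func(w http.ResponseWriter, r *http.Request) {
		body, err := io.ReadAll(r.Body)
		if err != nil {
			http.Error(w, "failed to read body", http.StatusBadRequest)
			return
		}
		log.Printf("received payload: %s", body)
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":8089", nil))
}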
5 replies
ZITADEL
Created by Shardool on 2/3/2025 in #questions-help-bugs
Service user fetching /oauth/v2/token timing out
Zitadel version: v2.67.2, running 3 instances behind nginx.
Hello folks, I have been running zitadel in production for a couple of months now and am starting to see service users' requests to /oauth/v2/token time out. This happens fairly often in between successful requests. We have ~5 service users that request new tokens every 5 seconds or so with their key. We started off fine but have noticed degradation after running this setup for ~15 days. We should ideally cache the tokens, but is this behavior expected? Nginx logs:
2025/02/03 16:57:13 [error] 26#26: *94 upstream timed out (110: Operation timed out) while reading response header from upstream, client: 192.168.45.171, server: auth.***.***, request: "POST /oauth/v2/token HTTP/2.0", upstream: "grpc://192.168.17.41:8080", host: "auth.***.***"
192.168.45.171 - - [03/Feb/2025:16:57:13 +0000] "POST /oauth/v2/token HTTP/2.0" 504 160 "-" "Go-http-client/2.0" 855 60.002 [***-prod-zitadel-auth-8080] ] 192.168.17.41:8080 0 60.001 504 2e330162c99f856fec6ede6765bbec8d
2025/02/03 16:57:29 [error] 26#26: *113 upstream timed out (110: Operation timed out) while reading response header from upstream, client: 192.168.45.171, server: auth.***.***, request: "POST /oauth/v2/token HTTP/2.0", upstream: "grpc://192.168.60.40:8080", host: "auth.***.***"
192.168.45.171 - - [03/Feb/2025:16:57:29 +0000] "POST /oauth/v2/token HTTP/2.0" 504 160 "-" "Go-http-client/2.0" 855 60.001 [***-prod-zitadel-auth-8080] ] 192.168.60.40:8080 0 60.000 504 01421677980b20d9f1920e0e37f9d581
Attaching zitadel logs for reference. Seeing "unable to filter events" / "duplicate key violates unique constraint" pretty often.
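To be concrete about the caching idea, something like the sketch below is what I have in mind: keep the access token until shortly before it expires and only go to /oauth/v2/token when it is stale. The requestToken function is a placeholder for our existing JWT-profile token request and isn't shown here:
package main

import (
	"fmt"
	"sync"
	"time"
)

// tokenCache reuses an access token until shortly before it expires,
// so /oauth/v2/token is only called when the cached value is stale.
type tokenCache struct {
	mu      sync.Mutex
	token   string
	expires time.Time

	// requestToken stands in for the real JWT-profile request to
	// /oauth/v2/token; it returns the token and its lifetime.
	requestToken func() (string, time.Duration, error)
}

func (c *tokenCache) Token() (string, error) {
	c.mu.Lock()
	defer c.mu.Unlock()

	// Refresh slightly early so callers don't race token expiry.
	if c.token != "" && time.Until(c.expires) > 30*time.Second {
		return c.token, nil
	}
	tok, ttl, err := c.requestToken()
	if err != nil {
		return "", err
	}
	c.token, c.expires = tok, time.Now().Add(ttl)
	return tok, nil
}

func main() {
	cache := &tokenCache{
		requestToken: func() (string, time.Duration, error) {
			// Placeholder: the real call POSTs the signed JWT assertion
			// to /oauth/v2/token and reads expires_in from the response.
			return "dummy-access-token", 12 * time.Hour, nil
		},
	}
	tok, _ := cache.Token()
	fmt.Println("using token:", tok)
}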
6 replies
ZITADEL
Created by Shardool on 12/22/2024 in #questions-help-bugs
Issues with using AWS application load balancer
Hi, I've installed the zitadel helm chart and have been trying to expose it over an AWS ALB, but without much luck. I have a Next.js frontend that redirects to zitadel for login, and my Go backend then performs token introspection and gRPC calls to zitadel. With the following configuration, login works but the gRPC calls don't (which I guess is expected).
ExternalSecure: true
ExternalPort: 443
ExternalDomain: auth.ryvn.app
ingress:
enabled: true
className: "alb"
className: "nginx"
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:70402007981:certificate/32674b10-3478-4b72-8144-2837c51fd23a
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
alb.ingress.kubernetes.io/group.name: "alb-group"
alb.ingress.kubernetes.io/healthcheck-path: /debug/healthz
alb.ingress.kubernetes.io/healthcheck-port: "8080"
hosts:
- host: example.test.app
paths:
- path: /
pathType: Prefix
Forcing the backend protocol version to HTTP2 fixes the backend's calls to zitadel, but Next.js stops working (presumably because next-auth forces HTTP/1.1):
alb.ingress.kubernetes.io/backend-protocol-version: "HTTP2"
In both cases I get a 464, which is the ALB's incompatible-protocol-version error, so I don't think this is primarily a zitadel concern, but I wanted to ask here in case anyone has bumped into this before. Thanks!
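For reference, the backend side is roughly the sketch below (hostname and credentials are placeholders, and client authentication is omitted): introspection is a plain HTTPS POST that is fine over HTTP/1.1, while the zitadel gRPC client needs HTTP/2 end to end, which is why a single backend-protocol-version on the ALB target group seems to break one path or the other:
package main

import (
	"crypto/tls"
	"log"
	"net/http"
	"net/url"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)

const zitadelHost = "auth.example.com" // placeholder for the ALB hostname

func main() {
	// Token introspection: a plain HTTPS form POST (HTTP/1.1 is fine).
	// Client authentication is omitted in this sketch.
	resp, err := http.PostForm(
		"https://"+zitadelHost+"/oauth/v2/introspect",
		url.Values{"token": {"<access-token>"}},
	)
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
	log.Println("introspection status:", resp.Status)

	// Management/API calls: gRPC over TLS, which requires HTTP/2
	// all the way through to the zitadel pods.
	conn, err := grpc.Dial(
		zitadelHost+":443",
		grpc.WithTransportCredentials(credentials.NewTLS(&tls.Config{})),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	log.Println("grpc connection state:", conn.GetState())
}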
2 replies