Homarr4mo ago
vocoder

dns caching results in 500 error

2025-09-05T19:38:02.813Z warn: The callback of 'mediaRequestList' succeeded but took 240.66ms longer than expected (2500ms). This may indicate that your network performance, host performance or something else is too slow. If this happens too often, it should be looked into.
Error [TRPCClientError]: fetch failed
    at a.from (.next/server/src/middleware.js:13:39867)
    at <unknown> (.next/server/src/middleware.js:13:125415) {
  shape: undefined,
  data: undefined,
  meta: undefined,
  [cause]: TypeError: fetch failed
      at bJ (.next/server/src/middleware.js:13:41606)
      at async bK (.next/server/src/middleware.js:13:41719) {
    [cause]: Error: connect ECONNREFUSED 1.2.3.4:3000
        at <unknown> (Error: connect ECONNREFUSED 1.2.3.4:3000) {
      errno: -111,
      code: 'ECONNREFUSED',
      syscall: 'connect',
      address: '1.2.3.4',
      port: 3000
    }
  }
}
⨯ Error: Cannot append headers after they are sent to the client
    at p (.next/server/app/_not-found/page.js:2:4781)
    at async L (.next/server/app/_not-found/page.js:2:6944) {
  code: 'ERR_HTTP_HEADERS_SENT'
}
⨯ Error: Cannot append headers after they are sent to the client
    at p (.next/server/app/_not-found/page.js:2:4781)
    at async L (.next/server/app/_not-found/page.js:2:6944) {
  code: 'ERR_HTTP_HEADERS_SENT'
}
21:M 05 Sep 2025 19:38:19.091 * 1 changes in 60 seconds. Saving...
21:M 05 Sep 2025 19:38:19.091 * Background saving started by pid 57
57:C 05 Sep 2025 19:38:19.106 * DB saved on disk
57:C 05 Sep 2025 19:38:19.107 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
21:M 05 Sep 2025 19:38:19.191 * Background saving terminated with success
2025-09-05T19:38:43.696Z info: tRPC request from unknown by user 'undefined (undefined)'
41 Replies
Cakey Bot
Cakey Bot4mo ago
Thank you for submitting a support request. Depending on the volume of requests, our team should get in contact with you shortly.
⚠️ Please include the following details in your post or we may reject your request without further comment:
- Log (See https://homarr.dev/docs/community/faq#how-do-i-open-the-console--log)
- Operating system (Unraid, TrueNAS, Ubuntu, ...)
- Exact Homarr version (eg. 0.15.0, not latest)
- Configuration (eg. docker-compose, screenshot or similar. Use ``your-text`` to format)
- Other relevant information (eg. your devices, your browser, ...)
vocoder
vocoderOP4mo ago
Basically it seems that one of my services is unreachable, and that's blocking the whole thing from starting. I'm going to try kicking jellyseerr and doing a full reboot, but no matter what, that is a bug. Rebooting the stack fixes it. Starting Homarr while jellyseerr is unreachable = unrecoverable Homarr. Probably a chicken-before-the-DNS-egg problem.
Manicraft1001
Manicraft10014mo ago
@Meierschlumpf
Meierschlumpf
Meierschlumpf4mo ago
Hmm, okay I see. This is probably related, yes. @vocoder do you have 1.2.3.4 specified as an IP somewhere?
vocoder
vocoderOP4mo ago
No, I obfuscated mine in the dump. It is internal, but whatever.
Meierschlumpf
Meierschlumpf4mo ago
Okay, I see, just a thought. But did you specify it, or is it just the one assigned from within Docker?
vocoder
vocoderOP4mo ago
I specified it; it's the internal Docker address of another container, 172.17.0.x
Meierschlumpf
Meierschlumpf4mo ago
Hmm okay I see
vocoder
vocoderOP4mo ago
I give all my containers static IPs. And sometimes with the VPN jellyseerr tries to hit the images too much and then hangs. So I immediately suspected that
Meierschlumpf
Meierschlumpf4mo ago
Can you give me a quick example of how you specify the static IPs? Then I may be able to reproduce it on my side.
vocoder
vocoderOP4mo ago
homarr:
  container_name: homarr
  image: ghcr.io/homarr-labs/homarr:latest
  networks:
    default:
      ipv4_address: ${DOCKER_REFLECTOR}101
    isolated6:
      ipv4_address: ${DOCKER_ISOLATED_SERVER}6.101
    isolated12:
      ipv4_address: ${DOCKER_ISOLATED_SERVER}12.101
  sysctls:
    - net.ipv6.conf.all.disable_ipv6=1
  environment:
    # PUID: ${SECUREPUID}21
    # PGID: ${PGID}
    TZ: ${TIMEZONE}
    DEFAULT_COLOR_SCHEME: dark
    NEXT_TELEMETRY_DISABLED: 1
    AUTH_PROVIDERS: oidc
    AUTH_OIDC_ISSUER: https://auth.${DOMAIN_NAME}
    AUTH_OIDC_CLIENT_NAME: "Authelia"
    AUTH_OIDC_AUTO_LOGIN: true
    AUTH_OIDC_FORCE_USERINFO: true
    NEXTAUTH_URL: https://lan.${DOMAIN_NAME}
    AUTH_TRUST_HOST: true
  restart: "no"
  volumes:
    - ${DOCKER_PATH}/${CONFIG_PATH}/homarr/appdata:/appdata
    - ${DOCKER_PATH}/${CONFIG_PATH}/homarr/img:/app/apps/nextjs/public/mnt/img
  env_file:
    - ${DOCKER_PATH}/${SECRET_PATH}/homarr-oidc.env
For example, ${DOCKER_REFLECTOR} is a variable that is just 172.17.0., and similar for the other env vars. Then at the bottom of the compose file you need to define the networks.
Meierschlumpf
Meierschlumpf4mo ago
Okay I see
vocoder
vocoderOP4mo ago
dmz-server1_isolated18:
  driver: bridge
  enable_ipv6: false
  driver_opts:
    com.docker.network.driver.mtu: 9000
  ipam:
    driver: default
    config:
      - subnet: ${DOCKER_ISOLATED_DMZ_SERVER}18.0/24
        gateway: ${DOCKER_ISOLATED_DMZ_SERVER}18.1
etc. Under networks:, that entry corresponds with the ipv4_address allotment above; it's the same subnet, so the IP range is valid (not that exact example, but you get the idea). I'd strip out all env variables if I were testing.
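Putting the two halves together with the env vars stripped out (as suggested above), a minimal sketch of the pattern might look like this; the subnet, gateway, and address here are hypothetical stand-ins, not the real values:

```yaml
services:
  homarr:
    image: ghcr.io/homarr-labs/homarr:latest
    networks:
      isolated18:
        ipv4_address: 172.19.18.101   # hypothetical static IP inside the subnet below

networks:
  isolated18:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.19.18.0/24      # the static IP above must fall in this range
          gateway: 172.19.18.1
```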
Meierschlumpf
Meierschlumpf4mo ago
Can you set the LOG_LEVEL to debug? It should show a lot more details
vocoder
vocoderOP4mo ago
I need to wait until jellyseerr starts acting up, or make it do so, but I have added the var and I'll kick Homarr next time. Stopping jellyseerr and restarting Homarr didn't fail; it worked. Maybe it only happens when jellyseerr is in a straight hang state with no HTTP code response, rather than a 404 or 502. At least it seems mostly working; hopefully it's an edge case.
Meierschlumpf
Meierschlumpf4mo ago
What is DOCKER_ISOLATED_DMZ_SERVER? Yes it's pretty likely that it is an edge case
vocoder
vocoderOP4mo ago
DOCKER_REFLECTOR=172.17.0.
DOCKER_ISOLATED_SERVER=172.18.
DOCKER_ISOLATED_DMZ_SERVER=172.19.
I had a 'hanging' jellyseerr, and those logs about media requests are literally the only diff between then and now.
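To make the prefix concatenation concrete, here is a quick shell sketch of how compose expands those variables (using the values above; the host octets are just examples):

```shell
# Each variable holds only an address prefix ending in a dot;
# the compose file appends the remaining octet(s).
DOCKER_REFLECTOR=172.17.0.
DOCKER_ISOLATED_SERVER=172.18.

# ${DOCKER_REFLECTOR}101 in the compose file expands to:
echo "${DOCKER_REFLECTOR}101"           # prints 172.17.0.101
# ${DOCKER_ISOLATED_SERVER}6.101 expands to:
echo "${DOCKER_ISOLATED_SERVER}6.101"   # prints 172.18.6.101
```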
Meierschlumpf
Meierschlumpf4mo ago
Okay so it is somehow working now? Have you opened Homarr without it showing a 500 error?
vocoder
vocoderOP4mo ago
Correct. When I brought my entire container stack down and brought it back up, the error went away, whatever it was. Maybe it was happier with everything unresolvable, since docker compose up is more or less a race condition across unrelated apps, haha.
Meierschlumpf
Meierschlumpf4mo ago
Hmm okay, so we'll leave it running for now? Let me know if it happens again and we might actually need to fix something. Thanks for taking the time to debug it with me. I hope it is going to work and was just a random occurrence.
vocoder
vocoderOP4mo ago
Sure, I'll let you know. I'm pretty sure it's an HTTP non-response during a Homarr launch, but I'll keep my eyes peeled. Thank you!! Could be a fluke too, haha.
Meierschlumpf
Meierschlumpf4mo ago
Okay I think it is a general problem: https://github.com/homarr-labs/homarr/issues/4006
GitHub
bug: No Longer working as of v1.36.0 · Issue #4006 · homarr-labs/...
Describe the bug Recently, the newest image for homarr on my docker compose setup breaks and stops working with the server error 500. I made sure I have the secret encryption key and reverting back...
vocoder
vocoderOP4mo ago
👍 Sounds good. Happy to help as well; I'm interested in the feature working, as I also have pihole ;P
Meierschlumpf
Meierschlumpf2mo ago
@vocoder do you still have the issue with version v1.41.0 when you enable DNS caching (ENABLE_DNS_CACHING=true)? If so, can you share a bit more detail on your setup? I wasn't able to get it to fail with the following configuration:
services:
  homarr:
    container_name: homarr-static-ips
    image: ghcr.io/homarr-labs/homarr:v1.36.0
    networks:
      dmz-server1_isolated18:
        ipv4_address: 10.10.18.101
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=1
    environment:
      SECRET_ENCRYPTION_KEY: <encryption-key>
    ports:
      - 7590:7575
    restart: "no"
    volumes:
      - ./appdata:/appdata

networks:
  dmz-server1_isolated18:
    driver: bridge
    enable_ipv6: false
    driver_opts:
      com.docker.network.driver.mtu: 9000
    ipam:
      driver: default
      config:
        - subnet: 10.10.18.0/24
          gateway: 10.10.18.1
vocoder
vocoderOP2mo ago
Hello, yes. On my internal server with a more complex set of apps, it still results in a 500 with the :latest tag immediately after loading if it's set to true, even if I clear my cache/cookies. On my DMZ instance, it works either way. In the Homarr logs, when I try to hit it, it sometimes throws:
28:M 12 Oct 2025 13:28:12.156 * Background saving terminated with success
Error [TRPCClientError]: fetch failed
at a.from (.next/server/src/middleware.js:13:39867)
at <unknown> (.next/server/src/middleware.js:13:126631) {
shape: undefined,
data: undefined,
meta: undefined,
[cause]: TypeError: fetch failed
at bJ (.next/server/src/middleware.js:13:41646)
at async bK (.next/server/src/middleware.js:13:41726) {
[cause]: Error: connect ECONNREFUSED 172.28.0.101:3000
at <unknown> (Error: connect ECONNREFUSED 172.28.0.101:3000) {
errno: -111,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '172.28.0.101',
port: 3000
}
}
}
⨯ Error: Cannot append headers after they are sent to the client
at p (.next/server/app/_not-found/page.js:2:4781)
at async L (.next/server/app/_not-found/page.js:2:6944) {
code: 'ERR_HTTP_HEADERS_SENT'
}
⨯ Error: Cannot append headers after they are sent to the client
at p (.next/server/app/_not-found/page.js:2:4781)
at async L (.next/server/app/_not-found/page.js:2:6944) {
code: 'ERR_HTTP_HEADERS_SENT'
}
Meierschlumpf
Meierschlumpf2mo ago
Okay, can you describe further how you have set it up there? As I said, I wasn't able to reproduce the issue with the above configuration. Best of course would be a minimal reproducible example, so I can also debug it locally.
vocoder
vocoderOP2mo ago
This seems like something you may have added for testing:
ports:
- 7590:7575
Exposing a port is not part of this; my instance has an IP on the same subnet as the nginx/SWAG servers so they can reverse-proxy it. As far as what's different: the internal one has a bunch of integrations and the other one does not. Some are https://container:port and some are https://fq.dn. You kinda stripped out a lot of the potential failure points in your simplified example, e.g. permissions, OIDC.

With the value set to true it produces a 500, but on the container with DEBUG logging it looks like everything (background tasks) is running. Looking for errors is limited to the stuff above; everything else that gets through seems fine in the log (SQL, tasks, etc.).

Some of the integrations have URLs that need to be resolved by the Docker container's DNS resolver (e.g. https://container:port), some by an upstream resolver (https://someserver.org/blah), and some are just IPs. Maybe it's not related to that at all, but I'm just trying to think about what's different between my setups, because they both use OIDC, they both hairpin this way, etc.; one just hosts a ton more stuff since it's internal. If you're caching DNS, that might play in here. When I have some more time, I'll try deleting all of my integrations and switching it. Is there a flag to start with tasks off?
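One low-effort way to compare how those integration names resolve is to query the container's own resolver directly (a sketch; the availability of getent in the image and the jellyseerr hostname are assumptions):

```shell
# Inside the container shell (e.g. docker exec -it homarr sh):
# Sanity check that the resolver works at all:
getent hosts localhost
# Check a hostname that an integration uses; "jellyseerr" is a stand-in:
getent hosts jellyseerr || echo "jellyseerr: not resolvable from this container"
```

Comparing the output on the internal and DMZ instances would show whether the two containers actually see different DNS answers.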
Meierschlumpf
Meierschlumpf2mo ago
No I don't think so, why?
vocoder
vocoderOP2mo ago
I was wrong anyway; I deleted all services and am still getting the 500. I wonder if it can't resolve who I am: 2025-10-14T13:35:26.483Z info: tRPC request from unknown by user 'undefined (undefined)'. It's usually not like that. It works roughly a third of the time on a restart, not sure why. No errors worth seeing in the access/error logs of the reverse proxies; just a plain 500 there too. Maybe a race condition someplace. Maybe somehow it's resolving the OIDC endpoint as a local address or something, even if it's not. No errors in my Authelia log either.
Meierschlumpf
Meierschlumpf2mo ago
Hm, okay, weird... I'm not sure how we can move forward here to get to the root cause.
vocoder
vocoderOP2mo ago
Maybe a build with more debug markers?
Meierschlumpf
Meierschlumpf2mo ago
True, that might be something. I'll have to set something up.
vocoder
vocoderOP2mo ago
Maybe a way to dump its internal DNS mappings, and whatever else; if I could see that now, I might be able to see why it's not working. Any threads I could poke at in the git history, or files? Maybe something will stick out with my config. Is it basically a lib, or how does it work?
Meierschlumpf
Meierschlumpf2mo ago
Sorry for not responding. If you want, I can publish a test image where we log the current state of the DNS cache and what happens under the hood. I would do so from the state of v1.42.1, okay? My suggestion is that I put out the following things (I use the warning log level for the debug messages so we don't need to show the debug logs for everything else in parallel):
Meierschlumpf
Meierschlumpf2mo ago
[image attachment]
Meierschlumpf
Meierschlumpf2mo ago
You can try it out with the image tag debug-dns-caching (which is compatible with v1.42.1, meaning you can just go back to that version after trying it out).
vocoder
vocoderOP2mo ago
Thank you! Trying it out soon.

Ran it; not seeing any "DNS: " logging with LOG_LEVEL: debug on that image at all, but still seeing the 500.

me@server1:~$ docker exec homarr node -e "
(async () => {
  try {
    const { DnsCacheManager } = require('dns-caching');
    const logger = { debug: console.debug, info: console.log, warn: console.warn, error: console.error };
    const m = new DnsCacheManager({ cacheMaxEntries: 1000, forceMinTtl: 5601000, logger });
    await m.initialize();
    const dns = require('dns');
    dns.lookup('google.com', (e, addr, fam) => console.log('lookup-after-init', { err: e && e.code, addr, fam }));
    console.log('dns-caching: init OK');
  } catch (e) {
    console.error('dns-caching: init FAIL\n' + (e && e.stack || e));
    process.exit(1);
  }
})();
"
dns-caching: init FAIL
Error: Cannot find module 'dns-caching'
Require stack:
- /app/[eval]
    at Function._resolveFilename (node:internal/modules/cjs/loader:1383:15)
    at defaultResolveImpl (node:internal/modules/cjs/loader:1025:19)
    at resolveForCJSWithHooks (node:internal/modules/cjs/loader:1030:22)
    at Function._load (node:internal/modules/cjs/loader:1192:37)
    at TracingChannel.traceSync (node:diagnostics_channel:322:14)
    at wrapModuleLoad (node:internal/modules/cjs/loader:237:24)
    at Module.require (node:internal/modules/cjs/loader:1463:12)
    at require (node:internal/modules/helpers:147:16)
    at [eval]:4:33
    at [eval]:15:3

Looks like it cannot load/initialize that module cleanly. I am trying to figure out why it works 'some' of the time.
Meierschlumpf
Meierschlumpf2mo ago
I'll try to publish the functionality as CLI commands so we can try out different hostnames/URLs. I'll give you some more details once the updated image is deployed. Okay, it is now available under the same image tag debug-dns-caching (don't forget to pull it). You can use it by opening a shell in your container and running the following commands:
homarr dns-url-test -u http://github.com

homarr dns-hostname-test --ho github.com
homarr dns-real-test --ho github.com
With dns-url-test it makes two fetch requests to the specified URL; with dns-hostname-test it makes two lookups of the specified hostname via the caching library; with dns-real-test it makes two lookups of the specified hostname via the normal dns library from Node.js.
vocoder
vocoderOP2mo ago
Switched the tag to debug-dns-caching and forced it up. It still shows a 500 when I try to go there, etc. https://pastebin.com/YqZGF5QN
Meierschlumpf
Meierschlumpf2mo ago
Okay, can you try out some things like other services in your Docker network (especially with your static IPs), and also the IP that is shown in the error when you try to open Homarr? For IPs you cannot use dns-hostname-test, I think, because fetch etc. handle IPv4 addresses separately.
vocoder
vocoderOP2mo ago
Did some more testing. The FQDN at the top is part of a split-horizon DNS; it is interesting to see activeAddress having different entries for that. But nothing is failing there.
