sidecar in kubernetes template

hello, I just finished modifying this template (yes, I think it's a bit of a dirty hack, but I guess it works).

My setup: my server (coder in kubernetes k3s > traefik) > cloudflare tunnel. With the default kubernetes template everything was fine over the last couple of weeks, no issues.

Goal: I'm trying to have a nodejs project that I can run/debug in real time with coder.

My problem: the workspace boots up just fine and I can access my "web" container through the "Poogle" button app, and it works as expected. While I got my hopes up, the dev console was throwing wss 1006 / failed websocket errors about every 2s, then the agent went unhealthy and crashed. That happens about 1m after I just try to open some dir at /home/coder/breederpage-demo. Upon checking with kubectl, I found these logs right after it crashed:
2025-09-24 06:34:33.067 [info] coderd.agentrpc.yamux.stdlib: [ERR] yamux: keepalive failed: i/o deadline reached owner=Decoyer workspace_name=web-auto agent_name=main request_id=97eadba5-87c1-43e5-8748-1a46b15ae495
2025-09-24 06:34:33.067 [info] coderd.agentrpc.yamux.stdlib: [ERR] yamux: Failed to read header: failed to get reader: context canceled owner=Decoyer workspace_name=web-auto agent_name=main request_id=97eadba5-87c1-43e5-8748-1a46b15ae495
2025-09-24 06:34:51.528 [erro] coderd: encode containers request_id=d4432e2b-843f-4698-81c9-cb21b4fafe68 error="failed to write msg: use of closed network connection"
At first I thought it was an issue between traefik <-> cloudflare, but after pointing the coder domain at my server's local IP (by editing /etc/hosts), I still got the same issue, so here I am. My best guess is that something has to be done between traefik <-> coder? This is as far as I can get debugging this on my own, can someone help?
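For context, a rough sketch of what I mean by the sidecar (not my exact template; the image, command, and port are placeholders): the "web" container is just a second container block next to the agent container in the starter Kubernetes template's kubernetes_deployment, plus a coder_app button pointing at it.

# sketch only; based on the starter template, placeholder values
resource "kubernetes_deployment" "main" {
  # ...metadata etc. unchanged from the starter template...
  spec {
    template {
      spec {
        # stock agent container from the starter template
        container {
          name    = "dev"
          image   = "codercom/enterprise-base:ubuntu"
          command = ["sh", "-c", coder_agent.main.init_script]
          env {
            name  = "CODER_AGENT_TOKEN"
            value = coder_agent.main.token
          }
        }
        # extra "web" sidecar running the nodejs app (placeholder image/command/port)
        container {
          name    = "web"
          image   = "node:20"
          command = ["sh", "-c", "cd /app && npm run dev"]
          port {
            container_port = 3000
          }
        }
      }
    }
  }
}

# the button app reaches the sidecar over the pod-local network,
# since containers in the same pod share localhost
resource "coder_app" "web" {
  agent_id     = coder_agent.main.id
  slug         = "web"
  display_name = "Poogle"
  url          = "http://localhost:3000"
  subdomain    = true
}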
1 Reply
OP · 2w ago
I've been on this for hours, checked forums and a bunch of LLMs as well, no luck. I feel like the template shouldn't be used like this, so I'm not sure if I should keep going and try some traefik middleware to increase the timeout (with gpt's help, because I have no knowledge whatsoever related to websockets/networking; I know a bit about ingress, IPs, here and there, but nothing that deep on ws/wss), OR if I should stop here and find a workaround (maybe have my template launch a helm installation with a pre/post hook?). It's about 1am here, so if my wording/description wasn't clear, please tell me.

Hello, thankfully gpt helped me fix it. Apparently it was a combination of issues:
+ vscode hit a file watcher limit, I think the kernel's inotify limits (so I bumped those to around 2M watches, rough sysctl sketch at the end of this post)
+ low RAM
+ coredns: looking at coredns, my workspace pods were resolving the coder domain externally through cloudflared, even though coder sits right behind my traefik, so I added:
rewrite name regex (.*)\.my\.domain traefik.traefik.svc.cluster.local answer auto
so it makes the workspace pods circle right back to the traefik -> coder pod inside the cluster (trimmed Corefile context below).
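For context, that rewrite line goes inside the coredns Corefile in kube-system (trimmed here to the parts I remember, and my.domain is a placeholder; on k3s it's probably cleaner to drop it into the coredns-custom ConfigMap instead, since k3s can regenerate the main one):

# kubectl -n kube-system edit configmap coredns
.:53 {
    errors
    health
    ready
    # resolve *.my.domain to the in-cluster traefik service instead of the
    # public cloudflare address, so workspace traffic stays inside the cluster
    rewrite name regex (.*)\.my\.domain traefik.traefik.svc.cluster.local answer auto
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}

coredns has the reload plugin so it should pick the change up on its own, otherwise a kubectl -n kube-system rollout restart deployment coredns does it.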
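And the inotify bump, roughly (values are illustrative, I just aimed for "around 2M" watches; this goes on the k3s node itself):

# /etc/sysctl.d/99-inotify.conf on the node
fs.inotify.max_user_watches = 2097152
fs.inotify.max_user_instances = 1024

# apply and verify
sudo sysctl --system
sysctl fs.inotify.max_user_watches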
