Need a help for Post "https://api.crowdsec.net/v3/watchers": net/http: TLS handshake timeout
Hi, CrowdSec,
I ran into this problem after restarting my CrowdSec container last week.
I then tried to rebuild the container as a fresh installation, with the following steps:
1)
I ran the CrowdSec container using CrowdSec's Docker Compose file. The first time I started the container, the following errors occurred:
level=warning msg="can't load CAPI credentials from '/etc/crowdsec//online_api_credentials.yaml' (missing login field)"
Error: api client register ('https://api.crowdsec.net/'): api register (https://api.crowdsec.net/): Post "https://api.crowdsec.net/v3/watchers": net/http: TLS handshake timeout
2)
I tried starting the container a second time and it worked.
I then went into the container and ran cscli capi register, hitting the TLS handshake timeout issue many times.
After successfully registering, I ran cscli console enroll -e context ... --overwrite, which also timed out several times before it succeeded, and then I restarted the container.
*** Before restarting, I noticed one of the container's log messages; not sure if it's related to the problem: time='...' level=info msg="127.0.0.1 - [Mon, 02 Jun 2025 ... CST] \"POST /v1/watchers/login HTTP/1.1\" 200 317.608723ms \"crowdsec/v1.6.9-rc2-83d54e98-docker\""
3)
I then restarted the container a third time, and now it restarts endlessly with this error:
time='...' level=fatal msg="api server init: unable to run local API: authenticate watcher (...): Post \"https://api.crowdsec.net/v3/watchers/login\": performing jwt auth: net/http: TLS handshake timeout"
Thank you very much for any suggestions.
CAPI is behind an AWS API gateway, so it's extremely unlikely to be a server-side issue (the TLS handshake is handled by AWS infrastructure, so we can assume it will almost always work: our code behind it may fail, but you would get a different error).
I think the most likely explanation is a networking issue on your side, probably something related to IPv6: if the machine has an IPv6 address, CrowdSec will use it, but if you have a firewall or a configuration issue, the traffic may be blocked.
try lowering the MTU to 1300 for the Docker daemon (and make sure to restart the daemon and the containers, or recreate them, etc.). If it works, then raise it until you find the highest working value (probably around 1370)
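A minimal sketch of how that can be done (the file path is the usual Linux location; the network name is just an example):

```shell
# Set the default MTU for bridge networks the Docker daemon creates.
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "mtu": 1300
}
EOF
sudo systemctl restart docker

# Note: user-defined networks (e.g. those created by docker compose) do NOT
# inherit the daemon-level MTU; set it per network instead:
docker network create --opt com.docker.network.driver.mtu=1300 crowdsec_net
```

After changing either setting, recreate the containers so they pick up the new MTU.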
Thanks for your suggestions!!! I'll test your methods and report the results to you. :alpacas:
I tried lowering the MTU to 1300 on the Docker network CrowdSec uses, and it works!
The strong defender CrowdSec is back~ 👍
Resolving Need a help for Post "https://api.crowdsec.net/v3/watchers": net/http: TLS handshake timeout
This has now been resolved. If you think this is a mistake please run
/unresolve
I'll try to restore my previous working configuration then,
and shall consult you about further usage.
Thanks! :alpacas:
If you have a ping binary in the container image, you can test which MTU works best by setting the don't-fragment flag and the packet size (remember the headers also take a few bytes); there are some how-tos on the net
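As a sketch of that probe (the target host is just an example; -M do is the Linux iputils flag for Don't Fragment): the ICMP payload must be the candidate MTU minus 28 bytes of headers (20 IPv4 + 8 ICMP).

```shell
# Probe whether a 1300-byte packet fits the path without fragmentation.
MTU=1300
PAYLOAD=$((MTU - 28))   # payload size = MTU minus 28 header bytes

# With Don't Fragment set, oversized packets fail with "Message too long"
# instead of being silently fragmented.
ping -c 3 -M do -s "$PAYLOAD" api.crowdsec.net || echo "path MTU is below $MTU"
```

Raise or lower MTU until you find the largest value for which the ping still succeeds.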
Crazy... I had a similar error trying to update Crowdsec on 2 x LXC containers... getting TLS handshake errors... I changed the MTU to 1300 for the containers... now it updates 🤷♂️
So you solved the problem you were having?
So changing the MTU has caused another issue... I run Mailcow, which I am protecting with Crowdsec. Changing the MTU to fix the update issue has now broken sending large emails; they time out 🤷‍♂️
lolwhat, it shouldn't be a problem at all, because smaller MTU just forces sending more packets so it just increases overhead
Yep, it's clear as day in my Mailcow postfix logs
lost connection after DATA (81902 bytes) from unknown[192.168.0.6]
I stopped receiving some of my information emails from TrueNAS at the start of the month... couldn't figure out why, until I realised the dates matched when I changed the MTU settings to get the Crowdsec updates to work (still a mystery why on earth MTU size stops Crowdsec updates from downloading)... bumped it up to 1500 MTU earlier today, and that fixed the email issues. But no idea if they will break Crowdsec updates again.
that sounds like some nastiness in the network stack
or maybe there is a mix of incorrect MTU and MSS
or maybe PMTU is broken because you filtered out ICMP?
so try MSS clamping; by default the MSS is 1460 on a 1500 MTU, so try just lowering the MSS
are you using some tunnels?
see https://man7.org/linux/man-pages/man8/ip-route.8.html advmss
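A sketch of both options (the gateway address and interface name are placeholders; adjust for your network): for IPv4 TCP, MSS = MTU minus 40 bytes of headers (20 IP + 20 TCP).

```shell
# Option 1: clamp TCP MSS on forwarded traffic to the discovered path MTU.
sudo iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
  -j TCPMSS --clamp-mss-to-pmtu

# Option 2: advertise a fixed MSS on the route (advmss, see ip-route(8)).
MTU=1300
MSS=$((MTU - 40))   # MSS = MTU minus 40 header bytes
sudo ip route change default via 192.168.0.1 dev eth0 advmss "$MSS"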
The Cloudflare Blog — Path MTU discovery in practice: "Last week, a very small number of our users who are using IP tunnels (primarily tunneling IPv6 over IPv4) were unable to access our services because a networking change broke 'path MTU discovery' on our servers."