Container Acquisition randomly stopping

I set up CrowdSec to acquire logs from Traefik and Authelia. The logs are read through a socket proxy, so I have an acquisition config file for Authelia (and a similar one for Traefik). The Authelia one looks like this:
container_name:
  - authelia
docker_host: tcp://docker-socket-proxy:2375
labels:
  type: authelia
log_level: info
source: docker
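The Traefik file isn't shown in the thread; as a rough sketch it presumably mirrors the Authelia one, with the traefik container name and a type: traefik label (that label is my assumption based on the stock CrowdSec Traefik parser, not something confirmed here):

container_name:
  - traefik
docker_host: tcp://docker-socket-proxy:2375
labels:
  type: traefik
log_level: info
source: docker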
I noticed that from time to time, acquisition will randomly just stop: I check the Grafana metrics and see that no logs have been ingested at all. This happened just now for Authelia:
- The Authelia and socket-proxy containers have been running for 1 day and 12 hours.
- CrowdSec has been up for 25 minutes.
In the CrowdSec logs I can see that log collection started fine:
Sep 05 19:57:11 homeserver crowdsec[3055152]: time="2025-09-05T19:57:11+02:00" level=info msg="start tail" container_name=authelia type=docker
Sep 05 19:57:11 homeserver crowdsec[3055152]: time="2025-09-05T19:57:11+02:00" level=info msg="start tail" container_name=traefik type=docker
Then, after exactly 10 minutes:
Sep 05 20:07:11 homeserver crowdsec[3055152]: time="2025-09-05T20:07:11+02:00" level=info msg="container acquisition stopped for container 'authelia'"
Is there some mechanism that stops acquisition if there has been no activity within a certain time frame? If so, is there a way to disable it? I keep randomly finding CrowdSec basically "halted", with no logs being read anymore, and there's no error or anything that would make me realize something is going wrong.
6 Replies
CrowdSec · 4w ago
Important Information
Thank you for getting in touch with your support request. To expedite a swift resolution, could you kindly provide the following information? Rest assured, we will respond promptly, and we greatly appreciate your patience. While you wait, please check the links below to see if this issue has been previously addressed. If you have managed to resolve it, please run the command /resolve or press the green resolve button below.
Log Files
If you possess any log files that you believe could be beneficial, please include them at this time. By default, CrowdSec logs to /var/log/, where you will discover a corresponding log file for each component.
Guide Followed (CrowdSec Official)
If you have diligently followed one of our guides and hit a roadblock, please share the guide with us. This will help us assess if any adjustments are necessary to assist you further.
Screenshots
Please forward any screenshots depicting errors you encounter. Your visuals will provide us with a clear view of the issues you are facing.
© Created By WhyAydan for CrowdSec ❤️
Tarrew (OP) · 4w ago
Checking the docker-socket-proxy logs for the same timestamp, I find this:
Sep 05 20:07:11 homeserver docker-socket-proxy[3037644]: ::ffff:10.89.5.26:51788 [05/Sep/2025:17:57:11.055] dockerfrontend dockerbackend/dockersocket 0/0/0/3/600010 200 181 - - CD-- 15/15/2/2/0 0/0 "GET /v1.41/containers/e61084d9c42782bdb0bc15e6beb64b296cf2aa7871316e41924bc40420c82dff/logs?follow=1&since=1757095030.000000000&stderr=1&stdout=1&tail= HTTP/1.1"
So it seems like the request times out after 10 minutes of no activity, and CrowdSec then won't try to restart the log tail.
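For context, the last timer field in that HAProxy-style log line (600010) is the total session duration in milliseconds, i.e. roughly 600 s = 10 minutes, which matches the gap between "start tail" and "container acquisition stopped". If the proxy's HAProxy configuration can be overridden (not something confirmed in this thread), raising its inactivity timeouts would be one possible workaround; the directives below are standard HAProxy, but the 10-minute default and the override mechanism for this particular socket-proxy image are assumptions:

# hypothetical haproxy.cfg override for the socket proxy (sketch only)
defaults
    mode            http
    timeout connect 10s
    timeout client  1h   # idle timeout towards the log consumer (CrowdSec)
    timeout server  1h   # idle timeout towards the Docker socket backend
    timeout tunnel  1h   # for connections that switch to tunnel mode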
Tarrew (OP) · 4w ago
Seems like this behavior has been fixed in the past: https://github.com/crowdsecurity/crowdsec/issues/1469 But looking at the log and the current code, the wrong branch seems to be entered: https://github.com/crowdsecurity/crowdsec/blob/85e021e02d0d5b2d3d73b5bb546cbfd29b6b9a62/pkg/acquisition/modules/docker/docker.go#L493
GitHub: Bug/Docker datasource: logs are not read if not data for 10 minutes...
When using docker-socket-proxy, if a container that we are tailing does not emit any logs for 10 minutes, we won't see any of the logs that are written later. This is because the haproxy config...
iiamloz · 4w ago
The problem has probably been reintroduced when we moved over to the docker events hook (since we only update once a container comes alive or starts, so we don't see the tomb silently dying anymore). Hmm, let me think about how to handle this. @Tarrew ^^
iiamloz · 4w ago
GitHub: enhance: add timeout resilience to Docker acquisition (socket proxy...
Fix Docker Acquisition Timeout Issues. Problem: Docker log acquisition fails when socket proxies timeout, causing permanent log loss. Solution: Health-based retries using errdefs.IsNotFound for pre...
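Based on that description, the shape of the fix is presumably a retry loop around the log stream that only gives up once the container is actually gone. Below is a minimal illustrative sketch against recent versions of the Docker Go SDK, not the actual CrowdSec implementation; the 5-second backoff, the function name, and the use of stdout as a sink are made up:

// tailWithRetry keeps re-opening a followed log stream when it ends
// (e.g. when an idle socket proxy closes the connection) and only
// stops once the container no longer exists.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
	"github.com/docker/docker/errdefs"
	"github.com/docker/docker/pkg/stdcopy"
)

func tailWithRetry(ctx context.Context, cli *client.Client, id string) error {
	since := time.Now()
	for {
		reader, err := cli.ContainerLogs(ctx, id, container.LogsOptions{
			ShowStdout: true,
			ShowStderr: true,
			Follow:     true,
			Since:      since.Format(time.RFC3339Nano),
		})
		if err == nil {
			// Blocks until the stream ends, e.g. when the proxy times out.
			_, _ = stdcopy.StdCopy(os.Stdout, os.Stderr, reader)
			reader.Close()
			since = time.Now() // resume roughly where we left off
		}
		// Health check: only give up if the container is actually gone.
		if _, inspectErr := cli.ContainerInspect(ctx, id); errdefs.IsNotFound(inspectErr) {
			return fmt.Errorf("container %s no longer exists, stopping tail", id)
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(5 * time.Second): // back off, then re-open the stream
		}
	}
}

func main() {
	// DOCKER_HOST=tcp://docker-socket-proxy:2375 is picked up via FromEnv.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()
	_ = tailWithRetry(context.Background(), cli, "authelia")
}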
Tarrew (OP) · 4w ago
Cool, thanks for fixing it
