Nginx ingress controller unexpected DNS response
Hi everyone,
I'm trying out CrowdSec for the first time in Kubernetes.
It's an AKS cluster with an Nginx ingress controller.
The setup is currently working and blocking visitors using the free community blocklist.
But there seems to be an issue in the nginx controller logs:
2024/11/05 13:36:36 [error] 25#25: unexpected DNS response for crowdsec-service.crowdsec.svc.cluster.local
2024/11/05 13:36:36 [error] 25#25: unexpected DNS response for crowdsec-service.crowdsec.svc.cluster.local
I'm guessing it has something to do with the logging format.
The following guide was followed: https://docs.crowdsec.net/u/getting_started/installation/kubernetes/
The yaml files are in the next post.
The nginx service's externalTrafficPolicy was set to Local.
What I've tried:
- Changing the service name: this gave an error that the service was not reachable
Might this be an issue? The Nginx logs go to stdout and stderr (the default for the Kubernetes NGINX ingress controller).
Did anyone have this error before?
I've searched for some days and can't figure it out.
Thanks for reading this and sorry if I did not follow some rules. First time using Discord.
Best regards,
Gunnar
crowdsec-ingress-bouncer.yaml:
controller:
  extraVolumes:
    - name: crowdsec-bouncer-plugin
      emptyDir: {}
  extraInitContainers:
    - name: init-clone-crowdsec-bouncer
      image: crowdsecurity/lua-bouncer-plugin
      imagePullPolicy: IfNotPresent
      env:
        - name: API_URL
          value: "http://crowdsec-service.crowdsec.svc.cluster.local:8080"
        - name: API_KEY
          value: "<hidden>"
        - name: BOUNCER_CONFIG
          value: "/crowdsec/crowdsec-bouncer.conf"
        - name: CAPTCHA_PROVIDER
          value: "recaptcha" # valid providers are recaptcha, hcaptcha, turnstile
        - name: BAN_TEMPLATE_PATH
          value: /etc/nginx/lua/plugins/crowdsec/templates/ban.html
        - name: CAPTCHA_TEMPLATE_PATH
          value: /etc/nginx/lua/plugins/crowdsec/templates/captcha.html
      command: ["sh", "-c", "apk update; apk add bash; bash /docker_start.sh; mkdir -p /lua_plugins/crowdsec/; cp -R /crowdsec/* /lua_plugins/crowdsec/"]
      volumeMounts:
        - name: crowdsec-bouncer-plugin
          mountPath: /lua_plugins
  extraVolumeMounts:
    - name: crowdsec-bouncer-plugin
      mountPath: /etc/nginx/lua/plugins/crowdsec
      subPath: crowdsec
  config:
    plugins: "crowdsec"
    lua-shared-dicts: "crowdsec_cache: 50m"
crowdsec-values.yaml:
# for raw logs format: json or cri (docker|containerd)
container_runtime: containerd
agent:
  # Specify each pod whose logs you want to process
  acquisition:
    # The namespace where the pod is located
    - namespace: ingress-nginx
      # The pod name
      podName: ingress-nginx-controller-*
      # as in crowdsec configuration, we need to specify the program name to find a matching parser
      program: nginx
      poll_without_inotify: true
  env:
    - name: COLLECTIONS
      value: "crowdsecurity/nginx"
    - name: PARSERS
      value: "crowdsecurity/nginx-logs"
lapi:
  env:
    # To enroll the Security Engine to the console
    - name: ENROLL_KEY
      value: "<hidden>"
    - name: ENROLL_INSTANCE_NAME
      value: "staging-cluster"
    - name: ENROLL_TAGS
      value: "k8s linux staging"
Some more info:
crowdsec agent config.yaml:
crowdsec-agent-hgbbn:/etc/crowdsec# more config.yaml
common:
  daemonize: false
  log_media: stdout
  log_level: info
  log_dir: /var/log/
config_paths:
  config_dir: /etc/crowdsec/
  data_dir: /var/lib/crowdsec/data/
  simulation_path: /etc/crowdsec/simulation.yaml
  hub_dir: /etc/crowdsec/hub/
  index_path: /etc/crowdsec/hub/.index.json
  notification_dir: /etc/crowdsec/notifications/
  plugin_dir: /usr/local/lib/crowdsec/plugins/
crowdsec_service:
  acquisition_path: /etc/crowdsec/acquis.yaml
  acquisition_dir: /etc/crowdsec/acquis.d
  parser_routines: 1
plugin_config:
  user: nobody
  group: nobody
cscli:
  output: human
db_config:
  log_level: info
  type: sqlite
  db_path: /var/lib/crowdsec/data/crowdsec.db
  flush:
    max_items: 5000
    max_age: 7d
  use_wal: false
api:
  client:
    insecure_skip_verify: false
    credentials_path: /etc/crowdsec/local_api_credentials.yaml
  server:
    log_level: info
    listen_uri: 0.0.0.0:8080
    profiles_path: /etc/crowdsec/profiles.yaml
    trusted_ips: # IP ranges, or IPs which can have admin API access
      - 127.0.0.1
      - ::1
    enable: false
prometheus:
  enabled: true
  level: full
  listen_addr: 0.0.0.0
  listen_port: 6060
crowdsec lapi config.yaml:
crowdsec-lapi-b5547ff7f-jn9c6:/etc/crowdsec# more config.yaml
common:
  daemonize: false
  log_media: stdout
  log_level: info
  log_dir: /var/log/
config_paths:
  config_dir: /etc/crowdsec/
  data_dir: /var/lib/crowdsec/data/
  simulation_path: /etc/crowdsec/simulation.yaml
  hub_dir: /etc/crowdsec/hub/
  index_path: /etc/crowdsec/hub/.index.json
  notification_dir: /etc/crowdsec/notifications/
  plugin_dir: /usr/local/lib/crowdsec/plugins/
crowdsec_service:
  acquisition_path: /etc/crowdsec/acquis.yaml
  acquisition_dir: /etc/crowdsec/acquis.d
  parser_routines: 1
plugin_config:
  user: nobody
  group: nobody
cscli:
  output: human
db_config:
  log_level: info
  type: sqlite
  db_path: /var/lib/crowdsec/data/crowdsec.db
  flush:
    max_items: 5000
    max_age: 7d
  use_wal: false
api:
  client:
    insecure_skip_verify: false
    credentials_path: /etc/crowdsec/local_api_credentials.yaml
  server:
    log_level: info
    listen_uri: 0.0.0.0:8080
    profiles_path: /etc/crowdsec/profiles.yaml
    trusted_ips: # IP ranges, or IPs which can have admin API access
      - 127.0.0.1
      - ::1
    online_client: # Central API credentials (to push signals and receive bad IPs)
      credentials_path: /etc/crowdsec//online_api_credentials.yaml
    enable: true
prometheus:
  enabled: true
  level: full
  listen_addr: 0.0.0.0
  listen_port: 6060
There is no data under /var/log/ (in the agent and lapi pods), and only the symlinks on the nginx ingress controller (and an empty audit folder).
These errors are in the agent terminal log:
time="2024-11-04T14:59:16Z" level=warning msg="crowdsec local API is disabled because 'enable' is set to false"
msg="Exprhelpers loaded without database client."
This is expected, since the agent pods only run the log processor; they don't run a LAPI, as that is the role of the lapi pods.
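That split is visible in the two config dumps above; condensed, the relevant api.server difference is:

```yaml
# agent pods: log processor only, local API disabled
api:
  server:
    enable: false

# lapi pods: serve the local API and hold the Central API credentials
api:
  server:
    enable: true
```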
Are you running anything inside the cluster that is acting as a DNS service?
Thanks for your response. There is CoreDNS running on the node.
Is the nginx ingress configured to use CoreDNS instead of the normal cluster networking?
Not sure, thanks for the hint, will investigate.
I've scaled the CoreDNS pods down to zero and still get the same errors. It could be that it's not using the CoreDNS pods, or that it just falls back to the VNET DNS. There is no custom DNS installed.
In the Azure docs it says:
Azure Kubernetes Service (AKS) uses the CoreDNS project for cluster DNS management and resolution with all 1.12.x and higher clusters
Not sure if it matters, but it's a K8s cluster. The image above shows that the VNET DNS server is the Azure-provided DNS service. I've installed dnsutils and tried the service name, which suggests it should be reachable:
bash-5.0# host -a crowdsec-service.crowdsec.svc.cluster.local
Trying "crowdsec-service.crowdsec.svc.cluster.local.default.svc.cluster.local"
Trying "crowdsec-service.crowdsec.svc.cluster.local.svc.cluster.local"
Trying "crowdsec-service.crowdsec.svc.cluster.local.cluster.local"
Trying "crowdsec-service.crowdsec.svc.cluster.local.oqlpvgtasvbe5f3r5bq5rfs2ra.ax.internal.cloudapp.net"
Trying "crowdsec-service.crowdsec.svc.cluster.local"
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 57357
;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0
;; QUESTION SECTION:
;crowdsec-service.crowdsec.svc.cluster.local. IN ANY
;; AUTHORITY SECTION:
cluster.local. 30 IN SOA ns.dns.cluster.local. hostmaster.cluster.local. 1730898047 7200 1800 86400 30
Received 154 bytes from 10.0.0.10#53 in 4 ms

Are you sure it's actually managing to resolve it? Unless I'm misreading what you pasted, there's no ANSWER section, meaning nothing was resolved (or more accurately, there was no A/AAAA record).
Hi, thanks for your answer and for pointing me in the right direction. That's awesome!
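If it helps anyone reading this later: a quick way to reproduce the check from inside a pod without installing dnsutils is a one-off resolver call. A minimal sketch (Python standard library only; the hostname is the one from this thread, and an empty result corresponds to the missing ANSWER section above):

```python
import socket

def resolve_a(hostname):
    """Return the sorted IPv4 addresses the resolver yields for hostname, or [] if none."""
    try:
        infos = socket.getaddrinfo(hostname, None,
                                   family=socket.AF_INET,
                                   type=socket.SOCK_STREAM)
    except socket.gaierror:
        return []  # NXDOMAIN, or the name exists but has no A record
    return sorted({sockaddr[0] for *_, sockaddr in infos})

# Inside the cluster this should print the Service's ClusterIP;
# an empty list matches the failure seen in this thread.
print(resolve_a("crowdsec-service.crowdsec.svc.cluster.local"))
```

Unlike `host -a` (which sends an ANY query), this goes through the same resolver path a client application would use.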
It's indeed not resolving the service name correctly. Changing the API_URL to the crowdsec service endpoint IP works and does not give the error anymore.
- name: API_URL
  value: "http://10.244.0.27:8080"
Sorry, I should have tried that earlier, since I had the same problem with Redis on another cluster. It might be an AKS thing that service names don't resolve correctly with the default settings.
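One caveat, in case others copy this workaround: 10.244.x.x is typically a pod IP, which changes whenever the endpoint pod is rescheduled. The Service's ClusterIP (shown by kubectl -n crowdsec get svc crowdsec-service) is stable for the Service's lifetime and may be the safer value to hard-code. A sketch with a hypothetical ClusterIP:

```yaml
- name: API_URL
  # 10.0.200.15 is a placeholder — substitute your Service's actual ClusterIP
  value: "http://10.0.200.15:8080"
```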
Resolving Nginx ingress controller unexpected DNS response
This has now been resolved. If you think this is a mistake please run
/unresolve