Kafka External Access

motters · 3mo ago
We have a trusted external provider which needs to connect to our Zerops Kafka instance. Is there any way to do this without them using the "zcli vpn"? They won't do that, since theirs is a PaaS/SaaS offering.
8 Replies
Jan Saidl · 3mo ago
Hello @motters, unfortunately it's not currently possible. Kafka is not configured with TLS support. One way to do it would be to use a proxy that handles TLS and securely exposes its port: https://github.com/grepplabs/kafka-proxy
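(For reference, a minimal kafka-proxy invocation that terminates TLS on the public listener could look roughly like the sketch below. The TLS flag names are taken from the kafka-proxy README and should be verified there; the public host and certificate paths are placeholders, not values from this thread.)

# Sketch only - verify flag names against the grepplabs/kafka-proxy docs.
# Terminates TLS on the public listener and forwards to the internal broker.
kafka-proxy server \
  --bootstrap-server-mapping "node-stable-1.db.kafka.zerops:9092,0.0.0.0:19092,<public-host>:19092" \
  --dynamic-listeners-disable \
  --proxy-listener-tls-enable \
  --proxy-listener-cert-file /path/to/server-cert.pem \
  --proxy-listener-key-file /path/to/server-key.pem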
motters (OP) · 3mo ago
Hi @Jan Saidl, thanks for replying. We guessed this and have been trying to configure https://www.envoyproxy.io/. The Zerops documentation on Ubuntu services is limited. This is what we have, but it's failing to read the YAML. Could you help? Or do you suggest we use the proxy you recommended? Docs: https://www.envoyproxy.io/docs/envoy/latest/start/sandboxes/kafka https://www.envoyproxy.io/docs/envoy/latest/configuration/listeners/network_filters/kafka_broker_filter.html
zerops:
  - setup: kafkagateway
    build:
      deployFiles:
        - ./envoy.yaml
    run:
      base: ubuntu@24.04
      prepareCommands:
        - apt-get update -y
        - apt-get install -y curl gnupg apt-transport-https lsb-release
        - curl -sL 'https://getenvoy.io/gpg' | apt-key add -
        - echo "deb [arch=amd64] https://getenvoy.io/debian stable main" > /etc/apt/sources.list.d/getenvoy.list
        - apt-get update -y
        - apt-get install -y getenvoy-envoy
      start: |
        # Run Envoy with the uploaded config
        envoy -c /var/www/envoy.yaml --log-level info
      ports:
        - port: 9094
          protocol: TCP
          httpSupport: false
^^ Message was too long for Discord

envoy.yaml:
static_resources:
  listeners:
    - name: kafka_public
      address:
        socket_address:
          address: 0.0.0.0
          port_value: 9094
      filter_chains:
        - filters:
            - name: envoy.filters.network.kafka_broker
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.kafka_broker.v3.KafkaBroker
                stat_prefix: kafka_gateway
                enable_request_mutation: true
                enable_response_mutation: true
                broker_address_rewrite_spec:
                  host: "{EXTERNAL_HOST}"
                  port: {EXTERNAL_PORT}
            - name: envoy.filters.network.tcp_proxy
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
                stat_prefix: tcp_to_kafka
                cluster: kafka_internal

  clusters:
    - name: kafka_internal
      connect_timeout: 1s
      type: STRICT_DNS
      load_assignment:
        cluster_name: kafka_internal
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: "{INTERNAL_BROKER_HOST}"
                      port_value: {INTERNAL_BROKER_PORT}
Jan Saidl · 3mo ago
Hi @motters, I'll definitely check it out and try to create a recipe that would solve it.
motters (OP) · 3mo ago
@Jan Saidl Thanks, I think we've solved it (just this second); just testing at the moment. We switched to using what you suggested. It'd be good to have a guide written on this, as you'd be the only affordable Kafka provider.
zerops:
  - setup: kafkagateway
    run:
      base: docker@26.1
      prepareCommands:
        - docker pull grepplabs/kafka-proxy:latest
      start: docker run --rm --network=host grepplabs/kafka-proxy:latest server --log-level debug --dynamic-listeners-disable --bootstrap-server-mapping "node-stable-1.db.kafka.zerops:9092,0.0.0.0:19092,XXX.XXX.XXX.207:19092" --bootstrap-server-mapping "node-stable-2.db.kafka.zerops:9092,0.0.0.0:19093,XXX.XXX.XXX.207:19093" --bootstrap-server-mapping "node-stable-3.db.kafka.zerops:9092,0.0.0.0:19094,XXX.XXX.XXX.207:19094"
      ports:
        - { port: 19092, httpSupport: false }
        - { port: 19093, httpSupport: false }
        - { port: 19094, httpSupport: false }
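(Not from the thread, but a quick way to sanity-check a mapping like this from outside would be to request cluster metadata through one of the public listeners, e.g. with kcat; the tool choice is an assumption.)

# The broker list returned here should show the rewritten
# XXX.XXX.XXX.207:1909x addresses, not the internal .zerops hostnames.
kcat -b XXX.XXX.XXX.207:19092 -L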
I'll update after testing.
Jan Saidl · 3mo ago
Great job 👍. Maybe it will even run without Docker. Just a note on the original zerops.yaml: in the prepareCommands for apt, you need to use sudo.
motters (OP) · 3mo ago
We'd prefer this to run without Docker so it can auto-scale resources. Ah thanks, noted on 'sudo'.
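(A rough sketch of how a Docker-free variant might look, downloading the kafka-proxy binary in prepareCommands; the release URL, version, and asset name below are assumptions and need to be checked against the grepplabs/kafka-proxy releases page.)

zerops:
  - setup: kafkagateway
    run:
      base: ubuntu@24.04
      prepareCommands:
        # Hypothetical release asset - replace with the real version/asset name.
        - sudo curl -L -o /usr/local/bin/kafka-proxy https://github.com/grepplabs/kafka-proxy/releases/download/v0.3.x/kafka-proxy-linux-amd64
        - sudo chmod +x /usr/local/bin/kafka-proxy
      start: kafka-proxy server --log-level debug --dynamic-listeners-disable --bootstrap-server-mapping "node-stable-1.db.kafka.zerops:9092,0.0.0.0:19092,XXX.XXX.XXX.207:19092" --bootstrap-server-mapping "node-stable-2.db.kafka.zerops:9092,0.0.0.0:19093,XXX.XXX.XXX.207:19093" --bootstrap-server-mapping "node-stable-3.db.kafka.zerops:9092,0.0.0.0:19094,XXX.XXX.XXX.207:19094"
      ports:
        - { port: 19092, httpSupport: false }
        - { port: 19093, httpSupport: false }
        - { port: 19094, httpSupport: false }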
Aleš · 3mo ago
If you don't mind me asking, what does the rest of your stack look like / what would be the reason not to run it fully inside Zerops?
motters (OP) · 3mo ago
@Aleš Our system itself doesn't really need Kafka. The only reason we use it is that our external partner, who provides device management services for our IoT platform, can only stream device data via Kafka. Their cloud (large and complex) needs access to the Kafka instance we're hosting with you so they can push data in. From there, our cloud (hosted with you) can pull it out and process it. Effectively, we can't host our partner's cloud with you, as dedicated instances would be prohibitively expensive. It's not really what Kafka is meant for, but it's a reasonable solution given that we have no control over our partner's development pipeline. Hope that helps 👍

We've been testing the proxy over the weekend and everything is working well. We'll leave it running for a few more days to continue monitoring. However, we're always open to better suggestions or implementations.
