The first thing we must do is install the Strimzi Cluster Operator, which is responsible for creating the Kafka broker pods and ZooKeeper pods in our cluster. You can install it with kubectl using the latest released version, or use Helm to achieve the same.
curl -L https://github.com/strimzi/strimzi-kafka-operator/releases/download/0.15.0/strimzi-cluster-operator-0.15.0.yaml \
  | sed 's/namespace: .*/namespace: tls-kafka/' \
  | kubectl apply -f - -n tls-kafka
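If you prefer Helm, a rough sketch of the equivalent install (the Strimzi chart repository is https://strimzi.io/charts/; exact flags depend on your Helm version):
helm repo add strimzi https://strimzi.io/charts/
helm install strimzi-operator strimzi/strimzi-kafka-operator --namespace tls-kafka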
Before we proceed, let's verify that our operator was created successfully.
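For example, check the operator Deployment and its pod in the tls-kafka namespace:
kubectl get deployments -n tls-kafka
kubectl get pods -n tls-kafka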
Now that our operator is running, we can create a new Custom Resource corresponding to our Kafka cluster, specifying the Kafka version, the number of brokers, TLS/SSL on the brokers, and other configuration, using the following YAML file:
code my-tls-cluster.yaml
YAML:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-tls-cluster
spec:
  kafka:
    version: 2.2.1
    replicas: 3
    listeners:
      tls:
        authentication:
          type: tls
    config:
      log.message.format.version: "2.2"
      ssl.client.auth: "required"
    # The original file is truncated at this point; the storage, zookeeper,
    # and entityOperator sections below are a minimal reconstruction. The
    # entityOperator is needed for the KafkaTopic/KafkaUser resources used
    # later, and the zookeeper tlsSidecar is referenced in the text.
    storage:
      type: ephemeral  # assumed storage type; adjust as needed
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral  # assumed storage type; adjust as needed
    tlsSidecar: {}
  entityOperator:
    topicOperator: {}
    userOperator: {}
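Apply the file and watch the ZooKeeper and Kafka broker pods come up:
kubectl apply -f my-tls-cluster.yaml -n tls-kafka
kubectl get pods -n tls-kafka -w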
It's good to know that the ZooKeeper services are secured with encryption and authentication, and are not intended to be used by external applications that are not part of Strimzi. If you really need to access ZooKeeper, for example with Kafka command line tools such as kafka-topics.sh, you can connect to the local end of the TLS tunnel to ZooKeeper on localhost:2181. Finally, note that each ZooKeeper and Kafka pod typically has two containers: one for Kafka/ZooKeeper itself and a sidecar for TLS.
Since we enabled TLS authentication for the brokers, our Operator has created a CA certificate for them and stored it in a Kubernetes Secret. The secret is named <cluster-name>-cluster-ca-cert, so in our case it is my-tls-cluster-cluster-ca-cert. Its data field contains the ca.crt value, which we will use later.
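For example, you can extract the CA certificate from the secret like this (the backslash escapes the dot in the ca.crt key name):
kubectl get secret my-tls-cluster-cluster-ca-cert -n tls-kafka -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt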
As it turns out, a Kafka topic is modeled as a Custom Resource in Strimzi, so you can create topics either with kubectl or with any of the Kafka utilities. Here we will create KafkaTopic Custom Resources and apply them via kubectl:
code kafka-topics.yaml
YAML:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: test
  labels:
    strimzi.io/cluster: my-tls-cluster
spec:
  partitions: 6
  replicas: 3
  config:
    retention.ms: 7200000
    segment.bytes: 1073741824
---
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: test-one-rep
  labels:
    strimzi.io/cluster: my-tls-cluster
spec:
  partitions: 6
  replicas: 1
  config:
    retention.ms: 7200000
    segment.bytes: 1073741824
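Apply the file and verify that the topics were created:
kubectl apply -f kafka-topics.yaml -n tls-kafka
kubectl get kafkatopics -n tls-kafka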
One can specify the topic name, the number of partitions and replicas the topic should have, and any topic configuration in the Custom Resource object. Note that even when you create a topic using the Kafka command line tools, the Strimzi Topic Operator will notice it and create a matching KafkaTopic resource, so when you list topics using kubectl you still get the correct, up-to-date list.
Like topics, a user is also a Custom Resource and can be created in a similar fashion. Strimzi supports TLS client authentication as well as SCRAM-SHA-512 authentication, and simple ACL-based authorization; here we use TLS:
code kafka-users.yaml
YAML:
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-tls-cluster
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: test
          patternType: literal
        operation: All
      - resource:
          type: topic
          name: test-one-rep
          patternType: literal
        operation: All
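Apply the file to create the user:
kubectl apply -f kafka-users.yaml -n tls-kafka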
The User Operator is responsible for creating the user and its related ACLs, and it generates a secret with the same name as the user; in our case it generates the secret my-user. This secret contains user.crt and user.key, which our Kafka clients will use to connect to the brokers.
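You can inspect the generated client certificate, for example:
kubectl get secret my-user -n tls-kafka -o jsonpath='{.data.user\.crt}' | base64 -d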
In this example, we do not have to expose the brokers to the outside world (outside the cluster), as our clients (both producers and consumers) are also pods/deployments running within the AKS cluster, so we can use the default generated headless service to connect to the brokers. But first, let's create 3 Kafka clients and pass them the cluster CA certificate and the my-user credentials so that they can connect to the brokers successfully.
code kafka-client.yaml
YAML:
apiVersion: apps/v1
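# NOTE: the original manifest is truncated after the apiVersion line above.
# What follows is a hedged reconstruction, not the workshop's exact file. It
# assumes a StatefulSet named "kafkaclient" (matching the kafkaclient-0 and
# kafkaclient-1 pods used later), the Strimzi Kafka image, and injection of
# the TLS material via the CA_CRT, USER_CRT, and USER_KEY environment
# variables consumed by the ssl_setup.sh script below.
kind: StatefulSet
metadata:
  name: kafkaclient
spec:
  serviceName: kafkaclient
  replicas: 3
  selector:
    matchLabels:
      app: kafkaclient
  template:
    metadata:
      labels:
        app: kafkaclient
    spec:
      containers:
        - name: kafkaclient
          image: strimzi/kafka:0.15.0-kafka-2.2.1  # assumed image/tag
          command: ["sh", "-c", "sleep infinity"]  # keep the pod alive for exec
          env:
            - name: CA_CRT
              valueFrom:
                secretKeyRef:
                  name: my-tls-cluster-cluster-ca-cert
                  key: ca.crt
            - name: USER_CRT
              valueFrom:
                secretKeyRef:
                  name: my-user
                  key: user.crt
            - name: USER_KEY
              valueFrom:
                secretKeyRef:
                  name: my-user
                  key: user.key
Apply it with kubectl apply -f kafka-client.yaml -n tls-kafka.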
Step 12: Create Trust Store and Key Store for Mutual TLS
Before we can start sending and consuming messages, we need to configure the truststore and keystore of the Kafka clients created in the last step to use the secrets passed to them via environment variables. To do so, we use the following script. With it, we end up configuring our truststore and keystore and also creating a config file at /opt/kafka/config/ssl that our producers and consumers will use to connect.
code ssl_setup.sh
SSL SETUP:
#!/bin/bash
set +x

# Parameters:
# $1: Path to the new truststore
# $2: Truststore password
# $3: Public key to be imported
# $4: Alias of the certificate
function create_truststore {
  if [ -f "$1" ]; then
    echo "Truststore exists so removing it since we are using a new random password."
    rm -f "$1"
  fi
  keytool -keystore "$1" -storepass "$2" -noprompt -alias "$4" -import -file "$3" -storetype PKCS12
}

# Parameters:
# $1: Path to the new keystore
# $2: Keystore password
# $3: Public key to be imported
# $4: Private key to be imported
# $5: Alias of the certificate
function create_keystore {
  if ! hash openssl; then
    if hash apt-get; then
      apt-get update && apt-get install openssl -y
    else
      echo "FAILED TO CREATE KEYSTORE!"
      exit 1
    fi
  fi
  if [ -f "$1" ]; then
    echo "Keystore exists so removing it since we are using a new random password."
    rm -f "$1"
  fi
  RANDFILE=/tmp/.rnd openssl pkcs12 -export -in "$3" -inkey "$4" -name "$5" -password "pass:$2" -out "$1"
}

if [ "$CA_CRT" ];
then
  echo "Preparing truststore"
  export TRUSTSTORE_PASSWORD=$(< /dev/urandom tr -dc _A-Z-a-z-0-9 | head -c32)
  echo "$CA_CRT" > /tmp/ca.crt
  create_truststore /opt/kafka/truststore.p12 "$TRUSTSTORE_PASSWORD" /tmp/ca.crt ca
  export TRUSTSTORE_PATH=/opt/kafka/truststore.p12
fi

# The original script is truncated at this point; the remainder below is a
# hedged reconstruction based on the surrounding text. It assumes the user
# certificate and key arrive in the USER_CRT and USER_KEY environment
# variables, matching the secrets mounted into the client pods.
if [ "$USER_CRT" ] && [ "$USER_KEY" ];
then
  echo "Preparing keystore"
  export KEYSTORE_PASSWORD=$(< /dev/urandom tr -dc _A-Z-a-z-0-9 | head -c32)
  echo "$USER_CRT" > /tmp/user.crt
  echo "$USER_KEY" > /tmp/user.key
  create_keystore /opt/kafka/keystore.p12 "$KEYSTORE_PASSWORD" /tmp/user.crt /tmp/user.key "$HOSTNAME"
  export KEYSTORE_PATH=/opt/kafka/keystore.p12
fi

# Write the client SSL config file mentioned above (/opt/kafka/config/ssl);
# producers and consumers pass it via --producer.config/--consumer.config.
mkdir -p /opt/kafka/config
cat > /opt/kafka/config/ssl <<EOF
security.protocol=SSL
ssl.truststore.location=$TRUSTSTORE_PATH
ssl.truststore.password=$TRUSTSTORE_PASSWORD
ssl.keystore.location=$KEYSTORE_PATH
ssl.keystore.password=$KEYSTORE_PASSWORD
EOF
Now we can finally exec into our Kafka clients and produce/consume data. To produce messages, we will use the Kafka command line tools for this example.
Attach to client 0:
kubectl -n tls-kafka exec -it kafkaclient-0 -- bash
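Then start a console producer from inside the client pod. This is a sketch that assumes the default Strimzi bootstrap service name (my-tls-cluster-kafka-bootstrap, with the TLS listener on port 9093) and the SSL config file written by ssl_setup.sh:
/opt/kafka/bin/kafka-console-producer.sh --broker-list my-tls-cluster-kafka-bootstrap:9093 --topic test --producer.config /opt/kafka/config/ssl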
Do not type anything at the producer prompt yet. First, create a consumer in another Cloud Shell window: open another tab, go to portal.azure.com, and start another Azure Cloud Shell session.
Attach to client 1:
kubectl -n tls-kafka exec -it kafkaclient-1 -- bash
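Then run a console consumer against the same topic (same assumptions as the producer above):
/opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server my-tls-cluster-kafka-bootstrap:9093 --topic test --consumer.config /opt/kafka/config/ssl --from-beginning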
After you run the Kafka consumer, go back to the Cloud Shell tab where the producer is running and type anything at the prompt. Switch to the consumer tab and watch the messages appear.
The workshop is complete.