
Thursday, August 6, 2020, 09:23

PUBLISH - SUBSCRIBE WORKSHOP

Step 1: Create a namespace

Let’s create a namespace for our experiment and name it tls-kafka

kubectl create namespace tls-kafka

Step 2: Install the Strimzi Operator

The first thing we must do is install the Strimzi Cluster Operator, which is responsible for creating the Kafka broker pods and ZooKeeper pods in our cluster. You can install the latest released version with kubectl, or use Helm to achieve the same.

curl -L https://github.com/strimzi/strimzi-kafka-operator/releases/download/0.15.0/strimzi-cluster-operator-0.15.0.yaml \
  | sed 's/namespace: .*/namespace: tls-kafka/' \
  | kubectl apply -f - -n tls-kafka
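The sed step in the pipeline above rewrites every `namespace:` field in the downloaded operator manifest so that its RBAC objects target our namespace. A quick local illustration of that substitution, using a one-line stub file standing in for the real manifest (no cluster needed):

```shell
# Stub manifest line standing in for the downloaded Strimzi operator YAML
printf 'namespace: myproject\n' > /tmp/stub.yaml
# Same substitution as in the install pipeline
sed 's/namespace: .*/namespace: tls-kafka/' /tmp/stub.yaml   # → namespace: tls-kafka
```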

Before we proceed, let’s verify that our operator was created successfully:

kubectl get pods -n tls-kafka

Step 3: Create the Kafka Cluster

Now that our operator is running, we can create a new Custom Resource describing our Kafka cluster, specifying the Kafka version, the number of brokers, TLS/SSL on the brokers, and other configuration, using the following YAML file:

code my-tls-cluster.yaml

YAML:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-tls-cluster
spec:
  kafka:
    version: 2.2.1
    replicas: 3
    listeners:
      tls:
        authentication:
          type: tls
    config:
      log.message.format.version: "2.2"
      ssl.client.auth: "required"
    tlsSidecar:
      resources:
        requests:
          cpu: 200m
          memory: 64Mi
        limits:
          cpu: 500m
          memory: 128Mi
    storage:
      type: jbod
      volumes:
        - id: 0
          class: managed-premium
          type: persistent-claim
          size: 64Gi
          deleteClaim: false
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      class: managed-premium
      size: 32Gi
      deleteClaim: false
    tlsSidecar:
      resources:
        requests:
          cpu: 200m
          memory: 64Mi
        limits:
          cpu: 500m
          memory: 128Mi
  entityOperator:
    topicOperator: {}
    userOperator: {}

Create the cluster:


kubectl apply -f my-tls-cluster.yaml -n tls-kafka

Step 4: Verify the Kafka Cluster

kubectl get pods -n tls-kafka

It’s good to know that the ZooKeeper services are secured with encryption and authentication, and are not intended to be used by external applications that are not part of Strimzi. If you really need to access ZooKeeper, for example with Kafka command line tools such as kafka-topics.sh, you can connect to the local end of the TLS tunnel to ZooKeeper on localhost:2181. Finally, note that each ZooKeeper and Kafka pod typically has two containers: one for Kafka/ZooKeeper and a TLS sidecar.

Step 5: Verify the Certificate Bundle

Since we enabled TLS authentication for the brokers, our operator has created a CA certificate for them and stored it as a Kubernetes Secret.
The Secret is named <cluster-name>-cluster-ca-cert, so in our case it is my-tls-cluster-cluster-ca-cert.
Its data field contains the ca.crt value, which we will use later.

kubectl get secret my-tls-cluster-cluster-ca-cert -n tls-kafka -o yaml
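Secret values appear base64-encoded in the data field, so the certificate must be decoded before use. Below is a minimal sketch of the decode step; the stand-in value is hypothetical, and the commented kubectl line shows how you would pull the real ca.crt from the cluster:

```shell
# With a live cluster, you would extract and decode the CA certificate like this:
#   kubectl get secret my-tls-cluster-cluster-ca-cert -n tls-kafka \
#     -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
# Local demonstration of the base64 round trip, using a stand-in value:
encoded=$(printf '%s' '-----BEGIN CERTIFICATE-----' | base64)
printf '%s' "$encoded" | base64 -d   # prints the original stand-in value
```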

Step 6: Create some Topics

As it turns out, a Kafka topic is modeled as a Custom Resource in Strimzi, so you can create topics either by using kubectl or by using any of the Kafka utilities. Here we will create KafkaTopic Custom Resources and apply them via kubectl.

code kafka-topics.yaml

YAML:

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: test
  labels:
    strimzi.io/cluster: my-tls-cluster
spec:
  partitions: 6
  replicas: 3
  config:
    retention.ms: 7200000
    segment.bytes: 1073741824
---
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: test-one-rep
  labels:
    strimzi.io/cluster: my-tls-cluster
spec:
  partitions: 6
  replicas: 1
  config:
    retention.ms: 7200000
    segment.bytes: 1073741824

Create the topics:


kubectl apply -f kafka-topics.yaml -n tls-kafka

Step 7: Validate the Topics

One can specify the topic name, the number of partitions and replicas, and any topic configuration in the Custom Resource object. Note that even when you create a topic using the Kafka command line tools, the Strimzi Topic Operator will notice it and create a matching KafkaTopic resource, so when you list topics with kubectl you will get the correct, up-to-date list.

kubectl get kafkatopics -n tls-kafka
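For reference, the numeric config values in the KafkaTopic specs above are easier to read once converted: retention.ms 7200000 is two hours, and segment.bytes 1073741824 is 1 GiB.

```shell
# retention.ms from the KafkaTopic spec, converted to hours
echo $(( 7200000 / 1000 / 60 / 60 ))          # 2
# segment.bytes from the KafkaTopic spec, converted to GiB
echo $(( 1073741824 / 1024 / 1024 / 1024 ))   # 1
```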

Step 8: Create Users

Like topics, a user is also a Custom Resource and can be created in a similar fashion. Strimzi supports the default authorization mechanism offered by Kafka: Access Control Lists (ACLs) for users on resources such as Topics, Clusters, ConsumerGroups, and TransactionalIDs. In simpler terms, you can control which users are able to perform which operations on these resources. To configure this, set spec.authorization.type to simple, denoting the SimpleAclAuthorizer Kafka plugin.

code kafka-users.yaml

YAML:
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-tls-cluster
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: test
          patternType: literal
        operation: All
      - resource:
          type: topic
          name: test-one-rep
          patternType: literal
        operation: All

Create the user:


In this example, we define a user my-user with authentication set to tls (denoting TLS client authentication) and authorization set to simple (denoting simple ACL authorization). The user my-user has complete access to perform any operation on the topics test and test-one-rep. We create this user Custom Resource in our namespace using kubectl:

kubectl apply -f kafka-users.yaml -n tls-kafka

The User Operator is responsible for creating the user and its related ACLs, and it generates a Secret with the same name as the user; in our case it generates the Secret my-user. This Secret contains user.crt and user.key, which our Kafka clients will use to connect to the brokers.

Step 9: Create Kafka Clients

In this example, we do not have to expose the brokers to the outside world, as our clients (both producers and consumers) are also pods/deployments running within the AKS cluster, so we can use the default generated bootstrap service to connect to the brokers. But first, let’s create 3 Kafka clients and pass them the cluster CA certificate and the my-user credentials so they can connect to the brokers successfully.
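Inside the cluster, the TLS bootstrap address follows the Strimzi naming pattern <cluster-name>-kafka-bootstrap.<namespace>:9093, where 9093 is the TLS listener port. A small sketch of how that address is assembled:

```shell
# Assemble the in-cluster bootstrap address from the cluster name and namespace
cluster=my-tls-cluster
namespace=tls-kafka
echo "${cluster}-kafka-bootstrap.${namespace}:9093"   # my-tls-cluster-kafka-bootstrap.tls-kafka:9093
```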

code kafka-client.yaml

YAML:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafkaclient
spec:
  replicas: 3
  serviceName: kafkaclient
  selector:
    matchLabels:
      app: kafkaclient
  template:
    metadata:
      labels:
        app: kafkaclient
    spec:
      containers:
        - name: kafka
          image: solsson/kafka:0.11.0.0
          command:
            - sh
            - -c
            - exec tail -f /dev/null
          env:
            - name: CA_CRT
              valueFrom:
                secretKeyRef:
                  name: my-tls-cluster-cluster-ca-cert
                  key: ca.crt
            - name: USER_CRT
              valueFrom:
                secretKeyRef:
                  name: my-user
                  key: user.crt
            - name: USER_KEY
              valueFrom:
                secretKeyRef:
                  name: my-user
                  key: user.key

Deploy Kafka Clients:


This will deploy 3 replicas of our StatefulSet, and every pod is injected with environment variables from the Kubernetes Secrets for the cluster CA certificate and the user credentials. These are essential to successfully connect to the brokers.

kubectl apply -f kafka-client.yaml -n tls-kafka

Step 10: Validate Kafka Clients

Each pod in our StatefulSet should now have the cluster CA certificate and the my-user credentials injected as environment variables. Verify that the three kafkaclient pods are up and running:

kubectl get pods -n tls-kafka

Step 11: Produce & Consume

Before we can start sending and consuming messages, we need to configure the truststore and keystore of the Kafka clients created in the last step to use the secrets passed to them via environment variables. To do so, we use the following script. It configures our truststore and keystore and also creates a config file at /opt/kafka/config/ssl-config.properties that we will pass to our producers and consumers to connect with.

code ssl_setup.sh

SSL SETUP:
#!/bin/bash
set +x

# Parameters:
# $1: Path to the new truststore
# $2: Truststore password
# $3: Public key to be imported
# $4: Alias of the certificate
function create_truststore {
  if [ -f "$1" ]; then
    echo "Truststore exists so removing it since we are using a new random password."
    rm -f "$1"
  fi
  keytool -keystore "$1" -storepass "$2" -noprompt -alias "$4" -import -file "$3" -storetype PKCS12
}

# Parameters:
# $1: Path to the new keystore
# $2: Keystore password
# $3: Public key to be imported
# $4: Private key to be imported
# $5: Alias of the certificate
function create_keystore {
  if ! hash openssl; then
    if hash apt-get; then
      apt-get update && apt-get install openssl -y
    else
      echo "FAILED TO CREATE KEYSTORE!"
      exit 1
    fi
  fi
  if [ -f "$1" ]; then
    echo "Keystore exists so removing it since we are using a new random password."
    rm -f "$1"
  fi
  RANDFILE=/tmp/.rnd openssl pkcs12 -export -in "$3" -inkey "$4" -name "$HOSTNAME" -password "pass:$2" -out "$1"
}

if [ "$CA_CRT" ]; then
  echo "Preparing truststore"
  export TRUSTSTORE_PASSWORD=$(< /dev/urandom tr -dc _A-Z-a-z-0-9 | head -c32)
  echo "$CA_CRT" > /tmp/ca.crt
  create_truststore /opt/kafka/truststore.p12 "$TRUSTSTORE_PASSWORD" /tmp/ca.crt ca
  export TRUSTSTORE_PATH=/opt/kafka/truststore.p12
fi

if [[ "$USER_CRT" && "$USER_KEY" ]]; then
  echo "Preparing keystore"
  export KEYSTORE_PASSWORD=$(< /dev/urandom tr -dc _A-Z-a-z-0-9 | head -c32)
  echo "$USER_CRT" > /tmp/user.crt
  echo "$USER_KEY" > /tmp/user.key
  create_keystore /opt/kafka/keystore.p12 "$KEYSTORE_PASSWORD" /tmp/user.crt /tmp/user.key /tmp/ca.crt "$HOSTNAME"
  export KEYSTORE_PATH=/opt/kafka/keystore.p12
fi

cat << EOF > /opt/kafka/config/ssl-config.properties
security.protocol=SSL
ssl.truststore.location=$TRUSTSTORE_PATH
ssl.truststore.password=$TRUSTSTORE_PASSWORD
ssl.truststore.type=PKCS12
ssl.keystore.location=$KEYSTORE_PATH
ssl.keystore.password=$KEYSTORE_PASSWORD
ssl.keystore.type=PKCS12
ssl.key.password=$KEYSTORE_PASSWORD
EOF
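As a local illustration of what create_keystore produces (no cluster required; all file names below are throwaway examples), you can generate a self-signed key pair with openssl and bundle it into a PKCS12 keystore the same way the script does with user.crt and user.key:

```shell
# Throwaway key + certificate standing in for user.key / user.crt
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
  -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null
# Bundle them into a PKCS12 keystore, as create_keystore does
openssl pkcs12 -export -in /tmp/demo.crt -inkey /tmp/demo.key \
  -name demo -password pass:changeit -out /tmp/demo-keystore.p12
# Verify the keystore can be read back with the same password
openssl pkcs12 -in /tmp/demo-keystore.p12 -passin pass:changeit -noout && echo OK
```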

Step 12: Create Trust Store and Key Store for Mutual TLS

Run Script Command:

for i in $(seq 0 2); do
  kubectl -n tls-kafka cp ./ssl_setup.sh "kafkaclient-$i:/opt/kafka/ssl_setup.sh"
  kubectl -n tls-kafka exec -it "kafkaclient-$i" -- bash /opt/kafka/ssl_setup.sh
done

Step 13: Create Producer

Now we can finally exec into our Kafka clients and produce/consume data. To produce messages, we will use the Kafka command line tools for this example.

Attach to client 0:
kubectl -n tls-kafka exec -it kafkaclient-0 -- bash

Run Kafka Producer:


bin/kafka-console-producer.sh --broker-list my-tls-cluster-kafka-bootstrap.tls-kafka:9093 \
  --topic test --producer.config /opt/kafka/config/ssl-config.properties

Do not write anything at the prompt yet. First, create a consumer in another Cloud Shell window: open another browser tab, go to portal.azure.com, and start another Azure Cloud Shell session.

Attach to client 1:
kubectl -n tls-kafka exec -it kafkaclient-1 -- bash

Run Kafka Consumer:


bin/kafka-console-consumer.sh --bootstrap-server my-tls-cluster-kafka-bootstrap.tls-kafka:9093 \
  --topic test --from-beginning --consumer.config /opt/kafka/config/ssl-config.properties

A couple of things to note here: we pass the ssl-config.properties file containing the required SSL credentials to the producer and consumer using the --producer.config and --consumer.config options respectively, and we use the ClusterIP Kubernetes service my-tls-cluster-kafka-bootstrap to connect to our brokers.

After you run the Kafka consumer, go to the Cloud Shell tab where the producer is running and type anything at the prompt. Switch to the consumer tab and see the messages reflected.

Workshop is complete.

