Cilium IPv6 Notes
and Observability
A Hands-on lab by Isovalent
Version 1.2.0
⚠️ DISCLAIMER
This document serves as a reference guide summarizing the content and steps
of the hands-on lab. It is intended to help users recall the concepts and
practices they learned when taking the lab.
Please note that the scripts, tools, and environment configurations required to
recreate the lab are not included in this document.
For a complete hands-on experience, please refer to the live lab environment
instead.
❓ Final Quiz
🥋 Exam Challenge
🚀 IPv6 on Kubernetes
🏅 Get a Badge!
yq /etc/kind/nocni_3workers_dual.yaml
🖥 Nodes
In the nodes section, you can see that the cluster consists of four nodes:
🔀 Networking
In the networking section of the configuration file, the default CNI has been
disabled so the cluster won't have any Pod network when it starts. Instead, Cilium is
being deployed to the cluster to provide this functionality.
Verify that the Kind cluster is running properly by listing its nodes:
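For example:
kubectl get nodes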
You should see the four nodes appear, all marked as NotReady. This is normal, since
the CNI is disabled; we will install Cilium in the next step. If you don't see all the
nodes, the worker nodes might still be joining the cluster. Re-run the command
until you can see all four nodes listed.
The cilium CLI tool can install and update Cilium on a cluster, as well as
activate features such as Hubble and Cluster Mesh.
❯ cilium install
🔮 Auto-detected Kubernetes kind: kind
ℹ️ Using Cilium version 1.17.1
🔮 Auto-detected cluster name: kind-kind
🔮 Auto-detected kube-proxy has been installed
We are enabling the IPv6 option with --set ipv6.enabled=true (it’s disabled by
default):
cilium install \
--version 1.17.1 \
--set kubeProxyReplacement=true \
--set k8sServiceHost=kind-control-plane \
--set k8sServicePort=6443 \
--set ipv6.enabled=true
This might take a few minutes. When it's done, verify that Cilium has been
configured with IPv6 enabled:
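One way to check is to look for the enable-ipv6 key in the Cilium configuration (a sketch; the lab may use a different check):
cilium config view | grep enable-ipv6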
The nodes should now be in Ready state. Check with the following command:
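For example:
kubectl get nodes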
With the following command, you can see the PodCIDRs from which IPv4 and IPv6
addresses will be allocated to your Pods.
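For example, assuming the PodCIDRs are allocated through the Kubernetes node spec (as kind/kubeadm does):
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDRs}{"\n"}{end}'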
Now that Cilium is functional on our cluster for IPv6 connectivity, let's enable Hubble
for IPv6 observability.
🔭 Activate Hubble
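Hubble can be enabled with the cilium CLI; a minimal form (the lab may pass extra options, such as enabling the UI) is:
cilium hubble enable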
After about a minute, Hubble should be enabled, and you can check that Hubble is
running well (the status should be OK):
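For example, cilium status reports Hubble Relay as OK once it is ready (the lab may instead use hubble status after a port-forward):
cilium status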
The demo application includes Pods across multiple nodes and a Service
fronting these Pods.
Let's deploy the demo app and verify IPv6 connectivity and AAAA name
resolution for the Service.
Deploy a couple of Pods using the manifests below. We will run some pings between
the two to verify that traffic is being sent over IPv6:
yq pod1.yaml pod2.yaml
You will see that we are pinning the Pods to different nodes (with spec.nodeName set to
kind-worker and kind-worker2) for the purposes of the lab (it's not necessarily a
common practice).
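Applying both manifests with kubectl would look like this:
kubectl apply -f pod1.yaml -f pod2.yaml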
Check that the Pods have been successfully deployed. Notice that each Pod has two
IP addresses allocated: one IPv4 and one IPv6.
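One way to see both address families is to read status.podIPs, for example:
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIPs[*].ip}{"\n"}{end}'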
Let's directly get the IPv6 address from pod-worker2 with this command.
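For example (the position of the IPv6 entry in status.podIPs is an assumption):
# assuming the IPv6 address is the second entry in status.podIPs
IPV6=$(kubectl get pod pod-worker2 -o jsonpath='{.status.podIPs[1].ip}')
echo $IPV6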
Let's run an IPv6 ping from pod-worker to pod-worker2. Because the Pods were
pinned to different nodes, it should show successful IPv6 connectivity between Pods
on different nodes.
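A sketch, reusing the IPV6 variable captured above (some images need ping -6 or ping6 instead of plain ping):
kubectl exec pod-worker -- ping -c 4 $IPV6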
cat -n echo-kube-ipv6.yaml
Deploy it:
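For example:
kubectl apply -f echo-kube-ipv6.yaml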
Check the echoserver Service: you should see both IPv4 and IPv6 addresses
allocated.
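One possible way to list the allocated ClusterIPs and then query the Service over IPv6 from pod-worker (assuming the Pod image ships curl):
kubectl get svc echoserver -o jsonpath='{.spec.clusterIPs}'
# assuming the IPv6 ClusterIP is the second entry in spec.clusterIPs
SVC6=$(kubectl get svc echoserver -o jsonpath='{.spec.clusterIPs[1]}')
kubectl exec pod-worker -- curl -s -g "http://[$SVC6]:80/"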
You should see a successful JSON output, beginning with something like this:
{
"host": {
"hostname": "[fd00:10:96::d5d3]",
"ip": "fd00:10:244:2::48c",
"ips": []
},
"http": {
"method": "GET",
"baseUrl": "",
"originalUrl": "/",
"protocol": "http"
},
"request": {
"params": {
"0": "/"
},
"query": {},
"cookies": {},
"body": {}
}
}
Well done - you have validated inter-node IPv6 connectivity with ICMPv6 and
pod-to-Service IPv6 connectivity over HTTP.
So far we have only operated with IP addresses, but we can also use DNS, since AAAA
records are automatically assigned to Services.
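A sketch of the lookup from one of the Pods (assuming the image ships nslookup):
kubectl exec pod-worker -- nslookup -type=AAAA echoserver.default.svc.cluster.local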
You should see a successful DNS resolution for the AAAA record:
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: echoserver.default.svc.cluster.local
Address: fd00:10:96::a645
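A possible form of the request, this time using the Service DNS name and forcing IPv6 with the -6 flag of curl:
kubectl exec pod-worker -- curl -s -6 http://echoserver.default.svc.cluster.local/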
You should see an output similar to the one in the previous task.
👓 Observing Flows
The hubble CLI connects to the Hubble Relay component in the cluster and
retrieves logs called "Flows". This command line tool then enables you to visualize
and filter the flows.
With the hubble CLI, you will be able to see a list of flows, each with:
• a timestamp
• a source pod, along with its namespace, port, and Cilium identity
• the direction of the flow (->, <-, or at times <> if the direction could not be
determined)
• a destination pod, along with its namespace, port, and Cilium identity
• a trace observation point (e.g. to-endpoint, to-stack, to-overlay)
• a verdict (e.g. FORWARDED or DROPPED)
• a protocol (e.g. UDP, TCP), optionally with flags
🔭 IPv6 Observability
Let's head over to >_ Terminal 2 to run an IPv6 ping from pod-worker to pod-
worker2.
Head back to >_ Terminal 1 and execute the hubble observe command to monitor
the traffic flows:
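For example (this assumes Hubble Relay is reachable, e.g. through cilium hubble port-forward):
hubble observe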
You might also see flows generated in the previous task, but the most recent ones
should be the ICMPv6 flows from the ping.
Let's now print the name of the node where each Pod is running, using the --print-node-name flag:
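For example:
hubble observe --print-node-name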
By default, Hubble translates IP addresses to logical names such as the Pod name or
FQDN. You can disable this translation if you want to see the source and destination IPv6 addresses instead:
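For example, with the --ip-translation flag:
hubble observe --ip-translation=false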
Head back to >_ Terminal 2 and run the curl command to the IPv6 Service again:
In >_ Terminal 1 , you will now see HTTP (with 80 as the DESTINATION port and TCP
flags in the SUMMARY) and ICMPv6 flows:
------------
TIMESTAMP: Nov 24 12:47:22.293
SOURCE: [fd00:10:244:2::e29d]:39200
DESTINATION: [fd00:10:244:3::91b]:80
TYPE: to-endpoint
VERDICT: FORWARDED
SUMMARY: TCP Flags: ACK
If you just want to see your ping messages, you can simply filter based on the
protocol with the flag --protocol ICMPv6:
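For example:
hubble observe --protocol ICMPv6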
And that's it! You can now visualize IPv6 flows in Kubernetes, and hopefully you can
see that running IPv6 on Kubernetes does not need to be an operational nightmare
if you have the right tools in place.
Let's conclude with a short quiz and a lab to verify our learnings. On completion of
the lab, you will be awarded a badge.
❓ Final Quiz
The DNS AAAA record matches a domain name with an IPv4 address.
This last challenge is an exam that will allow you to earn a badge.
After you complete the exam, you will receive an email from Credly with
a link to accept your badge. Make sure to finish the lab!
🥋 Exam Challenge
📝 Exam Instructions
1. Deploy a Pod based on the nginx image and verify that it has an IPv6 address
allocated. Make sure the Pod is called my-nginx.
2. Expose the nginx app with a NodePort Service. You can use the pre-populated
YAML file service-challenge.yaml as a starting point. The file is located in
the /exam/ folder.
3. Verify with curl that access to the nginx server over a Node's IPv6 address is
successful. Use TCP and port 80 to access this server.
Notes:
• For the first task, you might want to use a command such as
kubectl run to deploy the Pod.
• For the second task, you can use the </> Editor tab. You can also
look at the service created in a previous task for inspiration.
• You may need to use commands such as kubectl describe svc
and kubectl describe nodes during this challenge.
• The exam is completed when the command curl http://[$NODE]:$PORT
succeeds (where $NODE is the IPv6 address of any of the nodes and $PORT is
the NodePort).
• The NodePort is within the 30000-32767 range.
Good luck!
🏅 Get a Badge!
Thanks for taking part in this Dual Stack IPv4/IPv6 lab, and for learning something
about the capabilities of Cilium.