
🥈 Cilium IPv6 Networking

and Observability
A Hands-on lab by Isovalent
Version 1.2.0
⚠️ DISCLAIMER

This document serves as a reference guide summarizing the content and steps
of the hands-on lab. It is intended to help users recall the concepts and
practices they learned when taking the lab.

Please note that the scripts, tools, and environment configurations required to
recreate the lab are not included in this document.

For a complete hands-on experience, please refer to the live lab environment
instead.

You can access the lab at https://isovalent.com/labs/cilium-ipv6.

The World of Cilium Map at https://labs-map.isovalent.com also lets you access
all labs and view your progress through the various learning paths.

© 2025 Cisco and/or its affiliates. All rights reserved.


Challenges

🏛️ The Lab Environment

⬢ Install Cilium and Hubble

🚀 Deploying a Dual Stack demo app

🔍 Observe IPv6 flows

❓ Final Quiz

🥋 Exam Challenge



🏛️ THE LAB ENVIRONMENT

🚀 IPv6 on Kubernetes

Kubernetes is not only IPv6-ready but it also provides a transitional
pathway from IPv4 to IPv6.
With Dual Stack, each pod is allocated both an IPv4 and an IPv6 address,
so it can communicate both with IPv6 systems and the legacy apps and
cloud services that use IPv4.
In order to run Dual Stack on Kubernetes, you need a CNI that supports
it: of course, Cilium does.
In this lab, you will learn how to run Kubernetes in Dual Stack using
Cilium, learn about IPv6 networking for Pods and Services in Kubernetes
and verify connectivity with Hubble.
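To make the Dual Stack idea concrete: a pod's two addresses can be told apart by syntax alone, since IPv6 addresses always contain colons. The tiny helper below is purely illustrative, not part of the lab:

```shell
# Illustrative helper: classify an address as IPv4 or IPv6 by its syntax.
# IPv6 addresses always contain colons; dotted-quad IPv4 addresses never do.
ip_family() {
  case "$1" in
    *:*) echo "IPv6" ;;
    *)   echo "IPv4" ;;
  esac
}

ip_family "10.244.1.23"        # prints IPv4
ip_family "fd00:10:244:2::48c" # prints IPv6
```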
Press Start to begin the lab.

🏅 Get a Badge!

By completing this lab, you will be able to earn a badge.


Make sure to finish the lab in order to get your badge!


🏛️ The Lab Environment

🏛 The Kind Cluster

Let's have a look at this lab's Kubernetes environment, based on Kind.

Check its configuration:

yq /etc/kind/nocni_3workers_dual.yaml

The important parameters here are:

• disableDefaultCNI is set to true as Cilium will be deployed instead of the
  default CNI.
• ipFamily is set to dual for Dual Stack (IPv4 and IPv6 support). More details can
  be found in the official Kubernetes docs.
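For reference, a minimal dual-stack Kind configuration with the default CNI disabled might look like the sketch below (field names follow the Kind v1alpha4 API; the lab's actual file is /etc/kind/nocni_3workers_dual.yaml and may differ in detail):

```shell
# Write a minimal dual-stack Kind config: default CNI off, dual IP family,
# one control-plane node and three workers.
cat > /tmp/kind-dual.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true   # Cilium will provide the Pod network
  ipFamily: dual            # allocate both IPv4 and IPv6 addresses
nodes:
  - role: control-plane
  - role: worker
  - role: worker
  - role: worker
EOF
```

A cluster could then be created with `kind create cluster --config /tmp/kind-dual.yaml`.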

🖥 Nodes

In the nodes section, you can see that the cluster consists of four nodes:

• 1 control-plane node running the Kubernetes control plane and etcd
• 3 worker nodes to deploy the applications

🔀 Networking

In the networking section of the configuration file, the default CNI has been
disabled so the cluster won't have any Pod network when it starts. Instead, Cilium is
being deployed to the cluster to provide this functionality.

To see if the Kind cluster is ready, verify that the cluster is properly running by listing
its nodes:


kubectl get nodes

You should see the four nodes appear, all marked as NotReady. This is normal, since
the CNI is disabled, and we will install Cilium in the next step. If you don't see all
nodes, the worker nodes might still be joining the cluster. Relaunch the command
until you can see all four nodes listed.
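Rather than relaunching by hand, a small retry loop can do the waiting for you. This helper is hypothetical (not part of the lab tooling); you would call it as `retry 30 kubectl get nodes`:

```shell
# Hypothetical retry helper: rerun a command until it succeeds or the
# attempt budget is exhausted, pausing one second between attempts.
retry() {
  attempts="$1"; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$attempts" ] && return 1
    sleep 1
  done
}
```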

Now that we have a Kind cluster, let's install Cilium on it!



⬢ INSTALL CILIUM AND HUBBLE

🖥️ The Cilium CLI

The cilium CLI tool can install and update Cilium on a cluster, as well as
activate features such as Hubble and Cluster Mesh.

❯ cilium install
🔮 Auto-detected Kubernetes kind: kind
ℹ️ Using Cilium version 1.17.1
🔮 Auto-detected cluster name: kind-kind
🔮 Auto-detected kube-proxy has been installed


⬢ Install Cilium and Hubble

⬢ The Cilium CLI

Let's start by installing Cilium on the Kind cluster.

We are enabling the IPv6 option with --set ipv6.enabled=true (it’s disabled by
default):

cilium install \
--version 1.17.1 \
--set kubeProxyReplacement=true \
--set k8sServiceHost=kind-control-plane \
--set k8sServicePort=6443 \
--set ipv6.enabled=true

The installation usually takes about a minute.
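If you prefer keeping the settings in a file, the same flags can be expressed as Helm values. This is a sketch; it assumes your cilium CLI version supports the --values flag:

```shell
# Same installation settings as the --set flags above, as a values file.
cat > /tmp/cilium-values.yaml <<'EOF'
kubeProxyReplacement: true
k8sServiceHost: kind-control-plane
k8sServicePort: 6443
ipv6:
  enabled: true
EOF

# Hypothetical equivalent of the install command above:
# cilium install --version 1.17.1 --values /tmp/cilium-values.yaml
```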

You can check that all is running well with:

cilium status --wait

This might take a few minutes. When it's done, verify that Cilium has been
configured with IPv6 enabled:

cilium config view | grep ipv6

The nodes should now be in Ready state. Check with the following command:


kubectl get nodes

With the following command, you can see the PodCIDRs from which IPv4 and IPv6
addresses will be allocated to your Pods.

kubectl describe nodes | grep PodCIDRs

Now that Cilium is functional on our cluster for IPv6 connectivity, let's enable Hubble
for IPv6 observability.

🔭 Activate Hubble

Once Cilium is installed, activate Hubble:

cilium hubble enable --ui

After about a minute, Hubble should be enabled, and you can check that Hubble is
running well (the status should be OK):

cilium status --wait



🚀 DEPLOYING A DUAL STACK DEMO APP

🚀 Deploy app and verify IPv6 connectivity

The demo application includes Pods across multiple nodes and a Service
fronting these Pods.
Let's deploy the demo app and verify IPv6 connectivity and AAAA name
resolution for the Service.


🚀 Deploying a Dual Stack demo app

⬢ Deploy sample applications

Deploy a couple of Pods with the command below. We will run some pings between
the two to verify that traffic is being sent over IPv6:

kubectl apply -f pod1.yaml -f pod2.yaml

Consult the manifests:

yq pod1.yaml pod2.yaml

You will see we are pinning the pods on different nodes (with spec.nodeName set to
kind-worker and kind-worker2) for the purpose of the lab (it's not necessarily a
common practice).
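As a sketch of what such a pinned-Pod manifest can contain (the image and command here are assumptions; the lab ships its own pod1.yaml and pod2.yaml):

```shell
# Minimal Pod manifest pinned to a specific node via spec.nodeName.
cat > /tmp/pod1-sketch.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-worker
spec:
  nodeName: kind-worker          # pin the Pod to this node, bypassing the scheduler
  containers:
    - name: shell
      image: nicolaka/netshoot   # assumed image with ping/curl tooling
      command: ["sleep", "infinity"]
EOF
```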

✅ Test IPv6 Ping Pod to pod

Check that the Pods have been successfully deployed. Notice that each has two IP
addresses allocated – one IPv4 and one IPv6.

kubectl describe pod pod-worker | grep -A 2 IPs


kubectl describe pod pod-worker2 | grep -A 2 IPs

If no IP address is displayed, wait a few more seconds before re-running the
command, as the Pod might still be starting up.

Let's get the IPv6 address of pod-worker2 directly with this command:


IPv6=$(kubectl get pod pod-worker2 -o jsonpath='{.status.podIPs[1].ip}')
echo $IPv6

Let's run an IPv6 ping from pod-worker to pod-worker2. Because the Pods were
pinned to different nodes, it should show successful IPv6 connectivity between Pods
on different nodes.

kubectl exec -it pod-worker -- ping6 -c 5 $IPv6

🏢 Test IPv6 Connectivity Pod to Service

You can now test Pod to Service connectivity.

Let's first review the manifest:

cat -n echo-kube-ipv6.yaml

Notice the ipFamilyPolicy and ipFamilies Service settings in lines 30 to 33,
which are required for IPv6.
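For reference, the relevant Service fields look like the sketch below (the field names come from the Kubernetes Service API; the lab's actual echo-kube-ipv6.yaml may order things differently). Note that the first entry in ipFamilies determines the family of the primary .spec.clusterIP, which is why listing IPv6 first makes a later extraction of .spec.clusterIP return an IPv6 address:

```shell
# Sketch of the dual-stack settings of a Service manifest.
cat > /tmp/svc-sketch.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  ipFamilyPolicy: PreferDualStack  # request both families when available
  ipFamilies:
    - IPv6                         # first entry = primary ClusterIP family
    - IPv4
  selector:
    app: echoserver
  ports:
    - port: 80
EOF
```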

Deploy it:

kubectl apply -f echo-kube-ipv6.yaml

Check the echoserver Service: you should see both IPv4 and IPv6 addresses
allocated.

kubectl describe svc echoserver

Let's test connectivity from a Pod to a Service over IPv6.


First, extract the IPv6 ClusterIP:

ServiceIPv6=$(kubectl get svc echoserver -o jsonpath='{.spec.clusterIP}')
echo $ServiceIPv6

Run a curl to the IPv6 Service IP.

kubectl exec -i -t pod-worker -- curl -6 "http://[$ServiceIPv6]/" | jq

You should see a successful JSON output, beginning with something like this:

{
"host": {
"hostname": "[fd00:10:96::d5d3]",
"ip": "fd00:10:244:2::48c",
"ips": []
},
"http": {
"method": "GET",
"baseUrl": "",
"originalUrl": "/",
"protocol": "http"
},
"request": {
"params": {
"0": "/"
},
"query": {},
"cookies": {},
"body": {}
}
}
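One detail worth noting: IPv6 literals in URLs must be wrapped in square brackets (per RFC 3986), otherwise the colons would be parsed as a port separator. A quick sketch with an assumed example address:

```shell
# Build a URL around an IPv6 literal; the brackets are mandatory.
ServiceIPv6="fd00:10:96::d5d3"   # example value; yours will differ
URL="http://[${ServiceIPv6}]/"
echo "$URL"   # prints http://[fd00:10:96::d5d3]/
```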


Well done - you have validated inter-node IPv6 connectivity with ICMPv6 and
pod-to-Service IPv6 connectivity over HTTP.

🪪 Verify IPv6 DNS

So far we have only operated with IP addresses. But we can also use DNS, as AAAA
records are automatically assigned to Services.

To verify that, let's use nslookup from the pod-worker Pod.

kubectl exec -i -t pod-worker -- nslookup -q=AAAA echoserver.default

You should see a successful DNS resolution for the AAAA record:

Server: 10.96.0.10
Address: 10.96.0.10#53

Name: echoserver.default.svc.cluster.local
Address: fd00:10:96::a645

Let's verify connectivity is successful by running a curl command against the
Service name:

kubectl exec -i -t pod-worker -- curl -6 'http://echoserver.default.svc' | jq

You should see an output similar to the one in the previous task.

Well done - you have validated:

• inter-node IPv6 connectivity with ICMPv6,
• pod-to-Service IPv6 connectivity over HTTP,
• DNS resolution for AAAA records.


In the next challenge, we will leverage Hubble to understand these flows.



🔍 OBSERVE IPV6 FLOWS

🏛 Visualizing the Architecture

We have now deployed a demo application. How can we visualize the application
architecture and how the components talk to each other?


🔍 Observe IPv6 flows

👓 Observing Flows

The hubble CLI connects to the Hubble Relay component in the cluster and
retrieves logs called "Flows". This command line tool then enables you to visualize
and filter the flows.

With the hubble CLI, you will be able to see a list of logs, each with:

• a timestamp
• a source pod, along with its namespace, port, and Cilium identity
• the direction of the flow (->, <-, or at times <> if the direction could not be
determined)
• a destination pod, along with its namespace, port, and Cilium identity
• a trace observation point (e.g. to-endpoint, to-stack, to-overlay)
• a verdict (e.g. FORWARDED or DROPPED)
• a protocol (e.g. UDP, TCP), optionally with flags
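These fields map directly onto positions in the one-line output, so simple text tools can slice them. The flow record below is hand-written for illustration (the field positions hold for this compact format, not for -o dict or -o json):

```shell
# Pull the verdict (10th whitespace-separated field) out of a sample flow line.
line='Nov 24 10:30:13.756: default/pod-worker (ID:1067) -> default/pod-worker2 (ID:32270) to-endpoint FORWARDED (ICMPv6 EchoRequest)'
verdict=$(echo "$line" | awk '{print $10}')
echo "$verdict"   # prints FORWARDED
```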

🔭 IPv6 Observability

In >_ Terminal 1 , enable Hubble Port Forwarding to visualize these flows:

cilium hubble port-forward &

Let's head over to >_ Terminal 2 to run an IPv6 ping from pod-worker to
pod-worker2.

IPv6=$(kubectl get pod pod-worker2 -o jsonpath='{.status.podIPs[1].ip}')
kubectl exec -it pod-worker -- ping -c 5 $IPv6


Head back to >_ Terminal 1 and execute the hubble observe command to monitor
the traffic flows:

hubble observe --ipv6 --from-pod pod-worker

You might also see flows generated in the previous task but the most recent ones
should be the ICMPv6 flows, such as:

Nov 24 10:30:13.756: default/pod-worker (ID:1067) -> default/pod-worker2 (ID:32270) to-endpoint FORWARDED (ICMPv6 EchoRequest)

Let's now print the node where the Pods are running with the --print-node-name flag:

hubble observe --ipv6 --from-pod pod-worker --print-node-name

By default, Hubble translates IP addresses to logical names such as the Pod name or
FQDN. You can disable this if you want to see the raw source and destination IPv6
addresses:

hubble observe --ipv6 --from-pod pod-worker \
  -o dict \
  --ip-translation=false \
  --protocol ICMPv6

Head back to >_ Terminal 2 and run the curl to the IPv6 Service command again:


ServiceIPv6=$(kubectl get svc echoserver -o jsonpath='{.spec.clusterIP}')
echo $ServiceIPv6
kubectl exec -i -t pod-worker -- curl -6 "http://[$ServiceIPv6]/" | jq

In >_ Terminal 1 , you will now see HTTP (with 80 as the DESTINATION port and TCP
flags in the SUMMARY) and ICMPv6 flows:

hubble observe --ipv6 --from-pod pod-worker -o dict --ip-translation=false

You should see logs such as:

------------
TIMESTAMP: Nov 24 12:47:22.293
SOURCE: [fd00:10:244:2::e29d]:39200
DESTINATION: [fd00:10:244:3::91b]:80
TYPE: to-endpoint
VERDICT: FORWARDED
SUMMARY: TCP Flags: ACK

If you just want to see your ping messages, you can simply filter based on the
protocol with the flag --protocol ICMPv6:

hubble observe --ipv6 --from-pod pod-worker -o dict --ip-translation=false --protocol ICMPv6

And that’s it! You can now visualize IPv6 flows in Kubernetes, and hopefully you can
see how running IPv6 on Kubernetes does not need to be an operational nightmare
if you have the right tools in place.


Let's conclude with a short quiz and an exam challenge to verify what we've
learned. On completing the exam, you will be awarded a badge.



❓ FINAL QUIZ

❓ Final Quiz

Select all answers that apply.

Dual-stack IPv4/IPv6 networking reached General Availability in Kubernetes 1.23.

Hubble supports both IPv4 and IPv6 flows.

IPv6 is enabled by default with Cilium.

The DNS AAAA record matches a domain name with an IPv4 address.



🥋 EXAM CHALLENGE

🏆 Final Exam Challenge

This last challenge is an exam that will allow you to earn a badge.
After you complete the exam, you will receive an email from Credly with
a link to accept your badge. Make sure to finish the lab!


🥋 Exam Challenge

📝 Exam Instructions

For this practical exam, you will need to:

1. Deploy a Pod based on the nginx image and verify that it has an IPv6 address
allocated. Make sure the Pod is called my-nginx.
2. Expose the nginx app with a NodePort Service. You can use the pre-populated
YAML file service-challenge.yaml as a starting point. The file is located in
the /exam/ folder.
3. Verify with curl that access to the nginx server over the Node IPv6 address is
successful. Use TCP and port 80 to access this server.

Notes:

• For the first task, you might want to use a command such as
kubectl run to deploy the Pod.
• For the second task, you can use the </> Editor tab. You can also
look at the service created in a previous task for inspiration.
• You may need to use commands such as kubectl describe svc
and kubectl describe nodes during this challenge.
• The exam will be completed if the command curl http://[$NODE]:$PORT is
  successful (with $NODE the IPv6 address of any of the nodes and $PORT the
  NodePort).
• The NodePort is within the 30000-32767 range.
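Without giving the exam away, a NodePort Service for it could take roughly this shape (names and labels here are assumptions; the pre-populated /exam/service-challenge.yaml is the real starting point):

```shell
# Sketch of a dual-stack NodePort Service fronting the my-nginx Pod.
cat > /tmp/service-challenge-sketch.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  type: NodePort                   # exposes the Service on a port of every node
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
    - IPv6
    - IPv4
  selector:
    run: my-nginx                  # kubectl run labels the Pod run=<name>
  ports:
    - port: 80
      targetPort: 80
EOF
```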

Good luck!

Badge

Thanks for taking part in this Dual Stack IPv4/IPv6 lab, and for learning something
about the capabilities of Cilium.

Don't forget to finish the challenge now to receive your badge!
