Exploring Kubernetes (Online Learning Version)
k8s
kubernetes
Copyright © Dell Inc. All Rights Reserved.
Why Kubernetes?
Challenges of running applications directly on individual hosts: compute failures, storage failures, operating system failures, scalability challenges, and inconsistent configurations.
Kubernetes provides:
✓ Reliability
✓ Resiliency
✓ Scalability
✓ Self-Healing
The underlying network and storage are transparent to us.
Language Runtime
App
App Libraries and
Dependencies
Kubernetes
Control Plane
We deploy our containerised
apps here
Node
Node
Node
Worker Data Plane
Node
Kubernetes Cluster
• Data Plane
– runs the actual applications we deploy
– comprises one or more Worker Nodes
Control Plane
kubectl
Node
Node
Node
Worker Data Plane
Node
$ kubectl command
https://www.docker.com/products/docker-desktop
On Windows
On Linux
$ k3d version
https://minikube.sigs.k8s.io
https://killercoda.com/playgrounds/scenario/kubernetes
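As an example, assuming k3d is the tool chosen (consistent with the k3d-mycluster node names that appear later), a local multi-node cluster could be created along these lines:
$ k3d cluster create mycluster --agents 3
$ kubectl get nodes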
$ kubectl version
$ kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Control Plane
kubectl
Node
Node
Node
Worker Data Plane
Node
Worker Node
Bare Metal
Control Plane
kubectl
Node
Node
Container
Node Runtime Data Plane
Kubelet Kube-Proxy
Worker Node
Container Runtime
Kubelet Kube-Proxy
Worker Node
Container Runtime
Kubelet Kube-Proxy
Worker Node
Container Runtime
Kubelet Kube-Proxy
Worker Node
pod/web created
kubectl
Pulls the nginx image from Docker
Hub and runs the container web
on the worker node
web
Container Runtime
Instructs Container Runtime
to run the nginx image
Kubelet Kube-Proxy
Worker Node
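The pod was presumably created imperatively with a command along these lines, which produces the pod/web created message shown above:
$ kubectl run web --image=nginx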
ContainerCreating Unknown
Running
Succeeded Failed
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web 1/1 Running 0 25h 10.42.0.10 k3d-mycluster-server-0 <none> <none>
We only have one node in our cluster. This node has the roles of control-plane and master. At the same time, it is also functioning as a worker node, where the pod web is deployed.
Control Plane
Worker
Node
Single Node
$ kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
node/k3d-mycluster-server-0 tainted
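The tainted message above is what kubectl taint prints. A command of this shape keeps regular workloads off the server node (the taint key and effect here are illustrative assumptions, not necessarily the exact ones used on the slide):
$ kubectl taint nodes k3d-mycluster-server-0 node-role.kubernetes.io/control-plane=:NoSchedule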
Control Plane
Controller Manager
Scheduler
API-Server
Node
Node
Node Data Plane
Worker
Node
• Watches for new pods with no assigned worker nodes and assigns
them to the target node for execution
• Its primary role is to ensure that pods are distributed across the
available nodes according to specified requirements and constraints
Controller Manager
Scheduler
API-Server
etcd
Control Plane
kubectl
Node
Node
Container
Node Runtime
Kubelet Kube-Proxy
Worker Node
Let’s deploy a pod on our new cluster again
pod/web created
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web 1/1 Running 0 83s 10.42.1.7 k3d-mycluster-agent-1 <none> <none>
etcd
Control Plane
Container
Kubelet Kube-Proxy
Worker Node
$ alias k=kubectl
We will be typing `kubectl` a lot. Setting up this short alias `k` can help us to save time on typing.
Throughout the slides, we will still be using the full name of kubectl for clarity and consistency.
web
Kubernetes
This is transparent to us
web
Kubernetes
✓ High Availability
✓ Reliability
✓ Self-Healing
deployment.apps/web-deploy created
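A deployment like this can be created imperatively; the created message above is what a command along these lines would print:
$ kubectl create deployment web-deploy --image=nginx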
Controller Manager
Sends request to
API-Server
Scheduler
API-Server
Monitors the deployment to have 1 pod running
etcd
Control Plane
Container
Deletes pod
Container Runtime
Kubelet Kube-Proxy
Worker Node
What is a Deployment?
web-deploy
Kubernetes
This is transparent to us
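The scaled message below would typically come from a command such as this (assuming the deployment was scaled to the 3 replicas shown in the listing that follows):
$ kubectl scale deployment web-deploy --replicas=3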
deployment.apps/web-deploy scaled
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-deploy-67cc48865-2xhfz 1/1 Running 0 73s 10.42.0.8 k3d-mycluster-agent-0 <none> <none>
web-deploy-67cc48865-m6zm7 1/1 Running 0 13s 10.42.0.9 k3d-mycluster-agent-1 <none> <none>
web-deploy-67cc48865-pv9p9 1/1 Running 0 13s 10.42.1.6 k3d-mycluster-agent-2 <none> <none>
Control Plane
Worker Node-0   Worker Node-1   Worker Node-2
The Deployment is designed to manage and ensure the desired number of replicas of our pods across multiple worker nodes in the cluster
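The warning and deletion message below are typical of a force delete, presumably issued along these lines (pod name taken from the message that follows):
$ kubectl delete pod web-deploy-67cc48865-pv9p9 --force --grace-period=0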
Warning: Immediate deletion does not wait for confirmation that the running resource has been
terminated. The resource may continue to run on the cluster indefinitely.
pod "web-deploy-67cc48865-pv9p9" force deleted
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-deploy-67cc48865-2xhfz 1/1 Running 0 73s 10.42.0.8 k3d-mycluster-agent-0 <none> <none>
web-deploy-67cc48865-m6zm7 1/1 Running 0 13s 10.42.0.9 k3d-mycluster-agent-1 <none> <none>
web-deploy-67cc48865-sp826 1/1 Running 0 13s 10.42.1.7 k3d-mycluster-agent-1 <none> <none>
Control Plane
adjust
Deployment
ReplicaSet
YAML
apiVersion – API version of the Kubernetes resource
kind – Type of the Kubernetes resource, e.g. Pod, ReplicaSet, Deployment, etc.
metadata – Describes the high-level information about the Kubernetes resource, such as its name and labels
spec – Describes the desired state of the resource
YAML
apiVersion: v1
Using core Kubernetes API version 1
kind: Pod
$ vim nginx-pod.yaml
(Press 'i' in vim to enter insert mode)
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx
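The manifest is then applied; the created message on the next line presumably comes from:
$ kubectl apply -f nginx-pod.yaml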
pod/web created
Control Plane
YAML
Container
Pod
Worker Node
This YAML describes a Deployment. There are 3 parts in the .spec section:
● replicas - the desired number of pods
● label selector - pods with the specified label will be managed
● pod template - the spec for the pod

metadata:
  name: web-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
kind: Deployment
metadata:
  name: web-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
The pod labels must match the label selector of the deployment; in our case, it will be app: web
Deployment:
kind: Deployment
metadata:
  name: web-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx

Pod:
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx
$ vim nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
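The manifest is then applied, presumably with:
$ kubectl apply -f nginx-deployment.yaml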
deployment.apps/web-deploy created
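The labeled message below is what kubectl label prints; changing a pod's label to app=db could be done like this (pod name taken from the message that follows):
$ kubectl label pod web-deploy-7869bd98c5-f7t2k app=db --overwrite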
pod/web-deploy-7869bd98c5-f7t2k labeled
The label of this pod is changed to app=db and it is no longer managed by the deployment.
A new pod with the label app=web is created by the replicaset.
ReplicaSet
pod/web-deploy-7869bd98c5-8jw7f labeled
We can filter by multiple labels by separating each label with a comma. Take note that this only supports boolean AND, not boolean OR; i.e. this filter returns pods that have both the tier=frontend and app=web labels.
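For example:
$ kubectl get pods -l tier=frontend,app=web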
Change the replicas from 3 to 5:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
deployment.apps/web-deploy configured
The --watch flag enables continuous monitoring of changes to resources; any changes to the status will be streamed to the console output.
2 additional replicas are in the process of being created.
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"web-deploy","namespace":"default"},"spec":{"replica
s":5,"selector":{"matchLabels":{"app":"web"}},"template":{"metadata":{"labels":{"app":"web"}},"spec":{"containers":[{"image":"nginx"
,"name":"web"}]}}}}
creationTimestamp: "2024-02-14T08:03:21Z"
generation: 2
name: web-deploy
  namespace: default
(The -o yaml flag in the command specifies the output format as YAML)
resourceVersion: "24646"
uid: c888df99-8167-46be-84bd-1fbe1ae44fb3
spec:
progressDeadlineSeconds: 600
replicas: 5
revisionHistoryLimit: 10
selector:
matchLabels:
app: web
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: web
spec:
containers:
- image: nginx
imagePullPolicy: Always
name: web
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 5
conditions:
- lastTransitionTime: "2024-02-14T08:03:21Z"
lastUpdateTime: "2024-02-14T08:03:38Z"
message: ReplicaSet "web-deploy-7869bd98c5" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
- lastTransitionTime: "2024-02-16T00:38:16Z"
lastUpdateTime: "2024-02-16T00:38:16Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
observedGeneration: 2
readyReplicas: 5
replicas: 5
updatedReplicas: 5
apiVersion
kind
metadata
spec – Desired state of the object
YAML
status:
availableReplicas: 5
conditions:
- lastTransitionTime: "2024-02-14T08:03:21Z"
lastUpdateTime: "2024-02-14T08:03:38Z"
message: ReplicaSet "web-deploy-7869bd98c5" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
- lastTransitionTime: "2024-02-16T00:38:16Z"
lastUpdateTime: "2024-02-16T00:38:16Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
observedGeneration: 2
  readyReplicas: 5
  replicas: 5
  updatedReplicas: 5
We see 2 additional replicas have been added to reach a total of 5 replicas, which is the desired state we have specified
Instead of printing the output directly to the screen, we can redirect it to a file, which we can then use for other purposes such as version control, making additional changes, or sharing the configuration.
spec:
progressDeadlineSeconds: 600
replicas: 2
Change the replicas to 2
revisionHistoryLimit: 10
selector:
matchLabels:
app: web
Press Esc, then type :wq! to save the configuration and exit edit mode
Keep in mind that using `kubectl edit` directly in a production environment may not be the best practice, as it can introduce
manual changes and potential human error. In production, it's often recommended to use declarative configuration files
(kubectl apply -f) or infrastructure-as-code practices to manage and version control your Kubernetes configurations
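For reference, the edit shown above would have been opened with a command like:
$ kubectl edit deployment web-deploy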
$ cat mytestdeploy.yaml
• With the imperative method, you specify how to achieve a desired state by running a series of commands in sequence
• With the declarative method, you define the desired end state using a YAML manifest and let Kubernetes figure out how to achieve it
• Labels are key-value pairs used to identify and group objects in the cluster
• Using the declarative method rather than the imperative one is considered best practice, as it lets you manage and version control your Kubernetes configurations
10.0.0.2
App A App B
Host Host
IP Address IP Address
10.0.0.1 10.0.0.2
10.42.0.9
Pod
IP Address
10.42.0.9
Deployment
Client
Pod
app=
web
Pod
? IP Address
10.42.1.9
app=
web
Pod
IP Address
10.42.2.9
Pod
IP Address
Deployment
10.42.0.9
app=
web
10.42.0.1
Client Service
Pod app=
web
Pod
Fixed IP Address
10.42.0.1
IP Address
10.42.1.9
app=
web
Pod
IP Address
10.42.2.9
Remember our Kube-Proxy?
• Keeps the mapping between the virtual IP address of the
services and the pods up to date as the pods come and go
• Kube-Proxy queries the API-Server to learn about new
services in cluster and updates the nodes’ iptables rules
accordingly
Container Container
Pod-1 Pod-2
Container Runtime
Kubelet Kube-Proxy
Worker Node
• Each type is responsible for exposing the service inside and/or outside of the
cluster
app=
web
External Service
Client [ClusterIP]
Pod
app= app=
web web
Pod Pod
Cluster
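The exposed message on the next line is typical of kubectl expose; given the ports used later (8080 forwarding to 80), the command was presumably along these lines:
$ kubectl expose deployment web-deploy --name=web-service --type=ClusterIP --port=8080 --target-port=80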
service/web-service exposed
The type of Service to create
The port number on which the Service will be exposed
apiVersion: v1
The API version for Service is v1
kind: Service
$ vim nginx-clusterip-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: ClusterIP
  ports:
  - port: 8080
    targetPort: 80
  selector:
    app: web
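The manifest is then applied, presumably with:
$ kubectl apply -f nginx-clusterip-service.yaml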
service/web-service created
We are using the alpine image to run a test container for testing.
The command specified will run within the pod when the pod starts; in our case it is the shell command.
We are going to use curl to test the connectivity from the testpod to the ClusterIP service, but Alpine does not come pre-installed with curl.
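A throwaway test pod of this kind is commonly started and given curl like this (a sketch; the pod name and flags are illustrative):
$ kubectl run testpod --image=alpine --rm -it -- sh
/ # apk add --no-cache curl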
/ # curl -s 10.43.225.11:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
(10.43.225.11 is the ClusterIP service's address; take note of the port 8080 that we expose in the service YAML manifest)
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
app=
web
External Service
Client [ClusterIP]
Pod
app= app=
web web
Pod Pod
Cluster
app= app=
web web
External
NodePort app= Pod Pod
Client web
app= app=
web db
Worker Node-2
Cluster
NodePort Service:
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport-service
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    nodePort: 30001
  selector:
    app: web

ClusterIP Service (for comparison):
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: ClusterIP
  ports:
  - port: 8080
    targetPort: 80
  selector:
    app: web
targetPort
80
app= app=
web web
External NodePort port
30001 8080 app= Pod Pod
Client web
app= app=
web db
NodePort
30001
Pod Pod
Worker Node-2
Cluster
$ vim nginx-nodeport-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport-service
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    nodePort: 30001
  selector:
    app: web
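The manifest is then applied, presumably with:
$ kubectl apply -f nginx-nodeport-service.yaml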
service/web-nodeport-service created
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-deploy-7869bd98c5-5l9x5 1/1 Running 0 60m 10.42.2.4 k3d-mycluster-agent-2 <none> <none>
web-deploy-7869bd98c5-k5dnj 1/1 Running 0 60m 10.42.2.5 k3d-mycluster-agent-2 <none> <none>
web-deploy-7869bd98c5-sdpvv 1/1 Running 0 60m 10.42.0.6 k3d-mycluster-agent-0 <none> <none>
web-deploy-7869bd98c5-cx29q 1/1 Running 0 60m 10.42.1.5 k3d-mycluster-agent-1 <none> <none>
web-deploy-7869bd98c5-pt254 1/1 Running 0 60m 10.42.1.4 k3d-mycluster-agent-1 <none> <none>
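The node listing below, with the internal IPs used for the NodePort tests, comes from:
$ kubectl get nodes -o wide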
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k3d-mycluster-agent-1 Ready <none> 64m v1.27.4+k3s1 192.168.144.5 <none> K3s dev 4.15.0-213-generic containerd://1.7.1-k3s1
k3d-mycluster-server-0 Ready control-plane,master 64m v1.27.4+k3s1 192.168.144.2 <none> K3s dev 4.15.0-213-generic containerd://1.7.1-k3s1
k3d-mycluster-agent-0 Ready <none> 64m v1.27.4+k3s1 192.168.144.3 <none> K3s dev 4.15.0-213-generic containerd://1.7.1-k3s1
k3d-mycluster-agent-2 Ready <none> 64m v1.27.4+k3s1 192.168.144.4 <none> K3s dev 4.15.0-213-generic containerd://1.7.1-k3s1
$ curl -s http://192.168.144.3:30001
<!DOCTYPE html>
<html> Worker Node-1
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
$ curl -s http://192.168.144.4:30001
<!DOCTYPE html>
<html> Worker Node-2
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
$ curl -s http://192.168.144.5:30001
<!DOCTYPE html>
<html> Worker Node-3
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
We are using the alpine image to run a test container for testing.
The command specified will run within the pod when the pod starts; in our case it is the shell command.
/ # curl -s http://192.168.144.4:30001
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
(192.168.144.4 is one of the worker nodes' IP addresses, accessed through port 30001)
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
/ # curl -s http://10.43.17.185:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
(We can access the service through its cluster IP address as well)
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
app= app=
web web
External
NodePort app= Pod Pod
Client web
app= app=
web db
Worker Node-2
Cluster
app=
web
app= app=
web web
Cloud Service
External NodePort Pod Pod
Provider Load
Client [LoadBalancer]
Balancer
Worker Node-1
app= app=
web db
Worker Node-2
Cluster
apiVersion: v1
kind: Service
metadata:
  name: web-lb-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: web
These are AWS load balancer annotations, and the Service type is LoadBalancer.
Comparison between Service Types
NodePort – Exposes the service on each worker node's port; accessible both inside and outside the cluster (Yes / Yes); ports involved: nodePort, port, targetPort
! We’re Experiencing
Some Downtime
OK
Deployment
v1 v1 v1 v2 v2 v2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:alpine
maxSurge determines the number of pods that can run temporarily in addition to the replicas specified in the update; in our case here, it will be a maximum of 4 pods.
maxUnavailable determines the number of pods that may be unavailable during the update; in our case here, at least 2 pods remain available at a time.
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:alpine
Instead of absolute numbers, we can use percentages as well; this is the default if we do not specify.
$ vim nginx-rollingupdate-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:alpine
We will update the deployment's image to nginx:alpine
deployment.apps/web-deploy configured
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-deploy-7869bd98c5-k5dnj 1/1 Running 0 2d20h 10.42.2.5 k3d-mycluster-agent-2 <none> <none>
web-deploy-7869bd98c5-pt254 1/1 Running 0 2d20h 10.42.1.4 k3d-mycluster-agent-1 <none> <none>
web-deploy-599fd76bc8-q49td 0/1 ContainerCreating 0 5s <none> k3d-mycluster-agent-1 <none> <none>
web-deploy-599fd76bc8-9vv6h 0/1 ContainerCreating 0 5s <none> k3d-mycluster-agent-0 <none> <none>
...
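The progress of a rolling update can also be followed, and rolled back if needed, with the rollout subcommands:
$ kubectl rollout status deployment web-deploy
$ kubectl rollout history deployment web-deploy
$ kubectl rollout undo deployment web-deploy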
Deployment
v1 v1 v1 v2 v2 v2
• All old pods are deleted before the new ones are created
• Short period of downtime when the app becomes completely
unavailable
• Use this strategy if your app does not support multiple versions in
parallel
strategy:
type: Recreate
selector:
matchLabels:
app: web
template:
metadata:
labels:
app: web
spec:
containers:
- name: web
image: nginx:perl
$ vim nginx-recreate-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy
spec:
  replicas: 3
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:perl
We will update the deployment's image to nginx:perl
deployment.apps/web-deploy configured
Service
[LoadBalancer]
v1 v1 v1 v2 v2 v2
Testing complete!
ReplicaSet Deployment complete! ReplicaSet
Deployment Deployment
(Old) (New)
Blue deployment (label app: blue):
  selector:
    matchLabels:
      app: blue
  template:
    metadata:
      labels:
        app: blue
    spec:
      containers:
      - name: web
        image: nginx

Green deployment (label app: green):
  selector:
    matchLabels:
      app: green
  template:
    metadata:
      labels:
        app: green
    spec:
      containers:
      - name: web
        image: httpd
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: blue
  ports:
  - port: 80
    targetPort: 80
The label selector is currently pointing to the replicas managed by the blue deployment
$ vim nginx-blue-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy-blue
spec:
  replicas: 3
  selector:
    matchLabels:
      app: blue
  template:
    metadata:
      labels:
        app: blue
    spec:
      containers:
      - name: web
        image: nginx
deployment.apps/web-deploy-blue created
$ vim nginx-green-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy-green
spec:
  replicas: 3
  selector:
    matchLabels:
      app: green
  template:
    metadata:
      labels:
        app: green
    spec:
      containers:
      - name: web
        image: httpd
deployment.apps/web-deploy-green created
$ vim nginx-bluegreen-service.yaml
apiVersion: v1
kind: Service
metadata:
name: web-service
service/web-service created
We are going to use curl to test the connectivity from the testpod to
the service. But Alpine does not come pre-installed with curl
/ # curl -s http://10.43.117.202
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
(Access the service and we see the output returned by nginx)
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
$ vim bluegreen-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: blue
  ports:
  - port: 80
    targetPort: 80
Update the selector from `blue` to `green` to switch deployments
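Alternatively, instead of editing the file, the selector switch can be done with a one-line patch (a sketch of the same change):
$ kubectl patch service web-service -p '{"spec":{"selector":{"app":"green"}}}'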
service/web-service configured
We are going to use curl to test the connectivity from the testpod to the
service. But Alpine does not come pre-installed with curl
/ # curl -s http://10.43.117.202
<html><body><h1>It works!</h1></body></html>
Service
[LoadBalancer]
v1 v1 v1 v2 v2 v2
Testing complete!
ReplicaSet Deployment complete! ReplicaSet
Deployment Deployment
(Old) (New)
Service
[LoadBalancer]
v1 v1 v1 v2
Deployment Deployment
(Old) (New)
Same label (app: canary) for both deployments.

Old deployment:
  selector:
    matchLabels:
      app: canary
  template:
    metadata:
      labels:
        app: canary
    spec:
      containers:
      - name: web
        image: nginx

New deployment:
  selector:
    matchLabels:
      app: canary
  template:
    metadata:
      labels:
        app: canary
    spec:
      containers:
      - name: web
        image: httpd
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: canary
  ports:
  - port: 80
    targetPort: 80
The label selector is currently pointing to the replicas managed by both the old and new deployment
$ vim nginx-old-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy-old
spec:
  replicas: 3
  selector:
    matchLabels:
      app: canary
  template:
    metadata:
      labels:
        app: canary
    spec:
      containers:
      - name: web
        image: nginx
deployment.apps/web-deploy-old created
$ vim httpd-new-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy-new
spec:
  replicas: 1
  selector:
    matchLabels:
      app: canary
  template:
    metadata:
      labels:
        app: canary
    spec:
      containers:
      - name: web
        image: httpd
deployment.apps/web-deploy-new created
$ vim canary-service.yaml
apiVersion: v1
kind: Service
metadata:
name: web-service
spec:
selector:
app: canary
ports:
- port: 80
targetPort: 80
service/web-service created
Name: web-service
Namespace: default
Labels: <none>
Annotations: endpoints.kubernetes.io/last-change-trigger-time: 2024-03-01T11:47:03Z
Subsets:
Addresses: 10.42.0.13,10.42.1.17,10.42.2.16,10.42.2.17
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
<unset> 80 TCP
The 4 endpoints are
● 10.42.0.13
● 10.42.1.17
● 10.42.2.16
● 10.42.2.17
We are going to use curl to test the connectivity from the testpod to
the service. But Alpine does not come pre-installed with curl
/ # curl -s 10.43.241.247
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
Try running this a couple of times, and you will randomly see either the nginx or httpd output. Since there are 3 nginx replicas and only 1 httpd replica, the chances of getting the nginx output are higher.
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
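Once the canary version looks healthy, the rollout is typically completed by shifting the replica counts between the two deployments, for example:
$ kubectl scale deployment web-deploy-new --replicas=3
$ kubectl scale deployment web-deploy-old --replicas=0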
Service
[LoadBalancer]
v1 v1 v1 v2
Deployment Deployment
(Old) (New)
Rolling Update – Rolls out a new version incrementally with zero downtime. Trade-off: old and new versions run at the same time during the upgrade; slower deployment process.
Blue-Green – Complex upgrades with zero downtime and reduced risk; can easily roll back to the previous version. Trade-off: requires twice the amount of resources to run.
config config
App
pod/mydatabase created
apiVersion: v1
kind: Pod
metadata:
name: mydatabase
spec:
containers:
- name: mydatabase
image: mysql
env:
- name: MYSQL_ROOT_PASSWORD
  value: "secretpassword"
We can inject the environment variables here
$ vim mysql-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: mydatabase
spec:
containers:
- name: mydatabase
image: mysql
env:
- name: MYSQL_ROOT_PASSWORD
value: "secretpassword"
pod/mydatabase created
…
2024-03-03T14:18:40.745189Z 0 [System] [MY-011323] [Server] X Plugin ready for
connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
2024-03-03T14:18:40.745520Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for
connections. Version: '8.3.0' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL
Community Server - GPL.
A ConfigMap holds key-value pairs that can be injected into a container as environment variables or mounted as a volume to a filesystem path in the Pod.
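The created message below matches what an imperative create from a literal would print, e.g.:
$ kubectl create configmap mysql-configmap --from-literal=MYSQL_ROOT_PASSWORD=secretpassword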
configmap/mysql-configmap created
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-configmap
data:
  MYSQL_ROOT_PASSWORD: secretpassword
Note that the ConfigMap manifest does not have a spec section; all the key-value pairs are specified under the data section
apiVersion: v1
kind: Pod
metadata:
name: mydatabase
spec:
containers:
- name: mydatabase
image: mysql
envFrom:
- configMapRef:
    name: mysql-configmap
The key-value pair in the configmap is injected as an environment variable in the container
$ vim mysql-cm-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: mydatabase
spec:
containers:
- name: mydatabase
image: mysql
envFrom:
- configMapRef:
name: mysql-configmap
pod/mydatabase created
$ vim database.json
{
  "database": {
    "host": "mysql-service",
    "user": "dba"
  }
}
We create a database configuration file that can be read by the app in the container
configmap/mysql-json-configmap created
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-json-configmap
data:
  db.json: |-
    {
      "database": {
        "host": "mysql-service",
        "user": "dba"
      }
    }
Database configuration content from the file db.json
kind: Pod
metadata:
  name: mydatabase
spec:
  volumes:
  - name: mysql-config-volume
    configMap:
      name: mysql-json-configmap
  containers:
  - name: mydatabase
    image: mysql
    volumeMounts:
    - name: mysql-config-volume
      mountPath: /etc/config
    envFrom:
    - configMapRef:
        name: mysql-configmap
We create a volume and reference it to the configmap.
We mount the volume to the filepath /etc/config of the container.
$ vim mysql-volumemount-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mydatabase
spec:
  volumes:
  - name: mysql-config-volume
    configMap:
      name: mysql-json-configmap
  containers:
  - name: mydatabase
    image: mysql
    volumeMounts:
    - name: mysql-config-volume
      mountPath: /etc/config
    envFrom:
    - configMapRef:
        name: mysql-configmap
pod/mydatabase created
sh-4.4#
{
  "database": {
    "host": "mysql-service",
    "user": "dba"
  }
}
A Secret holds key-value pairs (e.g. MYSQL_ROOT_PASSWORD: secretpassword) or file content (e.g. JSON) that can be injected into a container as environment variables or mounted as a volume.
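The created message below is consistent with an imperative generic secret created from a literal, e.g.:
$ kubectl create secret generic mysql-secret --from-literal=MYSQL_ROOT_PASSWORD=secretpassword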
secret/mysql-secret created
apiVersion: v1
data:
  MYSQL_ROOT_PASSWORD: c2VjcmV0cGFzc3dvcmQ=
kind: Secret
metadata:
  creationTimestamp: "2024-03-05T01:16:19Z"
  name: mysql-secret
  namespace: default
  resourceVersion: "38324"
  uid: 63340e3a-97cb-46d2-8f72-b16711fd4895
type: Opaque
The password is automatically encoded in base64 format when we create the secret using literals the imperative way.
The type Opaque is for generic sensitive data when we create the secret as generic in the kubectl command.
secretpassword
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
data:
  MYSQL_ROOT_PASSWORD: c2VjcmV0cGFzc3dvcmQ=
type: Opaque
If we are creating the secret declaratively using the YAML manifest, we have to encode the value ourselves and define it here
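The encoded value can be produced on the command line; the alternative string on the next line (ending in K) is what you get if the trailing newline is included, i.e. echo without -n:
$ echo -n 'secretpassword' | base64
c2VjcmV0cGFzc3dvcmQ=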
c2VjcmV0cGFzc3dvcmQK
apiVersion: v1
kind: Secret
metadata:
name: mysql-secret
type: Opaque
Using a Secret:
apiVersion: v1
kind: Pod
metadata:
  name: mydatabase
spec:
  containers:
  - name: mydatabase
    image: mysql
    envFrom:
    - secretRef:
        name: mysql-secret

Using a ConfigMap (for comparison):
apiVersion: v1
kind: Pod
metadata:
  name: mydatabase
spec:
  containers:
  - name: mydatabase
    image: mysql
    envFrom:
    - configMapRef:
        name: mysql-configmap
$ vim mysql-secret-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: mydatabase
spec:
containers:
- name: mydatabase
image: mysql
envFrom:
- secretRef:
name: mysql-secret
pod/mydatabase created
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=mydatabase
GOSU_VERSION=1.16
MYSQL_MAJOR=innovation
MYSQL_VERSION=8.3.0-1.el8
MYSQL_SHELL_VERSION=8.3.0-1.el8
The secret value is automatically decoded and injected as an environment variable in the container:
MYSQL_ROOT_PASSWORD=secretpassword
KUBERNETES_PORT=tcp://10.43.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.43.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.43.0.1
KUBERNETES_SERVICE_HOST=10.43.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
HOME=/root
$ vim database.json
{
  "database": {
    "host": "mysql-service",
    "user": "dba"
  }
}
We create a database configuration file that can be read by the app in the container
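The created message below matches a secret created from the file, e.g.:
$ kubectl create secret generic mysql-json-secret --from-file=database.json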
secret/mysql-json-secret created
apiVersion: v1
data:
database.json:
ewogIOKAnGRhdGFiYXNl4oCdOiB7CiAgICDigJxob3N04oCdOiDigJxteXNxbC1zZXJ2aWNl4oC
dLAogICAg4oCcdXNlcuKAnTog4oCcZGJh4oCdCn0KCg==
kind: Secret
metadata:
creationTimestamp: "2024-03-05T02:11:15Z"
name: mysql-json-secret
namespace: default
resourceVersion: "40325"
uid: 0e2f8f43-86b5-4795-adda-6ef672a2e297
type: Opaque
kind: Pod
metadata:
  name: mydatabase
spec:
  volumes:
  - name: mysql-secret-volume
    secret:
      secretName: mysql-json-secret
  containers:
  - name: mydatabase
    image: mysql
    volumeMounts:
    - name: mysql-secret-volume
      mountPath: /etc/config
      readOnly: true
    envFrom:
    - secretRef:
        name: mysql-secret
We create a volume and reference it to the secret. Take note that we use secretName to reference the secret, unlike ConfigMap which uses name.
We mount the volume to the filepath /etc/config of the container. Take note that we set readOnly to true, as files provided by a secret mounted as a volume cannot be modified.
$ vim mysql-volumemount-secret-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mydatabase
spec:
  volumes:
  - name: mysql-secret-volume
    secret:
      secretName: mysql-json-secret
  containers:
  - name: mydatabase
    image: mysql
    volumeMounts:
    - name: mysql-secret-volume
      mountPath: /etc/config
      readOnly: true
    envFrom:
    - secretRef:
        name: mysql-secret
pod/mydatabase created
sh-4.4#
{
  "database": {
    "host": "mysql-service",
    "user": "dba"
  }
}
Recap: Secret key-value pairs (e.g. MYSQL_ROOT_PASSWORD: secretpassword) and file content (e.g. JSON) can be injected into the container as environment variables or mounted as files.
Container
Persistent
Storage
Pod
App
PersistentVolumeClaim (PVC)
PersistentVolume (PV)
Cluster
Physical Storage
External Storage
Container
VolumeMount PersistentVolumeClaim
Volume
Pod PersistentVolume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myvolume
spec:
  capacity:
    storage: 100Mi
  hostPath:
    path: /data
Allocate a storage capacity of 100Mi; hostPath mounts the directory /data from the worker node's filesystem
$ vim persistentvolume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: myvolume
spec:
capacity:
storage: 100Mi
accessModes:
- ReadWriteOnce
hostPath:
path: /data
persistentvolume/myvolume created
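The listing below comes from:
$ kubectl get pv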
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
myvolume 100Mi RWO Retain Available 6s
Volume types:
hostPath – Used for mounting directories from the worker node's filesystem to the pod
configMap, secret – Kubernetes configuration resources mounted to the filesystem of the pod

Reclaim policies:
Delete – When the PersistentVolumeClaim is deleted, remove the PersistentVolume and its associated storage
Recycle – Deprecated
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  csi.storage.k8s.io/fstype: xfs
  type: io1
  iopsPerGB: "50"
...
The provisioner determines where and how this storage is set up; different storage has its own provisioner.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myvolumeclaim
spec:
  storageClassName: ""
  resources:
    requests:
      storage: 100Mi
$ vim persistentvolumeclaim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: myvolumeclaim
spec:
accessModes:
- ReadWriteOnce
storageClassName: ""
resources:
requests:
storage: 100Mi
persistentvolumeclaim/myvolumeclaim created
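The details below come from describing the claim:
$ kubectl describe pvc myvolumeclaim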
Name: myvolumeclaim
Namespace: default
StorageClass:
Status: Bound
Volume: myvolume
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 100Mi
Access Modes: RWO
VolumeMode: Filesystem
Used By:       <none>
Events:        <none>
This indicates that the persistentvolumeclaim is not used by any pods yet
kind: Pod
metadata:
  name: web
spec:
  volumes:
  - name: mystorage
    persistentVolumeClaim:
      claimName: myvolumeclaim
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: mystorage
      mountPath: /mnt/data
We create a volume and reference it to the persistentvolumeclaim, and mount the volume to the filepath /mnt/data of the container.
$ vim nginx-pvc-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  volumes:
  - name: mystorage
    persistentVolumeClaim:
      claimName: myvolumeclaim
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: mystorage
      mountPath: /mnt/data
pod/web created
Name: myvolumeclaim
Namespace: default
StorageClass:
Status: Bound
Volume: myvolume
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 100Mi
Access Modes: RWO
VolumeMode: Filesystem
Used By:       web
Events:        <none>
This indicates that the persistentvolumeclaim is being mounted and used by one pod named web
pod/web created
# cat /mnt/data/message
"I love pvc!"
namespace/dev created
apiVersion: v1
kind: Namespace
metadata:
name: dev
pod/web created
namespace/test created
pod/web created
kind: Deployment
metadata:
  name: web-deploy
  namespace: test
To create the deployment in the namespace, we just need to add the namespace field under the metadata section
spec:
replicas: 3
selector:
matchLabels:
app: web
template:
metadata:
labels:
app: web
spec:
containers:
- name: web
image: nginx
Create a YAML manifest for our deployment
$ vim nginx-deployment-test-namespace.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy
  namespace: test
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
deployment.apps/web-deploy configured
kind: ResourceQuota
metadata:
  name: web-quota
  namespace: staging
spec:
  hard:
    pods: 2
Limit the number of pods that can be created in the namespace to 2
namespace/staging created
$ vim web-resourcequota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
name: web-quota
namespace: staging
spec:
hard:
pods: 2
resourcequota/web-quota created
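The usage summary below comes from describing the quota in its namespace:
$ kubectl describe resourcequota web-quota -n staging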
Name: web-quota
Namespace: staging
Resource Used Hard
-------- ---- ----
pods 0 2
deployment.apps/test-deploy created
Name: test-deploy
Namespace: staging
CreationTimestamp: Fri, 22 Mar 2024 01:24:53 +0000
Labels: app=test-limit
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=test-limit
Replicas: 3 desired | 2 updated | 2 total | 2 available | 1 unavailable
...
...
Conditions:
Type Status Reason
---- ------ ------
Available False MinimumReplicasUnavailable
ReplicaFailure True FailedCreate
Progressing True ReplicaSetUpdated
...
...
3m25s Normal Pulled pod/test-deploy-6d9f8d87cf-nvd5d Successfully pulled
image "nginx" in 1.943434388s (1.943443685s including waiting)
3m25s Normal Created pod/test-deploy-6d9f8d87cf-xkndn Created container
nginx
3m25s Normal Created pod/test-deploy-6d9f8d87cf-nvd5d Created container
nginx
3m25s Normal Started pod/test-deploy-6d9f8d87cf-xkndn Started container
nginx
3m25s Normal Started pod/test-deploy-6d9f8d87cf-nvd5d Started container
nginx
2m1s Warning FailedCreate replicaset/test-deploy-6d9f8d87cf (combined from similar
events): Error creating: pods "test-deploy-6d9f8d87cf-cnr2c" is forbidden: exceeded quota: web-quota,
requested: pods=1, used: pods=2, limited: pods=2
...
kind: ResourceQuota
metadata:
  name: web-quota
  namespace: staging
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1024Mi
    limits.cpu: "2"
    limits.memory: 2048Mi
The maximum resources that can be used by all pods
$ vim web-resourcequota-limit.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: web-quota
  namespace: staging
spec:
  hard:
    pods: 2
    requests.cpu: "1"
    requests.memory: 1024Mi
    limits.cpu: "2"
    limits.memory: 2048Mi
We keep the same name to override the resourcequota we have created previously
resourcequota/web-quota configured
Name: web-quota
Namespace: staging
Resource Used Hard
-------- ---- ----
limits.cpu 0 2
limits.memory 0 2Gi
pods 0 2
requests.cpu 0 1
requests.memory 0 1Gi
deployment.apps/test-deploy created
Name: test-deploy
Namespace: staging
CreationTimestamp: Mon, 25 Mar 2024 00:01:22 +0000
Labels: app=test-deploy
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=test-deploy
Replicas: 1 desired | 0 updated | 0 total | 0 available | 1 unavailable
...
...
Conditions:
Type Status Reason
---- ------ ------
Progressing True NewReplicaSetCreated
Available False MinimumReplicasUnavailable
ReplicaFailure True FailedCreate
OldReplicaSets: <none>
NewReplicaSet: test-deploy-6d9f8d87cf (0/1 replicas created)
...
...
quota: web-quota: must specify limits.cpu for: nginx; limits.memory for: nginx; requests.cpu for: nginx;
requests.memory for: nginx
117s Warning FailedCreate replicaset/test-deploy-6d9f8d87cf Error creating: pods
"test-deploy-6d9f8d87cf-qdq6q" is forbidden: failed quota: web-quota: must specify limits.cpu for: nginx;
limits.memory for: nginx; requests.cpu for: nginx; requests.memory for: nginx
116s Warning FailedCreate replicaset/test-deploy-6d9f8d87cf Error creating: pods
"test-deploy-6d9f8d87cf-6wnwh" is forbidden: failed quota: web-quota: must specify limits.cpu for: nginx;
limits.memory for: nginx; requests.cpu for: nginx; requests.memory for: nginx
116s Warning FailedCreate replicaset/test-deploy-6d9f8d87cf Error creating: pods
"test-deploy-6d9f8d87cf-vljng" is forbidden: failed quota: web-quota: must specify limits.cpu for: nginx;
limits.memory for: nginx; requests.cpu for: nginx; requests.memory for: nginx
35s Warning FailedCreate replicaset/test-deploy-6d9f8d87cf (combined from similar
events): Error creating: pods "test-deploy-6d9f8d87cf-jrk42" is forbidden: failed quota: web-quota: must
specify limits.cpu for: nginx; limits.memory for: nginx; requests.cpu for: nginx; requests.memory for:
nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        resources:
          requests:
            cpu: "1"
            memory: "1Gi"
          limits:
            cpu: "2"
            memory: "2Gi"
Include the requests and limits for the resources
$ vim nginx-deployment-test-namespace-limit.yaml
deployment.apps/web-deploy configured
• Set resource requests and limits for CPU and memory to ensure fair
resource allocation and avoid resource contention
• Resource requests specify the minimum amount of CPU and
memory that a container requires to run. Kubernetes can schedule
pods on nodes that have sufficient capacity to meet these
requirements
• Resource limits protect the overall health of the Kubernetes cluster
by preventing individual pods from consuming excessive resources
Useful kubectl commands for troubleshooting (examples follow below):
• kubectl logs
• kubectl exec -it
• kubectl cp
• kubectl get events
• kubectl top nodes
• kubectl top pods
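A minimal sketch of how each of these is typically invoked (the pod and file names here are illustrative):
$ kubectl logs web
$ kubectl exec -it web -- sh
$ kubectl cp web:/etc/nginx/nginx.conf ./nginx.conf
$ kubectl get events --sort-by=.metadata.creationTimestamp
$ kubectl top nodes
$ kubectl top pods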
apiVersion: v1
kind: Pod
metadata:
name: web
$ vim nginx-nodeselector-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: web
spec:
containers:
- name: web
image: nginx
nodeSelector:
tier: backend
pod/web created
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web 0/1 Pending 0 3m40s <none> <none> <none> <none>
The pod is in the Pending state for some time; no node has been assigned to it
Name: web
Namespace: default
Priority: 0
Service Account: default
Node: <none>
Labels: <none>
Annotations:      <none>
Status:           Pending
Failed scheduling due to no available nodes for scheduling
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 5m34s default-scheduler 0/4 nodes are available: 1 node(s) had untolerated taint
{node-role.kubernetes.io/master: }, 3 node(s) didn't match Pod's node affinity/selector. preemption: 0/4 nodes are
available: 4 Preemption is not helpful for scheduling..
Warning FailedScheduling 27s default-scheduler 0/4 nodes are available: 1 node(s) had untolerated taint
{node-role.kubernetes.io/master: }, 3 node(s) didn't match Pod's node affinity/selector. preemption: 0/4 nodes are
available: 4 Preemption is not helpful for scheduling..
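Labeling a node with the key expected by the pod's nodeSelector (tier=backend) lets the scheduler place the pod; the labeled message below is what such a command prints:
$ kubectl label node k3d-mycluster-agent-1 tier=backend
The label can later be removed with kubectl label node k3d-mycluster-agent-1 tier- , which produces the unlabeled message shown at the end.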
node/k3d-mycluster-agent-1 labeled
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web 1/1 Running 0 16m 10.42.1.10 k3d-mycluster-agent-1 <none> <none>
node/k3d-mycluster-agent-1 unlabeled