Lecture 00 Understand-Kubernetes
Kubernetes
Let’s start with an analogy…
A Cargo Ship…
Carries containers across the sea
Hosts applications as containers ~ Worker Nodes
Control Ships…
Managing & Monitoring of the cargo ships
Manage, Plan, Schedule, Monitor ~ Master
Let’s talk about
Master Components…
Ship Cranes…
Identifies the placement of containers
Identifies the right node to place a container ~ Kube-Scheduler
Cargo Ship Profiles…
HA database ~ Which containers are on which ships? When were they loaded? ~ The ETCD Cluster
Offices in Dock…
● Operations Team Office ~ ship handling and control
● Cargo Team Office ~ verifies whether containers are damaged, ensures that new containers are rebuilt
● IT & Communication Office ~ communication between the various ships
Controllers
● Node Controller – Takes care of Nodes | Responsible for onboarding new nodes to the cluster | Monitors the availability of Nodes
● Replication Controller – Ensures that the desired number of containers is running at all times
● Controller Manager – Manages all of these controllers
How do all of these services communicate with each other?
Kube API Server
● The primary management component of k8s
● Responsible for orchestrating all operations within the cluster
● Exposes the K8s API, which is used by external users to perform management operations on the cluster, and by a number of controllers to monitor the state of the cluster
[Diagram: the API Server at the centre of the Master, connecting kubectl and UI clients with the Scheduler, ETCD, and the Controller Manager]
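Output like the table below typically comes from asking the API Server for the node list with kubectl's wide output flag:

$ kubectl get nodes -o wide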
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node1 Ready master 92s v1.14.2 192.168.0.18 <none> CentOS Linux 7 (Core) 4.4.0-141-generic docker://18.9.6
node2 Ready <none> 57s v1.14.2 192.168.0.17 <none> CentOS Linux 7 (Core) 4.4.0-141-generic docker://18.9.6
node3 NotReady <none> 39s v1.14.2 192.168.0.16 <none> CentOS Linux 7 (Core) 4.4.0-141-generic docker://18.9.6
node4 NotReady <none> 32s v1.14.2 192.168.0.15 <none> CentOS Linux 7 (Core) 4.4.0-141-generic docker://18.9.6
[Diagram: the Kubelet runs on each Worker Node and communicates with the API Server; containers run inside Pods on the Worker Nodes]
Pods
● Pod Deployment
● Multi-Container Pods
● Pod Networking
● Pod Lifecycle
[Diagram: VM vs Container vs Pod]
How are Pods deployed?
[Diagram: the Scheduler, via the API Server on the Master, places a Pod (running a container) onto a node in the Cluster]
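A minimal way to ask the API Server to deploy a Pod is kubectl run (the Pod name is illustrative; on recent kubectl versions this creates a single Pod):

$ kubectl run nginx-pod --image=nginx
$ kubectl get pods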
Scaling the Pods to accommodate increasing traffic
[Diagram: additional Pods scheduled onto the Worker Node as traffic increases]
What if node resources become insufficient?
[Diagram: the Scheduler places new Pods on Worker-2 when Worker-1 runs out of resources]
Two containers in the same Pod
[Diagram: a Pod on Worker-1 running two containers]
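A sketch of a manifest for such a Pod, with two illustrative containers listening on :8080 and :3000 (names and images are hypothetical):

#Two containers in one Pod
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
  - name: main-app          # main container, listens on :8080
    image: example/main-app # hypothetical image
    ports:
    - containerPort: 8080
  - name: helper            # supporting container, listens on :3000
    image: example/helper   # hypothetical image
    ports:
    - containerPort: 3000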
Pod Networking
● Each Pod gets its own IP address (e.g., Pod 1: 10.0.30.50, Pod 2: 10.0.30.60)
How do the containers inside Pods communicate with the external world?
● Each Pod has its own network namespace; containers are reached from outside through the Pod IP and the container port (e.g., 10.0.30.50:8080 and 10.0.30.50:3000)
How does one Pod talk to another Pod?
● Over the Pod network, using Pod IP addresses (10.0.30.50 → 10.0.30.60)
How does intra-Pod communication take place?
● Containers in the same Pod share the Pod's network namespace: in Pod 1 (10.0.30.50), the main container (:8080) and the supporting container (:3000) reach each other over localhost
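To see this in action, one could exec into the main container of the two-container Pod sketched earlier and call the supporting container over localhost (assuming curl is available in the image):

$ kubectl exec -it web-pod -c main-app -- curl localhost:3000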
A Look at Pod Manifest
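A minimal Pod manifest might look like this (a sketch; nginx is used as the example image):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80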
Get a shell to a running Container
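Assuming the Pod above, kubectl exec opens an interactive shell inside the container (provided the image ships a shell):

$ kubectl exec -it nginx-pod -- /bin/bash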
What happens when a Pod fails? How can you ensure that there are 3 Pod instances always available and running at any point in time?
ReplicaSet
What is a ReplicaSet all about?
Maintains a stable set of replica Pods running at any given time.
a. If there are excess Pods, they get killed, and vice versa
b. New Pods are launched when existing Pods fail, are deleted, or are terminated
Labels & Selectors
When Pods are scaled, how are these Pods managed at such a large scale?
● Labels: key/value pairs attached to objects
● Selectors: match objects by their labels

#Pod-Spec
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: guestbook
    tier: frontend
    env: dev
spec:
  containers:
  - name: nginx
    image: nginx
Equality-based Selectors
● Operators: =, ==, !=
● In Manifest:
  selector:
    environment: production
    tier: frontend
● Supports: Services, Replication Controller

Set-based Selectors
● Operators: in, notin, exists
● In Manifest:
  selector:
    matchExpressions:
    - {key: environment, operator: In, values: [prod, qa]}
    - {key: tier, operator: NotIn, values: [frontend, backend]}
● Supports: Job, Deployment, ReplicaSet, DaemonSet
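Both selector styles also work on the command line; for example (label keys and values here match the manifests above):

$ kubectl get pods -l environment=production,tier=frontend   # equality-based
$ kubectl get pods -l 'environment in (prod, qa)'            # set-based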
Demo - ReplicaSet
● Manifest file
● Test – Scale Up
ReplicaSet Manifest File
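A sketch of the nginx-rs.yaml manifest used in this demo (the replica count and labels are illustrative):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx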
Creating Nginx-rs Pods
$ kubectl create -f nginx-rs.yaml
Scaling the Nginx Service
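The ReplicaSet can be scaled either by name or via its manifest file (the target count is illustrative):

$ kubectl scale rs nginx-rs --replicas=5
# or
$ kubectl scale --replicas=5 -f nginx-rs.yaml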
Deployment
A Deployment provides declarative updates for Pods and ReplicaSets. You describe a desired state in a Deployment, and the Deployment controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.
Deployment Type - Rolling Update
● How does it work?
○ Slowly roll out the new version of the app by replacing instances one after the other, until all of the instances are rolled out successfully.
○ Assume there are 10 instances of version A running behind the LB. The update starts by deploying one instance of version B; when version B is ready to accept traffic, one instance of version A is removed from the pool.
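In a Deployment manifest, this behaviour is controlled by the rolling-update strategy fields; a fragment of a Deployment spec (the exact numbers are illustrative):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one instance down during the update
      maxSurge: 1         # at most one extra instance created during the update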
Deployment Type - Canary
● Canary
○ The ideal deployment method for someone who wants to test a newer version before it is rolled out to 100% of users.
● How does it work?
○ This method is all about gradually shifting production traffic from version A to version B.
○ Imagine there are about 10 instances of app version A running inside a cluster. You use a Canary deployment when you don't want to upgrade all of your instances at once. Say you upgrade 2 of your version A instances to version B, then do some testing. If the test results are good, you upgrade the remaining 8 instances to version B. Once version B is fully ready, you completely shut down version A.
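One common way to sketch a canary with plain Deployments (names are hypothetical): run version A and version B as two Deployments whose Pod templates share a label selected by a single Service, and shift the replica counts between them:

$ kubectl scale deployment app-v1 --replicas=8   # version A keeps most of the traffic
$ kubectl scale deployment app-v2 --replicas=2   # version B receives a small share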
Deployment Type - Blue Green
● Blue Green
○ Instant roll out and roll back.
● How does it work?
○ Using this method, version B (GREEN) is deployed alongside version A (BLUE) with exactly the same number of instances.
○ After the new version passes all the required tests, the traffic is switched from version A to version B at the LB level.
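A sketch of the switch-over, assuming a hypothetical Service my-svc and a version label on the Pods; repointing the Service selector from blue to green moves all traffic at once:

$ kubectl patch service my-svc -p '{"spec":{"selector":{"version":"green"}}}'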
Deployment Manifest File
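A sketch of a Deployment manifest (the image tag and replica count are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16
        ports:
        - containerPort: 80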
Deployment => ReplicaSet + Pods
[Diagram: a Deployment manages a ReplicaSet, which in turn manages the Pods]
3 instances of the same Nginx app running in the form of Pods
Update deployment
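Assuming the Deployment above, an update can be triggered by changing the image, and the rollout can be watched or rolled back (the target tag is illustrative):

$ kubectl set image deployment/nginx-deployment nginx=nginx:1.17
$ kubectl rollout status deployment/nginx-deployment
$ kubectl rollout undo deployment/nginx-deployment   # roll back if needed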
Scaling up
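Scaling the same Deployment up (the target count is illustrative):

$ kubectl scale deployment nginx-deployment --replicas=5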
Listing Pods by Labels
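Pods can be filtered by the labels applied in the manifest above:

$ kubectl get pods -l app=nginx --show-labels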
Services
● Imagine that you have been asked to deploy a web app
● What is a Service?
● Types of Services
● Frontend Service
○ A Service which sits between the user (192.168.1.1) and the frontend Pod
● Backend Service
○ A Service which handles communication between the frontend Pod and the backend Pod (app: db)
[Diagram: User → Service(frontend) → Frontend Pod → Service(backend) → Backend Pod, all within a Node]
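A sketch of the backend Service: a ClusterIP Service selecting the app: db Pods (the Service name is hypothetical, and the Redis port is an assumption based on the demo that follows):

apiVersion: v1
kind: Service
metadata:
  name: backend-svc   # hypothetical name
spec:
  selector:
    app: db
  ports:
  - port: 6379        # Redis default port (assumption)
    targetPort: 6379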
Types of Services
● ClusterIP – exposes the Service on an internal cluster IP (e.g., 10.210.1.1:8080); reachable only from within the cluster
● NodePort – exposes the Service on a static port on each Node, making it reachable from outside the cluster
● LoadBalancer – exposes the Service through an external load balancer
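A sketch of a NodePort Service for the frontend (names, ports, and the nodePort value are illustrative; nodePort must fall in the 30000-32767 range by default):

apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30080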
Demo - Services
● Frontend – Web app
● Backend – Redis DB
Thank you