Module 2-4
Microservices
CCA3010
Module 2
Syllabus
Working with Kubernetes: Cluster Architecture, the costs of self-hosting
Kubernetes, Managed Kubernetes services, Turnkey Kubernetes solutions,
Kubernetes installers, Clusterless Container Services, Deployments of Kubernetes,
Pods, Replica Sets, Maintaining Desired State, the Kubernetes Scheduler, Resource
Manifests in YAML format, the Kubernetes Package Manager, Kubernetes Volume
Management, Submitting Jobs to Kubernetes.
Architecture of Kubernetes
• Node
• Cluster
• Control Plane (Master)
• API Server
• etcd
• Scheduler
• Controller Manager
• Node Components
• Kubelet
• Container runtime
• Kube Proxy
• Pods
• Replication Controller/Replica Set
• Service
• Volume
• Namespace
• Label and Selector
• ConfigMap and Secret
Architecture of Kubernetes
• We create a manifest (.yml) describing the desired state of the cluster.
• Kubernetes designates one or more machines as masters and all others as workers.
• The master runs a set of K8s processes that ensure the smooth functioning of the cluster.
Together these are called the "Control Plane".
• 1. API Server – The front end of the control plane; all requests to the cluster go through it.
• 2. Controller Manager – Makes sure that the actual state of the cluster matches the desired state.
• 3. Scheduler – Watches for newly created pods that have no node assigned. For every pod it
discovers, the scheduler becomes responsible for finding the best node for that pod to run on.
• The scheduler gets hardware-configuration information from configuration files and
schedules the pods on nodes accordingly.
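For example, the scheduler reads each container's resource requests from the pod manifest when choosing a node. A minimal sketch (the pod name and resource values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scheduled-pod        # hypothetical example name
spec:
  containers:
  - name: app
    image: nginx:latest
    resources:
      requests:              # the scheduler places this pod only on a node
        cpu: "250m"          # with at least 0.25 CPU cores unreserved
        memory: "128Mi"      # and 128 MiB of memory unreserved
```

A node that cannot satisfy these requests is filtered out; the scheduler then scores the remaining nodes and binds the pod to the best one.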
Architecture of Kubernetes
• 4. etcd
• Stores metadata and the status of the cluster
• etcd is a consistent and highly available key-value store
• Source of truth for cluster state (info about the state of the cluster)
• etcd has the following features:
• Fully replicated – the entire state is available on every node in the cluster
• Secure – implements automatic TLS with optional client certificate authentication
• Fast – benchmarked at 10,000 writes per second
• 1. Kubelet
• Agent running on each node.
• Listens to the Kubernetes master (e.g., for pod creation requests)
• Uses port 10255
• Sends success/failure reports to the master
• 2. Container engine (container runtime)
• Works with the kubelet
• Pulls images
• Starts/stops containers
• Exposes containers on the ports specified in the manifest
Architecture of Kubernetes
• 3. Kube-proxy
• Kube-proxy runs on each node and maintains network rules so that traffic (e.g., for Services) reaches
the right pods at their unique IP addresses.
• Note: a bare pod has no auto-healing or scaling; if the pod crashes, it is not recreated
automatically (a controller is needed for that).
Types of Kubernetes
1.Self-Hosted Kubernetes:
As discussed earlier, this involves manually setting up and managing all the components of a
Kubernetes cluster, including the control plane and worker nodes, on your own infrastructure.
2.Managed Kubernetes Services:
Cloud providers offer managed Kubernetes services that abstract much of the operational
complexity. Examples include:
1.Amazon EKS (Elastic Kubernetes Service): Managed Kubernetes service on AWS.
2.Google Kubernetes Engine (GKE): Managed Kubernetes service on Google Cloud.
3.Azure Kubernetes Service (AKS): Managed Kubernetes service on Microsoft Azure.
3.On-Premises Kubernetes:
Deploying Kubernetes in an on-premises environment, typically within a private data center.
This allows organizations to have full control over their infrastructure but requires managing the
hardware and networking components.
Types of Kubernetes
4. Bare-Metal Kubernetes:
Running Kubernetes directly on physical servers without an underlying virtualization layer. This
approach is chosen when organizations want to maximize resource utilization and avoid the overhead
of virtualization.
The costs of self-hosting Kubernetes
• Self-hosting Kubernetes can involve various costs, both direct and indirect. It's essential to
consider these factors when planning to deploy and maintain a Kubernetes cluster on your own
infrastructure. Here are some key cost considerations:
1.Hardware Costs:
1. Servers/Nodes: You'll need physical or virtual machines to act as Kubernetes nodes. The
number and specifications of these nodes will depend on your workload and performance
requirements.
2. Storage: Depending on your storage needs, you may incur costs for local or network-attached
storage.
2.Networking Costs:
1. Bandwidth: Data transfer between nodes, as well as communication with external services,
may incur network costs. Ensure your network infrastructure can handle the traffic.
The costs of self-hosting Kubernetes
3. Software Costs:
1. Kubernetes Distribution: Some distributions may have licensing costs or support fees.
Examples include Red Hat OpenShift, VMware Tanzu, and others.
2. Container Runtimes: Consider the container runtime you use (Docker, containerd, etc.) and
any associated costs.
4. Monitoring and Logging:
Implementing monitoring and logging solutions can incur costs. Tools like Grafana, ELK stack,
etc., might have associated expenses.
5. Security:
Security tools, such as vulnerability scanners or network security solutions, may have costs
associated with them.
6. Human Resources:
Employing or training staff to manage, monitor, and troubleshoot the Kubernetes cluster will
contribute to the overall cost.
The costs of self-hosting Kubernetes
7. Training and Certification:
If your team needs training or certification to effectively manage Kubernetes, consider the
associated costs.
8. Backup and Disaster Recovery:
Implementing backup and disaster recovery solutions may have costs, including storage costs for
backups.
9. Upgrades and Maintenance:
Ongoing maintenance, updates, and upgrades might require time and resources.
10. Power and Cooling:
Running and cooling the physical infrastructure or virtualization hosts will contribute to
operational costs.
11. Facility Costs:
If you are using a data center, there will be associated costs for space, power, and other facilities.
The costs of self-hosting Kubernetes
12. Scaling Costs:
As your workload grows, you may need to scale your infrastructure, which incurs additional
costs.
13. Legal and Compliance:
Costs associated with ensuring compliance with data protection laws, industry regulations, etc.
In some cases, using managed Kubernetes services from cloud providers might be a cost-effective
alternative, especially for smaller organizations or those without dedicated infrastructure expertise.
Managed Kubernetes Services
Managed Kubernetes services are offerings provided by cloud service providers to simplify the
deployment, management, and scaling of Kubernetes clusters. These services abstract much of the
operational complexity, allowing users to focus more on deploying and managing applications rather
than the underlying infrastructure.
1.Amazon EKS (Elastic Kubernetes Service):
Amazon EKS is a fully managed Kubernetes service provided by Amazon Web Services (AWS).
It simplifies the process of deploying, managing, and scaling containerized applications using
Kubernetes. EKS integrates with other AWS services and provides features such as automatic
updates and seamless integration with AWS Identity and Access Management (IAM).
2.Google Kubernetes Engine (GKE):
Google Kubernetes Engine is a managed Kubernetes service offered by Google Cloud. GKE
provides features like automated scaling, monitoring, and seamless integration with other Google
Cloud services. It is optimized for Google Cloud's infrastructure and supports features such as
auto-upgrades.
Managed Kubernetes Services
3. Azure Kubernetes Service (AKS):
Azure Kubernetes Service is a managed Kubernetes offering from Microsoft Azure. It simplifies
the deployment and management of containerized applications using Kubernetes. AKS integrates
with Azure services and provides features like auto-scaling, monitoring, and seamless integration
with Azure Active Directory.
4. IBM Cloud Kubernetes Service:
IBM Cloud offers a managed Kubernetes service that allows users to deploy, manage, and scale
containerized applications using Kubernetes. It supports features like automated updates,
monitoring, and integration with other IBM Cloud services.
5. Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE):
Oracle Cloud provides a managed Kubernetes service known as Oracle Cloud Infrastructure
Container Engine for Kubernetes. It enables users to deploy, manage, and scale containerized
applications on Oracle Cloud Infrastructure. OKE supports features like automated updates,
monitoring, and integration with Oracle Cloud services.
These managed Kubernetes services aim to reduce the operational overhead associated with
running Kubernetes clusters.
Turnkey Kubernetes solutions
• Turnkey Kubernetes solutions are pre-packaged and easy-to-deploy Kubernetes distributions that
streamline the process of setting up and managing Kubernetes clusters.
• These solutions are designed to simplify the complexities of deploying and maintaining
Kubernetes, making it more accessible to users who may not have extensive expertise in
Kubernetes administration.
• Turnkey solutions often come with integrated tools, configurations, and automation to expedite the
deployment process.
• Popular turnkey Kubernetes solutions:
• Rancher
• KubeSphere
• K3s
• MicroK8s: (developed by Canonical)
• Minikube: a tool for running Kubernetes clusters locally on a developer's machine. While not suitable
for production use, Minikube provides a quick and convenient way for developers to set up and experiment with
Kubernetes in their local environment.
• KubeVirt
• OpenShift (developed by Red Hat)
Kubernetes installers
1. kubeadm:
Kubeadm is a command-line tool that helps you bootstrap Kubernetes clusters.
It is part of the official Kubernetes project and is widely used for setting up clusters manually.
Kubeadm is suitable for users who want more control over the installation process.
2. Minikube:
Minikube is a tool that allows you to run a single-node Kubernetes cluster on your local machine.
It's useful for development and testing purposes, providing a lightweight and easy-to-use Kubernetes
environment.
Clusterless Container Services
1. AWS Fargate:
AWS Fargate is a serverless container orchestration service provided by Amazon Web Services
(AWS). With Fargate, you don't need to manage the underlying infrastructure or clusters; you
can run containers directly without provisioning or configuring virtual machines.
Deployments of Kubernetes
2. Prepare Infrastructure:
• Ensure that the infrastructure meets the Kubernetes system requirements.
• Allocate sufficient resources for nodes, including CPU, memory, and storage.
6. Network Setup:
• Choose a networking solution compatible with your Kubernetes deployment.
• Popular choices include Calico, Flannel, and Weave.
Deployments of Kubernetes
7. Monitor and Maintain:
• Implement monitoring solutions like Grafana.
• Regularly update and patch Kubernetes components and worker nodes.
Pods
• A pod ends when it is deleted, when it is evicted for lack of resources, or when its node fails.
• If a pod is scheduled to a node that fails, or if the scheduling operation itself fails, the pod is
deleted.
• If a node dies, the pods scheduled to that node are scheduled for deletion after a timeout.
• A given pod (UID) is not rescheduled to a new node; instead it is replaced by an identical
pod, with even the same name if desired, but with a new UID.
• A volume in a pod exists as long as that pod (with that UID) exists. If the pod is deleted for any
reason, the volume is also destroyed and created anew on the new pod.
• A controller can create and manage multiple pods, handling replication and rollout and providing
self-healing capabilities.
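The self-healing behaviour described above can be seen with a Deployment, a controller that keeps a set of identical pods running. A minimal sketch (the name, label, and replica count are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment     # hypothetical example name
spec:
  replicas: 2                # the controller keeps 2 pods running;
  selector:                  # a crashed or deleted pod is replaced automatically
    matchLabels:
      app: nginx
  template:                  # pod template used to create replacements
    metadata:
      labels:
        app: nginx           # must match the selector above
    spec:
      containers:
      - name: nginx
        image: nginx:latest
```

Deleting one of the Deployment's pods causes the controller to create a new pod (with a new UID) to restore the desired count.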
Replica Sets
• A ReplicaSet is a higher-level abstraction in Kubernetes that ensures a specified number of replicas (copies)
of a pod are running at all times. It helps maintain the desired number of identical pod instances to ensure
high availability and scalability.
1. Purpose:
The primary purpose of a ReplicaSet is to maintain a specified number of pod replicas running at all times.
If any of the pods fail or are deleted, the ReplicaSet automatically creates new ones to replace them,
ensuring the desired number is always met.
2. Selectors:
ReplicaSets use label selectors to identify and manage the pods they are responsible for. When creating a
ReplicaSet, you define a set of labels, and the ReplicaSet ensures that the pods it manages have these
labels.
3. Pod Template:
A ReplicaSet includes a pod template that specifies the characteristics of the pods it should create and
maintain. This template includes details such as the container image, resource requirements, environment
variables, and any other settings necessary for the pod.
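Putting the three parts together (replica count, label selector, and pod template), a minimal ReplicaSet manifest might look like this (the names and labels are illustrative):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: example-replicaset   # hypothetical example name
spec:
  replicas: 3                # desired number of identical pods
  selector:
    matchLabels:
      app: web               # label selector: which pods this ReplicaSet owns
  template:                  # pod template for new replicas
    metadata:
      labels:
        app: web             # must match the selector above
    spec:
      containers:
      - name: nginx-container
        image: nginx:latest
```

If one of the three pods fails or is deleted, the ReplicaSet creates a replacement so the observed count matches `replicas` again.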
Replica Sets
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: nginx-container
    image: nginx:latest
    ports:
    - containerPort: 80
Resource Manifest in YAML format
• apiVersion: Specifies the API version of the Kubernetes resource being defined. In this case, it's a Pod using
the "v1" version.
• kind: Specifies the type of resource being defined. Here, it's a Pod.
• metadata: Provides identifying information for the resource, such as its name ("example-pod").
• spec: Specifies the desired state of the resource. In the case of a Pod, it includes information about the
containers to run.
In this example, the manifest defines a Pod named "example-pod" running a single container named "nginx-container"
based on the "nginx:latest" Docker image. The container exposes port 80.
You can apply this manifest using the kubectl apply command. For instance (assuming the manifest is saved as example-pod.yaml):
kubectl apply -f example-pod.yaml
Submitting Jobs to Kubernetes
4. Clean Up (Optional):
If needed, you can delete the Job and its associated resources once it has completed:
kubectl delete job example-job
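For context, the example-job deleted above could have been submitted with a manifest like the following (the container image, command, and retry settings are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  template:
    spec:
      containers:
      - name: worker
        image: busybox:latest
        command: ["sh", "-c", "echo processing; sleep 5"]  # illustrative workload
      restartPolicy: Never   # Job pods must use Never or OnFailure
  backoffLimit: 4            # retries before the Job is marked failed
```

Applying this with kubectl apply runs the pod to completion; the Job tracks successful completions rather than keeping pods running indefinitely.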
Kubernetes Volume Management
1. EmptyDir Volume:
•An EmptyDir volume is created when a pod is assigned to a node and exists as long as the pod is running on
that node. It is initially empty but can be used to share files between containers within the same pod.
volumes:
- name: temp-volume
  emptyDir: {}
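A complete pod using an emptyDir volume mounts it into each container with volumeMounts; here two containers share the same scratch directory (the pod name, container names, and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-scratch-pod   # hypothetical example name
spec:
  containers:
  - name: writer
    image: busybox:latest
    command: ["sh", "-c", "echo hello > /scratch/msg; sleep 3600"]
    volumeMounts:
    - name: temp-volume
      mountPath: /scratch    # both containers see the same files here
  - name: reader
    image: busybox:latest
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: temp-volume
      mountPath: /scratch
  volumes:
  - name: temp-volume
    emptyDir: {}             # removed when the pod leaves the node
```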
2. HostPath Volume:
•HostPath allows a pod to use a file or directory from the host machine's filesystem. It is often used for sharing
files between the host and the pod.
volumes:
- name: host-volume
  hostPath:
    path: /path/on/host
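Likewise, a hostPath volume is mounted into a container via volumeMounts; a minimal sketch (the pod name and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-pod         # hypothetical example name
spec:
  containers:
  - name: app
    image: nginx:latest
    volumeMounts:
    - name: host-volume
      mountPath: /data       # where the host directory appears in the container
  volumes:
  - name: host-volume
    hostPath:
      path: /path/on/host    # directory on the node's filesystem
```

Because hostPath ties the pod to a specific node's filesystem, it is best limited to node-level use cases rather than general application storage.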
Steps to configure master and node
To connect a master node and a worker node in Kubernetes, you need to follow several steps.
1.Setup Kubernetes Cluster: Ensure that you have set up a Kubernetes cluster. This typically involves
installing Kubernetes on both the master and worker nodes.
5. Test Deployment:
•Deploy a simple application or workload to the Kubernetes cluster to ensure that it runs successfully
across both the master and worker nodes.
•Monitor the deployment to confirm that it's distributed across the cluster.