
Kubernetes and Container Orchestration

The document presents an overview of Kubernetes, an open-source container orchestration platform, detailing its functionalities such as automated deployment, scaling, and management of containerized applications. It discusses advanced scheduling and resource optimization techniques, including Horizontal and Vertical Pod Autoscaling, and highlights case studies from Google and Netflix that demonstrate the effectiveness of these strategies in improving resource utilization and reducing costs. The conclusion emphasizes the importance of integrating AI/ML-driven models and energy-efficient strategies for the future of container orchestration.

Uploaded by s.049.joshua
Copyright © All Rights Reserved

Kubernetes and Container Orchestration:
Exploring Advanced Scheduling and Resource Optimization Techniques

Presented by
- I010 Joshua D’silva
- I014 Jash Kathiria
What is Kubernetes?
01 Kubernetes (often abbreviated as K8s) is an open-source container orchestration platform developed by Google.
02 Automates the deployment, scaling, and management of containerized applications.
03 Think of Kubernetes as a system that ensures your applications run reliably and efficiently across a cluster of machines.
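As a minimal sketch of this declarative model (the name `web` and the image are illustrative placeholders, not from the source), a Deployment manifest asks Kubernetes to run and manage three replicas of a container:

```yaml
# Hypothetical Deployment: Kubernetes keeps 3 replicas of this
# container running, rescheduling them if a node fails.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # example image
          ports:
            - containerPort: 80
```

You declare the desired state; the control plane continuously reconciles the cluster toward it.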
What is Container Orchestration?
Automated management of containers: software packages that bundle an application and its dependencies.
01 Deploying containers
02 Scaling them up or down
03 Networking between containers
04 Health monitoring
05 Rolling updates and rollbacks
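Rolling updates can be expressed declaratively in a Deployment's update strategy; the fields below are standard `apps/v1`, while the values are illustrative:

```yaml
# Fragment of a Deployment spec: roll out new versions gradually,
# keeping the application available throughout the update.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 extra pod during the rollout
      maxUnavailable: 0    # never drop below the desired replica count
```

A rollback is the same mechanism in reverse: Kubernetes re-applies the previous revision of the pod template.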


Why not Virtual Machines?

                 Virtual Machines   Containers
Startup Time     Minutes            Seconds
Size             GBs                MBs
Resource Usage   High               Low


01 The Control Plane is responsible for managing the overall Kubernetes cluster. It makes global decisions about the cluster (e.g., scheduling applications, responding to cluster events).
02 Cluster: A Kubernetes cluster is composed of a Control Plane and one or more Nodes (worker machines).
03 Nodes are the physical or virtual machines where the workloads (pods) run. Each node contains several important components, such as the kubelet, kube-proxy, pods, and a container runtime (via the CRI).
04 The Cloud Provider API is used to interact with cloud services such as managing storage, networking, and scaling across cloud environments (if Kubernetes is deployed in the cloud).
How it Works
01 The kube-apiserver communicates with etcd to store and retrieve cluster state and configuration.
02 The scheduler selects nodes for pods based on available resources and other factors.
03 The kubelet on each node ensures the containers in pods are running correctly and reports back to the control plane.
04 kube-proxy routes network traffic to the correct pods across the cluster.
05 The cloud-controller-manager integrates the cluster with external cloud services for features like storage, networking, and monitoring.
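The scheduler's placement decision is driven by the pod spec itself. A sketch with illustrative names and values (not from the source):

```yaml
# The scheduler places this pod on a node that can satisfy its
# resource requests and matches its nodeSelector.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod            # hypothetical name
spec:
  nodeSelector:
    disktype: ssd           # only consider nodes labeled disktype=ssd
  containers:
    - name: app
      image: nginx:1.25     # example image
      resources:
        requests:
          cpu: "250m"       # scheduler reserves 0.25 CPU on the chosen node
          memory: "128Mi"
```

Once a node is chosen, the kubelet on that node pulls the image and starts the container.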
Resource Optimization Techniques:
Dynamic Resource Allocation
01 Adjusts resources (CPU, memory) based on real-time usage metrics.
02 Tools like Prometheus and Grafana provide insights into system performance.
03 Ensures optimal resource usage by reacting to changing demands.
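Dynamic allocation in Kubernetes builds on per-container requests and limits; the values below are illustrative:

```yaml
# Per-container requests (guaranteed share) and limits (hard cap);
# the scheduler and autoscalers work against these values.
resources:
  requests:
    cpu: "500m"
    memory: "256Mi"
  limits:
    cpu: "1"
    memory: "512Mi"
```

Requests determine where a pod can be scheduled; limits bound what it may consume at runtime.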
Horizontal Pod Autoscaling (HPA)
• Automatically adjusts the number of pod replicas based on resource consumption (e.g., CPU, memory).
• Ensures applications are scaled dynamically to meet fluctuating demand.
• Reduces under-provisioning and over-provisioning.
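A minimal HPA manifest, assuming a Deployment named `web` (the name and thresholds are illustrative):

```yaml
# Hypothetical HPA: scale the 'web' Deployment between 2 and 10
# replicas to hold average CPU utilization near 70%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Utilization here is measured relative to each pod's CPU request, so HPA depends on requests being set.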
Vertical Pod Autoscaling (VPA)
01 Modifies resource requests and limits for running pods based on actual consumption.
02 Prevents resource over-provisioning by adjusting pod resources in real time.
03 Improves resource utilization efficiency.
Case Study 1: Advanced Scheduling Techniques - Google’s Borg System

Google’s Borg System
Overview:
• Borg is an advanced scheduling platform for managing large-scale deployments across Google’s global infrastructure.
• It optimizes the use of available resources and balances workloads dynamically in real time.
Google’s Borg System
Problem
Inefficient default scheduling methods led to:
• Resource wastage
• Increased latency
• Unpredictable system performance
Google’s Borg System
Solution and Results

Solutions:
• Scheduling: Uses historical data to proactively predict resource needs.
• Resource Pooling: Dynamically pools resources to optimize workload distribution.
• Priority & Preemption: High-priority workloads can preempt lower-priority ones for optimal resource allocation.

Results:
• Improved Resource Utilization: Up to 80% better than traditional methods.
• Reduced Latency: Predictive scheduling led to a 30% reduction in average latency.
• Increased Cluster Efficiency: Better workload distribution allowed Google to run more services on the same infrastructure.
Case Study 2 - Netflix's Resource Optimization: Overview & Problem

Netflix's Kubernetes Resource Optimization
Overview:
• Netflix operates one of the largest microservices architectures globally, managing thousands of containers deployed on Kubernetes.
• Faced with scaling challenges and the need for cost optimization, Netflix adopted several resource optimization techniques to maintain high availability while reducing operational costs.
Netflix's Kubernetes Resource Optimization: Problem
1. Cloud Infrastructure Costs: Netflix needed to optimize its cloud usage to avoid overspending while ensuring high performance.
2. Scaling Demands: Netflix's infrastructure had to scale dynamically to meet the fluctuating demands of millions of users.
3. The challenge was ensuring efficient resource management without compromising performance during traffic peaks, such as during content releases or high-demand periods.
Netflix's Resource Optimization: Solution & Results

Solutions:
• Spot Instances for Non-Critical Workloads: Used spot instances for stateless, non-time-sensitive tasks (e.g., batch processing).
• Right-Sizing Workloads: Continuously monitored and adjusted resource allocation (CPU, memory) to match actual usage, reducing waste.
• Auto-Scaling: Leveraged Horizontal Pod Autoscaling (HPA) and the Cluster Autoscaler in Kubernetes to dynamically scale resources according to demand.

Results:
• 35% Reduction in Cloud Infrastructure Costs: By optimizing resource allocation, Netflix reduced costs while maintaining high availability and performance.
• Improved Efficiency: Resources were allocated more accurately, leading to better overall resource utilization and efficiency.
• High Availability: Despite the cost focus, Netflix maintained a seamless user experience, even during high-demand periods.
Conclusion:
01 Advanced scheduling techniques, like predictive scheduling, multi-objective optimization, and priority preemption, are crucial for managing large-scale, dynamic workloads efficiently.
02 Case studies like Google’s Borg System showcase the power of predictive models and resource pooling in improving resource utilization, reducing latency, and enhancing overall system efficiency.
03 Netflix’s case study highlights how dynamic scaling (Horizontal Pod Autoscaling) and cost-aware strategies resulted in a 35% reduction in cloud infrastructure costs while maintaining high service availability and user satisfaction.
04 As cloud-native applications continue to evolve, the integration of AI/ML-driven scheduling models, multi-cloud environments, and energy-efficient strategies will play a significant role in shaping the future of container orchestration.