Kubernetes_1735866835
CONCEPT
FUNDAMENTALS
What we Learn
Part 1
Why is Kubernetes Essential?
☞ What is Kubernetes?
Kubernetes is an open-source container orchestration platform that
automates deploying, scaling, and managing applications in containers.
History:
Kubernetes was initially developed by Google to solve the challenges of
managing containerized applications at scale. Google had an internal
system called Borg that handled container orchestration. When a group
of engineers started to work on open-sourcing a more streamlined,
scalable orchestration platform based on their learnings from Borg,
they code-named it "Project 7" 🚀.
Know your Kubernetes Components
Cluster:
A Kubernetes cluster is a collection of nodes (machines) where
Kubernetes manages and orchestrates containerized applications. A
cluster represents a Kubernetes deployment as a whole, combining
both worker nodes (where applications run) and a control plane (which
manages and monitors the cluster).
Control Plane:
The control plane is responsible for the cluster’s overall management.
The cluster’s control plane continuously monitors the actual state of
the applications and resources, adjusting to match the desired state.
This "desired state" is defined by the configurations (usually in YAML
files) you apply to Kubernetes.
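As a sketch, a desired state might be expressed like this (the Deployment name, labels, and nginx image below are illustrative, not from the original):

```yaml
# Desired state: three replicas of an nginx container.
# The control plane continuously reconciles the cluster toward this.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
```

If a Pod of this Deployment dies, the control plane notices the actual state (2 replicas) differs from the desired state (3) and creates a replacement.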
Nodes:
A node is a single machine (either a virtual machine or a physical server)
that runs in the Kubernetes cluster and hosts Pods. Nodes are the
workhorses of Kubernetes; they perform all the tasks needed to keep
applications running.
Types of Nodes:
Master Node (Control Plane Node): Manages the cluster and runs
control plane components (API server, etcd, scheduler, controller
manager).
Worker Node: Executes containerized applications and manages Pods.
Node Components:
➲ Kubelet: An agent that runs on each node, communicates with the
API server, and ensures that containers in Pods are running as
expected.
➲ Kube-proxy: Manages network rules and facilitates network
communication between services, ensuring seamless Pod-to-Pod and
Pod-to-external-traffic connections.
➲ Container Runtime: The software responsible for running the
containers on each node (Docker, containerd, etc.).
How Kubernetes Manages Containers
❤️ Self-Healing: If a container crashes or has an issue, Kubernetes
automatically detects it and replaces it with a new one.
🌐 Networking & Load Balancing: Kubernetes manages network
connections between containers so they can communicate. It also
distributes user requests across containers, balancing the load to prevent
any single container from being overwhelmed.
📜 Declarative Management: You define the "desired state" of your
application (like how many containers should be running, what version,
etc.) in a configuration file. Kubernetes continuously monitors and makes
sure the current state matches this desired state.
Pod: The Fundamental Building Block
✒ Lifecycle Management: Kubernetes manages Pods rather than individual
containers. This ensures containers in a Pod are always co-located, co-
scheduled, and run in a tightly coupled manner.
✒ Atomic Deployment Unit: If a Pod fails, Kubernetes does not repair it but
replaces it with a new Pod instance. Pods are designed to be ephemeral.
Components of a Pod
➡️ Containers: Most Pods run a single container, but you can run multiple
containers in a single Pod if they are tightly coupled.
➡️ Shared Network Namespace: Containers in the same Pod share the same
IP address and port space, so they can reach each other over localhost.
Pod Lifecycle
1. Pending: The Pod is created but not yet running. This happens while
Kubernetes schedules the Pod to a node.
2. Running: The Pod is successfully scheduled and all containers are
running or in the process of starting.
3. Succeeded: All containers in the Pod have terminated successfully (exit
code 0).
4. Failed: At least one container in the Pod has terminated with a non-zero
exit code.
5. Unknown: The state of the Pod cannot be determined.
Nodes and Clusters - Scaling and managing
workloads
Node contains:
Kubelet: Ensures that containers defined in the pod spec are running.
Kube-proxy: Manages networking and communication for the pods.
Container Runtime: Software to run containers (e.g. Docker, containerd).
Nodes can be worker nodes or control-plane nodes.
Scaling in Kubernetes
Kubernetes supports two types of scaling:
1. Horizontal Pod Autoscaling (HPA): Dynamically adjusts the number of
pod replicas for a deployment or replica set. It monitors metrics like CPU,
memory, or custom application metrics, and adds or removes pod replicas
based on thresholds.
2. Node Scaling: Adding or removing nodes to/from the cluster.
Managed manually or automatically using tools like Cluster Autoscaler.
Cluster Autoscaler integrates with cloud providers to add or remove
virtual machines dynamically.
Managing Workloads in Kubernetes
Workloads: Workloads are the applications or services running on
Kubernetes. They are defined using manifests (YAML or JSON files).
Types of Workloads:
Deployments: For stateless applications; supports scaling and updates.
StatefulSets: For stateful applications that require unique identities and
stable storage (e.g., databases).
DaemonSets: Ensures a copy of a pod runs on every node (e.g., log
collectors).
Jobs and CronJobs: For running one-time or scheduled tasks.
Other Features:
Load Balancing: Kubernetes ensures workloads are balanced across the
cluster using Services and Ingress.
Monitoring and Logging: Tools like Prometheus, Grafana, and ELK Stack
(Elasticsearch, Logstash, Kibana) help monitor workloads and log
activities.
High Availability and Resilience: Kubernetes automatically restarts failed
pods and reschedules them to healthy nodes.
Namespace in Kubernetes
How Does a Namespace Organize Resources in Kubernetes?
♦ Scoping Resources: Resources like Pods, Services, ConfigMaps, and
Secrets are created within a specific namespace. You can have a Pod named
app in the dev namespace and another Pod with the same name in the prod
namespace. Example:
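A minimal sketch of the same Pod name in two namespaces (the image is illustrative):

```yaml
# Two Pods may both be named "app" because they live in different namespaces.
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: dev
spec:
  containers:
  - name: app
    image: nginx:1.25
---
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: prod
spec:
  containers:
  - name: app
    image: nginx:1.25
```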
Services in Kubernetes: Exposing
applications to the world
Before getting started with Services in Kubernetes, the first question that
comes to mind is "Why do we need Services?". Suppose we have a website
with 3 pod replicas of the front end and 3 pod replicas of the back end.
These are the scenarios we have to tackle:
1. How would the front-end pods be able to access the backend pods?
2. If a front-end pod wants to access the backend, to which replica of
the backend will the request be redirected? Who makes this
decision?
3. As the IP addresses of the pods can change, who keeps track of the new
IP addresses and informs the front-end pods?
4. As the containers inside the pods are deployed in a private internal
network, which IP address will users use to access the front-end
pods?
Why Do We Need Services in Kubernetes?
Pods Are Ephemeral: Pods are temporary and can be destroyed or
recreated for reasons like scaling, updates, or failures. Each new Pod gets
a different IP address, making direct communication with Pods
unreliable.
Stable Communication: Services provide a consistent way to access the
Pods, regardless of changes in the underlying Pods or their IPs.
Load Balancing: Services distribute network traffic among multiple Pods.
Discovery: They simplify service discovery by acting as a single access
point to a group of Pods.
How Services Work Internally
Label Selector: The Service identifies the set of Pods it manages using label
selectors.
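A sketch of a Service selecting Pods by label (the names and ports are illustrative):

```yaml
# Routes traffic to all Pods labeled app=backend.
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend      # the label selector
  ports:
  - port: 80          # port exposed by the Service
    targetPort: 8080  # port the Pods listen on
```

Any Pod carrying the label app=backend, now or in the future, automatically becomes an endpoint of this Service.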
WORKING WITH
OBJECTS
What we Learn
Part 2
Understanding Kubernetes
YAML Files
Kubernetes has become a leading container orchestration platform, offering
scalability, resilience, and portability.
✍️ apiVersion :
The apiVersion field in a Kubernetes YAML file specifies the version of the
Kubernetes API that the resource adheres to. It ensures compatibility
between the YAML file and the Kubernetes cluster.
☞ Core API Group: Includes fundamental resources.
- Pods: apiVersion: v1
- Services: apiVersion: v1
- ConfigMaps: apiVersion: v1
- Secrets: apiVersion: v1
☞ Apps API Group: Used for managing workloads.
- Deployments: apiVersion: apps/v1
- DaemonSets: apiVersion: apps/v1
- StatefulSets: apiVersion: apps/v1
- ReplicaSets: apiVersion: apps/v1
✍️ kind :
The kind field defines the type of resource being created or modified. It
determines how Kubernetes interprets and manages the resource.
☞ Each kind has a specific purpose. For instance:
- Pod: Represents one or more containers running together.
- Service: Exposes a set of Pods as a network service.
- Deployment: Manages rolling updates for applications.
✍️ metadata :
The metadata field contains essential information about the resource, such
as its name, labels, and annotations. It helps identify and organize resources
within the cluster.
✍️ spec :
The spec field describes the desired state of the resource. It outlines the
configuration details and behavior of the resource. The structure and
content of the spec field vary depending on the resource kind.
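Putting the four fields together, a minimal manifest might look like this (the Pod name, label, and image are illustrative):

```yaml
apiVersion: v1           # API version the resource adheres to
kind: Pod                # type of resource
metadata:
  name: my-pod           # identifying information
  labels:
    app: demo
spec:                    # desired state of the resource
  containers:
  - name: demo
    image: nginx:1.25
    ports:
    - containerPort: 80
```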
Deployment - How to manage
application updates
A Deployment provides replication functionality with the help of ReplicaSets,
plus additional capabilities such as rolling out changes and rolling them
back.
2. Apply Updates Using kubectl: You can update the application by changing
the image tag in the Deployment file and applying it.
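As a sketch, bumping the image tag in a hypothetical my-app Deployment triggers a rolling update (names and tags are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:v2   # changed from my-app:v1 to roll out the update
```

Applying this file with kubectl apply -f deployment.yaml starts the rollout.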
3. Monitor the Update Progress: Use the following commands to observe the
update:
Check Deployment status: kubectl rollout status deployment/my-app
View update history: kubectl rollout history deployment/my-app
ReplicaSet: How does it
ensure high availability?
A ReplicaSet is a Kubernetes resource that ensures a specified number of
identical pod replicas are running at any given time.
☛ Key Features
1. Desired State Management: Maintains the desired number of replicas.
2. Automatic Recovery: Recreates pods that are deleted or fail.
3. Selectors: Matches pods using label selectors to manage them.
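These features can be seen in a minimal ReplicaSet manifest (the name, label, and image are illustrative):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs
spec:
  replicas: 3              # desired number of identical Pods
  selector:
    matchLabels:
      app: web             # manages any Pod carrying this label
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
```

If one of the three Pods is deleted, the ReplicaSet controller immediately creates a replacement to restore the desired count.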
ReplicationController
A ReplicationController is an older Kubernetes resource with a similar
purpose to ReplicaSets. It ensures that a specified number of pod replicas
are running at all times.
Differences between ReplicaSet and ReplicationController:
➣ ReplicaSet:
- Selectors: Supports set-based selectors (more flexible).
- Use Case: Used with Deployments for modern applications.
- Efficiency: More advanced and flexible.
➣ ReplicationController
- Selectors: Supports only equality-based selectors.
- Use Case: Considered legacy, replaced by ReplicaSet.
- Efficiency: Limited to basic replication tasks.
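The selector difference can be sketched with two fragments (labels are illustrative); the ReplicationController accepts only exact key-value matches, while a ReplicaSet also accepts set-based expressions:

```yaml
# ReplicationController: equality-based selector only
selector:
  app: web
---
# ReplicaSet: set-based selector via matchExpressions
selector:
  matchLabels:
    app: web
  matchExpressions:
  - key: tier
    operator: In
    values: [frontend, backend]
```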
StatefulSet in Kubernetes
Key Features of StatefulSets
♦ Stable Network Identity: Each pod in a StatefulSet gets a unique,
persistent hostname that follows the pattern
<statefulset-name>-<ordinal>.
♦ Stable Persistent Storage: Each pod is associated with its own persistent
volume (PV). Even if a pod is deleted or rescheduled, it retains its associated
data by reattaching the same PV.
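Both features appear in a minimal StatefulSet sketch (the db name, postgres image, and storage size are illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db          # headless Service providing the stable hostnames
  replicas: 3              # Pods are named db-0, db-1, db-2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: postgres:16
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:    # one PVC per Pod, reattached across reschedules
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```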
DaemonSets: Ensuring a pod
runs on each node
A DaemonSet is a Kubernetes resource that ensures a specific pod is running
on all or selected nodes in a cluster. This is particularly useful for running
background tasks, system monitoring, or log collection on every node.
Automatic Updates:
DaemonSets automatically maintain the desired pod specification on
nodes. If you update the DaemonSet, the pods on the nodes will be
updated accordingly.
Pod Scheduling:
When you create a DaemonSet, Kubernetes:
- Schedules the DaemonSet pod on all eligible nodes using the kube-
scheduler.
- Ensures these pods run even if there are changes to the node pool, such as
adding or removing nodes.
Immutable Management:
Each DaemonSet pod is managed as an individual unit, but Kubernetes
ensures that every eligible node runs exactly one instance of the pod.
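A minimal DaemonSet sketch for the log-collection use case mentioned above (the name and fluent-bit image are illustrative):

```yaml
# Runs one log-collector Pod on every eligible node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
      - name: collector
        image: fluent/fluent-bit:2.2   # illustrative log-collector image
```

When a new node joins the cluster, Kubernetes automatically schedules one instance of this Pod onto it.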
Jobs and CronJobs: How do they
help in running tasks?
Jobs:
A Job is a Kubernetes resource that runs a specific task to completion. It's
designed for one-time or on-demand tasks. Once the task finishes, the Job
and its Pods can be cleaned up, either manually or automatically (for
example, by setting ttlSecondsAfterFinished).
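A minimal Job sketch (the name, image, and command are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
spec:
  backoffLimit: 3            # retry failed Pods up to 3 times
  template:
    spec:
      restartPolicy: Never   # Job Pods must use Never or OnFailure
      containers:
      - name: migrate
        image: my-app:v2     # hypothetical image containing the task
        command: ["./migrate"]
```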
CronJobs:
A CronJob is a Kubernetes resource that schedules Jobs to run periodically
based on a specified schedule. It's like a time-based trigger that initiates
Jobs at predefined intervals.
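A minimal CronJob sketch using standard cron syntax (the name, image, and script are illustrative):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup
spec:
  schedule: "0 2 * * *"      # every day at 02:00
  jobTemplate:               # the Job created at each scheduled time
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: my-backup:v1    # hypothetical backup image
            command: ["./backup.sh"]
```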
How Jobs and CronJobs Work Together
1. CronJob Scheduling: The CronJob controller monitors the cluster for
CronJob objects. When a CronJob's schedule matches the current time,
it creates a new Job.
2. Job Execution: The Job controller creates Pods to execute the task
defined in the Job spec.
3. Task Completion: The Pods run the task and report their status to the Job
controller.
4. Job Completion: Once all Pods associated with the Job complete
successfully, the Job is considered finished.
5. Cleanup: The Job and its Pods can then be cleaned up, manually or
automatically via ttlSecondsAfterFinished.
By effectively utilizing Jobs and CronJobs, you can automate routine tasks,
optimize resource utilization, and ensure the reliability and efficiency of
your Kubernetes applications.
ConfigMaps - Managing
application configurations
A ConfigMap in Kubernetes is a key-value store that allows you to manage
application configuration data independently from the application code.
How ConfigMaps Help Manage Application Configurations:
✅ Decouples Configuration from Code: ConfigMaps store configuration
details outside of the application codebase, allowing you to:
Update configurations without rebuilding or redeploying the application.
Manage different configurations for different environments.
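As a sketch, a ConfigMap and a Pod consuming it as environment variables (all names, keys, and the image are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
  DB_HOST: "db.default.svc.cluster.local"
---
# Consuming the ConfigMap as environment variables in a Pod
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: my-app:v1       # hypothetical application image
    envFrom:
    - configMapRef:
        name: app-config   # all keys become environment variables
```

Changing the ConfigMap and restarting the Pod updates the configuration without rebuilding the image.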
WORKING WITH
OBJECTS
What we Learn
Secrets
Ingress Controller
Storage in Kubernetes
RBAC Security Access
Network Policies
Service Discovery
Editing Pods And Deployment
Part 3
Secrets - Handling sensitive
information securely
Every software application is guaranteed to have some secret data. This
secret data can range from database credentials to TLS certificates or
access tokens to establish secure connections.
The platform you build your application on should provide a secure means
for managing this secret data. This is why Kubernetes provides an object
called Secret to store sensitive data you might otherwise put in a Pod
specification or your application container image.
You create Secrets outside of Pods — you create a Secret before any Pod
can use it.
When you create a Secret, it is stored inside the Kubernetes data store
(i.e., an etcd database) on the Kubernetes Control Plane.
When creating a Secret, you specify the data and/or stringData fields.
The values for all the data field keys must be base64-encoded strings.
Suppose you don’t want to convert to base64. In that case, you can
choose to specify the stringData field instead, which accepts arbitrary
strings as values.
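The data/stringData distinction can be sketched in one Secret (the name and values are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  password: cGFzc3dvcmQ=   # must be base64-encoded ("password")
stringData:
  username: admin          # arbitrary string; encoded for you on write
```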
When creating Secrets, you are limited to a size of 1MB per Secret. This is
to discourage the creation of very large secrets that could exhaust the
kube-apiserver and kubelet memory.
Also, when creating Secrets, you can mark them as immutable with
immutable: true, preventing changes to the Secret data after creation.
Marking a Secret as immutable protects from accidental or unwanted
updates that could cause application outages.
After creating a Secret, you inject it into a Pod either by mounting it as
data volumes, exposing it as environment variables, or as
imagePullSecrets. You will learn more about this later in this article.
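The first two injection methods can be sketched in one Pod (the names, image, and mount path are illustrative, and the db-credentials Secret is assumed to exist):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: my-app:v1             # hypothetical image
    env:
    - name: DB_PASSWORD          # injected as an environment variable
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password
    volumeMounts:
    - name: creds                # also mounted as files under /etc/creds
      mountPath: /etc/creds
      readOnly: true
  volumes:
  - name: creds
    secret:
      secretName: db-credentials
```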
Basic authentication Secret: You use this Secret type to store
credentials needed for basic authentication. When using a basic
authentication Secret, the data field must contain at least one of the
following keys: username and password.
SSH authentication Secrets: You use this Secret type to store data used
in SSH authentication. When using an SSH authentication Secret, you
must specify an ssh-privatekey key-value pair in the data (or stringData)
field as the SSH credential to use.
TLS secrets: You use this Secret type to store a certificate and its
associated key typically used for TLS. When using a TLS secret, you must
provide the tls.key and the tls.crt key in the configuration’s data (or
stringData) field.
Bootstrap token Secrets: You use this Secret type to store bootstrap
token data during the node bootstrap process. You typically create a
bootstrap token Secret in the kube-system namespace and name it in
the form bootstrap-token-<token-id>.
Best Practices for Handling Secrets in Kubernetes
Ingress Controllers -
Managing external access
An Ingress Controller is a Kubernetes component responsible for managing
external access to the services running inside a Kubernetes cluster. It works
in conjunction with Kubernetes Ingress resources, which define rules for
routing traffic to services.
While Kubernetes provides services like NodePort and LoadBalancer for
exposing services, these methods have limitations in complex scenarios.
Ingress Controllers overcome these limitations by providing advanced
traffic management capabilities such as URL-based routing, SSL termination,
and load balancing.
✅ Key Components
❷ URL-Based Routing
Enables routing based on:
Hostnames (e.g., example.com, api.example.com).
Paths (e.g., /api, /app).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
      - path: /app
        pathType: Prefix
        backend:
          service:
            name: app-service
            port:
              number: 80
❸ Load Balancing
Distributes incoming requests across multiple instances of backend
services, ensuring high availability and better performance.
❹ Secure Connections (TLS/SSL Termination)
Supports HTTPS traffic by handling TLS termination.
Allows you to define certificates for secure communication.
spec:
  tls:
  - hosts:
    - example.com
    secretName: tls-secret
❺ Path Rewriting
Modifies request paths before forwarding them to backend services, if
needed.
❻ Advanced Features
Authentication and Authorization.
Rate limiting to prevent abuse.
Web Application Firewall (WAF) for additional security.
Flow of Traffic
DNS: A domain name resolves to the Ingress Controller's public IP.
Ingress Controller: Receives incoming traffic and matches it against defined
rules in Ingress resources.
Backend Service: Traffic is forwarded to the appropriate service and its
pods.
Advantages
Simplified Configuration: Single entry point for multiple services.
Cost-Effective: Reduces dependency on multiple external
LoadBalancers.
Enhanced Flexibility: URL-based routing, SSL termination, and custom
rules.
Scalability: Works with Kubernetes' auto-scaling capabilities.
✅ Ingress Controller:
- Manages external HTTP/HTTPS traffic and routes it to the appropriate
backend services.
- Focuses on layer 7 (HTTP/HTTPS) traffic and advanced routing rules (host-
based, path-based).
- Reads Kubernetes Ingress resources to configure a load balancer or proxy
server like NGINX, Traefik, etc.
- Advanced routing, SSL termination, and load balancing for HTTP/HTTPS
traffic.
- Supports host-based and path-based routing (e.g., api.example.com, /app).
- Handles SSL/TLS termination and certificates at the ingress level.
- Provides intelligent load balancing at the HTTP/HTTPS level.
✅ Kubernetes Services:
- Provides networking and communication between pods, within the cluster,
and optionally external traffic.
- Covers networking at both layer 4 (TCP/UDP) and optionally layer 7 for
simple external exposure.
- Directly maps traffic to pods using mechanisms like ClusterIP, NodePort,
and LoadBalancer.
- Internal service discovery, exposing services to external users (basic), and
load balancing.
- Defined by a Service resource with types like ClusterIP, NodePort, or
LoadBalancer.
- No routing logic; forwards all traffic to a backend service or pod.
- Basic layer 4 (TCP/UDP) load balancing across pods.
Storage in Kubernetes -
Persistent Volumes
In Kubernetes, storage refers to how data is stored, accessed, and managed
across the cluster. Applications running in containers often require
persistent or temporary storage to store data. Kubernetes provides various
abstractions and mechanisms to manage storage efficiently, enabling
applications to store and retrieve data seamlessly, even if the container is
terminated or rescheduled.
Types of Storage in Kubernetes
- Ephemeral Storage:
Data exists only as long as the pod or container is running.
Suitable for caching, logs, or temporary data.
Example: emptyDir, configMap, secret.
- Persistent Storage:
Data persists beyond the lifecycle of a pod.
Ideal for databases or critical application data.
Managed using Persistent Volumes (PVs) and Persistent Volume Claims
(PVCs).
Key Components of PVs
Persistent Volume (PV):
A cluster-level resource that represents physical or virtual storage.
Defined and provisioned by the administrator.
Offers storage from a variety of sources (local disks, NFS, cloud storage,
etc.).
Provisioning:
Static Provisioning: The administrator manually creates PVs with specific
storage configurations.
Dynamic Provisioning: Kubernetes automatically provisions PVs using a
storage class when a PVC is created.
Binding:
PVCs are matched to PVs based on the requested size, access modes,
and storage class.
Once bound, a PV can be used exclusively by the PVC.
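As a sketch, a statically provisioned PV and the PVC that binds to it (the NFS server, path, names, and size are illustrative):

```yaml
# PV created by an administrator (static provisioning)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-data
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:                          # illustrative NFS backend
    server: nfs.example.com
    path: /exports/data
---
# Claim that binds to a PV matching the requested size and access mode
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

Pods then reference data-claim by name in their volumes section, without needing to know anything about the underlying storage.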
Using the Volume:
Reclaiming:
When a PVC is deleted, the reclaim policy of the PV determines the next
step:
Retain: Data remains intact for manual recovery.
Recycle: Data is wiped, and the PV is made available again.
Delete: The underlying storage is deleted.
RBAC (Role-Based Access
Control) - Securing access
Role-Based Access Control (RBAC) in Kubernetes is a mechanism that allows
you to define and enforce permissions for users, groups, or service accounts
to access and perform actions on Kubernetes resources. RBAC is a key
feature for securing a Kubernetes cluster by ensuring that only authorized
users can perform specific actions on resources.
ClusterRole:
Similar to a Role but applies cluster-wide and can be used for resources
outside a specific namespace (e.g., nodes, namespaces).
RoleBinding:
Binds a Role to a user, group, or service account within a specific
namespace.
ClusterRoleBinding:
Binds a ClusterRole to a user, group, or service account across the entire
cluster.
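A sketch of a Role and its RoleBinding (the dev namespace and the user name jane are hypothetical):

```yaml
# Role allowing read-only access to Pods in the dev namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
- apiGroups: [""]              # "" = core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# Grant the Role to a user within the same namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: User
  name: jane                   # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```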
How RBAC Secures Access
Granular Permissions:
You can assign specific permissions for different users or applications,
limiting their actions to only what's required for their role.
Principle of Least Privilege:
Ensures that entities are only granted the minimal access necessary to
perform their functions, reducing the risk of accidental or malicious misuse.
Segregation of Duties:
By assigning different roles to different users or teams, RBAC prevents
unauthorized access and ensures accountability.
Auditability:
RBAC configuration can be audited, providing visibility into who has access
to what.
Dynamic Access Control:
As roles and responsibilities change, RBAC allows you to update permissions
dynamically without disrupting the cluster.
Network Policies - Controlling
pod-to-pod communication
In Kubernetes, network policies are a crucial component for controlling and
managing network traffic between pods. This section walks through the core
concepts and the steps needed to implement network policies in the cluster,
giving you granular control over communication within the cluster.
Deny All Ingress and Egress Policy
To start with, create a default policy that denies all ingress and egress
traffic unless explicitly allowed. This policy ensures that no unintended
communication can occur.

# Default deny all policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

Allow Specific Ingress Traffic
To allow specific ingress traffic, you need to create a network policy that
defines which pods can communicate with each other. Here's an example
that allows ingress traffic to pods labeled app=backend from pods labeled
app=frontend:

# Allow ingress from frontend to backend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend

ingress: Each NetworkPolicy may include a list of allowed ingress rules.
Each rule allows traffic which matches both the from and ports sections.
egress: Each NetworkPolicy may include a list of allowed egress rules.
Each rule allows traffic which matches both the to and ports sections.
Service Discovery - Connecting
services within the cluster
Service discovery in Kubernetes is a mechanism that allows applications to
find and communicate with each other without needing to hard code IP
addresses or endpoint configuration.
Key Components of Service Discovery
Kubernetes provides built-in support for service discovery through the use
of Services and DNS.
☛ A.) Client having API support:
Kubernetes' endpoints API is one method through which it supports service
discovery. Client applications can use the endpoints API to discover the IP
addresses and ports of the pods backing an application.
The Kubernetes control plane's etcd store serves as a service registry, where
all endpoints are registered and kept up to date by Kubernetes.
☛ B.) Client having no API support:
Not all clients support APIs, so Kubernetes also supports service discovery
through other methods.
A Kubernetes service object is a persistent endpoint that points to a
collection of pods depending on label selectors. It uses labels and selectors
to route requests to the backend pods.
Clients can utilize the Kubernetes service’s DNS name. Kubernetes’ internal
DNS manages service mapping.
The usage of DNS for name-to-IP mapping is optional; Kubernetes can also
provide the mapping through environment variables. The fundamental
implementation of a Kubernetes Service is handled by a kube-proxy
instance running on each worker node.
Editing Pods and
Deployments
You CANNOT edit specifications of an existing POD other than the below:
1. spec.containers[*].image
2. spec.initContainers[*].image
3. spec.activeDeadlineSeconds
4. spec.tolerations
For example you cannot edit the environment variables, service accounts,
resource limits of a running pod.
Edit Deployments
With Deployments you can easily edit any field/property of the POD
template. Since the pod template is a child of the deployment
specification, with every change the deployment will automatically delete
and create a new pod with the new changes. So if you are asked to edit a
property of a POD that is part of a deployment, you may do that simply by
running the command:
kubectl edit deployment my-deployment
BEST PRACTICES &
ECOSYSTEM
What we Learn
Blue-Green Deployments
Canary Deployments
Monitoring & Logging
Argo CD- GitOps made Simple
Helm Charts - Simplifying deployments
Security Best Practices
Troubleshooting - Handling issues
Part 4
Blue-Green
Deployments
Deploying new versions of applications is a crucial part of the development
cycle in the modern software world. Rolling out updates to production
environments is always critical and can be risky, as even small issues can
result in significant downtime and lost revenue.
When a new version of the application is ready to be deployed, it is
deployed to the green environment. Once the new version is deployed and
tested, traffic is switched to the green environment, making it the new
production environment.
At any point in time, only one environment (Blue or Green) handles live
traffic. After testing and verification, traffic is switched from Blue to Green.
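The traffic switch can be sketched as a Service whose selector names the live environment (the app name and version labels are illustrative):

```yaml
# The Service's selector determines which environment receives live traffic.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    version: blue    # change to "green" to cut traffic over
  ports:
  - port: 80
```

Editing the version label from blue to green (and back, if needed) is the entire cutover.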
How Blue-Green Deployment Works in Kubernetes?
Monitor the Green Environment:
Monitor the application behavior after switching traffic.
If there are any issues, you can switch back to the Blue environment by
updating the Service again.
Clean Up:
Once the Green version is stable, you can remove the Blue Deployment to
save resources.
Canary Deployments - Testing
changes in production
Canary Deployment in Kubernetes is a deployment strategy where a new
version of an application is gradually rolled out to a small subset of users
before rolling it out to everyone. It helps minimize risk by exposing the new
version to a limited audience and allows you to monitor its behavior and
performance before fully switching over.
It’s important to note that canary deployments are not available by default
in Kubernetes—they are not one of the deployment strategies in the
Deployment object. Therefore, to carry out canary deployments in
Kubernetes you will need some customization or the use of additional tools.
How Canary Deployment Works in Kubernetes?
1. Stable Deployment (Current Version)
Start with your existing application running in Kubernetes.
6. Full Rollout or Rollback
Full Rollout: Once the Canary version is stable, scale down the Stable
Deployment and scale up the Canary Deployment to handle 100% of
traffic.
Rollback: If issues arise, scale down or stop the Canary Deployment
entirely. The Stable Deployment will continue serving users.
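One common replica-ratio approach can be sketched as two Deployments sharing the label a Service selects on (all names, labels, and tags are illustrative; traffic splits roughly by replica count, here ~90/10):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-stable
spec:
  replicas: 9
  selector:
    matchLabels:
      app: my-app
      track: stable
  template:
    metadata:
      labels:
        app: my-app        # the Service selects on app=my-app
        track: stable
    spec:
      containers:
      - name: my-app
        image: my-app:v1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
      track: canary
  template:
    metadata:
      labels:
        app: my-app        # same Service label, so it receives a share of traffic
        track: canary
    spec:
      containers:
      - name: my-app
        image: my-app:v2
```

Scaling the canary up and the stable Deployment down shifts the ratio gradually toward the new version.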
This allows quick feedback from users, allowing developers to add new
features and deliver what the end-user needs. This helps improve the
software and the user experience.
Monitoring & Logging -
Essential tools and approaches
Kubernetes monitoring helps you identify issues and proactively manage
Kubernetes clusters. Effective monitoring for Kubernetes clusters makes it
easier to manage your containerized workloads, by tracking uptime,
utilization of cluster resources and interaction between cluster
components.
The following are popular monitoring tools designed for a containerized
environment.
What to Monitor
Cluster monitoring – Keeps track of the health of an entire Kubernetes
cluster. Helps you verify if nodes are functioning properly and at the
right capacity.
Pod monitoring – Keeps track of issues affecting individual pods, such
as resource utilization of the pod, application metrics of the pod.
Deployment metrics – When using Prometheus, you can monitor
Kubernetes Deployments, including cluster CPU, kube-state,
cAdvisor, and memory metrics.
Ingress metrics – Monitoring ingress traffic can help identify and
manage various issues.
Persistent storage – Kubernetes can surface volume health through
CSI drivers. You can also use the external health monitor controller
to watch for node failures.
Control plane metrics – You should monitor schedulers, API servers,
and controllers to track and visualize cluster performance for
troubleshooting purposes.
Node metrics – Monitoring CPU and memory for each Kubernetes node
can help ensure they never run out. Several conditions describe the
status of a running node, such as Ready, MemoryPressure,
DiskPressure, OutOfDisk, and NetworkUnavailable.
Argo CD - Kubernetes GitOps
Made Simple
Argo CD simplifies Kubernetes application deployments by automating
synchronization with Git, providing robust disaster recovery, and ensuring
consistency and visibility across clusters.
- By adopting GitOps principles with Argo CD, teams can achieve greater
efficiency, security, and reliability in their DevOps workflows.
- GitOps is a methodology for managing software infrastructure and
deployments using Git as the single source of truth.
𝐂𝐨𝐫𝐞 𝐂𝐨𝐧𝐜𝐞𝐩𝐭𝐬:
𝐆𝐢𝐭 𝐚𝐬 𝐭𝐡𝐞 𝐒𝐨𝐮𝐫𝐜𝐞 𝐨𝐟 𝐓𝐫𝐮𝐭𝐡: All configuration files, including deployments,
services, secrets, etc., are stored in a Git repository.
𝐀𝐮𝐭𝐨𝐦𝐚𝐭𝐞𝐝 𝐏𝐫𝐨𝐜𝐞𝐬𝐬𝐞𝐬: Argo CD detects changes in the Git repository and
automatically applies them to the Kubernetes cluster, ensuring that the
live infrastructure aligns with the desired state defined in Git.
𝐂𝐨𝐧𝐭𝐢𝐧𝐮𝐨𝐮𝐬 𝐃𝐞𝐩𝐥𝐨𝐲𝐦𝐞𝐧𝐭: Any changes made in the Git repository are
automatically reflected in the Kubernetes cluster, enabling continuous
deployment.
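These concepts come together in Argo CD's Application resource. A minimal sketch might look like this (the repository URL, paths, and names are hypothetical):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                  # illustrative
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app-config.git   # hypothetical repo
    targetRevision: main
    path: k8s                   # directory of manifests in the repo
  destination:
    server: https://kubernetes.default.svc                  # the local cluster
    namespace: my-app
  syncPolicy:
    automated:
      prune: true               # delete resources that were removed from Git
      selfHeal: true            # revert manual changes that drift from Git
```

With automated sync enabled, merging a commit into the watched directory is all it takes to change the cluster.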
CONCEPT
Helm Charts - Simplifying
Kubernetes deployments
Helm is a package manager for Kubernetes, much like apt for Ubuntu or
yum for CentOS. It simplifies the deployment and management of
Kubernetes applications by using Helm Charts, which are reusable
templates for Kubernetes resources.
Helm helps you manage Kubernetes applications — Helm Charts help you
define, install, and upgrade even the most complex Kubernetes application.
Charts are easy to create, version, share, and publish — so start using Helm
and stop the copy-and-paste.
CONCEPT
Components of a Helm Chart
Chart.yaml: Provides metadata about the chart (name, version).
Values.yaml: Contains default configuration values that can be
overridden during deployment.
Templates Directory: Contains the YAML templates for Kubernetes
resources like Pods, Services, ConfigMaps, etc.
Charts Directory: Used for dependencies, allowing you to bundle other
charts required by your application.
README.md (Optional): Documentation about the chart and its usage.
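Put together, a chart such as one scaffolded by `helm create` follows roughly this layout (the file names under `templates/` are just common defaults):

```text
mychart/
├── Chart.yaml        # chart metadata: name, version, appVersion
├── values.yaml       # default values, overridable at install time
├── charts/           # bundled dependency charts
├── templates/        # templated Kubernetes manifests
│   ├── deployment.yaml
│   └── service.yaml
└── README.md         # optional usage documentation
```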
CONCEPT
5. Simplifies Complex Deployments
For applications with multiple microservices, Helm Charts can streamline
the deployment by bundling all configurations into a single package. This
removes the need to manage numerous Kubernetes manifests individually.
6. Easy Upgrades and Rollbacks
Helm makes upgrading deployments simple by allowing you to update the
values or chart version. If the upgrade fails, Helm provides an easy way to
roll back to the last known working state.
7. Community Charts
The Helm ecosystem has a large repository of prebuilt charts for popular
applications like Nginx, MySQL, Jenkins, etc. You can use these charts as-is
or customize them according to your needs, saving time and effort.
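The upgrade-and-rollback workflow described above can be sketched with a few standard Helm commands (release and chart names are illustrative):

```shell
# Install a release from a local chart, overriding defaults
helm install myapp ./mychart -f custom-values.yaml

# Upgrade the release with a new image tag
helm upgrade myapp ./mychart --set image.tag=1.2.0

# Inspect revision history, then roll back to a known-good revision
helm history myapp
helm rollback myapp 1
```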
CONCEPT
Security Best Practices in
Kubernetes
The Kubernetes world is dynamic and complex, which makes securing it
challenging. As Kubernetes becomes a ready-to-go option in IT
infrastructure, it is also becoming an attractive target for attackers.
🌠 Configuration Management
Configuration management is another area where Kubernetes security risks
can arise. Misconfigurations can lead to security vulnerabilities, making
your Kubernetes deployments susceptible to attacks.
CONCEPT
🌠 Software Supply Chain Risks
Any Kubernetes deployment includes many software components: some
are part of the Kubernetes distribution itself, some are included in
container images, and some run within live containers. All of these
components can be a source of security risk.
🌠 Runtime Threats
Threats can affect nodes, pods, and containers at runtime. This makes
runtime detection and response a critical aspect of Kubernetes security.
It’s important to monitor Kubernetes deployments for suspicious activity
and respond quickly to potential security incidents.
🌠 Infrastructure Compromise
Kubernetes nodes run on physical or virtual computers, which can be
compromised by attackers if not properly secured. Network and storage
systems used by Kubernetes clusters are also vulnerable to attack.
Compromised Kubernetes infrastructure can lead to widespread disruption
of Kubernetes workloads, data loss, and exposure of sensitive information.
CONCEPT
Kubernetes Security Best Practices
1. Cluster Security
Enable Role-Based Access Control (RBAC): Define roles and permissions
for users and applications to ensure they only access what’s necessary.
𝑼𝒔𝒆 𝒌𝒖𝒃𝒆𝒄𝒕𝒍 𝒂𝒖𝒕𝒉 𝒄𝒂𝒏-𝒊 𝒕𝒐 𝒗𝒆𝒓𝒊𝒇𝒚 𝒑𝒆𝒓𝒎𝒊𝒔𝒔𝒊𝒐𝒏𝒔.
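As a minimal RBAC sketch (namespace, user, and role names are illustrative), a Role grants read access to Pods and a RoleBinding assigns it to a user:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
  - apiGroups: [""]             # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
  - kind: User
    name: jane                  # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Afterwards, `kubectl auth can-i list pods --namespace dev --as jane` should answer yes, while `kubectl auth can-i delete pods --namespace dev --as jane` should answer no.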
2. Workload Security
Use Namespaces for Segmentation: Isolate workloads by using
namespaces to group resources with similar security requirements.
Apply Pod Security Standards (PSS): Use policies (e.g., Pod Security
Admission) to enforce:
1. Non-root containers
2. Read-only root file systems
3. Minimal capabilities
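A minimal Pod spec satisfying those three points might look like this (the names and image are hypothetical; the namespace is assumed to carry a Pod Security Admission label such as pod-security.kubernetes.io/enforce=restricted):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-pod
  namespace: restricted-apps
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # hypothetical image
      securityContext:
        runAsNonRoot: true                  # 1. non-root container
        readOnlyRootFilesystem: true        # 2. read-only root filesystem
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]                     # 3. minimal capabilities
```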
CONCEPT
Limit Resource Usage: Set resource requests and limits to prevent DoS
attacks from overloading nodes.
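In a container spec this is a short fragment (the values here are only examples; tune them per workload):

```yaml
resources:
  requests:             # what the scheduler reserves for the container
    cpu: "250m"
    memory: "128Mi"
  limits:               # hard ceiling enforced at runtime
    cpu: "500m"
    memory: "256Mi"
```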
Image Security:
1. Use trusted base images.
2. Regularly scan container images for vulnerabilities using tools like Trivy
or Clair.
3. Network Security
Implement Network Policies: Use network policies to control traffic
flow to and from pods. This includes:
1. Whitelisting ingress and egress traffic.
2. Blocking unnecessary communication.
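A sketch of such a policy (labels, namespace, and port are illustrative): only frontend pods may reach the api pods on port 8080, and the api pods may only talk to the database:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-frontend      # illustrative
  namespace: backend
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes: ["Ingress", "Egress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: database
```

Note that network policies are only enforced when the cluster's CNI plugin supports them (e.g., Calico).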
4. Secrets Management
Store Secrets Securely: Use tools like HashiCorp Vault or AWS Secrets
Manager instead of Kubernetes Secrets when possible.
Encrypt Secrets at Rest: Enable encryption for Secrets in etcd by
configuring --encryption-provider-config.
Restrict Access to Secrets: Use RBAC to limit access to Secrets.
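A minimal EncryptionConfiguration for Secrets might look like this (the key is a placeholder; generate a real one, for example with `head -c 32 /dev/urandom | base64`, and never commit it to Git):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources: ["secrets"]
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <BASE64-ENCODED-32-BYTE-KEY>   # placeholder
      - identity: {}          # fallback so existing unencrypted data stays readable
```

The API server is then started with `--encryption-provider-config` pointing at this file; existing Secrets are re-encrypted only when they are rewritten.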
CONCEPT
5. Supply Chain Security
Image Provenance: Use tools like Notary to sign and verify images.
CI/CD Pipeline Security:
1. Use static and dynamic analysis tools.
2. Scan IaC (Infrastructure as Code) templates for misconfigurations.
6. Node Security
Harden Node Configurations:
1. Regularly update and patch node operating systems.
2. Disable unused services and ports.
Restrict Access to Node Filesystem: Prevent pods from accessing the
host filesystem unless necessary.
Use Seccomp and AppArmor Profiles: Implement Seccomp and
AppArmor for kernel-level security.
CONCEPT
Troubleshooting - Handling
common K8s issues
Kubernetes is the most popular container orchestration tool. It facilitates a
wide range of functionalities, such as scaling, self-healing, container
orchestration, storage, and secrets management. Troubleshooting in
Kubernetes is critical and challenging due to the complex, dynamic nature
of the Kubernetes architecture.
Dynamic Environment
Ephemeral Workloads: Pods are transient and can be created,
destroyed, or rescheduled dynamically. Troubleshooting issues is
challenging since logs and states can disappear when pods terminate.
Scaling and Auto-healing: Kubernetes automatically scales and replaces
resources. While this is beneficial, it can obscure the root cause by
quickly mitigating symptoms.
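A few standard kubectl commands help recover state from ephemeral workloads (pod and namespace names are illustrative):

```shell
# Logs from a pod's previous (crashed or restarted) container instance
kubectl logs mypod --previous

# Recent cluster events, useful after a pod was evicted or rescheduled
kubectl get events --sort-by=.metadata.creationTimestamp

# Detailed state, conditions, and events for a specific pod
kubectl describe pod mypod -n myns
```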
CONCEPT
Distributed Networking
Pod-to-Pod Communication: Kubernetes uses a complex networking
model involving overlays (e.g., CNI plugins like Calico, Flannel).
Networking issues can arise from misconfigurations, network policies, or
DNS failures.
External Traffic: Debugging issues with ingress controllers, load
balancers, or service mesh configurations adds additional layers of
complexity.
Shared Responsibility
Multiple Stakeholders: Kubernetes involves developers, DevOps
engineers, and infrastructure teams. A misconfiguration in one area
(e.g., manifests, resource quotas) can cause issues elsewhere.
Shared Resources: Clusters host multiple applications, often from
different teams. Resource contention or namespace conflicts can make
troubleshooting challenging.
Observability Challenges
Log Aggregation: Kubernetes logs are scattered across multiple
components (pods, nodes, and cluster services). Without centralized
logging, gathering relevant logs is cumbersome.
Limited Insights: Out-of-the-box tools like kubectl provide basic insights
but often lack depth, requiring third-party observability tools like
Prometheus, Grafana, or Fluentd for comprehensive monitoring.
CONCEPT
Scaling Adds Complexity
Large Clusters: As the number of nodes, pods, and namespaces grows,
so does the complexity of pinpointing an issue.
Cross-Cluster Issues: In multi-cluster setups, debugging issues related
to federation or inter-cluster communication can be daunting.
Security Implications
RBAC and Policies: Misconfigured Role-Based Access Control (RBAC)
rules or network policies can cause unexpected access issues or
application failures.
Multi-Tenancy: Ensuring security in multi-tenant clusters often leads to
complex configurations that are hard to troubleshoot.
CONCEPT