
Kubernetes Deployment

A Deployment in Kubernetes is a way to manage and scale applications automatically. It ensures that the
desired number of application instances are running, updates them safely, and manages their lifecycle.

Definition of Kubernetes

Kubernetes, often abbreviated as K8s, is an open-source platform designed to automate the deployment, scaling, and management of containerized applications.

It acts as an orchestrator, ensuring your applications run efficiently and reliably across
multiple environments, whether on-premises or in the cloud.

Key Benefits of Kubernetes:

 Scalability: Adjust resources dynamically based on traffic.
 Self-healing: Automatically restarts failed containers.
 Load Balancing: Distributes network traffic evenly to ensure application stability.
 Portability: Works seamlessly across various infrastructures.

[Figure: Visual representation of Kubernetes showing the relationship between Deployment, ReplicaSet, and Pods.]

Here’s an example YAML configuration to demonstrate the relationship between Deployment, ReplicaSet, and Pod in Kubernetes:

Deployment YAML

A Deployment manages ReplicaSets and ensures the desired number of Pods are running.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
  labels:
    app: example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80

How It Works:

1. Deployment: Ensures there are always 3 replicas of the application running.
2. ReplicaSet: Automatically created and managed by the Deployment. It ensures the Pods are up and running.
3. Pod: Each replica is a Pod, which runs the application container (in this case, nginx).
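To see this hierarchy on a live cluster (assuming the Deployment above has been applied), you can list all three objects by their shared label in one command:

kubectl get deployment,replicaset,pod -l app=example-app

The output should show one Deployment (example-deployment), one ReplicaSet with a generated hash suffix, and three Pods created from that ReplicaSet.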

Breaking Down the YAML:


 apiVersion and kind: Specify the resource type (Deployment here).
 replicas: Defines how many Pods you want.
 selector: Specifies the label used to identify the Pods managed by this Deployment.
 template: The Pod specification, which includes metadata and container details (image, port, etc.).

In Kubernetes, a rollback allows you to revert a Deployment to a previous version in case of errors or issues during updates.

Key Points about Rollbacks:

1. Detecting Failed Rollouts: If a Deployment update fails, Kubernetes marks the rollout as failed so you can roll back to the last successful version (the rollback itself is not triggered automatically by default).
2. Manually Triggered Rollback: You can roll back to a specific revision at any time.

Example Workflow for Rollback:

Step 1: Update Deployment


Update the Deployment to a new version:

kubectl set image deployment/example-deployment nginx=nginx:1.22

Step 2: Check Deployment History

View the history of revisions:

kubectl rollout history deployment/example-deployment

Example Output:

REVISION  CHANGE-CAUSE
1         kubectl create
2         kubectl set image deployment/example-deployment nginx=nginx:1.22

Step 3: Roll Back to a Previous Revision

Rollback to the previous version (e.g., revision 1):

kubectl rollout undo deployment/example-deployment --to-revision=1

Step 4: Check Rollback Status

Verify the rollback was successful:

kubectl get deployment example-deployment
kubectl describe deployment example-deployment

YAML Example for Deployment with Change Cause:



Add an annotation to track the reason for the change:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
  annotations:
    kubernetes.io/change-cause: "Updated to nginx version 1.22"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
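If you prefer not to edit the YAML, the same change-cause annotation can also be set imperatively after an update (a small sketch using the example-deployment above):

kubectl annotate deployment/example-deployment kubernetes.io/change-cause="Updated to nginx version 1.22"

The annotation then appears in the CHANGE-CAUSE column of kubectl rollout history.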
Key Commands:

 Pause Rollout: Prevent further changes temporarily.

kubectl rollout pause deployment/example-deployment

 Resume Rollout: Resume the paused update.

kubectl rollout resume deployment/example-deployment
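Pausing is useful when you want to batch several changes into a single rollout. A minimal sketch (the resource values here are illustrative):

kubectl rollout pause deployment/example-deployment
kubectl set image deployment/example-deployment nginx=nginx:1.22
kubectl set resources deployment/example-deployment -c nginx --limits=cpu=200m,memory=256Mi
kubectl rollout resume deployment/example-deployment

While the Deployment is paused, the changes are recorded but no new ReplicaSet is rolled out until you resume.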

Use Case of Kubernetes Deployment

Kubernetes Deployments are crucial for managing modern containerized applications. Below
are practical use cases that demonstrate their utility:

1. Scaling Applications

Scenario: An e-commerce site experiences high traffic during sales events.

 Solution: Use a Deployment to scale the number of application Pods dynamically based on traffic.
 Benefit: Ensures high availability and smooth user experience.
 Command:
 kubectl scale deployment ecommerce-app --replicas=10

2. Rolling Updates

Scenario: Updating a microservice to a new version without downtime.

 Solution: Use Deployment to perform a rolling update, gradually replacing old Pods
with new ones.
 Benefit: Avoids service interruptions.
 Command:
 kubectl set image deployment/api-service api-container=<image>:v2.0.1

3. Rollbacks

Scenario: A new update causes errors or crashes.

 Solution: Roll back the Deployment to the last stable version.
 Benefit: Quickly restore service reliability.
 Command:
 kubectl rollout undo deployment/api-service
4. High Availability

Scenario: Ensuring a web application remains available despite Pod failures.

 Solution: Deployment maintains the desired number of replicas by replacing failed Pods.
 Benefit: Self-healing ensures continuous availability.
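To see the self-healing behavior in action, you can delete one Pod and watch the Deployment's ReplicaSet replace it (an illustrative check using the example-app labels from the earlier Deployment; substitute a real Pod name from your cluster):

kubectl get pods -l app=example-app
kubectl delete pod <pod-name>
kubectl get pods -l app=example-app

A replacement Pod appears almost immediately, keeping the replica count at the desired number.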

5. Canary Deployments

Scenario: Test a new feature with a small group of users before full rollout.

 Solution: Create a separate Deployment for the new version with limited replicas.
 Benefit: Allows safe testing and feedback collection.
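A minimal sketch of the canary approach, assuming the stable version already runs under the label app: example-app and a Service selects on that label (the names, labels, and image tags here are hypothetical):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app-canary
spec:
  replicas: 1                      # small fraction of the total Pods
  selector:
    matchLabels:
      app: example-app
      track: canary
  template:
    metadata:
      labels:
        app: example-app           # matched by the existing Service
        track: canary              # distinguishes canary Pods from stable ones
    spec:
      containers:
      - name: nginx
        image: nginx:1.22          # the new version under test

Because the Service selects only on app: example-app, a small share of live traffic reaches the canary Pod; if it misbehaves, deleting this Deployment removes the canary without touching the stable release.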

6. Portability Across Environments

Scenario: Deploying the same application in development, staging, and production environments.

 Solution: Use Deployment YAML files to define consistent configurations across environments.
 Benefit: Simplifies application management and reduces errors.
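For example, the same manifest can be applied unchanged to different namespaces or clusters (a sketch assuming dev and staging namespaces/contexts already exist):

kubectl apply -f deployment.yaml -n dev
kubectl --context=staging apply -f deployment.yaml

Only the target differs; the Deployment definition itself stays identical across environments.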

If your Deployment is stuck rolling out its ReplicaSet and never completes, here are some steps and points to troubleshoot and resolve the issue:

Troubleshooting Steps:

1. Check ReplicaSet Status:


o Run kubectl get rs <replicaset-name> to view the status of the
ReplicaSet. Look for errors or pods in Pending or CrashLoopBackOff state.
o If you see any issues, use kubectl describe rs <replicaset-name> to get
more detailed information about the ReplicaSet status.
2. Inspect Pod Logs:
o Use kubectl logs <pod-name> to check the logs of individual pods within
the ReplicaSet. Look for error messages that indicate why the pods aren't
starting properly.
o If the pods are failing, resolve the underlying issues (e.g., missing
environment variables, incorrect configurations, resource limits exceeded).
3. Verify Deployment Configuration:
o Ensure that the deployment YAML file (deployment.yaml) is correctly
configured. Double-check configurations like the number of replicas,
container images, resource limits, environment variables, and port mappings.
o Confirm that the container image exists and is accessible in the registry.
4. Inspect Resource Limits:
o Verify that there are sufficient CPU and memory resources available in the
cluster to support the requested replicas.
o If necessary, adjust resource requests and limits in the deployment YAML to
prevent resource exhaustion.
5. Network Issues:
o Ensure that network policies allow communication between pods, and that any
required services or endpoints are available.
o Check for network policies that could be preventing pods from reaching the
necessary resources.
6. Review Kubernetes Events:
o Use kubectl get events to review any recent events that might indicate
issues with the deployment. Events can provide insights into what's going
wrong at the cluster level.
7. Check Deployment Progress:
o Use kubectl rollout status deployment <deployment-name> to check if the deployment is making progress or if it’s stuck.
o If it’s stuck, run kubectl rollout undo deployment <deployment-name> to revert to a previous stable state.
8. Resource Allocation and Limits:
o Ensure that all necessary resources (CPU, memory, storage) are properly allocated and not exhausted.
o Review resource requests and limits for pods to prevent deployment issues due to resource constraints.
9. Rolling Back and Re-deploying:
o If the deployment fails repeatedly, consider rolling back to a previous working version or manually adjusting configurations before trying to redeploy.
o Use kubectl rollout undo deployment <deployment-name> to revert changes and then deploy again with adjustments.
10. Cluster Logs and Metrics:

 Review cluster-level logs and metrics to get insights into overall cluster health. Tools like kubectl top and monitoring dashboards like Grafana can be useful (see the example commands below).
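For example (assuming the metrics-server add-on is installed so kubectl top works):

kubectl get events --sort-by=.lastTimestamp
kubectl top nodes
kubectl top pods -l app=nginx

These quickly reveal recent warnings, node pressure, and Pods that are approaching their CPU or memory limits.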
Here’s an example of a Kubernetes Deployment YAML file for an Nginx server. This
Deployment will create multiple replicas of an Nginx container and expose it on a
specific port.

Example Nginx Deployment YAML:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3                  # Number of desired replicas
  selector:
    matchLabels:
      app: nginx
  template:                    # The pod template to be created by the Deployment
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest    # Nginx container image and tag
        ports:
        - containerPort: 80    # Port that the container exposes
        resources:             # CPU and memory resource requests and limits
          requests:
            cpu: "100m"
            memory: "128Mi"
          limits:
            cpu: "500m"
            memory: "256Mi"

Breakdown of the Nginx Deployment YAML:

1. apiVersion: Defines the API version of the Kubernetes resource (apps/v1 for
Deployments).
2. kind: Specifies the type of Kubernetes resource (Deployment in this case).
3. metadata:
o name: The name of the Deployment.
o labels: Key-value pairs used for identifying resources in selectors (app: nginx).
4. spec:
o replicas: The number of desired replicas of the Deployment. Setting it to 3 creates
three instances of the Nginx container.
o selector: A label selector to identify which pods belong to this Deployment.
o template: The pod specification that determines what a pod looks like when it is
created.
 metadata under template: Inherited from the Deployment, used to
identify pods.
 spec under template:
 containers: Array of container specifications in the pod.
 name: The name of the container.
 image: The Nginx Docker image.
 ports: Defines which ports the container will expose
(containerPort: 80 makes the Nginx server accessible on port
80).
 resources: Defines resource requests and limits for CPU and
memory for the container.

To deploy an Nginx application using the YAML provided, you will need to follow a series
of commands to create and manage the Kubernetes Deployment. Here's a step-by-step guide
along with explanations for each command:

Step-by-Step Commands for Deploying the Nginx Deployment YAML:

1. Create the Nginx Deployment YAML File:

o Save the YAML content provided in a file named nginx-deployment.yaml. You can use any text editor to create this file, for example:

nano nginx-deployment.yaml

YAML Content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3                  # Number of desired replicas
  selector:
    matchLabels:
      app: nginx
  template:                    # The pod template to be created by the Deployment
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest    # Nginx container image and tag
        ports:
        - containerPort: 80    # Port that the container exposes
        resources:             # CPU and memory resource requests and limits
          requests:
            cpu: "100m"
            memory: "128Mi"
          limits:
            cpu: "500m"
            memory: "256Mi"

2. Apply the Deployment YAML:

o Use the kubectl apply command to create the Deployment in your Kubernetes
cluster. This command reads the YAML file and applies it as a Deployment resource.
kubectl apply -f nginx-deployment.yaml

Explanation:

o kubectl: The command-line tool to interact with the Kubernetes cluster.


o apply: A sub-command to apply configurations to the cluster.
o -f nginx-deployment.yaml: Specifies the YAML file to apply.
3. Check the Deployment Status:
o To verify that the Deployment was created successfully and that the pods are
running as expected, use the kubectl get deployments command.

kubectl get deployments nginx-deployment

Output Example:

NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           10m

Explanation:

o kubectl get deployments: Fetches the list of Deployments in the cluster.
o nginx-deployment: The name of the Deployment we created.
o READY: Shows the number of pods that are ready to serve traffic.
o UP-TO-DATE: Indicates how many replicas have been updated to match the desired state.
o AVAILABLE: Number of replicas that are available (not in pending or failed state).
ha

4. Describe the Deployment:

o To get more detailed information about the Deployment, including its status, conditions, and the pods it manages, use the kubectl describe deployment command.

kubectl describe deployment nginx-deployment

Explanation:

o describe: Provides detailed information about the specified resource.


o deployment nginx-deployment: Specifies the Deployment whose details are to
be viewed.
5. View Pod Details:
o To see detailed information about the pods created by the Deployment, use the kubectl get pods command:

kubectl get pods -l app=nginx

Explanation:

o kubectl get pods: Fetches the list of pods in the cluster.


o -l app=nginx: Lists only the pods with the label app=nginx, which are managed
by the nginx-deployment.
6. View Pod Logs:

o To view the logs of the Nginx pods, use the kubectl logs command. Replace
<pod-name> with the actual name of the pod.

kubectl logs <pod-name>

Explanation:

o kubectl logs: Fetches the logs of the specified pod.


o <pod-name>: Replace with the name of any pod under the nginx-deployment.

7. Scale the Deployment:

o To change the number of replicas (increase or decrease), use the kubectl scale
command.

kubectl scale deployment nginx-deployment --replicas=5


Explanation:

o kubectl scale: Scales the specified Deployment.
o --replicas=5: Sets the desired number of replicas to 5.


8. Rolling Out Updates:

o If you need to update the Nginx Deployment (e.g., change the image or configuration), use kubectl set image to apply the change.

kubectl set image deployment/nginx-deployment nginx=nginx:1.21.1

Explanation:

o kubectl set image: Updates the specified Deployment's container images.
o deployment/nginx-deployment nginx=nginx:1.21.1: Updates the nginx container to version 1.21.1.
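After triggering the update, you can watch its progress with the rollout status command introduced in the troubleshooting section:

kubectl rollout status deployment/nginx-deployment

The command waits until the new ReplicaSet is fully rolled out, or reports an error if the rollout stalls.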

Replication Controller

A Replication Controller is a Kubernetes resource that ensures a specified number of pod replicas are running at all times. It automatically manages the deployment and rescheduling of pods to maintain the desired state even if some pods fail or are evicted. Here’s how to create, manage, and troubleshoot a Replication Controller.

Creating a Replication Controller

Below is an example of a Replication Controller YAML file that creates and manages
multiple replicas of a sample Nginx container.
Replication Controller YAML Example:

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
  labels:
    app: nginx
spec:
  replicas: 3                  # Number of desired replicas
  selector:
    app: nginx
  template:                    # The pod template to be created by the Replication Controller
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest    # Nginx container image and tag
        ports:
        - containerPort: 80    # Port that the container exposes
        resources:             # CPU and memory resource requests and limits
          requests:
            cpu: "100m"
            memory: "128Mi"
          limits:
            cpu: "500m"
            memory: "256Mi"

Explanation of the Replication Controller YAML:


1. apiVersion: Defines the API version (v1 for core resources in Kubernetes).
2. kind: Specifies the type of Kubernetes resource (ReplicationController).
3. metadata:
o name: The name of the Replication Controller.
o labels: Used for identifying resources, here it labels the pods with app: nginx.
4. spec:
o replicas: The number of desired pod replicas.
o selector: A label selector that matches pods managed by this Replication
Controller.
o template: The pod specification that determines what a pod looks like when
created.
 metadata under template: Labels pods with app: nginx.
 spec under template:
 containers: Defines the Nginx container to run.
 name: The name of the container.
 image: The Docker image to use (nginx:latest in this case).
 ports: Exposes the port 80 on the container.
 resources: Specifies resource requests and limits for the
container.
Managing a Replication Controller:

Once the Replication Controller is created, you can manage it using the following commands:

1. Create the Replication Controller:


o Save the YAML content provided in a file named nginx-rc.yaml.
o Apply the configuration using kubectl apply.

kubectl apply -f nginx-rc.yaml

2. Check the Replication Controller Status:


o To verify that the Replication Controller is correctly managing the pods, use the kubectl get rc command:

kubectl get rc nginx-rc

Output Example:

NAME       DESIRED   CURRENT   READY   AGE
nginx-rc   3         3         3       10m

3. Inspect Replication Controller Details:


o To get detailed information about the Replication Controller, use the kubectl describe rc command.

kubectl describe rc nginx-rc

4. View Pod Details:

o To see the pods managed by the Replication Controller, use kubectl get pods -l app=nginx.

kubectl get pods -l app=nginx



5. Scale the Replication Controller:


o To adjust the number of replicas, use the kubectl scale command.

kubectl scale rc nginx-rc --replicas=5

6. Rolling Out Updates:


o You can update the container image of the Replication Controller using kubectl set image.

kubectl set image rc/nginx-rc nginx=nginx:1.21.1

Note: a Replication Controller does not perform rolling updates on its own; existing pods keep the old image until they are recreated (see the sketch below).
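A minimal sketch of forcing the existing pods to pick up the new image (the Replication Controller immediately recreates the deleted pods from the updated template):

kubectl delete pods -l app=nginx
kubectl get pods -l app=nginx

This causes brief downtime for the deleted pods, which is one reason Deployments (with rolling updates) are generally preferred over bare Replication Controllers.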


Common Issues and Fixes:

 Pods Not Starting: Check the logs of individual pods using kubectl logs <pod-name> to
identify why they are not starting.
 Resource Constraints: Ensure that the resources (CPU, memory) allocated to the pods are
not exceeded.
 Network Issues: Verify network connectivity for the pods to communicate with each other
and with external services.

By using Replication Controllers, Kubernetes ensures that your applications remain highly
available, scalable, and resilient to failures.

ReplicaSet

A ReplicaSet is a Kubernetes resource that ensures a specified number of identical pod replicas are running at any given time. It replaces the older Replication Controllers and provides more flexibility with features like scaling and management of pods. A ReplicaSet is typically used when you want to run multiple identical pods of a containerized application. Here’s how to create, manage, and troubleshoot a ReplicaSet.

Creating a ReplicaSet

Below is an example of a ReplicaSet YAML file for creating and managing multiple replicas of a sample Nginx container.

ReplicaSet YAML Example:


apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs
  labels:
    app: nginx
spec:
  replicas: 3                  # Number of desired replicas
  selector:
    matchLabels:
      app: nginx
  template:                    # The pod template to be created by the ReplicaSet
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest    # Nginx container image and tag
        ports:
        - containerPort: 80    # Port that the container exposes
        resources:             # CPU and memory resource requests and limits
          requests:
            cpu: "100m"
            memory: "128Mi"
          limits:
            cpu: "500m"
            memory: "256Mi"

Explanation of the ReplicaSet YAML:

1. apiVersion: Specifies the API version (apps/v1 for ReplicaSets in Kubernetes).


2. kind: Defines the type of Kubernetes resource (ReplicaSet).
3. metadata:
o name: The name of the ReplicaSet.
o labels: Used for identifying resources, here it labels the pods with app: nginx.
4. spec:
o replicas: The number of desired pod replicas.
o selector: A label selector that matches pods managed by this ReplicaSet.
o template: The pod specification that determines what a pod looks like when
created.
 metadata under template: Labels pods with app: nginx.
 spec under template:
 containers: Defines the Nginx container to run.
 name: The name of the container.
 image: The Docker image to use (nginx:latest in this case).
 ports: Exposes the port 80 on the container.
 resources: Specifies resource requests and limits for the
container.

Managing a ReplicaSet:

Once the ReplicaSet is created, you can manage it using the following commands:

1. Create the ReplicaSet:

o Save the YAML content provided in a file named nginx-rs.yaml.
o Apply the configuration using kubectl apply.

kubectl apply -f nginx-rs.yaml

2. Check the ReplicaSet Status:


o To verify that the ReplicaSet is correctly managing the pods, use the kubectl get
rs command.

kubectl get rs nginx-rs

Output Example:

NAME       DESIRED   CURRENT   READY   AGE
nginx-rs   3         3         3       10m

3. Inspect ReplicaSet Details:


o To get detailed information about the ReplicaSet, use the kubectl describe rs
command.

kubectl describe rs nginx-rs

4. View Pod Details:


o To see the pods managed by the ReplicaSet, use kubectl get pods -l
app=nginx.
kubectl get pods -l app=nginx

5. Scale the ReplicaSet:


o To adjust the number of replicas, use the kubectl scale command.

kubectl scale rs nginx-rs --replicas=5

6. Rolling Out Updates:


o You can update the container image of the ReplicaSet using kubectl set image.

kubectl set image rs/nginx-rs nginx=nginx:1.21.1

Common Issues and Fixes:

 Pods Not Starting: Check the logs of individual pods using kubectl logs <pod-name> to
understand why they are not starting.
 Resource Constraints: Ensure that the resources (CPU, memory) allocated to the pods are
not exceeded.
 Network Issues: Verify network connectivity for the pods to communicate with each other
and with external services.

By using ReplicaSets, Kubernetes ensures that your applications remain highly available, scalable, and resilient to failures, similar to the functionality of Replication Controllers but with more flexibility and ease of use.
N

COMPARISON BETWEEN REPLICASET AND REPLICATION CONTROLLER:

Feature | Replication Controller | ReplicaSet
Purpose | Ensures a specified number of pod replicas are running at all times. | Advanced resource for managing identical pods, with more flexibility and automation.
API Version | v1 | apps/v1
Introduction | Introduced in Kubernetes 1.0 | Introduced in Kubernetes 1.2
Label Selector | Equality-based matching only | Supports equality-based (matchLabels) and set-based (matchExpressions: in, notin, exists, doesnotexist) matching
Rolling Updates | Manual updates, no automatic rolling updates | Automatic rolling updates (when managed by a Deployment)
Self-Healing | Pods are not automatically repaired if they fail | Automatically repairs failed pods
Resource Management | Limited resource management | Advanced resource management (requests, limits)
Scaling | Manual scaling; does not scale automatically | Automatically scales up or down
Complexity | Less complex, simpler configuration | More complex, supports a wider range of features
Deployment Strategy | Primarily used for static deployments | Suited for dynamic deployments, including rolling updates and scaling
Version Updates | Manual version updates | Supports automated version updates
Flexibility | Less flexible; primarily equality-based matching | More flexible; supports complex matching rules
Use Case | Best for straightforward deployments | Best for applications requiring dynamic deployment, scaling, and automated updates

Summary:

 Replication Controller is suitable for simpler, static applications that do not require advanced scaling or automated updates. It uses basic label selectors and manual scaling.
 ReplicaSet is designed for more complex applications that require automation, rolling updates, and self-healing capabilities. It supports more advanced label selectors and scaling features, making it more flexible for dynamic deployments.
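To illustrate the set-based selectors mentioned above, here is a minimal ReplicaSet sketch using matchExpressions (the name and labels shown are hypothetical):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs-setbased
spec:
  replicas: 2
  selector:
    matchExpressions:                  # set-based matching
    - key: app
      operator: In
      values: ["nginx", "nginx-canary"]
    - key: tier
      operator: Exists                 # pod must carry a 'tier' label, any value
  template:
    metadata:
      labels:
        app: nginx
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx:latest

A Replication Controller cannot express selectors like these; it only supports exact key/value (equality-based) matching.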
Here are some interview questions that you might encounter regarding Kubernetes and its
components like Deployments, Replication Controllers, and ReplicaSets:

Kubernetes Interview Questions:

1. Explain the difference between a Kubernetes Deployment, Replication Controller, and ReplicaSet.

o Discuss their purpose, how they manage the lifecycle of pods, and the advantages
and disadvantages of each.
o Talk about the benefits of using ReplicaSets over Replication Controllers, such as
scalability, ease of use, and more advanced features.
2. What is a Deployment in Kubernetes? How does it work?
o Describe what a Deployment is used for.
o Discuss how Deployments manage the rollout and rollback of applications.
o Explain how you can monitor the status and health of a Deployment.
3. Describe a scenario where you might choose to use a ReplicaSet over a
Replication Controller.
o Discuss specific use cases where ReplicaSets offer more flexibility or are better
suited.
4. How do you troubleshoot issues with a ReplicaSet or a Deployment in
Kubernetes?
o Walk through the steps you would take to diagnose and resolve common issues such
as pods not starting, deployment rollback, or resource constraints.
o Mention commands like kubectl describe, kubectl logs, kubectl get
events, and kubectl rollout status.
5. What are the key benefits of using ReplicaSets in Kubernetes?
o Discuss their role in maintaining a stable state, scaling, and replacing failed pods
automatically.
o Talk about how ReplicaSets can simplify management over individual pod
management.
6. How would you scale a Deployment or ReplicaSet in Kubernetes?
o Explain the commands you would use (kubectl scale, kubectl set image)
and the implications of scaling up or down.
o Discuss considerations around resource availability and pod placement.
7. What are the typical problems you may face when dealing with deployments in
Kubernetes?
o Discuss common issues like pods in CrashLoopBackOff, insufficient resources,
networking problems, and how to resolve them.
8. Explain how Kubernetes ensures high availability for applications.
o Discuss the role of Deployments, Replication Controllers, and ReplicaSets in ensuring
high availability.
o Mention pod distribution, load balancing, and how Kubernetes manages failover.
9. How do you monitor the health of a Kubernetes cluster and its resources?
o Talk about tools like kubectl top, Prometheus, and Grafana.
o Explain how monitoring can help in scaling applications and troubleshooting issues.
10. Explain the kubectl rollout status command and how it is used.
o Discuss its purpose in monitoring the deployment status.
o Describe how you would handle a stuck rollout and what kubectl rollout undo does.
