
Unit No-05: Virtualization & Elasticity in Cloud Computing

5.1 Elastic Resources:-

 Elastic resources in cloud computing refer to the ability to dynamically and automatically scale computing resources up or down based on the changing demands of an application or workload.
 This elasticity allows cloud users to efficiently allocate and de-allocate
resources as needed, which can help optimize performance, cost, and
resource utilization.
 Elasticity is a fundamental feature of cloud computing and is particularly
valuable in scenarios where workloads are unpredictable or vary over
time.
 It enables organizations to optimize their infrastructure in a cost-effective
and responsive manner, supporting the efficient use of cloud resources.

Here are a few key points that define elastic resources in cloud computing:

1. Scalability: Elastic resources allow you to easily scale your computing resources, such as virtual machines, storage, or network capacity, in response to fluctuations in workload or traffic. You can scale up to handle increased demand and scale down during periods of lower demand.

2. Automated: Elasticity is often achieved through automation. Cloud providers offer services and features that enable automatic provisioning and de-provisioning of resources, typically in response to predefined policies or triggers. This automation reduces the need for manual intervention (a policy sketch follows this list).

3. Cost Efficiency: Elasticity can lead to cost savings because you only pay for
the resources you use when you use them. When demand decreases, resources
are automatically scaled down, reducing costs. When demand increases,
resources are scaled up to meet the demand.

4. Performance Optimization: By automatically adjusting resources, you can maintain consistent performance levels during high traffic or workload periods. This prevents over-provisioning or under-provisioning, ensuring that your applications remain responsive.

5. Resilience: Elasticity can enhance the resilience of your applications by allowing resources to be added or removed in response to failures or disruptions. This ensures that your applications continue to operate even when some resources fail.
6. Resource Types: Elastic resources can include virtual servers (e.g., Amazon
EC2 instances in AWS), databases, storage, load balancers, and more,
depending on the specific needs of your application.
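
As referenced in point 2, here is a minimal, illustrative sketch of attaching a target-tracking scaling policy to an EC2 Auto Scaling group using Python and boto3. The group and policy names are placeholders, and the group is assumed to already exist:

import boto3

autoscaling = boto3.client("autoscaling")

# Keep the group's average CPU utilization near 50%: it scales out when
# demand pushes CPU above the target and scales back in when demand
# drops, with no manual intervention.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",  # placeholder: a pre-existing group
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)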

5.2 Containers: Docker, Introduction to DevOps

 What are Containers?


 Containers are packages of software that contain all of the necessary
elements to run in any environment.
 In this way, containers virtualize the operating system and run anywhere,
from a private data center to the public cloud or even on a developer’s
personal laptop.
 From Gmail to YouTube to Search, everything at Google runs in containers. Containerization allows Google's development teams to move fast, deploy software efficiently, and operate at an unprecedented scale.

 Define Containers:-

Containers are lightweight packages of your application code together with dependencies such as specific versions of programming language runtimes and libraries required to run your software services.

 What are the benefits of containers?


 Separation of responsibility

Containerization provides a clear separation of responsibility, as developers focus on application logic and dependencies, while IT operations teams can focus on deployment and management instead of application details such as specific software versions and configurations.

 Workload portability

Containers can run virtually anywhere, greatly easing development and deployment: on Linux, Windows, and Mac operating systems; on virtual machines or on physical servers; on a developer's machine or in data centers on-premises; and of course, in the public cloud.
 Application isolation

Containers virtualize CPU, memory, storage, and network resources at the operating system level, providing developers with a view of the OS logically isolated from other applications.

 Docker

 Docker is a software platform that allows you to build, test, and deploy
applications quickly.
 Docker packages software into standardized units called containers that
have everything the software needs to run including libraries, system
tools, code, and runtime.
 With Docker, you can quickly deploy and scale applications into any environment and know your code will run. Running Docker on AWS provides developers and admins a highly reliable, low-cost way to build, ship, and run distributed applications at any scale.

 Docker architecture:-
 Docker uses a client-server architecture. The Docker client talks to the
Docker daemon, which does the heavy lifting of building, running, and
distributing your Docker containers.
 The Docker client and daemon can run on the same system, or you can
connect a Docker client to a remote Docker daemon.
 The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface. Another Docker client is Docker Compose, which lets you work with applications consisting of a set of containers.
 The Docker daemon

The Docker daemon (dockerd) listens for Docker API requests and manages
Docker objects such as images, containers, networks, and volumes. A daemon
can also communicate with other daemons to manage Docker services.

 The Docker client

The Docker client (docker) is the primary way that many Docker users interact
with Docker. When you use commands such as docker run, the client sends
these commands to dockerd, which carries them out. The docker command uses
the Docker API. The Docker client can communicate with more than one
daemon.
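
For example, a single docker run command is enough to start a containerized web server (the nginx image is used purely as an illustration):

$ docker run -d -p 8080:80 nginx

The client sends this request to dockerd, which pulls the image if it is not present, creates the container, and maps port 8080 on the host to port 80 in the container.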

 Docker Desktop

Docker Desktop is an easy-to-install application for your Mac, Windows, or Linux environment that enables you to build and share containerized applications and microservices. Docker Desktop includes the Docker daemon (dockerd), the Docker client (docker), Docker Compose, Docker Content Trust, Kubernetes, and Credential Helper. For more information, see Docker Desktop.

 Docker registries

A Docker registry stores Docker images. Docker Hub is a public registry that
anyone can use, and Docker looks for images on Docker Hub by default. You
can even run your own private registry.

When you use the docker pull or docker run commands, Docker pulls the
required images from your configured registry. When you use the docker
push command, Docker pushes your image to your configured registry.
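
For example, pulling a public image, re-tagging it, and pushing it to your own registry might look like this (the registry host and repository are placeholders):

$ docker pull ubuntu
$ docker tag ubuntu registry.example.com/myteam/ubuntu:latest
$ docker push registry.example.com/myteam/ubuntu:latest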

 Docker objects

When you use Docker, you are creating and using images, containers, networks,
volumes, plugins, and other objects. This section is a brief overview of some of
those objects.

 Images

An image is a read-only template with instructions for creating a Docker container. Often, an image is based on another image, with some additional customization. For example, you may build an image which is based on the ubuntu image, but installs the Apache web server and your application, as well as the configuration details needed to make your application run.

 Containers

A container is a runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI.

 Docker Uses:-

Docker is a versatile platform for containerization that has a wide range of uses
across different domains and industries.

Here are some of the primary use cases for Docker:

 Application Packaging: Docker is commonly used to package applications and their dependencies into containers. This makes it easy to create consistent environments for running applications across different systems and cloud providers.
 Development and Testing: Developers often use Docker for creating isolated development and testing environments (see the example after this list).
 Data Science and Machine Learning: Data scientists and
machine learning engineers use Docker to create reproducible and
consistent environments for developing, training, and deploying
machine learning models.
 Desktop Virtualization: Docker has been used for creating isolated desktop environments for developers or specific applications.
 Education and Training: Docker is used in educational settings to
provide students with a consistent development and testing
environment. It's also used in training programs.
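
As an example of the development-and-testing use case above, the following command starts a disposable, isolated Python environment; the container is removed as soon as the shell exits (the image tag is illustrative):

$ docker run -it --rm python:3.11 bash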

 Introduction to DevOps:
 DevOps is a set of practices, cultural philosophies, and tools that
aim to improve and streamline the collaboration between software
development (Dev) and IT operations (Ops) teams.
 The primary goal of DevOps is to shorten the software
development life cycle, enable more frequent software releases,
and improve the quality and reliability of software applications.
 DevOps is a set of principles, practices, and tools that bridge the
gap between software development and IT operations, with the
ultimate goal of delivering software faster, with higher quality, and
in a more collaborative and efficient manner.

Here's an introduction to DevOps:

1. Origin of DevOps:

DevOps emerged as a response to the traditional siloed approach in software development, where development teams and operations teams often worked in isolation, resulting in communication gaps, longer release cycles, and increased risk of errors. The need for faster, more efficient software delivery led to the development of DevOps practices.
2. Key Principles of DevOps:

DevOps is guided by several key principles:

- Collaboration: DevOps promotes collaboration and communication between development and operations teams. This helps in breaking down organizational silos and fostering a culture of shared responsibility.

- Automation: Automation is a core element of DevOps. It involves automating repetitive tasks, such as code builds, testing, and deployment, to increase efficiency and reduce human error.

- Continuous Integration (CI) and Continuous Deployment (CD): DevOps encourages CI/CD pipelines, which automate the integration of code changes, testing, and deployment to production. This allows for faster and more reliable releases.

- Monitoring and Feedback: Continuous monitoring and feedback loops help teams to identify issues and performance bottlenecks in real time, allowing for quick responses and improvements.

- Infrastructure as Code (IaC): DevOps practices often incorporate IaC, which means defining and managing infrastructure in a code-like manner. This ensures that infrastructure is consistent, version-controlled, and easily reproducible.

3. Benefits of DevOps:

DevOps offers several advantages, including:

- Faster Release Cycles: DevOps shortens the time between code development
and deployment, allowing organizations to release new features and updates
more frequently.
- Improved Quality: By automating testing and deployment, DevOps reduces
the likelihood of human errors and improves the overall quality of software.

- Enhanced Collaboration: DevOps fosters a culture of collaboration, where development and operations teams work together to achieve common goals.

- Greater Efficiency: Automation reduces manual and repetitive tasks, making processes more efficient and cost-effective.

- Increased Stability and Reliability: Continuous monitoring and feedback loops help identify issues early, ensuring more stable and reliable systems.

- Scalability: DevOps practices are well-suited for scalable and cloud-native applications, making it easier to manage resources and adapt to changing workloads.

4. DevOps Tools:

DevOps relies on a wide range of tools to facilitate automation, collaboration, and monitoring. Some popular DevOps tools include Jenkins, Git, Docker, Kubernetes, Ansible, Terraform, and various monitoring and logging solutions.

5.3 Container Registry:

 A container registry is a collection of repositories made to store container images. A container image is a file composed of multiple layers which can execute applications in a single instance.

 Public vs. private registries


 Public container registries are generally the faster and easier route when setting up a container registry.
 They are ideal for smaller teams that benefit most from incorporating standard, open-source images from public registries. Public registries are also generally easier to use; however, they may be less secure than private registries.
 A private container registry is set up by the organization using it. Private registries are either hosted or on-premises and are popular with larger organizations or enterprises that are more invested in their container registry. Having complete control over the registry in development gives an organization more freedom in how it chooses to manage it.
 There are two types of container registries: public and private.
 Public registries are commonly used by individuals or small teams that want to get up and running with their registry as quickly as possible. However, as an organization grows, more complex security issues can arise, such as patching, privacy, and access control.
 Private registries provide a way to incorporate security and privacy into
enterprise container image storage, either hosted remotely or on-premises.
These private registries often come with advanced security features and
technical support.

 Most cloud providers offer private image registry services: Google offers
the Google Container Registry, AWS provides Amazon Elastic Container
Registry (ECR), and Microsoft has the Azure Container Registry.
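
As a hedged illustration, authenticating Docker against a private Amazon ECR registry typically follows this documented flow (the region and account ID below are placeholders):

$ aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com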

5.4 Kubernetes in the Cloud:

Kubernetes (K8s) is a powerful open-source platform for automating deployment, scaling, and management of containerized applications. It's particularly effective in cloud environments for enabling seamless scalability, implementing CI/CD pipelines, and managing microservices architectures. Here's how Kubernetes plays a role in these areas:

1. Kubernetes for Cloud Scaling:

Kubernetes is designed for dynamic scaling in cloud environments, helping to manage workloads automatically and effectively:
 Auto-scaling: Kubernetes supports horizontal pod autoscaling, which means it can automatically increase or decrease the number of pod replicas based on CPU or custom metrics, helping manage traffic spikes without manual intervention (see the example command after this list).
 Cloud-Native Flexibility: It can be easily integrated with cloud platforms
like AWS, Google Cloud, or Azure to scale resources based on demand.
Kubernetes manages the scaling of applications and infrastructure in a
distributed manner.
 Multi-cloud Support: Kubernetes can span across multiple cloud
environments (hybrid cloud), which offers a more flexible approach to
resource management and scaling depending on traffic or performance
requirements.
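
As referenced above, a horizontal pod autoscaler can be created with a single kubectl command; the deployment name and thresholds below are placeholders:

$ kubectl autoscale deployment web --cpu-percent=60 --min=2 --max=10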

2. Kubernetes in CI/CD Pipelines:

CI/CD (Continuous Integration/Continuous Delivery) pipelines automate the process of deploying applications and managing updates. Kubernetes provides an ideal platform for implementing CI/CD pipelines:

 Integration with Tools: Kubernetes integrates seamlessly with DevOps tools like Jenkins, GitLab CI, Argo CD, and Tekton to automate the build, test, and deploy process.
 Rolling Updates and Canary Deployments: Kubernetes can handle rolling updates, allowing you to update a deployment without downtime, and supports canary deployments to gradually roll out new features (see the commands after this list).
 Declarative Infrastructure: Using Helm charts or Kubernetes manifests,
the desired state of an application can be declaratively defined, making
deployments reproducible and version-controlled in CI/CD pipelines.
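
For instance, a rolling update can be triggered and watched with the following commands (the deployment, container, and image names are placeholders):

$ kubectl set image deployment/web web=registry.example.com/web:v2
$ kubectl rollout status deployment/web

Kubernetes replaces pods incrementally, so the service keeps serving traffic throughout the update.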

3. Kubernetes for Microservices:

Kubernetes is particularly well-suited for deploying microservices-based architectures:

 Service Discovery and Load Balancing: Each microservice can be deployed as a separate pod, and Kubernetes provides built-in service discovery, load balancing, and communication between microservices using services and DNS-based routing (see the example after this list).
 Isolation and Scaling of Services: Kubernetes ensures each microservice
can be independently deployed, scaled, and updated. This allows for more
efficient resource usage and scalability per service.
 Resilience and Self-Healing: Kubernetes manages the lifecycle of
containers by automatically restarting failed containers, rescheduling
them, and providing failover mechanisms. This ensures high availability
for microservices.
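
As referenced in the first bullet, exposing a microservice for discovery by other services is a one-line operation; the names and ports below are placeholders:

$ kubectl expose deployment orders --port=80 --target-port=8080

Other pods in the cluster can then reach the service through its DNS name (e.g., http://orders).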

Benefits of Kubernetes in Cloud Scaling, Pipelines, and Microservices:

 High Availability: Kubernetes ensures applications stay up and running with minimal downtime.
 Scalability: Whether you need to scale your application up during high
demand or down during lulls, Kubernetes automates this process
efficiently.
 Efficiency: Kubernetes maximizes resource usage, optimizing cloud costs
by scaling based on real-time needs.
 Speed and Agility: With CI/CD pipelines, it allows for faster iteration and
deployment of features, especially in a microservices architecture where
services can be updated independently.

5.5 Hybrid and Multi-Cloud Kubernetes:

Hybrid and Multi-Cloud Kubernetes refers to the use of Kubernetes to manage and orchestrate containerized applications across multiple environments, whether on-premises, private clouds, or multiple public cloud providers. This architecture enables flexibility, resilience, and portability of applications, allowing organizations to avoid vendor lock-in and leverage the best of various cloud services.

1. Hybrid Cloud Kubernetes:

A hybrid cloud setup involves running applications both on-premises (data centers, private clouds) and on public cloud platforms (like AWS, Google Cloud, Azure). Kubernetes can manage workloads across these environments, ensuring a seamless experience as if everything were running in a single, unified cluster.

Key Characteristics of Hybrid Cloud Kubernetes:

 Consistency Across Environments: Kubernetes allows you to maintain consistency between on-prem and cloud-based workloads. You can deploy the same containers in both environments without worrying about compatibility.
 Workload Portability: Applications can be easily moved between on-
prem and cloud environments. For example, you can run your base
workloads on-prem and burst into the public cloud during peak times.
 Hybrid Connectivity: Kubernetes enables seamless communication
between services deployed in different environments. Tools like Istio
(service mesh) can help manage traffic across hybrid environments,
providing visibility and security policies.

Use Cases:

 Regulated Industries: Companies in industries with strict regulatory or data privacy requirements often use hybrid setups to keep sensitive data on-premises while utilizing the scalability of the cloud for other parts of their application.
 Disaster Recovery: On-prem systems can be backed up or failover to the
cloud in case of disasters.
 Gradual Cloud Migration: Organizations that are slowly migrating to the
cloud can run Kubernetes clusters on-premises and in the cloud, enabling
a phased transition.

2. Multi-Cloud Kubernetes:

Multi-cloud refers to deploying Kubernetes clusters across multiple public cloud providers (e.g., AWS, Azure, GCP). This approach allows organizations to avoid cloud vendor lock-in, optimize costs, and leverage the unique capabilities of different cloud providers.
Key Characteristics of Multi-Cloud Kubernetes:

 Cross-Cloud Portability: Kubernetes abstracts the underlying infrastructure, so applications can be moved between different cloud providers without modification.
 Vendor Agnosticism: Multi-cloud architecture prevents dependency on a
single cloud provider. This enables flexibility to switch between
providers based on cost, performance, or available features.
 Unified Management: Tools like Rancher, Anthos (Google Cloud), Azure Arc, or Red Hat OpenShift allow you to manage multiple Kubernetes clusters across different clouds from a single interface (a simple context-switching example follows this list).
 Optimizing Cloud Resources: By distributing workloads across different
cloud providers, organizations can choose the best cloud for specific
workloads. For instance, they might use GCP’s AI services while
leveraging AWS’s compute resources for general workloads.
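
Even without a dedicated multi-cluster platform, kubectl itself can target several clusters through contexts; the context names below are placeholders:

$ kubectl config get-contexts
$ kubectl config use-context gke-production
$ kubectl config use-context aws-staging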

Use Cases:

 Avoiding Vendor Lock-In: Organizations can avoid becoming dependent on a single cloud provider by distributing workloads across multiple clouds.
 Cost Optimization: Applications can be dynamically placed in the cloud
provider offering the best price/performance at any given time.
 High Availability: Multi-cloud setups provide better fault tolerance. Even
if one cloud provider experiences an outage, the other environments can
keep the application running.

Challenges and Considerations in Hybrid/Multi-Cloud Kubernetes:

Managing Kubernetes in hybrid and multi-cloud environments can be complex. Some of the common challenges include:

 Networking and Connectivity: Ensuring seamless communication between on-prem, private, and public clouds requires sophisticated networking solutions and sometimes service meshes like Istio.
 Data Consistency: Synchronizing data between clouds or between on-prem and cloud environments can be tricky and might require solutions like cloud storage replication, database clustering, or data fabrics.
 Security: Managing security across multiple clouds or hybrid
environments means ensuring consistent access control, encryption, and
compliance policies. You need to integrate security tools to work across
environments.
 Tooling and Management: Managing multiple Kubernetes clusters across
different clouds can become overwhelming without proper tools.
Solutions like KubeFed (Kubernetes Federation), Rancher, or OpenShift
offer features for unified control over hybrid/multi-cloud clusters.

Popular Tools for Hybrid and Multi-Cloud Kubernetes:

 Rancher: A popular multi-cluster Kubernetes management platform that supports both hybrid and multi-cloud deployments. Rancher allows unified management of clusters across environments.
 Google Anthos: Google’s platform to manage Kubernetes clusters in a
hybrid or multi-cloud environment. It allows Kubernetes clusters to run
on-prem, in Google Cloud, or other public clouds like AWS.
 Azure Arc: Microsoft Azure’s service that extends the Azure control
plane to on-prem and multi-cloud environments, allowing centralized
management of Kubernetes clusters.
 Red Hat OpenShift: An enterprise Kubernetes solution that can run on-prem, in private clouds, or across multiple public cloud providers, offering a unified management experience.
 KubeFed (Kubernetes Federation): A native Kubernetes project designed
to manage multiple clusters by enabling applications and resources to be
shared across clusters.

Advantages of Hybrid and Multi-Cloud Kubernetes:

 Flexibility: You can choose the best environment for each workload: on-prem for compliance-sensitive tasks, and the cloud for elasticity.
 Resilience: Multi-cloud architectures offer superior redundancy,
minimizing the risk of downtime due to cloud-specific outages.
 Cost Efficiency: Optimize your use of cloud resources based on cost,
performance, or geographical factors.

5.6 Running Kubernetes Locally with Docker Desktop and sklearn Flask:

Docker Desktop includes a standalone Kubernetes server and client, as well as Docker CLI integration that runs on your machine.

The Kubernetes server runs locally within your Docker instance, is not
configurable, and is a single-node cluster. It runs within a Docker container on
your local system, and is only for local testing.

Turning on Kubernetes allows you to deploy your workloads in parallel, on Kubernetes, Swarm, and as standalone containers. Turning the Kubernetes server on or off does not affect your other workloads.

Install and turn on Kubernetes


1. From the Docker Desktop Dashboard, select the Settings icon.
2. Select Kubernetes from the left sidebar.
3. Next to Enable Kubernetes, select the checkbox.
4. Select Apply & Restart to save the settings and then select Install to confirm.
This instantiates images required to run the Kubernetes server as containers, and
installs the /usr/local/bin/kubectl command on your machine.

Note

Docker Desktop does not upgrade your Kubernetes cluster automatically after a
new update. To upgrade your Kubernetes cluster to the latest version,
select Reset Kubernetes Cluster.

Use the kubectl command


Kubernetes integration provides the Kubernetes CLI command at /usr/local/bin/kubectl on Mac and at C:\Program Files\Docker\Docker\Resources\bin\kubectl.exe on Windows. This location may not be in your shell's PATH variable, so you may need to type the full path of the command or add it to the PATH.
If you have already installed kubectl and it is pointing to some other
environment, such as minikube or a GKE cluster, ensure you change the context
so that kubectl is pointing to docker-desktop:
$ kubectl config get-contexts
$ kubectl config use-context docker-desktop

Tip

Run the kubectl command in a CMD or PowerShell terminal; otherwise, kubectl config get-contexts may return an empty result.
If you are using a different terminal and this happens, you can try setting the KUBECONFIG environment variable to the location of the .kube/config file.
If you installed kubectl using Homebrew, or by some other method, and
experience conflicts, remove /usr/local/bin/kubectl.

You can test the command by listing the available nodes:

$ kubectl get nodes

NAME             STATUS   ROLES           AGE   VERSION
docker-desktop   Ready    control-plane   3h    v1.29.1
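
Deploy a sklearn Flask app

With the local cluster running, a minimal scikit-learn Flask service can be deployed to it. The sketch below is purely illustrative: the model is a toy trained at startup, and all file, deployment, and image names are placeholders (a real service would load a pre-trained, serialized model).

# app.py: serve predictions from a scikit-learn model over Flask
from flask import Flask, jsonify, request
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

app = Flask(__name__)

# Train a toy model at startup (illustrative only).
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]  # e.g. [5.1, 3.5, 1.4, 0.2]
    prediction = model.predict([features])[0]
    return jsonify({"class": int(prediction)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

Assuming a suitable Dockerfile (not shown) that installs flask and scikit-learn and runs app.py, the image can be built locally and deployed to the docker-desktop cluster. Because Docker Desktop's single-node cluster shares the local Docker image store, the image does not need to be pushed to a registry; a non-latest tag is used so Kubernetes does not try to pull it remotely:

$ docker build -t sklearn-flask:v1 .
$ kubectl create deployment sklearn-flask --image=sklearn-flask:v1
$ kubectl expose deployment sklearn-flask --type=NodePort --port=5000
$ kubectl get services sklearn-flask
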
Turn off and uninstall Kubernetes
To turn off Kubernetes in Docker Desktop:

1. From the Docker Desktop Dashboard, select the Settings icon.
2. Select Kubernetes from the left sidebar.
3. Next to Enable Kubernetes, clear the checkbox.
4. Select Apply & Restart to save the settings. This stops and removes Kubernetes containers, and also removes the /usr/local/bin/kubectl command.
