Kubernetes-03 12 23

Agenda

 Qwiklabs for Docker (more Docker commands)

 Microservice architecture

 Kubernetes Engine

 Kubernetes Architecture

 Kubernetes Pods

 Demo
Microservice architecture
 Microservices are an architectural style in which a single application is developed as a set of small services. Each service runs in its own process. The services communicate with clients, and often with each other, using lightweight protocols, often over messaging or HTTP.

 Microservices can be thought of as a form of service-oriented architecture wherein applications are built as a collection of different smaller services rather than one whole app.

 We can have several independent applications that run on their own. You can create them using different programming languages and even different platforms. You can structure big and complicated applications from simpler, independent programs that execute by themselves. These smaller programs are grouped to deliver all the functionalities of the big, monolithic app.
 Instead of large teams working on large, monolithic projects, smaller, more agile teams develop the services using the tools and frameworks they are most comfortable with.

 Each of the involved programs is independently versioned, executed, and scaled. These microservices can interact with other microservices and can have unique URLs or names while remaining available and consistent even when failures occur.

 We can scale services horizontally with technologies like Docker and Kubernetes.
 Benefits of microservices:

 1) Improved scalability. Because microservices let you independently scale services up or down, scaling is dramatically easier and cheaper than in a monolithic system.

 2) Better fault isolation. If one microservice fails, the others will likely continue to work. This is a key part of the microservices architectural design.

 3) Increased developer productivity. New developers can get up to speed rapidly, since it's easier to understand a small, isolated piece of functionality than an entire monolithic application.

 4) Smaller and more agile development teams. In modern software organizations, teams are often organized around the microservices they work on.
Kubernetes
 The birth of Kubernetes:

 Microservices also have drawbacks. When your system consists of only a small number of deployable components, managing those components is easy.

 It's easy to decide where to deploy each component, because there aren't that many choices.

 When the number of components increases, deployment-related decisions become increasingly difficult, because not only does the number of deployment combinations increase, but the number of inter-dependencies between the components increases by an even greater factor.
 Microservices perform their work together as a team, so they need to find and talk to each other.

 When deploying them, someone or something needs to configure all of them properly to enable them to work together as a single system.

 With increasing numbers of microservices, this becomes tedious and error-prone, especially when you consider what the ops/sysadmin teams need to do when a server fails.
 Google was the first company that realized it needed a much better way of deploying and managing its software components and its infrastructure to scale globally.

 Google runs hundreds of thousands of servers and has had to deal with
managing deployments on such a massive scale.

 Kubernetes is a software system that allows you to easily deploy and manage containerized applications.

 Kubernetes enables you to run your software applications on thousands of computer nodes as if all those nodes were a single computer.

 It abstracts away the underlying infrastructure and, by doing so, simplifies development, deployment, and management for both development and operations teams.

 Deploying applications through Kubernetes is always the same, whether your cluster contains only a couple of nodes or thousands of them. The size of the cluster makes no difference at all.
 A Kubernetes system is composed of a master node and any number of worker nodes.

 When the developer submits a list of apps to the master, Kubernetes deploys them to the cluster of worker nodes. Which node a component lands on doesn't (and shouldn't) matter, neither to the developer nor to the system administrator.

 The developer can specify that certain apps must run together, and Kubernetes will deploy them on the same worker node.

 Others will be spread around the cluster, but they can talk to each other in the same way, regardless of where they're deployed.
 It relieves application developers from having to implement certain infrastructure-related services in their apps, e.g., service discovery, scaling, load balancing, and self-healing.
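
For example, service discovery and load balancing come built in through a Service object. A minimal sketch (the name, selector, and ports here are assumptions, not from the original slides):

    apiVersion: v1
    kind: Service
    metadata:
      name: web-service        # other pods can reach this at http://web-service
    spec:
      selector:
        app: web-server        # traffic is load-balanced across matching pods
      ports:
      - port: 80
        targetPort: 8080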

 Kubernetes works in a cluster where we have a master node and multiple worker nodes.
 Master node: the master node is what controls the cluster and makes it function.

 The master consists of multiple components that can run on a single master node or be split across multiple nodes and replicated to ensure high availability:

 The Kubernetes API Server, which you and the other Control Plane components communicate with.

 The Scheduler, which schedules your apps (assigns a worker node to each deployable component of your application).

 The Controller Manager, which performs cluster-level functions, such as replicating components, keeping track of worker nodes, and handling node failures.

 etcd, a reliable distributed data store that persistently stores the cluster configuration.

 The components of the Control Plane hold and control the state of the cluster, but they don't run your applications. That is done by the (worker) nodes.
 Worker nodes: the worker nodes are the machines that run your containerized applications. The task of running, monitoring, and providing services to your applications is done by the following components:

 The container runtime, which runs your containers (e.g., Docker).

 The Kubelet, which talks to the API server and manages containers on its node.

 The Kubernetes Service Proxy (kube-proxy), which load-balances network traffic between application components.
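
To see these pieces in a running cluster, the following commands can help (output varies by cluster; the kube-system listing assumes a setup, such as kubeadm, where the Control Plane components run as pods):

    # List the master and worker nodes in the cluster
    kubectl get nodes

    # In many clusters the Control Plane components (API server, scheduler,
    # controller manager, etcd) run as pods in the kube-system namespace
    kubectl get pods -n kube-system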
 Responsibilities of Kubernetes:

 1) Keep the containers running. Once the application is running, Kubernetes continuously makes sure that the deployed state of the application always matches the description you provided. For example, if you specify that you always want five instances of a web server running, Kubernetes will always keep exactly five instances running.

 If one of those instances stops working properly, like when its process crashes or
when it stops responding, Kubernetes will restart it automatically.

 If a whole worker node dies or becomes inaccessible, Kubernetes will select new nodes for all the containers that were running on the node and run them on the newly selected nodes.
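
The desired state is typically described in a Deployment manifest. A minimal sketch of the five-instance web server example above (the names and image are assumptions):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-server
    spec:
      replicas: 5              # Kubernetes keeps exactly five instances running
      selector:
        matchLabels:
          app: web-server
      template:
        metadata:
          labels:
            app: web-server
        spec:
          containers:
          - name: web
            image: nginx:1.25  # example image; substitute your own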
 Scaling the number of copies: While the application is running, you can
decide you want to increase or decrease the number of copies, and
Kubernetes will spin up additional ones or stop the excess ones,
respectively.

 We can even leave the job of deciding the optimal number of copies to
Kubernetes. It can automatically keep adjusting the number, based on
real-time metrics, such as CPU load, memory consumption, queries per
second, or any other metric your app exposes.
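
Both styles of scaling can be done with kubectl. A sketch, assuming the web-server Deployment from above:

    # Manually change the number of copies
    kubectl scale deployment web-server --replicas=8

    # Or let Kubernetes adjust the count automatically based on CPU load
    kubectl autoscale deployment web-server --min=3 --max=10 --cpu-percent=80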
 Achieving better utilization of hardware:
 By setting up Kubernetes on your servers and using it to run your apps instead of
running them manually, you’ve decoupled your app from the infrastructure.

 When you tell Kubernetes to run your application, you’re letting it choose the most
appropriate node to run your application on based on the description of the
application’s resource requirements and the available resources on each node.

 By using containers and not tying the app down to a specific node in your cluster, you're allowing the app to move freely around the cluster at any time, so the different app components running on the cluster can be mixed and matched and packed tightly onto the cluster nodes.

 This ensures the node's hardware resources are utilized as well as possible.
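
The "description of the application's resource requirements" is expressed as requests and limits in the container spec. A sketch with illustrative values; the scheduler uses the requests to pick a node with enough free capacity:

    spec:
      containers:
      - name: web
        image: nginx:1.25
        resources:
          requests:
            cpu: "250m"        # a quarter of a CPU core
            memory: "128Mi"
          limits:
            cpu: "500m"
            memory: "256Mi"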
 Health checking and self-healing: Kubernetes monitors your app components and the nodes they run on, and automatically reschedules them to other nodes in the event of a node failure.
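
One way Kubernetes monitors an app component is through a liveness probe declared on the container. A sketch (the health endpoint and port are assumptions):

    spec:
      containers:
      - name: web
        image: nginx:1.25
        livenessProbe:         # container is restarted if this probe keeps failing
          httpGet:
            path: /healthz     # assumed health-check endpoint of the app
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 5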
Kubernetes Pods
 Pods are the smallest deployable units of computing that you can create and
manage in Kubernetes.

 A Pod is a group of one or more containers, with shared storage and network
resources, and a specification for how to run the containers.

 Instead of deploying containers individually, you always deploy and operate on a pod of containers.

 The key thing about pods is that when a pod contains multiple containers, all of them always run on a single worker node; a pod never spans multiple worker nodes.
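
A minimal single-container pod manifest, as a sketch (name and image are assumptions):

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.25      # example image
        ports:
        - containerPort: 80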
 When should you use multiple containers in a pod?

 Most of the time we will put only a single container in a pod, but if we do put multiple containers in a pod, the two questions below should be answered (see the sketch after this list):

 Do they need to be run together, or can they run on different hosts?

 Must they be scaled together or individually?
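
When the answers are "together", the containers can share one pod, for example an application plus a log-collecting sidecar. A sketch (names, images, and the sidecar command are assumptions):

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-with-sidecar
    spec:
      containers:
      - name: web
        image: nginx:1.25
      - name: log-collector    # sidecar shares the pod's network and volumes
        image: busybox:1.36
        command: ["sh", "-c", "tail -f /dev/null"]   # placeholder for a real collector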


 Command to see all the pods running in a Kubernetes cluster:
 kubectl get pods

 Steps to deploy our application in Kubernetes (a command-line sketch of these steps follows the list):

 1) Create a cluster, specifying the number of worker nodes.
 2) Deploy our application by mentioning the application image name and tag.
 3) Expose the application to the internet by using a load balancer.
 4) Access the application from outside.
 5) Scale the application by increasing the number of replicas.
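
A command-line sketch of these five steps on Google Kubernetes Engine (the cluster name, sample image, and ports are assumptions; adjust for your environment):

    # 1) Create a cluster with a chosen number of worker nodes
    gcloud container clusters create demo-cluster --num-nodes=3

    # 2) Deploy the application from an image name and tag
    kubectl create deployment hello-app \
      --image=us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0

    # 3) Expose it to the internet through a load balancer
    kubectl expose deployment hello-app --type=LoadBalancer --port=80 --target-port=8080

    # 4) Find the external IP, then access http://<EXTERNAL-IP> from outside
    kubectl get service hello-app

    # 5) Scale by increasing the number of replicas
    kubectl scale deployment hello-app --replicas=5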
