Kubernetes-03 12 23
Kubernetes Engine
Kubernetes Architecture
Kubernetes POD
Demo
Microservice architecture
Microservices are an architectural style that develops a single application as a set of
small services. Each service runs in its own process. The services communicate with
clients, and often each other, using lightweight protocols, often over messaging or
HTTP.
With microservices, we can have several independent applications that run on their own. You
can create them using different programming languages and even different platforms. You
can structure big and complicated applications from simpler, independent programs
that execute by themselves. These smaller programs are grouped together to deliver all the
functionality of the big, monolithic app.
Instead of large teams working on large, monolithic projects, smaller,
more agile teams develop the services using the tools and frameworks they
are most comfortable with.
We can scale these services horizontally with technologies like Docker and
Kubernetes.
Benefits of microservices:
1) Improved scalability. Because microservices let you scale services up or down
independently, scaling is dramatically easier and cheaper than in a monolithic
system.
2) Better fault isolation. If one microservice fails, all the others will likely continue to
work. This is a key part of the microservices architectural design.
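As a minimal sketch of these two benefits (service names, images, and replica counts below are made up for illustration), two microservices can be deployed as two independent Kubernetes Deployments, each scaled on its own:

# orders service: 2 replicas (hypothetical image name)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 2
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
      - name: orders
        image: example/orders:1.0
---
# catalog service: scaled independently to 5 replicas
apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog
spec:
  replicas: 5
  selector:
    matchLabels:
      app: catalog
  template:
    metadata:
      labels:
        app: catalog
    spec:
      containers:
      - name: catalog
        image: example/catalog:1.0

If the catalog service fails, the orders service keeps serving traffic, which is the fault isolation described above.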
Microservices also have drawbacks. When your system consists of only a small
number of deployable components, managing those components is easy.
It's easy to decide where to deploy each component, because there aren't that
many choices. But as the number of deployable components grows, decisions about
where to deploy them, how to configure them, and how to keep them running become
increasingly difficult to make by hand.
Google runs hundreds of thousands of servers and has had to deal with
managing deployments on such a massive scale; Kubernetes grew out of that
experience.
Kubernetes is a software system that allows you to easily deploy and manage
containerized applications.
When the developer submits a list of apps to the master, Kubernetes deploys
them to the cluster of worker nodes. What node a component lands on doesn’t
(and shouldn’t) matter—neither to the developer nor to the system
administrator.
The developer can specify that certain apps must run together and Kubernetes
will deploy them on the same worker node.
Others will be spread around the cluster, but they can talk to each other in the
same way, regardless of where they’re deployed.
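One way to express "these apps must run together" is a pod affinity rule. The sketch below is illustrative: the labels, image, and the assumption that the partner app carries the label app: backend are all hypothetical.

apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: backend                        # hypothetical label of the pod to co-locate with
        topologyKey: kubernetes.io/hostname     # "must land on the same node"
  containers:
  - name: frontend
    image: example/frontend:1.0                 # hypothetical image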
It relieves application developers from having to implement certain
infrastructure-related services in their apps, e.g.
service discovery, scaling, load-balancing, and self-healing.
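For example, service discovery and load-balancing come from a Service object rather than from application code; the name and ports below are assumptions for illustration:

apiVersion: v1
kind: Service
metadata:
  name: orders          # other pods can now reach the app at http://orders
spec:
  selector:
    app: orders         # traffic is load-balanced across all pods with this label
  ports:
  - port: 80
    targetPort: 8080    # assumed container port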
The master consists of multiple components that can run on a single master node or
be split across multiple nodes and replicated to ensure high availability. These
components are:
The Kubernetes API Server, which you and the other Control Plane components
communicate with.
The Scheduler, which schedules your apps (assigns a worker node to each
deployable component of your application).
The Controller Manager, which performs cluster-level functions, such as
replicating components, keeping track of worker nodes, and handling node
failures.
etcd, a reliable distributed data store that persistently stores the cluster
configuration.
The components of the Control Plane hold and control the state of the
cluster, but they don’t run your applications. This is done by the (worker)
nodes.
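To connect these components: a simple Pod manifest like the sketch below (image and names assumed) is sent to the API server with kubectl apply; the API server validates it and persists it in etcd, the Scheduler assigns it to a worker node, and the Kubelet on that node starts the container.

# kubectl apply -f nginx-pod.yaml sends this manifest to the API server
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.25   # assumed image tag
    ports:
    - containerPort: 80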
Worker nodes: The worker nodes are the machines that run your containerized
applications. The task of running, monitoring, and providing services to your
applications is done by the following components:
The Kubelet, which talks to the API server and manages the containers on its node.
The container runtime (Docker, containerd, or another runtime), which runs your containers.
The kube-proxy, which load-balances network traffic between application components.
If one of your app instances stops working properly, for example when its process
crashes or when it stops responding, Kubernetes will restart it automatically.
If a whole worker node dies or becomes inaccessible, Kubernetes will select new
nodes for the containers that were running on the failed node and run them on the
newly selected nodes.
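In practice this rescheduling applies to pods managed by a controller; running the app through a Deployment (sketch below, image and replica count assumed) means the ReplicaSet controller creates replacement pods that the Scheduler places on the remaining healthy nodes.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                    # this desired count is maintained even if a node is lost
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:1.0   # hypothetical image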
Scaling the number of copies: While the application is running, you can
decide you want to increase or decrease the number of copies, and
Kubernetes will spin up additional ones or stop the excess ones,
respectively.
We can even leave the job of deciding the optimal number of copies to
Kubernetes. It can automatically keep adjusting the number, based on
real-time metrics, such as CPU load, memory consumption, queries per
second, or any other metric your app exposes.
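The automatic adjustment based on CPU load can be expressed with a HorizontalPodAutoscaler. This sketch assumes a Deployment named web already exists and targets 70% average CPU utilization (all values are illustrative); it also assumes the metrics pipeline (metrics-server) is installed and the pods declare CPU requests.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add or remove replicas to stay near 70% CPU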
Achieving better utilization of hardware:
By setting up Kubernetes on your servers and using it to run your apps instead of
running them manually, you’ve decoupled your app from the infrastructure.
When you tell Kubernetes to run your application, you’re letting it choose the most
appropriate node to run your application on based on the description of the
application’s resource requirements and the available resources on each node.
By using containers and not tying the app down to a specific node in your cluster, you're
allowing the app to move freely around the cluster at any time, so the different app
components running on the cluster can be mixed and matched to be packed tightly
onto the cluster nodes.
This ensures the nodes' hardware resources are utilized as well as possible.
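The "description of the application's resource requirements" is the resources section of the container spec; the Scheduler uses the requests to pick a node with enough free capacity (the image and values below are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
  - name: api
    image: example/api:1.0      # hypothetical image
    resources:
      requests:
        cpu: "250m"             # the scheduler packs pods onto nodes based on these requests
        memory: "256Mi"
      limits:
        cpu: "500m"
        memory: "512Mi"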
Health checking and self healing: Kubernetes monitors your app
components and the nodes they run on and automatically reschedules
them to other nodes in the event of a node failure.
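Health checking usually relies on probes declared in the pod spec. This sketch assumes the app exposes an HTTP health endpoint at /healthz on port 8080 (both are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: healthy-web
spec:
  containers:
  - name: web
    image: example/web:1.0        # hypothetical image
    livenessProbe:
      httpGet:
        path: /healthz            # assumed health endpoint
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5            # the Kubelet restarts the container if this keeps failing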
Kubernetes PODS
Pods are the smallest deployable units of computing that you can create and
manage in Kubernetes.
A Pod is a group of one or more containers, with shared storage and network
resources, and a specification for how to run the containers.
The key thing about pods is that when a pod does contain multiple containers,
all of them are always run on a single worker node—it never spans multiple
worker nodes.
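A minimal multi-container pod sketch (names and images are hypothetical): both containers share the pod's network namespace, so they can talk over localhost, and they share the emptyDir volume declared below.

apiVersion: v1
kind: Pod
metadata:
  name: web-with-log-shipper
spec:
  volumes:
  - name: logs
    emptyDir: {}                       # shared scratch storage for the whole pod
  containers:
  - name: web
    image: example/web:1.0             # assumed to write logs to /var/log/app
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-shipper
    image: example/log-shipper:1.0     # reads the same files from the shared volume
    volumeMounts:
    - name: logs
      mountPath: /var/log/app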
When should you use multiple containers in a pod?
Most of the time we will put only a single container in a pod, but if we do put
multiple containers in a pod, then the two questions below should be answered.