Feature | Kubernetes | Docker Swarm
Scalability | Highly scalable and scales fast. | Highly scalable and scales 5x faster than Kubernetes.
Load Balancing | Manual intervention needed for load balancing traffic between different containers and pods. | Docker Swarm does auto load balancing of traffic between containers in the cluster.
Logging & Monitoring | In-built tools for logging and monitoring. | 3rd party tools like the ELK stack should be used for logging and monitoring.
Q2. What is Kubernetes?
➢ Refer to the above diagram. The left-side architecture represents deploying applications directly on hosts. This kind of architecture has an operating system, and the operating system has a kernel on which the various libraries needed by the applications are installed. In this framework you can run any number of applications, and all of them share the libraries present in that operating system, whereas the architecture is a little different when applications are deployed in containers.
➢ In the container architecture, the kernel is the only thing shared between all the applications. If a particular application needs Java, then only that application gets access to Java, and if another application needs Python, then only that application has access to Python.
➢ The individual blocks that you can see on the right side of the diagram are containerized and isolated from the other applications. Each application has the necessary libraries and binaries isolated from the rest of the system, and they cannot be encroached upon by any other application.
Q5. What is Container Orchestration?
Consider a scenario where you have 5-6 microservices for an application. These microservices are put into individual containers, but they won't be able to communicate without container orchestration. Just as orchestration in music means all the instruments playing together in harmony, container orchestration means all the services in individual containers working together to fulfill the needs of a single server.
As you can see in the above diagram, many challenges arise without container orchestration, and container orchestration came into the picture to overcome them.
Q7. What are the features of Kubernetes?
The features of Kubernetes are as follows:
• Automated scheduling
• Self-healing capabilities
• Automated rollouts and rollbacks
• Horizontal scaling and load balancing
So, the different types of controller managers running on the master node are the Node controller, the Replication controller, the Endpoints controller, and the Service Account & Token controllers.
There are 2 nodes, each having pod and root network namespaces connected by a Linux bridge. In addition to this, a new virtual ethernet device called flannel0 (the network plugin) is added to the root network namespace.
Now, suppose we want a packet to flow from pod1 to pod4. Refer to the below diagram.
• So, the packet leaves pod1’s network at eth0 and enters the root network at veth0.
• Then it is passed on to cbr0, which makes an ARP request to find the destination, and it turns out that nobody on this node has the destination IP address.
• So, the bridge sends the packet to flannel0 as the node’s route table is configured
with flannel0.
• Now, the flannel daemon talks to the API server of Kubernetes to learn all the pod IPs and their respective nodes, so it can create mappings from pod IPs to node IPs (see the sketch after this list).
• The network plugin wraps this packet in a UDP packet with extra headers, changing the source and destination IPs to the respective node IPs, and sends the packet out via eth0.
• Now, since the route table already knows how to route traffic between nodes, it sends
the packet to the destination node2.
• The packet arrives at eth0 of node2 and goes to flannel0, which decapsulates it and emits it back into the root network namespace.
• Again, the packet is forwarded to the Linux bridge, which makes an ARP request and finds out that the destination IP belongs to veth1.
• The packet finally crosses the root network and reaches the destination Pod4.
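For illustration, below is a minimal sketch, using the official Kubernetes Python client (the client library and kubeconfig location are assumptions; flannel itself does not use this code), of the kind of pod-IP-to-node-IP mapping the flannel daemon builds by querying the API server:

```python
# A minimal sketch: build the pod-IP -> node-IP mapping that the flannel
# daemon conceptually derives from the Kubernetes API server.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside a pod
v1 = client.CoreV1Api()

pod_to_node_ip = {}
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    if pod.status.pod_ip and pod.status.host_ip:
        # e.g. {"10.244.1.4": "192.168.0.102", ...} -- values depend on your cluster
        pod_to_node_ip[pod.status.pod_ip] = pod.status.host_ip

for pod_ip, node_ip in pod_to_node_ip.items():
    print(f"pod {pod_ip} -> node {node_ip}")
```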
The federated clusters can achieve this by doing the following two things. Refer to the below diagram.
• Sync resources across clusters
• Cross-cluster discovery
Scenario 1: Suppose a company built on a monolithic architecture handles numerous products. Now, as the company expands in today's fast-scaling industry, its monolithic architecture has started causing problems.
How do you think the company can shift from the monolithic architecture to microservices and deploy its services in containers?
Solution:
As the company's goal is to shift from its monolithic application to microservices, it can rebuild the application piece by piece, in parallel, and simply switch configurations in the background. Each of these newly built microservices can then be put on the Kubernetes platform. The company can start by migrating one or two services and monitoring them to make sure everything runs stably. Once it is confident that everything is going well, it can migrate the rest of the application into the Kubernetes cluster.
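For illustration, here is a minimal sketch of deploying one migrated microservice as a Kubernetes Deployment using the official Python client; the service name "orders-service" and the image are hypothetical placeholders:

```python
# A minimal sketch, assuming a hypothetical "orders-service" microservice has
# already been containerized and pushed to a registry.
from kubernetes import client, config

config.load_kube_config()
apps_v1 = client.AppsV1Api()

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="orders-service"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # several replicas so the service survives a node failure
        selector=client.V1LabelSelector(match_labels={"app": "orders-service"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "orders-service"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="orders-service",
                    image="registry.example.com/orders-service:1.0",  # hypothetical image
                    ports=[client.V1ContainerPort(container_port=8080)],
                )
            ]),
        ),
    ),
)

apps_v1.create_namespaced_deployment(namespace="default", body=deployment)
```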
Scenario 2: Consider a multinational company with a highly distributed system, a large number of data centers and virtual machines, and many employees working on various tasks.
How do you think such a company can manage all of these tasks in a consistent way with Kubernetes?
Solution:
As we all know, I.T. departments launch thousands of containers, with tasks running across numerous nodes around the world in a distributed system.
In such a situation, the company can use something that offers agility, scale-out capability, and DevOps practices for its cloud-based applications.
The company can therefore use Kubernetes to customize its scheduling architecture and support multiple container formats. This makes it possible to express affinity between container tasks, which gives greater efficiency, along with extensive support for various container networking and container storage solutions.
Scenario 3: Consider a situation where a company wants to increase its efficiency and the speed of its technical operations while keeping costs minimal.
How do you think the company will try to achieve this?
Solution:
The company can implement the DevOps methodology by building a CI/CD pipeline, but one problem that may occur here is that the configurations may take time to get up and running. So, after implementing the CI/CD pipeline, the company's next step should be to work in a cloud environment. Once it starts working in the cloud, it can schedule containers on a cluster and orchestrate them with Kubernetes. This kind of approach will help the company reduce its deployment time and also move faster across various environments.
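For illustration, here is a minimal sketch of the deployment step such a CI/CD pipeline could run after a successful build: patch the Deployment's image and let Kubernetes perform a rolling update. The names and the image tag are hypothetical, reusing the "orders-service" Deployment from the earlier sketch:

```python
# A minimal sketch of a CI/CD deploy step: point the Deployment at the image
# tag produced by the latest build and let Kubernetes roll the pods over.
from kubernetes import client, config

config.load_kube_config()
apps_v1 = client.AppsV1Api()

new_image = "registry.example.com/orders-service:1.1"  # hypothetical tag from the CI build
patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [{"name": "orders-service", "image": new_image}]
            }
        }
    }
}

# Kubernetes performs a rolling update, replacing pods without downtime.
apps_v1.patch_namespaced_deployment(
    name="orders-service", namespace="default", body=patch
)
```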
Scenario 4: Suppose a company wants to revise its deployment methods and build a platform that is much more scalable and responsive.
How do you think this company can achieve this to satisfy their customers?
Solution:
In order to give millions of clients the digital experience they expect, the company needs a platform that is scalable and responsive, so that it can quickly get data to the client website. To do this, the company should move from its private data centers (if it is using any) to a cloud environment such as AWS. Not only that, it should also adopt a microservice architecture so that it can start using Docker containers. Once the base framework is ready, it can start using the best orchestration platform available, i.e. Kubernetes. This would enable the teams to be autonomous in building applications and delivering them very quickly.
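For illustration, here is a minimal sketch of making such a platform scale out automatically: a HorizontalPodAutoscaler created with the official Python client. The target Deployment name "web-frontend" and the thresholds are hypothetical:

```python
# A minimal sketch: autoscale a hypothetical "web-frontend" Deployment
# between 2 and 20 replicas based on average CPU utilization.
from kubernetes import client, config

config.load_kube_config()
autoscaling_v1 = client.AutoscalingV1Api()

hpa = client.V1HorizontalPodAutoscaler(
    api_version="autoscaling/v1",
    kind="HorizontalPodAutoscaler",
    metadata=client.V1ObjectMeta(name="web-frontend-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web-frontend"
        ),
        min_replicas=2,
        max_replicas=20,
        target_cpu_utilization_percentage=70,  # scale out when average CPU exceeds 70%
    ),
)

autoscaling_v1.create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```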
Scenario 5: Consider a multinational company with a highly distributed system that is looking to solve its monolithic code base problem.
How do you think the company can solve their problem?
Solution
Well, to solve the problem, the company can shift its monolithic code base to a microservice design, and then each microservice can be treated as a container. All these containers can then be deployed and orchestrated with the help of Kubernetes.
Scenario 6: We all know that the shift from monolithic to microservices solves problems on the development side but increases complexity on the deployment side.
How can the company solve the problem on the deployment side?
Solution
The team can experiment with container orchestration platforms such as Kubernetes and run them in its data centers. With this, the company can generate a templated application, deploy it within five minutes, and have actual instances containerized in the staging environment at that point. Such a Kubernetes project will have dozens of microservices running in parallel to improve the production rate, because even if a node goes down, the workload can be rescheduled immediately without any performance impact.
Scenario 7: Suppose a company wants to optimize the distribution of its workloads by adopting new technologies.
How can the company achieve this distribution of resources efficiently?
Solution
The solution to this problem is none other than Kubernetes. Kubernetes makes sure that resources are used efficiently and that an application consumes only the resources it actually needs. So, by using the best container orchestration tool, the company can distribute its resources efficiently.
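For illustration, here is a minimal sketch of constraining an application to only the resources it needs, via requests and limits on its container spec; the container name, image, and values are hypothetical:

```python
# A minimal sketch: resource requests (what the scheduler reserves) and limits
# (the hard cap enforced at runtime) for a hypothetical container.
from kubernetes import client

resources = client.V1ResourceRequirements(
    requests={"cpu": "250m", "memory": "256Mi"},
    limits={"cpu": "500m", "memory": "512Mi"},
)

container = client.V1Container(
    name="billing-service",
    image="registry.example.com/billing-service:2.3",
    resources=resources,
)
# This container spec would then go into a pod template / Deployment,
# exactly as in the earlier deployment sketch.
```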
Scenario 8: Consider a carpooling company that wants to increase the number of its servers while simultaneously scaling its platform.
How do you think the company will deal with the servers and their installation?
Solution
The company can adopt the concept of containerization. Once it deploys all of its applications into containers, it can use Kubernetes for orchestration and use container monitoring tools like Prometheus to monitor what happens inside the containers. Such use of containers gives it better capacity planning in the data center, because it now has fewer constraints thanks to the abstraction between the services and the hardware they run on.
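For illustration, here is a minimal sketch of the kind of capacity-planning data the team can pull straight from the cluster with the official Python client: per-node allocatable CPU and memory versus total capacity (the exact values are cluster-dependent):

```python
# A minimal sketch: list each node's allocatable resources (what can actually
# be scheduled) against its raw capacity, as a basis for capacity planning.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    cap = node.status.capacity        # e.g. {"cpu": "8", "memory": "32780628Ki", ...}
    alloc = node.status.allocatable   # capacity minus system/kube reserves
    print(f"{node.metadata.name}: "
          f"cpu {alloc['cpu']}/{cap['cpu']}, memory {alloc['memory']}/{cap['memory']}")
```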
Scenario 9: Consider a scenario where a company wants to provide all the required hand-outs to its customers, who have various environments.
How do you think the company can achieve this critical target dynamically?
Solution
The company can use Docker environments and put together a cross-sectional team to build a web application using Kubernetes. This kind of framework will help the company achieve the goal of getting the required things into production within the shortest time frame. With such a setup running, the company can provide the hand-outs to all of its customers across their various environments.
Scenario 10: Suppose a company wants to run various workloads on different cloud infrastructures, from bare metal to public cloud.
How will the company achieve this in the presence of different interfaces?
Solution
The company can decompose its infrastructure into microservices and then adopt Kubernetes.
This will let the company run various workloads on different cloud infrastructures.
Q1. What are minions in a Kubernetes cluster?
a. They are components of the master node.
b. They are the work-horse / worker nodes of the cluster.[Ans]
c. They are a monitoring engine used widely in Kubernetes.
d. They are a Docker container service.
b. Kubelet
c. Etcd[Ans]
d. None of the above
c. Rolling Updates
d. Both ReplicaSet and Deployment[Ans]
a. HTTPGetAction
b. ExecAction
c. TCPSocketAction[Ans]
d. None of the above