Kubernetes interview questions


1. What is Kubernetes, and what problem does it solve?
Ans: Kubernetes is an open-source container orchestration platform designed to
automate the deployment, scaling, and management of containerized applications. It
simplifies the deployment and operation of applications by providing a standardized way
to manage containerized workloads.

2. Explain the key components of Kubernetes architecture.


Ans: Key components include:
• Control Plane (Master) Node: Controls and manages the cluster.
• Worker Nodes: Machines where application containers run.
• etcd: Distributed key-value store for cluster state and configuration.
• API Server: Exposes the Kubernetes API.
• Controller Manager: Runs controllers that drive the cluster toward the desired state.
• Scheduler: Assigns workloads (Pods) to Nodes.

3. Define Pods and explain their role in Kubernetes.


Ans: A Pod is the smallest deployable unit in Kubernetes, representing a single instance
of a running process. It can contain one or more containers sharing the same network
namespace and storage. Pods enable co-located containers to communicate and share
data easily.
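
For illustration, a minimal Pod manifest might look like the following sketch (the name, labels, and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: example-pod          # hypothetical name
  labels:
    app: example
spec:
  containers:
    - name: web              # a single container sharing the Pod's network namespace
      image: nginx:1.25      # any OCI container image would work here
      ports:
        - containerPort: 80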

4. What is the purpose of a Service in Kubernetes? How does it work?
Ans: A Service in Kubernetes provides a stable endpoint for accessing a set of Pods. It
enables load balancing across multiple Pods and ensures continuous availability even if
Pods are added or removed. Services work by assigning a stable cluster IP and DNS name
to the set of Pods they represent (selected via labels).
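
A minimal Service sketch selecting the example Pod above (names are illustrative; the default type is ClusterIP):

apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example             # matches Pods carrying this label
  ports:
    - port: 80               # port exposed by the Service
      targetPort: 80         # port the container listens on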


5. Deployments and Replication Controllers:


I. Differentiate between a Deployment and a Replication
Controller in Kubernetes.
Ans: A Deployment is a higher-level abstraction that manages ReplicaSets, which in
turn manage Pods. Deployments enable rolling updates and rollbacks, making
application scaling and updates easier. Replication Controllers are an older mechanism;
Deployments (with ReplicaSets) offer more flexibility and features.

II. How does a Deployment ensure high availability of applications in Kubernetes?
Ans: Deployments ensure high availability by maintaining a specified number of replica
Pods. If a Pod fails or is terminated, the Deployment automatically replaces it to meet
the desired replica count. This ensures that the application is continuously available.

III. Explain the significance of the ‘kubectl apply’ command.
Ans: ‘kubectl apply’ is used to apply or update resources defined in a YAML
or JSON file to a Kubernetes cluster. It detects the differences between the desired state
and the current state of the resources and applies only the necessary changes. It is a
declarative way to manage Kubernetes resources.
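
As a sketch, the following Deployment manifest maintains three replicas (covering the high-availability behavior above) and could be applied declaratively; the file name, resource name, and image are placeholders:

# Apply with: kubectl apply -f example-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
spec:
  replicas: 3                # the Deployment keeps three Pods running at all times
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: web
          image: nginx:1.25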

6. Containerization:
I. What is container orchestration, and how does
Kubernetes achieve it?
Ans: Container orchestration is the automated management of containerized
applications, including deployment, scaling, and operation. Kubernetes achieves
container orchestration by automating the scheduling and scaling of containerized
workloads, handling failures, and providing a platform for managing application
lifecycles.

II. Explain how Docker and Kubernetes work together.
Ans: Docker is a containerization platform, and Kubernetes is an orchestration
platform. Docker packages applications and their dependencies into container images.
Kubernetes orchestrates the deployment, scaling, and management of containers built
from those images, running them through a container runtime (historically Docker
itself, now typically containerd or CRI-O).


III. Describe the role of a Dockerfile in a Kubernetes environment.
Ans: A Dockerfile is used to build Docker images. In a Kubernetes environment, these
images are deployed as containers within Pods. The Dockerfile specifies the base image,
application code, dependencies, and configurations needed to create a reproducible and
portable image for deployment in a Kubernetes cluster.

7. Kubernetes Networking:
I. How does networking work in a Kubernetes cluster?
Ans: Each Pod in a Kubernetes cluster gets its own unique IP address, and all
containers within the Pod share this IP. Services expose Pods to the network. Pods can
communicate with each other directly through their IP addresses or by using Services as
a stable endpoint.

II. What is a Kubernetes Service, and how does it provide load balancing?
Ans: A Kubernetes Service is an abstraction that defines a logical set of Pods and a
policy by which to access them. It provides load balancing by distributing incoming
network traffic across multiple Pods of the same Service, ensuring even distribution of
requests.

III. Explain the differences between ClusterIP, NodePort, and LoadBalancer types of Services.
Ans:
• ClusterIP: Exposes the Service on an internal IP within the cluster. It is accessible only from within the cluster.
• NodePort: Exposes the Service on a static port on each Node's IP. It makes the Service accessible externally using the Node's IP and the assigned static port.
• LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. It automatically assigns an external IP to the Service.
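
For example, a NodePort Service could be sketched as follows (the nodePort value is illustrative and must fall within the cluster's configured range, 30000-32767 by default):

apiVersion: v1
kind: Service
metadata:
  name: example-nodeport
spec:
  type: NodePort             # also valid: ClusterIP (default) or LoadBalancer
  selector:
    app: example
  ports:
    - port: 80               # cluster-internal port
      targetPort: 80         # container port
      nodePort: 30080        # static port opened on every Node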


8. Persistent Storage:
I. How does Kubernetes manage persistent storage for
applications?
Ans: Kubernetes manages persistent storage using Persistent Volumes (PVs) and
Persistent Volume Claims (PVCs). PVs represent physical storage resources, and PVCs are
requests for storage by Pods. PVs and PVCs decouple storage configuration from Pod
definitions, allowing for dynamic provisioning and efficient storage utilization.

II. Explain the concept of Persistent Volumes (PVs) and Persistent Volume Claims (PVCs).
Ans: Persistent Volumes (PVs) are physical storage resources within a cluster, and
Persistent Volume Claims (PVCs) are requests for storage by Pods. PVCs are bound to
PVs based on capacity, access modes, and other properties. PVs and PVCs enable the
dynamic provisioning and management of storage resources in a Kubernetes cluster.
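
A rough sketch of a statically provisioned PV and a PVC that binds to it (names, size, and the hostPath are placeholders; real clusters would typically use NFS, cloud disks, or dynamic provisioning via a StorageClass):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:                  # simple local path for illustration only
    path: /data/example
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi           # the claim binds to a PV that satisfies this request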

9. Configuration and Secrets:


I. What is a ConfigMap in Kubernetes, and how is it used?
Ans: A ConfigMap in Kubernetes is an API object that allows you to decouple
configuration artifacts from containerized applications. It stores key-value pairs of
configuration data that can be mounted as files or exposed as environment variables in
Pods. ConfigMaps make it easier to manage and update configuration settings without
modifying application code.
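
A minimal ConfigMap sketch (keys and values are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  LOG_LEVEL: "info"          # consumed as an environment variable or a mounted file
  app.properties: |
    feature.enabled=true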

II. How are Secrets managed in Kubernetes?


Ans: Secrets in Kubernetes are used to store sensitive information, such as passwords
or API keys. They are similar to ConfigMaps but are specifically designed for confidential
data. Secret values are base64-encoded by default (and can additionally be encrypted at
rest) and can be mounted into Pods as files or used as environment variables.
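
A minimal Secret sketch; the 'stringData' field accepts plain text and the API server stores it base64-encoded (the value is a placeholder, never commit real credentials):

apiVersion: v1
kind: Secret
metadata:
  name: example-secret
type: Opaque
stringData:                  # plain-text convenience field; stored base64-encoded
  DB_PASSWORD: "changeme"    # placeholder value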

III. Explain the use of environment variables in Kubernetes Pods.
Ans: Environment variables in Kubernetes Pods are used to pass configuration values
to applications. They can be set directly in Pod specifications or injected from ConfigMaps,
Secrets, or the Downward API. Environment variables provide a flexible way to configure
applications without modifying their code.
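
Tying the previous answers together, a Pod sketch that injects values from the ConfigMap and Secret above as environment variables (all names are the placeholders used earlier):

apiVersion: v1
kind: Pod
metadata:
  name: example-env-pod
spec:
  containers:
    - name: app
      image: nginx:1.25
      env:
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: example-config   # ConfigMap from the previous question
              key: LOG_LEVEL
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: example-secret   # Secret from the previous question
              key: DB_PASSWORD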


10. Helm:
I. What is Helm, and how does it simplify application
deployment in Kubernetes?
Ans: Helm is a package manager for Kubernetes that simplifies the deployment and
management of applications. It uses charts, which are packages of pre-configured
Kubernetes resources, to define, install, and upgrade complex applications. Helm charts
enable versioning and sharing of application configurations.

II. Describe the purpose of Helm charts and their components.
Ans: Helm charts are packages of pre-configured Kubernetes resources that define, install,
and manage applications in a Kubernetes cluster. Components of a Helm chart include:

• Chart.yaml: Metadata about the chart.
• values.yaml: Default configuration values.
• templates/: Templated Kubernetes manifests that define resources.
• charts/: Dependencies on other Helm charts (subcharts).
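
As a rough sketch, a chart's values.yaml and the way a template consumes it might look like this (the chart layout and all values are illustrative):

# values.yaml -- default configuration values
replicaCount: 2
image:
  repository: nginx          # illustrative defaults
  tag: "1.25"

# templates/deployment.yaml -- fragment showing how templates reference values
#   replicas: {{ .Values.replicaCount }}
#   image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"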

11. Monitoring and Logging:


I. How can you monitor a Kubernetes cluster?
Ans: Kubernetes clusters can be monitored using tools like Prometheus and Grafana.
Prometheus collects metrics from Kubernetes components and applications, while Grafana
provides visualization and alerting capabilities based on these metrics.
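
As one possible (simplified) illustration, Prometheus can discover Kubernetes targets through its built-in service discovery; a fragment of prometheus.yml might look like the following sketch (real setups usually also configure TLS and bearer-token authentication):

# prometheus.yml (fragment) -- simplified sketch
scrape_configs:
  - job_name: kubernetes-nodes
    kubernetes_sd_configs:
      - role: node           # discover every Node in the cluster as a scrape target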

II. Explain how you would set up centralized logging for a Kubernetes environment.
Ans: Centralized logging in Kubernetes can be achieved using tools like the ELK
(Elasticsearch, Logstash, Kibana) stack or Fluentd/Fluent Bit. A node-level logging agent
collects container logs and forwards them to a central logging backend, which aggregates
and indexes logs for easy analysis and troubleshooting.
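
A common pattern is to run the log collector as a DaemonSet so one agent runs on every Node; a heavily simplified sketch follows (the image tag and mounts are illustrative, and a real deployment would also need output configuration and RBAC):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd:v1.16    # illustrative image tag
          volumeMounts:
            - name: varlog
              mountPath: /var/log        # read container log files from the host
      volumes:
        - name: varlog
          hostPath:
            path: /var/log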

12. Security:


I. What is Kubernetes RBAC (Role-Based Access Control), and why is it important?
Ans: RBAC in Kubernetes restricts access to resources based on roles and permissions. It
helps control who can perform which actions within a cluster. Proper RBAC implementation
enhances security by ensuring that users and service accounts have only the minimum
necessary privileges.
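
For example, a namespaced Role granting read-only access to Pods, bound to a hypothetical service account:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
  - apiGroups: [""]               # "" refers to the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: ServiceAccount
    name: example-sa              # hypothetical service account
    namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io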

II. How do you secure communication within a Kubernetes cluster?
Ans: Communication within a Kubernetes cluster is secured using Transport Layer Security
(TLS) certificates. Kubernetes components and communication between Pods and Services
are configured with TLS to encrypt data in transit and authenticate the identity of the
communicating parties.

III. Explain Pod Security Policy and its role in enhancing security.
Ans: Pod Security Policy (PSP) is a Kubernetes feature that controls the security context under
which a Pod runs. It defines a set of conditions and constraints for Pods, such as running with
a specific user or group, using a particular seccomp profile, or requiring certain capabilities.
Pod Security Policy enhances security by enforcing best practices and security measures.
(Note that PSP was deprecated and removed in Kubernetes 1.25 in favor of Pod Security
Admission.)

13. Advanced Concepts:


I. What is an Ingress in Kubernetes, and how does it
simplify HTTP routing?
Ans: An Ingress is an API object that manages external access to services in a cluster. It
allows for the definition of HTTP and HTTPS routes, load balancing, and SSL termination.
Ingress simplifies HTTP routing by providing a centralized way to manage external access to
multiple services.
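
A minimal Ingress sketch routing a hostname to the Service defined earlier (the host, ingress class, and names are placeholders, and an ingress controller is assumed to be installed):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx          # assumes an NGINX ingress controller
  rules:
    - host: app.example.com        # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service   # Service from question 4
                port:
                  number: 80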

II. Describe the use of Helm hooks in deploying applications.
Ans: Helm hooks are used to perform actions at specific points during the lifecycle of a
Helm release. They enable the execution of scripts, such as database migrations or pre/post-
install steps, before or after certain Helm operations. Helm hooks provide a way to customize
the deployment process.
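
Hooks are declared through annotations on ordinary manifests inside a chart; a sketch of a pre-install Job follows (the image and command are hypothetical):

apiVersion: batch/v1
kind: Job
metadata:
  name: example-db-migrate
  annotations:
    "helm.sh/hook": pre-install            # run before the release's resources are installed
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: example/migrator:1.0      # hypothetical migration image
          command: ["./migrate"]           # hypothetical command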


III. Explain the concept of Custom Resource Definitions (CRDs) in Kubernetes.
Ans: Custom Resource Definitions (CRDs) extend the Kubernetes API by defining custom
resource types and their specifications. They allow users to define and manage their own
resources beyond the built-in ones. CRDs are often used in conjunction with custom
controllers (the Operator pattern) that watch and manage these resources.
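
A trimmed-down CRD sketch defining a hypothetical 'Widget' resource:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com        # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                size:
                  type: integer    # example field of the custom resource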

14. Troubleshooting:
I. How would you troubleshoot a Pod that is not
starting as expected?
Ans: Troubleshooting a non-starting Pod involves checking the Pod's logs (‘kubectl logs’),
describing the Pod (‘kubectl describe pod’) to inspect its events and status conditions, and
examining resource requests and utilization. Common issues include misconfigured
environment variables, image pull errors, missing dependencies, or resource constraints.

II. Discuss strategies for handling rolling updates without downtime in a Kubernetes cluster.
Ans: Strategies for rolling updates without downtime involve deploying updates in a
controlled manner, typically using Deployments with a RollingUpdate strategy. New Pods
are gradually introduced while old ones are terminated, and readiness probes ensure
traffic is routed only to Pods that are ready, keeping the application continuously available.
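
The update behavior is tuned on the Deployment itself; a sketch of the relevant fragment (the values are illustrative):

# Fragment of a Deployment spec controlling the rolling update
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1               # at most one extra Pod during the update
      maxUnavailable: 0         # never drop below the desired replica count
  # readiness probes on the Pod template ensure traffic shifts only to healthy Pods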

These questions and answers cover a broad range of Kubernetes concepts. Depending on
the specific role and the depth of expertise required, interviewers may ask additional
questions or dive deeper into specific areas. Candidates should be prepared to discuss their
practical experience and problem-solving skills related to Kubernetes.

