Kubernetes Full INSAT's Course

Container orchestration automates the deployment, management, and scaling of containerized applications. It involves three main things: 1. Containers, which are lightweight executable packages that include an application and all its dependencies; they allow for greater portability and efficiency compared to virtual machines. 2. Docker, which is a tool used to create, deploy and run containers; it provides a simple way to automate container creation and management using Dockerfiles, images, and commands. 3. Orchestration, which is the automation of the operational tasks needed to manage the lifecycle of containerized applications, including provisioning, deployment, scaling, networking, and monitoring of containers. Kubernetes is the most popular container orchestration platform.

Uploaded by

HAMDI GDHAMI


Automation and Operation:

IAAS, CAAS and Serverless

Chapter 1: Kubernetes

RT4 - 2023

Seif Eddine Souissi


CONTENT

01 Notion of container orchestration


02 Kubernetes Architecture
03 Workload of Kubernetes
04 Container Network Interface
05 Kubernetes and Storage
06 Basic Security Concepts
07 Cloud adaptation to Kubernetes
Introduction: What’s going on in the Market

Top 5 Surging IT Operation Skills (Consumption Growth 2017-2021):

• Certified Kubernetes Administrator: 842%
• Server Administrator: 398%
• Network Administrator: 202%
Introduction: Goals

Achieving one of the two certificates:

• First certificate topics: HA Deployment, Logging/Monitoring, Maintenance, Kubernetes Scheduler, Application lifecycle, Security
• Second certificate topics: Core Concepts, Multi Container Pods, Pod Design, Config Maps, Jobs, Coding
NOTION OF CONTAINER
ORCHESTRATION
Notion of container orchestration: Containers
To understand Kubernetes, we must first understand two things: Container and Orchestration.
Once we get familiarized with both terms, we would be able to understand what Kubernetes is capable of.
We will start looking at Containers first.

• Containers are completely isolated environments.
• They can have their own processes or services, their own network interfaces, and their own mounts, just like virtual machines, except that they all share the same OS kernel.
• Containers have existed for about 10 years now; some of the different container technologies are LXC, LXD, LXCFS, etc.
• Docker originally utilized LXC containers. Setting up these container environments is hard as they are very low level, and that is where Docker comes in: it offers a high-level tool with several powerful functionalities.
Notion of container orchestration: Containers

Containers are made possible by process isolation and virtualization capabilities built into the Linux kernel. These capabilities, such as control groups (cgroups) for allocating resources among processes, and namespaces for restricting a process's access or visibility into other resources or areas of the system, enable multiple application components to share the resources of a single instance of the host operating system, in much the same way that a hypervisor enables multiple virtual machines (VMs) to share the CPU, memory and other resources of a single hardware server.

As a result, container technology offers all the functionality and benefits of VMs, including application isolation, cost-effective scalability, and disposability, plus important additional advantages:

• Light weight
• Increased productivity
• Resource efficiency


Notion of container orchestration: Docker

• Docker is an open-source platform that enables developers to build,


deploy, run, update and manage containers—standardized,
executable components that combine application source code with
the operating system (OS) libraries and dependencies required to run
that code in any environment.
• Docker lets developers access these native containerization
capabilities using simple commands and automate them through a
work-saving application programming interface (API).
• Compared to LXC, Docker offers:

 Improved and seamless container portability


 Even lighter weight and more granular updates
 Automated container creation
 Container versioning
 Container reuse
 Shared container libraries
Notion of container orchestration: Docker

• Docker Hub: the public repository of Docker images that calls itself the "world's largest library and community for container images." It holds over 100,000 container images sourced from commercial software vendors, open-source projects, and individual developers.

• Docker Desktop: an application for Mac or Windows that includes Docker Engine, the Docker CLI client, Docker Compose, Kubernetes, and others. It also includes access to Docker Hub.

• Docker Daemon: a service that creates and manages Docker images, using the commands from the client. Essentially, the Docker daemon serves as the control center of your Docker implementation.
Notion of container orchestration: Docker

• Dockerfile: every Docker container starts with a simple text file containing instructions for how to build the Docker container image. The Dockerfile automates the process of Docker image creation.

• Docker Image: Docker images contain executable application source code as well as all the tools, libraries, and dependencies that the application code needs to run as a container. When you run the Docker image, it becomes one instance (or multiple instances) of the container.

• Docker Containers: Docker containers are the live, running instances of Docker images. While Docker images are read-only files, containers are live, ephemeral, executable content. Users can interact with them, and administrators can adjust their settings and conditions using Docker commands.
Notion of container orchestration: Docker

• Pod: a Pod (as in a pod of whales or pea pod) is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers. A Pod's contents are always co-located and co-scheduled, and run in a shared context.

• Build: building a container image based on a Dockerfile.

• Pull: downloading an image from a container registry.

• Push: uploading an image to a container registry.
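The Build and Push/Pull operations above can be sketched with a minimal, hypothetical Dockerfile (the base image, file names, and registry below are illustrative placeholders, not from the course):

```dockerfile
# Hypothetical example: containerize a simple Python application
FROM python:3.11-slim                  # base image pulled from a registry
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt    # bake dependencies into the image
COPY . .
CMD ["python", "app.py"]               # process started when the container runs
```

Such an image would be built with `docker build -t registry.example.com/myapp:1.0 .`, published with `docker push registry.example.com/myapp:1.0`, and retrieved on another host with `docker pull`.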
Notion of container orchestration: Deployment

From the Past to The Future


Notion of container orchestration: Deployment

Traditional deployment era
• Organizations ran applications on physical servers.
• There was no way to define resource boundaries for applications in a physical server, and this caused resource allocation issues.

Virtualized deployment era
• Run multiple Virtual Machines (VMs) on a single physical server's CPU.
• Virtualization allows applications to be isolated between VMs and provides a level of security, as the information of one application cannot be freely accessed by another application.
• Virtualization allows better utilization of resources in a physical server and allows better scalability.
• Each VM is a full machine running all the components, including its own operating system, on top of the virtualized hardware.

Container deployment era
• Containers are like VMs, but they have relaxed isolation properties to share the Operating System (OS) among the applications. Therefore, containers are considered lightweight.
• Like a VM, a container has its own filesystem, share of CPU, memory, process space, and more. As they are decoupled from the underlying infrastructure, they are portable across clouds and OS distributions.
Notion of container orchestration: Orchestration

Container orchestration is the automation of much of the


operational effort required to run containerized workloads and
services. This includes a wide range of things software teams need to
manage a container’s lifecycle, including provisioning, deployment,
scaling (up and down), networking, load balancing and more.
--VMware

Some of the key benefits of Orchestration are:

•Simplified operations: This is the most important benefit of


container orchestration and the main reason for its adoption.
Containers introduce a large amount of complexity that can quickly
get out of control without container orchestration to manage it.
•Resilience: Container orchestration tools can automatically restart or
scale a container or cluster, boosting resilience.
•Added security: Container orchestration’s automated approach helps
keep containerized applications secure by reducing or eliminating the
chance of human error. 
KUBERNETES ARCHITECTURE
Kubernetes Architecture: Overview

• Kubernetes is a popular open-source platform for container


orchestration.
• While there are other options for container orchestration, such as
Apache Mesos or Docker Swarm, Kubernetes has become the
industry standard.
• The main goal behind K8s is to enforce "Desired State Management".
• K8s Provides:

 Service discovery and load balancing

 Storage Orchestration

 Automated rollout and rollbacks

 Automatic bin packing

 Self-healing

 Secret and configuration management


Kubernetes Architecture: Main Components

Control Plane

The control plane's components make global decisions about the cluster (for example, scheduling), as well as detecting and responding to cluster events.

Cluster

A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.
Kubernetes Architecture: Control Plane Components

The control plane consists of the following components:

• kube-apiserver
• etcd
• kube-scheduler
• controller-manager
Kubernetes Architecture: Control Plane Components

• kube-apiserver: the API server is a component of the Kubernetes control plane that exposes the Kubernetes API. The API server is the front end for the Kubernetes control plane.

• etcd: consistent and highly-available key value store used as Kubernetes' backing store for all cluster data.

• kube-scheduler: control plane component that watches for newly created Pods with no assigned node and selects a node for them to run on.

• controller-manager: control plane component that runs controller processes. Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.
Kubernetes Architecture: Cluster

Diagram: a Cluster containing two worker Nodes.
Kubernetes Architecture: Node Components

• kubelet: an agent that runs on each node in the cluster. It makes sure that containers are running in a Pod. The kubelet takes a set of PodSpecs that are provided and ensures that the containers described in those PodSpecs are running and healthy.

• kube-proxy: maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.

• Container runtime: the software that is responsible for running containers.

• Pods: Pods are the smallest deployable units in K8s; a Pod is an abstraction layer over the containers.
Kubernetes Architecture: Global Architecture
WORKLOAD OF KUBERNETES
Workload of Kubernetes

A workload is an application running on Kubernetes. Whether your workload is a single component


or several that work together, on Kubernetes you run it inside a set of pods.

Kubernetes pods have a defined lifecycle. For example, once a pod is running in your cluster
then a critical fault on the node where that pod is running means that all the pods on that node
fail. Kubernetes treats that level of failure as final: you would need to create a new Pod to
recover, even if the node later becomes healthy.

However, to make life considerably easier, you don't need to manage each Pod directly. Instead,
you can use workload resources that manage a set of pods on your behalf. These resources
configure controllers that make sure the right number of the right kind of pod are running, to
match the state you specified.
Workload of Kubernetes: Pods

A Pod (as in a pod of whales or pea pod) is a group of one or more containers, with shared storage and
network resources, and a specification for how to run the containers. A Pod's contents are always co-located
and co-scheduled, and run in a shared context. A Pod models an application-specific "logical host": it
contains one or more application containers which are relatively tightly coupled.

•Pods that run a single container: The "one-container-per-Pod" model is the most common Kubernetes use
case; in this case, you can think of a Pod as a wrapper around a single container; Kubernetes manages Pods
rather than managing the containers directly.

•Pods that run multiple containers that need to work together: A Pod can encapsulate an application
composed of multiple co-located containers that are tightly coupled and need to share resources. These co-
located containers form a single cohesive unit of service—for example, one container serving data stored in a
shared volume to the public, while a separate sidecar container refreshes or updates those files. The Pod
wraps these containers, storage resources, and an ephemeral network identity together as a single unit.
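As an illustration, a minimal Pod manifest for the sidecar pattern described above might look like this (the names, images, and refresh command are hypothetical, chosen only to make the sketch concrete):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar          # hypothetical name
spec:
  volumes:
    - name: shared-data           # volume shared by both containers
      emptyDir: {}
  containers:
    - name: web                   # serves the shared files to the public
      image: nginx:1.25
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: content-refresher     # sidecar that refreshes the files
      image: busybox:1.36
      command: ["sh", "-c", "while true; do date > /data/index.html; sleep 10; done"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
```

Both containers are scheduled together, share the volume and network namespace, and live and die as one unit.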
Workload of Kubernetes: Pods Lifecycle
Like individual application containers, Pods are relatively ephemeral (rather than durable) entities. Pods are created,
assigned a unique ID (UID), and scheduled to nodes where they remain until termination (according to restart policy) or
deletion. If a Node dies, the Pods scheduled to that node are scheduled for deletion after a timeout period.

The phase of a Pod is a simple, high-level summary of where the Pod is in its
lifecycle.
• Pending: the Pod has been accepted by the Kubernetes cluster, but one or more of the containers has not been set up and made ready to run. This includes time a Pod spends waiting to be scheduled as well as the time spent downloading container images over the network.

• Running: the Pod has been bound to a node, and all of the containers have been created. At least one container is still running, or is in the process of starting or restarting.

• Succeeded: all containers in the Pod have terminated in success, and will not be restarted.

• Failed: all containers in the Pod have terminated, and at least one container has terminated in failure. That is, the container either exited with non-zero status or was terminated by the system.

• Unknown: for some reason the state of the Pod could not be obtained. This phase typically occurs due to an error in communicating with the node where the Pod should be running.
Workload of Kubernetes: workload resources

• Deployment and ReplicaSet: a Deployment is a good fit for managing a stateless application workload on your cluster, where any Pod in the Deployment is interchangeable and can be replaced if needed.

• StatefulSet: lets you run one or more related Pods that do track state.

• DaemonSet: defines Pods that provide node-local facilities. These might be fundamental to the operation of your cluster, such as a networking helper tool, or be part of an add-on.

• Job and CronJob: define tasks that run to completion and then stop. Jobs represent one-off tasks, whereas CronJobs recur according to a schedule.
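A minimal Deployment manifest, sketched here with hypothetical names and image, shows how a workload resource declares the desired number of interchangeable Pods:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment            # hypothetical name
spec:
  replicas: 3                     # the controller keeps 3 Pods running
  selector:
    matchLabels:
      app: web                    # must match the template labels below
  template:                       # Pod template managed via a ReplicaSet
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```

Applied with `kubectl apply -f deployment.yaml`; changing the template (for example, the image tag) triggers a rollout to new replicas.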
Workload of Kubernetes: Roll-Out

A Kubernetes rollout is the process of updating or replacing replicas with new replicas matching a new deployment template. Changes may be configuration changes such as environment variables or labels, or code changes which result in updating the image key of the deployment template.

How Kubernetes roll-outs work:

1. Create a YAML file describing the desired state configuration of the cluster.
2. Apply the YAML file to the cluster through kubectl, the Kubernetes command-line interface.
3. kubectl submits the request to the kube-apiserver, which authenticates and authorises the request before recording the change in a database, etcd.
4. The kube-controller-manager continuously monitors the system for new requests and works towards reconciling the system state to the desired state, creating ReplicaSets, deployments and pods in the process.
5. Once all controllers have run, the kube-scheduler will see that there are pods in the "pending" state because they haven't been scheduled to run on a node yet. The scheduler finds suitable nodes for the pods, then communicates with the kubelet in each node to take control and start the deployment.
Workload of Kubernetes: YAML Reminder

YAML is a human-friendly data serialization language for all programming languages.
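As a quick refresher, Kubernetes manifests rely on three YAML building blocks: scalars, mappings, and lists. The keys below are illustrative, not a real manifest:

```yaml
# scalar values
name: web
replicas: 3
enabled: true

# mapping (nested keys, expressed through indentation)
metadata:
  labels:
    app: web

# list (each item starts with a dash)
ports:
  - 80
  - 443
```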


KUBERNETES NETWORKING
K8s Networking: The Kubernetes network model

Every Pod in a cluster gets its own unique cluster-wide IP address. This means you do not need to explicitly create links
between Pods, and you almost never need to deal with mapping container ports to host ports.

This creates a clean, backwards-compatible model where Pods can be treated much like VMs or physical hosts from the
perspectives of port allocation, naming, service discovery, load balancing, application configuration, and migration.

Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional
network segmentation policies):

•pods can communicate with all other pods on any other node without NAT

•agents on a node (e.g. system daemons, kubelet) can communicate with all pods on that node
K8s Networking: Cluster Networking

Networking is a central part of Kubernetes, but it can be challenging to understand exactly how it is expected to work.
There are 4 distinct networking problems to address:

• Highly-coupled container-to-container communications: this is solved by Pods and localhost communications.

• Pod-to-Pod communications: this is the primary focus of Pod Abstraction.

• Pod-to-Service communications: this is covered by Services.

• External-to-Service communications: this is also covered by Services.

• Kubernetes is all about sharing machines between applications. Typically, sharing machines requires ensuring that two
applications do not try to use the same ports. Coordinating ports across multiple developers is very difficult to do at
scale and exposes users to cluster-level issues outside of their control.

• Dynamic port allocation brings a lot of complications to the system - every application has to take ports as flags, the API
servers must know how to insert dynamic port numbers into configuration blocks, services must know how to find each
other, etc. Rather than deal with this, Kubernetes takes a different approach.
KUBERNETES AND STORAGE
Kubernetes and Storage: Challenge

In Kubernetes, containerized applications can be either stateful or stateless. Stateless applications do not have any persistent state; they lose their data once the containerized application shuts down or crashes. For stateful applications, however, Kubernetes needs to attach persistent external volumes to store their state.

Storage for stateful applications has always been a challenge, as they must always be provisioned with a volume that stays connected to their pods through remote storage. The Kubernetes community has been trying to fill this gap for years by providing many abstractions and features, but containers are stateless by default, which causes significant challenges for stateful workloads.
Kubernetes and Storage: Storage Classes

A Storage Class provides a way for administrators to describe the "classes" of storage they offer. Different classes might
map to quality-of-service levels, or to backup policies, or to arbitrary policies determined by the cluster administrators.
Kubernetes itself is unopinionated about what classes represent. This concept is sometimes called "profiles" in other storage systems.

A StorageClass has three main fields:

• provisioner: each StorageClass has a provisioner that determines what volume plugin is used for provisioning.
• parameters: describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner.
• reclaimPolicy: can be either Delete or Retain.
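The three fields above map directly onto a StorageClass manifest. This sketch assumes the AWS EBS CSI provisioner purely as an example; the class name and parameter are hypothetical:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd                  # hypothetical class name
provisioner: ebs.csi.aws.com      # determines the volume plugin used
parameters:
  type: gp3                       # provisioner-specific parameter
reclaimPolicy: Delete             # Delete or Retain
```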
Kubernetes and Storage: Main Volume types

Persistent Volumes

A Persistent Volume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV.
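A statically provisioned PV and a claim that binds to it might be sketched as follows (names, capacity, and the host path are hypothetical; exact binding behaviour also depends on the cluster's default StorageClass):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  hostPath:                       # simple local-disk backend, for illustration only
    path: /mnt/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi                # Kubernetes binds this claim to a matching PV
```

A Pod then references the claim (not the PV directly) in its `volumes` section, keeping the workload decoupled from the storage backend.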
Kubernetes and Storage: Main Volume types

Some applications need additional storage but don't care whether that data is stored persistently across restarts. For example, caching services are often limited by memory size and can move infrequently used data into storage that is slower than memory, with little impact on overall performance. Other applications expect some read-only input data to be present in files, like configuration data or secret keys.

Ephemeral volumes are designed for these use cases: they follow the Pod's lifetime and get created and deleted along with the Pod.

Kubernetes supports several different kinds of ephemeral volumes for different purposes:

• emptyDir: empty at Pod startup, with storage coming locally from the kubelet base directory
(usually the root disk) or RAM
• configMap, downwardAPI, secret: inject different kinds of Kubernetes data into a Pod
• CSI ephemeral volumes: similar to the previous volume kinds, but provided by special CSI drivers
which specifically support this feature
• generic ephemeral volumes, which can be provided by all storage drivers that also support persistent
volumes
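The emptyDir case, for instance, can be sketched as a cache volume that lives and dies with the Pod (names and image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cache-demo
spec:
  containers:
    - name: app
      image: nginx:1.25
      volumeMounts:
        - name: cache
          mountPath: /cache       # scratch space visible to the container
  volumes:
    - name: cache
      emptyDir:
        medium: ""                # node's default storage; "Memory" would use RAM instead
```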
BASIC SECURITY CONCEPTS
Basic Security Concepts: The 4C's of Cloud Native security
You can think about security in layers. The 4C's of Cloud Native security are Cloud, Clusters, Containers, and Code. Each
layer of the Cloud Native security model builds upon the next outermost layer. The Code layer benefits from strong base
(Cloud, Cluster, Container) security layers. You cannot safeguard against poor security standards in the base layers by
addressing security at the Code level.
Basic Security Concepts: Code
Application code is one of the primary attack surfaces over which you have the most control.
While securing application code is outside of the Kubernetes security topic, here are recommendations to protect
application code.

Area of Concern for Code / Recommendation


Access over TLS only If your code needs to communicate by TCP, perform a TLS handshake with the client ahead
of time. With the exception of a few cases, encrypt everything in transit. Going one step
further, it's a good idea to encrypt network traffic between services.
Limiting port ranges of communication This recommendation may be a bit self-explanatory, but wherever possible you should only
expose the ports on your service that are absolutely essential for communication or metric
gathering.
3rd Party Dependency Security It is a good practice to regularly scan your application's third-party libraries for known
security vulnerabilities. Each programming language has a tool for performing this check
automatically.
Static Code Analysis Most languages provide a way for a snippet of code to be analyzed for any potentially
unsafe coding practices. Whenever possible you should perform checks using automated
tooling that can scan codebases for common security errors.
Dynamic probing attacks There are a few automated tools that you can run against your service to try some of the
well-known service attacks. These include SQL injection, CSRF, and XSS.
Basic Security Concepts: Cluster/Container
Components of the Cluster security

 RBAC Authorization (Access to the Kubernetes API)


 Authentication
 Application secrets management (and encrypting them in
etcd at rest)
 Ensuring that pods meet defined Pod Security Standards
 Quality of Service (and Cluster resource management)
 Network Policies
 TLS for Kubernetes Ingress
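RBAC authorization from the list above is expressed as Role and RoleBinding objects. A minimal read-only sketch, with hypothetical role and user names:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader                # hypothetical role name
rules:
  - apiGroups: [""]               # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: read-pods
subjects:
  - kind: User
    name: jane                    # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

The Role grants only the verbs listed, and the RoleBinding scopes that grant to one namespace, following least privilege.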
Basic Security Concepts: Cloud

In many ways, the Cloud (or co-located servers, or the corporate datacenter) is the trusted computing base of a Kubernetes
cluster. If the Cloud layer is vulnerable (or configured in a vulnerable way) then there is no guarantee that the components
built on top of this base are secure. Each cloud provider makes security recommendations for running workloads securely
in their environment
Area of Concern for Kubernetes Infrastructure Recommendation

Network access to API Server (Control plane)


All access to the Kubernetes control plane is not allowed publicly on the internet and is controlled by network
access control lists restricted to the set of IP addresses needed to administer the cluster.

Network access to Nodes (nodes)


Nodes should be configured to only accept connections (via network access control lists) from the control plane on
the specified ports, and accept connections for services in Kubernetes of type NodePort and LoadBalancer. If
possible, these nodes should not be exposed on the public internet entirely.

Kubernetes access to Cloud Provider API


Each cloud provider needs to grant a different set of permissions to the Kubernetes control plane and nodes. It is
best to provide the cluster with cloud provider access that follows the principle of least privilege for the resources it
needs to administer. The Kops documentation provides information about IAM policies and roles.

Access to etcd
Access to etcd (the datastore of Kubernetes) should be limited to the control plane only. Depending on your
configuration, you should attempt to use etcd over TLS. More information can be found in the etcd documentation.

etcd Encryption
Wherever possible it's a good practice to encrypt all storage at rest, and since etcd holds the state of the entire
cluster (including Secrets) its disk should especially be encrypted at rest.
Basic Security Concepts: Security Checklist
Authentication & Authorization:

 The system:masters group is not used for user or component authentication.
 The kube-controller-manager is running with --use-service-
account-credentials enabled.
 The root certificate is protected. Intermediate and leaf certificates have an expiry date no more than 3 years in the future.
 A process exists for periodic access review, and reviews
occur no more than 24 months apart.
 The Role Based Access Control Good Practices is followed
for guidance related to authentication and authorization.
Basic Security Concepts: Security Checklist
Network security

 The CNI plugin in use supports network policies.


 Ingress and egress network policies are applied to all
workloads in the cluster.
 Default network policies within each namespace, selecting
all pods, denying everything, are in place.
 If appropriate, a service mesh is used to encrypt all
communications inside of the cluster.
 The Kubernetes API, kubelet API and etcd are not exposed
publicly on Internet.
 Access from the workloads to the cloud metadata API is
filtered.
 Use of LoadBalancer and ExternalIPs is restricted.
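The "default network policies ... denying everything" item above can be sketched as a per-namespace default-deny policy (repeated in each namespace):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: default              # apply one per namespace
spec:
  podSelector: {}                 # empty selector matches all pods in the namespace
  policyTypes:
    - Ingress                     # no ingress rules listed, so all inbound traffic is denied
    - Egress                      # no egress rules listed, so all outbound traffic is denied
```

Workloads then opt back in with narrower allow policies, which only takes effect if the CNI plugin enforces network policies.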
Basic Security Concepts: Security Checklist
Pod security

 RBAC rights to create, update, patch, and delete workloads are only granted if necessary.
 Appropriate Pod Security Standards policy is applied for all
namespaces and enforced.
 Memory limit is set for the workloads with a limit equal or
inferior to the request.
 CPU limit might be set on sensitive workloads.
 For nodes that support it, Seccomp is enabled with
appropriate syscalls profile for programs.
 For nodes that support it, AppArmor or SELinux is enabled
with appropriate profile for programs.
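Several of the items above (memory limits equal to requests, CPU limits, seccomp) appear directly in the Pod spec. A sketch with hypothetical names and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app              # hypothetical name
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault        # enable the container runtime's default seccomp profile
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:
          memory: "128Mi"
          cpu: "250m"
        limits:
          memory: "128Mi"         # memory limit equal to the request, as the checklist recommends
          cpu: "500m"             # optional CPU cap for sensitive workloads
```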
Basic Security Concepts: Security Checklist
Secrets:

 ConfigMaps are not used to hold confidential data.


 Encryption at rest is configured for the Secret API.
 If appropriate, a mechanism to inject secrets stored in third-
party storage is deployed and available.
 Service account tokens are not mounted in pods that don't
require them.
 Bound service account token volume is in-use instead of non-
expiring tokens.
Basic Security Concepts: Security Checklist
Images:

  Minimize unnecessary content in container images.


  Container images are configured to be run as unprivileged
user.
  References to container images are made by sha256 digests
(rather than tags) or the provenance of the image is validated
by verifying the image's digital signature at deploy time via
admission control.
  Container images are regularly scanned during creation and
in deployment, and known vulnerable software is patched.
CLOUD ADAPTATION TO
KUBERNETES
Cloud adaptation to Kubernetes

Cloud adaptations with Kubernetes help simplify and automate container management, making it easier to deploy
and manage applications in the cloud.
Cloud adaptation to Kubernetes: Key Terms
• Container Orchestration: Kubernetes enables automated container management, providing features such as scheduling, scaling, updating and deploying containers.

• Configuration Management: Kubernetes provides tools for managing the configuration of containers, including secret management, storage volume management and configuration settings management.

• Update management: Kubernetes enables applications to be updated seamlessly, managing the update process without service interruption.

• Replication: Kubernetes enables automatic replication of containers to improve application resiliency and availability.

• Resource management: Kubernetes provides tools for resource management, including memory management, CPU usage management and quota management.

• Security: Kubernetes provides security features such as identity management, certificate management and security policy management.

• Multi-cloud: Kubernetes allows applications to be deployed across multiple cloud infrastructures, enabling application portability between clouds.
Cloud adaptation to Kubernetes: Azure

Azure Kubernetes Service (AKS) offers


the quickest way to start developing and
deploying cloud-native apps in Azure,
datacenters, or at the edge with built-in
code-to-cloud pipelines and guardrails.
Get unified management and
governance for on-premises, edge, and
multi-cloud Kubernetes clusters.
Cloud adaptation to Kubernetes: Azure
Azure offers several options for running Kubernetes on its cloud platform, including managed Kubernetes services and K8S on Azure VMs.
Here are some steps you can take to adapt to Azure cloud for Kubernetes :

Choose a Kubernetes Service : Azure offers several Kubernetes services, including Azure Kubernetes Service (AKS), Azure Red Hat
OpenShift, and Azure Arc enabled Kubernetes. Choose the service that best suits your needs based on your requirements and the level of control
you need.
 
Deploy Kubernetes : Once you've selected a Kubernetes service, you can deploy Kubernetes on Azure by following the service-specific
deployment steps. For example, to deploy AKS, you can use the Azure Portal or Azure CLI.
 
Configure Networking : Kubernetes on Azure requires networking configuration to enable communication between pods and services. Azure
offers several options for configuring networking, including Azure CNI. Choose the networking solution that best suits your needs.
 
Secure Your Cluster : Security is an important consideration when running Kubernetes on any cloud platform. Azure provides several built-in
security features, such as Azure Security Center, Azure Active Directory integration, and Role-Based Access Control (RBAC). Make sure to
enable these features to secure your cluster.
 
Monitor Your Cluster: Monitoring your Kubernetes cluster is important to ensure its performance and availability. Azure offers several tools for monitoring Kubernetes, such as Azure Monitor and Azure Log Analytics. Use these tools to track the health of your cluster and diagnose issues.
 
Automate Deployment : Automating deployment can save time and reduce the risk of errors. Azure offers several tools for automating
Kubernetes deployment, such as Azure DevOps, Azure Kubernetes Service (AKS) Deployment Center, and Azure Arc enabled Kubernetes.
Choose the tool that best suits your needs.
Cloud adaptation to Kubernetes: AWS
Running Kubernetes in the cloud is easy with AWS. For this purpose, AWS provides a scalable and highly available infrastructure of virtual machines, integrations with services from the community, and the managed Kubernetes service Amazon Elastic Kubernetes Service (EKS), which is certified as Kubernetes-conformant.
Kubernetes in Multi Cloud: Best Practices
K8s clusters across multiple clouds
Kubernetes is a platform with similar
aspects to those in a data center.

This includes the following:

• networking
• servers
• storage
• security and policies
• permissions and RBAC
• container images

Workloads within a Kubernetes cluster


act as an entire environment inside of one
platform. Some engineers call it "the data
center of the cloud."
Kubernetes in Multi Cloud: Best Practices
When you design a multi-cloud Kubernetes strategy, you should think about four critical features.

• Standardizing cluster policies


• Tracking versions
• Spreading workloads
• Proper labeling
