PowerProtect Data Manager Kubernetes Integration - Participant Guide
Kubernetes also provides the ability to orchestrate a cluster of virtual machines and schedule containers to run on those virtual machines based on their available compute resources and the resource requirements of each container. Containers are grouped into pods, the basic operational unit for Kubernetes, which can be scaled to your desired state.
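As a sketch of this desired-state model, a Deployment manifest such as the following (all names are illustrative assumptions) asks Kubernetes to keep three replicas of a pod running and to schedule them based on resource requests:

```yaml
# Illustrative Deployment: Kubernetes schedules three pod replicas across
# the cluster's nodes and maintains that desired state automatically.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app            # name is an assumption for this example
spec:
  replicas: 3              # the desired state: three pods at all times
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web
        image: nginx:1.25
        resources:
          requests:
            cpu: 100m      # the scheduler places pods based on such requests
            memory: 128Mi
```

If a node fails or a pod is deleted, the control plane starts replacement pods until the observed state matches `replicas: 3` again.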
CSI
Typically, storage vendors (such as Dell EMC) provide the CSI driver. CSI drivers implement an interface between the container orchestrator and Dell EMC arrays. The driver is a plug-in that is installed into a Kubernetes environment to provide persistent storage using the Dell EMC storage system.
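For example, a cluster administrator exposes such a driver through a StorageClass along the following lines. This is a sketch only: the StorageClass name is illustrative, and the provisioner string is an assumption whose exact value comes from the vendor's CSI driver documentation.

```yaml
# Illustrative StorageClass backed by a vendor CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: dellemc-block                     # illustrative name
provisioner: csi-powerstore.dellemc.com   # assumed driver name; check vendor docs
reclaimPolicy: Delete                     # delete the backing volume with the claim
volumeBindingMode: Immediate
```

Persistent volume claims that reference this StorageClass are then dynamically provisioned on the array by the CSI driver.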
kubelet: The kubelet is the primary “node agent” that runs on each node in the Kubernetes cluster. The kubelet ensures that containers are running in a Kubernetes Pod. Within a Kubernetes cluster, the kubelet functions as a local agent that watches for Pod specs from the Kubernetes API server. The kubelet is also responsible for registering the node with the Kubernetes cluster, sending events and Pod status, and reporting resource utilization.
Master
The Kubernetes Master is the access point (or the control plane) from which
administrators and other users interact with the cluster to manage the scheduling
and deployment of containers. A cluster will always have at least one Master, but
may have more depending on the cluster’s replication pattern.
Node
Each Kubernetes node runs an agent process that is called a kubelet that is
responsible for managing the state of the node: starting, stopping, and maintaining
application containers based on instructions from the control plane. The kubelet
collects performance and health information from the node, pods, and containers it
runs and shares that information with the control plane to help it make scheduling
decisions.
A pod can define one or more volumes, such as a local disk or network disk, and
expose them to the containers in the pod, which allows different containers to share
storage space. For example, volumes can be used when one container downloads
content and another container uploads that content somewhere else.
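A minimal sketch of this pattern, with illustrative names and images: two containers in one Pod share an emptyDir volume, one writing content and the other serving it.

```yaml
# Illustrative Pod: two containers sharing one scratch volume.
apiVersion: v1
kind: Pod
metadata:
  name: content-pod        # illustrative name
spec:
  volumes:
  - name: shared-data
    emptyDir: {}           # scratch space visible to both containers
  containers:
  - name: writer           # periodically writes content into the shared volume
    image: busybox:1.36
    command: ["sh", "-c", "while true; do date > /data/index.html; sleep 60; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: web              # serves the same files over HTTP
    image: nginx:1.25
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
```

Because both containers mount the same volume, whatever the writer container produces is immediately visible to the web container.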
Pod
Pod: A Pod is a collection of containers and the basic execution unit of a Kubernetes application. It is the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents processes running on your cluster.
Docker is the most common container runtime that is used in a Kubernetes Pod,
but Pods support other container runtimes as well.
• Pods that run multiple containers that must work together. A Pod might
encapsulate an application that is composed of multiple co-located containers
that are tightly coupled and must share resources. These co-located containers
might form a single cohesive unit of service: one container serving files from a shared volume to the public, while a separate “sidecar” container refreshes or
updates those files. The Pod wraps these containers and storage resources
together as a single manageable entity.
PV and PVC
PV: A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. It is a resource in the cluster, just as a node is a cluster resource. PVs are volume plug-ins like Volumes but have a life cycle independent of any individual Pod that uses the PV. This object captures the details of the implementation of the storage, such as NFS, iSCSI, or a cloud provider-specific storage system.
PVC: A PersistentVolumeClaim (PVC) is a request for storage by a user. PVCs consume PV resources in the same way that Pods consume node resources: a claim can request a specific size and access mode, and the cluster binds it to a matching PV.
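A PVC request might look like the following sketch; the claim name, StorageClass name, and size are assumptions for illustration.

```yaml
# Illustrative PVC: requests 10 GiB of single-node read-write storage
# from an assumed StorageClass.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data                     # illustrative name
spec:
  accessModes:
  - ReadWriteOnce                   # mountable read-write by a single node
  storageClassName: dellemc-block   # assumed StorageClass name
  resources:
    requests:
      storage: 10Gi
```

When the claim is created, the CSI driver behind the StorageClass provisions a matching PV, and the claim can then be mounted into a Pod as a volume.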
VMware Velero is an open-source tool to safely back up, recover, and migrate
Kubernetes clusters and persistent volumes. It works both on-premises and in a
public cloud. Velero consists of a server process running as a deployment in your
Kubernetes cluster and a command-line interface (CLI) with which DevOps teams
and platform operators configure scheduled backups, trigger manual backups, and
perform restores.
Velero uses the Kubernetes API to capture the state of cluster resources and to
restore them when necessary. Velero backups capture subsets of the cluster’s
resources, filtering by namespace, resource type, and label selector, which
provides a high degree of flexibility around what is backed up and restored. Also,
Velero enables you to back up and restore your applications’ persistent data
alongside their configurations, using either your storage platform’s native snapshot
capability or an integrated file-level backup tool.
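For instance, the namespace, resource-type, and label filtering described above can be expressed in a Velero Backup custom resource along these lines (the backup name, namespace, and labels are illustrative assumptions):

```yaml
# Illustrative Velero Backup: captures a filtered subset of cluster resources.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: nginx-backup
  namespace: velero          # Velero's server runs in this namespace by default
spec:
  includedNamespaces:
  - web                      # back up only this namespace
  includedResources:
  - deployments
  - persistentvolumeclaims   # limit the backup to these resource types
  labelSelector:
    matchLabels:
      app: nginx             # further filter by label
  ttl: 720h0m0s              # retention period for this backup
```

The same filters are available as flags on the Velero CLI; the custom resource form is what a scheduled backup produces on each run.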
The following data protection attributes are specified when the Centralized
protection policy is created: Application Type, Purpose, Assets, Backup Start and
End Time, Schedule, and SLA.
• Kubernetes 1.17 to 1.20: Any on-premises storage, AWS EBS, Azure Disk, or GCE Persistent Disk that has a CSI driver with snapshot support. The driver must support CSI 1.0.0 or higher, with CSI beta snapshots up to the 1.19 release and CSI GA snapshots for 1.20.
• Kubernetes 1.17 to 1.20: vSphere CNS storage for native Kubernetes clusters on vSphere 6.7 U3 and higher, using the vSphere CSI driver 2.0.1 or higher with FCD snapshots.
• Kubernetes 1.17 to 1.20: vSphere CNS storage for Tanzu Kubernetes guest clusters on vSphere 7.0 U1 P02 and higher, using the vSphere CSI driver 2.0.1 or higher with FCD snapshots.
The following steps describe how to add Protection Storage to PowerProtect Data Manager.
Step One
From the left navigation pane, select Infrastructure > Storage > Protection
Storage tab.
Click Add.
Step Two
Specify the Name and the FQDN for the PowerProtect DD appliance.
From the PowerProtect Data Manager left navigation pane, select Infrastructure > Asset Sources. Select the Kubernetes asset type and click Enable Source.
After the Kubernetes asset type is enabled, select the Infrastructure > Asset Sources > Kubernetes tab. Click Add to add the Kubernetes cluster. The Kubernetes cluster master node's FQDN (or IP address) and credentials are required to add the Kubernetes cluster to PowerProtect Data Manager. In the case of an HA cluster, the external IP of the load balancer must be used for the Asset Source.
Step One
Log in to PowerProtect Data Manager. From the left navigation pane, select
Infrastructure > Asset Sources.
Step Two
From the left navigation pane, select Infrastructure > Asset Sources >
Kubernetes tab.
Select the Infrastructure > Assets > Kubernetes tab; all Kubernetes namespaces are listed.
The following Kubernetes object can be discovered and added as an asset by the PowerProtect Data Manager UI:
• Namespace
PowerProtect Data Manager supports the following backup levels for Kubernetes namespaces:
• Crash Consistent: Select this option to snapshot the persistent volumes bound to the persistent volume claims in the Kubernetes namespace and back them up to the PowerProtect DD system.
• Exclusion: Select this option to exclude the Kubernetes assets (for example, PVCs) in this group from the protection policy.
Use crash consistent protection policy to protect the Kubernetes namespaces and
their related PVCs.
When the crash consistent protection policy is added, the following data protection
attributes can be specified:
• Assets: Specify the PowerProtect assets that need to be protected. These can be Oracle databases, SAP HANA databases, Kubernetes namespaces, Microsoft SQL databases, Microsoft Exchange databases, VMware virtual machines, or Linux/Windows file systems.
• Backup schedule: Specify the frequency of the backup operation, the backup operation start and stop times, and the retention period for the backup data.
The following steps describe how to add a crash consistent protection policy.
Select the desired SLA to associate with the Crash Consistent protection policy.
Step One
From the left navigation pane, select Protection > Protection Policies.
Click Add.
Step Two
Specify the name for the protection policy and select Kubernetes as policy type.
Step Five
To create a new PowerProtect DD Storage Unit, select New on the Storage Unit
drop-down menu.
Review the Crash Consistent protection policy, and click Finish to create the protection policy.
Step Eight
From the left navigation pane, click Jobs > Protection Jobs to verify that the crash
consistent protection policy completes successfully.
Once a crash consistent protection policy is added, you can perform a manual
backup by using the Protect Now option from the Protection > Protection
Policies page.
The Protect Now option on the Protection Policies page allows you to manually
start a backup operation to protect multiple Kubernetes namespaces that are in the
designated protection policy.
To start the Protect Now option on the Protection Policies page, the protection policy must be enabled, and the protection policy purpose must be one of the following: Crash Consistent, Centralized Protection, or Application Aware.
The following steps describe how to start the crash consistent protection policy from PowerProtect Data Manager.
Step One
From the left navigation pane, select Protection > Protection Policies.
Step Two
On the Assets Selection page, select the All assets defined in the Protection
Policy option.
• Full
• Keep For: 3 Days
Step Four
Step Five
− Without shards.
When PowerProtect Data Manager performs the application consistent backup, it
places the database in a quiescent state and then takes a snapshot of the
database. After the snapshot is taken, the database resumes normal operations
and PowerProtect Data Manager backs up the snapshot to the PowerProtect DD
system. Usually, the snapshot operation is instantaneous and the downtime is
minimal.
These backups are agentless, in that the PowerProtect Data Manager can take a
snapshot of containers without the need for software installation in the database
application. The snapshot is then backed up to the PowerProtect DD system using
the standard procedures for the Kubernetes environment.
The ppdmctl.tar.gz file must first be extracted on your local system, and then the related configuration files and the ppdmctl control command copied to the Kubernetes host. The ppdmctl control command is used to deploy (create) and administer the application template.
The .yaml and .json files under the ppdmctl/examples folder are not application templates themselves. They contain the necessary information about how to create the application template.
The default yaml file is used to create the application template for a single-instance database. The customized yaml file (with the --inputfile parameter) is used in an environment where the database runs in a cluster.
The ppdmctl command is used to create the application template. You can create
the application template from a customized yaml file (with the --inputfile
parameter) or from the default yaml file.
Run the following command to create a default application template for a specific
Kubernetes namespace:
After you create the application template, no special steps are required to perform
the application-consistent database backup.
From the PowerProtect Data Manager UI, select Protection > Protection Policies > Add to create a Crash Consistent protection policy that backs up the Kubernetes namespace where the database application resides.
• Namespace
• PersistentVolumeClaim (PVC)
During the restore operation, you must select one of the following options on the
Purpose page from the PowerProtect Data Manager UI:
Step One
From the left navigation pane, select Recovery > Assets > Kubernetes tab.
On the left pane, click DD. On the right pane, select the backup copy that you want
to restore.
Click Restore.
Step Three
On the Cluster page, select the destination Kubernetes cluster where you want to
restore to.
On the Purpose page, select what Kubernetes objects you want to restore.
Step Five
On the Restore Type page, select the destination Kubernetes namespace where
you want to restore to.
Step Six
Step Seven
Step Eight
On the Jobs > Protection Jobs page, wait for the restore operation to complete
successfully.
The product version covered in the DES-3521 Proven Professional exam for PowerProtect Data Manager is 19.8.