Kubernetes Administrator's
Guide
Release 10.5
Last updated: 2024-09-30
Legal Notice
Copyright © 2024 Veritas Technologies LLC. All rights reserved.
Veritas, the Veritas Logo, and NetBackup are trademarks or registered trademarks of Veritas
Technologies LLC or its affiliates in the U.S. and other countries. Other names may be
trademarks of their respective owners.
This product may contain third-party software for which Veritas is required to provide attribution
to the third party (“Third-party Programs”). Some of the Third-party Programs are available
under open source or free software licenses. The License Agreement accompanying the
Software does not alter any rights or obligations you may have under those open source or
free software licenses. Refer to the Third-party Legal Notices document accompanying this
Veritas product or available at:
https://www.veritas.com/about/legal/license-agreements
The product described in this document is distributed under licenses restricting its use, copying,
distribution, and decompilation/reverse engineering. No part of this document may be
reproduced in any form by any means without prior written authorization of Veritas Technologies
LLC and its licensors, if any.
The Licensed Software and Documentation are deemed to be commercial computer software
as defined in FAR 12.212 and subject to restricted rights as defined in FAR Section 52.227-19
"Commercial Computer Software - Restricted Rights" and DFARS 227.7202, et seq.
"Commercial Computer Software and Commercial Computer Software Documentation," as
applicable, and any successor regulations, whether delivered by Veritas as on premises or
hosted services. Any use, modification, reproduction release, performance, display or disclosure
of the Licensed Software and Documentation by the U.S. Government shall be solely in
accordance with the terms of this Agreement.
http://www.veritas.com
Technical Support
Technical Support maintains support centers globally. All support services will be delivered
in accordance with your support agreement and the then-current enterprise technical support
policies. For information about our support offerings and how to contact Technical Support,
visit our website:
https://www.veritas.com/support
You can manage your Veritas account information at the following URL:
https://my.veritas.com
If you have questions regarding an existing support agreement, please email the support
agreement administration team for your region as follows:
Japan [email protected]
Documentation
Make sure that you have the current version of the documentation. Each document displays
the date of the last update on page 2. The latest documentation is available on the Veritas
website:
https://sort.veritas.com/documents
Documentation feedback
Your feedback is important to us. Suggest improvements or report errors or omissions to the
documentation. Include the document title, document version, chapter title, and section title
of the text on which you are reporting. Send feedback to:
You can also see documentation information or ask a question on the Veritas community site:
http://www.veritas.com/community/
https://sort.veritas.com/data/support/SORT_Data_Sheet.pdf
Contents
■ Overview
Overview
The NetBackup web UI provides the capability to back up and restore Kubernetes
applications in the form of namespaces. The protectable assets in the
Kubernetes clusters are automatically discovered in the NetBackup environment,
and administrators can select one or more protection plans that contain the desired
schedule, backup, and retention settings.
The NetBackup web UI lets you perform the following operations:
■ Add Kubernetes clusters for protection.
■ View discovered namespaces.
■ Manage permissions for roles.
■ Set resource limits to optimize the load on your infrastructure and network.
■ Manage protection plans and intelligent groups to protect Kubernetes assets.
■ Restore namespaces and persistent volumes to the same or an alternate Kubernetes
cluster.
■ Monitor backup and restore operations.
■ Perform image expiration, image import, and image copy operations.
Overview of NetBackup for Kubernetes
Features of NetBackup support for Kubernetes
■ Auto NetBackup Kubernetes Agent Configuration: Adding a Kubernetes cluster and
performing configurations such as storage class, volume snapshot class, and data
mover configuration can be done with the supported automated deployment.
■ Integration with NetBackup role-based access control (RBAC): The NetBackup web
UI provides RBAC roles to control which NetBackup users can manage Kubernetes
operations in NetBackup. The user does not need to be a NetBackup administrator
to manage Kubernetes operations. Use a single protection plan to protect multiple
Kubernetes namespaces; the assets can be spread over multiple clusters. You are
not required to know Kubernetes commands to protect the Kubernetes assets.
■ Intelligent management of Kubernetes assets: NetBackup automatically discovers
the namespaces, persistent volumes, persistent volume claims, and so on, in the
Kubernetes clusters. You can also perform manual discovery. After the assets are
discovered, the Kubernetes workload administrator can select one or more protection
plans to protect them.
■ Kubernetes-specific credentials: Kubernetes service accounts are used to
authenticate and manage the clusters.
■ Full and incremental discovery: Discovery when a new cluster is added to
NetBackup is always a full discovery. Once the Kubernetes cluster is added, an
auto discovery cycle is triggered to discover all the assets available on the
Kubernetes cluster. The first auto discovery of the day is a full discovery and
subsequent auto discoveries are incremental.
■ Snapshot-only backups and backup from snapshot: Backups are managed entirely
by the NetBackup server from a central location. Administrators can schedule
automatic, unattended backups for namespaces on different Kubernetes clusters.
The NetBackup web UI supports backup and restore of namespaces from one
interface, backup schedule configuration for full backups, manual backups and
snapshot-only backups, and resource throttling for each cluster to improve the
performance of backups. NetBackup can perform backups of Kubernetes
namespaces with snapshot methodology, achieving faster recovery time objectives.
■ Restore from snapshot and restore from backup copy: Restore Kubernetes
namespaces and persistent volumes to different locations. Restore to a different
Kubernetes cluster flavor using restore from backup copy with parallel restore jobs.
■ Client-side data deduplication support: Client-side data deduplication is enabled
for Kubernetes. For more details, refer to the About client-side deduplication section
in the NetBackup Deduplication Guide.
■ Auto Image Replication (AIR): The backups that are generated in one NetBackup
Kubernetes cluster can be replicated to storage in one or more target NetBackup
domains. This is referred to as AIR. AIR is supported for all schedule types.
■ Protection of stateful applications: Kubernetes applications that use persistent
volumes to maintain their state can be protected. Backup and restore of Persistent
Volume Claims (PVCs) of mode file system and/or block are supported for the
Container Storage Interface (CSI) providers that support the required features.
■ Import and verify: Import is a two-step operation. The first step recreates the catalog
entries for the backups that are on the specified media. Once the second phase of
the import completes, catalog entries are created for the files that were backed up
by those images. Verify: NetBackup can verify the contents of a backup by
comparing its contents to what is recorded in the NetBackup catalog.
■ Federal Information Processing Standards (FIPS) support for Red Hat platforms:
NetBackup Kubernetes on the Red Hat platform provides support for FIPS-compliant
communication.
■ Accelerator backup support for Kubernetes: NetBackup supports accelerator backup
for Kubernetes workloads, which reduces the backup time.
■ OpenShift Virtualization support for Kubernetes workload: NetBackup version 10.4.1
and later provides backup and restore support for namespaces with one or more
virtual machines running on Kubernetes clusters.
Chapter 2
Deploying and configuring
the NetBackup Kubernetes
operator
This chapter includes the following topics:
3. $ ./get_helm.sh
Note: You must deploy the operator in each cluster, where you want to deploy
NetBackup.
Directory structure:
netbackupkops-helm-chart/
├── .helmignore
├── Chart.yaml
├── charts/
├── templates/
│   ├── deployment.yaml
│   └── _helpers.tpl
└── values.yaml
Note: Before installing a new plug-in, you must uninstall the older plug-in.
5 To change the current directory to your home directory, run the command: cd ~
After log in, the config.json file containing the authorization token is
created or updated. To view the config.json file, run the command: cat
~/.docker/config.json
The output looks like:
{
"auths": {
"https://index.docker.io/v1/": {
"auth": "c3R...zE2"
}
}
}
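The auth value in config.json is the Base64 encoding of username:password for the registry. As a minimal local sketch (using hypothetical credentials, not real Docker Hub ones), you can reproduce such a value:

```shell
# Hypothetical credentials for illustration only; never commit real ones.
# Docker stores the "auth" field as base64("username:password").
printf '%s' 'myuser:mypass' | base64
# prints: bXl1c2VyOm15cGFzcw==
```

To check which account an existing entry belongs to, the value can be decoded with `base64 -d`.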
■ To load the image to the docker cache and push the image to the docker
image repository, run the commands:
■ Load the tar file for the NetBackup Kubernetes Operator:
docker load -i <name of the tar file>
■ Push the image to a repository from where Kubernetes can fetch the
image at the time of NetBackup Kubernetes Operator deployment.
docker push <repo-name/image-name:tag-name>
Note: In this example, Docker is used for reference. You can use any other
CLI tool that provides equivalent functionality.
■ Replace the value for image in the manager section with your image name
and tag (repo-name/image-name:tag-name).
■ Change the value of replicas to 0.
8 Sizing of the metadata persistent volume is required. The default persistent volume
size for the Kubernetes operator is 10Gi, and the size is configurable.
You can change the value for storage from 10Gi to a higher value before
deploying the plug-in; the nbukops pod then mounts a PVC of that size.
You can specify the metadata persistent volume size in values.yaml.
The Persistent Volume Claim in deployment.yaml under the Helm chart looks like
this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    component: netbackup
  name: {{ .Release.Namespace }}-netbackupkops
  namespace: {{ .Release.Namespace }}
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
■ During a fresh installation, while configuring the Helm chart, you can modify
the size of the PVC storage in the deployment.yaml of the
netbackupkops-helm-chart, which sets the initial PVC size.
■ Post installation, updating the PVC size (dynamic volume expansion) is
supported only by some storage vendors. For more information, refer to
https://kubernetes.io/docs/concepts/storage/persistent-volumes
Note: The default persistent volume size can be increased without losing
data. It is recommended to use a storage provider that supports volume
expansion.
Port requirements for Kubernetes operator deployment
Example:
helm list -n netbackup
Example:
helm history veritas-netbackupkops -n netbackup
Note: Review the Kubernetes configuration to ensure that the Kubernetes API server port
has not been changed from 443 to a non-default port; often 6443 or 8443.
Note: NetBackup Kubernetes Operator (KOps) and datamover pods have additional
requirements (new in NetBackup 10.0).
Source: Kubernetes cluster. Destination: Primary and media server. Port: TCP 13724,
bi-directional (required if using Resilient Network).
Note: Back up the configmap values if they have been changed. An upgrade resets
the Helm values to their defaults, and the old configmap must be patched again after
the upgrade.
Important notes
■ All components (NetBackup primary server, media server, Kubernetes operators,
and data mover) must be the same version.
■ Existing policies continue to take backups but must be restored manually until
the Kubernetes operator is updated.
■ Push the image to a repository from where Kubernetes can fetch the
image at the time of NetBackup Kubernetes operator deployment.
docker push <repo-name/image-name:tag-name>
Note: In this example, Docker is used for reference. You can use any other
CLI tool that provides equivalent functionality.
Example:
helm upgrade veritas-netbackupkops ./netbackupkops-helm-chart -n
netbackup
Note: Upgrading the NetBackup Kubernetes operator will reset the Helm values
to their defaults. Ensure that you back up the old configmap and reapply any
patches if the values change after the upgrade.
Note: Uninstalling the plug-in also deletes the NetBackup Kubernetes operator PVC,
which holds metadata related to snapshot-based backups.
Deletion of the NetBackup Kubernetes operator can result in the loss of the metadata
volume, which also hosts the snapshot metadata. If any snapshots were already
performed, the restore from snapshot copy operation fails in the absence of the
metadata.
In NetBackup 9.1, you must first delete the older snapshots manually and then delete
the associated Velero snapshots.
In NetBackup 10.0, you cannot perform expiration of Velero managed snapshots
which were created using NetBackup 9.1. When the backup images are expired in
NetBackup, the catalog is automatically cleared. But you must delete the snapshot
on Kubernetes server manually.
For more details on manual image expiration operation, see
https://www.veritas.com/content/support.
2 Enter the password upon prompt. Skip this step if you are already logged in
3 Run docker load -i <name of the datamover image file>
4 Run docker tag <datamover image name:tag of the loaded datamover
image> <repo-name/image-name:tag-name>
Note: In the example, docker is used for reference. You can use any CLI tool
which has equivalent capabilities.
6 Ensure that the configmap with the primary server name has the image value set
to <repo-name/image-name:tag-name> as pushed in step 4.
Example:
apiVersion: v1
data:
  datamover.properties: image=<image-repo>/datamover:<datamover tag>
  version: "1"
kind: ConfigMap
metadata:
  name: <Primary Server Name>
  namespace: <NetBackup Kubernetes Operator Namespace Name>
apiVersion: v1
kind: Secret
metadata:
  name: <kops-namespace>-nb-config-deploy-secret
  namespace: <kops-namespace>
type: Opaque
stringData:
  apikey: <Enter the value of API key from the earlier step>
Preinstallation
1 Edit the following fields in netbackupkops-helm-chart/values.yaml.
■ containers.manager.image: Container registry URL for pulling the NetBackup
Kubernetes controller image.
■ imagePullSecrets name: Name of the image pull secret if the container
registry requires authentication to pull images.
■ nbprimaryserver: Configured name of the NetBackup primary server.
Automated configuration of NetBackup protection for Kubernetes
5 Mapping between the storage class and the snapshot class is managed through
the storageMap. If a new storage option is added to the cluster, it can also be
updated in the backup-operator-configuration configmap after installation.
■ storageMap is a dictionary of key-value fields where the key is a storage class
and its value is a tuple consisting of (snapshotClass,
storageClassForBackupDataMovement,
storageClassForRestoreFromBackup). This field is mandatory to specify the
mapping between a storage class and a snapshot class.
■ The snapshotClass must be created with the same provisioner as the storage
class, and it must be capable of snapshotting the storage class. All storage
classes should have an entry for snapshotClass.
storageMap:
  <storage class name>:
    snapshotClass: <mandatory, volume snapshot class for the storage class>
    storageClassForBackupDataMovement: <optional, storage class used to move
      backup data to the NetBackup media server>
    storageClassForRestoreFromBackup: <optional, storage class used to restore
      data to the k8s cluster>
Example for OpenShift storage classes, where the cephfs storage class has all fields
specified:
storageMap:
  ocs-storagecluster-cephfs:
    storageClassForBackupDataMovement: ocs-storagecluster-cephfs
    storageClassForRestoreFromBackup: ocs-storagecluster-cephfs
    snapshotClass: ocs-storagecluster-cephfsplugin-snapclass
  ocs-storagecluster-ceph-rbd:
    snapshotClass: ocs-storagecluster-rbdplugin-snapclass
Install
To install the Helm chart, run the following command:
# helm install veritas-netbackupkops <path to
netbackupkops-helm-chart> -n <kops namespace>
Debug
To get the config-deploy pod from the Kubernetes operator namespace, run the
following command:
Configure settings for NetBackup snapshot operation
Logs
To check the logs from the pod <namespace>-netbackup-config-deploy, run the
following command:
# kubectl logs <pod-name> -n <kops namespace>
Log level
Sets the log level of the configuration pod. Values can be set to DEBUG, INFO, or
ERROR. The default value is INFO.
Note: For more details, refer to the NetBackup Kubernetes Quick Start Guide.
Note: You can add both labels on a single storage class if the storage class
supports both the Block volume (backed by raw block) and the Filesystem volume.
Note: To get the configuration value, you can run the command: kubectl get
configmaps <namespace>-backup-operator-configuration -n <namespace>
-o yaml > {local.file}
To label the storage classes, run the following commands that are shown in
the examples:
Example 1. Run the command:# kubectl get sc
Name                          Provisioner
ocs-storagecluster-ceph-rgw   openshift-storage.ceph.rook.io/bucket
ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com
openshift-storage.noobaa.io   openshift-storage.noobaa.io/obc
thin                          kubernetes.io/vsphere-volume
Note: You need a storage class with volume binding mode set to Immediate.
If the PVC volume binding mode is WaitForFirstConsumer then it affects the
creation of the snapshot from the PVC. This situation can cause the backup
jobs to fail.
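A storage class suitable for snapshot-based backups therefore carries volumeBindingMode: Immediate. A minimal sketch follows; the class name and provisioner here are illustrative examples, not requirements:

```yaml
# Sketch of a storage class usable for snapshot creation.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-cephfs-immediate   # hypothetical name
provisioner: openshift-storage.cephfs.csi.ceph.com
volumeBindingMode: Immediate       # not WaitForFirstConsumer
```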
storageclass.storage.k8s.io/ocs-storagecluster-cephfs labeled
2. To mark a valid volume snapshot class for NetBackup usage, add the following
label: netbackup.veritas.com/default-csi-volume-snapshot-class=true. If a
VolumeSnapshotClass with the NetBackup label is not found, then the backup
from snapshot job for the metadata image and the restore jobs fail with the error
message: Failed to create snapshot of the Kubernetes namespace.
To label the volume snapshot classes, run the commands shown in the following
examples:
Example 1. Run the command: # kubectl get volumesnapshotclass

Name                                       Driver                                 DeletionPolicy  Age
ocs-storagecluster-cephfsplugin-snapclass  openshift-storage.cephfs.csi.ceph.com  Delete          2d2h
ocs-storagecluster-rbdplugin-snapclass     openshift-storage.rbd.csi.ceph.com     Delete          2d2h

Name                                       Driver                                 DeletionPolicy  Age
ocs-storagecluster-cephfsplugin-snapclass  openshift-storage.cephfs.csi.ceph.com  Delete          2d2h

volumesnapshotclass.snapshot.storage.k8s.io/ocs-storagecluster-cephfsplugin-snapclass labeled

Name                                       Driver
ocs-storagecluster-cephfsplugin-snapclass  openshift-storage.cephfs.csi.ceph.com
3. For each primary server that runs the backup from snapshot and restore from
backup copy operations, create a separate ConfigMap with the primary
server's name.
In the following configmap.yaml example:
■ backupserver.sample.domain.com and mediaserver.sample.domain.com
are the host names of the NetBackup primary and media server.
■ IP: 10.20.12.13 and IP: 10.21.12.13 are the IP addresses of the
NetBackup primary and media server.
apiVersion: v1
data:
  datamover.hostaliases: |
    10.20.12.13=backupserver.sample.domain.com
    10.21.12.13=mediaserver.sample.domain.com
  datamover.properties: |
    image=reg.domain.com/datamover/image:latest
  version: "1"
kind: ConfigMap
metadata:
  name: backupserver.sample.domain.com
  namespace: kops-ns
4. Specify datamover.properties:
image=reg.domain.com/datamover/image:latest with correct data mover
image.
5. Specify datamover.hostaliases if the primary server and the media servers
that are connected to the primary server have short names and host resolution
fails from the data mover. Provide a mapping of all the host names to the IPs
for the primary and the media servers.
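The datamover.hostaliases entries use an IP=hostname form, one mapping per line. As a local sketch of how these pairs read as /etc/hosts-style entries (using the sample values from this section):

```shell
# Convert "IP=hostname" pairs (the datamover.hostaliases format)
# into /etc/hosts-style "IP<TAB>hostname" lines.
printf '%s\n' \
  '10.20.12.13=backupserver.sample.domain.com' \
  '10.21.12.13=mediaserver.sample.domain.com' |
  tr '=' '\t'
```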
6. Create a secret as described in detail in the Point 6 in the Deploy service
package on NetBackup Kubernetes operator section to use a private docker
registry.
Once the secret is created, add the following attributes while creating a
configmap.yaml file.
datamover.properties: |
  image=repo.azurecr.io/netbackup/datamover:10.0.0049
  imagePullSecret=secret_name
8. If the Kubernetes operator is not able to resolve the primary server with the
short names, refer to the following guidelines.
■ If you get the following message when you fetch the certificates: EXIT
STATUS 8500: Connection with the web service was not established, then
verify the host name resolution state from the nbcert logs.
■ If the host name resolution fails, then update the values.yaml file with
hostAliases.
hostAliases:
  - hostnames:
      - backupserver.sample.domain.com
    ip: 10.20.12.13
  - hostnames:
      - mediaserver.sample.domain.com
    ip: 10.21.12.13
Copy and paste the hostAliases example details into a text editor and add them to
the hostAliases in the deployment.
hostAliases example:
hostAliases:
  - ip: 10.15.206.7
    hostnames:
      - lab02-linsvr-01.demo.sample.domain.com
      - lab02-linsvr-01
  - ip: 10.15.206.8
    hostnames:
      - lab02-linsvr-02.demo.sample.domain.com
      - lab02-linsvr-02
imagePullSecrets:
  - name: {{ .Values.netbackupKops.imagePullSecrets.name }}
Note: This step is mandatory to have successful backup from snapshot and
restore from backup copies.
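The imagePullSecrets template reference shown above resolves against values.yaml. A hypothetical values.yaml fragment that would satisfy it (the secret name is an example, not a required value):

```yaml
# Hypothetical values.yaml fragment; "my-registry-secret" is illustrative.
netbackupKops:
  imagePullSecrets:
    name: my-registry-secret
```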
apiVersion: v1
data:
  datamover.hostaliases: |
    10.20.12.13=backupserver.sample.domain.com
    10.21.12.13=mediaserver.sample.domain.com
  datamover.properties: |
    image=reg.domain.com/datamover/image:latest
    DTE_CLIENT_MODE=ON
  version: "1"
kind: ConfigMap
metadata:
  name: backupserver.sample.domain.com
  namespace: kops-ns
VXMS_VERBOSE: Range [0,99]
VERBOSE: Range [0,5]
DTE_CLIENT_MODE: AUTOMATIC | ON | OFF
apiVersion: v1
data:
  datamover.properties: |
    image=reg.domain.com/datamover/image:latest
    VERBOSE=5
    DTE_CLIENT_MODE=OFF
    VXMS_VERBOSE=5
  version: "1"
kind: ConfigMap
metadata:
  name: backupserver.sample.domain.com
  namespace: kops-ns
hostAliases:
  - hostnames:
      - backupserver.sample.domain.com
    ip: 10.20.12.13
  - hostnames:
      - mediaserver.sample.domain.com
    ip: 10.21.12.13
Copy and paste the hostAliases example details into a text editor and add them to
the hostAliases in the deployment.
2 If the data mover is not able to resolve the short names of the backup server or
media server, perform the following steps:
■ Update the configmap with the backup server name.
■ Add the datamover.hostaliases field to map the IP addresses to the host names.
In the following configmap.yaml example:
■ backupserver.sample.domain.com and mediaserver.sample.domain.com
are the host names of the NetBackup primary and media server.
■ 10.20.12.13 and 10.21.12.13 are the IP addresses of the NetBackup
primary and media server.
apiVersion: v1
data:
  datamover.hostaliases: |
    10.20.12.13=backupserver.sample.domain.com
    10.21.12.13=mediaserver.sample.domain.com
  datamover.properties: |
    image=reg.domain.com/datamover/image:latest
  version: "1"
kind: ConfigMap
metadata:
  name: backupserver.sample.domain.com
  namespace: kops-ns
■ If you update a configmap.yaml that is already created, run the following
command to apply the change: kubectl apply -f configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: backupserver.sample.domain.com
  namespace: netbackup
data:
  datamover.hostaliases: |
    10.20.12.13=backupserver.sample.domain.com
    10.21.12.13=mediaserver.sample.domain.com
  datamover.properties: |
    image=reg.domain.com/datamover/image:latest
  datamover.nodeSelector: |
    kubernetes.io/hostname: test1-l94jm-worker-k49vj
    topology.rook.io/rack: rack1
  version: "1"
apiVersion: v1
kind: ConfigMap
metadata:
  name: backupserver.sample.domain.com
  namespace: netbackup
data:
  datamover.hostaliases: |
    10.20.12.13=backupserver.sample.domain.com
    10.21.12.13=mediaserver.sample.domain.com
  datamover.properties: |
    image=reg.domain.com/datamover/image:latest
  datamover.nodeName: test1-l94jm-worker-hbblk
  version: "1"
3. Taint and toleration: Tolerations allow the scheduler to schedule pods onto
nodes with matching taints. Taints and tolerations work together to ensure that
pods are scheduled onto appropriate nodes. If one or more taints are applied
to a node, that node does not accept pods that do not tolerate the taints.
Example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: backupserver.sample.domain.com
  namespace: netbackup
data:
  datamover.hostaliases: |
    10.20.12.13=backupserver.sample.domain.com
    10.21.12.13=mediaserver.sample.domain.com
  datamover.properties: |
    image=reg.domain.com/datamover/image:latest
  datamover.tolerations: |
    - key: "dedicated"
      operator: "Equal"
      value: "experimental"
      effect: "NoSchedule"
  version: "1"
4. Affinity and anti-affinity: Node affinity functions like the nodeSelector field,
but it is more expressive and allows you to specify soft rules. Inter-pod
affinity and anti-affinity allow you to constrain pods based on the labels on
other pods.
Examples:
■ Node Affinity:
apiVersion: v1
kind: ConfigMap
metadata:
  name: backupserver.sample.domain.com
  namespace: netbackup
data:
  datamover.hostaliases: |
    10.20.12.13=backupserver.sample.domain.com
    10.21.12.13=mediaserver.sample.domain.com
  datamover.properties: |
    image=reg.domain.com/datamover/image:latest
  datamover.affinity: |
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                  - test1-l94jm-worker-hbblk
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          preference:
            matchExpressions:
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                  - amd64
  version: "1"
■ Pod Affinity
apiVersion: v1
kind: ConfigMap
metadata:
  name: backupserver.sample.domain.com
  namespace: netbackup
data:
  datamover.hostaliases: |
    10.20.12.13=backupserver.sample.domain.com
    10.21.12.13=mediaserver.sample.domain.com
  datamover.properties: |
    image=reg.domain.com/datamover/image:latest
  datamover.affinity: |
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: component
                operator: In
                values:
                  - netbackup
          topologyKey: kubernetes.io/hostname
  version: "1"
apiVersion: v1
kind: ConfigMap
metadata:
  name: backupserver.sample.domain.com
  namespace: netbackup
data:
  datamover.hostaliases: |
    10.20.12.13=backupserver.sample.domain.com
    10.21.12.13=mediaserver.sample.domain.com
  datamover.properties: |
    image=reg.domain.com/datamover/image:latest
  datamover.topologySpreadConstraints: |
    - maxSkew: 1
      topologyKey: kubernetes.io/hostname
      whenUnsatisfiable: DoNotSchedule
  version: "1"
■ Labels: Labels are key/value pairs attached to objects, such as
pods. Labels are intended to identify attributes of an object that are
significant and relevant to users. Labels can organize and select subsets
of objects. Labels can be attached to objects at creation time and
subsequently added or modified at any time.
Example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: backupserver.sample.domain.com
  namespace: netbackup
data:
  datamover.hostaliases: |
    10.20.12.13=backupserver.sample.domain.com
    10.21.12.13=mediaserver.sample.domain.com
  datamover.properties: |
    image=reg.domain.com/datamover/image:latest
  datamover.labels: |
    env: test
    pod: datamover
  version: "1"
apiVersion: v1
kind: ConfigMap
metadata:
  name: backupserver.sample.domain.com
  namespace: netbackup
data:
  datamover.hostaliases: |
    10.20.12.13=backupserver.sample.domain.com
    10.21.12.13=mediaserver.sample.domain.com
  datamover.properties: |
    image=reg.domain.com/datamover/image:latest
  datamover.annotations: |
    buildinfo: |-
      [{
        "name": "test",
        "build": "1"
      }]
    imageregistry: "https://reg.domain.com/"
  version: "1"
Validating accelerator storage class
Note: You must deploy the certificates before you can perform Backup from
Snapshot and Restore from Backup operations.
The cluster must be added and discovered successfully before you create the
BackupServerCert, because it relies on NetBackup passing some clusterInfo in
order to set the status as Success.
apiVersion: netbackup.veritas.com/v1
kind: BackupServerCert
metadata:
  name: backupservercert-sample-nbca
  namespace: kops-ns
spec:
  clusterName: cluster.sample.com:port
  backupServer: primary.server.sample.com
  certificateOperation: Create | Update | Remove
  certificateType: NBCA | ECA
  nbcaAttributes:
    nbcaCreateOptions:
      secretName: "Secret name consists of token and fingerprint"
    nbcaUpdateOptions:
      secretName: "Secret name consists of token and fingerprint"
      force: true | false
    nbcaRemoveOptions:
      hostID: "hostId of the nbca certificate. You can view on NetBackup UI"
  ecaAttributes:
    ecaCreateOptions:
      ecaSecretName: "Secret name consists of cert, key, passphrase, cacert"
      copyCertsFromSecret: true | false
      isKeyEncrypted: true | false
    ecaUpdateOptions:
      ecaCrlCheck: DISABLE | LEAF | CHAIN
      ecaCrlRefreshHours: [0,4380]
NBCA: ON
ECA: OFF
apiVersion: netbackup.veritas.com/v1
kind: BackupServerCert
metadata:
  name: backupservercert-sample
  namespace: kops-ns
spec:
  clusterName: cluster.sample.com:port
  backupServer: primaryserver.sample.domain.com
  certificateOperation: Create | Update | Remove
  certificateType: NBCA
  nbcaAttributes:
    nbcaCreateOptions:
      secretName: "Secret name consists of token and fingerprint"
    nbcaUpdateOptions:
      secretName: "Secret name consists of token and fingerprint"
      force: true
    nbcaRemoveOptions:
      hostID: "hostId of the nbca certificate. You can view on NetBackup UI"
apiVersion: v1
kind: Secret
metadata:
  name: secret-name
  namespace: kops-ns
type: Opaque
stringData:
  token: "Authorization token | Reissue token"
  fingerprint: "SHA256 Fingerprint"
Deploying certificates on NetBackup Kubernetes operator
Perform Host-ID-based certificate operations

apiVersion: netbackup.veritas.com/v1
kind: BackupServerCert
metadata:
  name: backupserver-nbca-create
  namespace: kops-ns
spec:
  clusterName: cluster.sample.com:port
  backupServer: backupserver.sample.domain.com
  certificateOperation: Create
  certificateType: NBCA
  nbcaAttributes:
    nbcaCreateOptions:
      secretName: <nbcaSecretName with token and fingerprint>
9 Once the certificate is created, check the custom resource status. If the custom
resource status is successful, you can run Backup from Snapshot jobs.
Note: You need to check that the BackupServerCert custom resource status
is successful before initiating Backup from Snapshot or Restore from Backup
Copy operations.
Note: Ensure that the NetBackup primary server clock and the NetBackup
Kubernetes operator clock are in sync. For more details on the CheckClockSkew
errors, refer to the Implication of clock skew on certificate validity section in the
NetBackup™ Security and Encryption Guide.
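The clock comparison can be sketched with plain coreutils; in a real check the second timestamp would come from the other host (for example, via kubectl exec into the operator pod), so running date twice on one machine here is only a runnable stand-in:

```shell
# Minimal clock-skew sketch: capture two UTC timestamps and report their
# difference in seconds. Against a real deployment, t2 would be read on the
# NetBackup Kubernetes operator rather than locally.
t1=$(date -u +%s)
t2=$(date -u +%s)
skew=$(( t2 - t1 ))
echo "clock skew: ${skew}s"
```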
apiVersion: netbackup.veritas.com/v1
kind: BackupServerCert
metadata:
name: backupserver-nbca-domain.com
namespace: kops-ns
spec:
clusterName: cluster.sample.com:port
backupServer: backupserver.sample.domain.com
certificateOperation: Remove
certificateType: NBCA
nbcaAttributes:
nbcaRemoveOptions:
hostID: nbcahostID
Note: If the update certificate operation fails, you must remove the certificate first
and then create a new certificate.
apiVersion: netbackup.veritas.com/v1
kind: BackupServerCert
metadata:
name: backupserver-nbca-update
namespace: kops-ns
spec:
clusterName: cluster.sample.com:port
backupServer: backupserver.sample.domain.com
certificateOperation: Update
certificateType: NBCA
nbcaAttributes:
nbcaUpdateOptions:
secretName: "Name of secret containing token and fingerprint"
force: true
3 Once the backupservercert object is created, check the custom resource
status.
Perform ECA certificate operations
Primary server certificate mode: NBCA: ON, ECA: ON
To configure the backup server in ECA mode, refer to the About external CA support
in NetBackup section in the NetBackup™ Security and Encryption Guide.
The ECA certificate specification looks like this:
apiVersion: netbackup.veritas.com/v1
kind: BackupServerCert
metadata:
name: backupservercert-sample-eca
namespace: kops-ns
spec:
clusterName: cluster.sample.com:port
backupServer: primaryserver.sample.domain.com
certificateOperation: Create | Update | Remove
certificateType: ECA
ecaAttributes:
ecaCreateOptions:
ecaSecretName: "Secret name consists of cert, key, passphrase, cacert"
copyCertsFromSecret: true | false
isKeyEncrypted: true | false
ecaUpdateOptions:
ecaCrlCheck: DISABLE | LEAF | CHAIN
ecaCrlRefreshHours: range[0,4380]
Remove: NA (the Remove operation requires no additional options)
├── cert_chain.pem
├── private
│   ├── key.pem
│   └── passphrase.txt
└── trusted
    └── cacerts.pem
--from-file=passphrase=private/passphrase.txt
-n kops-ns
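Pieced together from the directory layout and the command fragment above, the complete secret-creation command plausibly takes the following shape. The key names cert, key, and cacert are assumptions drawn from the ecaSecretName description, not commands confirmed by this guide; the command is assembled as a string so the sketch runs without a cluster:

```shell
# Hypothetical full command implied by the fragment above; echoed, not executed.
cmd="kubectl create secret generic eca-secret \
--from-file=cert=cert_chain.pem \
--from-file=key=private/key.pem \
--from-file=passphrase=private/passphrase.txt \
--from-file=cacert=trusted/cacerts.pem \
-n kops-ns"
echo "$cmd"
```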
apiVersion: netbackup.veritas.com/v1
kind: BackupServerCert
metadata:
name: backupservercert-eca-create
namespace: kops-ns
spec:
clusterName: cluster.sample.com:port
backupServer: backupserver.sample.domain.com
certificateOperation: Create
certificateType: ECA
ecaAttributes:
ecaCreateOptions:
ecaSecretName: eca-secret
copyCertsFromSecret: true
isKeyEncrypted: false
6 To copy certificates and keys to the Kubernetes operator, do one of the following:
■ Set copyCertsFromSecret to true.
■ Set copyCertsFromSecret to false to avoid copying certificates and keys
that already exist on the Kubernetes operator.
Note: The ECA is common across all primary servers; thus, the Kubernetes
operator requires only one set of certificates and keys, which can be enrolled
with all primary servers as required. You do not need to copy the certificates
and keys every time unless there is an issue with the previously copied
certificates and keys.
7 If the private key is encrypted, set the isKeyEncrypted flag to true; for an
unencrypted key, set it to false. Ensure that the passphrase is provided in the
secret if the private key is encrypted.
8 In the backupservercert yaml, set ecaSecretName to the name of the secret
created in step 5.
9 To create the BackupServerCert object from the
eca-create-backupservercert.yaml file, run the command:
kubectl create -f eca-create-backupservercert.yaml
apiVersion: netbackup.veritas.com/v1
kind: BackupServerCert
metadata:
name: backupservercert-eca-remove
namespace: kops-ns
spec:
clusterName: cluster.sample.com:port
backupServer: backupserver.sample.domain.com
certificateOperation: Remove
certificateType: ECA
■ Then, save the text with the yaml file extension to the home directory from
where the Kubernetes clusters are accessible.
3 Once the object is created, check the custom resource status. If it failed,
take the necessary actions.
These steps remove the external certificate details for the specified primary server
from the local certificate store. The certificate is neither deleted from the system
nor from the NetBackup database.
If you want to disable the ECA, refer to the Disabling an external CA in a NetBackup
domain section in the NetBackup™ Security and Encryption Guide.
If you enrolled an ECA on the Kubernetes operator for a backup server but later
reinstalled the backup server to support only NBCA, you must remove the ECA
enrollment from the Kubernetes operator. During nbcertcmd communication with
the backup server, the CA support may be compared, and a mismatch causes an
error.
apiVersion: netbackup.veritas.com/v1
kind: BackupServerCert
metadata:
name: backupservercert-eca-update
namespace: kops-ns
spec:
clusterName: cluster.sample.com:port
backupServer: backupserver.sample.domain.com
certificateOperation: Update
certificateType: ECA
ecaAttributes:
ecaUpdateOptions:
ecaCrlCheck: DISABLE | LEAF | CHAIN
ecaCrlRefreshHours: [0,4380]
■ Open the text editor and paste the yaml file text.
■ Then, save the text with the yaml file extension to the home directory from
where the Kubernetes clusters are accessible.
3 The ECA_CRL_CHECK option lets you specify the revocation check level for
external certificates of the host. It also lets you disable the revocation check
for the external certificates. Based on the check, the revocation status of the
certificate is validated against the Certificate Revocation List (CRL) during host
communication. For more information, refer to the ECA_CRL_CHECK for
NetBackup servers and clients section in the NetBackup™ Security and
Encryption Guide.
4 The ECA_CRL_REFRESH_HOURS option specifies the time interval, in hours,
to download the CRLs from the URLs that are specified in the peer host
certificate's Certificate Revocation List distribution points (CDP). For more
information, refer to the ECA_CRL_REFRESH_HOURS for NetBackup servers
and clients section in the NetBackup™ Security and Encryption Guide.
2 Log on to the Kubernetes operator with administrator rights and run the
command:
kubectl exec pod/nbu-controller-manager-7c99fb8474-hzrsl -n
<namespace of Kubernetes operator> -c netbackupkops -it -- bash
Identify certificate types
3 To list the backup servers that have an NBCA certificate for Kubernetes, run
the command:
/nbcertcmdtool/nbcertcmdtool -atLibPath/nbcertcmdtool/
4 To list the backup servers that have an ECA certificate for Kubernetes, run the
command:
/nbcertcmdtool/nbcertcmdtool -atLibPath/nbcertcmdtool/
4 Click Next. In the Manage credentials page, you can add credentials to the
cluster.
■ To use an existing credential, choose Select from an existing credential,
and click Next. In the next page, select the required credentials, and click
Next.
■ To create a new credential, click Add credential, and click Next. In the
Manage credentials page, enter the following:
■ Credential name: Enter a name of the credential.
■ Tag: Enter a tag to associate with the credential.
■ Description: Enter a description of the credential.
■ To add Kubernetes clusters in NetBackup, you need a Certificate
Authority (CA) certificate and a token. The CA certificate and the token of
the backup service account are required for authorization and authentication
of the Kubernetes cluster. To get the CA certificate and token, run the
following command in the Kubernetes cluster: kubectl get secret
<namespace-name>-backup-server-secret -n <namespace-name>
-o yaml
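The secret returned by that command stores ca.crt and the token base64-encoded. A hedged decoding sketch follows; a stand-in secret dump is written here so the commands run without a cluster, whereas against a live cluster you would feed the kubectl output instead (the two-space YAML indentation is assumed to match the standard kubectl -o yaml layout):

```shell
# Decode the base64 token field from a service-account secret dump (sample data).
cat > sample-secret.yaml <<'EOF'
apiVersion: v1
kind: Secret
data:
  ca.crt: LS0tLS0t
  token: c2FtcGxlLXRva2Vu
EOF
grep '  token:' sample-secret.yaml | awk '{print $2}' | base64 -d; echo
```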
5 Click Next.
The credentials are validated and on successful validation, the cluster is added.
After the cluster is added, autodiscovery runs to discover available assets in
the cluster.
Note: In NetBackup Kubernetes version 10.1, the edit cluster operation fails with
an error message. To resolve this issue, first delete the cluster and then add the
cluster again.
Configure settings
The Kubernetes settings let you configure the various aspects of the Kubernetes
deployment.
■ You can add individual limits to the clusters that override the global limit for
that cluster. To set individual limits to the clusters, click Add.
■ You can select the cluster available from the list and then enter a limit value
for the selected cluster. You can add limits to each available cluster in your
deployment.
■ Click Save to save the changes.
Note: In the NetBackup 10.0 release, the data mover pods exceed the Kubernetes
resource limit settings.
See “Datamover pods exceed the Kubernetes resource limit ” on page 125.
Configure permissions
Using manage permissions, you can assign different access privileges to the user
roles. For more information, see the Managing role-based access control chapter
in the NetBackup Web UI Administrator's Guide.
Note: The malware scanner host can initiate a scan of three images at the
same time.
6 After the scan starts, you can see the Malware Scan Progress on the Malware
Detection page, where the following fields are visible:
■ Not scanned
■ Not infected
■ Infected
■ Failed
■ In progress
■ Pending
Chapter 5
Managing Kubernetes
intelligent groups
This chapter includes the following topics:
Note: You can create, update, or delete the intelligent groups only if your role has
the necessary RBAC permissions for the assets that you require to manage. The
NetBackup security administrator can grant you access for an asset type (clusters,
namespaces, and VMGroup). Refer to the NetBackup Web UI Administrator's Guide.
Create an intelligent group
Note: With Virtualization support in Kubernetes, you can create intelligent groups
by filtering the namespaces based on specific resource kinds. Virtual machines,
Persistent volumes, and Persistent volume claims are the resource kinds available
for filtering.
Note: An intelligent group can be created across multiple clusters. Ensure that
you have the required permissions to add clusters in the group. To view and
manage the group, the group administrator must have the view and manage
permissions for the selected clusters and groups.
7 To add a condition, use the drop-downs to select a keyword and operator and
then enter a value.
To change the effect of the query, click + Condition and click AND or OR,
then select the keyword, operator, and value for the condition.
Note: To add label conditions, click Add label condition, and then enter the label
key and value.
Note: You can choose to have only a label key in the condition, without the label
value, as the value is an optional parameter in a label condition.
Note: To add a sub-query, click Add sub-query. You can add multiple levels of
sub-queries.
Note: When using queries in Intelligent groups, the NetBackup web UI might
not display an accurate list of assets that match the query if the query condition
has non-English characters.
Using the not equals filter condition on any attribute returns assets including
those that have no value (null) present for the attribute.
Note: When you click Preview or you save the group, the query options are
treated as case-sensitive when the assets are selected for the group.
■ Create a policy
Create a policy
Use the following procedure to create a backup policy using policy type Kubernetes
within the NetBackup web UI.
To create a policy
1 On the left, select Protection > Policies.
2 Click Add.
3 On the Attributes tab, do the following:
■ Enter policy name in the Policy name field.
■ Select the Policy type as Kubernetes.
■ Select the Policy storage that you want to use.
■ Select or configure any other policy attributes.
4 On the Schedules tab, configure all the necessary schedules. For example,
Full and incremental schedules.
5 On the Kubernetes tab, do any of the following:
■ Select the Intelligent group to add new or existing intelligent group that
you want to protect.
■ Select the Namespace to add namespaces from list that you want to protect.
Managing Kubernetes policies 76
Create a policy
6 On the Resource kind and label selection tab, do any of the following:
■ Select the Include all resource kinds in the backup option to include all
resource kinds in the backup.
■ Select the Exclude the following resource kinds from the backup option,
and manually enter the resource kinds or select the resource kinds that you
want to exclude from the backup.
■ In the Label selection, click Add + to add the label queries you want.
7 Click Create.
Chapter 7
Protecting Kubernetes
assets
This chapter includes the following topics:
■ Configure backups
Note: The RBAC role that is assigned to you must give you access to the intelligent
groups that you want to manage and to the protection plans that you want to use.
Note: You must select the Create backup from snapshot option to enable
the replicate and duplicate options for backup copy.
■ If you do not select the Create backup from snapshot option, then by default,
a Snapshot only storage backup is configured to run the backup jobs.
■ Select Create a replica copy (Auto Image Replication) of the backup
from snapshot option to create a replica copy of the backup.
■ Select Create a duplicate copy of the backup from snapshot option to
create a duplicate copy of the backup.
6 Continue creating the schedule in the Start window tab, as described in the
Managing protection plans section of the NetBackup Web UI Administrator's
Guide.
7 Continue to configure the Storage options for backup from snapshot, as
described in the Managing protection plans section of the NetBackup Web UI
Administrator's Guide.
■ Select the Exclude the following resource kinds from the backup option
to exclude resource kinds from the backup job. Click Select to choose the
resource kinds from the static list. The selected resource kinds are displayed
in the text field, or you can manually enter a custom resource definition
(CRD) in the correct format (type.group). You can delete the selected
resource kinds from the exclude list.
If the custom resource kind definitions are not present in the static list, you
can enter the custom resource definition (CRD) manually. For example:
demo.nbu.com.
Note: The exclude list of resource kinds takes precedence over the labels selected
for backup when mapping resources.
2 Under the Labels selection section, click Add to add the labels that map their
associated resources for the backup, enter the label prefix and key, and then
select an operator. All associated resources of the included labels are mapped
for the backup job.
The following are the four operators that you can add to a label:
■ Enter a label key equal to a value.
■ Enter a label key which already exists, without any values.
■ Enter a label key which is in a set of values.
■ Enter a label key not in a set of values.
For the in/not in operators, you can add multiple comma-separated values in
the set of values.
Note: Selected labels must be present at the time of backup to ensure that the
conditions are applied successfully.
Note: The label selection must not contradict the selected resource kinds, and
multiple label conditions must not contradict one another.
The Review page displays the excluded list of resource kinds, the selected labels
for inclusion, and the selected storage units.
Note: You can edit or delete the protection plan created for Kubernetes workloads.
You cannot customize the protection plan created for Kubernetes workloads.
Configure backups
NetBackup allows you to run two types of backup jobs on a Kubernetes workload:
Snapshot only and Backup from Snapshot. Follow the steps to configure a backup
job for the Kubernetes operator.
To perform backup on Kubernetes workload
1 On the left, click Protection > Protection plans and then click Add.
2 In Basic properties, enter a Name, Description, and select Kubernetes,
from the Workload drop-down list.
3 Click Next. In Schedules, click Add schedule.
In the Add backup schedule tab, you can configure the options for retaining
the backup and the snapshot.
4 From the Recurrence drop-down, specify the frequency of the backup.
5 In the Snapshot and backup copy options, do any of the following:
■ Select Create backup from snapshot option, to configure backup from
snapshot for the protection plan. Specify retention period for the backup
from snapshot using the Keep backup for drop-down.
Note: You must select the Create backup from snapshot option to enable
the replicate and duplicate options for backup copy.
■ If you do not select Create backup from snapshot option, then by default,
Snapshot only storage backup is configured to run the backup jobs.
■ Select Create a replica copy (Auto Image Replication) of the backup
from snapshot option to create a replica copy of the backup.
■ Select Create a duplicate copy of the backup from snapshot option to
create a duplicate copy of the backup.
6 Continue creating the schedule in the Start window tab, as described in the
Managing protection plans section of the NetBackup Web UI Administrator's
Guide.
7 Continue to configure the Storage options for backup from snapshot, as
described in the Managing protection plans section of the NetBackup Web UI
Administrator's Guide.
■ While selecting a storage for Backup from Snapshot option, the selected
storage unit must have the media servers of NetBackup version 10.0 or
later.
■ Media server managing the storage must have access to the selected
Kubernetes clusters.
■ Media server must be able to connect with the API server. The port
corresponding to the API server must be open for the outbound connection
from the media server. The datamover pod must be able to connect to the
media server.
■ Establish the trust relationship between two primary servers for interdomain
operations.
■ Log on to the source primary server. On the left, click Hosts > Host
properties to build a connection between a source and a target primary
server.
■ Select a source primary server and, if necessary, click Connect. Then
click Edit primary server.
■ Click Servers. On the Trusted primary servers tab, click Add to add
a source server.
■ Click Validate Certificate Authority, then click Next to proceed with
the certificate authority validation.
■ To create a trusted primary server, select from the following options:
■ Select Specify authentication token of the trusted primary
server to add an existing token or create a new token for the
source primary server.
■ Select Specify credentials of the trusted primary server to
add user credentials for the source primary server.
■ Click Add.
4 Create Kubernetes protection plan with the Create backup from snapshot
option to enable the replicate copy option.
On the left, click Protection > Protection plans. On the Schedules tab, click
Add schedule.
5 In the Snapshot and backup copy options section, select the Create backup
from snapshot option to enable the replicate and duplicate copy options.
6 Select Create a replica copy (Auto Image Replication) of the backup from
snapshot option, and set a time duration to retain the replica copy.
Note: Auto Image Replication can only be created on the trusted NetBackup
primary servers.
7 Select Create a duplicate copy of the backup from snapshot option and
set a time duration to retain the duplicate copy.
8 Click Add.
9 Continue creating the schedule in the Start window tab, as described in
the Managing protection plans section of the NetBackup Web UI Administrator's
Guide.
10 Click Next.
11 On the Storage options tab, select the storage units to backup from snapshot,
replicate, or duplicate copy.
Note: For Backup from snapshot and duplication, you can add simple storage
units. But for replication, you must add a trusted storage unit with import storage
lifecycle policies (SLPs).
12 To the right of the selected backup options, click Edit to modify the selected
storage units for backup.
■ For the replica copy option, select the primary server for replication copy.
Then click Next.
■ Select an import storage lifecycle policy that is defined in the trusted server
and then click Use selected replication target.
Note: All storage types supported in Storage Lifecycle Policy (SLP) are supported
for backup jobs.
11 Review the setup of the storage unit and then click Save.
12 To check the details of a scheduled backup or a backup now job, in the Activity
monitor tab, click the Job ID to view the backup job details. For file mode,
you can see the total number of backed-up files for every image in the Job
Details section.
infrastructure, apply these annotations and hooks to any application pod that requires
a quiesce state.
Backup hooks are used for both pre (before the snapshot) and post (after the
snapshot) processing. In the context of data protection, this usually means that a
netbackup-pre-backup hook calls a quiesce procedure or command, and the
netbackup-post-backup hook calls an un-quiesce procedure or command. Each
set of hooks specifies the command, as well as the container where it is applied.
Note that the commands are not executed within a shell on the containers. Thus,
a full command string with the directory is used in the given examples.
Identify the applications that require application consistent backups and apply the
annotation with a set of backup hooks as part of the configuration for Kubernetes
data protection.
To add an annotation to a pod, use the Kubernetes User Interface (UI). Alternatively,
use the kubectl annotate function on the Kubernetes cluster console for a specific
pod or label. The methods to apply annotations may vary depending on the
distribution; therefore, the following examples focus on the kubectl command,
based on its wide availability in most distributions.
Additionally, annotations can be added to the base Kubernetes objects, such as
the deployment or replica set resources to ensure the annotations are included in
any newly deployed pods. The Kubernetes administrator can update annotations
dynamically.
Labels are key-value pairs which are attached to the Kubernetes objects such as
Pods, or Services. Labels are used as attributes for objects that are meaningful
and relevant to the user. Labels can be attached to objects at creation time and
subsequently added and modified at any time. Kubernetes offers integrated support
for using these labels to query objects and perform bulk operations on selected
subsets. Each object can have a set of key-value labels defined. Each Key must
be unique for a given object.
As an example of formatting and syntax of the label metadata:
"metadata": {"labels": {"key1":"value1","key2":"value2"}}
Either specify the pod name specifically, or a label that applies to the desired group
of pods. If multiple annotation arguments are used, then specify the correct JSON
format, such as a JSON array: '["item1","item2","itemn"]'
# kubectl annotate pod [{pod_name} | -l {label=value}] -n {the-pods-namespace_name}
[annotation syntax - see following]
This method can be combined with && to join multiple commands if some applications
require multiple commands to achieve the desired result. The commands specified
are not provided by Veritas, and the user must manually customize the application
pod. Replace {values} with the actual names used in your environment.
Note: All kubectl commands must be defined in a single line. Be careful when you
copy or paste the following examples.
netbackup-pre.hook.backup.velero.io/command
netbackup-pre.hook.backup.velero.io/container
netbackup-post.hook.backup.velero.io/command
netbackup-post.hook.backup.velero.io/container
This translates into the following single command to set both the pre and post
backup hooks for MongoDB. Note the special syntax to escape special characters
as well the brackets ([]), single and double quotes and commas (,) used as part of
the JSON format:
# kubectl annotate pod {mongodb-pod-name} -n {mongodb namespace}
netbackup-pre.hook.backup.velero.io/command='["/bin/bash", "-c", "mongo
--eval \"db.fsyncLock()\""]'
netbackup-pre.hook.backup.velero.io/container={mongodb-pod-name}
netbackup-post.hook.backup.velero.io/command='["/bin/bash","-c","mongo
--eval \"db.fsyncUnlock()\""]'
netbackup-post.hook.backup.velero.io/container={mongodb-pod-name}
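The quoting in the annotation value can be inspected in isolation; a minimal sketch echoing the pre-hook value (copied from the command above) so the escape sequence survives visibly:

```shell
# Single quotes keep the JSON double quotes literal, and the backslash-escaped
# inner quotes survive for mongo --eval to consume.
value='["/bin/bash", "-c", "mongo --eval \"db.fsyncLock()\""]'
echo "$value"
```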
This translates into the following single command to set both the pre and post
backup hooks for MySQL. In this example, we used a label instead of a pod name,
so the label can annotate multiple pods at once. Note the special syntax to escape
special characters as well the brackets ([]), single and double quotes and commas
(,) used as part of the JSON format:
This translates into the following single command to set both the pre and post
backup hooks for Postgres. In this example, we used a label instead of a pod name,
so the label can annotate multiple matching pods at once. Labels can be applied
to any Kubernetes object, and in this case, we are using them to provide another
way to modify a specific container and select only certain pods. Note the special
syntax to escape special characters as well the brackets ([]), single and double
quotes and commas (,) used as part of the JSON format:
# kubectl annotate pod -l app=app-postgresql -n {postgres namespace}
netbackup-pre.hook.backup.velero.io/command='["/bin/bash", "-c",
"psql -U postgres -c \"SELECT
pg_start_backup(quote_literal($EPOCHSECONDS));\""]'
netbackup-pre.hook.backup.velero.io/container={postgres container
name} netbackup-post.hook.backup.velero.io/command='["/bin/bash",
"-c", "psql -U postgres -c \"SELECT pg_stop_backup();\""]'
netbackup-post.hook.backup.velero.io/container={postgres container
name}
This translates into the following single command to set both the pre and post
backup hooks for NGINX. In this example, we will omit the container hooks, and
this will modify the first container that matches the pod name by default. Note the
special syntax to escape special characters as well the brackets ([]), single and
double quotes and commas (,) used as part of the JSON format:
Cassandra example
Following are the commands to quiesce and un-quiesce the Cassandra database:
# nodetool flush
# nodetool verify
This translates into the following single command to set both the pre and post
backup hooks for Cassandra. Note the special syntax to escape special characters
as well the brackets ([]), single (‘’), and double quotes (“”) and commas (,) used as
part of the JSON format:
# kubectl annotate pod {cassandra-pod} -n {Cassandra namespace}
netbackup-pre.hook.backup.velero.io/command='["/bin/bash", "-c",
"nodetool flush"]'
netbackup-pre.hook.backup.velero.io/container={cassandra-pod}
netbackup-post.hook.backup.velero.io/command='["/bin/bash", "-c",
"nodetool verify"]'
netbackup-post.hook.backup.velero.io/container={cassandra-pod}
Note: The examples provided are only an initial guide; meeting the specific
requirements of each workload requires collaboration between the backup,
workload, and Kubernetes administrators.
At present, Kubernetes does not support an on-error hook. If the user-specified
command fails, the backup snapshot does not proceed.
The default timeout value for the command to return an exit status is 30 seconds.
But this value can be changed with the following hooks as annotations to the pods:
netbackup-pre.hook.backup.velero.io/timeout=#in-seconds#
netbackup-post.hook.backup.velero.io/timeout=#in-seconds#
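For example, raising the pre-hook timeout could look like the following; the pod and namespace names are placeholders, the value 120 follows the #in-seconds# placeholder above, and the command is assembled as a string so the sketch runs without a cluster:

```shell
# Hypothetical timeout annotation (120 seconds); echoed rather than executed.
cmd="kubectl annotate pod {pod-name} -n {namespace} \
netbackup-pre.hook.backup.velero.io/timeout=120"
echo "$cmd"
```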
Chapter 8
Managing image groups
This chapter includes the following topics:
Image expire
To reclaim the storage space occupied by the expired images, you need to delete
those images.
Following are the important points related to image expiration.
For a recovery point consisting of multiple images:
■ If you have expired a single image in an image group, then it does not lead to
automatic expiration of remaining images. You must explicitly expire all images
in an image group.
■ If you have expired a few images, then the recovery point becomes incomplete.
The restore operation is not supported for an incomplete recovery point.
■ If you have changed the expiration time for any of the images, then the expiration
time for the rest of the images must also be changed. Otherwise, the expiration
time for the images corresponding to the recovery point gets skewed, leading
to an incomplete recovery point at some point in time.
Image import
A Kubernetes recovery point may consist of multiple images. To perform a restore
operation, all the images corresponding to the recovery point must be imported.
Otherwise, the recovery point is marked as incomplete and the restore is not performed.
For more information, refer to the About importing backup images section in the
NetBackup™ Administrator's Guide, Volume I.
Image copy
You can create an image copy with two types of backup operations:
1. Snapshot is the default copy and is marked as copy no 1.
2. Backup from snapshot is marked as copy no 2.
Whenever any backup-now operation or scheduled backup triggers, a Snapshot is
taken. However, Backup from snapshot is optional, as it depends on whether the
Backup from snapshot option was selected while creating the protection plan.
An image group consists of asset images for metadata and Persistent Volume
Claims (PVC). Every copy has one image for namespace and one image for each
PVC present in the namespace.
The recovery point detail API is used to identify the copy completion status of an
image. This API also lists all the backup IDs and resource names present in the
respective copy. The complete or incomplete status of the image copy helps the
restore functionality, as an error is thrown if someone tries to restore an asset from
an incomplete image copy.
3. If a child image of a copy expires (when the copy has more than one child), the
copy is marked as incomplete.
Chapter 9
Protecting Rancher
managed clusters in
NetBackup
This chapter includes the following topics:
Note: Extract the Global Rancher Management server certificate. This CA cert can
be a default cert generated by Rancher, or configured by using a different/external
CA (Certificate Authority) during the management server's creation.
Add Rancher managed RKE cluster in NetBackup using automated configuration
1 Extract the CA cert: Navigate to the Rancher Management Server UI > open
the left side panel > Global Settings > under CA Certs, click the Show CA
Certs button. Extract the complete CA cert value into a temporary file.
Note: Make sure you extract the complete value, which includes the starting
and ending lines.
The secret has the values that are extracted in steps 1 and 3.
7 Create a yaml file nb-config-deploy-secret.yaml with the following format
and enter the values in all the fields.
and enter the values in all the fields.
apiVersion: v1
kind: Secret
metadata:
name: <kops-namespace>-nb-config-deploy-secret
namespace: <kops-namespace>
type: Opaque
stringData:
#All the 3 fields are mandatory here to add a Rancher managed RKE2 cluster
apikey: A_YoUkgYQwPLUkmyj9Q6A1-6RX8RNY-PtYX0SukbqCwIK-osPz8qVm9zCL9ph
k8stoken: kubeconfig-user-mvvgcm8sq8:nrscvn8hj46t24r2tjrxd2kn8tzo2bg4
k8scacert: |
-----BEGIN CERTIFICATE-----
MIIDDDCCAfSgAwIBAgIBATANBgkqhkiG9w0BAQwIgYDVQQDDBtpbmdy
ZXNzLW9wZXJhdG9yQDE2ODc1MzY4NjgWHhcNMjMwNjIzMTYxNDI3WhcNMjUwNjIy
XtXqbaBGrXIuCCo90mxv4g==
-------END CERTIFICATE------
Add Rancher managed RKE cluster manually in NetBackup
Note: You must do the following step because a different CA (Certificate
Authority) is configured for external access to the Kubernetes API server
than the service account CA cert which is available within the cluster.
Hence, these two CA certificates must be combined.
To get the service account CA certificate, run the following commands on the
Linux cluster host.
■ Get the service account secret name available on the Kubernetes operator’s
namespace using the following command:
kubectl describe serviceaccount <kopsnamespace>-backup-server
-n <kopsnamespace> | grep Tokens | cut -d ":" -f 2
■ Get the CA certificate in the base 64 decoded form from this service account
secret using this command:
kubectl get secret <output-from-previous-command> -n
<kopsnamespace> -o jsonpath='{.data.ca\.crt}' | base64 -d
The entire output of this command must be appended to the temporary file
that was created in step 1.
3 Append the output that was generated in Step 2 to the end of the
<cacert-value-file> file. The necessary external and internal CA cert values
have been extracted and are available in the file <cacert-value-file>. The CA
cert values are in base64 decoded form; you must encode them again while
creating credentials in NetBackup.
4 Token: Rancher Management Server UI > Open the left side panel > Under
the EXPLORE CLUSTER section > Navigate to the cluster you want to protect
> Click the Kubeconfig icon on the top right corner.
■ Extract the token: value without the double quotes " " from the downloaded
Kubeconfig file (using the Download KubeConfig) into a temporary file
<token-value-file>.
■ Both these fields, token and cacert, are required in base64 encoded
form to add to the NetBackup credentials for Kubernetes.
■ Get the base64 encoded version of both these extracted values using
the following base64 commands:
#Use a Linux VM to encode the values for this step
#Note: the flag -w0 contains the zero digit, not the letter O.
#For CA cert:
cat <cacert-value-file> | base64 -w0
Paste this output in the CA certificate field in the NetBackup credentials
creation page.
#For Token:
cat <token-value-file> | base64 -w0
Paste this output in the Token field in the NetBackup web UI's credentials
creation page.
■ Use these values in the NetBackup web UI > Credential management >
Named Credentials > Add to add the valid Rancher credentials in
NetBackup.
■ Once the credentials are created, add the Kubernetes cluster in NetBackup
using the name shown in the following cluster-info output.
Note: You cannot edit the cluster names added using the endpoint-based
approach. You can only delete and re-add such cluster names.
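Before pasting values into the web UI, the base64 -w0 encoding used in the steps above can be verified locally with a round trip. A minimal sketch (the sample data stands in for the real certificate or token file):

```shell
# Encode a sample value the same way as the CA cert and token files above.
printf 'sample-cert-data' > /tmp/cacert-value-file
ENCODED=$(base64 -w0 < /tmp/cacert-value-file)
echo "$ENCODED"              # a single line with no wrapping, safe to paste
echo "$ENCODED" | base64 -d  # decodes back to the original bytes
```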
7 Enter the cluster info output which is extracted above into the input field on the
NetBackup web UI (Endpoint or URL).
8 Proceed ahead and select or create the credentials which were prepared in
steps 1 to 4.
9 Once the credentials are validated and the cluster is added successfully, an
automated validation and discovery is triggered.
10 After a successful automated discovery, attempt a manual credential
validation and discovery to ensure that everything is working fine.
Note: After recovery, the newly created namespaces, persistent volumes, and other
resources get new system-generated UIDs.
Note: Enable the Restore option for all infected copies by selecting Allow the
selection of recovery points that are malware-affected option.
Restore from snapshot
9 Under Select resource types to recover, select any of the following resource
types to restore:
■ All resource types to recover all resource types. By default, this option is
selected.
■ Recover selected resource types to recover only the selected resource
types.
Note: The Select resource types to recover option is for advanced users. If
you are not careful in selecting the resources that you want to restore, you
may not get a fully functional namespace after the restore.
10 Under Select Persistent volume claims to recover, select any of the following
persistent volume claims to recover:
■ All Persistent volume claims to recover all persistent volume claims. By
default, this option is selected.
■ Recover selected Persistent volume claims to recover selected persistent
volume claims.
Note: If you do not select any option under Recover selected resource types,
the include empty persistent volume claims option is selected and no persistent
volume claims are restored.
Similarly, if you do not select any option under Recover selected Persistent
volume claims, the Recovery options section includes empty persistent
volume claims and no persistent volume claims are restored.
11 Click on the Failure strategy section to view the failure strategy options to
recover.
12 Under Select failure strategy to recover, select any of the following failure
strategies to recover:
Note: If a failure occurs during the restore of metadata or PVCs, the restore
job runs according to the failure strategy selected.
13 Click Next.
14 On the Recovery options page, click Start recovery to submit the recovery
entry.
15 In Activity monitor, click the Job ID to view the restore job details.
Note: NetBackup Kubernetes restore uses a single job to restore all the persistent
volume claims and a namespace. You can view logs in the Activity monitor to
track the restore of persistent volumes, persistent volume claims, or metadata.
Note: If the metadata restore fails, no further jobs are submitted for the restore
operation. Once metadata is restored successfully, PVCs are restored in parallel
batches.
Restore from backup copy
You can follow the same procedure that is explained in Restore from snapshot;
select the copy type as Backup. You can also restore to an alternate target cluster.
To restore from a backup copy
1 On the left, click Workloads > Kubernetes.
2 On the Namespace tab, click the namespace of the asset that you want to
recover. Click the Recovery points tab.
3 The Recovery points tab shows you all the recovery points with the date,
time, and copies of the backup. You can set filters to filter the displayed recovery
points. Click the date in the Date column to view the details of the recovery
point. The Recovery points details dialog shows the resources that were
backed up, like ConfigMaps, secrets, persistent volumes, pod, and so on. For
details about these resources, see https://kubernetes.io/docs/reference.
4 Locate the recovery point that you want to restore.
5 In the Copies column, click the # copies button. For example, if there are two
copies, the button displays as 2 copies.
6 In the list of copies, locate the Backup copy. Then click Actions > Restore.
Note: Enable the Restore option for all infected copies by selecting the Allow
the selection of recovery points that are malware-affected option.
7 On the Recovery target page, the details to recover the asset to the same
source cluster are auto-populated. Click Next.
8 Under Specify destination namespace, select from the options:
■ Select Use original namespace to use the original namespace.
■ Select Use alternate namespace and enter the alternate namespace.
Click Next.
9 Under Select resource types to recover, select from the following resource
types:
■ Select All resource types to recover all resource types.
■ Select Recover selected resource types to recover only the selected
resource types.
10 Under Select Persistent volume claims to recover, select from the following
options:
■ Select All Persistent volume claims to recover all persistent volume
claims.
■ Select Recover selected Persistent volume claims to recover selected
persistent volume claims.
Note: If you do not select any option under Recover selected resource types,
the include empty persistent volume claims option is selected and no persistent
volume claims are restored.
Similarly, if you do not select any option under Recover selected Persistent
volume claims, the Recovery options section includes empty persistent
volume claims and no persistent volume claims are restored.
Note: The Restore only persistent volume toggle on the selected persistent
volume claims restores only the persistent volume. This setting does not
create a corresponding persistent volume claim.
11 Click on the Failure strategy section to view the failure strategy options to
recover.
12 Under Select failure strategy to recover, select any of the following failure
strategies to recover:
■ In this case, the final job status is marked as partial success, and a list
of PVCs with failed status appears in the Activity Monitor tab for
the parent job.
■ Retry to specify a retry count for metadata or PVC restore. If the restore
fails even after the retries, the restore job terminates.
■ This failure strategy lets you retry the restore of failed
PVCs or metadata; the retry count is configurable at the start of the restore job.
■ If the restore job fails despite the maximum number of retries, the job
is marked as failed and no further batches are submitted for
restore.
■ Click Next.
Note: You can cancel the parent job to cancel the restore operation. The parent
job terminates all the active child restore jobs.
Configuration change
The batch size for parallel PVC restore is configurable in bp.conf. You can add the
key KUBERNETES_RESTORE_FROM_BACKUP_COPY_PARALLEL_RESTORE_BATCH_SIZE
to the bp.conf file to set the desired batch size. This key is optional and defaults
to 5 if it is not defined.
The minimum value that can be assigned for batch size is 1, whereas the maximum
value is 100.
You can use the bpsetconfig command on the NetBackup primary server to update
the batch size.
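As a sketch, the entry in bp.conf might look like the following. The key name is taken from the text above; the value 10 is only an illustration (valid values are 1 to 100):

```
KUBERNETES_RESTORE_FROM_BACKUP_COPY_PARALLEL_RESTORE_BATCH_SIZE = 10
```

On the primary server, the same line can be applied with the bpsetconfig command rather than by editing bp.conf directly.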
Chapter 11
About incremental backup
and restore
This chapter includes the following topics:
Note: A snapshot copy is always a full backup due to storage class limitations.
Apart from the snapshot copy, backup from snapshot and duplicate copies have
incremental support.
Restore jobs
Restore from a complete recovery point performs a point-in-time restore. All the
data up to that recovery point is restored.
Incremental backup and restore support for Kubernetes
If the Complete field displays No, you cannot restore from that recovery point.
USE_CTIME_FOR_INCREMENTALS option for NetBackup clients:
■ This entry causes the NetBackup client software to use both modification time
(mtime) and inode change time (ctime) during incremental backups to determine
if a file has changed.
DO_NOT_RESET_FILE_ACCESS_TIME option for NetBackup clients:
■ The DO_NOT_RESET_FILE_ACCESS_TIME entry specifies that if a file is
backed up, its access time (atime) displays the time of the backup. By default,
NetBackup preserves the access time by resetting it to the value that it had
before the backup.
■ To set the data mover properties: The user must set or update the flag in the
NetBackup primary server-specific ConfigMap that is created under the
NetBackupKOps namespace on the Kubernetes cluster.
■ Example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: backupserver.sample.domain.com
  namespace: <NetBackupKOps-Namespace>
data:
  datamover.properties: |
    image=reg.domain.com/datamover/image:latest
    VERBOSE=5
    VXMS_VERBOSE=5
    USE_CTIME_FOR_INCREMENTALS=YES
    DO_NOT_RESET_FILE_ACCESS_TIME=YES
  version: "1"
Protection plan
The following schedules are supported in the NetBackup Kubernetes workload.
■ Automatic
■ Full
■ Differential Incremental
■ Cumulative Incremental
A protection plan with different schedules can be configured as follows.
Automatic schedule
■ In the case of automatic type schedule, based on recurrence of snapshot
schedule get resolved after creation of protection plan.
■ If recurrence is less than one week, then a single differential and full schedule
is created.
Recommendations
■ Follow the recommendation for Retention values in a protection plan for
incremental schedules.
■ To perform a restore from a recovery point for any copy (snapshot, backup
from snapshot, duplicate), it is recommended to keep the retention duration
of the copy for a longer period.
■ For example, to restore from a backup copy, the retention of the Backup from
Snapshot copy must be longer than that of the Snapshot copy. Otherwise, the
backup copy expires and the recovery point is marked as COMPLETE = NO.
■ In such cases, the warnings appear in the NetBackup web UI as follows:
■ It is recommended to set backup retention period more than
snapshot retention period.
■ Always add a full backup schedule along with a cumulative backup schedule.
Otherwise, every Cumulative backup is performed as a Full backup.
■ By default, the Backup from Snapshot option is always enabled for incremental
backup types.
Chapter 12
Enabling accelerator
based backup
This chapter includes the following topics:
Note: Accelerator enabled backup is supported only for file mode PVCs.
To enable Accelerator, specify a valid storage class name to generate the track
logs for Accelerator backups. The storage class helps to create a file mode volume
which is usable on any of the worker nodes within the Kubernetes cluster.
Backup streams
NetBackup Accelerator creates the backup stream as follows:
■ If the namespace has no previous backup, NetBackup performs a full backup.
■ For the next backup job, NetBackup identifies the data that has changed since
the previous backup. Only the changed blocks and the header information are
included in the backup, to create a full backup.
■ Once the backup is done, bpbkar on the data mover updates the track log. Track
log path inside the data mover: /usr/openv/netbackup/track/<primary
server>/<storage server>/<k8s cluster name>_<namespace uuid>_<pvc
uuid>/<policy>/<backup selection>
Controlling disk space for track logs on primary server
■ This track log is then transferred inline to the primary server at the
following location:
/usr/openv/netbackup/db/track/<primary server>/<storage server>/<k8s cluster
name>_<namespace uuid>_<pvc uuid>/<policy>/<backup selection>
■ When the next Accelerator backup job is initiated, the track log is fetched from
the primary server to identify the changed files. Then it is updated with the new
content and transferred back to the primary server.
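The track log path layout described above can be sketched by composing its components. All values below are hypothetical placeholders, not real environment names:

```shell
# Compose the data mover track log path from its components (placeholder values).
PRIMARY="primary.example.com"
STORAGE="stu-msdp1"
CLUSTER="cluster1"; NS_UUID="ns-uuid"; PVC_UUID="pvc-uuid"
POLICY="policy1"; SELECTION="backupsel1"
TRACK_LOG="/usr/openv/netbackup/track/${PRIMARY}/${STORAGE}/${CLUSTER}_${NS_UUID}_${PVC_UUID}/${POLICY}/${SELECTION}"
echo "$TRACK_LOG"
```

The copy on the primary server follows the same layout under /usr/openv/netbackup/db/track instead.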
■ For example:
bpbackup -i -p msdp_10mins_FRS+5d990ab5-f791-474f-885a-ae0c30f31c98
-s ForcedRescan
The following API can be used to initiate the force rescan schedule:
POST admin/manual-backup
System requirements
The following are the system requirements for FIPS support in the NetBackup
Kubernetes workload.
Enable Federal Information Processing Standards (FIPS) mode in Kubernetes
Name: NetBackup Primary and Media
Parameters:
■ Both Primary and Media must be deployed on NetBackup 10.2.1 with an
underlying RHEL 8 system which is enabled with FIPS.
■ The RHEL OS version must be greater than RHEL 8.
■ You can check the version of the Red Hat machine using the
following command:
cat /etc/redhat-release
■ You can check if the underlying system has FIPS enabled
using the following command:
fips-mode-setup --check
■ For more details, you can check the man page entry of the
following command:
fips-mode-setup
Configuration parameters
The following are the configuration parameters for FIPS mode in the NetBackup
Kubernetes workload.
Configuration Parameters
■ Update
<Netbackup-Installation-Path>/netbackup/bp.conf
with the following key:
NB_FIPS_MODE = ENABLE
■ For more information on NetBackup in
FIPS mode, refer to the Configure the
FIPS mode in your NetBackup domain
section in the NetBackup™ Security and
Encryption Guide.
Note: Make sure that all the systems on which NetBackup Kubernetes workload
runs are FIPS compliant.
■ (Optional) Install the Containerized Data Importer (CDI) if you want to create
a virtual machine using a disk image from a network source.
Intelligent group
With Virtualization support in Kubernetes, you can create intelligent groups by
filtering the namespaces based on specific resource kinds. The following resource
kinds are available to filter:
■ Virtual Machines
■ Persistent Volume
■ Persistent Volume Claims
netbackup-pre.hook.backup.velero.io/container=compute
netbackup-post.hook.backup.velero.io/command='
netbackup-post.hook.backup.velero.io/container=compute
For more details about the configuration of NetBackup pre and post hooks, see
https://www.veritas.com/
Troubleshooting for virtualization
■ Custom Kubernetes role created for specific clusters cannot view the jobs
■ Restore of namespace file mode PVCs to different file system partially fails
■ Error during accelerator backup when there is no space available for track log
■ Error during accelerator backup due to track log PVC creation failure
■ Failed to setup the data mover instance for track log PVC operation
The resource limit for Backup from Snapshot jobs per Kubernetes cluster is set to 1.
Job IDs 3020 and 3021 are the parent jobs for Backup from snapshot. The creation
of the data mover pod and its cleanup process are part of the backup job life cycle.
Job ID 3022 is the child job, where the data movement takes place from the cluster
to the storage unit.
Based on the resource limit setting, while job ID 3022 is in the running state, job ID
3021 remains in the queued state. Once backup job ID 3022 is completed, the
parent job ID 3021 starts.
Notice that job ID 3020 is still in progress, because the data mover pod cleanup
must finish to complete the life cycle of the parent job ID 3020.
Scenario no 2
At this stage, you may find two data mover pods running simultaneously in the
NetBackup Kubernetes operator deployment namespace: the data mover pod
created as part of job ID 3020 is not yet cleaned up, but the data mover pod for
job 3021 has already been created.
In a busy environment, where multiple Backup from Snapshot jobs are triggered,
a low resource limit value may lead to backup jobs spending most of their time in
the queued state.
But with a higher resource limit setting, the number of data mover pods might
exceed the count specified in the resource limit. This may lead to resource
starvation in the Kubernetes cluster.
While data movement jobs like 3022 run in parallel, cleanup activities are handled
sequentially. If the time it takes to clean up the data mover resources is close to
the time it takes to back up the PVC or namespace data, this can lead to longer
delays in job completion.
Recommended action: Review your system resources and performance, and set
the resource limit value accordingly. This measure helps you achieve the best
performance for all backup jobs.
Error messages: ERR - Unable to initialize VxMS, cannot write data to socket,
Connection reset by peer.
Error bpbrm (pid=712755) from client cluster.sample.domain.com: ERR - Unable
to initialize VxMS.
Error bptm (pid=712795) cannot write data to socket, Connection reset by peer.
Recommended action: If you face this issue during restore operation, then you
should run the restore operation on a less loaded cluster or when the cluster is idle.
Note: For such applications, the PVCs, are auto-provisioned as per the deployment
configurations even if a user does not select them for restore.
The Initd script, as a parent process, attaches zombie processes to itself after
child process completion in order to terminate the persistent zombie processes. It
also helps you to shut down the container gracefully. The Initd script is available
in NBUKOps build version 10.0.1.
■ Use the following steps to remove the existing nbcertcmdtool zombie processes:
1. Describe the NetBackup operator pod and find the Kubernetes node on which
the controller pod is running. Run the command:
kubectl describe pod <NB k8s operator pod name> -n
<namespace>
Note: These steps terminate all the zombie processes for that worker node, but
they resolve the issue only temporarily. For a permanent solution, you must deploy
a new KOps build with the Initd script.
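As an illustrative way to confirm whether zombie (defunct) processes such as nbcertcmdtool are present on a node, a generic ps filter can be used. This is a sketch, not a NetBackup-specific tool:

```shell
# List zombie processes: state 'Z' in the ps STAT column.
# Prints "<pid> <command>" for each zombie; empty output means none.
ps axo pid=,stat=,comm= | awk '$2 ~ /^Z/ {print $1, $3}'
```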
Backup from snapshot fails for large sized PVC (example: 1.5 TB) with error code
34
Error messages:
Error nbcs (pid=250908) failed to setup the data mover instance for tracklog pvc
operation.
Error nbcs (pid=250908) unable to initialize the tracklog data mover instance, data
mover pod status: Pending reason:Failed message:Error: context deadline exceeded.
Restore from Snapshot or Restore from Backup
Restore from snapshot fails for large sized PVC (example: 100GB) with error code
5
Error message:
Error nbcs (pid=29228) timeout occurred while waiting for the persistent volume
claim pvc-sample status to be in the bound phase
Recommended action:
Increase the polling timeout in the backup operator configmap.
■ Configmap name: <kops-name>-backup-operator-configuration
■ Key to update: pollingTimeoutInMinutes
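A sketch of the relevant part of that configmap follows. The configmap name and key come from the text above; the value 30 and the surrounding structure are illustrative assumptions:

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: <kops-name>-backup-operator-configuration
data:
  pollingTimeoutInMinutes: "30"   # hypothetical increased timeout
```

You would typically apply such a change with kubectl edit or kubectl apply against the operator's namespace.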
Note: The responses to both these commands must indicate that a connection is
established successfully.