Red Hat OpenShift Container Storage 4.6
Deploying OpenShift Container Storage using Amazon Web Services
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
Read this document for instructions on installing Red Hat OpenShift Container Storage 4.6 using
Amazon Web Services for local or cloud storage.
Table of Contents
PREFACE
CHAPTER 1. DEPLOY USING DYNAMIC STORAGE DEVICES
    1.1. ENABLING FILE SYSTEM ACCESS FOR CONTAINERS ON RED HAT ENTERPRISE LINUX BASED NODES
    1.2. INSTALLING RED HAT OPENSHIFT CONTAINER STORAGE OPERATOR
    1.3. CREATING AN OPENSHIFT CONTAINER STORAGE CLUSTER SERVICE IN INTERNAL MODE
CHAPTER 2. DEPLOYING USING LOCAL STORAGE DEVICES
    2.1. OVERVIEW OF DEPLOYING WITH INTERNAL LOCAL STORAGE
    2.2. REQUIREMENTS FOR INSTALLING OPENSHIFT CONTAINER STORAGE USING LOCAL STORAGE DEVICES
    2.3. ENABLING FILE SYSTEM ACCESS FOR CONTAINERS ON RED HAT ENTERPRISE LINUX BASED NODES
    2.4. INSTALLING RED HAT OPENSHIFT CONTAINER STORAGE OPERATOR
    2.5. INSTALLING LOCAL STORAGE OPERATOR
    2.6. FINDING AVAILABLE STORAGE DEVICES
    2.7. CREATING OPENSHIFT CONTAINER STORAGE CLUSTER ON AMAZON EC2 STORAGE OPTIMIZED - I3EN.2XLARGE INSTANCE TYPE
CHAPTER 3. VERIFYING OPENSHIFT CONTAINER STORAGE DEPLOYMENT FOR INTERNAL MODE
    3.1. VERIFYING THE STATE OF THE PODS
    3.2. VERIFYING THE OPENSHIFT CONTAINER STORAGE CLUSTER IS HEALTHY
    3.3. VERIFYING THE MULTICLOUD OBJECT GATEWAY IS HEALTHY
    3.4. VERIFYING THAT THE OPENSHIFT CONTAINER STORAGE SPECIFIC STORAGE CLASSES EXIST
CHAPTER 4. UNINSTALLING OPENSHIFT CONTAINER STORAGE
    4.1. UNINSTALLING OPENSHIFT CONTAINER STORAGE IN INTERNAL MODE
        4.1.1. Removing local storage operator configurations
    4.2. REMOVING MONITORING STACK FROM OPENSHIFT CONTAINER STORAGE
    4.3. REMOVING OPENSHIFT CONTAINER PLATFORM REGISTRY FROM OPENSHIFT CONTAINER STORAGE
    4.4. REMOVING THE CLUSTER LOGGING OPERATOR FROM OPENSHIFT CONTAINER STORAGE
PREFACE
Red Hat OpenShift Container Storage 4.6 supports deployment on existing Red Hat OpenShift
Container Platform (RHOCP) AWS clusters in connected or disconnected environments along with out-
of-the-box support for proxy environments.
NOTE
Only internal OpenShift Container Storage clusters are supported on AWS. See Planning your deployment for more information about deployment requirements.
To deploy OpenShift Container Storage in internal mode, follow the appropriate deployment process for
your environment:
CHAPTER 1. DEPLOY USING DYNAMIC STORAGE DEVICES
NOTE
Only internal OpenShift Container Storage clusters are supported on AWS. See Planning your deployment for more information about deployment requirements.
1. For Red Hat Enterprise Linux based hosts for worker nodes in a user provisioned infrastructure (UPI), enable container access to the underlying file system. Follow the instructions in Enabling file system access for containers on Red Hat Enterprise Linux based nodes.
NOTE
Skip this step for Red Hat Enterprise Linux CoreOS (RHCOS).
1.1. ENABLING FILE SYSTEM ACCESS FOR CONTAINERS ON RED HAT ENTERPRISE LINUX BASED NODES
NOTE
This process is not necessary for hosts based on Red Hat Enterprise Linux CoreOS.
Procedure
Perform the following steps on each node in your cluster.
1. Log in to the Red Hat Enterprise Linux based node and open a terminal.
2. Run the following command to allow containers to access the underlying file system:
# setsebool -P container_use_cephfs on
1.2. INSTALLING RED HAT OPENSHIFT CONTAINER STORAGE OPERATOR
Prerequisites
You must be logged into the OpenShift Container Platform (RHOCP) cluster.
You must have at least three worker nodes in the RHOCP cluster.
NOTE
When you need to override the cluster-wide default node selector for OpenShift Container Storage, you can use the following command in the command-line interface to specify a blank node selector for the openshift-storage namespace (see the example after this note).
Taint a node as infra to ensure that only Red Hat OpenShift Container Storage resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Container Storage chapter in the Managing and Allocating Storage Resources guide.
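For example, a blank node selector can be specified by annotating the namespace; the following command is a sketch and assumes the openshift.io/node-selector annotation is the mechanism in use:
$ oc annotate namespace openshift-storage openshift.io/node-selector=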
Procedure
2. Use the Filter by keyword text box or the filter list to search for OpenShift Container Storage from the list of operators.
5. On the Install Operator page, the following required options are selected by default:
8. Click Install.
Verification steps
Verify that OpenShift Container Storage Operator shows a green tick indicating successful
installation.
Click the View Installed Operators in namespace openshift-storage link to verify that the OpenShift Container Storage Operator shows the Status as Succeeded on the Installed Operators dashboard.
1.3. CREATING AN OPENSHIFT CONTAINER STORAGE CLUSTER SERVICE IN INTERNAL MODE
Prerequisites
The OpenShift Container Storage operator must be installed from the Operator Hub. For more information, see Installing OpenShift Container Storage Operator using the Operator Hub.
Procedure
4. On the Create Storage Cluster page, ensure that the following options are selected:
c. Select OpenShift Container Storage Service Capacity from the drop-down list.
NOTE
Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (times 3 of raw storage).
d. (Optional) In the Encryption section, set the toggle to Enabled to enable data encryption
on the cluster.
e. In the Nodes section, select at least three worker nodes from the available list for the use
of OpenShift Container Storage service.
For cloud platforms with multiple availability zones, ensure that the Nodes are spread
across different Locations/availability zones.
NOTE
To find specific worker nodes in the cluster, you can filter nodes on the basis
of Name or Label.
If the nodes selected do not match the OpenShift Container Storage cluster requirement of
an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster will be deployed. For
minimum starting node requirements, see the Resource requirements section in the Planning guide.
5. Click Create.
The Create button is enabled only after you select three nodes. A new storage cluster with
three storage devices will be created, one per selected node. The default configuration uses a
replication factor of 3.
Verification steps
1. Verify that the final Status of the installed storage cluster shows as Phase: Ready with a green
tick mark.
Click Operators → Installed Operators → Storage Cluster link to view the storage cluster installation status.
Alternatively, when you are on the Operator Details tab, you can click on the Storage
Cluster tab to view the status.
2. To verify that all components for OpenShift Container Storage are successfully installed, see Verifying your OpenShift Container Storage installation.
CHAPTER 2. DEPLOYING USING LOCAL STORAGE DEVICES
2.1. OVERVIEW OF DEPLOYING WITH INTERNAL LOCAL STORAGE
Use this section to deploy OpenShift Container Storage on Amazon EC2 storage optimized I3 where
OpenShift Container Platform is already installed.
IMPORTANT
1. Understand the requirements for installing OpenShift Container Storage using local storage
devices.
2. For Red Hat Enterprise Linux based hosts for worker nodes, enable file system access for
containers on Red Hat Enterprise Linux based nodes.
NOTE
Skip this step for Red Hat Enterprise Linux CoreOS (RHCOS).
3. Install the Red Hat OpenShift Container Storage Operator.
4. Install the Local Storage Operator.
5. Find the available storage devices.
6. Create the OpenShift Container Storage cluster service on the Amazon EC2 storage optimized - i3en.2xlarge instance type.
2.2. REQUIREMENTS FOR INSTALLING OPENSHIFT CONTAINER STORAGE USING LOCAL STORAGE DEVICES
The Local Storage Operator version must match the Red Hat OpenShift Container Platform version in order to have the Local Storage Operator fully supported with Red Hat OpenShift Container Storage. The Local Storage Operator does not get upgraded when Red Hat OpenShift Container Platform is upgraded.
You must have at least three OpenShift Container Platform worker nodes in the cluster with
locally attached storage devices on each of them.
Each of the three selected nodes must have at least one raw block device available to be
used by OpenShift Container Storage.
The devices to be used must be empty, that is, there should be no persistent volumes (PVs),
volume groups (VGs), or local volumes (LVs) remaining on the disks.
For minimum starting node requirements, see the Resource requirements section in the Planning guide.
Ensure that the Nodes are spread across different Locations/Availability Zones for a
multiple availability zones platform.
Each node that has local storage devices to be used by OpenShift Container Storage must
have a specific label to deploy OpenShift Container Storage pods. To label the nodes, use
the following command:
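A sketch of such a labeling command, assuming the cluster.ocs.openshift.io/openshift-storage label that the node selector in the LocalVolume example later in this chapter relies on:
$ oc label nodes <NodeName> cluster.ocs.openshift.io/openshift-storage=''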
2.3. ENABLING FILE SYSTEM ACCESS FOR CONTAINERS ON RED HAT ENTERPRISE LINUX BASED NODES
NOTE
This process is not necessary for hosts based on Red Hat Enterprise Linux CoreOS.
Procedure
Perform the following steps on each node in your cluster.
1. Log in to the Red Hat Enterprise Linux based node and open a terminal.
2. Run the following command to allow containers to access the underlying file system:
# setsebool -P container_use_cephfs on
2.4. INSTALLING RED HAT OPENSHIFT CONTAINER STORAGE OPERATOR
Prerequisites
You must be logged into the OpenShift Container Platform (RHOCP) cluster.
You must have at least three worker nodes in the RHOCP cluster.
NOTE
When you need to override the cluster-wide default node selector for OpenShift Container Storage, you can use the following command in the command-line interface to specify a blank node selector for the openshift-storage namespace (see the example after this note).
Taint a node as infra to ensure that only Red Hat OpenShift Container Storage resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Container Storage chapter in the Managing and Allocating Storage Resources guide.
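For example, a blank node selector can be specified by annotating the namespace; the following command is a sketch and assumes the openshift.io/node-selector annotation is the mechanism in use:
$ oc annotate namespace openshift-storage openshift.io/node-selector=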
Procedure
2. Use the Filter by keyword text box or the filter list to search for OpenShift Container Storage from the list of operators.
5. On the Install Operator page, the following required options are selected by default:
8. Click Install.
Verification steps
Verify that OpenShift Container Storage Operator shows a green tick indicating successful
installation.
Click the View Installed Operators in namespace openshift-storage link to verify that the OpenShift Container Storage Operator shows the Status as Succeeded on the Installed Operators dashboard.
2.5. INSTALLING LOCAL STORAGE OPERATOR
Procedure
3. Search for Local Storage Operator from the list of operators and click on it.
4. Click Install.
6. Click Install.
7. Verify that the Local Storage Operator shows the Status as Succeeded.
2.6. FINDING AVAILABLE STORAGE DEVICES
Procedure
1. List and verify the name of the nodes with the OpenShift Container Storage label.
Example output:
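A sketch of the listing command and the kind of output to expect; the label selector is an assumption based on the node label used elsewhere in this guide, and the node names are taken from a later example in this chapter:
$ oc get nodes -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'
ip-10-0-135-71.us-east-2.compute.internal
ip-10-0-145-125.us-east-2.compute.internal
ip-10-0-160-91.us-east-2.compute.internal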
2. Log in to each node that is used for OpenShift Container Storage resources and find the unique
by-id device name for each available raw block device.
$ oc debug node/<Nodename>
Example output:
$ oc debug node/ip-10-0-135-71.us-east-2.compute.internal
Starting pod/ip-10-0-135-71us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
Pod IP: 10.0.135.71
If you don't see a command prompt, try pressing enter.
sh-4.2# chroot /host
sh-4.4# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 120G 0 disk
|-xvda1 202:1 0 384M 0 part /boot
|-xvda2 202:2 0 127M 0 part /boot/efi
|-xvda3 202:3 0 1M 0 part
`-xvda4 202:4 0 119.5G 0 part
`-coreos-luks-root-nocrypt 253:0 0 119.5G 0 dm /sysroot
nvme0n1 259:0 0 2.3T 0 disk
nvme1n1 259:1 0 2.3T 0 disk
In this example, for the selected node, the local devices available are nvme0n1 and nvme1n1.
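One way to map each device to its unique by-id name is to list the by-id links from the chroot on the node; this command is a sketch rather than the exact step in this procedure:
sh-4.4# ls -l /dev/disk/by-id/ | grep nvme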
In the example above, the IDs for the two local devices are
nvme0n1: nvme-Amazon_EC2_NVMe_Instance_Storage_AWS10382E5D7441494EC
nvme1n1: nvme-Amazon_EC2_NVMe_Instance_Storage_AWS60382E5D7441494EC
4. Repeat the above step to identify the device ID for all the other nodes that have the storage
devices to be used by OpenShift Container Storage. See this Knowledge Base article for more
details.
2.7. CREATING OPENSHIFT CONTAINER STORAGE CLUSTER ON AMAZON EC2 STORAGE OPTIMIZED - I3EN.2XLARGE INSTANCE TYPE
The Amazon EC2 storage optimized - i3en.2xlarge instance type includes two non-volatile memory
express (NVMe) disks. The example in this procedure illustrates the use of both the disks that the
instance type comes with.
Use three availability zones to decrease the risk of losing all the data.
Limit the number of users with ec2:StopInstances permissions to avoid instance shutdown by
mistake.
WARNING
Cloud burst, where data is copied from another location for specific data crunching that is limited in time
IMPORTANT
Prerequisites
Ensure that all the requirements in the Requirements for installing OpenShift Container Storage
using local storage devices section are met.
Verify your OpenShift Container Platform worker nodes are labeled for OpenShift Container
Storage, which is used as the nodeSelector.
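A sketch of the verification command, assuming the cluster.ocs.openshift.io/openshift-storage node label used elsewhere in this guide:
$ oc get nodes -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'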
Example output:
ip-10-0-135-71.us-east-2.compute.internal
ip-10-0-145-125.us-east-2.compute.internal
ip-10-0-160-91.us-east-2.compute.internal
Procedure
1. Create local persistent volumes (PVs) on the storage nodes using LocalVolume custom
resource (CR).
Example of the LocalVolume CR local-storage-block.yaml using the OpenShift Container Storage label as the node selector and by-id device identifiers:
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: local-block
  namespace: openshift-local-storage
  labels:
    app: ocs-storagecluster
spec:
  tolerations:
  - key: "node.ocs.openshift.io/storage"
    value: "true"
    effect: NoSchedule
  nodeSelector:
    nodeSelectorTerms:
    - matchExpressions:
      - key: cluster.ocs.openshift.io/openshift-storage
        operator: In
        values:
        - ''
  storageClassDevices:
  - storageClassName: localblock
    volumeMode: Block
    devicePaths:
    - /dev/disk/by-id/nvme-Amazon_EC2_NVMe_Instance_Storage_AWS10382E5D7441494EC # <-- modify this line
    - /dev/disk/by-id/nvme-Amazon_EC2_NVMe_Instance_Storage_AWS1F45C01D7E84FE3E9 # <-- modify this line
    - /dev/disk/by-id/nvme-Amazon_EC2_NVMe_Instance_Storage_AWS136BC945B4ECB9AE4 # <-- modify this line
    - /dev/disk/by-id/nvme-Amazon_EC2_NVMe_Instance_Storage_AWS10382E5D7441464EP # <-- modify this line
    - /dev/disk/by-id/nvme-Amazon_EC2_NVMe_Instance_Storage_AWS1F45C01D7E84F43E7 # <-- modify this line
    - /dev/disk/by-id/nvme-Amazon_EC2_NVMe_Instance_Storage_AWS136BC945B4ECB9AE8 # <-- modify this line
Each Amazon EC2 I3 instance has two disks and this example uses both disks on each node.
$ oc create -f local-storage-block.yaml
Example output:
localvolume.local.storage.openshift.io/local-block created
$ oc get pv
Example output:
5. Check for the new StorageClass that is now present when the LocalVolume CR is created.
This StorageClass is used to provide the StorageCluster PVCs in the following steps.
Example output:
6. Create the StorageCluster CR that uses the localblock StorageClass to consume the PVs
created by the Local Storage Operator.
Example of StorageCluster CR ocs-cluster-service.yaml using monDataDirHostPath and
localblock StorageClass.
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  manageNodes: false
  resources:
    mds:
      limits:
        cpu: 3
        memory: 8Gi
      requests:
        cpu: 1
        memory: 8Gi
  monDataDirHostPath: /var/lib/rook
  storageDeviceSets:
  - count: 2
    dataPVCTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 2328Gi
        storageClassName: localblock
        volumeMode: Block
    name: ocs-deviceset
    placement: {}
    portable: false
    replica: 3
    resources:
      limits:
        cpu: 2
        memory: 5Gi
      requests:
        cpu: 1
        memory: 5Gi
IMPORTANT
To ensure that the OSDs have a guaranteed size across the nodes, the storage
size for storageDeviceSets must be specified as less than or equal to the size of
the PVs created on the nodes.
$ oc create -f ocs-cluster-service.yaml
Example output
storagecluster.ocs.openshift.io/ocs-cluster-service created
Verification steps
See Verifying your OpenShift Container Storage installation .
CHAPTER 3. VERIFYING OPENSHIFT CONTAINER STORAGE DEPLOYMENT FOR INTERNAL MODE
3.1. VERIFYING THE STATE OF THE PODS
Procedure
1. Click Workloads → Pods from the left pane of the OpenShift Web Console.
3. Verify that the following pods are in Running or Completed state by clicking the Running and the Completed tabs:
ocs-metrics-exporter: ocs-metrics-exporter-*
MON: rook-ceph-mon-*
MGR: rook-ceph-mgr-*
MDS: rook-ceph-mds-ocs-storagecluster-cephfilesystem-*
CSI:
    cephfs: csi-cephfsplugin-* (1 pod on each worker node) and csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes)
    rbd: csi-rbdplugin-* (1 pod on each worker node) and csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes)
rook-ceph-drain-canary: rook-ceph-drain-canary-*
rook-ceph-crashcollector: rook-ceph-crashcollector-*
OSD: rook-ceph-osd-* (1 pod for each device) and rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device)
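The same pods can also be checked from the command line; as a sketch:
$ oc get pods -n openshift-storage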
3.2. VERIFYING THE OPENSHIFT CONTAINER STORAGE CLUSTER IS HEALTHY
In the Status card, verify that OCS Cluster and Data Resiliency have a green tick mark.
In the Details card, verify that the cluster information is displayed as follows:
Service Name: OpenShift Container Storage
Cluster Name: ocs-storagecluster
Provider: AWS
Mode: Internal
Version: ocs-operator-4.6.0
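The overall cluster state can also be checked from the command line; as a sketch, the storage cluster should report the Ready phase:
$ oc get storagecluster -n openshift-storage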
For more information on the health of OpenShift Container Storage cluster using the persistent storage
dashboard, see Monitoring OpenShift Container Storage .
3.3. VERIFYING THE MULTICLOUD OBJECT GATEWAY IS HEALTHY
In the Status card, verify that both Object Service and Data Resiliency are in Ready state
(green tick).
In the Details card, verify that the MCG information is displayed as follows:
Service Name: OpenShift Container Storage
System Name: Multicloud Object Gateway
Provider: AWS
Version: ocs-operator-4.6.0
For more information on the health of the OpenShift Container Storage cluster using the object service
dashboard, see Monitoring OpenShift Container Storage .
3.4. VERIFYING THAT THE OPENSHIFT CONTAINER STORAGE SPECIFIC STORAGE CLASSES EXIST
Click Storage → Storage Classes from the left pane of the OpenShift Web Console.
Verify that the following storage classes are created with the OpenShift Container Storage
cluster creation:
ocs-storagecluster-ceph-rbd
ocs-storagecluster-cephfs
openshift-storage.noobaa.io
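The storage classes can also be listed from the command line; as a sketch:
$ oc get storageclass | grep -E 'ocs-storagecluster-ceph-rbd|ocs-storagecluster-cephfs|openshift-storage.noobaa.io'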
CHAPTER 4. UNINSTALLING OPENSHIFT CONTAINER STORAGE
4.1. UNINSTALLING OPENSHIFT CONTAINER STORAGE IN INTERNAL MODE
Uninstall Annotations
Annotations on the Storage Cluster are used to change the behavior of the uninstall process. To define
the uninstall behavior, the following two annotations have been introduced in the storage cluster:
uninstall.ocs.openshift.io/cleanup-policy: delete
uninstall.ocs.openshift.io/mode: graceful
The following table provides information on the different values that can be used with these annotations:
You can change the cleanup policy or the uninstall mode by editing the value of the annotation by using
the following commands:
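The following commands are a sketch of how such an edit can be made with oc annotate; the retain value is an assumption about the alternative cleanup policy, so confirm the supported values for your release before changing them:
$ oc annotate storagecluster ocs-storagecluster -n openshift-storage uninstall.ocs.openshift.io/cleanup-policy="retain" --overwrite
$ oc annotate storagecluster ocs-storagecluster -n openshift-storage uninstall.ocs.openshift.io/mode="forced" --overwrite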
Prerequisites
Ensure that the OpenShift Container Storage cluster is in a healthy state. The uninstall process
can fail when some of the pods are not terminated successfully due to insufficient resources or
nodes. In case the cluster is in an unhealthy state, contact Red Hat Customer Support before
uninstalling OpenShift Container Storage.
Ensure that applications are not consuming persistent volume claims (PVCs) or object bucket
claims (OBCs) using the storage classes provided by OpenShift Container Storage.
If any custom resources (such as custom storage classes, cephblockpools) were created by the
admin, they must be deleted by the admin after removing the resources which consumed them.
Procedure
1. Delete the volume snapshots that are using OpenShift Container Storage.
b. From the output of the previous command, identify and delete the volume snapshots that
are using OpenShift Container Storage.
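Sketches of the commands for these substeps; the resource names are placeholders:
$ oc get volumesnapshot --all-namespaces
$ oc delete volumesnapshot <VOLUME-SNAPSHOT-NAME> -n <NAMESPACE>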
2. Delete PVCs and OBCs that are using OpenShift Container Storage.
In the default uninstall mode (graceful), the uninstaller waits until all the PVCs and OBCs that use OpenShift Container Storage are deleted.
If you wish to delete the Storage Cluster without deleting the PVCs beforehand, you may set
the uninstall mode annotation to "forced" and skip this step. Doing so will result in orphan PVCs
and OBCs in the system.
a. Delete OpenShift Container Platform monitoring stack PVCs using OpenShift Container
Storage.
See Section 4.2, “Removing monitoring stack from OpenShift Container Storage”
b. Delete OpenShift Container Platform Registry PVCs using OpenShift Container Storage.
See Section 4.3, “Removing OpenShift Container Platform registry from OpenShift
Container Storage”
c. Delete OpenShift Container Platform logging PVCs using OpenShift Container Storage.
See Section 4.4, “Removing the cluster logging operator from OpenShift Container
Storage”
d. Delete other PVCs and OBCs provisioned using OpenShift Container Storage.
Given below is a sample script to identify the PVCs and OBCs provisioned using OpenShift Container Storage. The script ignores the PVCs that are used internally by OpenShift Container Storage.
#!/bin/bash
RBD_PROVISIONER="openshift-storage.rbd.csi.ceph.com"
CEPHFS_PROVISIONER="openshift-storage.cephfs.csi.ceph.com"
NOOBAA_PROVISIONER="openshift-storage.noobaa.io/obc"
RGW_PROVISIONER="openshift-storage.ceph.rook.io/bucket"
NOOBAA_DB_PVC="noobaa-db"
NOOBAA_BACKINGSTORE_PVC="noobaa-default-backing-store-noobaa-pvc"
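# The rest of the sample script is shown here only as a minimal sketch: it assumes
# the OpenShift Container Storage storage class names listed elsewhere in this guide,
# skips the internal NooBaa PVCs defined above, and then lists all object bucket claims.
oc get pvc --all-namespaces | grep -E 'ocs-storagecluster-ceph-rbd|ocs-storagecluster-cephfs|openshift-storage.noobaa.io' | grep -v -E "$NOOBAA_DB_PVC|$NOOBAA_BACKINGSTORE_PVC"
oc get obc --all-namespaces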
NOTE
Ensure that you have removed any custom backing stores, bucket classes, etc., created in the cluster.
3. Delete the Storage Cluster object and wait for the removal of the associated resources.
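A sketch of the deletion, assuming the openshift-storage namespace used elsewhere in this guide:
$ oc delete -n openshift-storage storagecluster --all --wait=true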
5. Confirm that the directory /var/lib/rook is now empty. This directory is empty only if the uninstall.ocs.openshift.io/cleanup-policy annotation was set to delete (default).
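As a sketch, the directory can be checked on every labeled storage node from a debug pod:
$ for node in $(oc get nodes -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{.items[*].metadata.name}'); do oc debug node/${node} -- chroot /host ls -l /var/lib/rook; done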
6. If encryption was enabled at the time of install, remove dm-crypt managed device-mapper
mapping from OSD devices on all the OpenShift Container Storage nodes.
a. Create a debug pod and chroot to the host on the storage node.
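For example, with a placeholder node name:
$ oc debug node/<node_name>
sh-4.2# chroot /host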
b. Get Device names and make note of the OpenShift Container Storage devices.
$ dmsetup ls
ocs-deviceset-0-data-0-57snx-block-dmcrypt (253:1)
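The mapping itself can then be removed with cryptsetup; this command is a sketch that reuses the device name from the example output above, so substitute the names found on your nodes:
sh-4.4# cryptsetup luksClose --debug --verbose ocs-deviceset-0-data-0-57snx-block-dmcrypt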
If the above command gets stuck due to insufficient privileges, run the following
commands:
$ ps
Example output:
Take a note of the PID number to kill. In this example, PID is 778825.
$ kill -9 <PID>
$ dmsetup ls
7. Delete the namespace and wait till the deletion is complete. You will need to switch to another
project if openshift-storage is the active project.
For example:
$ oc project default
$ oc delete project openshift-storage --wait=true --timeout=5m
NOTE
8. Delete the local storage operator configurations if you have deployed OpenShift Container
Storage using local storage devices. See Removing local storage operator configurations .
10. Remove the OpenShift Container Storage taint if the nodes were tainted.
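A sketch of removing the taint, assuming the node.ocs.openshift.io/storage taint key used in the examples in this guide:
$ oc adm taint nodes --all node.ocs.openshift.io/storage-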
11. Confirm all PVs provisioned using OpenShift Container Storage are deleted. If there is any PV
left in the Released state, delete it.
$ oc get pv
$ oc delete pv <pv name>
14. To ensure that OpenShift Container Storage is uninstalled completely, on the OpenShift
Container Platform Web Console,
b. Verify that the Persistent Storage and Object Service tabs no longer appear next to the
Cluster tab.
4.1.1. Removing local storage operator configurations
Procedure
$ export SC="<StorageClassName>"
$ oc delete sc $SC
7. Delete LocalVolumeDiscovery.
$ oc delete localvolumediscovery.local.storage.openshift.io/auto-discover-devices -n openshift-local-storage
b. Set the variable LV to the name of the LocalVolume and the variable SC to the name of the StorageClass.
For example:
For example:
$ LV=local-block
$ SC=localblock
e. Clean up the artifacts from the storage nodes for that resource.
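A sketch of such a cleanup, assuming the Local Storage Operator default symlink path /mnt/local-storage and the node label used elsewhere in this guide; review it carefully before running:
$ for node in $(oc get nodes -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{.items[*].metadata.name}'); do oc debug node/${node} -- chroot /host rm -rfv /mnt/local-storage/${SC}/; done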
Example output:
4.2. REMOVING MONITORING STACK FROM OPENSHIFT CONTAINER STORAGE
The PVCs that are created as a part of configuring the monitoring stack are in the openshift-
monitoring namespace.
Prerequisites
Procedure
1. List the pods and PVCs that are currently running in the openshift-monitoring namespace.
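As a sketch:
$ oc get pod,pvc -n openshift-monitoring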
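2. Edit the monitoring ConfigMap. The following command is a sketch; the ConfigMap name is taken from the example below:
$ oc -n openshift-monitoring edit configmap cluster-monitoring-config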
3. Remove any config sections that reference the OpenShift Container Storage storage classes
as shown in the following example and save it.
Before editing
.
.
.
apiVersion: v1
data:
  config.yaml: |
    alertmanagerMain:
      volumeClaimTemplate:
        metadata:
          name: my-alertmanager-claim
        spec:
          resources:
            requests:
              storage: 40Gi
          storageClassName: ocs-storagecluster-ceph-rbd
    prometheusK8s:
      volumeClaimTemplate:
        metadata:
          name: my-prometheus-claim
        spec:
          resources:
            requests:
              storage: 40Gi
          storageClassName: ocs-storagecluster-ceph-rbd
kind: ConfigMap
metadata:
  creationTimestamp: "2019-12-02T07:47:29Z"
  name: cluster-monitoring-config
  namespace: openshift-monitoring
  resourceVersion: "22110"
  selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config
  uid: fd6d988b-14d7-11ea-84ff-066035b9efa8
.
.
.
After editing
.
.
.
apiVersion: v1
data:
  config.yaml: |
kind: ConfigMap
metadata:
  creationTimestamp: "2019-11-21T13:07:05Z"
  name: cluster-monitoring-config
  namespace: openshift-monitoring
  resourceVersion: "404352"
  selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config
  uid: d12c796a-0c5f-11ea-9832-063cd735b81c
.
.
.
In this example, alertmanagerMain and prometheusK8s monitoring components are using the
OpenShift Container Storage PVCs.
4. Delete relevant PVCs. Make sure you delete all the PVCs that are consuming the storage
classes.
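For example, with a placeholder PVC name:
$ oc delete -n openshift-monitoring pvc <pvc-name> --wait=true --timeout=5m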
4.3. REMOVING OPENSHIFT CONTAINER PLATFORM REGISTRY FROM OPENSHIFT CONTAINER STORAGE
The PVCs that are created as a part of configuring the OpenShift Container Platform registry are in the openshift-image-registry namespace.
Prerequisites
The image registry should have been configured to use an OpenShift Container Storage PVC.
Procedure
$ oc edit configs.imageregistry.operator.openshift.io
Before editing
.
.
.
storage:
  pvc:
    claim: registry-cephfs-rwx-pvc
.
.
.
After editing
.
.
.
storage:
.
.
.
In this example, the PVC is called registry-cephfs-rwx-pvc, which is now safe to delete.
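As a sketch using the PVC name from this example:
$ oc delete pvc registry-cephfs-rwx-pvc -n openshift-image-registry --wait=true --timeout=5m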
4.4. REMOVING THE CLUSTER LOGGING OPERATOR FROM OPENSHIFT CONTAINER STORAGE
The PVCs that are created as a part of configuring the cluster logging operator are in the openshift-logging namespace.
Prerequisites
The cluster logging instance should have been configured to use OpenShift Container Storage
PVCs.
Procedure
2. Delete PVCs.
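A sketch of the deletion, with a placeholder PVC name:
$ oc delete pvc <pvc-name> -n openshift-logging --wait=true --timeout=5m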