Administering OCR and Voting Disk
Contents
1. Overview
2. Example Configuration
3. Administering the OCR File
View OCR Configuration Information
Add an OCR File
Relocate an OCR File
Repair an OCR File on a Local Node
Remove an OCR File
4. Backup the OCR File
Automatic OCR Backups
Manual OCR Exports
5. Recover the OCR File
Recover OCR from Valid OCR Mirror
Recover OCR from Automatically Generated Physical Backup
Recover OCR from an OCR Export File
6. Administering the Voting Disk
View Voting Disk Configuration Information
Add a Voting Disk
Remove a Voting Disk
Relocate a Voting Disk
7. Backup the Voting Disk
8. Recover the Voting Disk
9. Move the Voting Disk and OCR from OCFS to RAW Devices
Move the OCR
Move the Voting Disk
10. About the Author
Overview
Oracle Clusterware 10g, formerly known as Cluster Ready Services (CRS), is software that, when
installed on servers running the same operating system, enables the servers to be bound together
to operate and function as a single server or cluster. This infrastructure simplifies the
requirements for an Oracle Real Application Clusters (RAC) database by providing cluster
software that is tightly integrated with the Oracle Database.
Oracle Clusterware requires two critical components: a voting disk to record
node membership information and the Oracle Cluster Registry (OCR) to record cluster
configuration information:
Voting Disk
The voting disk is a shared partition that Oracle Clusterware uses to verify cluster node
membership and status. Oracle Clusterware uses the voting disk to determine which instances are
members of a cluster by way of a health check and arbitrates cluster ownership among the
instances in case of network failures. The primary function of the voting disk is to manage node
membership and prevent what is known as Split Brain Syndrome in which two or more instances
attempt to control the RAC database. This can occur in cases where there is a break in
communication between nodes through the interconnect.
The voting disk must reside on a shared disk(s) that is accessible by all of the nodes in the
cluster. For high availability, Oracle recommends that you have multiple voting disks. Oracle
Clusterware can be configured to maintain multiple voting disks (multiplexing) but you must
have an odd number of voting disks, such as three, five, and so on. Oracle Clusterware supports a
maximum of 32 voting disks. If you define a single voting disk, then you should use external
mirroring to provide redundancy.
A node must be able to access more than half of the voting disks at any time. For example, if you
have five voting disks configured, then a node must be able to access at least three of the voting
disks at any time. If a node cannot access the minimum required number of voting disks it is
evicted, or removed, from the cluster. After the cause of the failure has been corrected and access
to the voting disks has been restored, you can instruct Oracle Clusterware to recover the failed
node and restore it to the cluster.
Oracle Cluster Registry (OCR)
The OCR stores cluster configuration information in a series of key-value pairs within a directory tree
structure. To view the contents of the OCR in a human-readable format, run the ocrdump
command. This will dump the contents of the OCR into an ASCII text file in the current
directory named OCRDUMPFILE.
The OCR must reside on a shared disk(s) that is accessible by all of the nodes in the cluster.
Oracle Clusterware 10g Release 2 allows you to multiplex the OCR and Oracle recommends that
you use this feature to ensure cluster high availability. Oracle Clusterware allows for a maximum
of two OCR locations; one is the primary and the second is an OCR mirror. If you define a single
OCR, then you should use external mirroring to provide redundancy. You can replace a failed
OCR online, and you can update the OCR through supported APIs such as Enterprise Manager,
the Server Control Utility (SRVCTL), or the Database Configuration Assistant (DBCA).
This article provides a detailed look at how to administer the two critical Oracle Clusterware
components — the voting disk and the Oracle Cluster Registry (OCR). The examples described in this
guide were tested with Oracle RAC 10g Release 2 (10.2.0.4) on the Linux x86 platform.
It is highly recommended to take a backup of the voting disk and OCR file before making
any changes! Instructions are included in this guide on how to perform backups of the voting
disk and OCR file.
CRS_home

The Oracle Clusterware binaries included in this article (i.e. crs_stat, ocrcheck, crsctl, etc.)
are being executed from the Oracle Clusterware home directory which, for the purpose of this
article, is /u01/app/crs. The environment variable $ORA_CRS_HOME is set for both the oracle
and root user accounts to this directory and is also included in the $PATH:

[root@racnode1 ~]# echo $ORA_CRS_HOME
/u01/app/crs

[root@racnode1 ~]# which ocrcheck
/u01/app/crs/bin/ocrcheck
Example Configuration
The example configuration used in this article consists of a two-node RAC with a clustered
database named racdb.idevelopment.info running Oracle RAC 10g Release 2 on the Linux
x86 platform. The two node names are racnode1 and racnode2, each hosting a single Oracle
instance named racdb1 and racdb2 respectively. For a detailed guide on building the example
clustered database environment, please see:
Building an Inexpensive Oracle RAC 10g Release 2 on Linux - (CentOS 5.3 / iSCSI)
The example Oracle Clusterware environment is configured with a single voting disk and a
single OCR file on an OCFS2 clustered file system. Note that the voting disk is owned by the
oracle user in the oinstall group with 0644 permissions while the OCR file is owned by root in
the oinstall group with 0640 permissions:
[root@racnode1 ~]# crsctl query css votedisk
0. 0 /u02/oradata/racdb/CSSFile
located 1 votedisk(s).
Preparation
To prepare for the examples used in this guide, five new iSCSI volumes were created from the SAN and
will be bound to RAW devices on all nodes in the RAC cluster. These five new volumes will be used to
demonstrate how to move the current voting disk and OCR file from an OCFS2 file system to RAW
devices:
Five New iSCSI Volumes and their Local Device Name Mappings
iSCSI Target Name Local Device Name Disk Size
iqn.2006-01.com.openfiler:racdb.ocr1 /dev/iscsi/ocr1/part 512 MB
iqn.2006-01.com.openfiler:racdb.ocr2 /dev/iscsi/ocr2/part 512 MB
iqn.2006-01.com.openfiler:racdb.voting1 /dev/iscsi/voting1/part 32 MB
iqn.2006-01.com.openfiler:racdb.voting2 /dev/iscsi/voting2/part 32 MB
iqn.2006-01.com.openfiler:racdb.voting3 /dev/iscsi/voting3/part 32 MB
After the new iSCSI volumes have been created on the SAN, they need to be configured for access
and bound to RAW devices on all Oracle RAC nodes in the database cluster.
1. From all Oracle RAC nodes in the cluster as root, discover the five new iSCSI volumes from
the SAN which will be used to store the voting disks and OCR files.
# +---------------------------------------------------------+
# | FILE: /usr/local/bin/setup_raw_devices.sh |
# +---------------------------------------------------------+
# +---------------------------------------------------------+
# | Bind OCR files to RAW device files. |
# +---------------------------------------------------------+
/bin/raw /dev/raw/raw1 /dev/iscsi/ocr1/part1
/bin/raw /dev/raw/raw2 /dev/iscsi/ocr2/part1
sleep 3
/bin/chown root:oinstall /dev/raw/raw1
/bin/chown root:oinstall /dev/raw/raw2
/bin/chmod 0640 /dev/raw/raw1
/bin/chmod 0640 /dev/raw/raw2
# +---------------------------------------------------------+
# | Bind voting disks to RAW device files. |
# +---------------------------------------------------------+
/bin/raw /dev/raw/raw3 /dev/iscsi/voting1/part1
/bin/raw /dev/raw/raw4 /dev/iscsi/voting2/part1
/bin/raw /dev/raw/raw5 /dev/iscsi/voting3/part1
sleep 3
/bin/chown oracle:oinstall /dev/raw/raw3
/bin/chown oracle:oinstall /dev/raw/raw4
/bin/chown oracle:oinstall /dev/raw/raw5
/bin/chmod 0644 /dev/raw/raw3
/bin/chmod 0644 /dev/raw/raw4
/bin/chmod 0644 /dev/raw/raw5
6. From all Oracle RAC nodes in the cluster, change the permissions of the new shell script to
make it executable:
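A command along these lines would work on each node (the 755 mode is an assumption; adjust to your site standards):

[root@racnode1 ~]# chmod 755 /usr/local/bin/setup_raw_devices.sh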
Administering the OCR File
View OCR Configuration Information
Two methods exist to verify how many OCR files are configured for the cluster as well as their
location. If the cluster is up and running, use the ocrcheck utility as either the oracle or root user
account:
[oracle@racnode1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262120
Used space (kbytes) : 4660
Available space (kbytes) : 257460
ID : 1331197
Device/File Name : /u02/oradata/racdb/OCRFile <-- OCR (primary)
Device/File integrity check succeeded
If CRS is down, you can still determine the location and number of OCR files by viewing the file
ocr.loc, whose location is somewhat platform dependent. For example, on the Linux platform it
is located in /etc/oracle/ocr.loc while on Sun Solaris it is located at /var/opt/oracle/ocr.loc:
To view the actual contents of the OCR in a human-readable format, run the ocrdump command.
This command requires the CRS stack to be running. Running the ocrdump command will dump
the contents of the OCR into an ASCII text file in the current directory named OCRDUMPFILE:
#
# Write OCR contents to specified file name.
#
[root@racnode1 ~]# ocrdump /tmp/`hostname`_ocrdump_`date +%m%d%y:%H%M`
#
# Print OCR contents to the screen.
#
[root@racnode1 ~]# ocrdump -stdout -keyname SYSTEM.css
#
# Write OCR contents out to XML format.
#
[root@racnode1 ~]# ocrdump -stdout -keyname SYSTEM.css -xml > ocrdump.xml
Add an OCR File
Starting with Oracle Clusterware 10g Release 2 (10.2), users now have the ability to multiplex
(mirror) the OCR. Oracle Clusterware allows for a maximum of two OCR locations; one is the
primary and the second is an OCR mirror. To avoid simultaneous loss of multiple OCR files,
each copy of the OCR should be placed on a shared storage device that does not share any
components (controller, interconnect, and so on) with the storage devices used for the other OCR
file.
Before attempting to add a mirrored OCR, determine how many OCR files are currently
configured for the cluster as well as their location. If the cluster is up and running, use the
ocrcheck utility as either the oracle or root user account:
If CRS is down, you can still determine the location and number of OCR files by viewing the file
ocr.loc, whose location is somewhat platform dependent. For example, on the Linux platform it
is located in /etc/oracle/ocr.loc while on Sun Solaris it is located at /var/opt/oracle/ocr.loc:
The results above indicate I have only one OCR file and that it is located on an OCFS2 file
system. Since we are allowed a maximum of two OCR locations, I intend to create an OCR
mirror and locate it on the same OCFS2 file system in the same directory as the primary OCR.
Please note that I am doing this for the sake of brevity. The OCR mirror should always be placed on
a separate device from the primary OCR file to guard against a single point of failure.
Note that the Oracle Clusterware stack should be online and running on all nodes in the cluster while
adding, replacing, or removing an OCR location; these operations therefore do not require any system downtime.
The operations performed in this section affect the OCR for the entire cluster. However, the
ocrconfig command cannot modify OCR configuration information for nodes that are shut
down or for nodes on which Oracle Clusterware is not running. So, you should avoid
shutting down nodes while modifying the OCR using the ocrconfig command. If for any
reason, any of the nodes in the cluster are shut down while modifying the OCR using the
ocrconfig command, you will need to perform a repair on the stopped node before it can be
brought online to join the cluster. Please see the section "Repair an OCR File on a Local
Node" for instructions on repairing the OCR file on the affected node.
You can add an OCR mirror after an upgrade or after completing the Oracle Clusterware
installation. The Oracle Universal Installer (OUI) allows you to configure either one or two OCR
locations during the installation of Oracle Clusterware. If you already mirror the OCR, then you
do not need to add a new OCR location; Oracle Clusterware automatically manages two OCRs
when you configure normal redundancy for the OCR. As previously mentioned, Oracle RAC
environments do not support more than two OCR locations; a primary OCR and a secondary
(mirrored) OCR.
Run the following command to add or relocate an OCR mirror using either destination_file or
disk to designate the target location of the additional OCR:
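A sketch of the general syntax, consistent with the examples later in this section (the angle-bracket placeholder is illustrative):

[root@racnode1 ~]# ocrconfig -replace ocrmirror <destination_file_or_disk>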
You must be logged in as the root user to run the ocrconfig command.
Please note that ocrconfig -replace is the only way to add/relocate OCR files/mirrors.
Attempting to copy the existing OCR file to a new location and then manually
adding/changing the file pointer in the ocr.loc file is not supported and will actually fail to
work.
For example:
#
# Verify CRS is running on node 1.
#
[root@racnode1 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
#
# Verify CRS is running on node 2.
#
[root@racnode2 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
#
# Configure the shared OCR destination_file/disk before
# attempting to create the new ocrmirror on it. This example
# creates a destination_file on an OCFS2 file system.
# Failure to pre-configure the new destination_file/disk
# before attempting to run ocrconfig will result in the
# following error:
#
# PROT-21: Invalid parameter
#
[root@racnode1 ~]# cp /dev/null /u02/oradata/racdb/OCRFile_mirror
[root@racnode1 ~]# chown root /u02/oradata/racdb/OCRFile_mirror
[root@racnode1 ~]# chgrp oinstall /u02/oradata/racdb/OCRFile_mirror
[root@racnode1 ~]# chmod 640 /u02/oradata/racdb/OCRFile_mirror
#
# Add new OCR mirror.
#
[root@racnode1 ~]# ocrconfig -replace ocrmirror /u02/oradata/racdb/OCRFile_mirror
After adding the new OCR mirror, check that it can be seen from all nodes in the cluster:
#
# Verify new OCR mirror from node 1.
#
[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262120
Used space (kbytes) : 4668
Available space (kbytes) : 257452
ID : 1331197
Device/File Name : /u02/oradata/racdb/OCRFile
Device/File integrity check succeeded
Device/File Name : /u02/oradata/racdb/OCRFile_mirror <-- New OCR Mirror
Device/File integrity check succeeded
#
# Verify new OCR mirror from node 2.
#
[root@racnode2 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262120
Used space (kbytes) : 4668
Available space (kbytes) : 257452
ID : 1331197
Device/File Name : /u02/oradata/racdb/OCRFile
Device/File integrity check succeeded
Device/File Name : /u02/oradata/racdb/OCRFile_mirror <-- New OCR Mirror
Device/File integrity check succeeded
As mentioned earlier, you can have at most two OCR files in the cluster; the primary OCR and a
single OCR mirror. Attempting to add an extra mirror will actually relocate the current OCR
mirror to the new location specified in the command:
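For instance (the path below is purely illustrative), issuing another ocrconfig -replace ocrmirror against a third location would simply move the existing mirror there rather than adding a third OCR copy:

[root@racnode1 ~]# ocrconfig -replace ocrmirror /u02/oradata/racdb/OCRFile_mirror2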
Relocate an OCR File
Just as we were able to add a new OCR mirror while the CRS stack was online, the same holds true when
relocating an OCR file or OCR mirror; the operation therefore does not require any system downtime.
You can relocate OCR only when the OCR is mirrored. A mirror copy of the OCR file is
required to move the OCR online. If there is no mirror copy of the OCR, first create the
mirror using the instructions in the previous section.
Attempting to relocate OCR when an OCR mirror does not exist will produce the following
error:
Run the following command as the root account to relocate the current OCR file to a new
location using either destination_file or disk to designate the new target location for the OCR:
Run the following command as the root account to relocate the current OCR mirror to a new
location using either destination_file or disk to designate the new target location for the OCR
mirror:
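Sketches of the general syntax for both operations, modeled on the examples that follow (the angle-bracket placeholders are illustrative):

[root@racnode1 ~]# ocrconfig -replace ocr <new_destination_file_or_disk>
[root@racnode1 ~]# ocrconfig -replace ocrmirror <new_destination_file_or_disk>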
The following example assumes the OCR is mirrored and demonstrates how to relocate the
current OCR file (/u02/oradata/racdb/OCRFile) from the OCFS2 file system to a new raw device
(/dev/raw/raw1):
#
# Verify CRS is running on node 1.
#
[root@racnode1 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
#
# Verify CRS is running on node 2.
#
[root@racnode2 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
#
# Verify current OCR configuration.
#
[root@racnode2 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262120
Used space (kbytes) : 4668
Available space (kbytes) : 257452
ID : 1331197
Device/File Name : /u02/oradata/racdb/OCRFile <-- Current OCR to Relocate
Device/File integrity check succeeded
Device/File Name : /u02/oradata/racdb/OCRFile_mirror
Device/File integrity check succeeded
#
# Clear out the contents from the new raw device.
#
[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw1
#
# Relocate primary OCR file to new raw device. Note that
# there is no deletion of the old OCR file but simply a
# replacement.
#
[root@racnode1 ~]# ocrconfig -replace ocr /dev/raw/raw1
After relocating the OCR file, check that the change can be seen from all nodes in the cluster:
#
# Verify new OCR file from node 1.
#
[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262120
Used space (kbytes) : 4668
Available space (kbytes) : 257452
ID : 1331197
Device/File Name : /dev/raw/raw1 <-- Relocated OCR
Device/File integrity check succeeded
Device/File Name : /u02/oradata/racdb/OCRFile_mirror
Device/File integrity check succeeded
#
# Verify new OCR file from node 2.
#
[root@racnode2 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262120
Used space (kbytes) : 4668
Available space (kbytes) : 257452
ID : 1331197
Device/File Name : /dev/raw/raw1 <-- Relocated OCR
Device/File integrity check succeeded
Device/File Name : /u02/oradata/racdb/OCRFile_mirror
Device/File integrity check succeeded
After verifying the relocation was successful, remove the old OCR file at the OS level:
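For example, using the former primary OCR location from this article:

[root@racnode1 ~]# rm /u02/oradata/racdb/OCRFile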
Repair an OCR File on a Local Node
It was mentioned in the previous section that the ocrconfig command cannot modify OCR
configuration information for nodes that are shut down or for nodes on which Oracle Clusterware
is not running. You may need to repair an OCR configuration on a particular node if your OCR
configuration changes while that node is stopped. For example, you may need to repair the OCR
on a node that was shut down while you were adding, replacing, or removing an OCR.
To repair an OCR configuration, run the following command as root from the node on which
you have stopped the Oracle Clusterware daemon:
To repair an OCR mirror configuration, run the following command as root from the node on
which you have stopped the Oracle Clusterware daemon:
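The general form, consistent with the racnode2 repair example below (the angle-bracket placeholders are illustrative):

[root@racnode2 ~]# ocrconfig -repair ocr <ocr_file_or_disk>
[root@racnode2 ~]# ocrconfig -repair ocrmirror <ocr_mirror_file_or_disk>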
You cannot perform this operation on a node on which the Oracle Clusterware daemon is
running. The CRS stack must be shut down before attempting to repair the OCR
configuration on the local node.
The ocrconfig -repair command changes the OCR configuration only on the node from which
you run this command. For example, if the OCR mirror was relocated to a disk named
/dev/raw/raw2 from racnode1 while the node racnode2 was down, then use the command
ocrconfig -repair ocrmirror /dev/raw/raw2 on racnode2 while the CRS stack is down on that node to
repair its OCR configuration:
#
# Shutdown CRS stack on node 2 and verify the CRS stack is not up.
#
[root@racnode2 ~]# crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
#
# Relocate OCR mirror to new raw device from node 1. Note
# that node 2 is down (actually CRS down on node 2) while
# we relocate the OCR mirror.
#
[root@racnode1 ~]# ocrconfig -replace ocrmirror /dev/raw/raw2
#
# Verify relocated OCR mirror from node 1.
#
[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262120
Used space (kbytes) : 4668
Available space (kbytes) : 257452
ID : 1331197
Device/File Name : /dev/raw/raw1
Device/File integrity check succeeded
Device/File Name : /dev/raw/raw2 <-- Relocated OCR Mirror
Device/File integrity check succeeded
#
# Node 2 does not know about the OCR mirror relocation.
#
[root@racnode2 ~]# cat /etc/oracle/ocr.loc
#Device/file /u02/oradata/racdb/OCRFile getting replaced by device /dev/raw/raw1
ocrconfig_loc=/dev/raw/raw1
ocrmirrorconfig_loc=/u02/oradata/racdb/OCRFile_mirror
#
# While the CRS stack is down on node 2, perform a local OCR
# repair operation to inform the node of the relocated OCR
# mirror. The ocrconfig -repair option will only update the
# OCR configuration information on node 2. If there were
# other nodes shut down during the relocation, they too will
# need to be repaired.
#
[root@racnode2 ~]# ocrconfig -repair ocrmirror /dev/raw/raw2
#
# Verify the repair updated the OCR configuration on node 2.
#
[root@racnode2 ~]# cat /etc/oracle/ocr.loc
#Device/file /u02/oradata/racdb/OCRFile_mirror getting replaced by device /dev/raw/raw2
ocrconfig_loc=/dev/raw/raw1
ocrmirrorconfig_loc=/dev/raw/raw2
#
# Bring up the CRS stack on node 2.
#
[root@racnode2 ~]# crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly
#
# Verify node 2 is back online.
#
[root@racnode2 ~]# crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.racdb.db application ONLINE ONLINE racnode1
ora....b1.inst application ONLINE ONLINE racnode1
ora....b2.inst application ONLINE ONLINE racnode2
ora....srvc.cs application ONLINE ONLINE racnode1
ora....db1.srv application ONLINE ONLINE racnode1
ora....db2.srv application ONLINE ONLINE racnode2
ora....SM1.asm application ONLINE ONLINE racnode1
ora....E1.lsnr application ONLINE ONLINE racnode1
ora....de1.gsd application ONLINE ONLINE racnode1
ora....de1.ons application ONLINE ONLINE racnode1
ora....de1.vip application ONLINE ONLINE racnode1
ora....SM2.asm application ONLINE ONLINE racnode2
ora....E2.lsnr application ONLINE ONLINE racnode2
ora....de2.gsd application ONLINE ONLINE racnode2
ora....de2.ons application ONLINE ONLINE racnode2
ora....de2.vip application ONLINE ONLINE racnode2
Remove an OCR File
To remove an OCR, you need to have at least one OCR online. You may need to perform this to
reduce overhead or for other storage reasons, such as stopping a mirror in order to move it to a
SAN or RAID device. Carry out the following steps:
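In outline (run as the root user; omitting the file name removes that OCR location from the cluster configuration, as the example below demonstrates for the mirror):

# Remove the OCR mirror:
[root@racnode1 ~]# ocrconfig -replace ocrmirror

# Remove the primary OCR (only valid while a mirror is still online):
[root@racnode1 ~]# ocrconfig -replace ocr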
For example:
#
# Verify CRS is running on node 1.
#
[root@racnode1 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
#
# Verify CRS is running on node 2.
#
[root@racnode2 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
#
# Get the existing OCR file information by running ocrcheck
# utility.
#
[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262120
Used space (kbytes) : 4668
Available space (kbytes) : 257452
ID : 1331197
Device/File Name : /dev/raw/raw1
Device/File integrity check succeeded
Device/File Name : /dev/raw/raw2 <-- OCR Mirror to be Removed
Device/File integrity check succeeded
#
# Delete OCR mirror from the cluster configuration.
#
[root@racnode1 ~]# ocrconfig -replace ocrmirror
After removing the OCR mirror, check that the change is seen from all nodes in the cluster:
#
# Verify OCR mirror was removed from node 1.
#
[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262120
Used space (kbytes) : 4668
Available space (kbytes) : 257452
ID : 1331197
Device/File Name : /dev/raw/raw1
Device/File integrity check succeeded
#
# Verify OCR mirror was removed from node 2.
#
[root@racnode2 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262120
Used space (kbytes) : 4668
Available space (kbytes) : 257452
ID : 1331197
Device/File Name : /dev/raw/raw1
Device/File integrity check succeeded
Removing the OCR or OCR mirror from the cluster configuration does not remove the
physical file at the OS level when using a clustered file system.
Backup the OCR File
There are two methods for backing up the contents of the OCR and each backup method can be
used for different recovery purposes. This section discusses how to ensure the stability of the
cluster by implementing a robust backup strategy.
The first type of backup relies on automatically generated OCR file copies which are sometimes
referred to as physical backups. These physical OCR file copies are automatically generated by
the CRSD process on the master node and are primarily used to recover the OCR from a lost or
corrupt OCR file. Your backup strategy should include procedures to copy these automatically
generated OCR file copies to a secure location which is accessible from all nodes in the cluster in
the event the OCR needs to be restored.
The second type of backup uses manual procedures to create OCR export files; also known as
logical backups. Creating a manual OCR export file should be performed both before and after
making significant configuration changes to the cluster, such as adding or deleting nodes from
your environment, modifying Oracle Clusterware resources, or creating a database. If a
configuration change is made to the OCR that causes errors, the OCR can be restored to a
previous state by performing an import of the logical backup taken before the configuration change.
Please note that an OCR logical export can also be used to restore the OCR from a lost or corrupt OCR
file.
Unlike the methods used to backup the voting disk, attempting to backup the OCR by
copying the OCR file directly at the OS level is not a valid backup and will result in errors
after the restore!
Because of the importance of OCR information, Oracle recommends that you make copies of the
automatically created backup files and an OCR export at least once a day. The following is a
working UNIX script that can be scheduled in CRON to backup the OCR File(s) and the Voting
Disk(s) on a regular basis:
crs_components_backup_10g.ksh
Automatic OCR Backups
Oracle Clusterware automatically creates OCR physical backups every four hours and, at any
one time, retains the three most recent of these four-hour backups. The CRSD process that
creates these backups also creates and retains an OCR backup for each full day and at the end
of each week. You cannot customize the backup frequencies or the number of OCR physical
backup files that Oracle retains.
You can change the location where the CRSD process writes the physical OCR copies by using the ocrconfig -backuploc command:
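A sketch of the syntax (the placeholder is illustrative; use a location that is available to all nodes):

[root@racnode1 ~]# ocrconfig -backuploc <new_backup_directory>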
Restoring the OCR from an automatic physical backup is accomplished using the ocrconfig
-restore command. Note that the CRS stack needs to be shut down on all nodes in the cluster
prior to running the restore operation:
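A sketch of the restore, using the default backup location and file name referenced later in this article (run as the root user with CRS down on all nodes):

[root@racnode1 ~]# ocrconfig -restore /u01/app/crs/cdata/crs/backup00.ocr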
You cannot restore the OCR from a physical backup using the -import option. The only method
to restore the OCR from a physical backup is to use the -restore option.
As documented in Doc ID: 357262.1 on the My Oracle Support web site, the CRSD process only
creates automatic OCR physical backups on one node in the cluster, which is the OCR master
node. It does not create automatic backup copies on the other nodes; only from the OCR master
node. If the master node fails, the OCR backups will be created from the new master node. You
can determine which node in the cluster is the master node by examining the
$ORA_CRS_HOME/log/<node_name>/cssd/ocssd.log file on any node in the cluster. In this log
file, check for reconfiguration information (reconfiguration successful) after which you will see
which node is the master and how many nodes are active in the cluster:
Node 1 - (racnode1)
[ CSSD]CLSS-3000: reconfiguration successful, incarnation 1 with 2 nodes
[ CSSD]CLSS-3001: local node number 1, master node number 1
Node 2 - (racnode2)
[ CSSD]CLSS-3000: reconfiguration successful, incarnation 1 with 2 nodes
[ CSSD]CLSS-3001: local node number 2, master node number 1
Node 1 - (racnode1)
# grep -i "master node" $ORA_CRS_HOME/log/racnode?/cssd/ocssd.log | tail -1
[ CSSD]CLSS-3001: local node number 1, master node number 1
Node 2 - (racnode2)
# grep -i "master node" $ORA_CRS_HOME/log/racnode?/cssd/ocssd.log | tail -1
[ CSSD]CLSS-3001: local node number 2, master node number 1
# If not found in the ocssd.log, then look through all
# of the ocssd archives:
Node 1 - (racnode1)
# for x in `ls -tr $ORA_CRS_HOME/log/racnode?/cssd/ocssd.*`
do grep -i "master node" $x; done | tail -1
[ CSSD]CLSS-3001: local node number 1, master node number 1
Node 2 - (racnode2)
# for x in `ls -tr $ORA_CRS_HOME/log/racnode?/cssd/ocssd.*`
do grep -i "master node" $x; done | tail -1
[ CSSD]CLSS-3001: local node number 2, master node number 1
List all automatically generated OCR backups using the ocrconfig -showbackup command:

# ocrconfig -showbackup
Because of the importance of OCR information, Oracle recommends that you make copies of the
automatically created backup files at least once a day from the master node to a different device from
where the primary OCR resides. You can use any backup software to copy the automatically generated
physical backup files to a stable backup location:
It is possible and recommended that shared storage be used for the backup location(s). Keep in
mind that if the master node goes down and cannot be rebooted, it is possible to lose all OCR
physical backups if they were all stored on that node. The OCR backup process, however, will start on
the new master node within four hours for all new backups. It is highly recommended that you
integrate OCR backups with your normal database backup strategy. If possible, use a backup
location that is shared by all nodes in the cluster.
Manual OCR Exports
Performing a manual export of the OCR should be done before and after making significant
configuration changes to the cluster, such as adding or deleting nodes from your environment,
modifying Oracle Clusterware resources, or creating a database. This type of backup is often
referred to as a logical backup. If a configuration change is made to the OCR that
causes errors, the OCR can be restored to its previous state by performing an import of the
logical backup taken before the configuration change. For example, if you have unresolvable
configuration problems, or if you are unable to restart your cluster database after such changes,
then you can restore your configuration by importing the saved OCR content from a valid
configuration.
Please note that an OCR logical export can also be used to restore the OCR from a lost or corrupt
OCR file.
To export the contents of the OCR to a dump file, use the following command, where
backup_file_name is the name of the OCR logical backup file you want to create:
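In its general form (run as the root user; backup_file_name is a placeholder):

[root@racnode1 ~]# ocrconfig -export backup_file_name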
For example:
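The original example is not reproduced in this copy of the article; it would be along these lines, using the export file name referenced later in this guide:

[root@racnode1 ~]# ocrconfig -export /u02/crs_backup/ocrbackup/RACNODE1/exports/OCRFileBackup.dmp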
You cannot restore the OCR from a logical backup using the -restore option. The only method to
restore the OCR from a logical export is to use the -import option.
You must be logged in as the root user to run the ocrconfig command.
Recover the OCR File
If an application fails, then before attempting to restore the OCR, restart the application. As a
definitive verification that the OCR failed, run the ocrcheck command:
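The ocrcheck output referred to below is not reproduced in this copy of the article; a reconstruction, based on the earlier ocrcheck listings and the OCFS2 configuration described in this section, would look roughly like this:

[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262120
Used space (kbytes) : 4668
Available space (kbytes) : 257452
ID : 1331197
Device/File Name : /u02/oradata/racdb/OCRFile
Device/File integrity check succeeded
Device/File Name : /u02/oradata/racdb/OCRFile_mirror
Device/File integrity check succeeded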
The example above indicates that both the primary OCR and OCR mirror checks were successful
and that no problems exist with the OCR configuration.
If the ocrcheck command does not display the message 'Device/File integrity check
succeeded' for at least one copy of the OCR, then both the primary OCR and the OCR mirror
have failed. In this case, the CRS stack must be brought down on all nodes in the cluster to
restore the OCR from a previous physical backup copy or an OCR export.
If there is at least one copy of the OCR available (either the primary OCR or the OCR mirror),
you can use that valid copy to restore the contents of the other copy of the OCR. The restore in
this case can be accomplished using the ocrconfig -replace command and does not require
the applications or CRS stack to be down.
This section describes a number of possible OCR recovery scenarios using the OCR
configuration described by the output of the ocrcheck command above. Both the primary OCR
and the OCR mirror are located on an OCFS2 file system in the same directory. The recovery
scenarios demonstrated in this section will make use of both types of OCR backups —
automatically generated OCR file copies and manually created OCR export files.
Recover OCR from Valid OCR Mirror
This section demonstrates how to restore the OCR when only one of the OCR copies is missing
or corrupt. The restore process will use the good OCR copy (whether it's the primary OCR or the
OCR mirror) to restore the missing/corrupt copy. Remember that if there is at least one copy of
the OCR available, you can use that valid copy to restore the contents of the other copy of the
OCR. The best part about this type of recovery is that it doesn't require any downtime! Oracle
Clusterware and the applications can remain online during the recovery process.
For the purpose of this example, let's corrupt the primary OCR file:
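One way to simulate the loss (illustrative only; do not do this on a system you care about) is to overwrite part of the primary OCR file with zeros:

[root@racnode1 ~]# dd if=/dev/zero of=/u02/oradata/racdb/OCRFile bs=4096 count=10 conv=notrunc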
Note that after losing the one OCR copy (in this case, the primary OCR file), Oracle Clusterware and the
applications remain online:
While the applications and CRS remain online, perform the following steps to recover the
primary OCR using the contents of the OCR mirror.
1. When using a clustered file system, remove the corrupt OCR file and re-initialize it:

[root@racnode1 ~]# rm /u02/oradata/racdb/OCRFile
[root@racnode1 ~]# cp /dev/null /u02/oradata/racdb/OCRFile
[root@racnode1 ~]# chown root /u02/oradata/racdb/OCRFile
[root@racnode1 ~]# chgrp oinstall /u02/oradata/racdb/OCRFile
[root@racnode1 ~]# chmod 640 /u02/oradata/racdb/OCRFile
NOTE: If the target OCR is located on a raw device, verify the permissions are applied correctly for
an OCR file (owned by root:oinstall with 0640 permissions), that the device is being shared by
all nodes in the cluster, and finally use the dd command from only one node in the cluster to zero out
the device and make sure no data is written to the raw device.
[root@racnode1 ~]# ls -l /dev/raw/raw1
crw-r----- 1 root oinstall 162, 1 Oct 6 11:05 /dev/raw/raw1
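The remaining step of this recovery is not reproduced above; a sketch of it, based on the ocrconfig -replace usage shown earlier (the OCR mirror is assumed to still be intact), would be:

2. Restore the contents of the re-initialized primary OCR from the valid OCR mirror:

[root@racnode1 ~]# ocrconfig -replace ocr /u02/oradata/racdb/OCRFile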
Recover OCR from Automatically Generated Physical Backup
This section demonstrates how to recover the Oracle Cluster Registry from a lost or corrupt OCR
file. This example assumes that both the primary OCR and the OCR mirror are lost from an
accidental delete by a user and that the latest automatic OCR backup copy on the master node is
accessible.
At this time, the second node in the cluster (racnode2) is the master node and currently
available. We will be restoring the OCR using the latest OCR backup copy from racnode2
which is located at /u01/app/crs/cdata/crs/backup00.ocr.
Let's now corrupt the OCR by removing both the primary OCR and the OCR mirror:
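The removal commands are not reproduced in this copy of the article; based on the OCR locations identified below, they would be along these lines:

[root@racnode1 ~]# rm /u02/oradata/racdb/OCRFile
[root@racnode1 ~]# rm /u02/oradata/racdb/OCRFile_mirror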
Running ocrcheck fails to provide any useful information given that both OCR files are lost.
Note that even after losing both OCR files, Oracle Clusterware and the applications remain online.
Before restoring the OCR, the applications and CRS will need to be shutdown as described in the
steps below.
Perform the following steps to recover the OCR from the latest automatically generated physical
backup:
1. With CRS still online, identify the master node (which in this example is racnode2) and
all OCR backups using the ocrconfig -showbackup command:

[root@racnode1 ~]# ocrconfig -showbackup

racnode2 2009/10/07 12:05:18 /u01/app/crs/cdata/crs
racnode2 2009/10/07 08:05:17 /u01/app/crs/cdata/crs
racnode2 2009/10/07 04:05:17 /u01/app/crs/cdata/crs
racnode2 2009/10/07 00:05:16 /u01/app/crs/cdata/crs
racnode1 2009/09/24 08:49:19 /u01/app/crs/cdata/crs

Note that ocrconfig -showbackup may result in a segmentation fault or simply not
show any results if CRS is shut down.
2. For documentation purposes, identify the number and location of all configured OCR
files that will be recovered in this example:

[root@racnode2 ~]# cat /etc/oracle/ocr.loc
#Device/file /u02/oradata/racdb/OCRFile getting replaced by device /u02/oradata/racdb/OCRFile
ocrconfig_loc=/u02/oradata/racdb/OCRFile
ocrmirrorconfig_loc=/u02/oradata/racdb/OCRFile_mirror
3. Although all OCR files have been lost or corrupted, the Oracle Clusterware daemons as
well as the clustered database remain running. In this scenario, Oracle Clusterware and
all managed resources need to be shut down in order to recover the OCR. Attempting to
stop CRS using crsctl stop crs will fail given it cannot write to the now lost/corrupt
OCR file:

[root@racnode1 ~]# crsctl stop crs
OCR initialization failed accessing OCR device: PROC-26: Error while
accessing the physical storage Operating System error [No such file or
directory] [2]

With the environment in this unstable state, shut down all database instances from all
nodes in the cluster and then reboot each node:
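The shutdown/reboot commands and the remaining recovery steps are not reproduced in this copy of the article. A sketch of the rest of the procedure, using the latest automatic backup identified in step 1 and the ocrconfig -restore command described earlier (CRS must be down on all nodes):

4. With CRS down on all nodes, restore the OCR from the latest automatic backup on the
master node:

[root@racnode2 ~]# ocrconfig -restore /u01/app/crs/cdata/crs/backup00.ocr

5. Restart Oracle Clusterware on all nodes in the cluster and verify the OCR with ocrcheck:

[root@racnode1 ~]# crsctl start crs
[root@racnode2 ~]# crsctl start crs
[root@racnode1 ~]# ocrcheck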
Recover OCR from an OCR Export File
Perform the following steps to restore the previous configuration stored in the OCR from an
OCR export file:
1. For documentation purposes, identify the number and location of all configured OCR
files that will be recovered in this example:

[root@racnode2 ~]# cat /etc/oracle/ocr.loc
#Device/file /u02/oradata/racdb/OCRFile getting replaced by device /u02/oradata/racdb/OCRFile
ocrconfig_loc=/u02/oradata/racdb/OCRFile
ocrmirrorconfig_loc=/u02/oradata/racdb/OCRFile_mirror
2. Place the OCR export file that you created previously using the ocrconfig -export
command on a local disk for the node that will be performing the import:

[root@racnode1 ~]# mkdir -p /u03/crs_backup/ocrbackup/exports
[root@racnode1 ~]# cd /u02/crs_backup/ocrbackup/RACNODE1/exports
[root@racnode1 ~]# cp -p OCRFileBackup.dmp /u03/crs_backup/ocrbackup/exports
[root@racnode1 ~]# ls -l /u03/crs_backup/ocrbackup/exports
total 112
-rw-r--r-- 1 root root 110233 Oct 8 09:38 OCRFileBackup.dmp
NOTE: The ocrconfig -import process is unable to read an OCR export file from an OCFS2 file
system. Attempting to import an OCR export file that is located on an OCFS2 file system will fail
with the following error:
[root@racnode1 ~]# ocrconfig -import /u02/crs_backup/ocrbackup/RACNODE1/exports/OCRFileBackup.dmp
PROT-8: Failed to import data from specified file to the cluster registry
Investigating the $ORA_CRS_HOME/log/<hostname>/client/ocrconfig_pid.log will reveal the
error:
...
[ OCRCONF][3012240]Error[112] encountered when reading from import file
...
The solution is to copy the OCR dump file to be imported from the OCFS2 file system to a file
system on the local disk.
3. As the root user, stop Oracle Clusterware on all the nodes in the cluster by
executing the following command:

[root@racnode1 ~]# crsctl stop crs
Stopping resources. This could take several minutes.
Error while stopping resources. Possible cause: CRSD is down.

[root@racnode2 ~]# crsctl stop crs
Stopping resources. This could take several minutes.
Error while stopping resources. Possible cause: CRSD is down.
4. When using a clustered file system, re-initialize / pre-allocate the space (typically
280MB) for both the primary OCR and the OCR mirror target locations identified earlier
in the /etc/oracle/ocr.loc file:

[root@racnode1 ~]# rm -f /u02/oradata/racdb/OCRFile
[root@racnode1 ~]# dd if=/dev/zero of=/u02/oradata/racdb/OCRFile bs=4096 count=65587
[root@racnode1 ~]# chown root /u02/oradata/racdb/OCRFile
[root@racnode1 ~]# chgrp oinstall /u02/oradata/racdb/OCRFile
[root@racnode1 ~]# chmod 640 /u02/oradata/racdb/OCRFile

[root@racnode1 ~]# rm -f /u02/oradata/racdb/OCRFile_mirror
[root@racnode1 ~]# dd if=/dev/zero of=/u02/oradata/racdb/OCRFile_mirror bs=4096 count=65587
[root@racnode1 ~]# chown root /u02/oradata/racdb/OCRFile_mirror
[root@racnode1 ~]# chgrp oinstall /u02/oradata/racdb/OCRFile_mirror
[root@racnode1 ~]# chmod 640 /u02/oradata/racdb/OCRFile_mirror
NOTE: If the target OCR is located on a raw device(s), verify the permissions are applied correctly
for an OCR file (owned by root:oinstall with 0640 permissions), that the device is being shared
by all nodes in the cluster, and finally use the dd command from only one node in the cluster to zero
out the device(s) and make sure no data is written to the raw device(s).
[root@racnode1 ~]# ls -l /dev/raw/raw[12]
crw-r----- 1 root oinstall 162, 1 Oct 8 09:43 /dev/raw/raw1
crw-r----- 1 root oinstall 162, 2 Oct 8 09:43 /dev/raw/raw2
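The remaining steps of this recovery are not reproduced in this copy of the article. A sketch of them, using the export file staged in step 2 and the ocrconfig -import and crsctl start crs commands shown elsewhere in this guide:

5. As the root user, import the OCR contents from the export file on the local disk:

[root@racnode1 ~]# ocrconfig -import /u03/crs_backup/ocrbackup/exports/OCRFileBackup.dmp

6. Restart Oracle Clusterware on all nodes in the cluster and verify the OCR with ocrcheck:

[root@racnode1 ~]# crsctl start crs
[root@racnode2 ~]# crsctl start crs
[root@racnode1 ~]# ocrcheck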
Administering the Voting Disk
View Voting Disk Configuration Information
Use the crsctl utility to verify how many voting disks are configured for the cluster as well as
their location. The crsctl command can be run as either the oracle or root user account:

[root@racnode1 ~]# crsctl query css votedisk
0. 0 /u02/oradata/racdb/CSSFile
located 1 votedisk(s).
Add a Voting Disk
Adding or removing a voting disk from the cluster is a fairly straightforward process. Oracle
Clusterware 10g Release 1 (10.1) only allowed for one voting disk while Oracle
Clusterware 10g Release 2 (10.2) lifted this restriction to allow for up to 32 voting disks. Having
multiple voting disks available to the cluster removes the voting disk as a single point of failure
and eliminates the need to mirror them outside of Oracle Clusterware (i.e. RAID). The Oracle
Universal Installer (OUI) allows you to configure either one or three voting disks during the
installation of Oracle Clusterware. Having three voting disks available allows Oracle
Clusterware (CRS) to continue operating uninterrupted when any one of the voting disks fails.
When deciding how many voting disks is appropriate for your environment, consider that for
the cluster to survive failure of x number of voting disks, you need to configure (2x + 1)
voting disks. For example, to allow for the failure of 2 voting disks, you would need to
configure 5 voting disks.
When allocating shared raw storage devices for the voting disk(s), keep in mind that each voting
disk requires 20MB of raw storage.
OCR Corruption after Adding/Removing Voting Disk when CRS Stack is Running
In addition to allowing for more than one voting disk in the cluster, the Oracle10g R2 documentation
also indicates that adding and removing voting disks can be performed while CRS is online and does not
require any cluster-wide downtime. After reading about this new capability, I immediately tried adding a
new voting disk while CRS was running, only to be greeted with the following error:
After some research, it appears this is a known issue on at least the Linux and Sun Solaris
platforms with the 10.2.0.1.0 release and is fully documented in Oracle Bug 4898020: ADDING
VOTING DISK ONLINE CRASH THE CRS. Some have reported that this issue was to be fixed
with the 10.2.0.4 patch set; however, that is the release I am currently using and the bug still
exists.
In order to work around this bug, you must first shut down CRS and then use the -force flag when
running the crsctl command. Do not attempt to add or remove a voting disk to the cluster using
the -force flag while CRS is online. Oracle Clusterware should be shut down on all nodes in the cluster
before adding or removing voting disks.
Using the -force flag to add or remove a voting disk while the Oracle Clusterware stack is
active on any node in the cluster may corrupt your cluster configuration.
Bring down CRS on all nodes in the cluster prior to modifying the voting disk configuration
using the -force flag to avoid interacting with active Oracle Clusterware daemons.
If the Oracle Clusterware stack is online while attempting to use the -force flag, all nodes in
the cluster will reboot due to the css shutdown and corruption of your cluster configuration
is very likely.
For a detailed discussion on this issue, please see Oracle Doc ID: 390880.1 "OCR Corruption
after Adding/Removing voting disk to a cluster when CRS stack is running" on the My
Oracle Support web site.
To add a new voting disk to the cluster, run the following command as the root user, where path is
the fully qualified path for the additional voting disk:
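A sketch of the syntax, consistent with the examples below (include the -force flag only when CRS is down on all nodes, as discussed above; the placeholder is illustrative):

[root@racnode1 ~]# crsctl add css votedisk <path> -force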
You must be logged in as the root user to run the crsctl command to add/remove voting
disks.
The following example demonstrates how to add two new voting disks to the current cluster. The
new voting disks will reside on the same OCFS2 file system in the same directory as the current
voting disk. Please note that I am doing this for the sake of brevity. Multiplexed voting disks should
always be placed on a separate device from the current voting disk to guard against a single point
of failure.
Stop all application processes, shut down CRS on all nodes, and Oracle10g R2 users should use
the -force flag to the crsctl command when adding the new voting disk(s). For example:
#
# Query current voting disk configuration.
#
[root@racnode1 ~]# crsctl query css votedisk
0. 0 /u02/oradata/racdb/CSSFile
located 1 votedisk(s).
#
# Stop all application processes.
#
[root@racnode1 ~]# srvctl stop database -d racdb
[root@racnode1 ~]# srvctl stop asm -n racnode1
[root@racnode1 ~]# srvctl stop asm -n racnode2
[root@racnode1 ~]# srvctl stop nodeapps -n racnode1
[root@racnode1 ~]# srvctl stop nodeapps -n racnode2
#
# Verify all application processes are OFFLINE.
#
[root@racnode1 ~]# crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.racdb.db application OFFLINE OFFLINE
ora....b1.inst application OFFLINE OFFLINE
ora....b2.inst application OFFLINE OFFLINE
ora....srvc.cs application OFFLINE OFFLINE
ora....db1.srv application OFFLINE OFFLINE
ora....db2.srv application OFFLINE OFFLINE
ora....SM1.asm application OFFLINE OFFLINE
ora....E1.lsnr application OFFLINE OFFLINE
ora....de1.gsd application OFFLINE OFFLINE
ora....de1.ons application OFFLINE OFFLINE
ora....de1.vip application OFFLINE OFFLINE
ora....SM2.asm application OFFLINE OFFLINE
ora....E2.lsnr application OFFLINE OFFLINE
ora....de2.gsd application OFFLINE OFFLINE
ora....de2.ons application OFFLINE OFFLINE
ora....de2.vip application OFFLINE OFFLINE
#
# Shut down CRS on node 1 and verify the CRS stack is not up.
#
[root@racnode1 ~]# crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
#
# Shut down CRS on node 2 and verify the CRS stack is not up.
#
[root@racnode2 ~]# crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
#
# Add two new voting disks.
#
[root@racnode1 ~]# crsctl add css votedisk /u02/oradata/racdb/CSSFile_mirror1 -force
Now formatting voting disk: /u02/oradata/racdb/CSSFile_mirror1
successful addition of votedisk /u02/oradata/racdb/CSSFile_mirror1.
#
# Set the appropriate permissions on the new voting disks.
#
[root@racnode1 ~]# chown oracle /u02/oradata/racdb/CSSFile_mirror1
[root@racnode1 ~]# chgrp oinstall /u02/oradata/racdb/CSSFile_mirror1
[root@racnode1 ~]# chmod 644 /u02/oradata/racdb/CSSFile_mirror1
#
# Clear out the contents from the new raw devices.
#
[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw3
[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw4
#
# Add two new voting disks.
#
[root@racnode1 ~]# crsctl add css votedisk /dev/raw/raw3 -force
Now formatting voting disk: /dev/raw/raw3
successful addition of votedisk /dev/raw/raw3.
#
# Verify new voting disk access from node 1.
#
[root@racnode1 ~]# crsctl query css votedisk
0. 0 /u02/oradata/racdb/CSSFile
1. 0 /u02/oradata/racdb/CSSFile_mirror1
2. 0 /u02/oradata/racdb/CSSFile_mirror2
located 3 votedisk(s).
#
# Verify new voting disk access from node 2.
#
[root@racnode2 ~]# crsctl query css votedisk
0. 0 /u02/oradata/racdb/CSSFile
1. 0 /u02/oradata/racdb/CSSFile_mirror1
2. 0 /u02/oradata/racdb/CSSFile_mirror2
located 3 votedisk(s).
After verifying the new voting disk(s) can be seen from all nodes in the cluster, restart CRS and the
application processes:
#
# Restart CRS and application processes from node 1.
#
[root@racnode1 ~]# crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly
#
# Restart CRS and application processes from node 2.
#
[root@racnode2 ~]# crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly
Remove a Voting Disk
As discussed in the previous section, Oracle Clusterware must be shut down on all nodes in the
cluster before adding or removing voting disks. Just as we were required to add the -force flag
when adding a voting disk, the same holds true for Oracle10g R2 users attempting to remove a voting
disk:
Use the following command as the root user to remove a voting disk where path is the fully
qualified path for the voting disk to be removed:
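A sketch of the syntax, consistent with the examples below (again, use the -force flag only while CRS is down on all nodes; the placeholder is illustrative):

[root@racnode1 ~]# crsctl delete css votedisk <path> -force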
You must be logged in as the root user to run the crsctl command to add/remove voting
disks.
The "crsctl delete css votedisk" command deletes an existing voting disk from the
cluster. This command does not, however, remove the physical file at the OS level if using a
clustered file system nor does it clear the data from a raw storage device.
The following example demonstrates how to delete two voting disks from the current cluster.
Stop all application processes, shut down CRS on all nodes, and Oracle10g R2 users should use
the -force flag to the crsctl command when removing a voting disk(s). For example:
#
# Query current voting disk configuration.
#
[root@racnode1 ~]# crsctl query css votedisk
0. 0 /u02/oradata/racdb/CSSFile
1. 0 /u02/oradata/racdb/CSSFile_mirror1
2. 0 /u02/oradata/racdb/CSSFile_mirror2
located 3 votedisk(s).
#
# Stop all application processes.
#
[root@racnode1 ~]# srvctl stop database -d racdb
[root@racnode1 ~]# srvctl stop asm -n racnode1
[root@racnode1 ~]# srvctl stop asm -n racnode2
[root@racnode1 ~]# srvctl stop nodeapps -n racnode1
[root@racnode1 ~]# srvctl stop nodeapps -n racnode2
#
# Verify all application processes are OFFLINE.
#
[root@racnode1 ~]# crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.racdb.db application OFFLINE OFFLINE
ora....b1.inst application OFFLINE OFFLINE
ora....b2.inst application OFFLINE OFFLINE
ora....srvc.cs application OFFLINE OFFLINE
ora....db1.srv application OFFLINE OFFLINE
ora....db2.srv application OFFLINE OFFLINE
ora....SM1.asm application OFFLINE OFFLINE
ora....E1.lsnr application OFFLINE OFFLINE
ora....de1.gsd application OFFLINE OFFLINE
ora....de1.ons application OFFLINE OFFLINE
ora....de1.vip application OFFLINE OFFLINE
ora....SM2.asm application OFFLINE OFFLINE
ora....E2.lsnr application OFFLINE OFFLINE
ora....de2.gsd application OFFLINE OFFLINE
ora....de2.ons application OFFLINE OFFLINE
ora....de2.vip application OFFLINE OFFLINE
#
# Shut down CRS on node 1 and verify the CRS stack is not up.
#
[root@racnode1 ~]# crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
#
# Shut down CRS on node 2 and verify the CRS stack is not up.
#
[root@racnode2 ~]# crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
#
# Remove two voting disks.
#
[root@racnode1 ~]# crsctl delete css votedisk /u02/oradata/racdb/CSSFile_mirror1 -force
successful deletion of votedisk /u02/oradata/racdb/CSSFile_mirror1.
#
# Remove voting disk files at the OS level.
#
[root@racnode1 ~]# rm /u02/oradata/racdb/CSSFile_mirror1
[root@racnode1 ~]# rm /u02/oradata/racdb/CSSFile_mirror2
#
# Remove two voting disks.
#
[root@racnode1 ~]# crsctl delete css votedisk /dev/raw/raw3 -force
successful deletion of votedisk /dev/raw/raw3.
#
# (Optional)
# Clear out the old contents (voting disk data) from the raw devices.
#
[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw3
[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw4
After removing the voting disk(s), check that the voting disk(s) were removed from the cluster and the
new voting disk configuration is seen from all nodes in the cluster:
#
# Verify voting disk(s) deleted from node 1.
#
[root@racnode1 ~]# crsctl query css votedisk
0. 0 /u02/oradata/racdb/CSSFile
located 1 votedisk(s).
#
# Verify voting disk(s) deleted from node 2.
#
[root@racnode2 ~]# crsctl query css votedisk
0. 0 /u02/oradata/racdb/CSSFile
located 1 votedisk(s).
After verifying the voting disk(s) have been removed, restart CRS and the application processes on all
nodes in the cluster:
#
# Restart CRS and application processes from node 1.
#
[root@racnode1 ~]# crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly
#
# Restart CRS and application processes from node 2.
#
[root@racnode2 ~]# crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly
Relocate a Voting Disk
The process of moving a voting disk consists simply of removing the old voting disk and adding
a new voting disk to the destination location:
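In outline, the relocation is just a delete followed by an add, using the crsctl commands demonstrated in the previous sections (the placeholders are illustrative):

[root@racnode1 ~]# crsctl delete css votedisk <old_path> -force
[root@racnode1 ~]# crsctl add css votedisk <new_path> -force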
As discussed earlier in this section, Oracle Clusterware must be shut down on all nodes in the
cluster before adding or removing voting disks. Oracle10g R2 users are required to add the -force
flag when removing/adding a voting disk. The CRS stack must be shut down on all nodes in
the cluster before attempting to use the -force flag. Failure to do so may result in OCR corruption.
#
# Determine the current location and number of voting disks.
# If there is only one voting disk location then first add
# at least one new location before attempting to move the
# current voting disk. The following will show that I have
# only one voting disk location and will need to add at
# least one additional voting disk in order to perform the
# move. After the move, this temporary voting disk can be
# removed from the cluster. The remainder of this example
# will provide the instructions required to move the current
# voting disk from its current location on an OCFS2 file
# system to a new shared raw device (/dev/raw/raw3).
#
[root@racnode1 ~]# crsctl query css votedisk
0. 0 /u02/oradata/racdb/CSSFile
located 1 votedisk(s).
#
# Stop all application processes.
#
[root@racnode1 ~]# srvctl stop database -d racdb
[root@racnode1 ~]# srvctl stop asm -n racnode1
[root@racnode1 ~]# srvctl stop asm -n racnode2
[root@racnode1 ~]# srvctl stop nodeapps -n racnode1
[root@racnode1 ~]# srvctl stop nodeapps -n racnode2
#
# Verify all application processes are OFFLINE.
#
[root@racnode1 ~]# crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.racdb.db application OFFLINE OFFLINE
ora....b1.inst application OFFLINE OFFLINE
ora....b2.inst application OFFLINE OFFLINE
ora....srvc.cs application OFFLINE OFFLINE
ora....db1.srv application OFFLINE OFFLINE
ora....db2.srv application OFFLINE OFFLINE
ora....SM1.asm application OFFLINE OFFLINE
ora....E1.lsnr application OFFLINE OFFLINE
ora....de1.gsd application OFFLINE OFFLINE
ora....de1.ons application OFFLINE OFFLINE
ora....de1.vip application OFFLINE OFFLINE
ora....SM2.asm application OFFLINE OFFLINE
ora....E2.lsnr application OFFLINE OFFLINE
ora....de2.gsd application OFFLINE OFFLINE
ora....de2.ons application OFFLINE OFFLINE
ora....de2.vip application OFFLINE OFFLINE
#
# Shut down CRS on node 1 and verify the CRS stack is not up.
#
[root@racnode1 ~]# crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
#
# Shut down CRS on node 2 and verify the CRS stack is not up.
#
[root@racnode2 ~]# crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
#
# Before moving the current voting disk
# (/u02/oradata/racdb/CSSFile) to a new location, we first
# need to add at least one new voting disk to the cluster.
#
[root@racnode1 ~]# cp /dev/null /u02/oradata/racdb/CSSFile_mirror1
[root@racnode1 ~]# chown oracle /u02/oradata/racdb/CSSFile_mirror1
[root@racnode1 ~]# chgrp oinstall /u02/oradata/racdb/CSSFile_mirror1
[root@racnode1 ~]# chmod 644 /u02/oradata/racdb/CSSFile_mirror1
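#
# Register the temporary voting disk with the cluster. (This
# command is not reproduced in this copy of the article; it is
# sketched here based on the deletion of this temporary voting
# disk later in the example.)
#
[root@racnode1 ~]# crsctl add css votedisk /u02/oradata/racdb/CSSFile_mirror1 -force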
#
# Use the dd command to zero out the device and make sure
# no data is written to the raw device.
#
[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw3
#
# Delete the old voting disk (the voting disk that is to be
# moved).
#
[root@racnode1 ~]# crsctl delete css votedisk /u02/oradata/racdb/CSSFile -force
successful deletion of votedisk /u02/oradata/racdb/CSSFile.
#
# Add the new voting disk to the new location.
#
[root@racnode1 ~]# crsctl add css votedisk /dev/raw/raw3 -force
Now formatting voting disk: /dev/raw/raw3
successful addition of votedisk /dev/raw/raw3.
#
# (Optional)
# Remove the temporary voting disk.
#
[root@racnode1 ~]# crsctl delete css votedisk /u02/oradata/racdb/CSSFile_mirror1 -force
successful deletion of votedisk /u02/oradata/racdb/CSSFile_mirror1.
#
# Remove all deleted voting disk files from the OCFS2 file system.
#
[root@racnode1 ~]# rm /u02/oradata/racdb/CSSFile
[root@racnode1 ~]# rm /u02/oradata/racdb/CSSFile_mirror1
#
# Verify voting disk(s) relocation from node 1.
#
[root@racnode1 ~]# crsctl query css votedisk
0. 0 /dev/raw/raw3
located 1 votedisk(s).
#
# Verify voting disk(s) relocation from node 2.
#
[root@racnode2 ~]# crsctl query css votedisk
0. 0 /dev/raw/raw3
located 1 votedisk(s).
#
# After verifying the voting disk(s) have been moved, restart
# CRS and the application processes on all nodes in the
# cluster.
#
[root@racnode1 ~]# crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly
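#
# Start CRS on node 2, then bring the application processes
# back online, reversing the order used to stop them earlier.
#
[root@racnode2 ~]# crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly
[root@racnode1 ~]# srvctl start nodeapps -n racnode1
[root@racnode1 ~]# srvctl start nodeapps -n racnode2
[root@racnode1 ~]# srvctl start asm -n racnode1
[root@racnode1 ~]# srvctl start asm -n racnode2
[root@racnode1 ~]# srvctl start database -d racdb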
Backup the Voting Disk
Oracle Clusterware 10g Release 1 (10.1) only allowed for one voting disk, while Oracle
Clusterware 10g Release 2 (10.2) lifted this restriction to allow for up to 32 voting disks. For high
availability, Oracle recommends that Oracle Clusterware 10g R2 users configure multiple voting
disks while keeping in mind that you must have an odd number of voting disks, such as three,
five, and so on. To avoid simultaneous loss of multiple voting disks, each voting disk should be
placed on a shared storage device that does not share any components (controller, interconnect,
and so on) with the storage devices used for the other voting disks. If you define a single voting
disk, then you should use external mirroring to provide redundancy.
To make a backup copy of the voting disk on UNIX/Linux, use the dd command:
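dd if=voting_disk_name of=backup_file_name bs=block_size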
Perform this operation on every voting disk where voting_disk_name is the name of the active
voting disk (input file), backup_file_name is the name of the file to which you want to back up
the voting disk contents (output file), and block_size is the value to set both the input and output
block sizes. As a general rule on most platforms, including Linux and Sun, the block size for the
dd command should be 4k to ensure that the backup of the voting disk gets complete blocks.
If your voting disk is stored on a raw device, use the device name in place of voting_disk_name.
For example:
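[root@racnode1 ~]# dd if=/dev/raw/raw3 of=/u03/crs_backup/votebackup/CSSFile_raw3.bak bs=4k
(The raw device and backup directory above are the ones used elsewhere in this article; the backup
file name itself is only illustrative.)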
When you use the dd command to make backups of the voting disk, the backup can be performed
while the Cluster Ready Services (CRS) process is active; you do not need to stop the CRS
daemons (namely, the crsd.bin process) before taking a backup of the voting disk.
The following is a working UNIX script that can be scheduled in CRON to back up the OCR file
and the voting disks on a regular basis:
crs_components_backup_10g.ksh
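For reference, a minimal sketch of such a backup script (run as root) might look like the
following. The Clusterware home and backup directory shown are assumptions; the voting disk
locations are those used in this article's example configuration.
#!/bin/ksh
# Minimal sketch of a scheduled CRS component backup (run as root).
# CRS_HOME and BACKUP_DIR are assumptions -- adjust for your site.
CRS_HOME=/u01/app/crs
BACKUP_DIR=/u03/crs_backup
TODAY=$(date +%Y%m%d)

mkdir -p ${BACKUP_DIR}/ocrbackup ${BACKUP_DIR}/votebackup

# Logical export of the OCR (can be taken while CRS is online).
${CRS_HOME}/bin/ocrconfig -export ${BACKUP_DIR}/ocrbackup/OCRFile_${TODAY}.exp

# Binary copy of each voting disk (also safe while CRS is online).
for VD in /u02/oradata/racdb/CSSFile \
          /u02/oradata/racdb/CSSFile_mirror1 \
          /u02/oradata/racdb/CSSFile_mirror2
do
    dd if=${VD} of=${BACKUP_DIR}/votebackup/$(basename ${VD})_${TODAY}.bak bs=4k
done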
For the purpose of this example, the current Oracle Clusterware environment is configured with three
voting disks on an OCFS2 clustered file system that will be backed up to a local file system on one of the
nodes in the cluster. For example:
#
# Query the location and number of voting disks.
#
[root@racnode1 ~]# crsctl query css votedisk
0. 0 /u02/oradata/racdb/CSSFile
1. 0 /u02/oradata/racdb/CSSFile_mirror1
2. 0 /u02/oradata/racdb/CSSFile_mirror2
located 3 votedisk(s).
#
# Backup all three voting disks.
#
[root@racnode1 ~]# dd if=/u02/oradata/racdb/CSSFile of=/u03/crs_backup/votebackup/CSSFile.bak bs=4k
2500+0 records in
2500+0 records out
10240000 bytes (10 MB) copied, 0.259862 seconds, 39.4 MB/s
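#
# The two voting disk mirrors are backed up in the same way
# (backup file names follow the pattern used above).
#
[root@racnode1 ~]# dd if=/u02/oradata/racdb/CSSFile_mirror1 of=/u03/crs_backup/votebackup/CSSFile_mirror1.bak bs=4k
[root@racnode1 ~]# dd if=/u02/oradata/racdb/CSSFile_mirror2 of=/u03/crs_backup/votebackup/CSSFile_mirror2.bak bs=4k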
Recover the Voting Disk
The recommended way to recover from a lost or corrupt voting disk is to restore it from a
previous good backup that was taken with the dd command.
There are actually very few steps required to restore the voting disks: shut down CRS on all
nodes, restore each voting disk from a previous dd backup, verify file permissions, and restart
CRS. For example, to shut down CRS on all nodes:
[root@racnode1 ~]# crsctl stop crs
[root@racnode2 ~]# crsctl stop crs
The following is an example of what occurs on all RAC nodes when a voting disk is destroyed.
This example will manually corrupt all voting disks in the cluster. After the Oracle RAC nodes
reboot from the crash, we will follow up with the steps required to restore the lost/corrupt voting
disk which will make use of the voting disk backups that were created in the previous section.
Although it should go without saying, DO NOT perform this recovery scenario on a critical
system like production!
First, let's check the status of the cluster and all RAC components, list the current location of the voting
disk(s), and finally list the voting disk backup that will be used to recover from:
[root@racnode1 ~]# crsctl query css votedisk
0. 0 /u02/oradata/racdb/CSSFile
1. 0 /u02/oradata/racdb/CSSFile_mirror1
2. 0 /u02/oradata/racdb/CSSFile_mirror2
located 3 votedisk(s).
The next step is to simulate the corruption or loss of the voting disk(s).
If you are using Oracle RAC 10g R1 or Oracle RAC 10g R2 (not patched with 10.2.0.4), simply write zeros
to one of the voting disks:
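For example, using one of the OCFS2 voting disks from this configuration:
[root@racnode1 ~]# dd if=/dev/zero of=/u02/oradata/racdb/CSSFile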
Both RAC servers are now stuck and will be rebooted by CRS...
Oracle RAC 11g or higher (including Oracle RAC 10g R2 patched with 10.2.0.4)
Starting with Oracle RAC 11g R1 (including Oracle RAC 10g R2 patched with 10.2.0.4),
attempting to corrupt a voting disk using dd will result in all nodes being rebooted; however,
Oracle Clusterware will reconstruct the corrupt voting disk and successfully bring up the RAC
components. Because the voting disks do not contain persistent data, CSSD is able to fully
reconstruct the voting disks so long as the cluster is running. This feature was introduced with
Oracle Clusterware 11.1 and is also available with Oracle Clusterware 10.2 patched with
10.2.0.4.
This makes it a bit more difficult to corrupt a voting disk by simply writing zeros to it. You
would need to find a way to dd the voting disks and stop the cluster before any of the voting disks
could be automatically recovered by CSSD. Good luck with that! To simulate the corruption (actually the
loss) of the voting disk and have both nodes crash, I'm simply going to delete all of the voting disks and
then manually reboot the nodes:
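[root@racnode1 ~]# rm /u02/oradata/racdb/CSSFile
[root@racnode1 ~]# rm /u02/oradata/racdb/CSSFile_mirror1
[root@racnode1 ~]# rm /u02/oradata/racdb/CSSFile_mirror2
[root@racnode1 ~]# reboot
[root@racnode2 ~]# reboot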
After the reboot, CRS will not come up and all RAC components will be down:
#
# Recover the voting disk (or voting disks) using the same
# dd command that was used to back it up, but with the input
# file and output file in reverse.
#
[root@racnode1 ~]# dd if=/u03/crs_backup/votebackup/CSSFile.bak of=/u02/oradata/racdb/CSSFile bs=4k
2500+0 records in
2500+0 records out
10240000 bytes (10 MB) copied, 0.252425 seconds, 40.6 MB/s
#
# Verify the permissions on the recovered voting disk(s) are
# set appropriately.
#
[root@racnode1 ~]# chown oracle /u02/oradata/racdb/CSSFile
[root@racnode1 ~]# chgrp oinstall /u02/oradata/racdb/CSSFile
[root@racnode1 ~]# chmod 644 /u02/oradata/racdb/CSSFile
#
# With the recovered voting disk(s) in place, restart CRS
# on all Oracle RAC nodes.
#
[root@racnode1 ~]# crsctl start crs
[root@racnode2 ~]# crsctl start crs
If you have multiple voting disks, then you can remove the voting disks and add them back
into your environment using the crsctl delete css votedisk path and crsctl add
css votedisk path commands respectively, where path is the complete path of the location
on which the voting disk resides.
After recovering the voting disk, run through several tests (for example, crsctl check crs and
crs_stat -t) to verify that Oracle Clusterware is functioning correctly.
Move the Voting Disk and OCR from OCFS to RAW Devices
This section provides instructions on how to move the OCR and all voting disks used throughout this
article from an OCFS2 file system to raw storage devices.
Move the OCR
#
# Use the dd command to zero out the devices and make sure
# they contain no existing data.
#
[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw1
[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw2
#
# Verify CRS is running on node 1.
#
[root@racnode1 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
#
# Verify CRS is running on node 2.
#
[root@racnode2 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
#
# Query the current location and number of OCR files on
# the OCFS2 file system.
#
[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262120
Used space (kbytes) : 4676
Available space (kbytes) : 257444
ID : 1513888898
Device/File Name : /u02/oradata/racdb/OCRFile <-- OCR (primary)
Device/File integrity check succeeded
Device/File Name : /u02/oradata/racdb/OCRFile_mirror <-- OCR (mirror)
Device/File integrity check succeeded
#
# Move OCR and OCR mirror to new storage location.
#
[root@racnode1 ~]# ocrconfig -replace ocr /dev/raw/raw1
[root@racnode1 ~]# ocrconfig -replace ocrmirror /dev/raw/raw2
#
# Verify OCR relocation from node 1.
#
[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262120
Used space (kbytes) : 4676
Available space (kbytes) : 257444
ID : 1513888898
Device/File Name : /dev/raw/raw1
Device/File integrity check succeeded
Device/File Name : /dev/raw/raw2
Device/File integrity check succeeded
#
# Verify OCR relocation from node 2.
#
[root@racnode2 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262120
Used space (kbytes) : 4676
Available space (kbytes) : 257444
ID : 1513888898
Device/File Name : /dev/raw/raw1
Device/File integrity check succeeded
Device/File Name : /dev/raw/raw2
Device/File integrity check succeeded
#
# Remove all deleted OCR files from the OCFS2 file system.
#
[root@racnode1 ~]# rm /u02/oradata/racdb/OCRFile
[root@racnode1 ~]# rm /u02/oradata/racdb/OCRFile_mirror
Move the Voting Disk
#
# The new raw storage devices for the voting disks should be
# owned by the oracle user, must be in the oinstall group,
# and must have permissions set to 644. Provide at least
# 20MB of disk space for each voting disk and verify the raw
# storage devices can be seen from all nodes in the cluster.
#
[root@racnode1 ~]# ls -l /dev/raw/raw[345]
crw-r--r-- 1 oracle oinstall 162, 3 Oct 8 22:44 /dev/raw/raw3
crw-r--r-- 1 oracle oinstall 162, 4 Oct 8 22:45 /dev/raw/raw4
crw-r--r-- 1 oracle oinstall 162, 5 Oct 9 00:22 /dev/raw/raw5
#
# Use the dd command to zero out the devices and make sure
# they contain no existing data.
#
[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw3
[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw4
[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw5
#
# Query the current location and number of voting disks on
# the OCFS2 file system. There needs to be at least two
# voting disks configured before attempting to perform the
# move.
#
[root@racnode1 ~]# crsctl query css votedisk
0. 0 /u02/oradata/racdb/CSSFile
1. 0 /u02/oradata/racdb/CSSFile_mirror1
2. 0 /u02/oradata/racdb/CSSFile_mirror2
located 3 votedisk(s).
#
# Stop all application processes.
#
[root@racnode1 ~]# srvctl stop database -d racdb
[root@racnode1 ~]# srvctl stop asm -n racnode1
[root@racnode1 ~]# srvctl stop asm -n racnode2
[root@racnode1 ~]# srvctl stop nodeapps -n racnode1
[root@racnode1 ~]# srvctl stop nodeapps -n racnode2
#
# Verify all application processes are OFFLINE.
#
[root@racnode1 ~]# crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.racdb.db application OFFLINE OFFLINE
ora....b1.inst application OFFLINE OFFLINE
ora....b2.inst application OFFLINE OFFLINE
ora....srvc.cs application OFFLINE OFFLINE
ora....db1.srv application OFFLINE OFFLINE
ora....db2.srv application OFFLINE OFFLINE
ora....SM1.asm application OFFLINE OFFLINE
ora....E1.lsnr application OFFLINE OFFLINE
ora....de1.gsd application OFFLINE OFFLINE
ora....de1.ons application OFFLINE OFFLINE
ora....de1.vip application OFFLINE OFFLINE
ora....SM2.asm application OFFLINE OFFLINE
ora....E2.lsnr application OFFLINE OFFLINE
ora....de2.gsd application OFFLINE OFFLINE
ora....de2.ons application OFFLINE OFFLINE
ora....de2.vip application OFFLINE OFFLINE
#
# Shut down CRS on node 1 and verify the CRS stack is not up.
#
[root@racnode1 ~]# crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
#
# Shut down CRS on node 2 and verify the CRS stack is not up.
#
[root@racnode2 ~]# crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
#
# Move all three voting disks to their new storage locations.
#
[root@racnode1 ~]# crsctl delete css votedisk /u02/oradata/racdb/CSSFile -force
successful deletion of votedisk /u02/oradata/racdb/CSSFile.
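#
# Delete the two remaining OCFS2 voting disks and add the three
# new raw devices (same syntax as the commands shown earlier in
# this article).
#
[root@racnode1 ~]# crsctl delete css votedisk /u02/oradata/racdb/CSSFile_mirror1 -force
[root@racnode1 ~]# crsctl delete css votedisk /u02/oradata/racdb/CSSFile_mirror2 -force
[root@racnode1 ~]# crsctl add css votedisk /dev/raw/raw3 -force
[root@racnode1 ~]# crsctl add css votedisk /dev/raw/raw4 -force
[root@racnode1 ~]# crsctl add css votedisk /dev/raw/raw5 -force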
#
# Verify voting disk(s) relocation from node 1.
#
[root@racnode1 ~]# crsctl query css votedisk
0. 0 /dev/raw/raw3
1. 0 /dev/raw/raw4
2. 0 /dev/raw/raw5
located 3 votedisk(s).
#
# Verify voting disk(s) relocation from node 2.
#
[root@racnode2 ~]# crsctl query css votedisk
0. 0 /dev/raw/raw3
1. 0 /dev/raw/raw4
2. 0 /dev/raw/raw5
located 3 votedisk(s).
#
# Remove all deleted voting disk files from the OCFS2 file system.
#
[root@racnode1 ~]# rm /u02/oradata/racdb/CSSFile
[root@racnode1 ~]# rm /u02/oradata/racdb/CSSFile_mirror1
[root@racnode1 ~]# rm /u02/oradata/racdb/CSSFile_mirror2
#
# With all voting disks now located on raw storage devices,
# restart CRS on all Oracle RAC nodes.
#
[root@racnode1 ~]# crsctl start crs
[root@racnode2 ~]# crsctl start crs
All articles, scripts and material located at the Internet address of http://www.idevelopment.info are the copyright of
Jeffrey M. Hunter and is protected under copyright laws of the United States. This document may not be hosted on
any other site without my express, prior, written permission. Application to host any of the material elsewhere can
be made by contacting me at [email protected].
I have made every effort and taken great care in making sure that the material included on my web site is
technically accurate, but I disclaim any and all responsibility for any loss, damage or destruction of data or any
other property which may arise from relying on it. I will in no case be liable for any monetary damages arising from
such loss, damage or destruction.
Last modified on Wednesday, 14-Oct-2009 10:59:32 EDT