Solaris Cluster 3.3 Upgrade
Documentation (http://docs.sun.com)
Support (http://www.oracle.com/us/support/systems/index.html)
Training (http://education.oracle.com) Click the Sun link in the left navigation bar.
Oracle Welcomes Your Comments
Oracle welcomes your comments and suggestions on the quality and usefulness of its
documentation. If you find any errors or have any other suggestions for improvement, go to
http://docs.sun.com and click Feedback. Indicate the title and part number of the
documentation along with the chapter, section, and page number, if available. Please let us
know if you want a reply.
Oracle Technology Network (http://www.oracle.com/technetwork/index.html) offers a
range of resources related to Oracle software:
When you contact support, have available the following information:
The release number of the Oracle Solaris OS (for example, Oracle Solaris 10)
The release number of Oracle Solaris Cluster (for example, Oracle Solaris Cluster 3.3)
Use the following commands to gather information about your system for your service
provider.
Command                              Function

prtconf -v                           Displays the size of the system memory and reports
                                     information about peripheral devices

psrinfo -v                           Displays information about processors

showrev -p                           Reports which patches are installed

SPARC: prtdiag -v                    Displays system diagnostic information

/usr/cluster/bin/clnode show-rev     Displays Oracle Solaris Cluster release and package
                                     version information
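For example, you might capture all of this output into one file per node before you open a
service call (a sketch; the output file name is arbitrary):

phys-schost# prtconf -v > /var/tmp/clusterinfo.txt
phys-schost# psrinfo -v >> /var/tmp/clusterinfo.txt
phys-schost# showrev -p >> /var/tmp/clusterinfo.txt
phys-schost# /usr/cluster/bin/clnode show-rev >> /var/tmp/clusterinfo.txt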
Also have available the contents of the /var/adm/messages file.
Chapter 1 Preparing to Upgrade Oracle Solaris Cluster Software

This chapter provides information and procedures to prepare to upgrade to Oracle Solaris
Cluster 3.3 software.

Upgrade Requirements and Software Support Guidelines

Observe the following requirements and software-support guidelines when you upgrade to
Oracle Solaris Cluster 3.3 software:
Upgrade of x86 based systems - On x86 based systems, you cannot upgrade from the
Solaris 9 OS to the Oracle Solaris 10 OS. You must reinstall the cluster with a fresh
installation of the Oracle Solaris 10 OS and Oracle Solaris Cluster 3.3 software for x86 based
systems. Follow procedures in Chapter 2, Installing Software on Global-Cluster Nodes, in
Oracle Solaris Cluster Software Installation Guide.
Minimum Oracle Solaris Cluster software version - Oracle Solaris Cluster 3.3 software
supports the following direct upgrade paths:
SPARC: From version 3.1 8/05 through version 3.2 11/09 - Use the standard,
dual-partition, or live upgrade method.
From version 3.2, including update releases, through version 3.2 11/09 - Use the
standard, dual-partition, or live upgrade method.
From version 3.3 to an Oracle Solaris Cluster 3.3 update release, with no Oracle Solaris
upgrade except to an Oracle Solaris update release, or to upgrade only Oracle Solaris to an
update release - You can also use the rolling upgrade method.
See Choosing an Oracle Solaris Cluster Upgrade Method on page 13 for additional
requirements and restrictions for each upgrade method.
Supported hardware - The cluster hardware must be a supported configuration for Oracle
Solaris Cluster 3.3 software. Contact your Sun representative for information about current
supported Oracle Solaris Cluster configurations.
Architecture changes during upgrade - Oracle Solaris Cluster 3.3 software does not
support upgrade between architectures.
Software migration - Do not migrate from one type of software product to another product
during Oracle Solaris Cluster upgrade. For example, migration from Solaris Volume
Manager disk sets to VxVM disk groups or from UFS file systems to VxFS file systems is not
supported during Oracle Solaris Cluster upgrade. Perform only software configuration
changes that are specified by upgrade procedures of an installed software product.
Data services - You must upgrade data-service software to the Oracle Solaris Cluster 3.3
version.
Upgrading to compatible versions - You must upgrade all software on the cluster nodes to
a version that is supported by Oracle Solaris Cluster 3.3 software. For example, if a version of
an application is supported on Sun Cluster 3.2 software but is not supported on Oracle
Solaris Cluster 3.3 software, you must upgrade that application to the version that is
supported on Oracle Solaris Cluster 3.3 software, if such a version exists. See Supported
Products in Oracle Solaris Cluster 3.3 Release Notes for information about supported
products.
Downgrade - Oracle Solaris Cluster 3.3 software does not support any downgrade of Oracle
Solaris Cluster software.
Limitation of scinstall for data-service upgrades - The scinstall upgrade utility only
upgrades those data services that are provided with Oracle Solaris Cluster 3.3 software. You
must manually upgrade any custom or third-party data services.
Choosing an Oracle Solaris Cluster Upgrade Method
The following matrixes summarize the supported upgrade methods for each Oracle Solaris OS
version and platform, provided that all other requirements for any supported method are met.
Check the documentation for other products in the cluster, such as volume management
software and other applications, for any additional upgrade requirements or restrictions.
Note If your cluster uses a ZFS root file system, you can upgrade the Oracle Solaris OS only by
using the live upgrade method. See Oracle Solaris upgrade documentation for more
information.
This limitation does not apply if you are not upgrading the Oracle Solaris OS.
TABLE 1-1 Upgrade From Oracle Solaris Cluster 3.1 8/05 Through 3.2 11/09 Software, Including Oracle
Solaris OS Upgrade

                              Oracle Solaris 10
Method                        SPARC          x86
Standard upgrade              X              X
Dual-partition upgrade        X              X
Live upgrade                  X              X
Rolling upgrade               -              -

TABLE 1-2 Upgrade on Oracle Solaris Cluster 3.3 Software of Oracle Solaris OS Update Only

                              Oracle Solaris 10
Method                        SPARC          x86
Standard upgrade              X              X
Dual-partition upgrade        X              X
Live upgrade                  X              X
Rolling upgrade               X              X
Choose from the following methods to upgrade your cluster to Oracle Solaris Cluster 3.3
software:
ZFS root file systems - If your cluster uses a ZFS root file system, you cannot use standard
upgrade to upgrade the Solaris OS. You must use only the live upgrade method to upgrade
the Solaris OS. But you can use standard upgrade to separately upgrade Oracle Solaris
Cluster and other software.
Dual-Partition Upgrade
In a dual-partition upgrade, you divide the cluster into two groups of nodes. You bring down
one group of nodes and upgrade those nodes. The other group of nodes continues to provide
services. After you complete upgrade of the first group of nodes, you switch services to those
upgraded nodes. You then upgrade the remaining nodes and boot them back into the rest of the
cluster.
The cluster outage time is limited to the amount of time that is needed for the cluster to switch
over services to the upgraded partition, with one exception. If you upgrade from the Sun Cluster
3.1 8/05 release and you intend to configure zone clusters, you must temporarily take the
upgraded first partition out of cluster mode to set new private-network settings that were
introduced in the Sun Cluster 3.2 release.
Observe the following additional restrictions and requirements for the dual-partition upgrade
method:
ZFS root file systems - If your cluster uses a ZFS root file system, you cannot use
dual-partition upgrade to upgrade the Solaris OS. You must use only the live upgrade
method to upgrade the Solaris OS. But you can use dual-partition upgrade to separately
upgrade Oracle Solaris Cluster and other software.
HA for Sun Java System Application Server EE (HADB) - If you are running the HA for
Sun Java System Application Server EE (HADB) data service with Sun Java System
Application Server EE (HADB) software as of version 4.4, you must shut down the database
before you begin the dual-partition upgrade. The HADB database does not tolerate the loss
of membership that would occur when a partition of nodes is shut down for upgrade. This
requirement does not apply to versions before version 4.4.
Data format changes - Do not use the dual-partition upgrade method if you intend to
upgrade an application that requires that you change its data format during the application
upgrade. The dual-partition upgrade method is not compatible with the extended downtime
that is needed to perform data transformation.
Division of storage - Each shared storage device must be connected to a node in each
group.
Configuration changes - Do not make cluster configuration changes that are not
documented in the upgrade procedures. Such changes might not be propagated to the final
cluster configuration. Also, validation attempts of such changes would fail because not all
nodes are reachable during a dual-partition upgrade.
Live Upgrade
A live upgrade maintains your previous cluster configuration until you have upgraded all nodes
and you commit to the upgrade. If the upgraded configuration causes a problem, you can revert
to your previous cluster configuration until you can rectify the problem.
The cluster outage is limited to the amount of time that is needed to reboot the cluster nodes
into the upgraded boot environment.
Observe the following additional restrictions and requirements for the live upgrade method:
ZFS root file systems - If your cluster configuration uses a ZFS root file system, you must
use only live upgrade to upgrade the Solaris OS. See Solaris documentation for more
information.
Dual-partition upgrade - The live upgrade method cannot be used in conjunction with a
dual-partition upgrade.
Non-global zones - Unless the cluster is already running on at least Solaris 10 11/06, the live
upgrade method does not support the upgrade of clusters that have non-global zones that
are configured on any of the cluster nodes. Instead, use the standard upgrade or
dual-partition upgrade method.
Disk space - To use the live upgrade method, you must have enough spare disk space
available to make a copy of each node's boot environment. You reclaim this disk space after
the upgrade is complete and you have verified and committed the upgrade. For information
about space requirements for an inactive boot environment, refer to Allocating Disk and
Swap Space in Solaris 10 10/09 Installation Guide: Planning for Installation and Upgrade.
Rolling Upgrade
In a rolling upgrade, you upgrade software to an update release on one node at a time. Services
continue on the other nodes except for the time it takes to switch services from a node to be
upgraded to a node that will remain in service.
Observe the following additional restrictions and requirements for the rolling upgrade method:
Minimum Oracle Solaris Cluster version - The cluster must be running an Oracle Solaris
Cluster 3.3 release.
Solaris upgrade paths - You can upgrade the Solaris OS only to an update version of the
same release. For example, you can perform a rolling upgrade from Solaris 10 5/08 to Solaris
10 10/09. But you cannot perform a rolling upgrade from a version of Solaris 9 to a version
of Oracle Solaris 10.
ZFS root file systems - If your cluster configuration uses a ZFS root file system, you cannot
use rolling upgrade to upgrade the Solaris OS. You must use only live upgrade to upgrade
the Solaris OS. See Solaris documentation for more information.
Hardware configuration changes - Do not change the cluster configuration during a rolling
upgrade. For example, do not add to or change the cluster interconnect or quorum devices.
If you need to make such a change, do so before you start the rolling upgrade procedure or
wait until after all nodes are upgraded and the cluster is committed to the new software
version.
Duration of the upgrade - Limit the amount of time that you take to complete a rolling
upgrade of all cluster nodes. After a node is upgraded, begin the upgrade of the next cluster
node as soon as possible. You can experience performance and other penalties when you
run a mixed-version cluster for an extended period of time.
New-feature availability - Until all nodes of the cluster are successfully upgraded and the
upgrade is committed, new features that are introduced by the new release might not be
available.
Chapter 2 Performing a Standard Upgrade to Oracle Solaris Cluster 3.3 Software

This chapter provides the following information to upgrade to Oracle Solaris Cluster 3.3
software by using the standard nonrolling upgrade method:

How to Upgrade the Solaris OS and Volume Manager Software (Standard) on page 27
How to Prepare the Cluster for Upgrade (Standard)

Before You Begin

Ensure that the configuration meets the requirements for upgrade. See Upgrade
Requirements and Software Support Guidelines on page 11.
Have available the installation media, documentation, and patches for all software products
that you are upgrading, including the following software:
Solaris OS
Applications that are managed by Oracle Solaris Cluster 3.3 data services
If you use role-based access control (RBAC) instead of superuser to access the cluster nodes,
ensure that you can assume an RBAC role that provides authorization for all Oracle Solaris
Cluster commands. This series of upgrade procedures requires the following Oracle Solaris
Cluster RBAC authorizations if the user is not superuser:
solaris.cluster.modify
solaris.cluster.admin
solaris.cluster.read
See Role-Based Access Control (Overview) in System Administration Guide: Security
Services for more information about using RBAC roles. See the Oracle Solaris Cluster man
pages for the RBAC authorization that each Oracle Solaris Cluster subcommand requires.
Ensure that the cluster is functioning normally.
a. View the current status of the cluster by running the following command from any node.
On Sun Cluster 3.2 or Oracle Solaris Cluster 3.3 software, use the following command:
phys-schost% cluster status
See the scstat(1M) or cluster(1CL) man page for more information.
b. Search the /var/adm/messages log on the same node for unresolved error messages or
warning messages.
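For example, you might scan the log for recent errors and warnings (a simple sketch; adjust
the patterns and the portion of the log you review to your needs):

phys-schost# egrep -i "warning|error" /var/adm/messages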
c. Check the volume-manager status.
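For example, on a cluster that uses Solaris Volume Manager you might review metastat
output, and on a cluster that uses Veritas Volume Manager, vxprint output (a sketch; check
for error states before you proceed):

phys-schost# metastat -s setname
phys-schost# vxprint -ht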
Notify users that cluster services will be unavailable during the upgrade.
If Geographic Edition software is installed, uninstall it.
For uninstallation procedures, see the documentation for your version of Geographic Edition
software.
Become superuser on a node of the cluster.
Take each resource group offline and disable all resources.
Take offline all resource groups in the cluster, including those that are in non-global zones.
Then disable all resources, to prevent the cluster from bringing the resources online
automatically if a node is mistakenly rebooted into cluster mode.
On Sun Cluster 3.2 or Oracle Solaris Cluster 3.3 software, use the following
command:
phys-schost# clsetup
The Main Menu is displayed.
b. Choose the menu item, Resource Groups.
The Resource Group Menu is displayed.
c. Choose the menu item, Online/Offline or Switchover a Resource Group.
d. Follow the prompts to take offline all resource groups and to put them in the unmanaged
state.
e. When all resource groups are offline, type q to return to the Resource Group Menu.
f. Exit the scsetup utility.
Type q to back out of each submenu or press Ctrl-C.
On Sun Cluster 3.2 or Oracle Solaris Cluster 3.3 software, use the following
command:
phys-schost# clresource offline resource-group
b. From any node, list all enabled resources in the cluster.
On Sun Cluster 3.2 or Oracle Solaris Cluster 3.3 software, use the following
command:
phys-schost# clresource show -p Enabled
=== Resources ===
Resource: resource
Enabled{nodename1}: True
Enabled{nodename2}: True
...
c. Identify those resources that depend on other resources.
phys-schost# clresource show -p resource_dependencies
=== Resources ===
Resource: node
Resource_dependencies: node
...
You must disable dependent resources first before you disable the resources that they
depend on.
d. Disable each enabled resource in the cluster.
On Sun Cluster 3.2 or Oracle Solaris Cluster 3.3 software, use the following
command:
phys-schost# clresource disable resource
See the scswitch(1M) or clresource(1CL) man page for more information.
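If you want to disable every resource in the cluster in one pass, the + wildcard operand that
the object-oriented commands accept can do this (a sketch; verify the result with the
command in the next substep):

phys-schost# clresource disable +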
e. Verify that all resources are disabled.
On Sun Cluster 3.2 or Oracle Solaris Cluster 3.3 software, use the following
command:
phys-schost# clresource show -p Enabled
=== Resources ===
Resource: resource
Enabled{nodename1}: False
Enabled{nodename2}: False
...
f. Move each resource group to the unmanaged state.
On Sun Cluster 3.2 or Oracle Solaris Cluster 3.3 software, use the following
command:
phys-schost# clresourcegroup unmanage resource-group
Verify that all resources on all nodes are Offline and that all resource groups are in the
Unmanaged state.
On Sun Cluster 3.2 or Oracle Solaris Cluster 3.3 software, use the following command:
phys-schost# cluster status -t resource,resourcegroup
Stop all applications that are running on each node of the cluster.
Ensure that all shared data is backed up.
If you will upgrade the Solaris OS and your cluster uses dual-string mediators for Solaris Volume
Manager software, unconfigure your mediators.
See Configuring Dual-String Mediators in Oracle Solaris Cluster Software Installation Guide
for more information about mediators.
a. Run the following command to verify that no mediator data problems exist.
phys-schost# medstat -s setname
-s setname   Specifies the disk set name.
If the value in the Status field is Bad, repair the affected mediator host. Follow the procedure
How to Fix Bad Mediator Data in Oracle Solaris Cluster Software Installation Guide.
b. List all mediators.
Save this information for when you restore the mediators during the procedure How to
Finish Upgrade to Oracle Solaris Cluster 3.3 Software on page 101.
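The procedure does not name a command for this step; one way to see the mediator hosts
that are configured for a disk set is the disk set status output (a sketch, run once per disk set):

phys-schost# metaset -s setname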
c. For a disk set that uses mediators, take ownership of the disk set if no node already has
ownership.
On Sun Cluster 3.2 or Oracle Solaris Cluster 3.3 software, use the following command:
phys-schost# cldevicegroup switch -n node devicegroup
d. Unconfigure all mediators for the disk set.
phys-schost# metaset -s setname -d -m mediator-host-list
-s setname             Specifies the disk set name.
-d                     Deletes from the disk set.
-m mediator-host-list  Specifies the name of the node to remove as a mediator host for the
                       disk set.
See the mediator(7D) man page for further information about mediator-specific options to
the metaset command.
e. Repeat Step c through Step d for each remaining disk set that uses mediators.
From one node, shut down the cluster.
On Sun Cluster 3.2 or Oracle Solaris Cluster 3.3 software, use the following command:
phys-schost# cluster shutdown -g0 -y
See the scshutdown(1M) man page for more information.
Boot each node into noncluster mode.
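On SPARC based systems, for example, you would boot into noncluster mode from the
OpenBoot PROM prompt (a sketch; on x86 based systems, add the -x option to the kernel
boot command in the GRUB menu instead):

ok boot -x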
To upgrade Solaris software before you perform Oracle Solaris Cluster software upgrade, go
to How to Upgrade the Solaris OS and Volume Manager Software (Standard) on page 27.
Next Steps
If Oracle Solaris Cluster 3.3 software does not support the release of the Solaris OS that
your cluster currently runs, you must upgrade the Solaris software to a supported release.
See Supported Products in Oracle Solaris Cluster 3.3 Release Notes for more
information.
If Oracle Solaris Cluster 3.3 software supports the release of the Solaris OS that you
currently run on your cluster, further Solaris software upgrade is optional.
Otherwise, upgrade to Oracle Solaris Cluster 3.3 software. Go to How to Upgrade Oracle
Solaris Cluster 3.3 Software (Standard) on page 32.
If these scripts exist and contain an uppercase K or S in the file name, the scripts are enabled.
No further action is necessary for these scripts.
If these scripts do not exist, in Step 7 you must ensure that any Apache run control scripts
that are installed during the Solaris OS upgrade are disabled.
If these scripts exist but the file names contain a lowercase k or s, the scripts are disabled. In
Step 7 you must ensure that any Apache run control scripts that are installed during the
Solaris OS upgrade are disabled.
Comment out all entries for globally mounted file systems in the node's /etc/vfstab file.
a. For later reference, make a record of all entries that are already commented out.
b. Temporarily comment out all entries for globally mounted file systems in the /etc/vfstab
file.
Entries for globally mounted file systems contain the global mount option. Comment out
these entries to prevent the Solaris upgrade from attempting to mount the global devices.
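For example, an entry for a globally mounted file system might change as follows when you
comment it out (the device and mount-point names here are illustrative only):

Before: /dev/md/ds1/dsk/d20 /dev/md/ds1/rdsk/d20 /global/data ufs 2 no global,logging
After:  #/dev/md/ds1/dsk/d20 /dev/md/ds1/rdsk/d20 /global/data ufs 2 no global,logging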
Determine which procedure to follow to upgrade the Solaris OS.
To use Live Upgrade, go instead to Chapter 4, Performing a Live Upgrade to Oracle Solaris
Cluster 3.3 Software.
To upgrade a cluster that uses Solaris Volume Manager by a method other than Live
Upgrade, follow upgrade procedures in Solaris installation documentation.
To upgrade a cluster that uses Veritas Volume Manager by a method other than Live
Upgrade, follow upgrade procedures in Veritas Storage Foundation installation
documentation.
Note If your cluster has VxVM installed and you are upgrading the Solaris OS, you must
reinstall or upgrade to VxVM software that is compatible with the version of Oracle Solaris 10
that you upgrade to.
Upgrade the Solaris software, following the procedure that you selected in Step 4.
Note Do not perform the final reboot instruction in the Solaris software upgrade. Instead, do
the following:
a. Return to this procedure to perform Step 6 and Step 7.
b. Reboot into noncluster mode in Step 8 to complete Solaris software upgrade.
When you are instructed to reboot a node during the upgrade process, always reboot into
noncluster mode. For the boot and reboot commands, add the -x option to the command.
The -x option ensures that the node reboots into noncluster mode. For example, either of
the following two commands boots a node into single-user noncluster mode (the command
lines shown are reconstructed for this step and apply to SPARC based systems):
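phys-schost# reboot -- -xs

or

ok boot -xs

On x86 based systems, add the -xs option to the kernel boot command in the GRUB menu
instead.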
After VxVM upgrade is complete but before you reboot, verify the entries in the /etc/vfstab
file.
If any of the entries that you uncommented in Step 6 became commented out again,
uncomment those entries.
How to Upgrade Oracle Solaris Cluster 3.3 Software (Standard)

Before You Begin

Ensure that all steps in How to Prepare the Cluster for Upgrade (Standard) on page 20 are
completed.
Ensure that you have installed all required Solaris software patches and hardware-related
patches.
Become superuser on a node of the cluster.
Load the installation DVD-ROM into the DVD-ROM drive.
If the volume management daemon vold(1M) is running and is configured to manage
CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0
directory.
Change to the /Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/ directory, where
arch is sparc or x86 and where ver is 10 for Oracle Solaris 10.
phys-schost# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools
Start the scinstall utility.
phys-schost# ./scinstall
Note Do not use the /usr/cluster/bin/scinstall command that is already installed on the
node. You must use the scinstall command that is located on the installation DVD-ROM.
The scinstall Main Menu is displayed.
Choose the menu item, Upgrade This Cluster Node.
*** Main Menu ***
Please select from one of the following (*) options:
1) Create a new cluster or add a cluster node
2) Configure a cluster to be JumpStarted from this install server
* 3) Manage a dual-partition upgrade
* 4) Upgrade this cluster node
* 5) Print release information for this cluster node
* ?) Help with menu options
* q) Quit
Option: 4
The Upgrade Menu is displayed.
Choose the menu item, Upgrade Oracle Solaris Cluster Framework on This Node.
Follow the menu prompts to upgrade the cluster framework.
During the Oracle Solaris Cluster upgrade, scinstall might make one or more of the following
configuration changes:
Set the local-mac-address? variable to true, if the variable is not already set to that value.
Upgrade processing is finished when the system displays the message Completed Oracle
Solaris Cluster framework upgrade and prompts you to press Enter to continue.
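On SPARC based systems you can check the OpenBoot PROM variable afterwards with the
eeprom command (a sketch):

phys-schost# eeprom "local-mac-address?"
local-mac-address?=true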
Quit the scinstall utility.
Upgrade data service packages.
You must upgrade all data services to the Oracle Solaris Cluster 3.3 version.
Note For HA for SAP Web Application Server, if you are using a J2EE engine resource or a web
application server component resource or both, you must delete the resource and re-create it
with the new web application server component resource. Changes in the new web application
server component resource include integration of the J2EE functionality. For more
information, see Oracle Solaris Cluster Data Service for SAP Web Application Server Guide.
a. Start the upgraded interactive scinstall utility.
phys-schost# /usr/cluster/bin/scinstall
Note Do not use the scinstall utility that is on the installation media to upgrade data
service packages.
The scinstall Main Menu is displayed.
b. Choose the menu item, Upgrade This Cluster Node.
The Upgrade Menu is displayed.
c. Choose the menu item, Upgrade Oracle Solaris Cluster Data Service Agents on This Node.
d. Follow the menu prompts to upgrade Oracle Solaris Cluster data service agents that are
installed on the node.
You can choose from the list of data services that are available to upgrade or choose to
upgrade all installed data services.
e. When the system displays the message Completed upgrade of Oracle Solaris Cluster
data services agents, press Enter.
The Upgrade Menu is displayed.
Quit the scinstall utility.
Unload the installation DVD-ROM from the DVD-ROM drive.
a. To ensure that the DVD-ROM is not being used, change to a directory that does not reside on
the DVD-ROM.
b. Eject the DVD-ROM.
phys-schost# eject cdrom
If you have HA for NFS configured on a highly available local file system, ensure that the
loopback file system (LOFS) is disabled.
Note If you have non-global zones configured, LOFS must remain enabled. For guidelines
about using LOFS and alternatives to disabling it, see Cluster File Systems in Oracle Solaris
Cluster Software Installation Guide.
To disable LOFS, ensure that the /etc/system file contains the following entry:
exclude:lofs
This change becomes effective at the next system reboot.
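A quick check that the entry is present (a sketch):

phys-schost# grep lofs /etc/system
exclude:lofs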
As needed, manually upgrade any custom data services that are not supplied on the product
media.
Verify that each data-service update is installed successfully.
View the upgrade log file that is referenced at the end of the upgrade output messages.
Install any Oracle Solaris Cluster 3.3 framework and data-service software patches.
See Patches and Required Firmware Levels in the Oracle Solaris Cluster 3.3 Release Notes for
the location of patches and installation instructions.
Upgrade software applications that are installed on the cluster.
If you want to upgrade VxVM and did not upgrade the Solaris OS, follow procedures in Veritas
Storage Foundation installation documentation to upgrade VxVM without upgrading the
operating system.
Note If any upgrade procedure instructs you to perform a reboot, you must add the -x option to
the boot command. This option boots the cluster into noncluster mode.
Ensure that application levels are compatible with the current versions of Oracle Solaris Cluster
and Solaris software. See your application documentation for installation instructions.
If you upgraded from Sun Cluster 3.1 8/05 software, reconfigure the private-network address
range.
Perform this step if you want to increase or decrease the size of the IP address range that is used
by the private interconnect. The IP address range that you configure must minimally support
the number of nodes and private networks in the cluster. See Private Network in Oracle
Solaris Cluster Software Installation Guide for more information.
If you also expect to configure zone clusters, you specify that number in How to Finish
Upgrade to Oracle Solaris Cluster 3.3 Software on page 101, after all nodes are back in cluster
mode.
a. From one node, start the clsetup utility.
When run in noncluster mode, the clsetup utility displays the Main Menu for
noncluster-mode operations.
b. Choose the menu item, Change IP Address Range.
The clsetup utility displays the current private-network configuration, then asks if you
would like to change this configuration.
c. To change either the private-network IP address or the IP address range, type yes and press
the Return key.
The clsetup utility displays the default private-network IP address, 172.16.0.0, and asks if
it is okay to accept this default.
d. Change or accept the private-network IP address.
e. To accept the default IP address netmask and range, type yes and press the Return key.
Then skip to the next step.
The first netmask is the minimum netmask to support the number of nodes and
private networks that you specified.
The second netmask supports twice the number of nodes and private networks
that you specified, to accommodate possible future growth.
iii. Specify either of the calculated netmasks, or specify a different netmask that
supports the expected number of nodes and private networks.
f. Type yes in response to the clsetup utility's question about proceeding with the update.
g. When finished, exit the clsetup utility.
After all nodes in the cluster are upgraded, reboot the upgraded nodes.
a. Shut down each node.
phys-schost# shutdown -g0 -y
b. Boot each node into cluster mode.
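On SPARC based systems, for example, you would boot from the OpenBoot PROM prompt
(a sketch; on x86 based systems, boot the default kernel entry from the GRUB menu):

ok boot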
Chapter 3 Performing a Dual-Partition Upgrade to Oracle Solaris Cluster 3.3 Software

How to Prepare the Cluster for Upgrade (Dual-Partition)

Before You Begin

Ensure that the configuration meets the requirements for upgrade. See Upgrade
Requirements and Software Support Guidelines on page 11.
Have available the installation media, documentation, and patches for all software products
that you are upgrading, including the following software:
Solaris OS
Applications that are managed by Oracle Solaris Cluster 3.3 data services
If you use role-based access control (RBAC) instead of superuser to access the cluster nodes,
ensure that you can assume an RBAC role that provides authorization for all Oracle Solaris
Cluster commands. This series of upgrade procedures requires the following Oracle Solaris
Cluster RBAC authorizations if the user is not superuser:
solaris.cluster.modify
solaris.cluster.admin
solaris.cluster.read
See Role-Based Access Control (Overview) in System Administration Guide: Security
Services for more information about using RBAC roles. See the Oracle Solaris Cluster man
pages for the RBAC authorization that each Oracle Solaris Cluster subcommand requires.
Ensure that the cluster is functioning normally.
a. View the current status of the cluster by running the following command from any node.
On Sun Cluster 3.2 or Oracle Solaris Cluster 3.3 software, use the following command:
phys-schost% cluster status
See the scstat(1M) or cluster(1CL) man page for more information.
b. Search the /var/adm/messages log on the same node for unresolved error messages or
warning messages.
c. Check the volume-manager status.
If necessary, notify users that cluster services might be temporarily interrupted during the
upgrade.
Service interruption will be approximately the amount of time that your cluster normally takes
to switch services to another node.
Become superuser.
Ensure that the RG_system property of all resource groups in the cluster is set to FALSE.
A setting of RG_system=TRUE would restrict certain operations that the dual-partition software
must perform.
a. On each node, determine whether any resource groups are set to RG_system=TRUE.
phys-schost# clresourcegroup show -p RG_system
Make note of which resource groups to change. Save this list to use when you restore the
setting after upgrade is completed.
b. For each resource group that is set to RG_system=TRUE, change the setting to FALSE.
phys-schost# clresourcegroup set -p RG_system=FALSE resourcegroup
If Geographic Edition software is installed, uninstall it.
For uninstallation procedures, see the documentation for your version of Geographic Edition
software.
If you will upgrade the Solaris OS and your cluster uses dual-string mediators for Solaris Volume
Manager software, unconfigure your mediators.
See Configuring Dual-String Mediators in Oracle Solaris Cluster Software Installation Guide
for more information about mediators.
a. Run the following command to verify that no mediator data problems exist.
phys-schost# medstat -s setname
-s setname   Specifies the disk set name.
If the value in the Status field is Bad, repair the affected mediator host. Follow the procedure
How to Fix Bad Mediator Data in Oracle Solaris Cluster Software Installation Guide.
b. List all mediators.
Save this information for when you restore the mediators during the procedure How to
Finish Upgrade to Oracle Solaris Cluster 3.3 Software on page 101.
c. For a disk set that uses mediators, take ownership of the disk set if no node already has
ownership.
On Sun Cluster 3.2 or Oracle Solaris Cluster 3.3 software, use the following command:
phys-schost# cldevicegroup switch -n node devicegroup
d. Unconfigure all mediators for the disk set.
phys-schost# metaset -s setname -d -m mediator-host-list
-s setname             Specifies the disk set name.
-d                     Deletes from the disk set.
-m mediator-host-list  Specifies the name of the node to remove as a mediator host for the
                       disk set.
See the mediator(7D) man page for further information about mediator-specific options to
the metaset command.
e. Repeat Step c through Step d for each remaining disk set that uses mediators.
If you are upgrading a two-node cluster, skip to Step 17.
Otherwise, proceed to Step 8 to determine the partitioning scheme to use. You will determine
which nodes each partition will contain, but interrupt the partitioning process. You will then
compare the node lists of all resource groups against the node members of each partition in the
scheme that you will use. If any resource group does not contain a member of each partition,
you must change the node list.
Load the installation DVD-ROM into the DVD-ROM drive.
If the volume management daemon vold(1M) is running and is configured to manage
CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0
directory.
Become superuser on a node of the cluster.
Change to the /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/
directory, where arch is sparc or x86 and where ver is 10 for Oracle Solaris 10.
phys-schost# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools
Start the scinstall utility in interactive mode.
phys-schost# ./scinstall
Note Do not use the /usr/cluster/bin/scinstall command that is already installed on the
node. You must use the scinstall command on the installation DVD-ROM.
The scinstall Main Menu is displayed.
Choose the menu item, Manage a Dual-Partition Upgrade.
*** Main Menu ***
Please select from one of the following (*) options:
1) Create a new cluster or add a cluster node
2) Configure a cluster to be JumpStarted from this install server
* 3) Manage a dual-partition upgrade
* 4) Upgrade this cluster node
* 5) Print release information for this cluster node
* ?) Help with menu options
* q) Quit
Option: 3
The Manage a Dual-Partition Upgrade Menu is displayed.
Choose the menu item, Display and Select Possible Partitioning Schemes.
Follow the prompts to perform the following tasks:
a. Display the possible partitioning schemes for your cluster.
b. Choose a partitioning scheme.
c. Choose which partition to upgrade first.
Note Stop when you are prompted, Do you want to begin the dual-partition upgrade?,
and do not respond yet, but do not exit the scinstall utility. You will respond to this
prompt in Step 19 of this procedure.
Make note of which nodes belong to each partition in the partition scheme.
On another node of the cluster, become superuser.
Ensure that any critical data services can switch over between partitions.
For a two-node cluster, each node will be the only node in its partition.
When the nodes of a partition are shut down in preparation for dual-partition upgrade, the
resource groups that are hosted on those nodes switch over to a node in the other partition. If a
resource group does not contain a node from each partition in its node list, the resource group
cannot switch over. To ensure successful switchover of all critical data services, verify that the
node list of the related resource groups contains a member of each upgrade partition.
a. Display the node list of each resource group that you require to remain in service during the
entire upgrade.
On Sun Cluster 3.2 or Oracle Solaris Cluster 3.3 software, use the following command:
phys-schost# clresourcegroup show -p nodelist
=== Resource Groups and Resources ===
Resource Group: resourcegroup
Nodelist: node1 node2
...
b. If the node list of a resource group does not contain at least one member of each partition,
redefine the node list to include a member of each partition as a potential primary node.
On Sun Cluster 3.2 or Oracle Solaris Cluster 3.3 software, use the following command:
phys-schost# clresourcegroup add-node -n node resourcegroup
Determine your next step.
If you are upgrading a cluster with three or more nodes, return to the node that is running
the interactive scinstall utility.
Proceed to Step 19.
At the interactive scinstall prompt Do you want to begin the dual-partition upgrade?,
type Yes.
The command verifies that a remote installation method is available.
When prompted, press Enter to continue each stage of preparation for dual-partition upgrade.
The command switches resource groups to nodes in the second partition, and then shuts down
each node in the first partition.
After all nodes in the first partition are shut down, boot each node in that partition into
noncluster mode.
Create separate scripts for those applications that you want stopped before applications
under RGM control are stopped and for those applications that you want stopped
afterwards.
To stop applications that are running on more than one node in the partition, write the
scripts accordingly.
Use any name and directory path for your scripts that you prefer.
b. Ensure that each node in the cluster has its own copy of your scripts.
c. On each node, modify the following Oracle Solaris Cluster scripts to call the scripts that you
placed on that node.
To upgrade Solaris software before you perform Oracle Solaris Cluster software upgrade, go
to How to Upgrade the Solaris OS and Volume Manager Software (Dual-Partition) on
page 50.
If Oracle Solaris Cluster 3.3 software does not support the release of the Solaris OS that
you currently run on your cluster, you must upgrade the Solaris software to a supported
release. See Supported Products in Oracle Solaris Cluster 3.3 Release Notes for more
information.
If Oracle Solaris Cluster 3.3 software supports the release of the Solaris OS that you
currently run on your cluster, further Solaris software upgrade is optional.
Otherwise, upgrade to Oracle Solaris Cluster 3.3 software. Go to How to Upgrade Oracle
Solaris Cluster 3.3 Software (Dual-Partition) on page 56.
If these scripts exist and contain an uppercase K or S in the file name, the scripts are enabled.
No further action is necessary for these scripts.
If these scripts do not exist, in Step 7 you must ensure that any Apache run control scripts
that are installed during the Solaris OS upgrade are disabled.
If these scripts exist but the file names contain a lowercase k or s, the scripts are disabled. In
Step 7 you must ensure that any Apache run control scripts that are installed during the
Solaris OS upgrade are disabled.
Comment out all entries for globally mounted file systems in the node's /etc/vfstab file.
a. For later reference, make a record of all entries that are already commented out.
b. Temporarily comment out all entries for globally mounted file systems in the /etc/vfstab
file.
Entries for globally mounted file systems contain the global mount option. Comment out
these entries to prevent the Solaris upgrade from attempting to mount the global devices.
Determine which procedure to follow to upgrade the Solaris OS.
To use Live Upgrade, go instead to Chapter 4, Performing a Live Upgrade to Oracle Solaris
Cluster 3.3 Software.
To upgrade a cluster that uses Solaris Volume Manager by a method other than Live
Upgrade, follow upgrade procedures in Solaris installation documentation.
To upgrade a cluster that uses Veritas Volume Manager by a method other than Live
Upgrade, follow upgrade procedures in Veritas Storage Foundation installation
documentation.
Note If your cluster has VxVM installed and you are upgrading the Solaris OS, you must
reinstall or upgrade to VxVM software that is compatible with the version of Oracle Solaris 10
you upgraded to.
Upgrade the Solaris software, following the procedure that you selected in Step 4.
a. When prompted, choose the manual reboot option.
b. When prompted to reboot, always reboot into noncluster mode.
Note Do not perform the final reboot instruction in the Solaris software upgrade. Instead,
do the following:
a. Return to this procedure to perform Step 6 and Step 7.
b. Reboot into noncluster mode in Step 8 to complete Solaris software upgrade.
Execute the following commands to boot a node into noncluster mode during Solaris
upgrade (the command lines shown are reconstructed for this step and apply to SPARC
based systems):
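phys-schost# reboot -- -xs

or

ok boot -xs

On x86 based systems, add the -xs option to the kernel boot command in the GRUB menu
instead.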
After VxVM upgrade is complete but before you reboot, verify the entries in the
/etc/vfstab file.
If any of the entries that you uncommented in Step 6 became commented out again,
uncomment those entries.
How to Upgrade Oracle Solaris Cluster 3.3 Software (Dual-Partition)

Before You Begin

Ensure that all steps in How to Prepare the Cluster for Upgrade (Dual-Partition) on
page 42 are completed.
Ensure that the node you are upgrading belongs to the partition that is not active in the
cluster and that the node is in noncluster mode.
Ensure that you have installed all required Solaris software patches and hardware-related
patches.
Become superuser on a node that is a member of the partition that is in noncluster mode.
Load the installation DVD-ROM into the DVD-ROM drive.
If the volume management daemon vold(1M) is running and is configured to manage
CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0
directory.
Change to the /Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/ directory, where
arch is sparc or x86 and where ver is 10 for Oracle Solaris 10.
phys-schost# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools
Start the scinstall utility.
phys-schost# ./scinstall
Note Do not use the /usr/cluster/bin/scinstall command that is already installed on the
node. You must use the scinstall command that is located on the installation DVD-ROM.
The scinstall Main Menu is displayed.
Choose the menu item, Upgrade This Cluster Node.
*** Main Menu ***
Please select from one of the following (*) options:
1) Create a new cluster or add a cluster node
2) Configure a cluster to be JumpStarted from this install server
* 3) Manage a dual-partition upgrade
* 4) Upgrade this cluster node
* 5) Print release information for this cluster node
* ?) Help with menu options
* q) Quit
Option: 4
The Upgrade Menu is displayed.
Choose the menu item, Upgrade Oracle Solaris Cluster Framework on This Node.
Follow the menu prompts to upgrade the cluster framework.
During the Oracle Solaris Cluster upgrade, scinstall might make one or more of the following
configuration changes:
Set the local-mac-address? variable to true, if the variable is not already set to that value.
Upgrade processing is finished when the system displays the message Completed Oracle
Solaris Cluster framework upgrade and prompts you to press Enter to continue.
Quit the scinstall utility.
Upgrade data service packages.
You must upgrade all data services to the Oracle Solaris Cluster 3.3 version.
Note For HA for SAP Web Application Server, if you are using a J2EE engine resource or a web
application server component resource or both, you must delete the resource and re-create it
with the new web application server component resource. Changes in the new web application
server component resource include integration of the J2EE functionality. For more
information, see Oracle Solaris Cluster Data Service for SAP Web Application Server Guide.
a. Start the upgraded interactive scinstall utility.
phys-schost# /usr/cluster/bin/scinstall
Note Do not use the scinstall utility that is on the installation media to upgrade data
service packages.
The scinstall Main Menu is displayed.
b. Choose the menu item, Upgrade This Cluster Node.
The Upgrade Menu is displayed.
c. Choose the menu item, Upgrade Oracle Solaris Cluster Data Service Agents on This Node.
d. Follow the menu prompts to upgrade Oracle Solaris Cluster data service agents that are
installed on the node.
You can choose from the list of data services that are available to upgrade or choose to
upgrade all installed data services.
e. When the system displays the message Completed upgrade of Oracle Solaris Cluster
data services agents, press Enter.
The Upgrade Menu is displayed.
Quit the scinstall utility.
Unload the installation DVD-ROM from the DVD-ROM drive.
a. To ensure that the DVD-ROM is not being used, change to a directory that does not reside on
the DVD-ROM.
b. Eject the DVD-ROM.
phys-schost# eject cdrom
If you have Oracle Solaris Cluster HA for NFS configured on a highly available local file system,
ensure that the loopback file system (LOFS) is disabled.
Note If you have non-global zones configured, LOFS must remain enabled. For guidelines
about using LOFS and alternatives to disabling it, see Cluster File Systems in Oracle Solaris
Cluster Software Installation Guide.
To disable LOFS, ensure that the /etc/system file contains the following entry:
exclude:lofs
This change becomes effective at the next system reboot.
As needed, manually upgrade any custom data services that are not supplied on the product
media.
Verify that each data-service update is installed successfully.
View the upgrade log file that is referenced at the end of the upgrade output messages.
Install any Oracle Solaris Cluster 3.3 framework and data-service software patches.
See Patches and Required Firmware Levels in the Oracle Solaris Cluster 3.3 Release Notes for
the location of patches and installation instructions.
Upgrade software applications that are installed on the cluster.
Ensure that application levels are compatible with the current versions of Oracle Solaris Cluster
and Solaris software. See your application documentation for installation instructions.
If you want to upgrade VxVM and did not upgrade the Solaris OS, follow procedures in Veritas
Storage Foundation installation documentation to upgrade VxVM without upgrading the
operating system.
Note If any upgrade procedure instructs you to perform a reboot, you must add the -x option to
the boot command. This option boots the cluster into noncluster mode.
Repeat all steps in this procedure up to this point on all remaining nodes that you need to
upgrade in the partition.
After all nodes in a partition are upgraded, apply the upgrade changes.
a. From one node in the partition that you are upgrading, start the interactive scinstall
utility.
phys-schost# /usr/cluster/bin/scinstall
Note Do not use the scinstall command that is located on the installation media. Only
use the scinstall command that is located on the cluster node.
The scinstall Main Menu is displayed.
b. Type the option number for Apply Dual-Partition Upgrade Changes to the Partition.
c. Follow the prompts to continue each stage of the upgrade processing.
The command performs the following tasks, depending on which partition the command is
run from:
First partition - The command halts each node in the second partition, one node at a
time. When a node is halted, any services on that node are automatically switched over
to a node in the first partition, provided that the node list of the related resource group
contains a node in the first partition. After all nodes in the second partition are halted,
the nodes in the first partition are booted into cluster mode and take over providing
cluster services.
Caution Do not reboot any node of the first partition again until after the upgrade is
completed on all nodes. If you again reboot a node of the first partition before the second
partition is upgraded and rebooted into the cluster, the upgrade might fail in an
unrecoverable state.
Second partition - The command boots the nodes in the second partition into cluster
mode, to join the active cluster that was formed by the first partition. After all nodes have
rejoined the cluster, the command performs final processing and reports on the status of
the upgrade.
d. Exit the scinstall utility, if it is still running.
e. If you are finishing upgrade of the first partition from Sun Cluster 3.1 8/05 software and you
want to configure zone clusters, set the expected number of nodes and private networks in
the cluster.
If you upgraded from Sun Cluster 3.1 8/05 software and do not want to configure zone
clusters, or if you upgraded from Sun Cluster 3.2 software, this task is optional.
i. Boot all nodes in the first partition into noncluster mode.
To accept the default IP address netmask and range, type yes and press the Return
key.
Then skip to the next step.
The second netmask supports twice the number of nodes and private
networks that you specified, to accommodate possible future growth.
Specify either of the calculated netmasks, or specify a different netmask that
supports the expected number of nodes and private networks.
vii. Type yes in response to the clsetup utility's question about proceeding with the update.
viii. When finished, exit the clsetup utility.
ix. Boot the nodes of the first partition into cluster mode.
f. If you are finishing upgrade of the first partition, perform the following substeps to prepare
the second partition for upgrade.
Otherwise, if you are finishing upgrade of the second partition, proceed to How to Verify
Upgrade of Oracle Solaris Cluster 3.3 Software on page 100.
i. Boot each node in the second partition into noncluster mode.
Chapter 4 Performing a Live Upgrade to Oracle Solaris Cluster 3.3 Software

How to Upgrade the Solaris OS and Oracle Solaris Cluster 3.3 Software (Live Upgrade) on
page 71
If your cluster configuration uses a ZFS root file system and is configured with zone clusters,
you can use live upgrade only to upgrade the Solaris OS. To upgrade Oracle Solaris Cluster
software, after using live upgrade to upgrade Solaris software, use either standard upgrade or
dual-partition upgrade to upgrade Oracle Solaris Cluster software.
Performing a Live Upgrade of a Cluster

The following table lists the tasks to perform to upgrade to Oracle Solaris Cluster 3.3 software.
You also perform these tasks to upgrade only the Solaris OS.
Note If you upgrade the Solaris OS to a new marketing release, such as from Solaris 9 to Oracle
Solaris 10 software, you must also upgrade the Oracle Solaris Cluster software and dependency
software to the version that is compatible with the new OS version.
TABLE 4-1 Task Map: Performing a Live Upgrade to Oracle Solaris Cluster 3.3 Software

Task: 1. Read the upgrade requirements and restrictions. Determine the proper upgrade
method for your configuration and needs.
Instructions: Upgrade Requirements and Software Support Guidelines on page 11;
Choosing an Oracle Solaris Cluster Upgrade Method on page 13

Task: 2. If a quorum server is used, upgrade the Quorum Server software.
Instructions: How to Upgrade Quorum Server Software on page 68

Task: 3. If Oracle Solaris Cluster Geographic Edition software is installed, uninstall it.
Instructions: How to Prepare the Cluster for Upgrade (Live Upgrade) on page 70

Task: 4. If the cluster uses dual-string mediators for Solaris Volume Manager software,
unconfigure the mediators. Upgrade the Solaris software, if necessary, to a supported Solaris
update. Upgrade to Oracle Solaris Cluster 3.3 framework and data-service software. If
necessary, upgrade applications. If the cluster uses dual-string mediators, reconfigure the
mediators. As needed, upgrade Veritas Volume Manager (VxVM) software and disk groups
and Veritas File System (VxFS).
Instructions: How to Upgrade the Solaris OS and Oracle Solaris Cluster 3.3 Software (Live
Upgrade) on page 71

Task: 5. Use the scversions command to commit the cluster to the upgrade.
Instructions: How to Commit the Upgraded Cluster to Oracle Solaris Cluster 3.3 Software
on page 99

Task: 6. Verify successful completion of upgrade to Oracle Solaris Cluster 3.3 software.
Instructions: How to Verify Upgrade of Oracle Solaris Cluster 3.3 Software on page 100

Task: 7. Enable resources and bring resource groups online. Migrate existing resources to
new resource types. Upgrade to Oracle Solaris Cluster Geographic Edition 3.3 software, if
used.
Instructions: How to Finish Upgrade to Oracle Solaris Cluster 3.3 Software on page 101

Task: 8. (Optional) SPARC: Upgrade the Oracle Solaris Cluster module for Sun Management
Center, if needed.
Instructions: SPARC: How to Upgrade Oracle Solaris Cluster Module Software for Sun
Management Center on page 121
How to Prepare the Cluster for Upgrade (Live Upgrade)

Before You Begin

Ensure that the configuration meets the requirements for upgrade. See Upgrade
Requirements and Software Support Guidelines on page 11.
Have available the installation media, documentation, and patches for all software products
that you are upgrading, including the following software:
Solaris OS
Applications that are managed by Oracle Solaris Cluster 3.3 data services
If you use role-based access control (RBAC) instead of superuser to access the cluster nodes,
ensure that you can assume an RBAC role that provides authorization for all Oracle Solaris
Cluster commands. This series of upgrade procedures requires the following Oracle Solaris
Cluster RBAC authorizations if the user is not superuser:
solaris.cluster.modify
solaris.cluster.admin
solaris.cluster.read
See Role-Based Access Control (Overview) in System Administration Guide: Security
Services for more information about using RBAC roles. See the Oracle Solaris Cluster man
pages for the RBAC authorization that each Oracle Solaris Cluster subcommand requires.
Ensure that the cluster is functioning normally.
a. View the current status of the cluster by running the following command from any node.
On Sun Cluster 3.2 or Oracle Solaris Cluster 3.3 software, use the following command:
phys-schost% cluster status
See the scstat(1M) or cluster(1CL) man page for more information.
b. Search the /var/adm/messages log on the same node for unresolved error messages or
warning messages.
c. Check the volume-manager status.
If necessary, notify users that cluster services will be temporarily interrupted during the
upgrade.
Service interruption will be approximately the amount of time that your cluster normally takes
to switch services to another node.
If Geographic Edition software is installed, uninstall it.
For uninstallation procedures, see the documentation for your version of Geographic Edition
software.
Become superuser ona node of the cluster.
Ensure that all shareddata is backedup.
Ensure that eachsystemdisk is backedup.
Performa live upgrade of the Solaris OS, Oracle Solaris Cluster 3.3 software, and other software.
Go to Howto Upgrade the Solaris OS and Oracle Solaris Cluster 3.3 Software (Live Upgrade)
on page 71.
How to Upgrade the Solaris OS and Oracle Solaris Cluster 3.3 Software (Live Upgrade)

Solaris 10 10/09 Installation Guide: Solaris Live Upgrade and Upgrade Planning
If non-global zones are installed on the cluster, see Chapter 8, Upgrading the Solaris OS on a System With Non-Global Zones Installed, in Solaris 10 10/09 Installation Guide: Solaris Live Upgrade and Upgrade Planning.
Note – The cluster must already run on, or be upgraded to, at least the minimum required level of the Solaris OS to support upgrade to Oracle Solaris Cluster 3.3 software. See Supported Products in Oracle Solaris Cluster 3.3 Release Notes for more information.
Perform this procedure on each node in the cluster.
Tip – You can use the cconsole utility to perform this procedure on multiple nodes simultaneously. See How to Install Cluster Control Panel Software on an Administrative Console in Oracle Solaris Cluster Software Installation Guide for more information.
Before You Begin
Ensure that all steps in How to Prepare the Cluster for Upgrade (Live Upgrade) on page 70 are completed.
1. Install a supported version of Solaris Live Upgrade software.
Follow instructions in Solaris Live Upgrade System Requirements in Solaris 10 10/09 Installation Guide: Solaris Live Upgrade and Upgrade Planning and Installing Solaris Live Upgrade in Solaris 10 10/09 Installation Guide: Solaris Live Upgrade and Upgrade Planning.
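For example, a minimal sketch, assuming you install the Live Upgrade packages from the media of the Solaris 10 release that you are upgrading to (the media path and package names are the usual ones; verify them against the installation guide cited above):

phys-schost# cd /cdrom/cdrom0/Solaris_10/Product
phys-schost# pkgadd -d . SUNWlucfg SUNWlur SUNWluu   # Solaris Live Upgrade packages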
2. If you will upgrade the Solaris OS and your cluster uses dual-string mediators for Solaris Volume Manager software, unconfigure your mediators.
See Configuring Dual-String Mediators in Oracle Solaris Cluster Software Installation Guide for more information about mediators.
a. Run the following command to verify that no mediator data problems exist.
phys-schost# medstat -s setname
-s setname   Specifies the disk set name.
If the value in the Status field is Bad, repair the affected mediator host. Follow the procedure How to Fix Bad Mediator Data in Oracle Solaris Cluster Software Installation Guide.
b. List all mediators.
Save this information for when you restore the mediators during the procedure How to Finish Upgrade to Oracle Solaris Cluster 3.3 Software on page 101.
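For example, one way to capture the list, assuming the metaset output format on your release (the disk set configuration that metaset prints includes a Mediator Host section; the output file path is arbitrary):

phys-schost# metaset -s setname > /var/tmp/mediators.setname   # save for the later restore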
c. For a disk set that uses mediators, take ownership of the disk set if no node already has ownership.
On Sun Cluster 3.2 or Oracle Solaris Cluster 3.3 software, use the following command:
phys-schost# cldevicegroup switch -n node devicegroup
d. Unconfigure all mediators for the disk set.
phys-schost# metaset -s setname -d -m mediator-host-list
-s setname   Specifies the disk set name.
-d   Deletes from the disk set.
-m mediator-host-list   Specifies the name of the node to remove as a mediator host for the disk set.
See the mediator(7D) man page for further information about mediator-specific options to the metaset command.
e. Repeat Step c through Step d for each remaining disk set that uses mediators.
3. On each node that uses a UFS root file system, temporarily change the name of the global-devices entry in the /etc/vfstab file from the DID name to the physical name.
This name change is necessary for live upgrade software to recognize the global-devices file system. You will restore the DID names after the live upgrade is completed.
a. Back up the /etc/vfstab file.
phys-schost# cp /etc/vfstab /etc/vfstab.old
b. Open the /etc/vfstab file for editing.
c. Locate and edit the line that corresponds to /global/.devices/node@N.
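For example, the edit might look like the following sketch (the DID instance d3 and the physical device c0t0d0 are placeholders; use the devices from your own vfstab):

# Before: global-devices entry uses the DID name
/dev/did/dsk/d3s3 /dev/did/rdsk/d3s3 /global/.devices/node@2 ufs 2 no global
# After: same entry, temporarily using the physical device name
/dev/dsk/c0t0d0s3 /dev/rdsk/c0t0d0s3 /global/.devices/node@2 ufs 2 no global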
You upgraded from Sun Cluster 3.1 8/05 software and you want to configure zone clusters.
Your cluster hosts software applications that require upgrade and for which you cannot use Solaris Live Upgrade.
To accept the default IP address netmask and range, type yes and press the Return key. Then skip to the next step.
The first netmask is the minimum netmask to support the number of nodes and private networks that you specified.
The second netmask supports twice the number of nodes and private networks that you specified, to accommodate possible future growth.
iii. Specify either of the calculated netmasks, or specify a different netmask that supports the expected number of nodes and private networks.
f. Type yes in response to the clsetup utility's question about proceeding with the update.
g. When finished, exit the clsetup utility.
Upgrade any software applications that require an upgrade and for which you cannot use Solaris Live Upgrade.
Note – If an upgrade process directs you to reboot, always reboot into noncluster mode, as described in Step 24, until all upgrades are complete.
After all nodes are upgraded, boot the nodes into cluster mode.
a. Shut down each node.
phys-schost# shutdown -g0 -y -i0
b. When all nodes are shut down, boot each node into cluster mode.
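For example, on a SPARC based system you would boot from the OpenBoot PROM prompt (on an x86 based system, select the default Solaris entry from the GRUB menu instead, as described in your Solaris administration documentation):

ok boot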
If you used an unmirrored volume for your inactive BE, delete the old BE files. For specific information, see the appropriate procedure for your original Solaris OS version.
If you detached a plex to use as the inactive BE, reattach the plex and synchronize the mirrors. For more information about working with a plex, see the appropriate procedure for your original Solaris OS version.
Performing a Rolling Upgrade of a Cluster

Before You Begin
Ensure that the configuration meets requirements for upgrade. See Upgrade Requirements and Software Support Guidelines on page 11.
Have available the installation media, documentation, and patches for all the software products that you are upgrading, including the following software:
Solaris OS
Applications that are managed by Oracle Solaris Cluster 3.3 data service agents
See Patches and Required Firmware Levels in the Oracle Solaris Cluster 3.3 Release Notes for the location of patches and installation instructions.
1. Ensure that the cluster is functioning normally.
a. View the current status of the cluster by running the following command from any node.
phys-schost% cluster status
See the cluster(1CL) man page for more information.
b. Search the /var/adm/messages log on the same node for unresolved error messages or warning messages.
c. Check the volume-manager status.
2. If necessary, notify users that cluster services might be temporarily interrupted during the upgrade.
Service interruption will be approximately the amount of time that your cluster normally takes to switch services to another node.
3. If you are upgrading Oracle Solaris Cluster 3.3 software and Oracle Solaris Cluster Geographic Edition software is installed, uninstall it.
For uninstallation procedures, see the documentation for your version of Oracle Solaris Cluster Geographic Edition software.
4. Become superuser on a node of the cluster.
5. Move all resource groups and device groups that are running on the node to upgrade.
phys-schost# clnode evacuate node-to-evacuate
See the clnode(1CL) man page for more information.
6. Verify that the move was completed successfully.
phys-schost# cluster status -t devicegroup,resourcegroup
7. Ensure that the system disk, applications, and all data are backed up.
8. If you will upgrade the Solaris OS and your cluster uses dual-string mediators for Solaris Volume Manager software, unconfigure your mediators.
See Configuring Dual-String Mediators in Oracle Solaris Cluster Software Installation Guide for more information.
a. Run the following command to verify that no mediator data problems exist.
phys-schost# medstat -s setname
-s setname   Specifies the disk set name
If the value in the Status field is Bad, repair the affected mediator host. Follow the procedure How to Fix Bad Mediator Data in Oracle Solaris Cluster Software Installation Guide.
b. List all mediators.
Save this information for when you restore the mediators during the procedure How to Commit the Upgraded Cluster to Oracle Solaris Cluster 3.3 Software on page 99.
c. For a disk set that uses mediators, take ownership of the disk set if no node already has ownership.
phys-schost# cldevicegroup switch -n node devicegroup
d. Unconfigure all mediators for the disk set.
phys-schost# metaset -s setname -d -m mediator-host-list
-s setname   Specifies the disk-set name
-d   Deletes from the disk set
-m mediator-host-list   Specifies the name of the node to remove as a mediator host for the disk set
See the mediator(7D) man page for further information about mediator-specific options to the metaset command.
e. Repeat these steps for each remaining disk set that uses mediators.
9. Shut down the node that you want to upgrade and boot it into noncluster mode.
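For example, a minimal sketch for a SPARC based system (on an x86 based system, add the -x option to the kernel boot entry in the GRUB menu instead):

phys-schost# shutdown -g0 -y -i0
...
ok boot -x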
Completing the Upgrade
This chapter provides the following procedures:
How to Commit the Upgraded Cluster to Oracle Solaris Cluster 3.3 Software on page 99
How to Verify Upgrade of Oracle Solaris Cluster 3.3 Software on page 100
How to Finish Upgrade to Oracle Solaris Cluster 3.3 Software on page 101

Completing a Cluster Upgrade

How to Verify Upgrade of Oracle Solaris Cluster 3.3 Software

Before You Begin
Ensure that all upgrade procedures are completed for all cluster nodes that you are upgrading.
Ensure that all steps in How to Commit the Upgraded Cluster to Oracle Solaris Cluster 3.3 Software on page 99 are completed successfully.
1. On each node, become superuser.
2. On each upgraded node, view the installed levels of Oracle Solaris Cluster software.
phys-schost# clnode show-rev -v
The first line of output states which version of Oracle Solaris Cluster software the node is running. This version should match the version that you just upgraded to.
3. From any node, verify that all upgraded cluster nodes are running in cluster mode (Online).
phys-schost# clnode status
See the clnode(1CL) man page for more information about displaying cluster status.
Verifying Upgrade to Oracle Solaris Cluster 3.3 Software
The following example shows the commands used to verify upgrade of a two-node cluster to
Oracle Solaris Cluster 3.3 software. The cluster node names are phys-schost-1 and
phys-schost-2.
phys-schost# clnode show-rev -v
3.3
...
phys-schost# clnode status
=== Cluster Nodes ===
--- Node Status ---
Node Name Status
--------- ------
phys-schost-1 Online
phys-schost-2 Online
Next Steps
Go to How to Finish Upgrade to Oracle Solaris Cluster 3.3 Software on page 101.

How to Finish Upgrade to Oracle Solaris Cluster 3.3 Software
For Oracle Java Web Console, external access is available if the output of the following command returns an entry for 6789, which is the port number that is used to connect to Oracle Java Web Console.
phys-schost# netstat -a | grep 6789
If external access to both services is enabled, skip to Step 3. Otherwise, continue to Step b.
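The corresponding check for RPC communication was lost to a page break above; as a minimal sketch, one way to test it is through the SMF property that the restricted network profile sets (a local_only value of false means external access is enabled):

phys-schost# svcprop network/rpc/bind:default | grep local_only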
b. If external access to RPC communication is restricted, perform the following commands.
phys-schost# svccfg
svc:> select network/rpc/bind
svc:/network/rpc/bind> setprop config/local_only=false
svc:/network/rpc/bind> quit
phys-schost# svcadm refresh network/rpc/bind:default
c. If external access to Oracle Java Web Console is restricted, perform the following commands.
phys-schost# svccfg
svc:> select system/webconsole
svc:/system/webconsole> setprop options/tcp_listen=true
svc:/system/webconsole> quit
phys-schost# /usr/sbin/smcwebserver restart
For more information about what services the restricted network profile restricts to local connections, see Planning Network Security in Solaris 10 10/09 Installation Guide: Planning for Installation and Upgrade.
d. Repeat Step a to confirm that external access is restored.
3. On each node, start the security file agent and then start the Sun Java Web Console agent.
phys-schost# /usr/sbin/cacaoadm start
phys-schost# /usr/sbin/smcwebserver start
4. If you upgraded any data services that are not supplied on the product media, register the new resource types for those data services.
Follow the documentation that accompanies the data services.
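For example, a hypothetical sketch using the generic registration command (the type name SUNW.example is a placeholder; the actual resource-type name and any additional registration steps come from the data service's own documentation):

phys-schost# clresourcetype register SUNW.example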
5. If you upgraded HA for SAP liveCache from the Sun Cluster 3.1 8/05 version to the Oracle Solaris Cluster 3.3 version, modify the /opt/SUNWsclc/livecache/bin/lccluster configuration file.
a. Become superuser on a node that will host the liveCache resource.
b. Copy the new /opt/SUNWsclc/livecache/bin/lccluster file to the /sapdb/LC_NAME/db/sap/ directory.
Overwrite the lccluster file that already exists from the previous configuration of the data service.
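For example (LC_NAME is the placeholder used throughout this step for your liveCache instance name):

phys-schost# cp /opt/SUNWsclc/livecache/bin/lccluster /sapdb/LC_NAME/db/sap/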
c. Configure this /sapdb/LC_NAME/db/sap/lccluster file as documented in How to Register and Configure Solaris Cluster HA for SAP liveCache in Oracle Solaris Cluster Data Service for SAP liveCache Guide.
6. If you upgraded the Solaris OS and your configuration uses dual-string mediators for Solaris Volume Manager software, restore the mediator configurations.
a. Determine which node has ownership of a disk set to which you will add the mediator hosts.
phys-schost# metaset -s setname
-s setname   Specifies the disk set name.
b. On the node that masters or will master the disk set, become superuser.
c. If no node has ownership, take ownership of the disk set.
phys-schost# cldevicegroup switch -n node devicegroup
node   Specifies the name of the node to become primary of the disk set.
devicegroup   Specifies the name of the disk set.
d. Re-create the mediators.
phys-schost# metaset -s setname -a -m mediator-host-list
-a   Adds to the disk set.
-m mediator-host-list   Specifies the names of the nodes to add as mediator hosts for the disk set.
e. Repeat these steps for each disk set in the cluster that uses mediators.
7. If you upgraded VxVM, upgrade all disk groups.
a. Bring online and take ownership of a disk group to upgrade.
phys-schost# cldevicegroup switch -n node devicegroup
b. Synchronize the disk group.
This step resolves any changes made to VxVM minor numbers during the VxVM upgrade.
phys-schost# cldevicegroup sync devicegroup
c. Run the following command to upgrade a disk group to the highest version supported by the VxVM release you installed.
phys-schost# vxdg upgrade devicegroup
See your VxVM administration documentation for more information about upgrading disk groups.
d. On each node that is directly connected to the disk group, bring online and take ownership of the upgraded disk group.
phys-schost# cldevicegroup switch -n node devicegroup
This step is necessary to update the major number of the VxVM device files with the latest vxio number that might have been assigned during the upgrade.
e. Repeat for each remaining VxVM disk group in the cluster.
8. Migrate resources to new resource type versions.
You must migrate all resources to the Oracle Solaris Cluster 3.3 resource-type version to use the new features and bug fixes that are provided in this release.
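For example, a minimal sketch of the usual command-line migration for one resource (the type name SUNW.nfs, the resource name nfsrs, and the version placeholder N are assumptions; take the exact Type_version value from the resource type's upgrade documentation):

phys-schost# clresourcetype register SUNW.nfs         # register the new resource-type version
phys-schost# clresource set -p Type_version=N nfsrs   # migrate the resource to that version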
Note – For HA for SAP Web Application Server, if you are using a J2EE engine resource or a web application server component resource or both, you must delete the resource and recreate it with the new web application server component resource. Changes in the new web application server component resource include integration of the J2EE functionality. For more information, see Oracle Solaris Cluster Data Service for SAP Web Application Server Guide.
See Upgrading a Resource Type in Oracle Solaris Cluster Data Services Planning and Administration Guide, which contains procedures that use the command line. Alternatively, you can perform the same tasks by using the Resource Group menu of the clsetup utility. The process involves performing the following tasks:
1) Set the number of nodes and private networks in the cluster. Follow instructions in How to Change the Private Network Address or Address Range of an Existing Cluster in Oracle Solaris Cluster System Administration Guide. This task requires putting all cluster nodes into noncluster mode.
2) After you set the expected number of nodes and reboot all nodes into cluster mode, rerun Step 11 to set the expected number of zone clusters.
Troubleshooting
Resource-type migration failure - Normally, you migrate resources to a new resource type while the resource is offline. However, some resources need to be online for a resource-type migration to succeed. If resource-type migration fails for this reason, error messages similar to the following are displayed:
phys-schost - Resource depends on a SUNW.HAStoragePlus type resource that is not online anywhere.
(C189917) VALIDATE on resource nfsrs, resource group rg, exited with non-zero exit status.
(C720144) Validation of resource nfsrs in resource group rg on node phys-schost failed.
If resource-type migration fails because the resource is offline, use the clsetup utility to re-enable the resource and then bring its related resource group online. Then repeat migration procedures for the resource.
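Equivalently from the command line, a minimal sketch (nfsrs and rg are the placeholder names from the error messages above):

phys-schost# clresource enable nfsrs      # re-enable the resource
phys-schost# clresourcegroup online rg    # bring its resource group online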
Java binaries location change - If the location of the Java binaries changed during the upgrade
of Oracle Solaris software, you might see error messages similar to the following when you
attempt to run the cacaoadm start or smcwebserver start commands:
phys-schost# /usr/sbin/cacaoadm start
No suitable Java runtime found. Java 1.5.0_06 or higher is required.
Jan 3 17:10:26 ppups3 cacao: No suitable Java runtime found. Java 1.5.0_06 or
higher is required.
Cannot locate all the dependencies
phys-schost# smcwebserver start
/usr/sbin/smcwebserver: /usr/jdk/jdk1.5.0_06/bin/java: not found
These errors are generated because the start commands cannot locate the current location of the Java binaries. The JAVA_HOME property still points to the directory where the previous version of Java was located, but that previous version was removed during upgrade.
To correct this problem, change the setting of JAVA_HOME in the following configuration files to use the current Java directory:
/etc/webconsole/console/config.properties
/etc/opt/SUNWcacao/cacao.properties
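For example, a minimal sketch, assuming the upgraded Java lives under /usr/jdk (the exact directory name varies by release, so check the system before editing):

phys-schost# ls /usr/jdk                                      # identify the current Java directory
phys-schost# vi /etc/webconsole/console/config.properties     # update the JAVA_HOME setting
phys-schost# vi /etc/opt/SUNWcacao/cacao.properties           # update the JAVA_HOME setting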
Next Steps
If you have a SPARC based system and use Sun Management Center to monitor the cluster, go to SPARC: How to Upgrade Oracle Solaris Cluster Module Software for Sun Management Center on page 121.
Otherwise, the cluster upgrade is complete.
Recovering From an Incomplete Upgrade
This chapter provides the following information to recover from certain kinds of incomplete upgrades:
SPARC: How to Recover From a Partially Completed Dual-Partition Upgrade on page 114
x86: How to Recover From a Partially Completed Dual-Partition Upgrade on page 115

SPARC: Upgrading Sun Management Center Software

SPARC: How to Upgrade Oracle Solaris Cluster Module Software for Sun Management Center on page 121
Before You Begin
Sun Management Center patches and Oracle Solaris Cluster module patches, if any.
See Patches and Required Firmware Levels in the Oracle Solaris Cluster 3.3 Release Notes for the location of patches and installation instructions.
1. Stop any Sun Management Center processes.
a. If the Sun Management Center console is running, exit the console.
In the console window, choose File → Exit.
b. On each Sun Management Center agent machine (cluster node), stop the Sun Management Center agent process.
phys-schost# /opt/SUNWsymon/sbin/es-stop -a
c. On the Sun Management Center server machine, stop the Sun Management Center server process.
server# /opt/SUNWsymon/sbin/es-stop -S
2. As superuser, remove Oracle Solaris Cluster module packages.
Use the pkgrm(1M) command to remove all Oracle Solaris Cluster module packages from all locations that are listed in the following table.

Location                                    Module Package to Remove
Each cluster node                           SUNWscsam, SUNWscsal
Sun Management Center console machine       SUNWscscn
Sun Management Center server machine        SUNWscssv, SUNWscshl

machine# pkgrm module-package
If you do not remove the listed packages, the Sun Management Center software upgrade might fail because of package dependency problems. You reinstall these packages in Step 4, after you upgrade Sun Management Center software.
3. Upgrade the Sun Management Center software.
Follow the upgrade procedures in your Sun Management Center documentation.
4. As superuser, reinstall Oracle Solaris Cluster module packages from the installation DVD-ROM to the locations that are listed in the following table.

Location                                    Module Package to Install
Each cluster node                           SUNWscsam, SUNWscsal
Sun Management Center server machine        SUNWscssv

a. Insert the installation DVD-ROM for the appropriate platform in the DVD-ROM drive of the machine.
b. Change to the /Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/ directory, where arch is sparc or x86, and ver is 10 for Oracle Solaris 10.
machine# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/
Note – The agent packages to install on the cluster nodes are available for both SPARC based systems and x86 based systems. The package for the server machine is available for SPARC based systems only.
c. Install the appropriate module package on the machine.
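For example, a minimal sketch for a cluster node, run from the Packages directory entered in Step b (substitute the packages for the machine type from the table above):

machine# pkgadd -d . SUNWscsam SUNWscsal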