OceanStor Dorado 6.x & OceanStor 6.x Host Connectivity Guide For HP-UX
Issue 10
Date 2023-08-02
and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and
the customer. All or part of the products, services and features described in this document may not be
within the purchase scope or the usage scope. Unless otherwise specified in the contract, all statements,
information, and recommendations in this document are provided "AS IS" without warranties, guarantees
or representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Website: https://e.huawei.com
Contents
2 Introduction
2.1 Basic Concepts
2.1.1 Introduction to HP-UX
2.1.2 File Systems in HP-UX
2.1.3 Directory Structures in HP-UX
2.2 Host-SAN Connectivity
2.2.1 FC Connectivity
2.2.2 iSCSI Connectivity
2.2.3 Multipath Connectivity
2.2.4 SAN Boot
2.3 Interoperability Query
2.3.1 Querying Host Version Information
2.3.1.1 Querying the Current Host Version
2.3.1.2 Querying the Version of a Component
2.3.2 Querying Interoperability Between HP-UX and Storage Systems
2.4 Specifications
2.4.1 File Systems
2.4.2 Number of LUNs
2.4.3 Volume Management Software
2.5 Common Management Tools and Commands
2.5.1 Management Tool
2.5.2 Management Commands
3 Planning Connectivity
3.1 Non-HyperMetro Scenarios
3.1.1 Direct-Attached FC Connections
3.1.2 Fabric-Attached FC Connections
5 Configuring Connectivity
5.1 Establishing Fibre Channel Connections
5.1.1 Host Configuration
5.1.2 Storage System Configuration
5.1.3 Scanning LUNs on a Host
5.2 Scanning LUNs on the Host
6 Configuring Multipathing
6.1 Non-HyperMetro Scenarios
6.1.1 Storage System Configuration
6.1.2 Host Configuration
6.2 HyperMetro Scenarios
6.2.1 Storage System Configuration
6.2.2 Host Configuration
6.2.3 Verification
7 FAQs
7.1 Changing LUN Mappings When Third-Party Multipathing Software Is Used on an HP-UX Host
7.2 iSCSI Software Installation Failure
7.3 LUNs Cannot Be Discovered After a LUN with the Host LUN ID Being 0 Is Manually Added
7.4 Recommended Configurations for 6.x Series Storage Systems for Taking Over Data from Other Huawei Storage Systems When the Host Uses the OS Native Multipathing Software
10.1.1 Overview
10.1.2 Working Principles
10.1.2.1 Cluster Manager
10.1.2.2 Package Manager
10.1.2.3 Network Manager
10.1.3 Installation and Configuration
10.1.4 Cluster Maintenance
10.1.4.1 Common Maintenance Commands
10.1.4.2 Cluster Log Analysis
10.2 Veritas VCS
10.2.1 Overview
10.2.2 Version Compatibility
10.2.3 Installation and Configuration
1.1 Purpose
1.2 Audience
1.3 Related Documents
1.4 Conventions
1.5 Where To Get Help
1.1 Purpose
This document details the configuration methods and precautions for connecting
OceanStor Dorado storage systems to Hewlett Packard UniX (HP-UX for short)
hosts.
The following table lists the product models that this document is applicable to.
OceanStor 6810
OceanStor 18510
OceanStor 18810
OceanStor 2600
OceanStor 2220
OceanStor 2620
1.2 Audience
This document is intended for:
Readers of this guide are expected to be familiar with the following topics:
1.4 Conventions
Symbol Conventions
Symbol Description
Product Information
For documentation, release notes, software updates, and other information about
Huawei products and support, go to the Huawei Online Support site (registration
required) at https://support.huawei.com/enterprise/.
Technical Support
Huawei has a global technical support system that offers timely onsite and
remote technical support services.
Document Feedback
Huawei welcomes your suggestions for improving our documentation. If you have
comments, send your feedback to [email protected].
2 Introduction
● NFS
NFS allows different systems to share files through servers and provides
transparent file access from any location on a network. An NFS server exports
a directory and allows hosts on the network to access it. NFS clients mount
that directory to access it on the NFS server. For NFS client users, the
mounted directory behaves like a local file system.
● CDFS
CDFS is a file system used on a CD-ROM.
To view the file system type of a volume, use either of the following methods:
● To query the type of mounted file systems, run the following command:
bash-4.0# mount -v
/dev/vg00/lvol3 on / type vxfs ioerror=mwdisable,largefiles,delaylog,nodatainlog,dev=40000003 on
Thu Jul 26 10:33:11 2012
/dev/vg00/lvol1 on /stand type vxfs
ioerror=mwdisable,nolargefiles,log,nodatainlog,tranflush,dev=40000001 on Thu Jul 26 10:33:19 2012
/dev/vg00/lvol8 on /var type vxfs ioerror=mwdisable,largefiles,delaylog,nodatainlog,dev=40000008 on
Thu Jul 26 10:33:45 2012
/dev/vg00/lvol7 on /usr type vxfs ioerror=mwdisable,largefiles,delaylog,nodatainlog,dev=40000007 on
Thu Jul 26 10:33:45 2012
/dev/vg00/lvol4 on /tmp type vxfs ioerror=mwdisable,largefiles,delaylog,nodatainlog,dev=40000004
on Thu Jul 26 10:33:45 2012
/dev/vg00/lvol6 on /opt type vxfs ioerror=mwdisable,largefiles,delaylog,nodatainlog,dev=40000006 on
Thu Jul 26 10:33:45 2012
/dev/vg00/lvol5 on /home type vxfs ioerror=mwdisable,largefiles,delaylog,nodatainlog,dev=40000005
on Thu Jul 26 10:33:45 2012
-hosts on /net type autofs ignore,indirect,nosuid,soft,nobrowse,dev=4000002 on Thu Jul 26 10:39:10
2012
bash-4.0#
According to the command output, all mounted file systems are VxFS.
● To query the file system type of a specific volume, run the following
command:
bash-4.0# fstyp -v /dev/vg00/rlvol3
vxfs
version: 6
f_bsize: 8192
f_frsize: 8192
f_blocks: 131072
f_bfree: 91695
f_bavail: 90979
f_files: 25504
f_ffree: 22912
f_favail: 22912
f_fsid: 1073741827
f_basetype: vxfs
f_namemax: 254
f_magic: a501fcf5
f_featurebits: 0
f_flag: 16
f_fsindex: 9
f_size: 131072
According to the command output, the value of f_basetype is vxfs, indicating that
the file system type of the /dev/vg00/rlvol3 volume is VxFS.
2.2.1 FC Connectivity
A Fibre Channel (FC) SAN is a specialized high-speed network that connects host
servers to storage systems. The FC SAN components include HBAs in the host
servers, switches that help route storage traffic, cables, storage processors (SPs),
and storage disk arrays.
To transfer traffic from host servers to shared storage, the FC SAN uses the Fibre
Channel protocol that packages SCSI commands into Fibre Channel frames.
● Ports in FC SAN
Each node in the SAN, such as a host, a storage device, or a fabric component,
has one or more ports that connect it to the SAN. Ports are identified in a
number of ways, such as by:
– World Wide Port Name (WWPN)
A globally unique identifier for a port that allows certain applications to
access the port. The FC switches discover the WWPN of a device or host
and assign a port address to the device.
By carrying SCSI commands over IP networks, iSCSI is used to access remote block
devices in the SAN, providing hosts with the illusion of locally attached devices.
● IP address
Each iSCSI node can have an IP address associated with it so that routing and
switching equipment on your network can establish the connection between
the server and storage. This address is just like the IP address that you assign
to your computer to get access to your company's network or the Internet.
● iSCSI name
A worldwide unique name for identifying the node. iSCSI uses the iSCSI
Qualified Name (IQN) and Extended Unique Identifier (EUI).
By default, HP-UX generates unique iSCSI names for your iSCSI initiators, for
example, iqn.1986-03.com.hp:HPV2.72570eb6-90cb-11dd-a3ed-
ed01f70c7e77_P0. Usually, you do not have to change the default value, but if
you do, make sure that the new iSCSI name you enter is worldwide unique.
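On HP-UX, the initiator name can typically be viewed with the iscsiutil utility delivered with the iSCSI software initiator. The path and option below are an assumption for illustration and may differ by version:
bash-4.1# /opt/iscsi/bin/iscsiutil -l      # list initiator parameters, including the iSCSI initiator name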
Overview
NMP is native to HP-UX 11i v3 and is available to applications without any special
configuration.
NOTE
For more information about NMP, see HP-UX 11i v3 Native Multi-Pathing for Mass
Storage Technical White Paper.
Functions
NMP offers the following functions:
● Optimal distribution of I/O traffic across LUN paths
● Dynamic discovery of LUNs
● Automatic monitoring of LUN paths
● Automatic LUN path failover and recovery
● Intelligent I/O retry algorithms to deal with failed LUN paths
● LUN path authentication to avoid data corruption
Features
Load balancing policies:
NMP supports the following I/O load balancing policies:
● Round-robin (round_robin)
This policy distributes the I/O load equally across all active LUN paths
irrespective of the current load on each LUN path. It is suitable when LUN
paths have similar I/O operation turnaround characteristics.
● Least command load (least_cmd_load)
This policy selects the LUN path with the least number of pending I/O
requests for the next I/O operation. It is suitable when LUN paths have
asymmetric performance characteristics.
● Cell aware round robin (cl_round_robin)
This policy is applicable to servers supporting hard partitions, which have high
latencies for non-local memory access operations. The LUN path chosen to
issue an I/O operation is in the same locality in which the I/O is issued. This
policy helps optimize memory access latency.
● Closest path (closest_path)
This policy selects the LUN path based on its affinity with the CPU processing
the I/O operation so as to minimize memory access latency. This policy is
more appropriate for cell-based platforms. The affinity between the LUN path
and CPU is determined based on the relative locations of the CPU processing
the I/O operation and the CPU to which the HBA used by the LUN path is
bound.
● Preferred path (preferred_path), Preferred target port (pref_tport)
These two policies apply to certain types of targets that present an
optimized/un-optimized controller model (different from active-passive). An
optimized/un-optimized controller pair is one in which the optimized
controller is favored for accessing that LUN since it yields better performance.
With the preferred path, you specify a LUN path to the optimized controller.
This LUN path is used preferably for I/O transfers to the disk device.
● Weighted round-robin (weighted_rr)
This policy distributes the I/O load across all active LUN paths in a round-
robin manner and according to the weight assigned to each LUN path. A
number of I/O operations corresponding to the weight of the LUN path is
transferred on the same LUN path before another LUN path is selected. LUN
paths with a weight of 0 are excluded from I/O transfer.
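The active policy can be queried and changed per LUN with scsimgr, the same utility used later in this guide for other attributes. A minimal sketch, assuming a device file named /dev/rdisk/disk12:
bash-4.1# scsimgr get_attr -D /dev/rdisk/disk12 -a load_bal_policy                  # show the current policy
bash-4.1# scsimgr set_attr -D /dev/rdisk/disk12 -a load_bal_policy=least_cmd_load   # change it until the next restart
bash-4.1# scsimgr save_attr -D /dev/rdisk/disk12 -a load_bal_policy=least_cmd_load  # make the change persistent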
Starting from HP-UX 11.11, HP names versions using 11i followed by a v and a
number, such as 11i v1, 11i v2, and 11i v3, where the letter i indicates that the version has Internet functions.
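The host version is typically queried with uname; a minimal sketch:
bash-4.1# uname -r
B.11.31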
NOTE
According to the command output, the current host version is 11.31, that is, HP-
UX 11i v3.
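The version of an installed component can be checked with swlist; a minimal sketch (the bundle name FibrChanl-02 is taken from the fcmsutil output shown later in this guide, and the exact output format may differ):
bash-4.1# swlist | grep -i FibrChanl
# one line per installed Fibre Channel driver bundle, for example FibrChanl-02  B.11.31.1403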
The command output shows version information about the Fibre Channel HBA
driver.
You can query the latest compatibility information by performing the following
steps:
----End
2.4 Specifications
NOTE
In Table 2-3, there are two values regarding the maximum number of LUNs supported by
HP-UX 11i v1, where 16,384 is the maximum number of LUNs supported by the update
version that was released in December 2003.
NOTE
In HP-UX 11i v3, System Management Homepage (SMH) replaces SAM. Users can run SAM
or SMH commands to enter the user interface.
NOTE
The preceding figure shows SMH commands. However, you can still run SAM commands to
enter the user interface.
Command Function
NOTE
In the preceding table, # in command lines is a variable and must be set to a number based
on site requirements.
3 Planning Connectivity
Hosts and storage systems can be connected based on different criteria. Table 3-1
describes the typical connection modes.
Fibre Channel connections are the most widely used. To ensure service data
security, both direct-attached connections and fabric-attached connections require
multiple paths.
The following details the connections in HyperMetro and non-HyperMetro
scenarios.
3.1 Non-HyperMetro Scenarios
3.2 HyperMetro Scenarios
NOTE
In this connection diagram, each of the two controllers is connected to a host HBA port
with an optical fiber. The cable connections are detailed in Table 3-2.
NOTE
In this connection diagram, each front-end interface module is fully interconnected with the
four controllers and therefore can be accessed by all of them.
NOTE
In this connection diagram, two controllers of the storage system and two ports of the host
are connected to switches through optical fibers. On the switches, the ports connecting to a
storage controller and to the host are grouped in a zone, ensuring connectivity between the
host port and the storage system.
NOTE
● Port numbers in the Zone Member column in this table refer to numbers in Figure 3-3
rather than switch port IDs.
● Zone division in this table is for reference only. Plan zones based on site requirements.
● If you use LLDesigner to plan connectivity, you can obtain zone division data from the
Zone Planning worksheet in the exported LLD file.
NOTE
In this connection diagram, four controllers of the storage system and two ports of the host
are connected to switches through optical fibers. On the switches, the ports connecting to a
storage controller and to the host are grouped in a zone, ensuring connectivity between the
host port and the storage system.
NOTE
● Port numbers in the Zone Member column in this table refer to numbers in Figure 3-4
rather than switch port IDs.
● Zone division in this table is for reference only. Plan zones based on site requirements.
● If you use LLDesigner to plan connectivity, you can obtain zone division data from the
Zone Planning worksheet in the exported LLD file.
NOTE
In this connection diagram, each of the two controllers is connected to a host NIC port with
an Ethernet cable.
NOTE
● IP addresses in this table are for reference only. Plan IP addresses based on site
requirements.
● If you use LLDesigner to plan connectivity, you can obtain IP address data from the IP
Address Planning worksheet in the exported LLD file.
NOTE
In this connection diagram, each of the four controllers is connected to a host NIC port with
an Ethernet cable.
NOTE
● IP addresses in this table are for reference only. Plan IP addresses based on site
requirements.
● If you use LLDesigner to plan connectivity, you can obtain IP address data from the IP
Address Planning worksheet in the exported LLD file.
NOTE
● In this connection diagram, two controllers of the storage system and two NIC ports of
the host are connected to switches through Ethernet cables, ensuring the connectivity
between the host ports and the storage.
● If you use LLDesigner to plan connectivity, you can obtain IP address data from the IP
Address Planning worksheet in the exported LLD file.
NOTE
● In this connection diagram, four controllers of the storage system and two NIC ports of
the host are connected to switches through Ethernet cables, ensuring the connectivity
between the host ports and the storage.
● If you use LLDesigner to plan connectivity, you can obtain IP address data from the IP
Address Planning worksheet in the exported LLD file.
NOTE
If switches are used, obtain official product documentation specific to the switch model and
version, and learn about how to configure the switches.
NOTE
In HyperMetro storage scenarios, the Fast Write function cannot be enabled on both the
storage devices and switches (this function is called Fast Write on Brocade switches and
Write Acceleration on Cisco switches).
4.2 Host
Before connecting a host to a storage system, ensure that the host HBAs have
been identified and are functioning properly. You also need to obtain the world
wide names (WWNs) of HBA ports for subsequent storage system configurations.
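The installed HBAs can be listed with ioscan on the fc device class. The following is an illustrative sketch only; the hardware path and HBA model are taken from the fcmsutil output later in this section, and the exact column layout may differ:
bash-4.1# ioscan -fnC fc
Class  I  H/W Path                 Driver  S/W State  H/W Type    Description
===============================================================================
fc     0  0/0/0/7/0/0/0/5/0/0/0    fclp    CLAIMED    INTERFACE   HP AT094-60001 Fibre Channel Adapter
                                   /dev/fclp0
fc     1  0/0/0/7/0/0/0/5/0/0/1    fclp    CLAIMED    INTERFACE   HP AT094-60001 Fibre Channel Adapter
                                   /dev/fclp1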
According to the command output, the host has identified two 8 Gbit/s Fibre
Channel host ports provided by an HP AT094-60001 HBA. The information is
consistent with that about the actually installed HBA, indicating that the host
operating system has identified the installed HBA.
By running the preceding command, you can also obtain the device name
assigned by the host operating system to each port on the HBA, for
example, /dev/fclp0 and /dev/fclp1 in the preceding command output. These
device names will be used in follow-up query commands.
HP-UX 11i v2
You can query the WWN and speed of an HBA by running the fcmsutil command.
HP-UX 11i v3
You can run either the fcmsutil or scsimgr command to query the WWN and
speed of an HBA.
bash-4.1# fcmsutil /dev/fclp0
Vendor ID is = 0x10df
Device ID is = 0xf100
PCI Sub-system Vendor ID is = 0x103c
PCI Sub-system ID is = 0x3392
Chip version = 3
Firmware Version = 2.00A4 SLI-3 (U3D2.00A4)
EFI Version = UU5.03A10
EFI Boot = ENABLED
Driver-Firmware Dump Available = NO
Driver-Firmware Dump Timestamp = N/A
Previous Topology = PTTOPT_FABRIC
Link Speed = 8Gb
Local N_Port_id is = 0x130900
Previous N_Port_id is = None
N_Port Node World Wide Name = 0x2000ac162d1ea160
N_Port Port World Wide Name = 0x1000ac162d1ea160
Switch Port World Wide Name = 0x200900051edda111
Switch Node World Wide Name = 0x100000051edda111
N_Port Symbolic Port Name = louis201_fclp0
N_Port Symbolic Node Name = louis201_HP-UX_B.11.31
Driver state = AWAITING_LINK_UP
Hardware Path is = 0/0/0/7/0/0/0/5/0/0/0
Maximum Frame Size = 2048
TYPE = PFC
NPIV Supported = YES
Driver Version = @(#) FCLP: PCIe Fibre Channel driver (FibrChanl-02), B.11.31.1403, Dec 2 2013, FCLP_IFC
(3,2)
bash-4.1#
bash-4.1#
To query firmware information about an HBA, you can also run the following
command:
bash-4.1# fcmsutil /dev/fclp0 vpd
VITAL PRODUCT DATA
--------- ------------- -------
Product Description : "HP AT094A Fibre Channel PCIe 2p 8Gb FC and 2p 1/10GbE Adtr"
bash-4.1#
5 Configuring Connectivity
This chapter describes how to configure connectivity between storage systems and
hosts.
NOTE
If switches are used, configure zones (for FC connections) or VLANs (for iSCSI connections)
by referring to the official product documentation specific to the switch model and version.
Vendor ID is = 0x1077
Device ID is = 0x2422
PCI Sub-system Vendor ID is = 0x103C
PCI Sub-system ID is = 0x12DE
PCI Mode = PCI-X 133 MHz
ISP Code version = 5.6.5
ISP Chip version = 3
Topology = PTTOPT_FABRIC
Link Speed = 4Gb
Local N_Port_id is = 0x011c00
Previous N_Port_id is = None
N_Port Node World Wide Name = 0x50014380017ab03f
N_Port Port World Wide Name = 0x50014380017ab03e
bash-4.1#
Step 1 After configuring zones on the switches, log in to DeviceManager of the storage
system. On the Hosts page, select the desired host, click on the right, and
choose Add Initiator.
NOTE
● The information displayed on the GUI may vary slightly with the product version.
● On DeviceManager of OceanStor Dorado 6.0.1 and later versions, the icon is changed
to More.
In the preceding figure, the host initiator (WWN: 50014380017ab03e) has been
discovered.
NOTE
If host initiators cannot be discovered on the storage system, add the initiators manually.
Step 7 Click the host name and check the initiators. Ensure that the Status of the
initiators is Online.
If the initiator is in the Offline state, run the ioscan command on the host and
then check the initiator status again. In the preceding figure, the initiator has been
added to the host and its Status is Online. Then the Fibre Channel connections
have been established.
----End
NOTICE
● If a LUN mapped to a host does not have any service, the initiator on the
storage system is in the offline state. To restore the initiator status to Online,
run the ioscan command or read/write the mapped LUN.
● Due to the restrictions of the host, when LUNs are mapped from the storage
system to the host, the LUN whose Host LUN ID is 0 must be mapped.
Otherwise, the host may fail to scan for LUNs. There are no such requirements
on the device LUN ID (Dev LUN ID) allocated when a LUN was created.
To enable the host to discover mapped LUNs, perform the following operations:
Step 2 View the information about disks identified by the operating system.
● For HP-UX 11i v2 or 11i v1, run the ioscan -funC disk command.
● For HP-UX 11i v3, run the ioscan -funNC disk command.
In this example, one mapped LUN is found, and its device file is disk12. If the host
operating system does not create a device file for a mapped LUN, you must run
the insf -e command to create one for the LUN.
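A minimal sketch of the device file creation and verification commands referenced above:
bash-4.1# insf -e                 # create device special files for newly discovered LUNs
bash-4.1# ioscan -funNC disk      # confirm that a device file now exists for the mapped LUN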
You can run the following command to view the disk capacity information:
bash-4.1# diskinfo /dev/rdisk/disk12
SCSI describe of /dev/rdisk/disk12:
vendor: HUAWEI
product id: XSG1
type: direct access
size: 104857600 Kbytes
bytes per sector: 512
bash-4.1#
----End
NOTICE
● If a LUN mapped to a host does not have any service, the initiator on the
storage system is in the offline state. To restore the initiator status to Online,
run the ioscan command or read/write the mapped LUN.
● Due to the restrictions of the host, when LUNs are mapped from the storage
system to the host, the LUN whose Host LUN ID is 0 must be mapped.
Otherwise, the host may fail to scan for LUNs. There are no such requirements
on the device LUN ID (Dev LUN ID) allocated when a LUN was created.
● To change the LUN mappings on the storage system, refer to 7.1 Changing
LUN Mappings When Third-Party Multipathing Software Is Used on an HP-
UX Host.
To enable the host to discover mapped LUNs, perform the following operations:
Step 2 View the information about disks identified by the operating system.
● For HP-UX 11i v2 or 11i v1, run the ioscan -funC disk command.
● For HP-UX 11i v3, run the ioscan -funNC disk command.
The following is an example:
bash-4.1# ioscan -funNC disk
Class I H/W Path Driver S/W State H/W Type Description
===================================================================
disk 2 64000/0xfa00/0x0 esdisk CLAIMED DEVICE HP DG146BB976
/dev/disk/disk2 /dev/disk/disk2_p1 /dev/disk/disk2_p2 /dev/rdisk/disk2 /dev/rdisk/
disk2_p1 /dev/rdisk/disk2_p2
disk 3 64000/0xfa00/0x1 esdisk CLAIMED DEVICE HP DG146BB976
/dev/disk/disk3 /dev/disk/disk3_p1 /dev/disk/disk3_p2 /dev/disk/disk3_p3 /dev/rdisk/
disk3 /dev/rdisk/disk3_p1 /dev/rdisk/disk3_p2 /dev/rdisk/disk3_p3
disk 5 64000/0xfa00/0x2 esdisk CLAIMED DEVICE TEAC DVD-ROM DW-224EV
/dev/disk/disk5 /dev/rdisk/disk5
disk 12 64000/0xfa00/0xa esdisk CLAIMED DEVICE HUAWEI XSG1
/dev/disk/disk12 /dev/rdisk/disk12
bash-4.1#
In this example, one mapped LUN is found, and its device file is disk12. If the host
operating system does not create a device file for a mapped LUN, you must run
the insf -e command to create one for the LUN.
You can run the following command to view the disk capacity information:
bash-4.1# diskinfo /dev/rdisk/disk12
SCSI describe of /dev/rdisk/disk12:
vendor: HUAWEI
product id: XSG1
type: direct access
size: 104857600 Kbytes
bytes per sector: 512
bash-4.1#
----End
6 Configuring Multipathing
NOTE
The information displayed on the GUI may vary slightly with the product version.
If Host Access Mode is not Load balancing, perform the following steps to
change it:
Step 1 Click the host name and choose Operation > Modify.
Step 2 Set Host Access Mode to Load balancing and click OK.
----End
NOTICE
● For details about the HP-UX versions, see the Huawei Storage Interoperability
Navigator.
● If a LUN has been mapped to a host, you must restart the host for the
configuration to take effect after you modify Host Access Mode. If you map
the LUN for the first time, restart is not needed.
● When data is migrated from other Huawei storage systems (including Dorado
V3, OceanStor V3, and OceanStor V5) to 6.x series storage systems, configure
the storage system by following instructions in 7.4 Recommended
Configurations for 6.x Series Storage Systems for Taking Over Data from
Other Huawei Storage Systems When the Host Uses the OS Native
Multipathing Software.
name = leg_mpath_enable
current = true
default = true
saved =
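The two commands that the following note refers to are presumably the global forms of the scsimgr calls shown later in this section for a specific disk; a hedged sketch:
bash-4.1# scsimgr set_attr -a leg_mpath_enable=true
bash-4.1# scsimgr save_attr -a leg_mpath_enable=true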
NOTE
The first command will be invalid after the host is restarted. The second command is
effective permanently.
Step 3 Check the disks that the system discovers and the NMP status of mapped LUNs.
bash-4.1# ioscan -funNC disk
Class I H/W Path Driver S/W State H/W Type Description
===================================================================
disk 2 64000/0xfa00/0x0 esdisk CLAIMED DEVICE HP DG146BB976
/dev/disk/disk2 /dev/disk/disk2_p1 /dev/disk/disk2_p2 /dev/rdisk/disk2 /dev/rdisk/
disk2_p1 /dev/rdisk/disk2_p2
disk 3 64000/0xfa00/0x1 esdisk CLAIMED DEVICE HP DG146BB976
/dev/disk/disk3 /dev/disk/disk3_p1 /dev/disk/disk3_p2 /dev/disk/disk3_p3 /dev/rdisk/
disk3 /dev/rdisk/disk3_p1 /dev/rdisk/disk3_p2 /dev/rdisk/disk3_p3
disk 5 64000/0xfa00/0x2 esdisk CLAIMED DEVICE TEAC DVD-ROM DW-224EV
/dev/disk/disk5 /dev/rdisk/disk5
disk 12 64000/0xfa00/0xa esdisk CLAIMED DEVICE HUAWEI XSG1
/dev/disk/disk12 /dev/rdisk/disk12
bash-4.1# scsimgr get_attr -D /dev/rdisk/disk12 -a leg_mpath_enable
name = leg_mpath_enable
current = true
default = true
saved =
According to the command output, the system identifies one LUN mapped from
the Huawei storage system (model: XSG1). The device name assigned to the LUN
is /dev/disk/disk12.
NMP is enabled for /dev/disk/disk12.
Step 4 (Optional) If NMP is already enabled for the LUN, skip this step. If the NMP status
for the LUN is false, run either of the following commands to change it to the
enabled state:
scsimgr set_attr -D /dev/rdisk/disk12 -a leg_mpath_enable=true
scsimgr save_attr -D /dev/rdisk/disk12 -a leg_mpath_enable=true
NOTE
The first command will be invalid after the host is restarted. The second command is
effective permanently.
----End
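The output below shows the configured load balancing policy of a LUN; it presumably comes from a scsimgr query of the load_bal_policy attribute, mirroring the attribute queries used elsewhere in this guide:
bash-4.1# scsimgr get_attr -D /dev/rdisk/disk12 -a load_bal_policy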
name = load_bal_policy
current = round_robin
default = round_robin
saved =
bash-4.1#
bash-4.1#
In the preceding command output, ensure that all paths are in the ACTIVE state.
NOTICE
When a LUN mapped to the host does not have any service, the state of paths to
this LUN on the host becomes UNOPEN. To restore the path status to ACTIVE, run
the ioscan command or read or write the mapped LUN.
----End
Table 6-1 Storage configurations for interconnection with HP-UX application servers
HyperMetro Working Mode: Load balancing
Storage System: Local storage
OS Setting: HP-UX
Host Access Mode: Asymmetric
Preferred Path for HyperMetro: Yes
Description: The host uses all paths of a disk with equal priority.
NOTICE
● For details about the HP-UX versions, see the Huawei Storage Interoperability
Navigator.
● If a LUN has been mapped to a host, you must restart the host for the
configuration to take effect after you modify Host Access Mode or Preferred
Path for HyperMetro. If you map the LUN for the first time, restart is not
needed.
● Ensure that HyperMetro is working properly when modifying networking.
● When data is migrated from other Huawei storage systems (including
OceanStor Dorado V3, OceanStor V3, and OceanStor V5) to 6.x series storage
systems, configure the storage system by following instructions in 7.4
Recommended Configurations for 6.x Series Storage Systems for Taking
Over Data from Other Huawei Storage Systems When the Host Uses the
OS Native Multipathing Software.
NOTE
The information displayed on the GUI may vary slightly with the product version.
Step 2 For both the local and remote storage systems, set Host Access Mode to
Asymmetric and Preferred Path for HyperMetro to Yes.
----End
Step 2 For the local storage system, set Host Access Mode to Asymmetric and Preferred
Path for HyperMetro to Yes. For the remote storage system, set Host Access
Mode to Asymmetric and Preferred Path for HyperMetro to No.
----End
According to the command output, the version is HP-UX 11i v3 1403, meeting
requirements.
name = load_bal_policy
current = round_robin
default = round_robin
saved =
bash-4.1#
bash-4.1# scsimgr get_attr -D /dev/rdisk/disk12 -a alua_enabled
name = alua_enabled
current = true
default = true
saved =
bash-4.1#
According to the command output, the load balancing policy is round-robin, and
ALUA is enabled.
NOTE
disk12 is a device file created for the mapped LUN after connectivity is configured.
6.2.3 Verification
Verifying the Load Balancing Mode
Step 1 Run the ioscan -funNC disk command to check whether HyperMetro LUNs have
been properly aggregated.
HyperMetro LUNs should be aggregated as a drive letter on the host, such as
disk12 in the following example:
bash-4.1# ioscan -funNC disk
Class I H/W Path Driver S/W State H/W Type Description
===================================================================
disk 2 64000/0xfa00/0x0 esdisk CLAIMED DEVICE HP DG146BB976
/dev/disk/disk2 /dev/disk/disk2_p1 /dev/disk/disk2_p2 /dev/rdisk/disk2 /dev/rdisk/
disk2_p1 /dev/rdisk/disk2_p2
disk 3 64000/0xfa00/0x1 esdisk CLAIMED DEVICE HP DG146BB976
/dev/disk/disk3 /dev/disk/disk3_p1 /dev/disk/disk3_p2 /dev/disk/disk3_p3 /dev/rdisk/
disk3 /dev/rdisk/disk3_p1 /dev/rdisk/disk3_p2 /dev/rdisk/disk3_p3
disk 5 64000/0xfa00/0x2 esdisk CLAIMED DEVICE TEAC DVD-ROM DW-224EV
/dev/disk/disk5 /dev/rdisk/disk5
disk 12 64000/0xfa00/0xa esdisk CLAIMED DEVICE HUAWEI XSG1
/dev/disk/disk12 /dev/rdisk/disk12
Step 2 Run the scsimgr lun_map -D /dev/rdisk/disk# command to check the path status
and number of paths.
The number of paths should be the sum of the logical paths on both storage
systems (consistent with the actual configuration).
bash-4.1# scsimgr lun_map -D /dev/rdisk/disk12
Class = lunpath
Instance = 13
Hardware path = 0/2/1/0/4/0.0x2811010203040509.0x4001000000000000
SCSI transport protocol = fibre_channel
State = UNOPEN
Last Open or Close state = ACTIVE
bash-4.1#
NOTICE
When a LUN mapped to the host does not have any service, the state of paths to
this LUN on the host becomes UNOPEN. To restore the path status to ACTIVE, run
the ioscan command or read or write the mapped LUN.
----End
Step 2 Run the scsimgr lun_map -D /dev/rdisk/disk# command to check the path status
and number of paths.
The number of paths should be the sum of the logical paths on both storage
systems (consistent with the actual configuration).
bash-4.1# scsimgr lun_map -D /dev/rdisk/disk12
NOTICE
When a LUN mapped to the host does not have any service, the state of paths to
this LUN on the host becomes UNOPEN. To restore the path status to ACTIVE, run
the ioscan command or read or write the mapped LUN.
----End
7 FAQs
Root Cause
When a LUN mapping is removed from the storage system, the HP-UX host does
not delete the disk corresponding to the LUN. When a new LUN is mapped to the
host with the same host LUN ID as the previously deleted one, NMP may manage
paths incorrectly after the host scans disks and may identify the new LUN and the
previously deleted one as the same LUN. As a result, the drive letter of the new
LUN cannot be refreshed. To prevent this problem, you must follow the correct
procedure when changing LUN mappings.
Procedure
In non-HyperMetro scenarios:
1. Stop the services running on the disk for which you want to change the
mapping.
2. Run the ioscan -funNC disk and diskinfo /dev/rdisk/diskxxx commands to
check whether residual disk information and command devices (16 KB disks)
exist on the host.
In the following example, NO_HW indicates that residual disk information
exists.
NOTE
If the third-party Veritas DMP multipathing software is used, run the vxdisk rm
huawei-xsg1*_# command to delete the disks managed by DMP, and then go to 3.
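Residual disk information typically appears in the ioscan output with NO_HW in the S/W State column. A minimal, illustrative sketch, assuming the disk12 device used earlier in this guide:
bash-4.1# ioscan -funNC disk
disk  12  64000/0xfa00/0xa  esdisk  NO_HW  DEVICE  HUAWEI  XSG1
          /dev/disk/disk12  /dev/rdisk/disk12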
3. If the host has residual disk information and command devices, run the rmsf -
a /dev/disk/diskXXX command to delete all residual disk information and
command devices. This operation may change the drive letter of the disk after
the new LUN is mapped. If upper-layer applications on the host depend on
the drive letter, record the relationship between the drive letter and WWN
before deleting the disk, which helps you reconfigure services or restore the
original drive letter after the change.
The following is an example:
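A hedged sketch of the deletion command, assuming the residual device file is disk12 (rmsf typically prints no output on success):
bash-4.1# rmsf -a /dev/disk/disk12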
NOTICE
If the newly mapped LUN does not preempt the host LUN ID of the command
device, you do not need to delete the command device.
4. Add new mappings on the storage system and scan for disks on the host.
5. Contact the administrator to restart the services.
In HyperMetro scenarios:
To change the mappings on both storage systems, use the same method as the
non-HyperMetro scenario. If you only need to change the mapping on one storage
system, perform the following steps:
1. On the storage system, suspend the services on the LUN for which you want
to change the mapping. After all services have been switched to the other
storage system, perform 2.
2. Run the scsimgr lun_map -D /dev/rdisk/diskXXX command to query all
logical paths of the LUN for which you want to change the mapping.
3. Run the rmsf -H Hardware_path command to delete all logical paths of the
involved LUN.
4. Add new mappings on the storage system and scan for disks on the host.
5. Run the scsimgr lun_map -D /dev/rdisk/diskXXX command to check whether
the paths of the involved LUN are identified successfully.
Troubleshooting
To change the path to the correct local directory, perform the following
operations:
Step 1 On the software installation interface, choose Actions > Change Source.
Step 2 In the Source Depot Type drop-down list, change Network Directory/CDROM to
Local Directory.
Set Source Depot Path to the absolute path of the installation package. Figure
7-3 shows the configuration information after modification.
NOTICE
Source Host Name must be the same as that configured in the /etc/hosts file.
Otherwise, the software cannot be installed.
Step 3 Mark again the software to be installed and then reinstall it.
----End
Troubleshooting
If the tracing log contains "Run 'scsimgr replace_wwid' command to validate the
change", run the scsimgr replace_wwid command to rectify the fault.
For example, the /var/adm/syslog/syslog.log file contains the following
information:
Oct 12 10:52:15 root vmunix: class : lunpath, instance 11
Oct 12 10:52:15 root vmunix: Evpd inquiry page 83h/80h failed or the current page 83h/80h data do
not match the previous known page 83h/80h data on LUN id 0x0 probed beneath the target path (class =
tgtpath, instance = 5) The lun path is (class = lunpath, instance 11).Run 'scsimgr replace_wwid' command
to validate the change
Oct 12 10:52:15 root vmunix:
Oct 12 10:52:15 root vmunix: An attempt to probe existing LUN id 0x0 failed with errno of 14.
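A hedged sketch of the recovery command, assuming the affected LUN corresponds to the disk12 device file used elsewhere in this guide (take the actual device special file, or the lunpath hardware path used with -H, from the syslog entry):
bash-4.1# scsimgr replace_wwid -D /dev/rdisk/disk12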
When the host uses the OS native multipathing software and data is taken over
from other Huawei storage systems by a 6.x series storage system, you are advised
to retain the configurations of the old storage and host to ensure proper running.
The following provides recommended configurations.
1. Configuration method for 6.x series storage systems working in load balancing mode
HP-UX | Non-HyperMetro | ALUA | ALUA | Unchanged | Use the default host access mode, that is, load balancing. | Unchanged | Unchanged | Unchanged
Note 1: If LUNs have been mapped to the host, you must restart the host for the configuration to
take effect after changing the initiator mode or host access mode on the storage systems.
2. Configuration method for 6.x series storage systems working in asymmetric mode
HP-UX | Non-HyperMetro | ALUA | ALUA | Unchanged | Set the host access mode to asymmetric. | Unchanged | Unchanged | Unchanged
Note 1: If LUNs have been mapped to the host, you must restart the host for the configuration to
take effect after changing the initiator mode or host access mode on the storage systems.
A
AN Active Non-optimized
AO Active Optimized
C
CLI Command Line Interface
CDFS CD-ROM File System
E
EUI Extended Unique Identifier
F
FC Fibre Channel
FCoE Fibre Channel over Ethernet
H
HBA Host Bus Adapter
HFS High Performance File System
HP-UX Hewlett Packard UniX
I
iSCSI Internet Small Computer System
Interface
ISM Integrated Storage Manager
J
JFS Journaled File System
L
LE Logical Extent
LACP Link Aggregation Control Protocol
LUN Logical Unit Number
LV Logical Volume
LVM Logical Volume Manager
M
MC Multi-Computer
N
NMP Native Multi-Pathing
NFS Network File System
P
PE Physical Extent
R
RAID Redundant Array of Independent Disks
S
SAM System Administration Manager
SG ServiceGuard
SMH System Management Homepage
SP Storage Processor
V
VG Volume Group
VxFS Veritas file system
VxVM Veritas Volume Manager
W
WWN World Wide Name
Volume management software widely used in HP-UX includes LVM delivered with
HP-UX and VxVM provided by Symantec.
9.1 LVM
9.1.1 Overview
LVM combines space in several disks (physical volumes) into a volume group (VG)
and then divides the space in the VG into logical volumes (LVs), namely, partitions
in LVM.
Physical Volume
Disks managed by LVM are called physical volumes (PVs). Before disks are used by
LVM, certain special data is constructed on these disks. Disks with the constructed
data are considered to be PVs and can be added to VGs.
Volume Group
A volume group (VG) consists of one or more PVs. PVs in a VG provide disk space
that can be assigned to one or more LVs.
Logical Volume
The disk space of a VG can be assigned to one or more LVs. Similar to a partition,
an LV can contain a file system, swap area, or original data.
In addition, LVs have the following characteristics:
● An LV can contain the space of all or some PVs.
● An LV can be expanded to include multiple PVs managed by LVM.
● The size of an LV can be changed, and an LV can be moved to another disk.
PE and LE
In LVM, the minimum space unit assigned is called extent. PVs can be divided into
multiple physical extents (PEs). PVs can be assigned and used immediately after
being added to a VG.
An LV contains a series of sequential logical extents (LEs). Each LE is a pointer
that points to a PE on a disk.
The size of a PE and that of LE in a VG are the same and can be set during VG
creation. By default, the extent size is 4 MB.
9.1.2 Installation
By default, LVM is automatically installed along with the installation of the host
operating system and does not need to be configured.
bash-4.0#
bash-4.0# pvcreate -f /dev/rdsk/c14t0d0
Physical volume "/dev/rdsk/c14t0d0" has been successfully created.
bash-4.0#
In the preceding example, c13t0d0 and c14t0d0 are selected to create PVs.
If you are not sure whether disks have bad blocks, run the mediainit command to
initialize the disks. This command checks disk completeness in read and write test
mode and sets all identified bad blocks to the idle state. The command syntax is
as follows:
bash-4.0# mediainit /dev/rdsk/c14t0d2
NOTICE
The mediainit command damages all existing user data on a disk. Therefore,
exercise caution when using this command.
Step 3 Run the pvdisplay command to check whether PVs are created successfully.
bash-4.0# pvdisplay -l /dev/dsk/c13t0d0
/dev/dsk/c13t0d0:LVM_Disk=yes
bash-4.0# pvdisplay -l /dev/dsk/c13t0d1
/dev/dsk/c13t0d1:LVM_Disk=no
bash-4.0#
----End
Creating a VG
To create a VG, perform the following steps:
The first command queries the VG device files that exist in the system. The second
and third commands are used to create a VG device file.
VG device files (also called control files) enable the LVM kernel and LVM
commands to communicate in the created VGs.
NOTE
The group file is a character device file. Its major number is 64 and its minor number ends
with 0000 in hexadecimal format as follows:
0xhh0000
hh indicates a VG number in hexadecimal format and its value varies with VGs.
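The three commands described above are presumably along the following lines; the VG number 0x02 is an assumption and must not conflict with existing VGs:
bash-4.0# ls -l /dev/*/group                       # query the VG device files that already exist
bash-4.0# mkdir /dev/vg_try
bash-4.0# mknod /dev/vg_try/group c 64 0x020000    # major 64, minor 0xhh0000 as described in the note
bash-4.0# vgcreate /dev/vg_try /dev/dsk/c13t0d0 /dev/dsk/c14t0d0   # then create the VG on both paths of the LUN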
Devices on two paths of the same LUN are added to the vg_try VG. Therefore, the
LUN is managed by PV-Links.
The preceding command output indicates that the current primary path
is /dev/dsk/c13t0d0 and the secondary path is /dev/dsk/c14t0d0.
NOTICE
----End
Expanding a VG
The command syntax is as follows:
vgextend vgname pvname
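A minimal example, assuming the vg_try VG created earlier and adding both paths of a new LUN (matching the Alternate Link shown in the output below):
bash-4.0# vgextend /dev/vg_try /dev/dsk/c14t0d1 /dev/dsk/c13t0d1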
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0
VG Version 1.0
VG Max Size 114624m
VG Max Extents 28656
--- Physical volumes ---
PV Name /dev/dsk/c14t0d1
PV Name /dev/dsk/c13t0d1 Alternate Link
PV Status available
Total PE 1791
Free PE 1791
Autoswitch On
Proactive Polling On
bash-4.0#
Creating an LV
To create an LV, perform the following steps:
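The LV creation command itself is presumably lvcreate; a minimal sketch that matches the 400 MB, 100-extent volume shown in the output below:
bash-4.0# lvcreate -L 400 -n lv_try01 /dev/vg_try   # 400 MB = 100 extents of 4 MB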
Act PV 1
Max PE per PV 1535
VGDA 2
PE Size (Mbytes) 4
Total PE 1535
Alloc PE 103
Free PE 1432
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0
VG Version 1.0
VG Max Size 98240m
VG Max Extents 24560
LV Name /dev/vg_try/lv_try01
LV Status available/syncd
LV Size (Mbytes) 400
Current LE 100
Allocated PE 100
Used PV 1
VG Name /dev/vg_try
LV Permission read/write
LV Status available/syncd
Mirror copies 0
Consistency Recovery MWC
Schedule parallel
LV Size (Mbytes) 400
Current LE 100
Allocated PE 100
Stripes 0
Stripe Size (Kbytes) 0
Bad block on
Allocation strict
IO Timeout (Seconds) default
----End
Step 1 Run the newfs command to create a file system. The following is an example:
bash-4.0# newfs -F vxfs -o largefiles /dev/vg_try/rlv_try00
version 7 layout
12288 sectors, 12288 blocks of size 1024, log size 1024 blocks
largefiles supported
bash-4.0#
The value of f_flag can be 0 or 16. 16 indicates that a large file system is
supported. If the value is 0, a large file system is not supported.
Step 2 Create mounting points for the file system and mount LVs to the mounting points.
bash-4.0# mkdir -p /test/mnt1
bash-4.0# mkdir -p /test/mnt2
bash-4.0# mount /dev/vg_try/lv_try00 /test/mnt1/
bash-4.0# mount /dev/vg_try/lv_try01 /test/mnt2/
Two LVs are properly mounted and the I/O operations can be performed properly.
Step 4 To unmount a volume, run the following commands:
bash-4.0# umount /dev/vg_try/lv_try00
bash-4.0# umount /test/mnt2/
bash-4.0# bdf
Filesystem kbytes used avail %used Mounted on
/dev/vg00/lvol3 1048576 315296 727608 30% /
/dev/vg00/lvol1 1835008 364368 1459224 20% /stand
/dev/vg00/lvol8 8912896 1419528 7436448 16% /var
/dev/vg00/lvol7 6553600 3037552 3488696 47% /usr
/dev/vg00/lvol4 524288 20952 499536 4% /tmp
/dev/vg00/lvol6 7864320 3071152 4760808 39% /opt
/dev/vg00/lvol5 114688 37872 76352 33% /home
bash-4.0#
----End
Expanding an LV
To expand an existing LV, perform the following steps:
lvextend -l xxx lvpath
or
lvextend -L yyy lvpath
In the preceding commands, xxx indicates the number of logical extents after the
expansion, and yyy indicates the capacity (in MB) after the expansion.
The following is an example:
bash-4.0# lvextend -l 300 /dev/vg_try/lv_try00
Logical volume "/dev/vg_try/lv_try00" has been successfully extended.
Volume Group configuration for /dev/vg_try has been saved in /etc/lvmconf/vg_try.conf
bash-4.0#
bash-4.0# lvextend -L 800 /dev/vg_try/lv_try01
Logical volume "/dev/vg_try/lv_try01" has been successfully extended.
Volume Group configuration for /dev/vg_try has been saved in /etc/lvmconf/vg_try.conf
NOTE
If the file system is an online JFS, you can perform online expansion for the JFS by running
fsadm. For example, run the following command to expand the /test/mnt1 directory to 32
MB:
fsadm -F vxfs -b 32768 /test/mnt1
The b parameter indicates the number of blocks of the new file system. The size of a JFS
block is typically 1 KB.
----End
Activating a VG
After importing a VG, you must activate it before mounting it and performing I/O
operations on it. The command syntax is as follows:
vgchange -a y VG name
Deactivating a VG
Before exporting a VG, you must deactivate it. The command syntax is as follows:
vgchange -a n VG name
Exporting a VG
VGs must be imported or exported in cluster, data backup, and data restoration
application scenarios.
The preceding command saves the VG information in the vgname.map file. Before
exporting a VG, you must deactivate it.
Importing a VG
The command syntax is as follows:
vgimport -s -v -m vgname.map VG name
Deleting an LV
The command syntax is as follows:
lvremove lvname
Deleting a VG
The command syntax is as follows:
vgremove vgname
Step 1 Ensure that all LVs have been deleted from the VG.
bash-4.0# vgdisplay -v /dev/vg_tong
--- Volume groups ---
VG Name /dev/vg_tong
VG Write Access read/write
VG Status available
Max LV 255
Cur LV 0
Open LV 0
Max PV 16
Cur PV 1
Act PV 1
Max PE per PV 1791
VGDA 2
PE Size (Mbytes) 4
Total PE 1791
Alloc PE 0
Free PE 1791
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0
VG Version 1.0
VG Max Size 114624m
VG Max Extents 28656
Step 2 Keep one PV and delete all other PVs from the VG.
bash-4.0# vgreduce /dev/vg_tong /dev/dsk/c13t0d1
Device file path "/dev/dsk/c13t0d1" is an alternate path.
Volume group "/dev/vg_tong" has been successfully reduced.
Volume Group configuration for /dev/vg_tong has been saved in /etc/lvmconf/vg_tong.conf
bash-4.0# vgdisplay -v /dev/vg_tong
--- Volume groups ---
VG Name /dev/vg_tong
VG Write Access read/write
VG Status available
Max LV 255
Cur LV 0
Open LV 0
Max PV 16
Cur PV 1
Act PV 1
Max PE per PV 1791
VGDA 2
PE Size (Mbytes) 4
Total PE 1791
Alloc PE 0
Free PE 1791
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0
VG Version 1.0
VG Max Size 114624m
VG Max Extents 28656
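With only one PV left in the VG, the VG itself can then be removed using the syntax given above; a minimal example:
bash-4.0# vgremove /dev/vg_tong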
----End
Deleting a PV
The command syntax is as follows:
pvremove raw device name
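A minimal example, assuming the PV no longer belongs to any VG:
bash-4.0# pvremove /dev/rdsk/c13t0d1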
9.2 VxVM
9.2.1 Overview
VxVM is a storage management subsystem that enables you to manage physical
disks as logical devices.
VxVM provides simple functions of managing online disks for the computation
environment and storage area networks (SANs). It has the following advantages:
● Supports RAID.
● Provides functions that improve error tolerance and enable quick disk failure
recovery.
● Provides an LV management layer to enable cross-disk management and
remove physical restrictions of disk devices.
● Provides tools that improve performance and ensure data availability and
integrity.
● Enables dynamic disk storage configuration when the system is active.
9.2.2 Installation
VxVM is not free and does not come pre-installed with the operating system.
Pre-installation Check
Before installation, run the following command to check whether VxVM has been
installed:
swlist | grep -i vxvm
Installation Procedure
To install VxVM, perform the following steps:
Step 1 Upload the VxVM installation package to a directory in the HP-UX system.
Step 2 Decompress the installation package.
If the file name extension of the package is .gz, run gunzip filename.gz.
If the file name extension of the package is .tar, run tar -xvf filename.tar.
Step 3 In the directory containing the package, run chmod +x installer to grant
permissions to the installer file.
Step 4 Run ./installer to install VxVM.
----End
Initializing Disks
Disks that have just been brought under VxVM control are not in the online state
because they have not been initialized and therefore cannot be used. You must run
the vxdisksetup -i disk name command to initialize them first. The status of
successfully initialized disks is online. The following is an example:
bash-4.0# /opt/VRTS/bin/vxdisksetup -i huawei-xsg10_0
bash-4.0# vxdisk list
DEVICE TYPE DISK GROUP STATUS
disk_0s2 auto:LVM - - LVM
disk_1s2 auto - - error
huawei-xsg10_0 auto:cdsdisk - - online
huawei-xsg10_1 auto:none - - online invalid
huawei-xsg10_2 auto:FS_wholedisk - - FS_wholedisk
huawei-xsg10_3 auto:FS_wholedisk - - FS_wholedisk
huawei-xsg10_4 auto:none - - online invalid
bash-4.0#
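Before a volume can be created, a disk group (DG) must exist. A DG is typically created from an initialized disk with vxdg init; a minimal sketch (dg1 and the disk name match those used elsewhere in this guide):
bash-4.0# vxdg init dg1 huawei-xsg10_0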
Creating a Volume
You can run the vxassist -g DG name make volume name capacity command to
create a volume in a created DG. The following is an example:
bash-4.0# vxassist -g dg1 make vol2 1g
bash-4.0# vxprint -g dg1 -t vol2
V NAME RVG/VSET/CO KSTATE STATE LENGTH READPOL PREFPLEX UTYPE
Mounting a Volume
You can mount a created volume to a directory.
● To mount a volume to an OS native VxFS, run the following command:
mount /dev/vx/dsk/<disk group>/<volume name> <mount directory>
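A minimal usage sketch, assuming the dg1/vol2 volume created above and a hypothetical mount point /test/vxmnt (the file system is created on the raw volume device first):
bash-4.0# newfs -F vxfs /dev/vx/rdsk/dg1/vol2
bash-4.0# mkdir -p /test/vxmnt
bash-4.0# mount /dev/vx/dsk/dg1/vol2 /test/vxmnt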
Disabling a Volume
This command makes a volume unavailable to a user and changes the volume
status from ENABLED or DETACHED to DISABLED. The command syntax is as
follows:
vxvol -g DG name stop volume name
Enabling a Volume
This command makes a volume available to a user and changes the volume status
from DISABLED to ENABLED or DETACHED. The command syntax is as follows:
vxvol -g DG name start volume name
Deleting a Volume
The command syntax is as follows:
vxedit -g DG name -rf rm volume name
Exporting a DG
DGs must be imported or exported in cluster, data backup, and data restoration
application scenarios. Before exporting a DG, you must stop all volumes on the
DG. Run the vxdg deport DG name command to export the DG. The following is
an example:
bash-4.0# vxvol -g dg1 stop vol2
bash-4.0# vxdg deport dg1
bash-4.0# vxdg list
NAME STATE ID
bash-4.0#
Importing a DG
The command syntax is as follows:
vxdg import DG name
After importing a DG, you must activate it before using it. The following is an
example:
bash-4.0#vxdg import dg1
bash-4.0#vxdg list
NAME STATE ID
dg1 enabled,cds 1330044217.14.ibm130
bash-4.0#vxvol -g dg1 startall
Adding a Disk to a DG
You can add disks to a DG when its capacity is insufficient. The command syntax is
as follows:
vxdg -g DG name adddisk disk name
10 Introduction to High-Availability
Technologies
10.1 MC/SG
10.2 Veritas VCS
10.1 MC/SG
10.1.1 Overview
With ever-increasing service requirements, mission-critical applications must
always be available and the system must tolerate faults. However, fully
fault-tolerant systems are costly, so fault tolerance needs to be provided at
the application level at a reasonable cost.
A high availability (HA) solution ensures that users can access applications and
data even if any component in the system fails. By eliminating single points of
failure (SPOFs), planned and unplanned downtime is either avoided or made
transparent to users.
● High reliability
● Balanced workload
● Data integrity protection
● Integrated MC/SG cluster and network node management programs
As of the release of this document, the latest MC/SG version issued with HP-UX
is A.11.20.
MC/SG manages nodes to form a cluster. The master node is called the Cluster
Coordinator. It receives heartbeat messages from other nodes to identify their
states.
If a node becomes abnormal, MC/SG forms a new cluster without this abnormal
node. The configuration information about this new cluster is transmitted to the
Package Manager to prevent application systems from running on the abnormal
node.
● Determines the time and node for running, suspending, and migrating a
package.
● Executes a user-defined control text file to properly suspend and run a
package.
http://h20565.www2.hpe.com/portal/site/hpsc/template.PAGE/public/psi/manualsResults/?sp4ts.oid=4162060&spf_p.tpst=psiContentResults&spf_p.prp_psiContentResults=wsrp-navigationalState%3Daction%253Dmanualslist%257Ccontentid%253DUser-Guide-%252528how-to-use%252529%257Clang%253Den&javax.portlet.begCacheTok=com.vignette.cachetoken&javax.portlet.endCacheTok=com.vignette.cachetoken
Enabling a Cluster
The command syntax is as follows:
# cmruncl -v
# cmruncl -v -n nodename1 -n nodename2
Disabling a Cluster
The command syntax is as follows:
# cmhaltcl -f -v
To disable the cluster service on one node, run the following command:
# cmhaltnode -f -v nodename1
----End
● /var/opt/cmom/cmomd.log
This log provides information recorded by cmomd, the cluster object manager
daemon. You can run the cmreadlog command to view the information. The
command syntax is as follows:
cmreadlog /var/opt/cmom/cmomd.log
The cmomd log file includes information about programs that request data from
the object manager, such as data type and timestamp.
10.2.1 Overview
Veritas Cluster Server (VCS) can connect multiple independent systems to a
management framework to improve availability. Each system (or node) runs its
own operating system and collaborates at the software level to form a cluster. The
VCS combines common hardware with intelligent software to provide failover and
control for applications. If a node or a monitored application becomes faulty,
other nodes perform predefined operations to take over services and start these
services in other locations in the cluster.