HP X9720 Network Storage System
Abstract
This guide describes tasks related to cluster configuration and monitoring, system upgrade and recovery, hardware component replacement, and troubleshooting. It does not document X9000 file system features or standard Linux administrative tools and commands. For information about configuring and using X9000 Software file system features, see the HP X9000 File Serving Software File System User Guide. This guide is intended for system administrators and technicians who are experienced with installing and administering networks, and with performing Linux operating and administrative tasks. For the latest X9000 guides, browse to http://www.hp.com/support/manuals. In the storage section, select NAS Systems and then select HP X9000 Network Storage Systems from the IBRIX Storage Systems section.
Copyright 2009, 2011 Hewlett-Packard Development Company, L.P.
Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Acknowledgments
Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation. UNIX is a registered trademark of The Open Group.
Warranty
WARRANTY STATEMENT: To obtain a copy of the warranty for this product, see the warranty information website: http://www.hp.com/go/storagewarranty
Revision History
Earlier editions (August 2010, December 2010, March 2011, April 2011, September 2011): Initial release of the X9720 Network Storage System; added network management and Support ticket; added management console backup, migration to an agile management console configuration, software upgrade procedures, and system recovery procedures; revised upgrade procedure; added information about NDMP backups and configuring virtual interfaces, and updated cluster procedures; updated segment evacuation information; revised upgrade procedure.
Ninth edition, December 2011, software version 6.0: Added or updated information about the agile management console, Statistics tool, Ibrix Collect, event notification, capacity block installation, NTP servers, and upgrades; revised upgrade procedure.
Contents
1 Product description...................................................................................11
HP X9720 Network Storage System features...............................................................................11 System components.................................................................................................................11 HP X9000 Software features....................................................................................................11 High availability and redundancy.............................................................................................12
2 Getting started.........................................................................................13
Setting up the X9720 Network Storage System...........................................................................13 Installation steps................................................................................................................13 Additional configuration steps.............................................................................................13 Logging in to the X9720 Network Storage System.......................................................................14 Using the network..............................................................................................................14 Using the TFT keyboard/monitor..........................................................................................14 Using the serial link on the Onboard Administrator.................................................................14 Booting the system and individual server blades.........................................................................14 Management interfaces...........................................................................................................15 Using the GUI...................................................................................................................15 Customizing the GUI..........................................................................................................18 Adding user accounts for GUI access...................................................................................19 Using the CLI.....................................................................................................................19 Starting the array management software...............................................................................19 X9000 client interfaces.......................................................................................................19 X9000 Software manpages.....................................................................................................19 Changing passwords..............................................................................................................19 Configuring ports for a firewall.................................................................................................20 Configuring NTP servers..........................................................................................................21 Configuring HP Insight Remote Support on X9000 systems...........................................................21 Configuring X9000 systems for Insight Remote Support...........................................................22 Troubleshooting Insight Remote Support................................................................................24
4 Configuring failover..................................................................................28
Agile management consoles....................................................................................................28 Agile management console modes.......................................................................................28 Agile management consoles and failover..............................................................................28 Viewing information about management consoles..................................................................29 Cluster high availability...........................................................................................................29 Failover modes..................................................................................................................29 What happens during a failover..........................................................................................29 Setting up automated failover..............................................................................................30 Configuring standby pairs..............................................................................................30 Identifying power sources...............................................................................................30 Turning automated failover on and off..............................................................................31
Manually failing over a file serving node..............................................................................31 Failing back a file serving node...........................................................................................32 Using network interface monitoring......................................................................................32 Setting up HBA monitoring..................................................................................................34 Discovering HBAs..........................................................................................................35 Identifying standby-paired HBA ports...............................................................................35 Turning HBA monitoring on or off....................................................................................35 Deleting standby port pairings........................................................................................35 Deleting HBAs from the configuration database.................................................................35 Displaying HBA information............................................................................................35 Checking the High Availability configuration.........................................................................36
Monitoring cluster health.........................................................................................................53 Health checks....................................................................................................................53 Health check reports..........................................................................................................53 Viewing logs..........................................................................................................................55 Viewing and clearing the Integrated Management Log (IML).........................................................55 Viewing operating statistics for file serving nodes........................................................................55
13 Licensing...............................................................................................96
Viewing license terms..............................................................................................................96 Retrieving a license key...........................................................................................................96 Using AutoPass to retrieve and install permanent license keys........................................................96
15 Troubleshooting....................................................................................107
Collecting information for HP Support with Ibrix Collect.............................................................107 Collecting logs................................................................................................................107 Deleting the archive file....................................................................................................108 Downloading the archive file.............................................................................................108 Configuring Ibrix Collect...................................................................................................109 Viewing data collection information....................................................................................110 Viewing data collection configuration information................................................................110
Adding/deleting commands or logs in the XML file..............................................110 General troubleshooting steps................................................................................110 Escalating issues...................................................................................................110 Useful utilities and processes..................................................................................111 Accessing the Onboard Administrator (OA) through the network............................111 Access the OA Web-based administration interface.........................................111 Accessing the Onboard Administrator (OA) through the serial port.........................111 Accessing the Onboard Administrator (OA) via service port..................................112 Using hpacucli Array Configuration Utility (ACU)...............................................112 The exds_stdiag utility......................................................................................112 Syntax.......................................................................................................112 Network testing tools........................................................................................113 exds_netdiag..............................................................................................113 Sample output........................................................................................114 exds_netperf...............................................................................................114 POST error messages............................................................................................115 LUN layout..........................................................................................................115 X9720 monitoring................................................................................................115 Identifying failed I/O modules on an X9700cx chassis..............................................116 Failure indications............................................................................................116 Identifying the failed component........................................................................116 Re-seating an X9700c controller........................................................................120 Viewing software version numbers..........................................................................120 Troubleshooting specific issues................................................................................121 Software services.............................................................................................121 Failover..........................................................................................................121 Mode 1 or mode 6 bonding.............................................................................121 X9000 RPC call to host failed............................................................................122 Degrade server blade/Power PIC.......................................................................122
ibrix_fs -c failed with "Bad magic number in super-block"......................................122 LUN status is failed..........................................................................................123 Apparent failure of HP P700m...........................................................................123 X9700c enclosure front panel fault ID LED is amber..............................................124 Spare disk drive not illuminated green when in use..............................................124 Replacement disk drive LED is not illuminated green.............................................124 X9700cx GSI LED is amber...............................................................................124 X9700cx drive LEDs are amber after firmware is flashed.......................................125 Configuring the Virtual Connect domain..................................................................125 Synchronizing information on file serving nodes and the configuration database...........................126
Replacing the SAS switch in Bay 3 or 4..............................................................................131 Replacing the P700m mezzanine card................................................................................132 Replacing capacity block parts...............................................................................................132 Replacing capacity block hard disk drive............................................................................133 Replacing the X9700c controller........................................................................................133 Replacing the X9700c controller battery..............................................................................134 Replacing the X9700c power supply..................................................................................135 Replacing the X9700c fan.................................................................................................135 Replacing the X9700c chassis...........................................................................................135 Replacing the X9700cx I/O module ..................................................................................136 Replacing the X9700cx power supply.................................................................................137 Replacing the X9700cx fan...............................................................................................137 Replacing a SAS cable.....................................................................................................137
Danish recycling notice.....................................................................................182 Dutch recycling notice.......................................................................................183 Estonian recycling notice...................................................................................183 Finnish recycling notice.....................................................................................183 French recycling notice.....................................................................................183 German recycling notice...................................................................................183 Greek recycling notice......................................................................................183 Hungarian recycling notice...............................................................................184 Italian recycling notice......................................................................................184 Latvian recycling notice.....................................................................................184 Lithuanian recycling notice................................................................................184 Polish recycling notice.......................................................................................184 Portuguese recycling notice...............................................................................185 Romanian recycling notice................................................................................185 Slovak recycling notice.....................................................................................185 Spanish recycling notice...................................................................................185 Swedish recycling notice...................................................................................185 Recycling notices..................................................................................................186 English recycling notice....................................................................................186 Bulgarian recycling notice.................................................................................186 Czech recycling notice......................................................................................186 Danish recycling notice.....................................................................................187 Dutch recycling notice.......................................................................................187 Estonian recycling notice...................................................................................187 Finnish recycling notice.....................................................................................187 French recycling notice.....................................................................................188 German recycling notice...................................................................................188 Greek recycling notice......................................................................................188
Hungarian recycling notice...............................................................................189 Italian recycling notice......................................................................................189 Latvian recycling notice.....................................................................................189 Lithuanian recycling notice................................................................................190 Polish recycling notice.......................................................................................190 Portuguese recycling notice...............................................................................190 Romanian recycling notice................................................................................191 Slovak recycling notice.....................................................................................191 Spanish recycling notice...................................................................................191 Swedish recycling notice...................................................................................192 Battery replacement notices...................................................................................192 Dutch battery notice.........................................................................................192 French battery notice........................................................................................193 German battery notice......................................................................................193 Italian battery notice........................................................................................194 Japanese battery notice....................................................................................194 Spanish battery notice......................................................................................195
Glossary..................................................................................................196 Index.......................................................................................................198
1 Product description
HP X9720 Network Storage System is a scalable, network-attached storage (NAS) product. The system combines HP X9000 File Serving Software with HP server and storage hardware to create a cluster of file serving nodes.
IMPORTANT: Keep regular backups of the cluster configuration. See Backing up the management console configuration (page 44) for more information.
System components
The X9720 Network Storage System includes the following components:
X9720 Network Storage System Base Rack, including:
  Two management switches
  Keyboard, video, and mouse (KVM)
  A c-Class blade enclosure
  Two Flex-10 Virtual Connect modules
  Redundant SAS switch pair
Performance block, comprising a server blade and blade infrastructure
Capacity block (array), minimum of one, comprising:
  X9700c (array controller chassis and 12 disk drives)
  X9700cx (disk enclosure with 70 disk drives)
IMPORTANT: All software that is included with the X9720 Network Storage System is for the sole purpose of operating the system. Do not add, remove, or change any software unless instructed to do so by HP-authorized personnel. For more information about system components and cabling, see Component and cabling diagrams (page 155).
HP X9000 Software features
X9000 Software is a scale-out file serving solution that provides high availability through failover of multiple components, and a centralized management interface. X9000 Software can scale to thousands of nodes. Based on a Segmented File System architecture, X9000 Software integrates I/O and storage systems into a single clustered environment that can be shared across multiple applications and managed from a single central management console. X9000 Software is designed to operate with high-performance computing applications that require high I/O bandwidth, high IOPS throughput, and scalable configurations. Some of the key features and benefits are as follows:
Scalable configuration. You can add servers to scale performance and add storage devices to scale capacity.
Single namespace. All directories and files are contained in the same namespace.
Multiple environments. Operates in both the SAN and DAS environments.
High availability. The high-availability software protects servers.
Tuning capability. The system can be tuned for large or small-block I/O.
Flexible configuration. Segments can be migrated dynamically for rebalancing and data tiering.
2 Getting started
This chapter describes how to log into the system, how to boot the system and individual server blades, how to change passwords, and how to back up the management console configuration. It also describes the management interfaces provided with X9000 Software. IMPORTANT: Follow these guidelines when using your system:
Do not modify any parameters of the operating system or kernel, or update any part of the X9720 Network Storage System unless instructed to do so by HP; otherwise, the system could fail to operate properly. File serving nodes are tuned for file serving operations. With the exception of supported backup programs, do not run other applications directly on the nodes.
Installation steps
1. Remove the product from the shipping cartons, which you have placed in the location where the product will be installed. Confirm the contents of each carton against the list of included items, check for any physical damage to the exterior of the product, and connect the product to the power and network that you provide.
2. Review your server, network, and storage environment relevant to the HP Enterprise NAS product implementation to validate that prerequisites have been met.
3. Validate that your file system performance, availability, and manageability requirements have not changed since the service planning phase. Finalize the HP Enterprise NAS product implementation plan and software configuration.
4. Implement the documented and agreed-upon configuration based on the information you provided on the pre-delivery checklist.
5. Document configuration details.
These cluster features are described later in this guide.
File systems. Set up the following features as needed:
Additional file systems. Optionally, configure data tiering on the file systems to move files to specific tiers based on file attributes.
NFS, CIFS, FTP, or HTTP. Configure the methods you will use to access file system data.
Quotas. Configure user, group, and directory tree quotas as needed.
Remote replication. Use this feature to replicate changes in a source file system on one cluster to a target file system on either the same cluster or a second cluster.
Data retention and validation. Use this feature to manage WORM and retained files.
X9000 software snapshots. This feature is included in the X9000 software and can be used to take scheduled or on-demand software snapshots of a file system.
Block snapshots. This feature uses the array snapshot facility to take scheduled or on-demand snapshots of a file system.
File allocation. Use this feature to specify the manner in which segments are selected for storing new files and directories.
For more information about these file system features, see the HP X9000 File Serving Software File System User Guide.
3. To power on the remaining server blades, run the following command:
ibrix_server -P on -h <hostname>
NOTE: Alternatively, press the power button on all of the remaining servers. There is no need to wait for the first server blade to boot.
Management interfaces
Cluster operations are managed through the X9000 Software management console, which provides both a GUI and a CLI. Most operations can be performed from either the GUI or the CLI. The following operations can be performed only from the CLI:
SNMP configuration (ibrix_snmpagent, ibrix_snmpgroup, ibrix_snmptrap, ibrix_snmpuser, ibrix_snmpview)
Health checks (ibrix_haconfig, ibrix_health, ibrix_healthconfig)
Raw storage management (ibrix_pv, ibrix_vg, ibrix_lv)
Management console operations (ibrix_fm) and management console tuning (ibrix_fm_tune)
File system checks (ibrix_fsck)
Kernel profiling (ibrix_profile)
NFS autoconnection (ibrix_autoconnect)
Cluster configuration (ibrix_clusterconfig)
Configuration database consistency (ibrix_dbck)
Shell task management (ibrix_shell)
Scheduling recurring data validation scans
Scheduling recurring software snapshots
Scheduling recurring block snapshots
The GUI dashboard opens in the same browser window. You can open multiple GUI windows as necessary. See the online help for information about all GUI displays and operations.
The GUI dashboard enables you to monitor the entire cluster. There are three parts to the dashboard: System Status, Cluster Overview, and the Navigator.
System Status
The System Status section lists the number of cluster events that have occurred in the last 24 hours. There are three types of events:
Alerts. Disruptive events that can result in loss of access to file system data. Examples are a segment that is unavailable or a server that cannot be accessed.
Warnings. Potentially disruptive conditions where file system access is not lost, but if the situation is not addressed, it can escalate to an alert condition. Examples are a very high server CPU utilization level or a quota limit close to the maximum.
Information. Normal events that change the cluster. Examples are mounting a file system or creating a segment.
Cluster Overview
The Cluster Overview provides the following information:
Capacity. The amount of cluster storage space that is currently free or in use.
Filesystems. The current health status of the file systems in the cluster. The overview reports the number of file systems in each state (healthy, experiencing a warning, experiencing an alert, or unknown).
Segment Servers. The current health status of the file serving nodes in the cluster. The overview reports the number of nodes in each state (healthy, experiencing a warning, experiencing an alert, or unknown).
Services. Whether the specified file system services are currently running:
Statistics. Historical performance graphs for the following items:
Network I/O (MB/s)
Disk I/O (MB/s)
CPU usage (%)
Memory usage (%)
On each graph, the X-axis represents time and the Y-axis represents performance. Use the Statistics menu to select the servers to monitor (up to two), to change the maximum value for the Y-axis, and to show or hide resource usage distribution for CPU and memory. Recent Events The most recent cluster events. Use the Recent Events menu to select the type of events to display. You can also access certain menu items directly from the Cluster Overview. Mouse over the Capacity, Filesystems or Segment Server indicators to see the available options.
Navigator
The Navigator appears on the left side of the window and displays the cluster hierarchy. You can use the Navigator to drill down in the cluster configuration to add, view, or change cluster objects such as file systems or storage, and to initiate or view tasks such as snapshots or replication. When you select an object, a details page shows a summary for that object. The lower Navigator allows you to view details for the selected object, or to initiate a task. In the following example, we selected Filesystems in the upper Navigator and Mountpoints in the lower Navigator to see details about the mounts for file system ifs1.
NOTE: When you perform an operation on the GUI, a spinning finger is displayed until the operation is complete. However, if you use Windows Remote Desktop to access the management console, the spinning finger is not displayed.
You can add other users to these groups as needed, using Linux procedures.
Changing passwords
You may want to change the passwords on your system:
Hardware passwords. See the documentation for the specific hardware for more information.
Root password. Use the passwd(8) command on each server in turn.
X9000 Software user password. This password is created during installation and is used to log on to the management console GUI. The default is ibrix. You can change the password on the management console using the Linux passwd command. You will be prompted to enter the new password.
# passwd ibrix
Port
9000:9200/udp
20/tcp, 20/udp
21/tcp, 21/udp
7777/tcp
8080/tcp
5555/tcp, 5555/udp
631/tcp, 631/udp
Description
Between X9000 management console GUI and clients that need to access the GUI
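The guide does not prescribe a particular firewall tool. As an illustrative sketch only, the following iptables rules would open two of the listed ports on a Linux firewall; the choice of ports and the use of iptables are assumptions for this example, not requirements from this guide.
# Allow the TCP port used for GUI-related traffic (illustrative)
iptables -A INPUT -p tcp --dport 8080 -j ACCEPT
# Allow the UDP port range used by the cluster (illustrative)
iptables -A INPUT -p udp --dport 9000:9200 -j ACCEPT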
For product descriptions and information about downloading the software, see the HP Insight Remote Support Software web page: http://www.hp.com/go/insightremotesupport For information about HP SIM, see the following page: http://www.hp.com/products/systeminsightmanager For IRSA documentation, see the following page:
Limitations
Note the following:
For X9000 systems, the HP Insight Remote Support implementation is limited to hardware events.
The MDS600 storage device on X9720 systems is not supported for HP Insight Remote Support.
Some manual configurations require that the X9320 and X9300 nodes be recognized as an X9000 solution. These configurations are described under Configure Entitlements (page 23).
To enter more than one SNMP Manager IP, copy the following lines:
rwcommunity public <Manager IP> rocommunity public <Manager IP> trapsink <Manager IP> public
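For example, to register two SNMP managers, repeat the three lines once per manager; the IP addresses below are placeholders:
rwcommunity public 192.0.2.11
rocommunity public 192.0.2.11
trapsink 192.0.2.11 public
rwcommunity public 192.0.2.12
rocommunity public 192.0.2.12
trapsink 192.0.2.12 public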
After updating the snmpd.conf file, restart the snmpd service: # service snmpd restart For more information about the /sbin/hpsnmpconfig script, see SNMP Configuration in the hp-snmp-agents(4) man page. For information about the HP System Management Homepage, see: http://h18013.www1.hp.com/products/servers/management/agents/index.html
IMPORTANT: The /opt/hp/hp-snmp-agents/cma.conf file controls certain actions of the SNMP agents. You can add a trapIf entry to the file to configure the IP address used by the SNMP daemon when sending traps. For example, to send traps using the IP address of the eth1 interface, add the following:
trapIf eth1
For more information about the cma.conf file, see Section 3.1 of the Managing ProLiant servers with Linux HOWTO.
SNMP Agent:
# service snmpd restart
HP SNMP Agents:
# service hp-snmp-agents start
To ensure that hp-snmp-agents restarts when the system is rebooted, enter the following command:
# chkconfig hp-snmp-agents on
Configure Entitlements
Configure the CMS software to enable remote support for X9000 systems.
NOTE: If you are using IRSS, see "Using the HP Insight Remote Support Configuration Wizard and Editing Managed Systems to Complete Configuration" in the HP Insight Remote Support Standard A.05.50 Hosting Device Configuration Guide. If you are using IRSA, see "Using the Remote Support Setting Tab to Update Your Client and CMS Information and Adding Individual Managed Systems" in the HP Insight Remote Support Advanced A.05.50 Operations Guide.
Custom field settings for X9720 servers
Servers are discovered with their IP addresses. When a server is discovered, edit the system properties in HP Systems Insight Manager. Locate the Entitlement Information section on the Contract and Warranty Information page and update the following:
Enter X9000 as the Custom Delivery ID
Select the System Country Code
Enter the appropriate Customer Contact and Site Information details
Custom field settings for X9720 Onboard Administrator
The Onboard Administrator (OA) is discovered with OA IP addresses. When the OA is discovered, edit the system properties in HP Systems Insight Manager. Locate the Entitlement Information section of the Contract and Warranty Information page and update the following:
Enter the X9000 enclosure product number as the Customer-Entered product number
Enter X9000 as the Custom Delivery ID
Select the System Country Code
Enter the appropriate Customer Contact and Site Information details
For example, if the CMS IP address is 10.2.2.2 and the X9000 node is 10.2.2.10, enter the following:
snmptrap -v1 -c public 10.2.2.2 .1.3.6.1.4.1.232 10.2.2.10 6 11003 1234 .1.3.6.1.2.1.1.5.0 s test .1.3.6.1.4.1.232.11.2.11.1.0 i 0 .1.3.6.1.4.1.232.11.2.8.1.0 s "X9000 remote support testing"
For IRSS, replace the CMS IP address with the IP address of the IRSS server.
If nodes are configured and the system is discovered properly but alerts are not reaching the CMS, verify that a trapIf entry exists in the cma.conf configuration file on the file serving nodes. See Configure SNMP on the file serving nodes (page 22) for more information.
Example configuration
This example uses two nodes, ib50-81 and ib50-82. These nodes are backups for each other, forming a backup pair.
[root@ib50-80 ~]# ibrix_server -l
Segment Servers
===============
SERVER_NAME BACKUP  STATE
----------- ------- -----
ib50-81     ib50-82 Up
ib50-82     ib50-81 Up
All VIFs on ib50-81 have backup (standby) VIFs on ib50-82. Similarly, all VIFs on ib50-82 have backup (standby) VIFs on ib50-81. NFS, CIFS, FTP, and HTTP clients can connect to bond1:1 on either host. If necessary, the selected server will fail over to bond1:2 on the opposite host. X9000 clients could connect to bond1 on either host, as these clients do not support or require NIC failover. (The following sample output shows only the relevant fields.)
[root@ib50-80 ~]# ibrix_nic -l
HOST    IFNAME  TYPE    STATE              IP_ADDRESS     BACKUP_HOST  BACKUP_IF
------- ------- ------- ------------------ -------------  -----------  ---------
ib50-81 bond1:1 User    Up, LinkUp         16.226.50.220  ib50-82      bond1:1
ib50-81 bond0   Cluster Up, LinkUp         172.16.0.81
ib50-81 bond1:2 User    Inactive, Standby
ib50-81 bond1   User    Up, LinkUp         16.226.50.81
ib50-82 bond0   Cluster Up, LinkUp         172.16.0.82
ib50-82 bond1   User    Up, LinkUp         16.226.50.82
ib50-82 bond1:2 User    Inactive, Standby
ib50-82 bond1:1 User    Up, LinkUp         16.226.50.228
HTTP. When you create a virtual host on the Create Vhost dialog box or with the ibrix_httpvhost command, specify the VIF as the IP address that clients should use to access shares associated with the Vhost.
X9000 clients. Use the following command to prefer the appropriate user network. Execute the command once for each destination host that the client should contact using the specified interface.
ibrix_client -n -h SRCHOST -A DESTHOST/IFNAME
For example:
ibrix_client -n -h client12.mycompany.com -A ib50-81.mycompany.com/bond1
NOTE: Because the backup NIC cannot be used as a preferred network interface for X9000 clients, add one or more user network interfaces to ensure that HA and client communication work together.
4 Configuring failover
This chapter describes how to configure failover for agile management consoles, file serving nodes, network interfaces, and HBAs.
Failover modes
High Availability has two failover modes: the default manual failover and the optional automated failover. A manual failover uses the ibrix_server command or the management console GUI to fail over a file serving node to its standby. The server can be powered down or remain up during the procedure. Manual failover also includes failover of any network interfaces having defined standbys. You can perform a manual failover at any time, regardless of whether automated failover is in effect. Automated failover allows the management console to initiate failover when it detects that standby-protected components have failed. A basic automated failover setup protects all file serving nodes. A comprehensive setup also includes network interface monitoring to protect user network interfaces and HBA monitoring to protect access from file serving nodes to storage via an HBA. When automated failover is enabled, the management console listens for heartbeat messages that the file serving nodes broadcast at one-minute intervals. The management console automatically initiates failover when it fails to receive five consecutive heartbeats or, if HBA monitoring is enabled, when a heartbeat message indicates that a monitored HBA or pair of HBAs has failed. If network interface monitoring is enabled, automated failover occurs when the management console receives a heartbeat message indicating that a monitored network might be down and then the console cannot reach that interface. If a file serving node fails over, you will need to manually fail back the node.
1. The management console verifies that the standby is powered on and accessible.
2. The management console migrates ownership of the node's segments to the standby and notifies all file serving nodes and X9000 clients about the migration. This is a persistent change.
3. If network interface monitoring has been set up, the management console activates the standby user network interface and transfers the IP address of the node's user network interface to it.
To determine the progress of a failover, view the Status tab on the GUI or execute the ibrix_server -l command. While the management console is migrating segment ownership, the operational status of the node is Up-InFailover or Down-InFailover, depending on whether the node was powered up or down when failover was initiated. When failover is complete, the operational status changes to Up-FailedOver or Down-FailedOver. For more information about operational states, see Monitoring the status of file serving nodes (page 51). Both automated and manual failovers trigger an event that is reported on the GUI.
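For example, after ib50-81 (from the example configuration earlier in this guide) has failed over to its backup, the relevant fields of ibrix_server -l might look similar to the following; this output is illustrative only:
SERVER_NAME BACKUP  STATE
----------- ------- -------------
ib50-81     ib50-82 Up-FailedOver
ib50-82     ib50-81 Up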
If your cluster includes one or more user network interfaces carrying NFS/CIFS client traffic, HP recommends that you identify standby network interfaces and set up network interface monitoring. If your file serving nodes are connected to storage via HBAs, HP recommends that you set up HBA monitoring.
To identify a standby for a file serving node, use the following command:
<installdirectory>/bin/ibrix_server -b -h HOSTNAME1,HOSTNAME2
For example, to identify node s2.hp.com as the standby for node s1.hp.com:
<installdirectory>/bin/ibrix_server -b -h s1.hp.com,s2.hp.com
Preliminary configuration
The following configuration steps are required when setting up integrated power sources:
If you plan to implement automated failover, ensure that the management console has LAN access to the power sources.
Install the environment and any drivers and utilities, as specified by the vendor documentation.
If you plan to protect access to the power sources, set up the UID and password to be used.
Identifying power sources All power sources must be identified to the configuration database before they can be used. To identify an integrated power source, use the following command:
<installdirectory>/bin/ibrix_powersrc -a -t {ipmi|openipmi|openipmi2|ilo} -h HOSTNAME -I IPADDR -u USERNAME -p PASSWORD
For example, to identify an iLO power source at IP address 192.168.3.170 for node ss01:
<installdirectory>/bin/ibrix_powersrc -a -t ilo -h ss01 -I 192.168.3.170 -u Administrator -p password
Updating the configuration database with power source changes If you change IP address or password for a power source, you must update the configuration database with the changes. To do this, use the following command. The user name and password options are needed only for remotely managed power sources. Include the -s option to have the management console skip BMC.
<installdirectory>/bin/ibrix_powersrc -m [-I IPADDR] [-u USERNAME] [-p PASSWORD] [-s] -h POWERSRCLIST
The following command changes the IP address for power source ps1:
<installdirectory>/bin/ibrix_powersrc -m -I 192.168.3.153 -h ps1
Dissociating a file serving node from a power source You can dissociate a file serving node from an integrated power source by dissociating it from slot 1 (its default association) on the power source. Use the following command:
<installdirectory>/bin/ibrix_hostpower -d -s POWERSOURCE -h HOSTNAME
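For example, to dissociate node ss01 from power source ps1 (node and power source names are reused from the earlier examples for illustration):
<installdirectory>/bin/ibrix_hostpower -d -s ps1 -h ss01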
Deleting power sources from the configuration database To conserve storage, delete power sources that are no longer in use from the configuration database. If you are deleting multiple power sources, use commas to separate them.
<installdirectory>/bin/ibrix_powersrc -d -h POWERSRCLIST
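For example, to delete power sources ps1 and ps2 (ps2 is a hypothetical name used only to show the comma-separated list):
<installdirectory>/bin/ibrix_powersrc -d -h ps1,ps2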
To turn automated failover on or off for a single file serving node, include the -h SERVERNAME option.
Manual failover does not require the use of programmable power supplies. However, if you have installed and identified power supplies for file serving nodes, you can power down a server before manually failing it over. You can fail over a file serving node manually, even when automated failover is turned on. A file serving node can be failed over from the GUI or the CLI. On the CLI, complete the following steps: 1. Run ibrix_server -f, specifying the node to be failed over in the HOSTNAME option. If appropriate, include the -p option to power down the node before segments are migrated:
<installdirectory>/bin/ibrix_server -f [-p] -h HOSTNAME
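For example, to power down node s1.hp.com and fail it over to its standby (hostname reused from earlier examples):
<installdirectory>/bin/ibrix_server -f -p -h s1.hp.com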
2. Check the progress of the failover by viewing the STATE field in the output of ibrix_server -l. The contents of the STATE field indicate the status of the failover. If the field persistently shows Down-InFailover or Up-InFailover, the failover did not complete; contact HP Support for assistance. For information about the values that can appear in the STATE field, see What happens during a failover (page 29).
After failing back the node, determine whether the failback completed fully. If the failback is not complete, contact HP Support for assistance. NOTE: A failback might not succeed if the time period between the failover and the failback is too short, and the primary server has not fully recovered. HP recommends ensuring that both servers are up and running and then waiting 60 seconds before starting the failback. Use the ibrix_server -l command to verify that the primary server is up and running. The status should be Up-FailedOver before performing the failback.
problems if the cluster interface fails.) There is no difference in the way that monitoring is set up for the cluster interface and a user network interface. In both cases, you set up file serving nodes to monitor each other over the interface.
Sample scenario
The following diagram illustrates a monitoring and failover scenario in which a 1:1 standby relationship is configured. Each standby pair is also a network interface monitoring pair. When SS1 loses its connection to the user network interface (eth1), as shown by the red X, SS2 can no longer contact SS1 (A). SS2 notifies the management console, which then tests its own connection with SS1 over eth1 (B). The management console cannot contact SS1 on eth1, and initiates failover of SS1's segments (C) and user network interface (D).
Identifying standbys
To protect a network interface, you must identify a standby for it on each file serving node that connects to the interface. The following restrictions apply when identifying a standby network interface:
The standby network interface must be unconfigured and connected to the same switch (network) as the primary interface.
The file serving node that supports the standby network interface must have access to the file system that the clients on that interface will mount.
Virtual interfaces are highly recommended for handling user network interface failovers. If a VIF user network interface is teamed/bonded, failover occurs only if all teamed network interfaces fail. Otherwise, traffic switches to the surviving teamed network interfaces. To identify standbys for a network interface, execute the following command once for each file serving node. IFNAME1 is the network interface that you want to protect and IFNAME2 is the standby interface.
<installdirectory>/bin/ibrix_nic -b -H HOSTNAME1/IFNAME1,HOSTNAME2/IFNAME2
The following command identifies virtual interface eth2:2 on file serving node s2.hp.com as the standby interface for interface eth2 on file serving node s1.hp.com:
<installdirectory>/bin/ibrix_nic -b -H s1.hp.com/eth2,s2.hp.com/eth2:2
Setting up a monitor
File serving node failover pairs can be identified as network interface monitors for each other. Because the monitoring must be declared in both directions, this is a two-pass process for each failover pair. To set up a network interface monitor, use the following command:
<installdirectory>/bin/ibrix_nic -m -h MONHOST -A DESTHOST/IFNAME
For example, to set up file serving node s2.hp.com to monitor file serving node s1.hp.com over user network interface eth1:
<installdirectory>/bin/ibrix_nic -m -h s2.hp.com -A s1.hp.com/eth1
Deleting standbys
To delete a standby for a network interface, use the following command:
<installdirectory>/bin/ibrix_nic -b -U HOSTNAME1/IFNAME1
For example, to delete the standby that was assigned to interface eth2 on file serving node s1.hp.com:
<installdirectory>/bin/ibrix_nic -b -U s1.hp.com/eth2
When both HBA monitoring and automated failover for file serving nodes are set up, the management console will fail over a server in two situations:
Both ports in a monitored set of standby-paired ports fail. Because, during the HBA monitoring setup, all standby pairs were identified in the configuration database, the management console knows that failover is required only when both ports fail.
A monitored single-port HBA fails. Because no standby has been identified for the failed port, the management console knows to initiate failover immediately.
Discovering HBAs
You must discover HBAs before you set up HBA monitoring, when you replace an HBA, and when you add a new HBA to the cluster. Discovery informs the configuration database of only a port's WWPN. You must identify ports that are teamed as standby pairs. Use the following command:
<installdirectory>/bin/ibrix_hba -a [-h HOSTLIST]
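For example, assuming that HOSTLIST takes a comma-separated list of hostnames as the other list options in this guide do, the following command discovers HBAs on nodes s1.hp.com and s2.hp.com:
<installdirectory>/bin/ibrix_hba -a -h s1.hp.com,s2.hp.com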
Use the following command to identify two HBA ports as a standby pair:
<installdirectory>/bin/ibrix_hba -b -P WWPN1:WWPN2 -h HOSTNAME
Enter the WWPN as decimal-delimited pairs of hexadecimal digits. The following command identifies port 20.00.12.34.56.78.9a.bc as the standby for port 42.00.12.34.56.78.9a.bc for the HBA on file serving node s1.hp.com:
<installdirectory>/bin/ibrix_hba -b -P 20.00.12.34.56.78.9a.bc:42.00.12.34.56.78.9a.bc -h s1.hp.com
For example, to turn on HBA monitoring for port 20.00.12.34.56.78.9a.bc on node s1.hp.com:
<installdirectory>/bin/ibrix_hba -m -h s1.hp.com -p 20.00.12.34.56.78.9a.bc
To turn off HBA monitoring for an HBA port, include the -U option:
<installdirectory>/bin/ibrix_hba -m -U -h HOSTNAME -p PORT
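For example, to turn off monitoring for the port used in the previous example on node s1.hp.com:
<installdirectory>/bin/ibrix_hba -m -U -h s1.hp.com -p 20.00.12.34.56.78.9a.bc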
For example, to delete the pairing of ports 20.00.12.34.56.78.9a.bc and 42.00.12.34.56.78.9a.bc on node s1.hp.com:
<installdirectory>/bin/ibrix_hba -b -U -P 20.00.12.34.56.78.9a.bc:42.00.12.34.56.78.9a.bc -h s1.hp.com
For each High Availability feature, the summary report returns one of the following results for each tested file serving node and optionally for their standbys:
Passed. The feature has been configured.
Warning. The feature has not been configured, but the significance of the finding is not clear. For example, the absence of discovered HBAs can indicate either that the HBA monitoring feature was not configured or that HBAs are not physically present on the tested servers.
Failed. The feature has not been configured.
The detailed report includes an overall result status for all tested file serving nodes and describes details about the checks performed on each High Availability feature. By default, the report includes details only about checks that received a Failed or a Warning result. You can expand the report to include details about checks that received a Passed result.
For example, to view a summary report for file serving nodes xs01.hp.com and xs02.hp.com:
<installdirectory>/bin/ibrix_haconfig -l -h xs01.hp.com,xs02.hp.com
Host         HA Configuration  Power Sources  Backup Servers  Auto Failover  Nics Monitored  Standby Nics  HBAs Monitored
xs01.hp.com  FAILED            PASSED         PASSED          PASSED         FAILED          PASSED        FAILED
xs02.hp.com  FAILED            PASSED         FAILED          FAILED         FAILED          WARNED        WARNED
The -h HOSTLIST option lists the nodes to check. To also check standbys, include the -b option. To view results only for file serving nodes that failed a check, include the -f argument. The -s option expands the report to include information about the file system and its segments. The -v option produces detailed information about configuration checks that received a Passed result. For example, to view a detailed report for file serving node xs01.hp.com:
<installdirectory>/bin/ibrix_haconfig -i -h xs01.hp.com
--------------- Overall HA Configuration Checker Results ---------------
FAILED
--------------- Overall Host Results ---------------
Host         HA Configuration  Power Sources  Backup Servers  Auto Failover  Nics Monitored  Standby Nics  HBAs Monitored
xs01.hp.com  FAILED            PASSED         PASSED          PASSED         FAILED          PASSED        FAILED
--------------- Server xs01.hp.com FAILED Report ---------------
Check Description                                  Result  Result Information
================================================   ======  ==================
Power source(s) configured                         PASSED
Backup server or backups for segments configured   PASSED
Automatic server failover configured               PASSED
Cluster & User Nics monitored
  Cluster nic xs01.hp.com/eth1 monitored           FAILED  Not monitored
User nics configured with a standby nic            PASSED
HBA ports monitored
  Hba port 21.01.00.e0.8b.2a.0d.6d monitored       FAILED
  Hba port 21.00.00.e0.8b.0a.0d.6d monitored       FAILED
Warnings. Potentially disruptive conditions where file system access is not lost, but if the situation is not addressed, it can escalate to an alert condition.
Information. Normal events that change the cluster.
The following chart includes examples of important events that can trigger notification by email or SNMP traps.
Event Type  Trigger Point                                                       Name
ALERT       User fails to log into GUI                                          login.failure
            File system is unmounted                                            filesystem.unmounted
            File serving node is down/restarted                                 server.status.down
            File serving node terminated unexpectedly                           server.unreachable
WARN                                                                            segment.migrated
INFO        User successfully logs in to GUI
            File system is created
            File serving node is deleted
            NIC is added using GUI
            NIC is removed using GUI
            Physical storage is discovered and added using management console
The notification threshold for Alert events is 90% of capacity. Threshold-triggered notifications are sent when a monitored system resource exceeds the threshold and are reset when the resource utilization dips 10% below the threshold. For example, a notification is sent the first time usage reaches 90% or more. The next notice is sent only if the usage declines to 80% or less (event is reset), and subsequently rises again to 90% or above. To associate all types of events with recipients, omit the -e argument in the following command. Use the ALERT, WARN, and INFO keywords to make specific type associations or use EVENTLIST to associate specific events.
<installdirectory>/bin/ibrix_event -c [-e ALERT|WARN|INFO|EVENTLIST] -m EMAILLIST
The next command associates all Alert events and two Info events to [email protected]:
<installdirectory>/bin/ibrix_event -c -e ALERT,server.registered,filesystem.space.full -m [email protected]
The following command configures email settings to use the mail.hp.com SMTP server and to turn on notifications:
<installdirectory>/bin/ibrix_event -m on -s mail.hp.com -f [email protected] -r [email protected] -t Cluster1 Notification
To turn off the server.registered and filesystem.created notifications for [email protected] and [email protected]:
<installdirectory>/bin/ibrix_event -d -e server.registered,filesystem.created -m [email protected],[email protected]
X9000 Software implements an SNMP agent on the management console that supports the private X9000 Software MIB. The agent can be polled and can send SNMP traps to configured trapsinks. Setting up SNMP notifications is similar to setting up email notifications. You must associate events to trapsinks and configure SNMP settings for each trapsink to enable the agent to send a trap when an event occurs.
Some SNMP parameters and the SNMP default port are the same, regardless of SNMP version. The agent port is 5061 by default. SYSCONTACT, SYSNAME, and SYSLOCATION are optional MIB-II agent parameters that have no default values. The -c and -s options are also common to all SNMP versions. The -c option turns the encryption of community names and passwords on or off. There is no encryption by default. Using the -s option toggles the agent on and off; it turns the agent on by starting a listener on the SNMP port, and turns it off by shutting off the listener. The default is off. The format for a v1 or v2 update command follows:
ibrix_snmpagent -u -v {1|2} [-p PORT] [-r READCOMMUNITY] [-w WRITECOMMUNITY] [-t SYSCONTACT] [-n SYSNAME] [-o SYSLOCATION] [-c {yes|no}] [-s {on|off}]
The update command for SNMPv1 and v2 uses optional community names. By convention, the default READCOMMUNITY name used for read-only access and assigned to the agent is public. No default WRITECOMMUNITY name is set for read-write access (although the name private is often used). The following command updates a v2 agent with the write community name private, the agent's system name, and that system's physical location:
ibrix_snmpagent -u -v 2 -w private -n agenthost.domain.com -o DevLab-B3-U6
The SNMPv3 format adds an optional engine id that overrides the default value of the agent's host name. The format also provides the -y and -z options, which determine whether a v3 agent can process v1/v2 read and write requests from the management station. The format is:
ibrix_snmpagent -u -v 3 [-e engineId] [-p PORT] [-r READCOMMUNITY] [-w WRITECOMMUNITY] [-t SYSCONTACT] [-n SYSNAME] [-o SYSLOCATION] [-y {yes|no}] [-z {yes|no}] [-c {yes|no}] [-s {on|off}]
If a port is not specified, the command defaults to port 162. If a community is not specified, the command defaults to the community name public. The -s option toggles agent trap transmission on and off. The default is on. For example, to create a v2 trapsink with a new community name, enter:
ibrix_snmptrap -c -h lab13-116 -v 2 -m private
For a v3 trapsink, additional options define security settings. USERNAME is a v3 user defined on the trapsink host and is required. The security level associated with the trap message depends on which passwords are specified: the authentication password, both the authentication and privacy passwords, or no passwords. The CONTEXT_NAME is required if the trap receiver has defined subsets of managed objects. The format is:
ibrix_snmptrap -c -h HOSTNAME -v 3 [-p PORT] -n USERNAME [-j {MD5|SHA}] [-k AUTHORIZATION_PASSWORD] [-y {DES|AES}] [-z PRIVACY_PASSWORD] [-x CONTEXT_NAME] [-s {on|off}]
The following command creates a v3 trapsink with a named user and specifies the passwords to be applied to the default algorithms. If specified, passwords must contain at least eight characters.
ibrix_snmptrap -c -h lab13-114 -v 3 -n trapsender -k auth-passwd -z priv-passwd
For example, to associate all Alert events and two Info events with a trapsink at IP address 192.168.2.32, enter:
<installdirectory>/bin/ibrix_event -c -y SNMP -e ALERT,server.registered,filesystem.created -m 192.168.2.32
Defining views
A MIB view is a collection of paired OID subtrees and associated bitmasks that identify which sub-identifiers are significant to the view's definition. Using the bitmasks, individual OID subtrees can be included in or excluded from the view. An instance of a managed object belongs to a view if:
The OID of the instance has at least as many sub-identifiers as the OID subtree in the view.
Each sub-identifier in the instance and the subtree match when the bitmask of the corresponding sub-identifier is nonzero.
The management console automatically creates the excludeAll view that blocks access to all OIDs. This view cannot be deleted; it is the default read and write view if one is not specified for a group with the ibrix_snmpgroup command. The catch-all OID and mask are:
OID = .1 Mask = .1
Consider these examples:
OID = .1.3.6.1.4.1.18997 Mask = .1.1.1.1.1.1.1
OID = .1.3.6.1.2.1 Mask = .1.1.0.1.0.1
Instance .1.3.6.1.2.1.1 matches, instance .1.3.6.1.4.1 matches, and instance .1.2.6.1.2.1 does not match.
To add a pairing of an OID subtree value and a mask value to a new or existing view, use the following format:
ibrix_snmpview -a -v VIEWNAME [-t {include|exclude}] -o OID_SUBTREE [-m MASK_BITS]
The subtree is added to the named view. For example, to add the X9000 Software private MIB to the view named hp, enter:
ibrix_snmpview -a -v hp -o .1.3.6.1.4.1.18997 -m .1.1.1.1.1.1.1
For example, to create the group group2 to require authorization, no encryption, and read access to the hp view, enter:
ibrix_snmpgroup -c -g group2 -s authNoPriv -r hp
The format to create a user and add that user to a group follows:
ibrix_snmpuser -c -n USERNAME -g GROUPNAME [-j {MD5|SHA}] [-k AUTHORIZATION_PASSWORD] [-y {DES|AES}] [-z PRIVACY_PASSWORD]
Authentication and privacy settings are optional. An authentication password is required if the group has a security level of either authNoPriv or authPriv. The privacy password is required if the group has a security level of authPriv. If unspecified, MD5 is used as the authentication algorithm and DES as the privacy algorithm, with no passwords assigned. For example, to create user3, add that user to group2, and specify an authorization password with no encryption, enter:
ibrix_snmpuser -c -n user3 -g group2 -k auth-passwd -s authNoPriv
There are two restrictions on SNMP object deletions: A view cannot be deleted if it is referenced by a group. A group cannot be deleted if it is referenced by a user.
This command lists the defined group settings for all SNMP groups. Specifying an optional group name lists the defined settings for that group only.
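The listing command itself is not shown here. Assuming it follows the same -l listing convention used by the other ibrix_* commands in this guide (verify the exact syntax in the HP X9000 File Serving Software CLI Reference Guide), it would look like the following sketch:
ibrix_snmpgroup -l [-g GROUPNAME]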
Each file serving node functions as an NDMP Server and runs the NDMP Server daemon (ndmpd) process. When you start a backup or restore operation on the DMA, you can specify the node and tape device to be used for the operation. Following are considerations for configuring and using the NDMP feature:
When configuring your system for NDMP operations, attach your tape devices to a SAN and then verify that the file serving nodes to be used for backup/restore operations can see the appropriate devices.
When performing backup operations, take snapshots of your file systems and then back up the snapshots.
To configure NDMP parameters from the CLI, use the following command:
ibrix_ndmpconfig -c [-d IP1,IP2,IP3,...] [-m MINPORT] [-x MAXPORT] [-n LISTENPORT] [-u USERNAME] [-p PASSWORD] [-e {0=disable,1=enable}] [-v {0-10}] [-w BYTES] [-z NUMSESSIONS]
To cancel a session, select that session and click Cancel Session. Canceling a session kills all spawned session processes and frees their resources if necessary.
To see similar information for completed sessions, select NDMP Backup > Session History. To view active sessions from the CLI, use the following command:
ibrix_ndmpsession -l
To view completed sessions, use the following command. The -t option restricts the history to sessions occurring on or before the specified date.
ibrix_ndmpsession -l -s [-t YYYY-MM-DD]
To cancel sessions on a specific file serving node, use the following command:
ibrix_ndmpsession -c SESSION1,SESSION2,SESSION3,... -h HOST
If you add a tape or media changer device to the SAN, click Rescan Device to update the list. If you remove a device and want to delete it from the list, you will need to reboot all of the servers to which the device is attached. To view tape and media changer devices from the CLI, use the following command:
ibrix_tape -l
NDMP events
An NDMP Server can generate three types of events: INFO, WARN, and ALERT. These events are displayed on the management console GUI and can be viewed with the ibrix_event command. INFO events. These events specify when major NDMP operations start and finish, and also report progress. For example:
7012:Level 3 backup of /mnt/ibfs7 finished at Sat Nov 7 21:20:58 PST 2009 7013:Total Bytes = 38274665923, Average throughput = 236600391 bytes/sec.
WARN events. These events might indicate an issue with NDMP access, the environment, or NDMP operations. Be sure to review these events and take any necessary corrective actions. Following are some examples:
0000:Unauthorized NDMP Client 16.39.40.201 trying to connect 4002:User [joe] md5 mode login failed.
ALERT events. These alerts indicate that an NDMP action has failed. For example:
1102: Cannot start the session_monitor daemon, ndmpd exiting. 7009:Level 6 backup of /mnt/shares/accounts1 failed (writing eod header error). 8001:Restore Failed to read data stream signature.
You can configure the system to send email or SNMP notifications when these types of events occur.
Hostgroups are optional. If you choose not to set them up, you can mount file systems on clients and tune host settings and allocation policies at the individual level.
To set up one level of hostgroups beneath the root, simply create the new hostgroups. You do not need to declare that the root node is the parent. To set up lower levels of hostgroups, declare a parent element for hostgroups. Optionally, you can specify a domain rule for a hostgroup. Use only alphanumeric characters and the underscore character (_) in hostgroup names. Do not use a host name as a group name. To create a hostgroup tree using the CLI: 1. Create the first level of the tree and optionally declare a domain rule for it:
<installdirectory>/bin/ibrix_hostgroup -c -g GROUPNAME [-D DOMAIN]
2. Create all other levels by specifying a parent for the group and optionally a domain rule:
<installdirectory>/bin/ibrix_hostgroup -c -g GROUPNAME [-D DOMAIN] [-p PARENT]
For example, to add the domain rule 192.168 to the finance group:
<installdirectory>/bin/ibrix_hostgroup -a -g finance -D 192.168
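As a combined sketch using the syntax above (the hostgroup names finance and finance_clients are illustrative), the following commands create a first-level group, add a second-level group beneath it, and then attach the domain rule:
<installdirectory>/bin/ibrix_hostgroup -c -g finance
<installdirectory>/bin/ibrix_hostgroup -c -g finance_clients -p finance
<installdirectory>/bin/ibrix_hostgroup -a -g finance -D 192.168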
Viewing hostgroups
To view hostgroups, use the following command. You can view all hostgroups or a specific hostgroup.
<installdirectory>/bin/ibrix_hostgroup -l [-g GROUP]
Deleting hostgroups
When you delete a hostgroup, its members are assigned to the parent of the deleted group. To force the moved X9000 clients to implement the mounts, tunings, network interface preferences, and allocation policies that have been set on their new hostgroup, either restart X9000 Software services on the clients (see Starting and stopping processes in the system administration guide for your system) or execute the following commands locally:
ibrix_lwmount -a to force the client to pick up mounts or allocation policies
ibrix_lwhost --a to force the client to pick up host tunings
Monitoring intervals
The monitoring interval is set by default to 15 minutes (900 seconds). You can change the interval setting by using the following command to change the <interval_in_seconds> variable:
ibrix_host_tune -C vendorStorageHardwareMonitoringReportInterval=<interval_in_seconds>
NOTE: The storage monitor will not run if the interval is set to less than 10 minutes.
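For example, to lengthen the monitoring interval to 30 minutes (1800 seconds is an illustrative value; any interval of at least 600 seconds is accepted):
ibrix_host_tune -C vendorStorageHardwareMonitoringReportInterval=1800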
File serving nodes can be in one of three operational states: Normal, Alert, or Error. These states are further broken down into categories that are mostly related to the failover status of the node. The following table describes the states.
State  Description
Normal Up: Operational.
Alert  Up-Alert: Server has encountered a condition that has been logged. An event will appear in the Status tab of the management console GUI, and an email notification may be sent.
       Up-InFailover: Server is powered on and visible to the management console, and the management console is failing over the server's segments to a standby server.
       Up-FailedOver: Server is powered on and visible to the management console, and failover is complete.
Error  Down-InFailover: Server is powered down or inaccessible to the management console, and the management console is failing over the server's segments to a standby server.
       Down-FailedOver: Server is powered down or inaccessible to the management console, and failover is complete.
       Down: Server is powered down or inaccessible to the management console, and no standby server is providing access to the server's segments.
The STATE field also reports the status of monitored NICs and HBAs. If you have multiple HBAs and NICs and some of them are down, the state will be reported as HBAsDown or NicsDown.
Events are written to an events table in the configuration database as they are generated. To control the size of this table, HP recommends that you periodically remove the oldest events. See Removing events from the events database table (page 52) for more information. You can set up event notifications through email (see Setting up email notification of cluster events (page 38)) or SNMP traps (see Setting up SNMP notifications (page 40)).
Viewing events
The dashboard on the management console GUI specifies the number of events that have occurred in the last 24 hours. Click Events in the GUI Navigator to view a report of the events. You can also view events that have been reported for specific file systems or servers. To view events from the CLI, use the following commands: View events by type:
<installdirectory>/bin/ibrix_event -q [-e ALERT|WARN|INFO]
View a designated number of events. The command displays the 100 most recent messages by default. Use the -n EVENTS_COUNT option to increase or decrease the number of events displayed.
<installdirectory>/bin/ibrix_event -l [-n EVENTS_COUNT]
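For example, to view only Alert events, or to limit the listing to the 50 most recent events (the count is illustrative), use the options described above:
<installdirectory>/bin/ibrix_event -q -e ALERT
<installdirectory>/bin/ibrix_event -l -n 50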
Health checks
The ibrix_health command runs these health checks on file serving nodes:
Pings remote file serving nodes that share a network with the test hosts. Remote servers that are pingable might not be connected to a test host because of a Linux or X9000 Software issue. Remote servers that are not pingable might be down or have a network problem.
If test hosts are assigned to be network interface monitors, pings their monitored interfaces to assess the health of the connection. (For information on network interface monitoring, see Using network interface monitoring (page 32).)
Determines whether specified hosts can read their physical volumes.
Determines whether information maps on the tested hosts are consistent with the configuration database.
The ibrix_health command runs this health check on both file serving nodes and X9000 clients:
If you include the -b option, the command also checks the health of standby servers (if configured).
The detailed report consists of the summary report and the following additional data:
By default, the Result Information field in a detailed report provides data only for health checks that received a Failed or a Warning result. Optionally, you can expand a detailed report to provide data about checks that received a Passed result, as well as details about the file system and segments.
By default, the command reports on all hosts. To view specific hosts, include the -h HOSTLIST argument. To view results only for hosts that failed the check, include the -f argument. To include standby servers in the health check, include the -b argument. For example, to view a summary report for node i080 and client lab13-116:
<installdirectory>/bin/ibrix_health -l -h i080,lab13-116
The -f option displays results only for hosts that failed the check. The -s option includes information about the file system and its segments. The -v option includes details about checks that received a Passed or Warning result. The following example shows a detailed health report for file serving node lab13-116:
<installdirectory>/bin/ibrix_health -i -h lab13-116
Overall Health Checker Results - PASSED
=======================================
Host Summary Results
====================
Host     Result Type   State
-------- ------ ------ ------------
lab15-62 PASSED Server Up, HBAsDown

lab15-62 Report
===============
Overall Result
==============
Result Type   State        Network      Thread Protocol Module Up time   Network(Bps) Disk(Bps)
------ ------ ------------ ------------ ------ -------- ------ --------- ------------ ---------
PASSED Server Up, HBAsDown 99.126.39.72 16     true     Loaded 3267210.0 1301         9728

CPU Information
===============
Cpu(System,User,Util,Nice)
--------------------------
0, 1, 1, 0

Memory Information
==================
Mem Total Mem Free Buffers(KB) Cached(KB)
--------- -------- ----------- ----------
1944532   1841548  688         34616

Version/OS Information
======================
Fs Version        IAD Version OS        OS Version                                            Kernel Version Architecture Processor
----------------- ----------- --------- ----------------------------------------------------- -------------- ------------ ---------
5.3.468(internal) 5.3.446     GNU/Linux Red Hat Enterprise Linux Server release 5.2 (Tikanga)  2.6.18-92.el5  i386         i686

Remote Hosts
============
Host     Type
-------- ------
lab15-61 Server
lab15-62 Server

Check Results
=============
Check : lab15-62 can ping remote segment server hosts
=====================================================
Check Description               Result Result Information
------------------------------- ------ ------------------
Remote server lab15-61 pingable PASSED

Check : Physical volumes are readable
=====================================
Check Description                                               Information
--------------------------------------------------------------- -----------
Physical volume 0ownQk-vYCm-RziC-OwRU-qStr-C6d5-ESrMIf readable
Physical volume 1MY7Gk-zb7U-HnnA-D24H-Nxhg-WPmX-ZfUvMb readable

Check : Iad and Fusion Manager consistent
=========================================
Check Description                                                                               Result
----------------------------------------------------------------------------------------------- ------
lab15-61 engine uuid matches on Iad and Fusion Manager                                           PASSED
lab15-61 IP address matches on Iad and Fusion Manager                                            PASSED
lab15-61 network protocol matches on Iad and Fusion Manager                                      PASSED
lab15-61 engine connection state on Iad is up                                                    PASSED
lab15-62 engine uuid matches on Iad and Fusion Manager                                           PASSED
lab15-62 IP address matches on Iad and Fusion Manager                                            PASSED
lab15-62 network protocol matches on Iad and Fusion Manager                                      PASSED
lab15-62 engine connection state on Iad is up                                                    PASSED
ifs2 file system uuid matches on Iad and Fusion Manager                                          PASSED
ifs2 file system generation matches on Iad and Fusion Manager                                    PASSED
ifs2 file system number segments matches on Iad and Fusion Manager                               PASSED
ifs2 file system mounted state matches on Iad and Fusion Manager                                 PASSED
Segment owner for segment 1 filesystem ifs2 matches on Iad and Fusion Manager                    PASSED
Segment owner for segment 2 filesystem ifs2 matches on Iad and Fusion Manager                    PASSED
ifs1 file system uuid matches on Iad and Fusion Manager                                          PASSED
ifs1 file system generation matches on Iad and Fusion Manager                                    PASSED
ifs1 file system number segments matches on Iad and Fusion Manager                               PASSED
ifs1 file system mounted state matches on Iad and Fusion Manager                                 PASSED
Segment owner for segment 1 filesystem ifs1 matches on Iad and Fusion Manager                    PASSED
Superblock owner for segment 1 of filesystem ifs2 on lab15-62 matches on Iad and Fusion Manager  PASSED
Superblock owner for segment 2 of filesystem ifs2 on lab15-62 matches on Iad and Fusion Manager  PASSED
Superblock owner for segment 1 of filesystem ifs1 on lab15-62 matches on Iad and Fusion Manager  PASSED
Viewing logs
Logs are provided for the management console, file serving nodes, and X9000 clients. Contact HP Support for assistance in interpreting log files. You might be asked to tar the logs and email them to HP.
CPU. Statistics about processor and CPU activity.
NFS. Statistics about NFS client and server activity.
The management console GUI displays most of these statistics on the dashboard. See Using the GUI (page 15) for more information. To view the statistics from the CLI, use the following command:
<installdirectory>/bin/ibrix_stats -l [-s] [-c] [-m] [-i] [-n] [-f] [-h HOSTLIST]
Use the options to view only certain statistics or to view statistics for specific file serving nodes:
-s  Summary statistics
-c  CPU statistics
-m  Memory statistics
-i  I/O statistics
-n  Network statistics
-f  NFS statistics
-h  The file serving nodes to be included in the report
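For example, to view only CPU and memory statistics for two file serving nodes (the hostnames are illustrative):
<installdirectory>/bin/ibrix_stats -l -c -m -h node1.hp.com,node2.hp.com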
IMPORTANT: The Statistics tool uses remote file copy (rsync) to move statistics data from the file serving nodes to the management console for processing, report generation, and display. To use rsync, you will need to configure one-way shared ssh keys on all file serving nodes. See Configuring shared ssh keys (page 66).
By default, installing the Statistics tool does not start the Statistics tool processes. See Controlling Statistics tool processes (page 65) for information about starting and stopping the processes.
3. Set up synchronization between nodes. Run the following command on the active management console node, specifying the node names of all file serving nodes:
/usr/local/ibrix/stats/bin/stmanage setrsync <node1_name> ... <nodeN_name>
For example:
# stmanage setrsync ibr-3-31-1 ibr-3-31-2 ibr-3-31-3
NOTE: Do not run the command on individual nodes. All nodes must be specified in the same command and can be specified in any order. Be sure to use node names, not IP addresses. To test the rsync mechanism, see Testing rsync access (page 65).
4. On the active node, edit the /etc/ibrix/stats.conf file to add the age.retain.files=24h parameter. See Changing the Statistics tool configuration (page 62).
5. Create a symbolic link from /var/lib/ibrix/histstats to the /local/statstool/histstats directory. See Creating a symbolic link to the statistics reports (histstats) folder (page 61).
NOTE: If statistics processes were running before the upgrade started, those processes will automatically restart when the upgrade is complete. If processes were not running before the upgrade started, you must start them manually after the upgrade completes.
The Time View lists the reports in chronological order, and the Table View lists the reports by cluster or server. Click a report to view it.
Generating reports
To generate a new report, click Request New Report on the X9000 Management Console Historical Reports GUI.
Enter the specifications for your report and click Submit. The management console will then generate the report. The completed report will appear in the list of reports on the statistics home page. When generating reports, you should be aware of the following:
A report can be generated only from statistics that have been gathered. For example, if you start the tool at 9:40am and ask for a report from 9:00am to 9:30am, the report cannot be generated because data was not gathered for that period.
Reports are generated on an hourly basis. It may take up to an hour before a report is generated and made available for viewing.
NOTE: If the system is currently generating reports and you request a new report at the same time, the GUI issues an error. Wait a few moments and then request the report again.
Deleting reports
To delete a report, log into each node and remove the report from the /var/lib/ibrix/histstats directory.
You should also configure how long to retain staging data and data stored in the statistics database. The retention periods are controlled by parameters in the /etc/ibrix/stats.conf file.
Aging configuration for staging data: The age.retain.files parameter specifies the number of hours, starting from the current time, to retain collected data. If the parameter is not set, data is retained for 30 days by default.
Aging configuration for the statistics database: The following parameters specify the number of hours, starting from the current time, to retain aggregated data in the statistics database for the cluster and individual nodes.
age.retain.60s: specifies how long to retain data aggregated every minute. The default is 72 hours.
age.retain.15m: specifies how long to retain data aggregated every 15 minutes. The default is 360 hours.
age.retain.60m: specifies how long to retain data aggregated every hour. The default is 1440 hours.
For more information about setting these parameters, see Changing the Statistics tool configuration (page 62).
For example, if age.retain.60s=72h, data aggregated in the last 72 hours is retained, and older data is removed. To change this parameter, add a line such as the following to the stats.conf file:
age.retain.60s=150h
The examples shown here are suggestions only. HP recommends that you test your particular configuration before configuring it on your nodes.
CAUTION: If invalid inputs are specified for these parameters, the values are set to zero, which can cause the loss of all statistical data.
NOTE: You do not need to restart processes after changing the configuration. The updated configuration is collected automatically.
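As a reference sketch, an aging section of stats.conf that combines the parameters described above might look like the following (the values shown are the defaults or examples from this section; choose retention periods appropriate for your site):
age.retain.files=24h
age.retain.60s=72h
age.retain.15m=360h
age.retain.60m=1440h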
# mkdir -p /local/statstool/histstats
3. Run the setup command on the current active node:
# /etc/init.d/ibrix_statsagent setup --agilefm_active
4. Run the loadfm command on the current active node:
# /usr/local/ibrix/stats/bin/stmanage loadfm
On the old active node: If the old active node is up and running, perform steps 5-7; otherwise, go to step 8.
5. Stop statstool:
# /etc/init.d/ibrix_statsagent stop --agilefm_active
6. Back up the histstats folder:
# mv /var/lib/ibrix/histstats <OAN_histstats_backup/DD:MM:YYYY>
NOTE: If a symbolic link was created to the statistics reports (histstats) folder, move the symbolically linked folder and create a histstats folder in /local/statstool:
# mv /local/statstool/histstats <OAN_histstats_backup/DD:MM:YYYY>
# mkdir -p /local/statstool/histstats
7. Restore the reports folder from <OAN_histstats_backup/DD:MM:YYYY> to /var/lib/ibrix/histstats/reports on the current active node. If a symbolic link was created, restore the folder under /local/statstool/histstats/reports.
On the current active node:
8. Start statstool:
# /etc/init.d/ibrix_statsagent start --agilefm_active
9. Set the rsync mechanism:
# /usr/local/ibrix/stats/bin/stmanage setrsync <active node> <passive node1>...<passive node N>
On the old active node: If you completed steps 5-7 earlier, perform only steps 10 and 11.
10. Run the passive migrator script:
# /usr/local/ibrix/stats/bin/stats_passive_migrator
11. Start statstool:
# /etc/init.d/ibrix_statsagent start --agilefm_passive
NOTE: Passwordless authentication is required for the migration to succeed. To determine whether passwordless authentication is enabled, see Configuring shared ssh keys (page 66) and Enabling collection and synchronization (page 57).
Migrate reports from the old active node to the current active node. If the old active node was not up and running during the previous steps, but is now up, complete the following steps.
On the old active node:
12. Stop statstool:
# /etc/init.d/ibrix_statsagent stop --agilefm_active
13. Back up the histstats folder:
# mv /var/lib/ibrix/histstats <OAN_histstats_backup/DD:MM:YYYY>
NOTE: If a symbolic link was created to the statistics reports (histstats) folder, move the symbolically linked folder and create a histstats folder in /local/statstool:
# mv /local/statstool/histstats <OAN_histstats_backup/DD:MM:YYYY>
# mkdir -p /local/statstool/histstats
On the current active node:
14. Copy the <OAN_histstats_backup/DD:MM:YYYY>/histstats/reports folder to the current active node.
15. Append the hourly, daily, and weekly reports from <OAN_histstats_backup/DD:MM:YYYY> to /var/lib/ibrix/histstats/reports. If using a symbolic link, append the reports to /local/statstool/histstats/reports.
# cp -r <OAN_histstats_backup/DD:MM:YYYY>/histstats/reports/hourly/<YYYY-MM-DD-NN>/<OldActiveNode> /var/lib/ibrix/histstats/reports/hourly/<YYYY-MM-DD-NN>/<OldActiveNode>
# cp -r <OAN_histstats_backup/DD:MM:YYYY>/histstats/reports/daily/<YYYY-MM-DD-NN>/<OldActiveNode> /var/lib/ibrix/histstats/reports/daily/<YYYY-MM-DD-NN>/<OldActiveNode>
# cp -r <histstats_backup/DD:MM:YYYY>/histstats/reports/weekly/<YYYY-MM-DD-NN>/<OldActiveNode> /var/lib/ibrix/histstats/reports/weekly/<YYYY-MM-DD-NN>/<OldActiveNode>
On the old active node:
16. Run the passive migrator script:
# /usr/local/ibrix/stats/bin/stats_passive_migrator
17. Start statstool:
# /etc/init.d/ibrix_statsagent start --agilefm_passive
Troubleshooting
Testing rsync access
To verify that ssh authentication is enabled and data can be pulled from the nodes without prompting for a password, run the following command: # /usr/local/ibrix/stats/bin/stmanage testpull
Other conditions
Data is not collected. If data is not being gathered in the common directory for the Statistics Manager (/var/lib/ibrix/histstats by default), stop the Statistics Manager on all nodes, restart ibrix_statsagent on all nodes, and then restart the Statistics Manager on all nodes.
Installation issues. Check the /tmp/stats-install.log and try to fix the condition, or send the /tmp/stats-install.log to HP Support.
Missing reports for file serving nodes. If reports are missing on the Stats tool web page, check the following:
Determine whether collection is enabled for the particular file serving node. If not, see Enabling collection and synchronization (page 57).
Determine whether password-less login is enabled. If not, see Configuring shared ssh keys (page 66).
Check for time sync. All servers in the cluster should have the same date, time, and time zone to allow proper collection and viewing of reports.
Log files
See /var/log/stats.log for detailed logging for the Statistics tool. (The information includes detailed exceptions and traceback messages). The logs are rolled over at midnight every day and only seven days of compressed statistics logs are retained. The default /var/log/messages log file also includes logging for the Statistics tool, but the messages are short.
This command creates two files: $HOME/.ssh/id_dsa (private key) and $HOME/.ssh/id_dsa.pub (public key).
2. On the management console, run the following command for each file serving node:
# ssh-copy-id -i $HOME/.ssh/id_dsa.pub server
For example: ssh-copy-id -i $HOME/.ssh/id_dsa.pub root@<NodeIP> 3. On the management console, test the results by ssh'ing to each file serving node:
# ssh {hostname for file serving node}
1. Power on the 9100cx disk capacity block(s).
2. Power on the 9100c controllers.
3. Wait for all controllers to report on in the 7-segment display.
4. Power on the file serving nodes.
Use one of the following schemes for the reboot:
Reboot the file serving nodes one at a time.
Divide the file serving nodes into two groups, with the nodes in the first group having backups in the second group, and the nodes in the second group having backups in the first group. You can then reboot one group at a time.
To perform the rolling reboot, complete the following steps on each file serving node: 1. Reboot the node directly from Linux. (Do not use the "Power Off" functionality in the management console, as it does not trigger failover of file serving services.) The node will fail over to its backup. 2. Wait for the management console to report that the rebooted node is Up. 3. From the management console, failback the node, returning services to the node from its backup. Run the following command on the backup node: <installdirectory>/bin/ibrix_server -f -U -h HOSTNAME HOSTNAME is the name of the node that you just rebooted.
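For example, if the node that was just rebooted is s1.hp.com and its backup is s2.hp.com (both hostnames are illustrative), run the following on s2.hp.com:
<installdirectory>/bin/ibrix_server -f -U -h s1.hp.com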
To start and stop processes and view process status on a file serving node, use the following command. In certain situations, a follow-up action is required after stopping, starting, or restarting a file serving node.
/etc/init.d/ibrix_server [start | stop | restart | status]
To start and stop processes and view process status on an X9000 client, use the following command:
/etc/init.d/ibrix_client [start | stop | restart | status]
Use the ibrix_host_tune command to list or change host tuning settings: To list default values and valid ranges for all permitted host tunings:
<installdirectory>/bin/ibrix_host_tune -L
Contact HP Support to obtain the values for OPTIONLIST. List the options as option=value pairs, separated by commas. To set host tunings on all clients, include the -g clients option. To reset host parameters to their default values on nodes or hostgroups:
<installdirectory>/bin/ibrix_host_tune -U {-h HOSTLIST|-g GROUPLIST} [-n OPTIONS]
To reset all options on all file serving nodes, hostgroups, and X9000 clients, omit the -h HOSTLIST and -n OPTIONS options. To reset host tunings on all clients, include the -g clients option. The values that are restored depend on the values specified for the -h HOSTLIST command: File serving nodes. The default file serving node host tunings are restored. X9000 clients. The host tunings that are in effect for the default clients hostgroup are restored. Hostgroups. The host tunings that are in effect for the parent of the specified hostgroups are restored.
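For example, to restore the default host tunings on all X9000 clients (a sketch using the syntax above):
<installdirectory>/bin/ibrix_host_tune -U -g clients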
To list host tuning settings on file serving nodes, X9000 clients, and hostgroups, use the following command. Omit the -h argument to see tunings for all hosts. Omit the -n argument to see all tunings.
<installdirectory>/bin/ibrix_host_tune -l [-h HOSTLIST] [-n OPTIONS]
To set the communications protocol on nodes and hostgroups, use the following command. To set the protocol on all X9000 clients, include the -g clients option.
<installdirectory>/bin/ibrix_host_tune -p {UDP|TCP} {-h HOSTLIST| -g GROUPLIST}
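For example, to switch all X9000 clients to TCP (a sketch using the syntax above):
<installdirectory>/bin/ibrix_host_tune -p TCP -g clients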
To set server threads on file serving nodes, hostgroups, and X9000 clients:
<installdirectory>/bin/ibrix_host_tune -t THREADCOUNT {-h HOSTLIST| -g GROUPLIST}
To set admin threads on file serving nodes, hostgroups, and X9000 clients, use this command. To set admin threads on all X9000 clients, include the -g clients option.
<installdirectory>/bin/ibrix_host_tune -a THREADCOUNT {-h HOSTLIST| -g GROUPLIST}
To list host tuning parameters that have been changed from their defaults:
<installdirectory>/bin/ibrix_lwhost --list
See the ibrix_lwhost command description in the HP X9000 File Serving Software CLI Reference Guide for other available options.
Migrating segments
To improve cluster performance, segment ownership can be transferred from one host to another through segment migration. Segment migration transfers segment ownership but it does not move segments from their physical locations in networked storage systems. Segment ownership is recorded
on the physical segment itself, and the ownership data is part of the metadata that the management console distributes to file serving nodes and X9000 clients so that they can locate segments.
To force the migration, include -M. To skip the source host update during the migration, include -F. To skip host health checks, include -N. The following command migrates ownership of ilv2 and ilv3 in file system ifs1 to s1.hp.com:
<installdirectory>/bin/ibrix_fs -m -f ifs1 -s ilv2,ilv3 -h s1.hp.com
For example, to migrate ownership of all segments in file system ifs1 that reside on s1.hp.com to s2.hp.com:
<installdirectory>/bin/ibrix_fs -m -f ifs1 -H s1.hp.com,s2.hp.com
1. Identify the segment residing on the physical volume to be removed. Select Storage from the Navigator on the management console GUI. Note the file system and segment number on the affected physical volume.
2. Locate other segments on the file system that can accommodate the data being evacuated from the affected segment. Select the file system on the management console GUI and then select Segments from the lower Navigator. If segments with adequate space are not available, add segments to the file system.
3. If quotas are enabled on the file system, disable them:
ibrix_fs -q -D -f FSNAME
4. Evacuate the segment. Select the file system on the management console GUI and then select Active Tasks > Rebalancer from the lower Navigator. Select New on the Task Summary page. When the Start Rebalancing dialog box appears, open the Advanced tab. Check Evacuate source segments and note the cautions that are displayed. In the Source Segments column, select the segments to evacuate, and in the Destination Segments column, select the segments to receive the data. (If you do not select destination segments, the data is spread among the available segments.)
The Task Summary window displays the progress of the rebalance operation and reports any errors. If you need to stop the operation, click Stop.
5. When the rebalance operation completes, remove the storage from the cluster:
ibrix_fs -B -f FSNAME -n BADSEGNUMLIST
The segment number associated with the storage is not reused.
6. If quotas were disabled on the file system, unmount the file system and then re-enable quotas using the following command:
ibrix_fs -q -E -f FSNAME
Then remount the file system.
To evacuate a segment using the CLI, use the ibrix_rebalance -e command, as described in the HP X9000 File Serving Software CLI Reference Guide.
Maintaining networks
Cluster and user network interfaces
X9000 Software supports the following logical network interfaces: Cluster network interface. This network interface carries management console traffic, traffic between file serving nodes, and traffic between file serving nodes and clients. A cluster can have only one cluster interface. For backup purposes, each file serving node and management console can have two cluster NICs. User network interface. This network interface carries traffic between file serving nodes and clients. Multiple user network interfaces are permitted.
The cluster network interface was created for you when your cluster was installed. For clusters with an agile management console configuration, a virtual interface is used for the cluster network interface. One or more user network interfaces may also have been created, depending on your site's requirements. You can add user network interfaces as necessary.
If you are identifying a VIF, add the VIF suffix (:nnnn) to the physical interface name. For example, the following command identifies virtual interface eth1:1 to physical network interface eth1 on file serving nodes s1.hp.com and s2.hp.com:
<installdirectory>/bin/ibrix_nic -a -n eth1:1 -h s1.hp.com,s2.hp.com
When you identify a user network interface for a file serving node, the management console queries the node for its IP address, netmask, and MAC address and imports the values into the configuration database. You can modify these values later if necessary. If you identify a VIF, the management console does not automatically query the node. If the VIF will be used only as a standby network interface in an automated failover setup, the management console will query the node the first time a network is failed over to the VIF. Otherwise, you must enter the VIF's IP address and netmask manually in the configuration database (see Setting network interface options in the configuration database (page 74)). The management console does not require a MAC address for a VIF. If you created a user network interface for X9000 client traffic, you will need to prefer the network for the X9000 clients that will use the network (see Preferring network interfaces (page 74)).
For example, to set netmask 255.255.0.0 and broadcast address 10.0.0.4 for interface eth3 on file serving node s4.hp.com:
<installdirectory>/bin/ibrix_nic -c -n eth3 -h s4.hp.com -M 255.255.0.0 -B 10.0.0.4
clients when you prefer a network interface, you can force clients to query the management console by executing the command ibrix_lwhost --a on the client or by rebooting the client.
Preferring a network interface for a file serving node or Linux X9000 client
The first command prefers a network interface for a file serving node; the second command prefers a network interface for an X9000 client.
<installdirectory>/bin/ibrix_server -n -h SRCHOST -A DESTHOST/IFNAME <installdirectory>/bin/ibrix_client -n -h SRCHOST -A DESTHOST/IFNAME
Execute this command once for each destination host that the file serving node or X9000 client should contact using the specified network interface (IFNAME). For example, to prefer network interface eth3 for traffic from file serving node s1.hp.com to file serving node s2.hp.com:
<installdirectory>/bin/ibrix_server -n -h s1.hp.com -A s2.hp.com/eth3
The destination host (DESTHOST) cannot be a hostgroup. For example, to prefer network interface eth3 for traffic from all X9000 clients (the clients hostgroup) to file serving node s2.hp.com:
<installdirectory>/bin/ibrix_hostgroup -n -g clients -A s2.hp.com/eth3
Changing the IP address for the cluster interface on a dedicated management console
You must change the IP address for the cluster interface on both the file serving nodes and the management console.
1. If High Availability is enabled, disable it by executing ibrix_server -m -U.
2. Unmount the file system from all file serving nodes, and reboot.
3. On each file serving node, locally change the IP address of the cluster interface.
4. Change the IP address of the cluster interface for each file serving node:
<installdirectory>/bin/ibrix_nic -c -n IFNAME -h HOSTNAME [-I IPADDR]
5. Remount the file system.
6. Re-enable High Availability if necessary by executing ibrix_server -m.
To specify a new cluster interface for a cluster with a dedicated management console, use the following command:
<installdirectory>/bin/ibrix_nic -t -n IFNAME -h HOSTNAME
To specify a new virtual cluster interface for a cluster with an agile management console configuration, use the following command:
<installdirectory>/bin/ibrix_fm -c <VIF IP address> -d <VIF Device> -n <VIF Netmask> -v cluster [-I <Local IP address_or_DNS hostname>]
The following command adds a route for virtual interface eth2:232 on file serving node s2.hp.com, sending all traffic through gateway gw.hp.com:
<installdirectory>/bin/ibrix_nic -r -n eth2:232 -h s2.hp.com -A -R gw.hp.com
Deleting a routing table entry
If you delete a routing table entry, it is not replaced with a default entry. A new replacement route must be added manually. To delete a route, use the following command:
<installdirectory>/bin/ibrix_nic -r -n IFNAME -h HOSTNAME -D
The following command deletes all routing table entries for virtual interface eth0:1 on file serving node s2.hp.com:
<installdirectory>/bin/ibrix_nic -r -n eth0:1 -h s2.hp.com -D
The following command deletes interface eth3 from file serving nodes s1.hp.com and s2.hp.com:
<installdirectory>/bin/ibrix_nic -d -n eth3 -h s1.hp.com,s2.hp.com
When ibrix_nic is used with the -i option, it reports detailed information about the interfaces. Use the -h option to limit the output to specific hosts. Use the -n option to view information for a specific interface.
ibrix_nic -i [-h HOSTLIST] [-n NAME]
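For example, to view detailed information for interface eth3 on a single node (the hostname is illustrative):
ibrix_nic -i -h s1.hp.com -n eth3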
To perform the migration, the X9000 installation code must be available. As delivered, this code is provided in /tmp/X9720/ibrix. If this directory no longer exists, download the installation code from the HP support website for your storage system. IMPORTANT: The migration procedure can be used only on clusters running HP X9000 File Serving Software 5.4 or later.
Verify that you can ping the new local IP address.
4. Configure the agile management console:
ibrix_fm -c <cluster_VIF_addr> -d <cluster_VIF_device> -n <cluster_VIF_netmask> -v cluster -I <local_cluster_IP_addr>
In the command, <cluster_VIF_addr> is the old cluster IP address for the original management console and <local_cluster_IP_addr> is the new IP address you acquired. For example:
[root@x109s1 ~]# ibrix_fm -c 172.16.3.1 -d bond0:1 -n 255.255.248.0 -v cluster -I 172.16.3.100 Command succeeded!
The original cluster IP address is now configured to the newly created cluster VIF device (bond0:1).
5. If you created the interface bond1:0 in step 3, now set up the user network VIF, specifying the user VIF IP address and VIF device used in step 3.
NOTE: This step does not apply to CIFS/NFS clients. If you are not using X9000 clients, you can skip this step.
Set up the user network VIF:
ibrix_fm -c <user_VIF_IP> -d <user_VIF_device> -n <user_VIF_netmask> -v user
For example:
[root@x109s1 ~]# ibrix_fm -c 10.30.83.1 -d bond1:0 -n 255.255.0.0 -v user Command succeeded
6. Install the file serving node software on the agile management console node:
ibrix/ibrixinit -ts -C <cluster_interface> -i <agile_cluster_VIF_IP_Addr> -F
For example:
ibrix/ibrixinit -ts -C eth4 -i 172.16.3.100 -F
7. Register the agile management console (also known as agile FM) to the cluster:
ibrix_fm -R <FM hostname> -I <local_cluster_ipaddr>
NOTE: Verify that the local agile management console name is in the /etc/ibrix/fminstance.xml file. Run the following command:
grep -i current /etc/ibrix/fminstance.xml
<property name="currentFmName" value="ib50-86"></property>
8. From the agile management console, verify that the definition was set up correctly:
grep -i vif /etc/ibrix/fusion.xml
NOTE: If the output is empty, restart the fusionmanager services as in step 9 and then recheck.
9. Restart the fusionmanager services.
NOTE: It takes approximately 90 seconds for the agile management console to return to optimal, with the agile_cluster_vif device appearing in ifconfig output. Verify that this device is present in the output.
10. Verify that the agile management console is active:
ibrix_fm -i
For example:
[root@x109s1 ~]# ibrix_fm -i FusionServer: x109s1 (active, quorum is running) ================================================ Command succeeded!
11. Verify that there is only one management console in this cluster:
ibrix_fm -f
For example:
[root@x109s1 ~]# ibrix_fm -f NAME IP ADDRESS ------ ---------X109s1 172.16.3.100 Command succeeded!
12. Install a passive agile management console on a second file serving node. In the command, the -F option forces the overwrite of the new_lvm2_uuid file that was installed with the X9000 Software. Run the following command on the file serving node:
/ibrix/ibrixinit -tm -C <local_cluster_interface_device> -v <agile_cluster_VIF_IP> -m <cluster_netmask> -d <cluster_VIF_device> -w 9009 -M passive -F
For example:
[root@x109s3 ibrix]# <install_code_directory>/ibrixinit -tm -C bond0 -v 172.16.3.1 -m 255.255.248.0 -d bond0:0 -V 10.30.83.1 -N 255.255.0.0 -D bond1:0 -w 9009 -M passive -F
NOTE: Verify that the local agile management console name is in the /etc/ibrix/fminstance.xml file. Run the following command:
grep -i current /etc/ibrix/fminstance.xml
<property name="currentFmName" value="ib50-86"></property>
13. From the active management console, verify that both management consoles are in the cluster:
ibrix_fm -f
For example:
[root@x109s3 ibrix]# ibrix_fm -f NAME IP ADDRESS ------ ---------x109s1 172.16.3.100 x109s3 172.16.3.3 Command succeeded!
14. Verify that the newly installed management console is in passive mode:
ibrix_fm -i
For example:
[root@x109s3 ibrix]# ibrix_fm -i FusionServer: x109s3 (passive, quorum is running) ============================= Command succeeded
NOTE: If iLO was not previously configured on the server, the command will fail with the following error:
com.ibrix.ias.model.BusinessException: x467s2 is not associated with any power sources
Use the following command to define the iLO parameters into the X9000 cluster database:
ibrix_powersrc -a -t ilo -h HOSTNAME -I IPADDR [-u USERNAME -p PASSWORD]
See the installation guide for more information about configuring iLO.
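For example, assuming a node named x467s2 whose iLO interface is at 192.168.100.67 (both values are illustrative):
ibrix_powersrc -a -t ilo -h x467s2 -I 192.168.100.67 -u Administrator -p password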
1. On the node hosting the active management console, place the management console into maintenance mode. This step fails over the active management console role to the node currently hosting the passive agile management console.
<ibrixhome>/bin/ibrix_fm -m maintenance
2. Wait approximately 60 seconds for the failover to complete, and then run the following command on the node that was hosting the passive agile management console:
<ibrixhome>/bin/ibrix_fm -i
The command should report that the agile management console is now Active on this node.
3. From the node on which you failed over the active management console in step 1, change the status of the management console from maintenance to passive:
<ibrixhome>/bin/ibrix_fm -m passive
4. Verify that the fusion manager database /usr/local/ibrix/.db/ is intact on both active and passive management console nodes.
5. Repeat steps 1-4 to return the node originally hosting the active management console back to active mode.
Converting the original management console node to a file serving node hosting the agile management console
To convert the original management console node, usually node 1, to a file serving node, complete the following steps:
1. Place the agile management console on the node into maintenance mode:
ibrix_fm -m maintenance
2. Verify that the management console is in maintenance mode:
ibrix_fm -i
For example:
[root@x109s1 ibrix]# ibrix_fm -i FusionServer: x109s1 (maintenance, quorum not started) ================================== Command succeeded!
3. Verify that the passive management console is now the active management console. Run the ibrix_fm -i command on the file serving node hosting the passive management console (x109s3 in this example). It may take up to two minutes for the passive management console to become active.
[root@x109s3 ibrix]# ibrix_fm -i FusionServer: x109s3 (active, quorum is running) ============================= Command succeeded!
4. Install the file serving node software on the node:
./ibrixinit -ts -C <cluster_device> -i <cluster VIP> -F
5. Verify that the new file serving node has joined the cluster:
ibrix_server -l
Look for the new file serving node in the output.
6. Rediscover storage on the file serving node:
ibrix_pv -a
7. Set up the file serving node to match the other nodes in the cluster. For example, configure any user NICs, user and cluster NIC monitors, NIC failover pairs, power, backup servers, preferred NICs for X9000 clients, and so on.
NOTE: During the upgrade, the support tickets collected with the ibrix_supportticket command will be deleted. Before upgrading to 6.0, download a copy of the archive files (.tgz) from the /admin/platform/diag/supporttickets directory.
7. On all nodes hosting the passive management console, place the management console into maintenance mode:
<ibrixhome>/bin/ibrix_fm -m maintenance
8. On the active management console node, disable automated failover on all file serving nodes:
<ibrixhome>/bin/ibrix_server -m -U
9. Run the following command to verify that automated failover is off. In the output, the HA column should display off.
<ibrixhome>/bin/ibrix_server -l
10. Unmount file systems on Linux X9000 clients:
ibrix_lwumount -m MOUNTPOINT
11. Stop the CIFS, NFS, and NDMP services on all nodes. Run the following commands on the node hosting the active management console:
ibrix_server -s -t cifs -c stop
ibrix_server -s -t nfs -c stop
ibrix_server -s -t ndmp -c stop
If you are using CIFS, verify that all likewise services are down on all file serving nodes:
ps -ef | grep likewise
Use kill -9 to stop any likewise services that are still running.
If you are using NFS, verify that all NFS processes are stopped:
ps -ef | grep nfs
If necessary, use the following command to stop NFS services:
/etc/init.d/nfs stop
Use kill -9 to stop any NFS processes that are still running.
If necessary, run the following command on all nodes to find any open file handles for the mounted file systems:
lsof </mountpoint>
Use kill -9 to stop any processes that still have open file handles on the file systems.
12. Unmount each file system manually:
ibrix_umount -f FSNAME
Wait up to 15 minutes for the file systems to unmount. Troubleshoot any issues with unmounting file systems before proceeding with the upgrade. See File system unmount issues (page 94).
nodes. The management console is in active mode on the node where the upgrade was run, and is in passive mode on the other file serving nodes. If the cluster includes a dedicated Management Server, the management console is installed in passive mode on that server.
5. Upgrade Linux X9000 clients. See Upgrading Linux X9000 clients (page 91).
6. If you received a new license from HP, install it as described in the Licensing chapter in this guide.
Use ibrix_cifsconfig to set the parameters, specifying the value appropriate for your cluster (1=enabled, 0=disabled). The following examples set the parameters to the default values for the 6.0 release:
ibrix_cifsconfig -t -S "smb_signing_enabled=0, smb_signing_required=0"
ibrix_cifsconfig -t -S "ignore_writethru=1"
The SMB signing feature specifies whether clients must support SMB signing to access CIFS shares. See the HP X9000 File Serving Software File System User Guide for more information about this feature. When ignore_writethru is enabled, X9000 software ignores writethru buffering to improve CIFS write performance on some user applications that request it.
7. Mount file systems on Linux X9000 clients.
8. If the cluster network is configured on bond1, the 6.0 release requires that the management console VIF (Agile_Cluster_VIF) also be on bond1. To check your system, run the ibrix_nic -l and ibrix_fm -f commands. Verify that the TYPE for bond1 is set to Cluster and that the IP_ADDRESS for both nodes matches the subnet or network on which your management consoles are registered. For example:
[root@ib121-121 fmt]# ibrix_nic -l
HOST                          IFNAME  TYPE    STATE                   IP_ADDRESS    MAC_ADDRESS       BACKUP_HOST BACKUP_IF ROUTE VLAN_TAG LINKMON
----------------------------- ------- ------- ----------------------- ------------- ----------------- ----------- --------- ----- -------- -------
ib121-121                     bond1   Cluster Up, LinkUp              10.10.121.121 10:1f:74:35:a1:30                                      No
ib121-122                     bond1   Cluster Up, LinkUp              10.10.121.122 10:1f:74:35:83:c8                                      No
ib121-121 [Active FM Nonedit] bond1:0 Cluster Up, LinkUp (Active FM)  10.10.121.220                                                        No

[root@ib121-121 fmt]# ibrix_fm -f
NAME      IP ADDRESS
--------- -------------
ib121-121 10.10.121.121
ib121-122 10.10.121.122
If there is a mismatch on your system, you will see errors when connecting to ports 1234 and 9009. To correct this condition, see Moving the management console VIF to bond1 (page 94).
Note any custom tuning parameters, such as file system mount options. When the upgrade is complete, you can reapply the parameters.
Stop any active Remote Replication, data tiering, or Rebalancer tasks running on the cluster. (Use ibrix_task -l to list active tasks.) When the upgrade is complete, you can start the tasks again.
The 6.0 release requires that nodes hosting the agile management console be registered on the cluster network. Run the following command to verify that nodes hosting the agile management console have IP addresses on the cluster network:
ibrix_fm -f
If a node is configured on the user network, see Node is not registered with the cluster network (page 93) for a workaround.
1. Ensure that all nodes are up and running. To determine the status of the cluster nodes, check the dashboard on the GUI or use the ibrix_health command.
2. Obtain the latest HP X9000 Quick Restore DVD version 6.0 ISO image from the HP kiosk at http://www.software.hp.com/kiosk (you will need your HP-provided login credentials).
3. Unmount file systems on Linux X9000 clients.
4. Copy the .iso file onto the server hosting the current active management console.
5. Stop the CIFS, NFS, and NDMP services on all nodes. Run the following commands on the node hosting the active management console:
ibrix_server -s -t cifs -c stop
ibrix_server -s -t nfs -c stop
ibrix_server -s -t ndmp -c stop
If you are using CIFS, verify that all likewise services are down on all file serving nodes:
ps -ef | grep likewise
Use kill -9 to stop any likewise services that are still running.
If you are using NFS, verify that all NFS processes are stopped:
ps -ef | grep nfs
If necessary, use the following command to stop NFS services:
/etc/init.d/nfs stop
Use kill -9 to stop any NFS processes that are still running.
If necessary, run the following command on all nodes to find any open file handles for the mounted file systems:
lsof </mountpoint>
Use kill -9 to stop any processes that still have open file handles on the file systems.
6. Unmount each file system manually:
ibrix_umount -f FSNAME
Wait up to 15 minutes for the file systems to unmount. Troubleshoot any issues with unmounting file systems before proceeding with the upgrade. See File system unmount issues (page 94).
7. Run the following command, specifying the absolute location of the local iso copy as the argument:
/usr/local/ibrix/setup/upgrade <iso>
The upgrade script performs all necessary upgrade steps on every server in the cluster and logs progress in the file /usr/local/ibrix/setup/upgrade.log. For a normal cluster configuration, the upgrade script can take up to 20 minutes to complete. After the script completes, each server will be automatically rebooted and will begin installing the latest software.
8. After the install is complete, the upgrade process automatically restores node-specific configuration information and the cluster should be running the latest software. The upgrade process also ensures that the management console is available on all file serving nodes and installs the management console in passive mode on any dedicated Management Servers. If an UPGRADE FAILED message appears on the active management console, see the specified log file for details.
9. Upgrade Linux X9000 clients. See Upgrading Linux X9000 clients (page 91).
10. If you received a new license from HP, install it as described in the Licensing chapter in this guide.
Use ibrix_cifsconfig to set the parameters, specifying the value appropriate for your cluster (1=enabled, 0=disabled). The following examples set the parameters to the default values for the 6.0 release:
   ibrix_cifsconfig -t -S "smb_signing_enabled=0, smb_signing_required=0"
   ibrix_cifsconfig -t -S "ignore_writethru=1"
   The SMB signing feature specifies whether clients must support SMB signing to access CIFS shares. See the HP X9000 File Serving Software File System User Guide for more information about this feature. When ignore_writethru is enabled, X9000 software ignores writethru buffering to improve CIFS write performance for user applications that request it.
7. Mount file systems on Linux X9000 clients.
8. If the cluster network is configured on bond1, the 6.0 release requires that the management console VIF (Agile_Cluster_VIF) also be on bond1. To check your system, run the ibrix_nic -l and ibrix_fm -f commands. Verify that the TYPE for bond1 is set to Cluster and that the IP_ADDRESS for both nodes matches the subnet or network on which your management consoles are registered. For example:
[root@ib121-121 fmt]# ibrix_nic -l
HOST                           IFNAME   TYPE     STATE                   IP_ADDRESS     MAC_ADDRESS        BACKUP_HOST  BACKUP_IF  ROUTE  VLAN_TAG  LINKMON
ib121-121                      bond1    Cluster  Up, LinkUp              10.10.121.121  10:1f:74:35:a1:30                                           No
ib121-122                      bond1    Cluster  Up, LinkUp              10.10.121.122  10:1f:74:35:83:c8                                           No
ib121-121 [Active FM Nonedit]  bond1:0  Cluster  Up, LinkUp (Active FM)  10.10.121.220                                                              No
[root@ib121-121 fmt]# ibrix_fm -f
NAME       IP ADDRESS
---------  -------------
ib121-121  10.10.121.121
ib121-122  10.10.121.122
If there is a mismatch on your system, you will see errors when connecting to ports 1234 and 9009. To correct this condition, see Moving the management console VIF to bond1 (page 94).
To determine which node is hosting the agile management console configuration, run the ibrix_fm -i command.
5. If you are using CIFS, verify that all likewise services are down on all file serving nodes:
   ps -ef | grep likewise
   Use kill -9 to stop any likewise services that are still running.
   If you are using NFS, verify that all NFS processes are stopped:
   ps -ef | grep nfs
   If necessary, use the following command to stop NFS services:
   /etc/init.d/nfs stop
   Use kill -9 to stop any NFS processes that are still running.
   If necessary, run the following command on all nodes to find any open file handles for the mounted file systems:
   lsof </mountpoint>
   Use kill -9 to stop any processes that still have open file handles on the file systems.
12. Unmount each file system manually:
   ibrix_umount -f FSNAME
   Wait up to 15 minutes for the file systems to unmount. Troubleshoot any issues with unmounting file systems before proceeding with the upgrade. See File system unmount issues (page 94).
2. Run /usr/local/ibrix/setup/save_cluster_config. This script creates a tgz file named <hostname>_cluster_config.tgz, which contains a backup of the node configuration.
3. Save the <hostname>_cluster_config.tgz file, which is located in /tmp, to the external storage media.
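A short sketch of this backup step; the node name node1 and the backup host backuphost are hypothetical placeholders:

    # Create the configuration backup; the script writes /tmp/<hostname>_cluster_config.tgz
    /usr/local/ibrix/setup/save_cluster_config

    # Copy the archive to external storage (scp to a backup host is one option)
    scp /tmp/node1_cluster_config.tgz backuphost:/backups/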
5. When the following screen appears, enter qr to install the X9000 software on the file serving node.
The server reboots automatically after the software is installed. Remove the DVD from the DVD-ROM drive.
The script also ensures that the management console is available on all file serving nodes and installs the management console in passive mode on any dedicated Management Servers.
4. Upgrade Linux X9000 clients. See Upgrading Linux X9000 clients (page 91).
5. If you received a new license from HP, install it as described in the Licensing chapter in this document.
6. Run the following command to rediscover physical volumes:
   ibrix_pv -a
7. Start any Remote Replication, Rebalancer, or data tiering tasks that were stopped before the upgrade.
8. If you are using CIFS, set the following parameters to synchronize the CIFS software and the management console database:
   smb signing enabled
   smb signing required
   ignore_writethru
Use ibrix_cifsconfig to set the parameters, specifying the value appropriate for your cluster (1=enabled, 0=disabled). The following examples set the parameters to the default values for the 6.0 release:
   ibrix_cifsconfig -t -S "smb_signing_enabled=0, smb_signing_required=0"
   ibrix_cifsconfig -t -S "ignore_writethru=1"
   The SMB signing feature specifies whether clients must support SMB signing to access CIFS shares. See the HP X9000 File Serving Software File System User Guide for more information about this feature. When ignore_writethru is enabled, X9000 software ignores writethru buffering to improve CIFS write performance for user applications that request it.
9. Mount file systems on Linux clients.
10. If the cluster network is configured on bond1, the 6.0 release requires that the management console VIF (Agile_Cluster_VIF) also be on bond1. To check your system, run the ibrix_nic -l and ibrix_fm -f commands. Verify that the TYPE for bond1 is set to Cluster and that the IP_ADDRESS for both nodes matches the subnet or network on which your management consoles are registered. For example:
[root@ib121-121 fmt]# ibrix_nic -l
HOST                           IFNAME   TYPE     STATE                   IP_ADDRESS     MAC_ADDRESS        BACKUP_HOST  BACKUP_IF  ROUTE  VLAN_TAG  LINKMON
ib121-121                      bond1    Cluster  Up, LinkUp              10.10.121.121  10:1f:74:35:a1:30                                           No
ib121-122                      bond1    Cluster  Up, LinkUp              10.10.121.122  10:1f:74:35:83:c8                                           No
ib121-121 [Active FM Nonedit]  bond1:0  Cluster  Up, LinkUp (Active FM)  10.10.121.220                                                              No
[root@ib121-121 fmt]# ibrix_fm -f
NAME       IP ADDRESS
---------  -------------
ib121-121  10.10.121.121
ib121-122  10.10.121.122
If there is a mismatch on your system, you will see errors when connecting to ports 1234 and 9009. To correct this condition, see Moving the management console VIF to bond1 (page 94).
1. Download the latest HP X9000 Client 6.0 package.
2. Expand the tar file.
3. Run the upgrade script:
   ./ibrixupgrade -f
   The upgrade software automatically stops the necessary services and restarts them when the upgrade is complete.
4. Execute the following command to verify that the client is running X9000 software:
/etc/init.d/ibrix_client status
IBRIX Filesystem Drivers loaded
IBRIX IAD Server (pid 3208) running...
The IAD service should be running, as shown in the previous sample output. If it is not, contact HP Support.
If the minor kernel update is compatible, install the update with the vendor RPM and reboot the system. The X9000 client software is then automatically updated with the new kernel, and X9000 client services start automatically. Use the ibrix_version -l -C command to verify the kernel version on the client. NOTE: To use the verify_client command, the X9000 client software must be installed.
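As a hedged illustration of applying a compatible minor kernel update on a Linux X9000 client; the RPM file name is a hypothetical placeholder for the package supplied by your OS vendor:

    # Install the vendor kernel RPM and reboot; the X9000 client software is
    # updated for the new kernel automatically and client services start at boot
    rpm -ivh kernel-2.6.18-XXX.el5.x86_64.rpm
    reboot

    # After the reboot, confirm the kernel and X9000 client versions
    ibrix_version -l -C
    /etc/init.d/ibrix_client status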
Automatic upgrade
Check the following:
If the initial execution of /usr/local/ibrix/setup/upgrade fails, check /usr/local/ibrix/setup/upgrade.log for errors. It is imperative that all servers are up and running the X9000 Software before you execute the upgrade script.
If the install of the new OS fails, power cycle the node and try rebooting. If the install does not begin after the reboot, power cycle the machine and select the upgrade line from the grub boot menu.
After the upgrade, check /usr/local/ibrix/setup/logs/postupgrade.log for errors or warnings.
If configuration restore fails on any node, look at /usr/local/ibrix/autocfg/logs/appliance.log on that node to determine which feature restore failed. Look at the specific feature log file under /usr/local/ibrix/setup/logs/ for more detailed information. To retry the copy of configuration, use the following command:
/usr/local/ibrix/autocfg/bin/ibrixapp upgrade -f -s
If the install of the new image succeeds, but the configuration restore fails and you need to revert the server to the previous install, run the following command and then reboot the machine. This step causes the server to boot from the old version (the alternate partition).
/usr/local/ibrix/setup/upgrade/boot_info -r
If the public network interface is down and inaccessible for any node, power cycle that node.
Manual upgrade
Check the following:
If the restore script fails, check /usr/local/ibrix/setup/logs/restore.log for details.
If configuration restore fails, look at /usr/local/ibrix/autocfg/logs/appliance.log to determine which feature restore failed. Look at the specific feature log file under /usr/local/ibrix/setup/logs/ for more detailed information. To retry the copy of configuration, use the following command:
/usr/local/ibrix/autocfg/bin/ibrixapp upgrade -f -s
1. If the node is hosting the active management console, as in this example, stop the management console on that node:
[root@ib51-101 ibrix]# /etc/init.d/ibrix_fusionmanager stop
Stopping Fusion Manager Daemon                             [  OK  ]
[root@ib51-101 ibrix]#
2. On the node now hosting the active management console (ib51-102 in the example), unregister node ib51-101:
[root@ib51-102 ~]# ibrix_fm -u ib51-101 Command succeeded!
3. On the node hosting the active management console, register node ib51-101 and assign the correct IP address:
[root@ib51-102 ~]# ibrix_fm -R ib51-101 -I 10.10.51.101 Command succeeded!
NOTE: When registering a management console, be sure the hostname specified with -R matches the hostname of the server.
The ibrix_fm commands now show that node ib51-101 has the correct IP address and node ib51-102 is hosting the active management console.
[root@ib51-102 ~]# ibrix_fm -f
NAME      IP ADDRESS
--------  ------------
ib51-101  10.10.51.101
ib51-102  10.10.51.102
[root@ib51-102 ~]# ibrix_fm -i
FusionServer: ib51-102 (active, quorum is running)
==================================================
2. Define a new Agile_Cluster_VIF_DEV and the associated Agile_Cluster_VIF_IP.
3. Change the management console's local cluster address from bond0 to bond1 in the X9000 database:
   a. Change the previously defined Agile_Cluster_VIF_IP registration address. On the active management console, specify a new Agile_Cluster_VIF_IP on the bond1 subnet:
      ibrix_fm -t -I <new_Agile_Cluster_VIF_IP>
      NOTE: The ibrix_fm -t command is not documented, but can be used for this operation.
   b. On each file serving node, edit the /etc/ibrix/iadconf.xml file:
      vi /etc/ibrix/iadconf.xml
      In the file, enter the new Agile_Cluster_VIF_IP address on the following line:
      <property name="fusionManagerPrimaryAddress" value="xxx.xxx.xxx.xxx"/>
4. On the active agile management console, re-register all backup management consoles using the bond1 Local Cluster IP address for each node:
   # ibrix_fm -R <management_console_name> -I <local_cluster_network_IP>
   NOTE: When registering a management console, be sure the hostname specified with -R matches the hostname of the server.
5. Return the backup management consoles to passive mode:
   # ibrix_fm -m passive
6. Place the active management console into maintenance mode to force it to fail over. (It can take up to a minute for a passive management console to take the active role.)
   # ibrix_fm -m maintenance
7. Unregister the original active management console from the new active management console:
   # ibrix_fm -u <original_active_management_console_name>
8. Re-register that management console with the new values and then move it to passive mode:
   # ibrix_fm -R <agileFM_name> -I <local_cluster_network_ip>
   # ibrix_fm -m passive
9. Verify that all management consoles are registered properly on the bond1 local cluster network:
   # ibrix_fm -f
   You should see all registered management consoles and their new local cluster IP addresses. If an entry is incorrect, unregister that management console and re-register it.
10. Reboot the file serving nodes. After you have completed the procedure, if the management console is not failing over or the /usr/local/ibrix/log/Iad.log file reports errors communicating to port 1234 or 9009, contact HP Support for further assistance.
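For reference, the re-registration sequence above can be condensed into the following hedged sketch. The console names fm1 and fm2 and the 10.10.2.x bond1 addresses are hypothetical placeholders, and each command is run on the console indicated in the corresponding step:

    # Step 3a: move the Agile_Cluster_VIF_IP to the bond1 subnet (on the active console)
    ibrix_fm -t -I 10.10.2.200

    # Steps 4-5: re-register the backup console on its bond1 address, then return it to passive mode
    ibrix_fm -R fm2 -I 10.10.2.102
    ibrix_fm -m passive

    # Steps 6-8: force a failover, then unregister and re-register the original active console
    ibrix_fm -m maintenance
    ibrix_fm -u fm1
    ibrix_fm -R fm1 -I 10.10.2.101
    ibrix_fm -m passive

    # Step 9: verify the registrations before rebooting the file serving nodes
    ibrix_fm -f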
13 Licensing
This chapter describes how to view your current license terms and how to obtain and install new X9000 Software product license keys.
The output reports your current node count and capacity limit. In the output, Segment Server refers to file serving nodes.
2. When prompted, enter the password for the management console.
3. Launch the AutoPass GUI:
   /usr/local/ibrix/bin/fusion-license-manager
4. In the AutoPass GUI, go to Tools, select Configure Proxy, and configure your proxy settings.
5. Click Retrieve/Install License > Key and then retrieve and install your license key.
   If the management console does not have an Internet connection, retrieve the license from a machine that does have a connection, deliver the file with the license to the management console machine, and then use the AutoPass GUI to import the license.
Upgrading firmware
IMPORTANT: The X9720 system is shipped with the correct firmware and drivers. Do not upgrade firmware or drivers unless the upgrade is recommended by HP Support or is part of an X9720 patch provided on the HP web site. The patch release notes describe how to install the firmware.
4. Install the software on the server blade. The Quick Restore DVD is used for this purpose. See Recovering the X9720 Network Storage System (page 138) for more information.
5. Set up failover. For more information, see the HP X9000 File Serving Software User Guide.
6. Enable high availability (automated failover) by running the following command on server 1:
   # ibrix_server -m
7. Discover storage on the server blade:
   ibrix_pv -a
8. To enable health monitoring on the server blade, first unregister the vendor storage:
   ibrix_vs -d -n <vendor storage name>
   Next, re-register the vendor storage. In the command, <sysName> is, for example, x710. The <hostlist> is a range inside square brackets, such as x710s[2-4].
   ibrix_vs -r -n <sysName> -t exds -I 172.16.1.1 -U exds -P <password> -h <hostlist>
9. If you made any other customizations to other servers, you may need to apply them to the newly installed server.
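A condensed, hedged sketch of steps 6 through 8 above; the vendor storage name x710, the OA address 172.16.1.1, and the host range are example values taken from this procedure or hypothetical placeholders:

    # Enable automated failover from server 1
    ibrix_server -m

    # Rediscover storage on the reinstalled blade
    ibrix_pv -a

    # Re-register the vendor storage so health monitoring covers the new blade
    ibrix_vs -d -n x710
    ibrix_vs -r -n x710 -t exds -I 172.16.1.1 -U exds -P <password> -h x710s[2-4]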
CAUTION: When handling system components, equipment may be damaged by electrostatic discharge (ESD). Use proper anti-static protection at all times:
Keep the replacement component in the ESD bag until needed.
Wear an ESD wrist strap grounded to an unpainted surface of the chassis.
Touch an unpainted surface of the chassis before handling the component.
Never touch the connector pins.
Carton contents
HP X9700c, containing 12 disk drives
HP X9700cx (also known as HP 600 Modular Disk System [MDS600]), containing 70 disk drives
Rack mounting hardware
Two-meter cables (quantity 4)
Four-meter cables (quantity 2)
6   X9700cx 3
7   TFT monitor and keyboard
8   c-Class Blade Enclosure
9   X9700cx 2
10  X9700cx 1
System, the X9700c 5 component goes in slots U31 through U32 (see callout 4), and the X9700cx 5 goes in slots U1 through U5 (see callout 8).
Installation procedure
Add the capacity blocks one at a time, until the system contains the maximum it can hold. The factory pre-provisions the additional capacity blocks with the standard LUN layout and capacity block settings (for example, rebuild priority). Parity is initialized on all LUNs. The LUNs arrive blank. IMPORTANT: You can add a capacity block to a new installation or to an existing system. The existing system can be either online or offline; however, it might be necessary to reboot the blades to make the new storage visible to the cluster.
1. Secure the front end of the rails to the cabinet in the correct location.
   NOTE: Identify the left (L) and right (R) rack rails by markings stamped into the sheet metal.
2. Secure the back end of the rails to the cabinet.
3. Insert the X9700c into the cabinet.
4. Use the thumbscrews on the front of the chassis to secure it to the cabinet.
c. Slide the front end of the rail toward the front column of the rack. When fully seated, the rack rail will lock into place.
d. Repeat the procedure for the right rack rail.
2. Insert the X9700cx into the cabinet.
   WARNING! The X9700cx is very heavy. Use an appropriate lifting device to insert it into the cabinet.
3.
1  X9700c
2  X9700cx primary I/O module (drawer 2)
3  X9700cx secondary I/O module (drawer 2)
4  X9700cx primary I/O module (drawer 1)
5  X9700cx secondary I/O module (drawer 1)
Base cabinet
Callouts 1 through 3 indicate additional X9700c components.
4  X9700c 1
5  SAS switch ports 1 through 4 (in interconnect bay 3 of the c-Class Blade Enclosure). Ports 2 through 4 are used by additional capacity blocks.
6  Reserved for expansion cabinet use.
7  SAS switch ports 1 through 4 (in interconnect bay 4 of the c-Class Blade Enclosure). Ports 2 through 4 are used by additional capacity blocks.
8  Reserved for expansion cabinet use.
Expansion cabinet
1  X9700c 8
2  X9700c 7
3  X9700c 6
4  X9700c 5
5  Used by base cabinet.
6  SAS switch ports 5 through 8 (in interconnect bay 3 of the c-Class Blade Enclosure).
7  Used by base cabinet.
8  SAS switch ports 5 through 8 (in interconnect bay 4 of the c-Class Blade Enclosure).
Do not disable the power cord grounding plug. The grounding plug is an important safety feature.
Plug the power cord into a grounded (earthed) electrical outlet that is easily accessible at all times.
Do not route the power cord where it can be walked on or pinched by items placed against it. Pay particular attention to the plug, electrical outlet, and the point where the cord extends from the storage system.
The X9720 Network Storage System cabinet comes with the power cords tied to the cabinet. Connect the power cords to the X9700cx first, and then connect the power cords to the X9700c. IMPORTANT: If your X9720 Network Storage System cabinet contains more than two capacity blocks, you must connect all the PDUs to a power source.
4. The result should include 11 LUNs for each 82-TB capacity block, and 19 LUNs for each 164-TB capacity block. If the LUNs do not appear, take these steps:
   Run the hpacucli rescan command.
   Check /dev/cciss again for the new LUNs.
   If the LUNs still do not appear, reboot the nodes.
IMPORTANT: If you added the capacity block to an existing system that must remain online, reboot the nodes according to the procedure Performing a rolling reboot (page 68). If you added the capacity block to an existing system that is offline, you can reboot all nodes at once.
The capacity block is pre-configured in the factory with data LUNs; however, there are no logical volumes (segments) on the capacity block. To import the LUNs and create segments, take these steps:
1. Run the ibrix_pv command to import the LUNs.
2. Run the ibrix_pv -p -h command to verify that the LUNs are visible to all servers.
3. Run the ibrix_fs command to bind the segments and expand (or create) file systems.
For more information about creating or extending file systems, see the HP X9000 File Serving Software File System User Guide.
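The verification and import flow described above might look like the following hedged sketch; the commands are the ones referenced in this section, and any additional arguments your cluster requires are omitted:

    # Confirm that the new LUNs are visible to the operating system
    ls /dev/cciss
    hpacucli rescan          # only needed if LUNs are missing

    # Import the LUNs and confirm they are visible to all servers
    ibrix_pv -a
    ibrix_pv -p -h

    # Then bind segments into a file system with ibrix_fs
    # (see the File System User Guide for the full syntax)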
2. Register the vendor storage. In the command, the IP, USERNAME, and PASSWORD are for the OA.
   ibrix_vs -r -n STORAGENAME -t exds -I IP(s) -U USERNAME -P PASSWORD
For more information about ibrix_vs, see the HP X9000 File Serving Software CLI Reference Guide.
2. Run the following command, specifying the serial number YYYYYYYYYY. In the command, XX is the desired name for the new capacity block:
   hpacucli ctrl csn=YYYYYYYYYY modify chassisname=XX
When the chassis name is set, it appears in the exds_stdiag output, in the location specified by XX in the following example:
ctlr P89A40E9SWY02E ExDS9100cc in XX/SGA00302RB slot 1 fw 0134.2010120901 boxes 3 disks 82 luns 11 batteries 2/OK cache OK
box 1 ExDS9100c  sn SGA00302RB fw 1.56 temp OK fans OK,OK,OK,OK power OK,OK
box 2 ExDS9100cx sn CN800100CP fw 2.66 temp OK fans OK,OK power OK,OK,OK,OK
box 3 ExDS9100cx sn CN800100CP fw 2.66 temp OK fans OK,OK power OK,OK,OK,OK
The chassis name also appears in the output from ibrix_vs -i, in the location specified by XX in the following example:
6001438006E50B800506070830950007 X9720 Logical Unit OK CB: XX, LUN: 7:
2. Delete the volume groups, logical volumes, and physical volumes associated with the LUN.
3. Disconnect the SAS cables connecting both array controllers to the SAS switches.
CAUTION: Ensure that you remove the correct capacity block. Removing the wrong capacity block could result in data that is inaccessible.
15 Troubleshooting
Collecting information for HP Support with Ibrix Collect
Ibrix Collect is an enhancement to the log collection utility that already exists in X9000 (Support Ticket). If system issues occur, you can collect relevant information for diagnosis by HP Support. The collection can be triggered manually using the GUI or CLI, or automatically during a system crash. Ibrix Collect gathers the following information:
Specific operating system and X9000 command results and logs
Crash digester results
Summary of collected logs, including error/exception/failure messages
Collection of information from LHN and MSA storage connected to the cluster
NOTE: When the cluster is upgraded from an X9000 software version earlier than 6.0, the support tickets collected using the ibrix_supportticket command will be deleted. Before performing the upgrade, download a copy of the archive files (.tgz) from the /admin/platform/diag/supporttickets directory.
Collecting logs
To collect logs and command results using the X9000 Management Console GUI: 1. Select Cluster Configuration, and then select Data Collection. 2. Click Collect.
3. The data will be stored locally on each node in a compressed archive file <nodename>_<filename>_<timestamp>.tgz under /local/ibrixcollect. Enter the name of the zip file that contains the collected data. The default location for this zip file is on the active management console node at /local/ibrixcollect/archive.
4. Click Okay.
To collect logs and command results using the CLI, use the following command:
ibrix_collect -c -n NAME
NOTE: Only one manual collection of data is allowed at a time.
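For example, a manual collection might be triggered as follows; the collection name upgrade_issue_01 is a hypothetical placeholder:

    # Trigger a one-off collection; only one manual collection can run at a time
    ibrix_collect -c -n upgrade_issue_01

    # Per-node archives land under /local/ibrixcollect, and the combined zip
    # under /local/ibrixcollect/archive on the active management console node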
NOTE: When a node recovers from a system crash, the vmcore under the /var/crash/<timestamp> directory is processed. Once processed, the directory is renamed /var/crash/<timestamp>_PROCESSED. HP Support may request that you send this information to assist in resolving the system crash.
NOTE: HP recommends that you maintain your crash dumps in the /var/crash directory. Ibrix Collect processes only the core dumps present in the /var/crash directory (linked to /local/platform/crash). HP also recommends that you monitor this directory and remove unnecessary processed crashes.
NOTE: The average size of the archive file depends on the size of the logs present on individual nodes in the cluster. NOTE: You may later be asked to email this final zip file to HP Support. Be aware that the final zip file is not the same as the zip file that you receive in your email.
c. Under General Settings, enable or disable automatic collection by checking or unchecking the appropriate box.
d. Enter the number of data sets to be retained in the cluster in the text box.
To enable or disable automatic data collection using the CLI, use the following command:
ibrix_collect -C -a <Yes\No>
To set the number of data sets to be retained in the cluster using the CLI, use the following command:
ibrix_collect -C -r NUMBER
2. To configure emails containing a summary of the collected information for each node to be sent automatically to your desktop after every data collection event:
   a. Select Cluster Configuration, and then select Ibrix Collect.
   b. Click Modify.
c. Under Email Settings, enable or disable sending the cluster configuration by email by checking or unchecking the appropriate box.
d. Fill in the remaining required fields for the cluster configuration and click Okay.
To set up email settings to send cluster configurations using the CLI, use the following command:
ibrix_collect -C -m <Yes\No> [-s <SMTP_server>] [-f <From>] [-t <To>]
NOTE: More than one email ID can be specified for the -t option, separated by a semicolon.
The From and To settings for this SMTP server are specific to Ibrix Collect.
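A hedged example of the email configuration; the SMTP server and addresses are placeholders:

    # Enable emailing of the collection summary after each data collection event
    ibrix_collect -C -m Yes -s smtp.example.com -f storage-admin@example.com -t "ops1@example.com;ops2@example.com"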
NOTE: These xml files should be modified carefully. Any missing tags during modification might cause Ibrix Collect to not work properly.
Escalating issues
The X9720 Network Storage System escalate tool produces a report on the state of the system. When you report a problem to HP technical support, you will always be asked for an escalate report, so it saves time if you include the report up front. Run the exds_escalate command as shown in the following example: [root@glory1 ~]# exds_escalate
The escalate tool needs the root password to perform some actions. Be prepared to enter the root password when prompted. There are a few useful options; however, you can usually run without options. The -h option displays the available options. It is normal for the escalate command to take a long time (over 20 minutes). When the escalate tool finishes, it generates a report and stores it in a file such as /exds_glory1_escalate.tgz.gz. Copy this file to another system and send it to HP Services.
The exds_stdiag utility prints a report showing a summary of the storage layout, called the map. It then analyzes the map and prints information about each check as it is performed. Any line starting with the asterisk (*) character indicates a problem. The exds_stdiag utility does not access the utility file system, so it can be run even if storage problems prevent the utility file system from mounting.
Syntax
# exds_stdiag [--raw=<filename>] The --raw=<filename> option saves the raw data gathered by the tool into the specified file in a format suitable for offline analysis, for example by HP support personnel. Following is a typical example of output from this command:
[root@kudos1 ~]# exds_stdiag
ExDS storage diagnostic rev 7336
Storage visible to kudos1 Wed 14 Oct 2009 14:15:33 +0000
node 7930RFCC BL460c.G6 fw I24.20090620 cpus 2 arch Intel
hba 5001438004DEF5D0 P410i in 7930RFCC fw 2.00 boxes 1 disks 2 luns 1 batteries 0/cache -
hba PAPWV0F9SXA00S P700m in 7930RFCC fw 5.74 boxes 0 disks 0 luns 0 batteries 0/cache -
switch HP.3G.SAS.BL.SWH in 4A fw 2.72
switch HP.3G.SAS.BL.SWH in 3A fw 2.72
switch HP.3G.SAS.BL.SWH in 4B fw 2.72
switch HP.3G.SAS.BL.SWH in 3B fw 2.72
ctlr P89A40A9SV600X ExDS9100cc in 01/USP7030EKR slot 1 fw 0126.2008120502 boxes 3 disks 80 luns 10 batteries 2/OK cache OK
box 1 ExDS9100c  sn USP7030EKR fw 1.56 temp OK fans OK,OK,OK,OK power OK,OK
box 2 ExDS9100cx sn CN881502JE fw 1.28 temp OK fans OK,OK power OK,OK,OK,OK
box 3 ExDS9100cx sn CN881502JE fw 1.28 temp OK fans OK,OK power OK,OK,OK,OK
ctlr P89A40A9SUS0LC ExDS9100cc in 01/USP7030EKR slot 2 fw 0126.2008120502 boxes 3 disks 80 luns 10 batteries 2/OK cache OK
box 1 ExDS9100c  sn USP7030EKR fw 1.56 temp OK fans OK,OK,OK,OK power OK,OK
box 2 ExDS9100cx sn CN881502JE fw 1.28 temp OK fans OK,OK power OK,OK,OK,OK
box 3 ExDS9100cx sn CN881502JE fw 1.28 temp OK fans OK,OK power OK,OK,OK,OK
Analysis:
disk problems on USP7030EKR
* box 3 drive [10,15] missing or failed
ctlr firmware problems on USP7030EKR
* 0126.2008120502 (min 0130.2009092901) on ctlr P89A40A9SV600
exds_netdiag
The exds_netdiag utility performs tests on and retrieves data from the networking components in an X9720 Network Storage System. It performs the following functions:
Reports failed Ethernet interconnects (failed as reported by the HP Blade Chassis Onboard Administrator)
Reports missing, failed, or degraded site uplinks
Reports missing or failed NICs in server blades
Sample output
exds_netperf
The exds_netperf tool measures network performance. The tool measures performance between a client system and the X9720 Network Storage System. Run this test when the system is first installed. Where networks are working correctly, the performance results should match the expected link rate of the network; for example, for a 1GbE link, expect about 90 MB/s. You can also run the test at other times to determine whether degradation has occurred. The exds_netperf utility measures streaming performance in two modes:
Serial: Streaming I/O is done to each network interface in turn. The host where exds_netperf is run is the client that is being tested.
Parallel: Streaming I/O is done on all network interfaces at the same time. This test uses several clients.
The serial test measures point-to-point performance. The parallel test measures more components of the network infrastructure and could uncover problems not visible with the serial test. Keep in mind that overall throughput of the parallel test is probably limited by the clients' network interfaces. The test is run as follows:
Copy the contents of /opt/hp/mxso/diags/netperf-2.1.p13 to an x86_64 client host.
Copy the test scripts to one client from which you will be running the test. The scripts required are exds_netperf, diags_lib.bash, and nodes_lib.bash from the /opt/hp/mxso/diags/bin directory.
Run exds_netserver -s <server_list> to start a receiver for the test on each X9720 Network Storage System server blade, as shown in the following example:
exds_netserver -s glory[1-8]
Read the README.txt file for instructions on building exds_netperf, then build and install it on every client you plan to use for the test.
On the client host, run exds_netperf in serial mode against each X9720 Network Storage System server in turn. For example, if there are two servers whose eth2 addresses are 16.123.123.1 and 16.123.123.2, use the following command:
# exds_netperf --serial --server 16.123.123.1 16.123.123.2
On a client host, run exds_netperf in parallel mode, as shown in the following example. In this example, hosts blue and red are the tested clients (exds_netperf itself could run on one of these hosts or on a third host):
# exds_netperf --parallel \
  --server 16.123.123.1,16.123.123.2 \
  --clients red,blue
Normally, the IP addresses you use are the IP addresses of the host interfaces (eth2, eth3, and so on).
LUN layout
The LUN layout is presented here in case it is needed for troubleshooting.
For a capacity block with 1 TB HDDs:
2x 1 GB LUNs: These were used by the X9100 for membership partitions, and remain in the X9720 for backwards compatibility. Customers may use them as they see fit, but HP does not recommend their use for normal data storage, due to performance limitations.
1x 100 GB LUN: This is intended for administrative use, such as backups. Bandwidth to these disks is shared with the 1 GB LUNs above and one of the data LUNs below.
8x ~8 TB LUNs: These are intended as the main data storage of the product. Each is supported by ten disks in a RAID6 configuration; the first LUN shares its disks with the three LUNs described above.
For a 164-TB capacity block:
The 1 GB and 100 GB LUNs are the same as above.
16x ~8 TB LUNs: These are intended as the main data storage of the product. Each pair of LUNs is supported by a set of ten disks in a RAID6 configuration; the first pair of LUNs shares its disks with the three LUNs described above.
X9720 monitoring
The X9720 actively monitors the following components in the system:
Blade Chassis: Power Supplies, Fans, Networking Modules, SAS Switches, Onboard Administrator modules.
Blades: Local hard drives, access to all 9100cc controllers.
9100c: Power Supplies, Fans, Hard Drives, 9100cc controllers, and LUN status.
9100cx: Power Supplies, Fans, I/O modules, and Hard Drives.
If any of these components fail, an event is generated. Depending on how you have Events configured, each event will generate an e-mail or SNMP trap. Some components may generate multiple events if they fail. Failed components will be reported in the output of ibrix_vs -i, and failed storage components will be reported in the output of ibrix_health -V -i.
Failure indications
A failed or halted X9700c controller is indicated in a number of ways, as follows:
The exds_stdiag report could indicate a failed or halted X9700c controller.
An email alert.
In the X9000 management console, the logical volumes in the affected capacity block show a warning.
The amber fault LED on the X9700c controller is flashing.
The seven-segment display shows an H1, H2, C1, or C2 code. The second digit represents the controller with a problem. For example, H1 indicates a problem with controller 1 (the left controller, as viewed from the back).
1. Verify that SAS cables are connected to the correct controller and I/O module. The following diagram shows the correct wiring of the SAS cables.
1  X9700c
2  X9700cx primary I/O module (drawer 2)
3  X9700cx secondary I/O module (drawer 2)
4  X9700cx primary I/O module (drawer 1)
5  X9700cx secondary I/O module (drawer 1)
As indicated in the figure above, the X9700c controller 1 (left) is connected to the primary (top) X9700cx I/O modules, and controller 2 (right) is connected to the secondary (bottom) I/O modules. If possible, trace one of the SAS cables to validate that the system is wired correctly.
2. Check the seven-segment display and note the following as it applies to your situation:
   If the seven-segment display shows on, then both X9700c controllers are operational.
   If the seven-segment display shows on but there are path errors as described earlier in this document, then the problem could be with the SAS cables connecting the X9700c controller to the SAS switch in the blade chassis. Replace the SAS cable and run the exds_stdiag command, which should report two controllers. If not, try connecting the SAS cable to a different port of the SAS switch.
   If the seven-segment display does not show on, it shows an alphanumeric code. The number represents the controller that has an issue. For example, C1 indicates the issue is with controller 1 (the left controller). Press the down button beside the seven-segment display. The display now shows a two-digit number. The following table describes the codes, where n is 1 or 2 depending on the affected controller:
Code: Hn 67
Explanation: Controller n is halted because there is a connectivity problem with an X9700cx I/O module.
Next steps: Continue to the next step.

Code: Cn 02
Explanation: Controller n is halted because there is a connectivity problem with an X9700cx I/O module.
Next steps: Continue to the next step.

Code: Other code
Explanation: The fault is in the X9700c controller. The fault is not in the X9700cx or the SAS cables connecting the controller to the I/O modules.
Next steps: Re-seat the controller as described later in this document. If the fault does not clear, report to HP Support to obtain a replacement controller.
3. Check the SAS cables connecting the halted X9700c controller and the X9700cx I/O modules. Disconnect and re-insert the SAS cables at both ends. In particular, ensure that the SAS cable is fully inserted into the I/O module and that the bottom port on the X9700cx I/O module is being used. If there are obvious signs of damage to a cable, replace the SAS cable.
4. Re-seat the halted X9700c controller:
   a. Push the controller fully into the chassis until it engages.
   b. Reattach the SAS cable that connects the X9700c to the SAS switch in the c-Class blade enclosure. This is plugged into port 1.
5. Wait for the controller to boot, then check the seven-segment display.
   If the seven-segment display shows on, then the fault has been corrected and the system has returned to normal.
   If the seven-segment display continues to show an Hn 67 or Cn 02 code, continue to the next step.
6. At this stage, you have identified that the problem is with an X9700cx I/O module. Determine if the fault lies with the top or bottom modules. For example, if the seven-segment display shows C1 02, then the fault may lie with one of the primary (top) I/O modules.
7. Unmount all file systems using the X9000 management console. For more information, see the HP X9000 File Serving Software User Guide. Examine the I/O module LEDs. If an I/O module has an amber LED:
   a. Replace the I/O module as follows:
      a. Detach the SAS cable connecting the I/O module to the X9700c controller.
      b. Ensure that the disk drawer is fully pushed in and locked.
      c. Remove the I/O module.
      d. Replace with a new I/O module (it will not engage with the disk drawer unless the drawer is fully pushed in).
      e. Re-attach the SAS cable. Ensure it is attached to the IN port (the bottom port).
   b. Re-seat controller 1 as described below in the section Re-seating an X9700c controller (page 120).
   c. Wait for the controller to boot, and then check the seven-segment display. If the seven-segment display shows on, then the fault has been corrected, the system has returned to normal, and you can proceed to step 11. If the seven-segment display continues to show an Hn 67 or Cn 02 code, continue to the next step.
8. One of the I/O modules may have failed even though the amber LED is not on. Replace the I/O modules one by one as follows:
   a. Remove the left (top or bottom, as identified in step 4) I/O module and replace it with a new module as follows:
      a. Detach the SAS cable connecting the I/O module to the X9700c controller.
      b. Ensure that the disk drawer is fully pushed in and locked.
      c. Remove the I/O module.
      d. Replace with a new I/O module (it will not engage with the disk drawer unless the drawer is fully pushed in).
      e. Re-attach the SAS cable. Ensure it is attached to the IN port (the bottom port).
   b. Re-seat the appropriate X9700c controller as described below in the section Re-seating an X9700c controller (page 120).
   c. Wait for the controller to boot. If the seven-segment display shows on, then the fault has been corrected, the system has returned to normal, and you can proceed to step 11. If the seven-segment display continues to show an Hn 67 or Cn 02 code, continue to the next step.
   d. If the fault does not clear, remove the left I/O module and reinsert the original I/O module.
   e. Re-seat the appropriate X9700c controller as described below in the section Re-seating an X9700c controller (page 120).
   f. Wait for the controller to boot. If the seven-segment display shows on, then the fault has been corrected, the system has returned to normal, and you can proceed to step 11. If the seven-segment display continues to show an Hn 67 or Cn 02 code, continue to the next step.
   g. If the fault does not clear, remove the right I/O module and replace it with the new I/O module.
   h. Re-seat the appropriate X9700c controller as described below in the section Re-seating an X9700c controller (page 120).
9. If the seven-segment display now shows on, run the exds_stdiag command and validate that both controllers are seen by exds_stdiag.
10. If the fault has not cleared at this stage, there could be a double fault (that is, failure of two I/O modules). Alternatively, one of the SAS cables could be faulty. Contact HP Support to help identify the fault or faults. Run the exds_escalate command to generate an escalate report for use by HP Support, as follows:
# exds_escalate
11. At this stage, an X9700cx I/O module has been replaced. Change the firmware of the I/O modules to the version included in the X9720 Network Storage System:
   a. Identify the serial number of the array using the command: exds_stdiag
   b. Run the X9700cx I/O module firmware update command:
      # /opt/hp/mxso/firmware/exds9100cx_scexe -s
      The command will pause to gather the system configuration, which can take several minutes on a large system. It then displays the serial number of an array and asks if it should be updated. If the serial number displayed is not the array to be updated, select N for no. The command will continue to display serial numbers. When it reaches the desired array, select Y to update the firmware.
      NOTE: If you reply Y to the wrong array, let the command finish normally. This can do no harm because I/O has been suspended as described above (and the I/O modules should already be at the level included in the X9720 Network Storage System).
   c. After the array has been flashed, you can exit the update utility by entering q to quit.
   d. Press the power buttons to power off the affected X9700c and X9700cx.
   e. Re-apply power to the capacity block. Power on the X9700cx first, then the associated X9700c. The firmware update occurs during reboot, so the reboot could take longer than usual (up to 25 minutes). Wait until the seven-segment display of all X9700c enclosures goes to the on state before proceeding. If the seven-segment display of an X9700c has not returned to "on" after 25 minutes, power cycle the complete capacity block again.
12. Run the exds_stdiag command to verify the firmware version. Check that the firmware is the same on both drawers (boxes) of the X9700cx. Following is an example of exds_stdiag output:
...
ctlr P89A40C9SW705J ExDS9100cc in 01/SGA830000M slot 1 fw 0126.2008120502 boxes 3 disks 22 luns 5
box 1 ExDS9100c  sn SGA830000M fw 1.56 fans OK,OK,OK,OK temp OK power OK,OK
box 2 ExDS9100cx sn CN8827002Z fw 1.28 fans OK,OK temp OK power OK,OK,FAILED,OK
box 3 ExDS9100cx sn CN8827002Z fw 2.03 fans OK,OK temp OK power OK,OK,OK,OK
In the above example, the array serial number (box 1) is SGA830000M. The firmware level on box 2 (left drawer of the X9700cx) is 1.28. The firmware level on box 3 (right drawer) is 2.03. This is unsupported because the firmware levels are not the same; the firmware must be updated as described in step 11.
13. Mount the file systems that were unmounted in step 6 using the X9000 management console.
For each host, the output includes:
Version number of the installed file system
Version numbers of the IAD and File System module
Operating system type and OS kernel version
Processor architecture
The -S option shows this information for all file serving nodes. The -C option shows the information for all X9000 clients. The file system and IAD/FS output fields should show matching version numbers unless you have installed special releases or patches. If the output fields show mismatched version numbers and you do not know of any reason for the mismatch, contact HP Support. A mismatch might affect the operation of your cluster.
To permanently disable SELinux, edit its configuration file (/etc/selinux/config) and set the SELINUX parameter to either permissive or disabled. SELinux will be stopped at the next boot.
For X9000 clients, the client might not be registered with the management console. For information on registering clients, see the HP X9000 File Serving Software Installation Guide.
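A minimal sketch of the SELinux change described above (standard Linux administration, not specific to X9000 Software):

    # /etc/selinux/config -- set the SELINUX parameter, then reboot for it to take effect
    SELINUX=disabled        # or "permissive"

    # One-line alternative to editing the file by hand
    sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config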
Failover
Cannot fail back from failover caused by storage subsystem failure
When a storage subsystem fails and automated failover is turned on, the management console will initiate its failover protocol. It updates the configuration database to record that segment ownership has transferred from primary servers to their standbys and then attempts to migrate the segments to the standbys. However, segments cannot migrate because neither the primary servers nor the standbys can access the storage subsystem and the failover is stopped. Perform the following manual recovery procedure: 1. Restore the failed storage subsystem (for example, replace failed Fibre Channel switches or replace a LUN that was removed from the storage array). 2. Reboot the standby servers, which will allow the failover to complete.
bonding for additional bandwidth. However, mode 6 bonding is more sensitive to issues in the network topology, and has been seen to cause storms of ARP traffic when deployed.
To work around the problem, re-create the segment on the failing LUN. For example, to identify the correct LUN associated with the failure above, run a command similar to the following on the first server in the system. In the following example, server glory2 is the name of a file serving node:
# ibrix_pv -l -h glory2
PV_NAME  SIZE(MB)  VG_NAME  DEVICE           RAIDTYPE  RAIDHOST  RAIDDEVICE
-------  --------  -------  ---------------  --------  --------  ----------
d1       131070    vg1_1    /dev/mxso/dev4a
d2       131070    vg1_2    /dev/mxso/dev5a
d3       131070    vg1_3    /dev/mxso/dev6a
d5       23551     vg1_5    /dev/mxso/dev8a
d6       131070    vg1_4    /dev/mxso/dev7a
The Device column identifies the LUN number. The volume group vg1_4 is created from LUN 7. Re-create the segment according to the instructions in the HP X9000 File Serving Software File System User Guide. After this command completes, rerun the file system creation command.
This gathers log information that is useful in diagnosing whether the data can be recovered. Generally, if the failure is due to real disk failures, the data cannot be recovered. However, if the failure is due to an inadvertent removal of a working disk drive, it may be possible to restore the LUN to operation. 3. Contact HP Support as soon as possible.
The underlying causes of these problems differ. However, the recovery process is similar in all cases. Do not replace the HP P700m until you have worked through the process described here. In general terms, the solution is to reset the SAS switches and, if that fails, reboot each X9700c controller until you locate a controller that is interfering with the SAS fabric.
If your system is in production, follow the steps below to minimize downtime on the system:
1. Log in to the Onboard Administrator and run the show bay info all command. Compare entries for the affected blade and working blades. If the entries look different, reboot each Onboard Administrator, one at a time. Re-seat or replace the P700m in the affected server blade.
2. Run exds_stdiag. If exds_stdiag detects the same capacity blocks and X9700c controllers as the other server blades, then the procedure is completed; otherwise, continue to the next step.
3. If all servers are affected, shut down all servers; if a subset of servers is affected, shut down the subset.
4. Using OA, log in to SAS switch 1 and reset it.
5. Wait for it to reboot.
6. Reset SAS switch 2.
7. Wait for it to reboot.
8. Boot one affected server.
9. Run the following command:
   # exds_stdiag
10. If X9700c controllers can be seen, boot other affected servers and run exds_stdiag on each. If they also see the X9700c controllers, the procedure is completed; otherwise, continue to the next step.
11. Perform the following steps for each X9700c controller in turn:
    a. Slide out the controller until the LEDs extinguish.
    b. Reinsert the controller.
    c. Wait for the seven-segment display to show "on".
    d. Run the exds_stdiag command on the affected server.
    e. If OK, the procedure is completed; otherwise, repeat steps a through d on the next controller.
12. If the above steps do not produce results, replace the HP P700m.
13. Boot the server and run exds_stdiag.
14. If you still cannot see the X9700c controllers, repeat the procedure starting with step 1.
If the system is not in production, you can use the following shorter procedure:
1. Power off all server blades.
2. Using OA, power off both SAS switches.
3. Power on both SAS switches and wait until they are on.
4. Power on all server blades.
5. Run exds_stdiag. If exds_stdiag indicates that there are no problems, then the procedure is completed; otherwise, continue to the next step.
6. Power off all X9700c enclosures.
7. Power on all enclosures.
8. Wait until all seven-segment displays show "on", then power on all server blades.
9. If the HP P700m still cannot access the fabric, replace it on the affected server blades and run exds_stdiag again.
See the HP X9720 Network Storage System Controller User Guide for more information about the LED descriptions.
the GSI light after each replacement. See Replacing components in the HP ExDS9100 Storage System for replacement instructions.
There are 16 identical profiles assigned to servers. As an example, a profile called bay1 is created and assigned to enclosure device bay 1:
->show profile bay1 -output=script2 Name;Device Bay;Server;Status;Serial Number;UUID bay1;enc0:1;ProLiant BL460c G6;Degraded;8920RFCC;XXXXXX8920RFCC Connection Type;Port;Network Name;Status;PXE;MAC Address;Allocated Speed Ethernet;1;man_lan;OK;UseBIOS;<Factory-Default>;1 Ethernet;2;ext1;Degraded;UseBIOS;<Factory-Default>;9 Ethernet;3;ext2;Degraded;UseBIOS;<Factory-Default>;9 Ethernet;4;man_lan;OK;UseBIOS;<Factory-Default>;1
As a convention, the domain name is created using the system name, but any domain name can be used. The domain is given an IP on the management network for easy access:
Domain Name=kudos_vc_domain
Checkpoint Status=Valid
Domain IP Status=Enabled
IP Address=172.16.2.1
Subnet Mask=255.255.248.0
Gateway=--
MAC Address Type=Factory-Default
WWN Address Type=Factory-Default
To run a health check on a file serving node, use the following command:
<installdirectory>/bin/ibrix_health -i -h HOSTLIST
If the last line of the output reports Passed, the file system information on the file serving node and management console is consistent. To repair file serving node information, use the following command:
<installdirectory>/bin/ibrix_dbck -o -f FSNAME [-h HOSTLIST]
To repair information on all file serving nodes, omit the -h HOSTLIST argument.
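As a usage sketch, with hypothetical host and file system names node1 and ifs1:

    # Check consistency between the file serving node and the management console
    <installdirectory>/bin/ibrix_health -i -h node1

    # Repair the database records for file system ifs1 on that node
    <installdirectory>/bin/ibrix_dbck -o -f ifs1 -h node1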
NOTE: Some HP parts are not designed for customer self repair. To satisfy the customer warranty, HP requires that an HP-authorized service provider replace the part.
CAUTION: Be sure the replacement is available before removing a component or a blanking panel. Open slots dramatically impact airflow and cooling within the device.
See Component and cabling diagrams (page 155) for diagrams of system components and cabling. See Spare parts list (page 168) for a list of customer replaceable parts.
Based on availability and where geography permits, CSR parts will be shipped for next business day delivery. Same-day or four-hour delivery might be offered at an additional charge, where geography permits. If assistance is required, you can call the HP Technical Support Center, and a technician will help you over the telephone.
The materials shipped with a replacement CSR part specify whether a defective part must be returned to HP. In cases where it is required to return the defective part to HP, you must ship the defective part back to HP within a defined period of time, normally five (5) business days. The defective part must be returned with the associated documentation in the provided shipping material. Failure to return the defective part could result in HP billing you for the replacement. With a customer self repair, HP will pay all shipping and part return costs and determine the courier/carrier to be used.
For more information about HP's Customer Self Repair program, contact your local service provider. For the North American program, go to http://www.hp.com/go/selfrepair.
Required tools
The following tools might be necessary for some procedures:
T-10 Torx screwdriver
T-15 Torx screwdriver
4-mm flat-blade screwdriver
Phillips screwdriver
Additional documentation
The information in this section pertains to the specifics of the X9720 Network Storage System. For detailed component replacement instructions, see the following documentation at http://www.hp.com/support/manuals:
c7000 blade enclosure
HP BladeSystem c7000 Enclosure Maintenance and Service Guide
HP BladeSystem c-Class Enclosure Troubleshooting Guide
HP ProLiant BL460c Server Blade Maintenance and Service Guide
HP ExDS9100c/X9720 Storage System Controller Cache Module Customer Self Repair Instructions
HP ExDS9100c/X9720 Storage System Controller Battery Customer Self Repair Instructions
HP ExDS9100c/X9720 Storage System Controller Customer Self Repair Instructions
HP 3 SAS BL Switch Installation Instructions
HP 3 SAS BL Switch Customer Self Repair Instructions
HP 600 Modular Disk System Maintenance and Service Guide
X9700cx
7. Insert the server blade into its original bay in the blade chassis.
8. Connect to the iLO, create a user called "exds", and assign the same password as the other server blades. The IP address is automatically configured to be the same as the original server blade.
Boot the server. Run the exds_stdiag command to verify that the server blade can access storage.
To have password-less ssh support for additional server blades, or to access the X9720 itself without specifying a password, add the keys of the new servers to the .ssh/authorized_keys on each pre-existing server blade, and add the keys of all server blades to .ssh/authorized_keys on the new server blades. Alternatively, you can re-generate keys for all server blades and distribute them appropriately.
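One hedged way to distribute the keys uses standard OpenSSH tooling; the hostnames below are placeholders, and ssh-copy-id must also be run in the reverse direction from each pre-existing blade to the new blade:

    # On the new server blade, generate a key pair if one does not already exist
    ssh-keygen -t rsa

    # Append the new blade's public key to .ssh/authorized_keys on each existing blade
    ssh-copy-id root@glory1
    ssh-copy-id root@glory2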
4. Reconnect the cable that was disconnected in step 1.
5. Remove and then reconnect the uplink to the customer network for bay 2. Clients lose connectivity during this procedure unless you are using a bonded network.
NOTE: After the new VC module is inserted, network connectivity to the X9720 may be lost for approximately 5 seconds while the new module is configured. Alerts may be generated during this period. So long as ibrix_health -l reports PASSED after the operation is complete, these alerts can be safely ignored.
For information about configuring the VC domain, see Configuring the Virtual Connect domain (page 125).
bladebay 7 add zonegroup=exds_zonegroup
bladebay 8 add zonegroup=exds_zonegroup
bladebay 9 add zonegroup=exds_zonegroup
bladebay 10 add zonegroup=exds_zonegroup
bladebay 11 add zonegroup=exds_zonegroup
bladebay 12 add zonegroup=exds_zonegroup
bladebay 13 add zonegroup=exds_zonegroup
bladebay 14 add zonegroup=exds_zonegroup
bladebay 15 add zonegroup=exds_zonegroup
bladebay 16 add zonegroup=exds_zonegroup
switch local saveupdate
10. Verify that the zonegroups have been updated: => sw local zg exds_zonegroup show
Blade Enclosure at x123s-ExDS-ENC
Switch: Local
Zone Group: exds_zonegroup
Zoned Blade Switch Port: 41
Zoned Blade Switch Port: 42
Zoned Blade Switch Port: 43
Zoned Blade Switch Port: 44
Zoned Blade Switch Port: 45
Zoned Blade Switch Port: 46
Zoned Blade Switch Port: 47
Zoned Blade Switch Port: 48
Zoned Blade Switch Port: 31
Zoned Blade Switch Port: 32
Zoned Blade Switch Port: 33
Zoned Blade Switch Port: 34
Zoned Blade Switch Port: 35
Zoned Blade Switch Port: 36
Zoned Blade Switch Port: 37
Zoned Blade Switch Port: 38
Assigned Blade Bay: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
11. Reconnect the SAS cables.
12. Disconnect from the interconnect by pressing Ctrl-Shift-_ (the Control, Shift, and underscore keys) and then press the D key for "D)isconnect".
13. Exit from the OA using "exit".
NOTE: Wait at least 60 seconds after seating the SAS switch before removing another SAS switch.
The SAS switch now provides a redundant access path to storage. Storage units will balance I/O between both switches on their subsequent restarts.
2. Insert a replacement drive. Run the exds_stdiag command to verify that the LUNs are rebuilding and that the replacement disk drive firmware is correct (if there are no errors reported, the firmware is correct). If the disk drive firmware is not correct, contact HP Support.
See the HP 600 Modular Disk System Maintenance and Service Guide for more information.
1. Remove the SAS cable in port 1 that connects the X9700c to the SAS switch in the c-Class blade enclosure. Do not remove the two SAS expansion cables that connect the X9700c controller to the I/O controllers on the X9700cx enclosure.
2. Slide the X9700c controller partially out of the chassis:
   a. Squeeze the controller thumb latch and rotate the latch handle down.
   b. Pull the controller straight out of the chassis until it has clearly disengaged.
3. Remove the two SAS expansion cables that connect the X9700c to the I/O controllers on the X9700cx enclosure, keeping track of which cable was attached to which port.
4. Pull the controller straight out of the chassis.
5. Ensure the new controller has batteries. If not, use the batteries from the old controller.
6. Insert the new controller into the X9700c chassis, but do not slide it fully in.
7. Attach the two SAS expansion cables that connect the X9700c to the I/O controllers on the X9700cx enclosure, ensuring that they are connected in the original places.
8. Push the controller fully into the chassis so it engages.
9. Reattach the SAS cable that connects the X9700c to the SAS switch in the c-Class blade enclosure. This is plugged into port 1.
10. Run the exds_stdiag command to verify that the firmware is correct (if there are no errors reported, then the firmware is correct). Normally, the new controller is flashed to the same firmware revision as the running controller. However, the management (SEP) firmware is not automatically updated.
IMPORTANT: In dual-controller configurations, both controllers must execute the same version of firmware. If the controllers have different firmware versions, the storage system responds as follows:
Cold boot method: If a second controller is installed while the device is powered off, the system compares the firmware versions of both controllers at power on and clones the most recent version from one controller to the other controller.
Hot plug method: If a second controller is installed while the device is powered on, the system clones the firmware version of the active controller to the second controller.
CAUTION: In hot-plug cloning, it is possible for firmware on a newly installed controller to be downgraded to an earlier version. If this occurs, schedule a maintenance window and update the firmware of both controllers to the latest version. See the HP ExDS9100c/X9720 Storage System Controller Customer Self Repair Instructions for more information.
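As a quick cross-check after a controller swap, the firmware fields reported by exds_stdiag can be compared directly from the shell. This is an illustrative sketch rather than an HP-documented procedure; it assumes the report contains "fw <version>" tokens, as in the exds_stdiag example shown later in this chapter.

# Count how many components report each firmware level; more than one distinct
# controller firmware line suggests the controllers are not yet at the same revision.
exds_stdiag | grep -o 'fw [^ ]*' | sort | uniq -c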
Remove the old battery and insert the new battery. Then use the exds_stdiag command to verify that the battery is charging or working properly. You might need to wait up to four minutes for the status to change.
See the HP ExDS9100c/X9720 Storage System Controller Battery Customer Self Repair Instructions for more information.
9. Run the exds_stdiag command to verify the firmware version. Check that the firmware is the same on both drawers (boxes) of the X9700cx. The following is an example of exds_stdiag output:
ExDS9100cc in 01/SGA830000M slot 1 fw 0126.2008120502 boxes 3 disks 22 luns 5
sn SGA830000M fw 1.56 fans OK,OK,OK,OK temp OK power OK,OK
sn CN8827002Z fw 1.28 fans OK,OK temp OK power OK,OK,FAILED,OK
sn CN8827002Z fw 2.03 fans OK,OK temp OK power OK,OK,OK,OK
In this example, the array serial number (box 1) is SGA830000M. The firmware level on box 2 (the left drawer of the X9700cx) is 1.28, and the firmware level on box 3 (the right drawer) is 2.03. This configuration is unsupported because the firmware levels are not the same. In addition, if exds_stdiag continues to report firmware levels below the recommended version, repeat the firmware update process in its entirety, starting at step 8 above, until the firmware levels are correct.
10. Mount the file systems that were unmounted in step 1. For more information, see the HP X9000 File Serving Software File System User Guide.
11. If the system is not operating normally, repeat the entire procedure until the system is operational.
See the HP 600 Modular Disk System Maintenance and Service Guide for more information.
You will need to create a QuickRestore DVD, as described later, and then install it on the affected blade. This step installs the operating system and X9000 Software on the blade and launches a configuration wizard. CAUTION: The Quick Restore DVD restores the file serving node to its original factory state. This is a destructive process that completely erases all of the data on local hard drives. CAUTION: Recovering the management console node can result in data loss if improperly performed. Contact HP Support for assistance in performing the recovery procedure.
The server reboots automatically after the installation is complete. Remove the DVD from the USB DVD drive.
7. The Configuration Wizard starts automatically. To configure a file serving node, use one of the following procedures:
• When your cluster was configured initially, the installer may have created a template for configuring file serving nodes. To use this template to configure the file serving node undergoing recovery, go to Configuring a file serving node using the original template (page 139).
• To configure the file serving node manually, without the template, go to Configuring a file serving node manually (page 144).
3. The Configuration Wizard attempts to discover management consoles on the network and then displays the results. Select the appropriate management console for this cluster.
NOTE: If the list does not include the appropriate management console, or you want to customize the cluster configuration for the file serving node, select Cancel. Go to Configuring a file serving node manually (page 144) for information about completing the configuration. 4. On the Verify Hostname dialog box, enter a hostname for the node, or accept the hostname generated by the hostname template.
5. The Verify Configuration window shows the configuration received from the management console. Select Accept to apply the configuration to the server and register the server with the management console.
NOTE: If you select Reject, the wizard exits and the shell prompt is displayed. You can restart the wizard by entering the command /usr/local/ibrix/autocfg/bin/menu_ss_wizard or by logging in to the server again.
6. If the specified hostname already exists in the cluster (the name was used by the node you are replacing), the Replace Existing Server window asks whether you want to replace the existing server with the node you are configuring. When you click Yes, the replacement node is registered.
4. The QuickRestore DVD enables the iptables firewall. Either make the firewall configuration match that of your other server blades to allow traffic on the appropriate ports, or disable the service entirely by running the chkconfig iptables off and service iptables stop commands. To allow traffic on the appropriate ports, open the following ports: 80, 443, 1234, 9000, 9005, 9008, and 9009. (A scripted sketch of this step follows the procedure.)
5. Run the following commands on the management console to tune your server blade for optimal performance:
ibrix_host_tune -S -h <hostname of new server blade> -o rpc_max_timeout=64,san_timeout=120,max_pg_segment=8192,max_pg_host=49152,max_pg_request=12
ibrix_host_tune -t 64 -h <hostname of new server blade>
6. On all surviving nodes, remove the ssh key for the hostname that you just recovered from the file /root/.ssh/known_hosts. (The key will exist only on the nodes that previously accessed the recovered node. A scripted sketch of this step follows the procedure.)
7. Copy /etc/hosts from a working node to /etc/hosts on the restored node.
8. Ensure that all servers have server hostname entries in /etc/machines on all nodes.
9. If you disabled NIC monitoring before using the QuickRestore DVD, re-enable the monitor:
ibrix_nic -m -h MONITORHOST -A DESTHOST/IFNAME
For example:
ibrix_nic -m -h titan16 -A titan15/eth2
10. Configure Insight Remote Support on the node. See Configuring HP Insight Remote Support on X9000 systems (page 21).
11. Run ibrix_health -l from the X9000 management console to verify that no errors are being reported.
NOTE: If the ibrix_health command reports that the restored node failed, run the following command:
ibrix_health -i -h <hostname>
If this command reports failures for volume groups, run the following command:
ibrix_pv -a -h <hostname of restored node>
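For administrators who prefer to script steps 4 and 6, the following shell sketch shows one possible approach. The port list comes from step 4; the iptables rule format, the service iptables save step, and the ssh-keygen -R cleanup are standard Linux usage rather than X9000-specific commands, and the RESTORED hostname is a placeholder, so verify the details against a working server blade before applying.

# Hedged sketch for steps 4 and 6 above.
# RESTORED is a placeholder for the hostname of the recovered server blade.
RESTORED=ib5-9

# Step 4: allow inbound traffic on the X9000 ports, then persist the rules.
for PORT in 80 443 1234 9000 9005 9008 9009; do
    iptables -I INPUT -p tcp --dport "$PORT" -j ACCEPT
done
service iptables save

# Step 6: on each surviving node, remove the stale host key for the recovered node.
ssh-keygen -R "$RESTORED"    # updates /root/.ssh/known_hosts when run as root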
Restoring services
When you perform a Quick Restore of a file serving node, the NFS, CIFS, FTP, and HTTP export information is not automatically restored to the node. After operations are failed back to the node, the I/O from client systems to the node fails for the NFS, CIFS, FTP, and HTTP shares. To avoid this situation, manually restore the NFS, CIFS, FTP, and HTTP exports on the node before failing it back. Restore CIFS services. Complete the following steps:
1. If the restored node was previously configured to perform domain authorization for CIFS services, run the following command:
ibrix_auth -n DOMAIN_NAME -A AUTH_PROXY_USER_NAME@domain_name [-P AUTH_PROXY_PASSWORD] -h HOSTNAME
For example:
ibrix_auth -n ibq1.mycompany.com -A [email protected] -P password -h ib5-9
If the command fails, check the following:
• Verify that DNS services are running on the node where you ran the ibrix_auth command.
• Verify that you entered a valid domain name with the full path for the -n and -A options.
2. Rejoin the likewise database to the Active Directory domain:
/opt/likewise/bin/domainjoin-cli join <domain_name> Administrator
3. Push the original share information from the management console database to the restored node. On the node hosting the active management console, first create a temporary CIFS share:
ibrix_cifs -a -f FSNAME -s SHARENAME -p SHAREPATH
Then delete the temporary CIFS share:
ibrix_cifs -d -s SHARENAME
4. Run the following command to verify that the original share information is on the restored node:
ibrix_cifs -i -h SERVERNAME
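If you want an additional cross-check from outside the X9000 CLI, a DNS lookup and an SMB share listing can be run from any Linux host that has the bind-utils and samba-client packages installed. The hostname and domain below reuse the placeholder values from the example in step 1; smbclient is not part of the X9000 toolset, so treat this as an optional sanity check.

# Optional cross-check; ib5-9 and ibq1.mycompany.com are the example names from step 1.
nslookup ibq1.mycompany.com                      # confirm the AD domain resolves from this host
smbclient -L //ib5-9 -U 'ibq1\Administrator'     # list the shares exported by the restored node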
Restore HTTP services. Complete the following steps:
1. Take the appropriate actions:
• If Active Directory authentication is used, join the restored node to the AD domain manually.
• If Local user authentication is used, create a temporary local user on the GUI and apply the settings to all servers. This step resyncs the local user database.
2. Run the following command:
ibrix_httpconfig -R -h HOSTNAME
3. Verify that HTTP services have been restored. Use the GUI or CLI to identify a share served by the restored node and then browse to the share.
All Vhosts and HTTP shares should now be restored on the node.
Restore FTP services. Complete the following steps:
1. Take the appropriate actions:
• If Active Directory authentication is used, join the restored node to the AD domain manually.
• If Local user authentication is used, create a temporary local user on the GUI and apply the settings to all servers. This step resyncs the local user database.
2. Run the following command:
ibrix_ftpconfig -R -h HOSTNAME
3. Verify that FTP services have been restored. Use the GUI or CLI to identify a share served by the restored node and then browse to the share.
All Vhosts and FTP shares should now be restored on the node.
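Beyond browsing from the GUI, a quick spot check of the restored HTTP and FTP Vhosts can be made with curl from any client. The host and share names below are placeholders; substitute a Vhost and share actually served by the restored node, and expect an authentication challenge or a listing rather than a connection failure when the services are healthy.

# Hedged spot check after ibrix_httpconfig -R and ibrix_ftpconfig -R.
curl -k -I https://ib5-9.ibq1.mycompany.com/share1/    # HTTP share: expect a 200 or 401 response header
curl --max-time 10 ftp://ib5-9.ibq1.mycompany.com/      # FTP Vhost: expect a greeting or auth error, not a timeout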
3. The Configuration Wizard attempts to discover management consoles on the network and then displays the results. Select Cancel to configure the node manually. (If the wizard cannot locate a management console, the screen shown in step 4 appears.)
4. The Cluster Configuration Menu lists the configuration parameters that you need to set.
5. Use the Up and Down arrow keys to select an item in the list. When you have made your selection, press Tab to move to the buttons at the bottom of the dialog box, and then press Space to go to the next dialog box.
6. Select Management Console from the menu, and enter the IP address of the management console. This is typically the address of the management console on the cluster network.
7. Select Hostname from the menu, and enter the hostname of this server.
8. Select Time Zone from the menu, and then use Up or Down to select your time zone.
9. Select Default Gateway from the menu, and enter the IP address of the host that will be used as the default gateway.
10. Select DNS Settings from the menu, and enter the IP addresses for the primary and secondary DNS servers that will be used to resolve domain names. Also enter the DNS domain name.
11. Select Networks from the menu. Select <add device> to create a bond for the cluster network.
You are creating a bonded interface for the cluster network; select Ok on the Select Interface Type dialog box.
Enter a name for the interface (bond0 for the cluster interface) and specify the appropriate options and slave devices. The factory defaults for the slave devices are eth0 and eth3. Use Mode 6 bonding for 1GbE networks and Mode 1 bonding for 10GbE networks.
12. When the Configure Network dialog box reappears, select bond0.
13. To complete the bond0 configuration, enter a space to select the Cluster Network role. Then enter the IP address and netmask information that the network will use.
Repeat this procedure to create a bonded user network (typically bond1 with eth1 and eth2) and any custom networks as required. 14. When you have completed your entries on the File Serving Node Configuration Menu, select Continue. 15. Verify your entries on the confirmation screen, and select Commit to apply the values to the file serving node and register it with the management console.
16. If the hostname specified for the node already exists in the cluster (the name was used by the node you are replacing), the Replace Existing Server window asks whether you want to replace the existing server with the node you are configuring. When you click Yes, the replacement node will be registered.
IMPORTANT:
Troubleshooting
iLO remote console does not respond to keystrokes
You need to use a local terminal when performing a recovery because networking has not yet been set up. Occasionally when using the iLO integrated remote console, the console will not respond to keystrokes. To correct this situation, remove and reseat the blade. The iLO remote console will then respond properly. Alternatively, you can use a local KVM to perform the recovery.
Related information
Related documents are available on the Manuals page at http://www.hp.com/support/manuals.
On the Manuals page, select storage > NAS Systems > Ibrix Storage Systems > HP X9000 Network Storage Systems.
On the Manuals page, click bladesystem > BladeSystem Server Blades, and then select HP ProLiant BL460c G7 Server Series or HP ProLiant BL460c G6 Server Series.
On the Manuals page, click bladesystem > BladeSystem Interconnects > HP BladeSystem SAS Interconnects.
Maintaining the X9700cx (also known as the HP 600 Modular Disk System)
HP 600 Modular Disk System Maintenance and Service Guide: Describes removal and replacement procedures. This document should be used only by persons qualified in servicing computer equipment. On the Manuals page, click storage > Disk Storage Systems > HP 600 Modular Disk System.
HP websites
For additional information, see the following HP websites:
http://www.hp.com/go/X9000
http://www.hp.com
http://www.hp.com/go/storage
http://www.hp.com/support/manuals
http://www.hp.com/support/downloads
Rack stability
Rack stability protects personnel and equipment. WARNING! To reduce the risk of personal injury or damage to equipment:
• Extend leveling jacks to the floor.
• Ensure that the full weight of the rack rests on the leveling jacks.
• Install stabilizing feet on the rack.
• In multiple-rack installations, fasten racks together securely.
• Extend only one rack component at a time. Racks can become unstable if more than one component is extended.
Product warranties
For information about HP product warranties, see the warranty information website: http://www.hp.com/go/storagewarranty
Subscription service
HP recommends that you register your product at the Subscriber's Choice for Business website: http://www.hp.com/go/e-updates After registering, you will receive email notification of product enhancements, new driver versions, firmware updates, and other product resources.
1. Management switch 2 2. Management switch 1 3. X9700c 1 4. TFT monitor and keyboard 5. c-Class Blade enclosure 6. X9700cx 1
6 X9700cx 3 7 TFT monitor and keyboard 8 c-Class Blade Enclosure 9 X9700cx 2 10 X9700cx 1
7 X9700cx 4 8 X9700cx 3 9 TFT monitor and keyboard 10 c-Class Blade Enclosure 11 X9700cx 2 12 X9700cx 1
Servers 2 through 16: X9000 file serving nodes. The file serving nodes manage the individual segments of the file system. Each segment is assigned to a specific file serving node, and each node can "own" several segments. Segment ownership can be migrated from one node to another while the X9000 file system is actively in use. The X9000 management console must be running for this migration to occur. The following diagram shows a front view of a performance block (c-Class Blade enclosure) with half-height device bays numbered 1 through 16.
2. Interconnect bay 2 (Virtual Connect Flex-10 10Gb Ethernet Module)
3. Interconnect bay 3 (SAS switch)
4. Interconnect bay 4 (SAS switch)
7. Interconnect bay 7 (reserved for future use)
8. Interconnect bay 8 (reserved for future use)
9. Onboard Administrator 1
Flex-10 networks
The server blades in the X9720 Network Storage System have two built-in Flex-10 10Gb NICs. The Flex-10 technology comprises the Flex-10 NICs and the Flex-10 Virtual Connect modules in interconnect bays 1 and 2 of the performance chassis. Each Flex-10 NIC is configured to represent four physical interface (NIC) devices, also called FlexNICs, with a total bandwidth of 10 Gbps. The FlexNICs are configured as follows on an X9720 Network Storage System:
The X9720 Network Storage System automatically reserves eth0 and eth3 and creates a bonded device, bond0. This is the management network. Although eth0 and eth3 are physically connected to the Flex-10 Virtual Connect (VC) modules, the VC domain is configured so that this network is not seen by the site network.
With this configuration, eth1 and eth2 are available for connecting each server blade to the site network. To connect to the site network, you must connect one or more of the allowed ports as "uplinks" to your site network. These are the ports marked in green in Virtual Connect Flex-10 Ethernet module cabling, Base cabinet (page 165). If you connect several ports to the same switch in your site network, all ports must use the same media type. In addition, HP recommends that you use 10GbE links.
The X9720 Network Storage System uses mode 1 (active/backup) for network bonds. No other bonding mode is supported. Properly configured, this provides a fully redundant network connection to each blade: a single failure of a NIC, Virtual Connect module, uplink, or site network switch will not take down the network device. However, the site network infrastructure must be properly configured for a bonded interface to operate correctly, both in terms of redundancy and performance.
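Because the cluster bond must come up in mode 1 (active-backup) with both FlexNIC slaves available, it can be useful to inspect the bonding driver state after any network change. This is a generic Linux check rather than an X9000-specific command; the interface names follow the eth0/eth3 defaults described above.

# Inspect the management bond built from eth0 and eth3.
cat /proc/net/bonding/bond0    # expect "Bonding Mode: fault-tolerance (active-backup)" and both slaves with "MII Status: up"

# Condensed view of the key fields:
grep -E 'Bonding Mode|Currently Active Slave|MII Status' /proc/net/bonding/bond0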
Capacity blocks
A capacity block comprises an X9700c chassis containing 12 disk drives and an X9700cx JBOD enclosure containing 70 disk drives. The X9700cx enclosure actually contains two JBODs, one in each pull-out drawer (left and right); each drawer contains 35 disk drives. The capacity block's serial number is the serial number of the X9700c chassis. Every server is connected to every array using a serial attached SCSI (SAS) fabric. The following elements exist: Each server has a P700m SAS host bus adapter (HBA) which has two SAS ports. Each SAS port is connected by the server blade enclosure backplane to a SAS switch. There are two SAS switches, such that each server is connected by a redundant SAS fabric.
Each array has two redundant controllers. Each of the controllers is connected to each SAS switch. Within an array, the disk drives are assigned to different boxes, where box 1 is the X9700c enclosure and boxes 2 and 3 are the left and right pull-out drawers, respectively. The following diagram shows the numbering in an array box.
1. Box 1: X9700c
2. Box 2: X9700cx, left drawer (as viewed from the front)
3. Box 3: X9700cx, right drawer (as viewed from the front)
An array normally has two controllers. Each controller has a battery-backed cache. Each controller has its own firmware. Normally all servers should have two redundant paths to all arrays.
1. Battery 1 2. Battery 2 3. SAS expander port 1 4. UID 5. Power LED 6. System fault LED 7. On/Off power button 8. Power supply 2
9. Fan 2 10. X9700c controller 2 11. SAS expander port 2 12. SAS port 1 13. X9700c controller 1 14. Fan 1 15. Power supply 1
1. Drawer 1 2. Drawer 2
1. Power supply 2. Primary I/O module drawer 2 3. Primary I/O module drawer 1 4. Out SAS port
5. In SAS port 6. Secondary I/O module drawer 1 7. Secondary I/O module drawer 2 8. Fan
Cabling diagrams
Capacity block cablingBase and expansion cabinets
A capacity block is comprised of the X9700c and X9700cx. CAUTION: Correct cabling of the capacity block is critical for proper X9720 Network Storage System operation.
1. X9700c
2. X9700cx primary I/O module (drawer 2)
3. X9700cx secondary I/O module (drawer 2)
4. X9700cx primary I/O module (drawer 1)
5. X9700cx secondary I/O module (drawer 1)
Onboard Administrator
1. Management switch 2
2. Management switch 1
3. Bay 1 (Virtual Connect Flex-10 10Gb Ethernet Module for connection to site network)
4. Bay 2 (Virtual Connect Flex-10 10Gb Ethernet Module for connection to site network)
5. Bay 3 (SAS switch)
6. Bay 4 (SAS switch)
7. Bay 5 (reserved for future use)
8. Bay 6 (reserved for future use)
9. Bay 7 (reserved for optional components)
10. Bay 8 (reserved for optional components)
11. Onboard Administrator 1
12. Onboard Administrator 2
1. X9700c 4
2. X9700c 3
3. X9700c 2
4. X9700c 1
5. SAS switch ports 1 through 4 (in interconnect bay 3 of the c-Class Blade Enclosure). Ports 2 through 4 are reserved for additional capacity blocks.
6. SAS switch ports 5 through 8 (in interconnect bay 3 of the c-Class Blade Enclosure). Reserved for expansion cabinet use.
7. SAS switch ports 1 through 4 (in interconnect bay 4 of the c-Class Blade Enclosure). Ports 2 through 4 are reserved for additional capacity blocks.
8. SAS switch ports 5 through 8 (in interconnect bay 4 of the c-Class Blade Enclosure). Reserved for expansion cabinet use.
SAS switch ports 1 through 4 (in interconnect bay 3 of the c-Class Blade Enclosure). Used by base cabinet.
SAS switch ports 5 through 8 (in interconnect bay 3 of the c-Class Blade Enclosure).
SAS switch ports 1 through 4 (in interconnect bay 4 of the c-Class Blade Enclosure).
SAS switch ports 5 through 8 (in interconnect bay 4 of the c-Class Blade Enclosure). Used by base cabinet.
NOTE: Some HP parts are not designed for customer self-repair. To satisfy the customer warranty, HP requires that an authorized service provider replace the part. These parts are identified as No in the spare parts lists.
AW548A Base Rack
Description | Spare part number | Customer self repair
Accessories Kit | 5069-6535 | Mandatory
HP J9021A SWITCH 2810-24G | J9021-69001 | Mandatory
CABLE, CONSOLE D-SUB9 - RJ45 L | 5188-3836 | Mandatory
CABLE, CONSOLE D-SUB9 - RJ45 L | 5188-6699 | Mandatory
PWR-CORD OPT-903 3-COND 2.3-M | 8120-6805 | Mandatory
SPS-RACK,UNIT,10642,10KG2 | 385969-001 | Mandatory
SPS-SHOCK PALLET,600MM,10KG2 | 385976-001 | Mandatory
SPS-STABLIZER,600MM,10GK2 | 385973-001 | Mandatory
SPS-PANEL,SIDE,10642,10KG2 | 385971-001 | Mandatory
SPS-STICK,4X FIXED,C-13,OFFSET,WW | 483915-001 | Optional
SPS-BRACKETS,PDU | 252641-001 | Optional
SPS-STICK,4XC-13,Attached CBL | 460430-001 | Mandatory
SPS-SPS-STICK,ATTACH'D CBL,C13 0-1FT | 419595-001 | Mandatory
SPS-RACK,BUS BAR & Wire Tray | 457015-001 | Optional
Description
SPS-FAN MODULE (X9700c)
SPS-CHASSIS (X9700c)
SPS-BD,MIDPLANE (X9700c)
SPS-BD,BACKPLANE II (X9700c)
SPS-BD,RISER (X9700c)
SPS-BD,USB,UID (X9700c)
SPS-POWER SUPPLY (X9700c)
SPS-BD,POWER UID,W/CABLE (X9700c)
SPS-BATTERY MODULE (X9700c)
SPS-BD,CONTROLLER,9100C (X9700c)
SPS-BD,7-SEGMENT,DISPLAY (X9700c)
SPS-BD,DIMM,DDR2,MOD,512MB (X9700c)
SPS-HDD, B/P, W/CABLES & DRAWER ASSY (X9700cx)
SPS-BD,LED PANEL,W/CABLE (X9700cx)
SPS-FAN, SYSTEM (X9700cx)
SPS-PWR SUPPLY (X9700cx)
SPS-POWER BLOCK,W/POWER B/P BDS (X9700cx)
SPS-BD, 2 PORT, W/1.5 EXPAND (X9700cx)
SPS-DRV,HD,1TB,7.2K,DP SAS,3.5" HP
SPS-RAIL KIT
SPS-RACKMOUNT KIT
Description
SPS-FAN MODULE (X9700c)
SPS-CHASSIS (X9700c)
SPS-BD,MIDPLANE (X9700c)
SPS-BD,BACKPLANE II (X9700c)
SPS-BD,RISER (X9700c)
SPS-BD,USB,UID (X9700c)
SPS-POWER SUPPLY (X9700c)
SPS-BD,POWER UID,W/CABLE (X9700c)
SPS-BATTERY MODULE (X9700c)
SPS-BD,CONTROLLER,9100C (X9700c)
SPS-BD,7-SEGMENT,DISPLAY (X9700c)
SPS-BD,DIMM,DDR2,MOD,512MB (X9700c)
SPS-HDD, B/P, W/CABLES & DRAWER ASSY (X9700cx)
SPS-BD,LED PANEL,W/CABLE (X9700cx)
SPS-FAN, SYSTEM (X9700cx)
SPS-PWR SUPPLY (X9700cx)
SPS-POWER BLOCK,W/POWER B/P BDS (X9700cx)
SPS-BD, 2 PORT, W/1.5 EXPAND (X9700cx)
SPS-RAIL KIT
SPS-DRV,HD,2 TB,7.2K,DP SAS,3.5 HP
SPS-RACKMOUNT KIT
SPS-CA,EXT MINI SAS, 2M
SPS-CA,EXT MINI SAS, 4M
Grounding methods
There are several methods for grounding. Use one or more of the following methods when handling or installing electrostatic-sensitive parts:
• Use a wrist strap connected by a ground cord to a grounded workstation or computer chassis. Wrist straps are flexible straps with a minimum of 1 megohm ±10 percent resistance in the ground cords. To provide proper ground, wear the strap snug against the skin.
• Use heel straps, toe straps, or boot straps at standing workstations. Wear the straps on both feet when standing on conductive floors or dissipating floor mats.
• Use conductive field service tools.
• Use a portable field service kit with a folding static-dissipating work mat.
If you do not have any of the suggested equipment for proper grounding, have an HP-authorized reseller install the part. NOTE: For more information on static electricity or assistance with product installation, contact your HP-authorized reseller.
Equipment symbols
If the following symbols are located on equipment, hazardous conditions could exist. WARNING! Any enclosed surface or area of the equipment marked with these symbols indicates the presence of electrical shock hazards. Enclosed area contains no operator serviceable parts. To reduce the risk of injury from electrical shock hazards, do not open this enclosure. WARNING! Any RJ-45 receptacle marked with these symbols indicates a network interface connection. To reduce the risk of electrical shock, fire, or damage to the equipment, do not plug telephone or telecommunications connectors into this receptacle. WARNING! Any surface or area of the equipment marked with these symbols indicates the presence of a hot surface or hot component. Contact with this surface could result in injury.
WARNING! Power supplies or systems marked with these symbols indicate the presence of multiple sources of power. WARNING! Any product or assembly marked with these symbols indicates that the component exceeds the recommended weight for one individual to handle safely.
Weight warning
WARNING! The device can be very heavy. To reduce the risk of personal injury or damage to equipment: Remove all hot-pluggable power supplies and modules to reduce the overall weight of the device before lifting. Observe local health and safety requirements and guidelines for manual material handling. Get help to lift and stabilize the device during installation or removal, especially when the device is not fastened to the rails. When a device weighs more than 22.5 kg (50 lb), at least two people must lift the component into the rack together. If the component is loaded into the rack above chest level, a third person must assist in aligning the rails while the other two support the device.
WARNING!
• Observe local occupational safety requirements and guidelines for heavy equipment handling.
• Obtain adequate assistance to lift and stabilize the product during installation or removal.
• Extend the leveling jacks to the floor.
• Rest the full weight of the rack on the leveling jacks.
• Attach stabilizing feet to the rack if it is a single-rack installation.
• Ensure the racks are coupled in multiple-rack installations.
• Fully extend the bottom stabilizers on the equipment.
• Ensure that the equipment is properly supported/braced when installing options and boards.
• Be careful when sliding rack components with slide rails into the rack. The slide rails could pinch your fingertips.
• Ensure that the rack is adequately stabilized before extending a rack component with slide rails outside the rack. Extend only one component at a time. A rack could become unstable if more than one component is extended for any reason.
WARNING! Verify that the AC power supply branch circuit that provides power to the rack is not overloaded. Overloading AC power to the rack power supply circuit increases the risk of personal injury, fire, or damage to the equipment. The total rack load should not exceed 80 percent of the branch circuit rating. Consult the electrical authority having jurisdiction over your facility wiring and installation requirements.
• Allow the product to cool before removing covers and touching internal components.
• Do not disable the power cord grounding plug. The grounding plug is an important safety feature.
• Plug the power cord into a grounded (earthed) electrical outlet that is easily accessible at all times.
• Disconnect power from the device by unplugging the power cord from either the electrical outlet or the device.
• Do not use conductive tools that could bridge live parts.
• Remove all watches, rings, or loose jewelry when working in hot-plug areas of an energized device.
• Install the device in a controlled access location where only qualified personnel have access to the device.
• Power off the equipment and disconnect power to all AC power cords before removing any access covers for non-hot-pluggable areas.
• Do not replace non-hot-pluggable components while power is applied to the product. Power off the device and then disconnect all AC power cords.
• Do not exceed the level of repair specified in the procedures in the product documentation. All troubleshooting and repair procedures are detailed to allow only subassembly or module-level repair. Because of the complexity of the individual boards and subassemblies, do not attempt to make repairs at the component level or to make modifications to any printed wiring board. Improper repairs can create a safety hazard.
WARNING! To reduce the risk of personal injury or damage to the equipment, the installation of non-hot-pluggable components should be performed only by individuals who are qualified in servicing computer equipment, knowledgeable about the procedures and precautions, and trained to deal with products capable of producing hazardous energy levels.
WARNING! To reduce the risk of personal injury or damage to the equipment, observe local occupational health and safety requirements and guidelines for manually handling material.
CAUTION: Protect the installed solution from power fluctuations and temporary interruptions with a regulating Uninterruptible Power Supply (UPS). This device protects the hardware from damage caused by power surges and voltage spikes, and keeps the system in operation during a power failure.
CAUTION: To properly ventilate the system, you must provide at least 7.6 centimeters (3.0 inches) of clearance at the front and back of the device.
CAUTION: When replacing hot-pluggable components in an operational X9720 Network Storage System, allow approximately 30 seconds between removing the failed component and installing the replacement. This time is needed to ensure that configuration data about the removed component is cleared from the system registry. To minimize airflow loss, do not pause for more than a few minutes. To prevent overheating due to an empty chassis bay, use a blanking panel or leave the slightly disengaged component in the chassis until the replacement can be made.
CAUTION: Schedule physical configuration changes during periods of low or no activity. If the system is performing rebuilds, RAID migrations, array expansions, LUN expansions, or experiencing heavy I/O, avoid physical configuration changes such as adding or replacing hard drives or hot-plugging a controller or any other component. For example, hot-adding or replacing a controller while under heavy I/O could cause a momentary pause, performance decrease, or loss of access to the device while the new controller is starting up. When the controller completes the startup process, full functionality is restored.
CAUTION: Before replacing a hot-pluggable component, ensure that steps have been taken to prevent loss of data.
Class A equipment
This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to Part 15 of the FCC rules. These limits are designed to provide reasonable protection against harmful interference when the equipment is operated in a commercial environment. This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance with the instructions, may cause harmful interference to radio communications. Operation of this equipment in a residential area is likely to cause harmful interference, in which case the user will be required to correct the interference at personal expense.
Class B equipment
This equipment has been tested and found to comply with the limits for a Class B digital device, pursuant to Part 15 of the FCC Rules. These limits are designed to provide reasonable protection against harmful interference in a residential installation. This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance with the instructions, may cause harmful interference to radio communications. However, there is no guarantee that interference will not occur in a particular installation. If this equipment does cause harmful interference to radio or television reception, which can be determined by turning the equipment
off and on, the user is encouraged to try to correct the interference by one or more of the following measures:
• Reorient or relocate the receiving antenna.
• Increase the separation between the equipment and receiver.
• Connect the equipment into an outlet on a circuit that is different from that to which the receiver is connected.
• Consult the dealer or an experienced radio or television technician for help.
Declaration of Conformity for products marked with the FCC logo, United States only
This device complies with Part 15 of the FCC Rules. Operation is subject to the following two conditions: (1) this device may not cause harmful interference, and (2) this device must accept any interference received, including interference that may cause undesired operation. For questions regarding this FCC declaration, contact us by mail or telephone: Hewlett-Packard Company P.O. Box 692000, Mail Stop 510101 Houston, Texas 77269-2000 Or call 1-281-514-3333
Modification
The FCC requires the user to be notified that any changes or modifications made to this device that are not expressly approved by Hewlett-Packard Company may void the user's authority to operate the equipment.
Cables
When provided, connections to this device must be made with shielded cables with metallic RFI/EMI connector hoods in order to maintain compliance with FCC Rules and Regulations.
Class B equipment
This Class B digital apparatus meets all requirements of the Canadian Interference-Causing Equipment Regulations. Cet appareil numérique de la classe B respecte toutes les exigences du Règlement sur le matériel brouilleur du Canada.
Compliance with these directives implies conformity to applicable harmonized European standards (European Norms) which are listed on the EU Declaration of Conformity issued by Hewlett-Packard for this product or product family.
This compliance is indicated by the following conformity marking placed on the product:
This marking is valid for non-Telecom products and EU harmonized Telecom products (e.g., Bluetooth).
Certificates can be obtained from http://www.hp.com/go/certificates. Hewlett-Packard GmbH, HQ-TRE, Herrenberger Strasse 140, 71034 Boeblingen, Germany
Japanese notices
Japanese VCCI-A notice
Korean notices
Class A equipment
Class B equipment
Taiwanese notices
BSMI Class A notice
The Center for Devices and Radiological Health (CDRH) of the U.S. Food and Drug Administration implemented regulations for laser products on August 2, 1976. These regulations apply to laser products manufactured from August 1, 1976. Compliance is mandatory for products marketed in the United States.
Recycling notices
English recycling notice
Disposal of waste equipment by users in private household in the European Union
This symbol means do not dispose of your product with your other household waste. Instead, you should protect human health and the environment by handing over your waste equipment to a designated collection point for the recycling of waste electrical and electronic equipment. For more information, please contact your household waste disposal service.
Glossary
ACE  Access control entry.
ACL  Access control list.
ADS  Active Directory Service.
ALB  Advanced load balancing.
BMC  Baseboard Management Configuration.
CIFS  Common Internet File System. The protocol used in Windows environments for shared folders.
CLI  Command-line interface. An interface comprised of various commands which are used to control operating system responses.
CSR  Customer self repair.
DAS  Direct attach storage. A dedicated storage device that connects directly to one or more servers.
DNS  Domain name system.
FTP  File Transfer Protocol.
GSI  Global service indicator.
HA  High availability.
HBA  Host bus adapter.
HCA  Host channel adapter.
HDD  Hard disk drive.
IAD  HP X9000 Software Administrative Daemon.
iLO  Integrated Lights-Out.
IML  Initial microcode load.
IOPS  I/Os per second.
IPMI  Intelligent Platform Management Interface.
JBOD  Just a bunch of disks.
KVM  Keyboard, video, and mouse.
LUN  Logical unit number. A LUN results from mapping a logical unit number, port ID, and LDEV ID to a RAID group. The size of the LUN is determined by the emulation mode of the LDEV and the number of LDEVs associated with the LUN.
MTU  Maximum Transmission Unit.
NAS  Network attached storage.
NFS  Network file system. The protocol used in most UNIX environments to share folders or mounts.
NIC  Network interface card. A device that handles communication between a device and other devices on a network.
NTP  Network Time Protocol. A protocol that enables the storage system's time and date to be obtained from a network-attached server, keeping multiple hosts and storage devices synchronized.
OA  Onboard Administrator.
OFED  OpenFabrics Enterprise Distribution.
OSD  On-screen display.
OU  Active Directory Organizational Units.
RO  Read-only access.
RPC  Remote Procedure Call.
RW  Read-write access.
SAN  Storage area network. A network of storage devices available to one or more servers.
SAS  Serial Attached SCSI.
SELinux  Security-Enhanced Linux.
SFU  Microsoft Services for UNIX.
SID  Secondary controller identifier number.
SNMP  Simple Network Management Protocol.
TCP/IP  Transmission Control Protocol/Internet Protocol.
UDP  User Datagram Protocol.
UID  Unit identification.
USM  SNMP User Security Model.
VACM  SNMP View Access Control Model.
VC  HP Virtual Connect.
VIF  Virtual interface.
WINS  Windows Internet Naming Service.
WWN  World Wide Name. A unique identifier assigned to a Fibre Channel device.
WWNN  World wide node name. A globally unique 64-bit identifier assigned to each Fibre Channel node process.
WWPN  World wide port name. A unique 64-bit address used in a FC storage network to identify each device in a FC network.
Index
A
ACU using hpacucli, 112 adding server blades, 97 agile management console, 28 Array Configuration Utility using hpacucli, 112 AutoPass, 96 customer self repair, 153
D
Declaration of Conformity, 177 disk drive replacement, X9700c, 133 Disposal of waste equipment, European Union, 182 document related documentation, 152
E
email event notification, 38 error messages POST, 115 escalating issues, 110 European Union notice, 177 events, cluster add SNMPv3 users and groups, 42 configure email notification, 38 configure SNMP agent, 40 configure SNMP notification, 40 configure SNMP trapsinks, 41 define MIB views, 42 delete SNMP configuration elements, 43 enable or disable email notification, 39 list email notification settings, 40 list SNMP configuration, 43 monitor, 52 remove, 52 view, 52 exds escalate command, 110 exds_netdiag, 113 exds_netperf, 114 exds_stdiag utility, 112
B
backups file systems, 44 management console configuration, 44 NDMP applications, 44 battery replacement notices, 192 blade enclosure replacing, 129 booting server blades, 14 booting X9720, 14
C
cabling diagrams, 164 Canadian notice, 177 capacity block overview, 162 capacity blocks removing, 105 replacing disk drive, 133 clients access virtual interfaces, 26 cluster events, monitor, 52 health checks, 53 license key, 96 license, view, 96 log files, 55 operating statistics, 55 version numbers, view, 120 cluster interface change IP address, 76 change network, 76 defined, 73 command exds escalate, 110 exds_netdiag, 113 exds_netperf, 114 exds_stdiag, 112 hpacucli, 112 install server, 97 components replacing, 127 returning, 128 X9720, 11 contacting HP, 152
F
failover automated, 26 NIC, 26 features X9720, 11 Federal Communications Commission notice, 176 file serving node recover, 138 file serving nodes configure power sources for failover, 30 dissociate power sources, 31 fail back, 32 fail over manually, 32 health checks, 53 identify standbys, 30 maintain consistency with configuration database, 126 migrate segments, 70 monitor status, 51 operational states, 51 power management, 68 prefer a user network interface, 75
run health check, 126 start or stop processes, 69 troubleshooting, 121 tune, 69 view process status, 69 file systems segments migrate, 70 Flex-10 networks, 161
prefer a user network interface, 75 view, 49 HP technical support, 152 hpacucli command, 112 hpasmcli(4) command, 55
I
IML clear or view, 55 hpasmcli(4) command, 55 install server command, 97 Integrated Management Log (IML) clear or view, 55 hpasmcli(4) command, 55 IP address change for cluster interface, 76 change for X9000 client, 75
G
grounding methods, 172
H
hazardous conditions symbols on equipment, 173 HBAs delete HBAs, 35 delete standby port pairings, 35 discover, 35 identify standby-paired ports, 35 list information, 35 monitor for high availability, 34 monitoring, turn on or off, 35 health check reports, 53 help obtaining, 152 High Availability agile management console, 28 automated failover, turn on or off, 31 check configuration, 36 defined, 29 delete network interface monitors, 34 delete network interface standbys, 34 delete power sources, 31 detailed configuration report, 37 dissociate power sources, 31 fail back a node, 32 failover a node manually, 32 failover protection, 12 HBA monitoring, turn on or off, 35 identify network interface monitors, 34 identify network interface standbys, 33 identify standby-paired HBA ports, 35 identify standbys for file serving nodes, 30 power management for nodes, 68 set up automated failover, 30 set up HBA monitor, 34 set up manual failover, 31 set up network interface monitoring, 32 set up power sources, 30 summary configuration report, 36 troubleshooting, 121 hostgroups, 48 add domain rule, 49 add X9000 client, 49 create hostgroup tree, 49 delete, 50
J
Japanese notices, 178
K
Korean notices, 178
L
labels, symbols on equipment, 173 laser compliance notices, 180 link state monitoring, 27 Linux X9000 clients, upgrade, 91 loading rack, warning, 173
M
management console agile, 28 back up configuration, 44 failover, 28 migrate to agile configuration, 78 X9000 client access, 19 management console CLI, 19 management console GUI change password, 19 customize, 18 Details page, 18 Navigator, 18 open, 15 view events, 52 Management Server blade recover, 138
N
NDMP backups, 44 cancel sessions, 46 configure NDMP parameters, 45 rescan for new devices, 46 start or stop NDMP Server, 46 view events, 47 view sessions, 45 view tape and media changer devices, 46
network interfaces add routing table entries, 76 bonded and virtual interfaces, 73 defined, 73 delete, 77 delete monitors, 34 delete routing table entries, 76 delete standbys, 34 guidelines, 25 identify monitors, 34 identify standbys, 33 set up monitoring, 32 viewing, 77 network testing, 113 NIC failover, 26
O
OA accessing via serial port, 111 accessing via service port, 112 replacing, 130 Onboard Administrator accessing via serial port, 111 accessing via service port, 112 replacing, 130
Onboard Administrator, 130 P700m mezzanine card, 132 SAS cable, 137 SAS switch, 131 server blade disk drive, 130 server blade motherboard, 129 server blade system board, 129 server blades, 129 VC module, 130 Virtual Connect module, 130 X9700c chassis, 135 X9700c controller, 133 X9700c controller battery, 134 X9700c disk drive, 133 X9700c fan, 135 X9700c power supply, 135 X9700cx fan, 137 X9700cx I/O module, 136 X9700cx power supply, 137 returning components, 128 routing table entries add, 76 delete, 76
S
SAS cable replacing, 137 SAS switch replacing, 131 segments evacuate from cluster, 71 migrate, 70 server blades adding, 97 booting, 14 disk drive replacement, 130 motherboard replacing, 129 overview, 160 removing, 105 replacing, 129 replacing both disk drives, 130 servers configure standby, 25 shut down, hardware and software, 67 SNMP event notification, 40 SNMP MIB, 42 spare parts list, 168 storage, monitor, 51 storage, remove from cluster, 71 Subscriber's Choice, HP, 154 symbols on equipment, 173 system board replacing, 129 System Configuration Wizard, 139 system recovery, 138 System Configuration Wizard, 139 system startup, 67
P
P700m mezzanine card replacing, 132 passwords, change GUI password, 19 POST error messages, 115 power failure, system recovery, 68
Q
QuickRestore DVD, 138
R
rack stability warning, 153 recycling notices, 182 regulatory compliance Canadian notice, 177 European Union notice, 177 identification numbers, 176 Japanese notices, 178 Korean notices, 178 laser, 180 recycling notices, 182 Taiwanese notices, 179 related documentation, 152 removing capacity blocks, 105 server blades, 105 replacing blade enclosure, 129 capacity blocks disk drive, 133 components, 127 OA, 130
T
Taiwanese notices, 179 technical support HP, 152 service locator website, 153 troubleshooting, 107 escalating issues, 110
U
upgrades Linux X9000 clients, 91 X9000 Software, 82 automatic, 85 manual, 88 user network interface add, 73 configuration rules, 76 defined, 73 identify for X9000 clients, 74 modify, 74 prefer, 74 unprefer, 75
V
VC module replacing, 130 Virtual Connect domain, configure, 125 Virtual Connect module replacing, 130 virtual interfaces, 25 bonded, create, 25 client access, 26 configure standby servers, 25 guidelines, 25
W
warning rack stability, 153 warnings loading rack, 173 weight, 173 websites customer self repair, 153 HP, 153 HP Subscriber's Choice for Business, 154 weight, warning, 173
troubleshooting, 121 tune, 69 tune locally, 70 view process status, 69 X9000 Software shut down, 67 start, 68 X9000 Software upgrade, 82 automatic, 85 manual, 88 X9700c chassis replacing, 135 controller replacing, 133 controller battery replacing, 134 disk drive replacement, 133 fan replacing, 135 power supply replacing, 135 X9700cx fan replacing, 137 I/O module replacing, 136 power supply replacing, 137 X9720 booting, 14 components, 11 features, 11 individual servers, 14 logging in, 14 X9720 hardware shut down, 67
X
X9000 clients add to hostgroup, 49 change IP address, 75 identify a user network interface, 74 interface to management console, 19 migrate segments, 70 monitor status, 51 prefer a user network interface, 75 start or stop processes, 69