
HPE 3PAR StoreServ 8000 Storage - Parts Support Guide

Part Number: External
Published: 2020
Edition: 1

Abstract
This document helps you identify the parts of the product and provides step-by-step instructions for removing and replacing
components.

© Copyright 2020 Hewlett Packard Enterprise Development LP

Notices
The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and
services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as
constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained
herein.
Confidential computer software. Valid license from Hewlett Packard Enterprise required for possession, use, or copying. Consistent with FAR
12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed
to the U.S. Government under vendor's standard commercial license.

Links to third-party websites take you outside the Hewlett Packard Enterprise website. Hewlett Packard Enterprise has no control over and is
not responsible for information outside the Hewlett Packard Enterprise website.

Acknowledgments
Intel ®, Itanium ®, Optane ™, Pentium ®, Xeon ®, Intel Inside ®, and the Intel Inside logo are trademarks of Intel Corporation in the U.S. and other
countries.
Microsoft ® and Windows ® are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other
countries.
Adobe ® and Acrobat ® are trademarks of Adobe Systems Incorporated.
Java ® and Oracle ® are registered trademarks of Oracle and/or its affiliates.
UNIX ® is a registered trademark of The Open Group.
All third-party marks are property of their respective owners.
Table of contents

Part Locator
Front view
SFF 2U drive enclosure
SFF disk drive
LFF 4U drive enclosure
LFF disk drive
Back view
Node
Node PCM
PCM battery
SFP
Node interconnect cables
Drive PCM
SAS cables
IO Module
Internal view
Data cache DIMM
Control cache DIMM
Node clock battery
Node boot drive
PCIe adapter riser card
Optional host bus adapter
PCIe adapter
Remove/Replace
References
ATTN: SP 5.0 Changes
Precautions
Tools And Materials
Connecting To The Service Processor
Servicing The Storage System
Drive Considerations
Node Rescue
Controller Node - Customer SSMC
Controller Node - Service CLI
Node Clock Battery CLI
Node DIMM CLI
Node PCIe Adapter And Riser Card CLI
Node Boot Drive CLI
Node PCM CLI
Node PCM Battery CLI
SFP CLI
Drive - Customer SSMC
Drive - Service CLI
Drive Enclosure CLI
I/O Module CLI
Drive PCM CLI
Procedures
Overview
Controller Node - Customer
Controller Node - Service
Node Clock Battery
Node DIMM
Node PCIe Adapter And Riser Card
4-Port Fibre Combo Adapter
4-Port iSCSI Combo Adapter
Node Boot Drive
Node PCM
Node PCM Battery
SFP
Drive - Customer
Drive - Service
Drive Enclosure
I/O Module
Drive PCM
HPE 3PAR StoreServ 8000 Storage Customer Self-Install Video
Introduction to HPE 3PAR Training by HPE Education Services
Welcome to the HPE 3PAR StoreServ 8000 Storage Customer Self-install Video
Overview & Preparation
Setting up the system
Cabling the Storage System
Powering Up
Setting Up the Service Processor
Software Setup
Post-Installation Tasks
Resources
Front view

SFF 2U drive enclosure

SFF disk drive

HPE 3PAR StoreServ 8000 Storage - Parts Support Guide 5


LFF 4U drive enclosure

LFF disk drive



Back view

Node



Node PCM

PCM battery



SFP

Node interconnect cables



Drive PCM

SAS cables



IO Module

Internal view
Data cache DIMM

Control cache DIMM



Node clock battery

Node boot drive


PCIe adapter riser card

Optional host bus adapter



PCIe adapter

ATTENTION: SP 5.0 Changes
On HPE 3PAR StoreServ Storage systems running OS 3.3.1/SP 5.0 and higher, the software procedures for part replacement have
changed. While the physical procedures shown in the accompanying videos are correct, be aware that the interfaces shown for
running software procedures, SPOCC and SPMAINT, are no longer available. Most software procedures are now run from the Service
Console, a new interface introduced with SP 5.0.
To understand the changes, watch the videos at the OS 3.3.1/SP 5.0 Part Replacement Procedures link.
This link can be found within the SML under Media Selection and Additional Resources for this product.



Summary of Changes with Service Processor 5.0:

The Service Console (SC), a browser-based tool, is now the primary interface for managing part replacement among many other
functions. Access the SC by browsing to: https://<Service Processor IP address>:8443

Accounts: admin and hpepartner (password set during SP initialization; can be changed by the customer); hpesupport (password
obtained through HPE StoreFront Remote and is either time-based or encrypted)

Notification of a degraded or failed part can come through an alert to the SC or SSMC, or an email, if enabled. Notification
includes corrective action and often a replacement part number if appropriate

Identification of the degraded or failed part is facilitated by the SC's Map, Activity, and, where supported, Schematic views

Maintenance mode must now be set through the SC's Action > Set Maintenance Mode, and should be set before every
part replacement

Check Health and Locate can be run through the SC's Action pull-down menu.

Non-interactive CLI commands can be run through the SC's Action > Start CLI session

Interactive CLI commands must be run through the TUI (Text-based User Interface) over SSH (port 22): physically through the
service port (Port 2) using 10.255.155.54, through iLO using 10.255.155.52, or using <Service Processor IP> if accessed over the
customer's network

Verification that a replaced part is functioning properly is done through the SC's Map, Activity, and, where supported, Schematic
views, as well as physical inspection of the appropriate LEDs.

Precautions
To prevent damage to the unit, protect data, and avoid personal injury, review and follow these precautions.

Before you begin


1. If the unit contains heated components, wait until the components have cooled off before proceeding. Refer to the service manual
for details on how long each component requires to adequately cool off.

2. Put on your electrostatic discharge (ESD) wrist or shoe strap to avoid damaging any circuitry.

3. Place an ESD mat on a suitably grounded surface, and then place the temporary and replacement units on the mat.

Static electricity
Static electricity can damage electrical components. Before removing or replacing a component, observe the following precautions to
prevent damage to electric components and accessories:
Remove all ESD-generating materials from your work area.

To avoid hand contact, transport and store all electrostatic parts and assemblies in conductive or approved ESD packaging such as
ESD tubes, bags, or boxes.

Keep electrostatic-sensitive parts in their containers until they arrive at static-free stations. Before removing items from their
containers, place the containers on a grounded surface.

Do not take the new component out of its ESD package or handle any component before connecting your ESD wrist or shoe strap to
a suitably grounded surface.

Avoid contact with pins, leads, or circuitry.

Use the ESD package provided with the new part to return the old part.

Before servicing any component in the storage system, prepare an Electrostatic Discharge-safe (ESD) work surface by placing an
antistatic mat on the floor or table near the storage system.

Attach the ground lead of the mat to an unpainted surface of the rack.



Always use a wrist-grounding strap provided with the storage system.

Attach the grounding strap clip directly to an unpainted surface of the rack.

Disassembly
Ensure that you take the following precautions when disassembling a unit:

Label each cable as you remove it, noting its position and routing. This will make replacing the cables much easier and will ensure
that the cables are rerouted properly.

If applicable, keep all screws with the component removed. The screws used in each component can be of different thread sizes and
lengths. Using the wrong screw in a component could damage the unit.

Tools And Materials

Tools used in removal and replacement


Ensure that you have the tools you need before you begin. These should include:

Phillips-head screwdriver

Torx screwdrivers

Electrostatic discharge (ESD) wrist or shoe strap

Tape and markers or label maker for labeling cables.

Connecting To The Service Processor

Connect to the service processor


You connect the laptop to the service processor (SP) through an Ethernet connection (LAN). When you are connecting to an SP running
version 4.x or above, there are two interfaces available:

Service Processor Onsite Customer Care (SPOCC): A web-based graphical user interface that is available for support of the HP 3PAR
storage system and its SP. SPOCC is the web-based alternative to accessing most of the features and functionality that are
available through SPMAINT.

SPMAINT: The SPMAINT utility is an interface for the support (configuration and maintenance) of both the storage system and its
SP. Use SPMAINT as a backup method for accessing the SP; SPOCC is the preferred access method. Only one SPMAINT session is
allowed at a time through SPOCC or a CLI session.
CAUTION: Many of the features and functions that are available through SPMAINT can adversely affect a running system. To prevent
potential damage to the system and irrecoverable loss of data, do not attempt the procedures described in this manual until you have
taken all necessary safeguards.
NOTE: You connect to the SP using a standard web browser or a Secure Shell (SSH) session. If you do not have SSH, connect
to the serial port of the SP.
NOTE: HP recommends using a small private switch between the SP and the laptop to ensure that the laptop does not lose its network
connection during the build process. When the SP resets, the NIC port resets and drops the link. This connection loss can result in the
failure of the software load process. Any personal switch with four to eight ports is supported, such as the HP 1405-5G Switch
(J97982A), which is available as a non-catalog item from HP SmartBuy.
Use one of the following methods to establish a connection to the service processor (SP).
Method 1 - Maintenance PC (laptop) LAN Setup

1. Connect the laptop to the SP Service/iLO port via a cross-over cable or a private network.
2. Configure the LAN connection of the laptop:

a. Click Start > Control Panel > Network and Internet > Network and Sharing Center.

b. From the Control Panel Home, choose Change adapter settings.

c. Choose Local Area Connection > Properties.

d. From the Networking tab, select Internet Protocol Version 4 (TCP/IPv4) and click Properties.

e. Record the current settings for restoring purposes after the SP maintenance is complete.

f. Choose Use the following IP address, set the following network parameters, and click OK.

Network configuration settings:

IP address: 10.255.155.49
Subnet mask: 255.255.248.0

g. Close all windows.
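As a quick sanity check of these settings, note that the static address 10.255.155.49 with subnet mask 255.255.248.0 places the laptop on the same /21 subnet as the SP's fixed service addresses (10.255.155.52 and 10.255.155.54, used later in this guide). The following Python sketch is illustrative only and is not part of the official procedure:

```python
import ipaddress

# Laptop static settings from the table above
laptop = ipaddress.ip_interface("10.255.155.49/255.255.248.0")

# Fixed SP addresses used for iLO and Service (SPMAINT) access
sp_addresses = [
    ipaddress.ip_address("10.255.155.52"),  # iLO
    ipaddress.ip_address("10.255.155.54"),  # Service (SPMAINT)
]

# Both SP addresses must fall inside the laptop's subnet for a
# direct cross-over-cable connection to work.
for addr in sp_addresses:
    assert addr in laptop.network

print(laptop.network)  # 10.255.152.0/21
```

If the subnet mask were mistyped (for example 255.255.255.0), the SP addresses would still happen to fall in the laptop's /24, but the /21 above matches the documented configuration.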

3. Power on the SP.

4. Establish a console session to the SP via SSH or web browser.

NOTE: The session iLO username is administrator and the password is listed on a label located on the top left corner of the server.

5. Log on to the SP with the applicable login credentials. A password is not required. Follow the prompts to perform the Moment of
Birth (MOB) procedure.

Method 2 - SSH Connection Setup

Use a terminal emulator application to establish a Secure Shell (SSH) connection to the SP.

1. Connect the laptop to the SP Service/iLO port via a cross-over cable or a private network.

2. Establish an SSH session to the SP.

a. Enter the following settings:

Host Name (or IP address): 10.255.155.52 for iLO or 10.255.155.54 for Service (SPMAINT)

Login Name: administrator

Password: (use the iLO password)

b. At the prompt, enter vsp and press Enter.

3. Log on as root and follow the prompts to begin the MOB process.

Laptop serial settings

Use the guidelines listed in this table to adjust the serial settings of the laptop before using a terminal emulator to
communicate with the SP and perform various tasks to support the storage system.
NOTE: Make sure to choose 115200 when rebuilding an SP or the rebuild will fail to complete.

Setting: Value

Baud Rate: 115200
Parity: None
Word Length: 8
Stop Bits: 1
Flow Control: Both
Transmit: Xon/Xoff
Receive: Xon/Xoff
Char transmit delay: 0
Line transmit delay: 0

Servicing The Storage System

SPOCC

Use SPOCC to access the SPMAINT interface (Service Processor Maintenance) in the Command Line Interface (CLI), where you perform
various administrative and diagnostic tasks to support both the storage system and the SP.
To open SPOCC, enter the SP IP address in a web browser and enter your user name and password.

Accessing the SPMAINT Interface (Interactive CLI)

Use the SPMAINT interface if you are servicing a storage system component or when you need to run a CLI command. The SPMAINT
interface allows you to affect the current status and configuration of both the system and the SP. For this reason, only one instance of
the SPMAINT interface can be run at a time on a given system.
To access the SPMAINT interface:

1. From the left side of the SPOCC home page, click Support.

2. From the Service Processor - Support page, under Service Processor, click SPMAINT on the Web in the Action column.

3. From the 3PAR Service Processor Menu, enter 7 for Interactive CLI for a StoreServ, and then select the system.

Logging in to the HP 3PAR StoreServ Management Console

HP 3PAR StoreServ Management Console (SSMC) software provides contemporary browser-based interfaces for monitoring HP 3PAR
storage systems.
The HP 3PAR StoreServ Management Console (SSMC) procedures in this guide assume that the storage system to be serviced has
already been added to an instance of SSMC and is available for management by logging in to the SSMC Main Console. If that is not the
case, you must first add the storage system to an instance of SSMC by logging in to the SSMC Administrator Console.

Logging in to the Administrator Console

Direct login

1. Browse to the server on which HP 3PAR StoreServ Management Console software is installed: https://<IP address or FQDN>:8443.
The login screen opens. Tip: The default port number is 8443. Another port might have been assigned during installation of the
software.

2. Select the Administrator Console check box.

3. Enter the HP 3PAR SSMC administrator user name and password.

4. Click Login. The Storage Systems screen in the Administrator Console is displayed.

Login from the Main Console

1. When already logged in to the Main Console, click the Session icon in the banner, and then select Administrator Console. The login
screen opens with the Administrator Console check box preselected.

2. Enter the HP 3PAR SSMC administrator user name and password.

3. Click Login. The Storage Systems screen in the Administrator Console is displayed.

Logging in to the Main Console

1. Browse to the server on which HP 3PAR StoreServ Management Console software is installed: https://<IP address or FQDN>:8443.
The login screen opens. Tip: The default port number is 8443. Another port might have been assigned during installation of the
software.

2. Enter a user name and password.

3. Click Login. The Dashboard screen is displayed.

Identifying a Replaceable Part

Parts have a nine-character spare part number on their labels. For some spare parts, the part number is available in the system.
Alternatively, the HP call center can assist in identifying the correct spare part number.
Swappable Components
Colored touch points on a storage system component (such as a lever or latch) identify whether the system should be powered on or off
during a part replacement:

Hot-swappable: Parts are identified by red-colored touch points. The system can remain powered on and active during replacement.
NOTE: Drives are only hot-swappable if they have been properly prepared using servicemag.

Warm-swappable: Parts are identified by gray touch points. The system does not fail if the part is removed, but data loss might
occur if the replacement procedure is not followed correctly.

Cold-swappable: Parts are identified by blue touch points. The system must be powered off or otherwise suspended before replacing
the part.

CAUTION:

Do not replace cold-swappable components while power is applied to the product. Power off the device and then disconnect all AC
power cords.

Power off the equipment and disconnect power to all AC power cords before removing any access covers for cold-swappable areas.

When replacing hot-swappable components, allow approximately 30 seconds between removing the failed component and installing
the replacement. This time is needed to ensure that configuration data about the removed component is cleared from the system
registry. To prevent overheating due to an empty enclosure or bay, insert a blank or leave the component slightly disengaged in the
enclosure until the replacement can be made. Drives must be replaced within 10 minutes, nodes within 30 minutes, and all other parts
within 6 minutes.
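The timing limits above lend themselves to a simple checklist helper. The sketch below is illustrative only (the helper name and structure are not from HPE; the minute values are taken from this guide):

```python
# Maximum minutes a bay may remain empty during a hot-swap
# replacement, per the guidance above (values from this guide).
REPLACEMENT_WINDOW_MIN = {
    "drive": 10,
    "node": 30,
    "other": 6,
}

def within_window(part_type: str, elapsed_min: float) -> bool:
    """True if the replacement is still inside the allowed window."""
    limit = REPLACEMENT_WINDOW_MIN.get(part_type, REPLACEMENT_WINDOW_MIN["other"])
    return elapsed_min <= limit

print(within_window("drive", 8))   # True
print(within_window("node", 45))   # False
```

Any part type not listed (an SFP, PCM, or I/O module, for example) falls under the 6-minute "other" window.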

Before replacing a hot-swappable component, ensure that steps have been taken to prevent loss of data.

Powering on/off the storage system

The following describes how to power the storage system on and off.
WARNING! Do not power off the system unless a service procedure requires the system to be powered off. Before you power off the
system to perform maintenance procedures, first verify with a system administrator. Powering off the system will result in loss of access
to the data from all attached hosts.
Powering Off
Before you begin, unmount volumes exported from the StoreServ from all hosts, or shut down all hosts that have volumes provisioned
from the StoreServ. Next, use either SPOCC or the SPMAINT interface to shut down and power off the system. For information about
SPOCC, see 'Service Processor Onsite Customer Care' in the maintenance guide.
SPOCC, see 'Service Processor Onsite Customer Care' in the maintenance guide.
NOTE: PDUs in any expansion cabinets connected to the storage system might need to be shut off. Use the locatesys command to
identify all connected cabinets before shutting down the system. The locatesys command flashes the LEDs for all node and drive
enclosures.
The system can be shut down before powering off by any of the following three methods:
- Using SPOCC:

1. Select StoreServ Product Maintenance.

2. Select Halt a StoreServ cluster/node.

3. Follow the prompts to shutdown a cluster. Do not shut down individual Nodes.

4. Turn off power to the node PCMs.

5. Turn off power to the drive enclosure PCMs.

6. Turn off all PDUs in the rack.


- Using the SPMAINT interface:

1. From the 3PAR Service Processor Menu, enter 4 for StoreServ Product Maintenance.

2. Select Halt a StoreServ Cluster/Node.

3. Follow the prompts to shut down a cluster. Do not shut down individual nodes.

4. Turn off power to the node PCMs.

5. Turn off power to the drive enclosure PCMs.

6. Turn off all PDUs in the rack.


- Using CLI Directly on the Controller Node if the SP is Inaccessible

1. Enter shutdownsys halt. Confirm all prompts.


CAUTION: Failure to wait until all controller nodes are in a halted state can cause the system to view the shutdown as uncontrolled.
The system will undergo a check-state when powered on if the nodes are not fully halted before power is removed, which can
seriously impact host access to data.

2. Allow 2-3 minutes for the node to halt, then verify that the node Status LED is flashing green and the node hotplug LED is blue,
indicating that the node has been halted.

3. Turn off power to the node PCMs.

4. Turn off power to the drive enclosure PCMs.

5. Turn off all PDUs in the rack.

Powering On

1. Set the circuit breakers on the PDUs to the ON position.

2. Set the switches on the power strips to the ON position.

3. Power on the drive enclosure PCMs.

4. Power on the node enclosure PCMs.

5. Verify the status of the LEDs.

Disengaging the PDU Pivot Brackets

To access the vertically mounted power distribution units (PDU) or servicing area, the PDUs can be lowered out of the rack.

1. Remove the two top mounting screws.

2. Pull down on the PDU to lower it.


NOTE: If necessary, loosen the two bottom screws to easily pull down the PDU.

3. Ensure the PDUs are in a fully lowered position before accessing.



Drive Considerations
WARNING! If the system is enabled with the HP 3PAR Data Encryption feature, use only self-encrypting drives (SED) or Federal
Information Processing Standard (FIPS) capable drives. Using a non-self-encrypting drive might cause errors during the replacement
process.
NOTE: SSDs have a limited number of writes that can occur before reaching the SSD's write endurance limit. This limit is generally high
enough so wear out will not occur during the expected service life of an HP 3PAR StoreServ under the great majority of configurations,
IO patterns, and workloads. HP 3PAR StoreServ tracks all writes to SSDs and can report the percent of the total write endurance limit
that has been used. This allows any SSD approaching the write endurance limit to be proactively replaced before they are automatically
spared out. An SSD has reached the maximum usage limit once it exceeds its write endurance limit. Following the product warranty
period, SSDs that have exceeded the maximum usage limit will not be repaired or replaced under HP support contracts.
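The write-endurance accounting described above amounts to tracking a percentage of the endurance limit. The sketch below is a hypothetical illustration (the function names and the 95% proactive-replacement threshold are assumptions for the example, not HPE values; the array performs this tracking itself):

```python
def endurance_pct_used(bytes_written: float, endurance_limit_bytes: float) -> float:
    """Percent of the SSD's total write endurance limit consumed so far."""
    return 100.0 * bytes_written / endurance_limit_bytes

def should_replace_proactively(pct_used: float, threshold: float = 95.0) -> bool:
    """Flag an SSD approaching its limit before it is automatically spared out.

    The 95% default threshold is illustrative only.
    """
    return pct_used >= threshold

pct = endurance_pct_used(bytes_written=1.9e15, endurance_limit_bytes=2.0e15)
print(pct)                              # 95.0
print(should_replace_proactively(pct))  # True
```

An SSD at or beyond 100% has reached the maximum usage limit described above and, after the warranty period, is not repaired or replaced under HP support contracts.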
CAUTION:

Blank drive carriers are provided and must be used if all slots in the enclosure are not filled with drives.

To avoid potential damage to equipment and loss of data, handle drives carefully.
NOTE: The system supports the following storage drives: Large Form Factor (LFF) drives, Small Form Factor (SFF) drives, and Solid
State Drives (SSD). The replacement procedures are essentially the same for all storage drives.

Minimum drive configurations


Minimum six SSDs per node pair
Minimum four SSDs per node pair for AFC

Minimum eight 10k/15k HDDs per node pair

Minimum twelve 7.2k HDDs per node pair

Drives must be added in pairs

Follow best practices when adding drives


Multiples of two identical drives to each drive enclosure, including nodes

SFF: populate left to right

LFF: bottom to top, left to right


Columns may contain a mixture of drive types

Unused slots must contain drive blanks
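The minimums above can be checked mechanically. This sketch is a hypothetical helper (names are illustrative; the minimums are copied from the list above) that validates a planned drive count for one node pair:

```python
# Minimum drives per node pair, from the list above.
MIN_PER_NODE_PAIR = {
    "ssd": 6,          # standard SSDs
    "ssd_afc": 4,      # SSDs used for Adaptive Flash Cache
    "hdd_10k_15k": 8,  # 10k/15k HDDs
    "hdd_7_2k": 12,    # 7.2k (nearline) HDDs
}

def valid_drive_count(drive_type: str, count: int) -> bool:
    """Drives are added in identical pairs and must meet the minimum."""
    return count % 2 == 0 and count >= MIN_PER_NODE_PAIR[drive_type]

print(valid_drive_count("ssd", 6))        # True
print(valid_drive_count("hdd_7_2k", 11))  # False: odd count, below minimum
```

A count can fail either check independently: 10 7.2k HDDs is an even number but still below the minimum of 12.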

Example drive configuration


Shown is an example of the LFF slot drive loading order, showing how SSD and NL drives can be intermixed in the same vertical
column. Drives are still installed in even-numbered pairs.

NL   21 22 23
NL   17 18 19
NL   13 14 15
NL    9 10 11
SSD   5  6  7
SSD   1  2  3

Node Rescue

Automatic Node-to-Node Rescue



Only use this automatic node-to-node rescue when replacing or upgrading a node in a storage system. The system has on-board node
rescue Ethernet connections that connect each node in a cluster, which allows a rescue to occur from another node rather than
requiring the service processor. The node-to-node rescue starts automatically when a node is removed and then replaced in a storage
system and when there is at least one other node in the cluster.
NOTE:

Always perform the automatic node rescue procedures, unless otherwise instructed.

When performing automatic node-to-node rescue, there might be instances where a node is to be rescued by another node that has
been inserted but has not been detected. If this happens, issue the CLI command:

startnoderescue node <nodenum>

Use the showtask -d command to view detailed status regarding the node rescue.

Service Processor-to-Node Rescue

Only use this SP-to-Node rescue procedure if all nodes in the HP 3PAR system are down and need to be rebuilt from the HP 3PAR OS
image on the service processor.
To perform an SP-to-node rescue:
1. At the rear of the storage system, uncoil the red crossover Ethernet cable connected to the SP Service network connection and
connect this crossover cable to the MGMT Ethernet port of the node that is being rescued.

2. Connect the maintenance laptop to the Physical SP using the serial connection and start an SPMAINT interface session.

3. On the SPMAINT interface home page, enter 3 for StoreServ Configuration Management, then enter 1 for Display StoreServ
information to perform the pre-rescue task of obtaining the following information:

HP 3PAR OS level on the StoreServ system

StoreServ system network parameters, including netmask and gateway information. Return to the main menu.
NOTE: Copy the network information on to a separate document for reference to complete the subsequent steps of configuring
the system network.

4. On the SPMAINT interface home page, complete the following procedure:

a. Enter 4 for StoreServ Product Maintenance.

b. Enter 11 for Node Rescue.

c. Enter y to confirm the action before continuing with node rescue.

d. Enter 1 for Configure Node Rescue, and then select the system. At this point, you will be prompted for the node rescue
configuration information. Complete the following procedure:

Verify the HP 3PAR OS level is correct, and then enter y to use the level.

Enter y to continue to set up node rescue.

NOTE: The process of establishing communication between the SP and system might take a few minutes.

Press Enter to accept the default setup value:

[/dev/tpddev/vvb/0]

Enter y to specify the time zone. Continue to follow the time zone setup prompts.

Confirm the HP 3PAR OS level, and then enter y to continue.

5. Enter 2 for SP-to-Node Rescue.


NOTE: The process of establishing communication between the SP and StoreServ system might take several minutes.

6. Establish a serial connection to the node being rescued. If necessary, disconnect the serial cable from SP.

7. Connect a serial cable from the laptop to the service port on the node being rescued.
NOTE: Check the baud rate settings before establishing a connection. The baud rate of the node is 57600.

8. Connect the crossover cable from the MGMT Ethernet port on the node being rescued to the Service Ethernet port of the SP.

9. Turn on the power to the node to begin the boot process. Closely monitor the serial console output.

10. After the PCI scan occurs, press CTRL+W to establish a whack> prompt.

a. Enter nemoe cmd unset node_rescue_needed and press Enter. The command completes without displaying any output.

b. Enter boot rescue and press Enter.

c. Monitor the console output process. The node will continue to run POST until it stops and displays instructions for running
node rescue. Enter y to continue. The SP installs the OS. This process takes approximately 10 to 15 minutes
(rescue and rebuild of disk = 5 minutes, reboot = 5-10 minutes). When complete, the node restarts and tries to become part
of the cluster. Continue until all nodes are rescued. After the last node boots, the cluster should form and the system should start.



11. When the cluster forms, verify that the node status LED is slowly blinking green and that a login prompt is displayed.

12. Remove the crossover cable from the recently rescued node and connect it to the next node if additional nodes need rescuing.
NOTE: Reconnect the public network (Ethernet) cable to the recently rescued node.

13. Repeat steps 7 through 12 for each node.

14. Log onto a node as a console user.

15. From the node's 3PAR Console Menu, select option 11, Finish SP-to-Node rescue procedure, to set the network configuration for
the system. Follow the prompts to complete the network configuration.
NOTE:

The cluster must be active and the admin volume must be mounted before changing the network configuration.

If necessary, access STATs to obtain the network information or request it from the system administrator.

16. Press Enter.

17. Before deconfiguring the node rescue, disconnect the crossover cables and reconnect the public network cable.

18. Return to the SP Main menu and perform the following:

a. Select 1 Deconfigure Node Rescue.

b. Select X Return to previous menu to return to the main menu.

c. From the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ, then choose the system.

19. Enter shownode to verify that all nodes have joined the cluster.

20. Enter shutdownsys reboot and enter yes to reboot the system.

21. Reconnect the host and host cables if previously removed or shut down.

22. Enter checkhealth -detail to verify the current state of the system.



23. If File Persona is configured, verify that File Services have been restarted and that FPGs are owned by their Primary Node. If FPGs
are active on the Alternate Node, restore them to their Primary Node using:

setfpg -failover <fpgname>

24. In the SP window, issue the exit command and select X to exit from the 3PAR Service Processor Menu and log out of the session.

25. Disconnect the serial cable from the maintenance laptop and the red crossover Ethernet cable from the node, then coil and
replace the cable behind the SP. If applicable, reconnect the customer's network cable and any other cables that may have been
disconnected.

26. Enter checkhealth -detail to verify the current state of the system. After the issues identified by checkhealth are resolved,
instruct the customer to restart the hosts and applications and resume host IO.

27. Close and lock the rear door.
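The shownode check in step 19 can also be scripted when many systems are serviced. The sketch below is illustrative only and not part of the official procedure; it parses captured shownode output (column layout as in the sample output shown later in this guide) and confirms that every node reports State OK and InCluster Yes.

```python
# Illustrative only -- parse captured `shownode` output and confirm that every
# node reports State=OK and InCluster=Yes. Column layout follows the sample
# shownode output shown later in this guide.
SAMPLE = """\
Node ----Name---- -State- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
   0 4UW0001489-0 OK      Yes    Yes       Off          GreenBlnk   65536   32768          100
   1 4UW0001489-1 OK      No     Yes       Off          GreenBlnk   65536   32768          100
"""

def nodes_in_cluster(output):
    """Map node ID -> (State, InCluster) from shownode output."""
    result = {}
    for line in output.splitlines():
        fields = line.split()
        if fields and fields[0].isdigit():  # data rows start with the node ID
            result[int(fields[0])] = (fields[2], fields[4])
    return result

status = nodes_in_cluster(SAMPLE)
all_joined = all(state == "OK" and inc == "Yes" for state, inc in status.values())
```

In practice the output would be captured from the Interactive CLI session rather than embedded as a literal string.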

Controller Node - Customer SSMC

Node identification and shutdown

Contact your ASP for assistance in completing this task.

Node verification

1. In the SSMC main menu, select Storage Systems > Systems.

2. Select the storage system that was just serviced.

3. In the detail pane Health panel, check that the State is Normal (green). If the state is not normal, contact your authorized service
provider for further assistance.

4. On the SSMC main menu, select Storage Systems > Controller Nodes . Select the replaced controller node and verify that the State is
Normal (green).

Controller Node - Service CLI

Node identification and shutdown



Before you begin: Connect to the service processor and start an SPMAINT session.
NOTE: If the failed node is already halted, it is not necessary to shut down the node, because it is not part of the cluster.
1. From the 3PAR Service Processor Menu, enter 7 for Interactive CLI for a StoreServ, and then select the system.

2. When prompted, enter

to enable maintenance mode for the system.

3. Enter the showsys command to verify you are on the correct array.

4. Enter checkhealth -detail to verify the current state of the system.

5. Look for the following alerts:

Node node:<nodeID> "Node is not online"
Node node:<nodeID> "Power supply <psID> detailed state is <status>"
Node node:<nodeID> "Power supply <psID> AC state is <acStatus>"
Node node:<nodeID> "Power supply <psID> DC state is <dcStatus>"
Node node:<nodeID> "Power supply <psID> battery is <batStatus>"
Node node:<nodeID> "Node <nodeID> battery is <batStatus>"
Node node:<nodeID> "Power supply <psID> battery is expired"
Node node:<nodeID> "Fan <fanID> is <status>"
Node node:<nodeID> "Power supply <psID> fan module <fanID> is <status>"
Node node:<nodeID> "Fan module <fanID> is <status>"
Node node:<nodeID> "Detailed State <state>" (degraded or failed)
Node node:<nodeID> "PCI card in Slot:<slot> is empty, but is not empty in Node:<pair-node>"
Node node:<nodeID> "PCI card model in Slot:<slot> is not the same as Node:<pair-node>"
Node -- "There is at least one active servicenode operation in progress"
Node node:<nodeID> "Process <ps_name> has reached 90% of maximum size"
Node node:<nodeID> "Node has less than 100MB of free memory"
Node node:<nodeID> "Environmental factor <item_string> is <state>"
Node node:<nodeID> "Flusher speed set incorrectly to: <flush_speed> (should be 0)"
Node node:<nodeID> "ioload is running"
Node node:<nodeID> "VV <vvid> has <stuck_num> outstanding <command> with a maximum wait time of <slptime>"
Node node:<nodeID> "<item> is not the same on all nodes"

(where <item> is: BIOS version, NEMOE version, Control memory, Data memory, CPU Speed, CPU Bus Speed, HP 3PAR OS version, Package list)

6. Use the nodeID in the alert, or enter shownode and use the node number or identifier to identify which node is affected.
2 node numbering:

4 node numbering:



7. Enter shutdownnode halt <nodeID> to halt the appropriate node.

8. When prompted, enter yes to confirm halting of the node.
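Because each checkhealth alert in step 5 embeds the affected node as node:<nodeID>, the node ID can be pulled out mechanically. A minimal sketch (illustrative, not an official tool):

```python
import re

def affected_node(alert):
    """Return the integer node ID named in a checkhealth Node alert, or None."""
    match = re.search(r"node:(\d+)", alert)
    return int(match.group(1)) if match else None

# Alert text follows the patterns listed in step 5 above.
node_id = affected_node('Node node:2 "Node is not online"')
```

Alerts that name no specific node (for example, the active-servicenode-operation alert) yield None.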

Node verification

1. Enter shownode to verify the node has joined the cluster.

2. Reboot the replacement node to synchronize the software and hardware values by entering shutdownnode reboot <nodeID>.
NOTE: The reboot process may take approximately ten to fifteen minutes. The node becomes part of the cluster when the process
is complete.

3. Verify the node status LED is blinking green in sync with other node LEDs. The uniform blinking indicates the node has joined the
cluster.

4. After rebooting is complete, enter shownode to verify the node has joined the cluster.

4UW0001489 cli% shownode
                                                        Control    Data   Cache
Node ----Name---- -State- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
   0 4UW0001489-0 OK      Yes    Yes       Off          GreenBlnk   65536   32768          100
   1 4UW0001489-1 OK      No     Yes       Off          GreenBlnk   65536   32768          100
   2 4UW0001489-2 OK      No     Yes       Off          GreenBlnk   65536   32768          100
   3 4UW0001489-3 OK      No     Yes       Off          GreenBlnk   65536   32768          100

NOTE: Depending on the serviced component, the node might go through Node Rescue, which can take up to 10 minutes.

NOTE: The LED status for the replaced node might indicate green and could take up to 3 minutes to change to green flashing.



5. Enter shownode -d for additional detail.

4UW0001489 cli% shownode -d

---------------------------------------------Nodes---------------------------------------------
                                                         Control    Data   Cache
Node ----Name---- -State- Master InCluster -Service_LED- ---LED--- Mem(MB) Mem(MB) Available(%)
   0 4UW0001489-0 OK      Yes    Yes       Off           GreenBlnk   65536   32768          100
   1 4UW0001489-1 OK      No     Yes       Off           GreenBlnk   65536   32768          100
   2 4UW0001489-2 OK      No     Yes       Off           GreenBlnk   65536   32768          100
   3 4UW0001489-3 OK      No     Yes       Off           GreenBlnk   65536   32768          100

-----------------------------PCI Cards------------------------------

Node Slot Type -Manufacturer- -Model-- ---Serial--- -Rev- Firmware

0 0 FC EMULEX LPE16002 Onboard 30 10.4.359.0

0 1 SAS LSI 9300-2P Onboard 02 7.00.00.00

0 2 Eth HP 560SFP+ 38EAA714315C n/a 3.15.1-k

0 3 Eth Intel e1000e Onboard n/a 2.3.2-k

1 0 FC EMULEX LPE16002 Onboard 30 10.4.359.0

1 1 SAS LSI 9300-2P Onboard 02 7.00.00.00

1 2 Eth HP 560SFP+ 38EAA7150C40 n/a 3.15.1-k

1 3 Eth Intel e1000e Onboard n/a 2.3.2-k

2 0 FC EMULEX LPE16002 Onboard 30 10.4.359.0

2 1 SAS LSI 9300-2P Onboard 02 7.00.00.00

2 2 Eth HP 560SFP+ 38EAA71522AC n/a 3.15.1-k

2 3 Eth Intel e1000e Onboard n/a 2.3.2-k

3 0 FC EMULEX LPE16002 Onboard 30 10.4.359.0

3 1 SAS LSI 9300-2P Onboard 02 7.00.00.00

3 2 Eth HP 560SFP+ 38EAA7150B90 n/a 3.15.1-k

3 3 Eth Intel e1000e Onboard n/a 2.3.2-k

----------------------------CPUs----------------------------

Node CPU -Manufacturer- -Serial- CPUSpeed(MHz) BusSpeed(MHz)

0 0 GenuineIntel -- 2493 100.00

0 1 GenuineIntel -- 2493 100.00

0 2 GenuineIntel -- 2493 100.00

0 3 GenuineIntel -- 2493 100.00

0 4 GenuineIntel -- 2493 100.00

0 5 GenuineIntel -- 2493 100.00



0 6 GenuineIntel -- 2493 100.00

0 7 GenuineIntel -- 2493 100.00

0 8 GenuineIntel -- 2493 100.00

0 9 GenuineIntel -- 2493 100.00

0 10 GenuineIntel -- 2493 100.00

0 11 GenuineIntel -- 2493 100.00

0 12 GenuineIntel -- 2493 100.00

0 13 GenuineIntel -- 2493 100.00

0 14 GenuineIntel -- 2493 100.00

0 15 GenuineIntel -- 2493 100.00

1 0 GenuineIntel -- 2493 100.00

1 1 GenuineIntel -- 2493 100.00

1 2 GenuineIntel -- 2493 100.00

1 3 GenuineIntel -- 2493 100.00

1 4 GenuineIntel -- 2493 100.00

1 5 GenuineIntel -- 2493 100.00

1 6 GenuineIntel -- 2493 100.00

1 7 GenuineIntel -- 2493 100.00

1 8 GenuineIntel -- 2493 100.00

1 9 GenuineIntel -- 2493 100.00

1 10 GenuineIntel -- 2493 100.00

1 11 GenuineIntel -- 2493 100.00

1 12 GenuineIntel -- 2493 100.00

1 13 GenuineIntel -- 2493 100.00

1 14 GenuineIntel -- 2493 100.00

1 15 GenuineIntel -- 2493 100.00

2 0 GenuineIntel -- 2494 100.00

2 1 GenuineIntel -- 2494 100.00

2 2 GenuineIntel -- 2494 100.00

2 3 GenuineIntel -- 2494 100.00

2 4 GenuineIntel -- 2494 100.00

2 5 GenuineIntel -- 2494 100.00

2 6 GenuineIntel -- 2494 100.00

2 7 GenuineIntel -- 2494 100.00

2 8 GenuineIntel -- 2494 100.00

2 9 GenuineIntel -- 2494 100.00

2 10 GenuineIntel -- 2494 100.00

2 11 GenuineIntel -- 2494 100.00



2 12 GenuineIntel -- 2494 100.00

2 13 GenuineIntel -- 2494 100.00

2 14 GenuineIntel -- 2494 100.00

2 15 GenuineIntel -- 2494 100.00

3 0 GenuineIntel -- 2493 100.00

3 1 GenuineIntel -- 2493 100.00

3 2 GenuineIntel -- 2493 100.00

3 3 GenuineIntel -- 2493 100.00

3 4 GenuineIntel -- 2493 100.00

3 5 GenuineIntel -- 2493 100.00

3 6 GenuineIntel -- 2493 100.00

3 7 GenuineIntel -- 2493 100.00

3 8 GenuineIntel -- 2493 100.00

3 9 GenuineIntel -- 2493 100.00

3 10 GenuineIntel -- 2493 100.00

3 11 GenuineIntel -- 2493 100.00

3 12 GenuineIntel -- 2493 100.00

3 13 GenuineIntel -- 2493 100.00

3 14 GenuineIntel -- 2493 100.00

3 15 GenuineIntel -- 2493 100.00

--------------------------------------Physical Memory---------------------------------------

Node Slot SlotID -Name-- -Usage- ---Type--- --Manufacturer--- -Serial- -Latency-- Size(MB)

0 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 061A8323 CL5.0/9.0 32768

0 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 069DBAB1 CL5.0/9.0 32768

0 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0C2FE5 CL5.0/11.0 16384

0 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0C2FE4 CL5.0/11.0 16384

1 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 061773B9 CL5.0/9.0 32768

1 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 05FF0C20 CL5.0/9.0 32768

1 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0EAF22 CL5.0/11.0 16384

1 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0EAF48 CL5.0/11.0 16384

2 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 06B62512 CL5.0/9.0 32768

2 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 06B62501 CL5.0/9.0 32768

2 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0C3042 CL5.0/11.0 16384

2 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0C3034 CL5.0/11.0 16384

3 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 05FF0D2D CL5.0/9.0 32768

3 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 05FF0D49 CL5.0/9.0 32768

3 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0C5387 CL5.0/11.0 16384

3 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0C537F CL5.0/11.0 16384



---------------------------------------------Internal Drives-----------------------------------
-----------

Node Drive ------WWN------- -Manufacturer- -----Model------ ---Serial--- -Firmware- Size(MB)


Type SedState

0 0 5001B44E5A0DDC90 SanDisk DX300128A5xnEMLC 151640400016 X2190300 122104 SATA capable

1 0 5001B44E5A0DDCF3 SanDisk DX300128A5xnEMLC 151640400115 X2190300 122104 SATA capable

2 0 5001B44E5A0DDD14 SanDisk DX300128A5xnEMLC 151640400148 X2190300 122104 SATA capable

3 0 5001B44E5A0DDD27 SanDisk DX300128A5xnEMLC 151640400167 X2190300 122104 SATA capable

--------------------------------Power Supplies---------------------------------

Node PS -Assem_Serial- -PSState- FanState ACState DCState -BatState- ChrgLvl(%)

0,1 0 5DNSFA2436Y1UT OK OK OK OK OK 100

0,1 1 5DNSFA2436Y1UZ OK OK OK OK OK 100

2,3 0 5DNSFA2436Y1GU OK OK OK OK OK 100

2,3 1 5DNSFA2437A0MY OK OK OK OK OK 100

------------------------------MCU------------------------------

Node Model Firmware State ResetReason -------Up Since--------

0 NEMOE 4.8.28 ready cold_power_on 2000-12-31 17:59:10 CST

1 NEMOE 4.8.28 ready cold_power_on 2000-12-31 17:59:10 CST

2 NEMOE 4.8.28 ready cold_power_on 2000-12-31 17:59:11 CST

3 NEMOE 4.8.28 ready cold_power_on 2000-12-31 17:59:11 CST

-----------Uptime-----------

Node -------Up Since--------

0 2000-12-31 18:03:30 CST

1 2000-12-31 18:03:29 CST

2 2000-12-31 18:03:30 CST

3 2000-12-31 18:03:31 CST

6. Issue the checkhealth command to verify that the state of all nodes is healthy.


NOTE: Depending on the serviced component, the node might go through Node Rescue, which can take up to 10 minutes.

7. Enter exit and select X to exit from the 3PAR Service Processor Menu.
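The LED check in step 3 above can likewise be automated from shownode output: every node should show GreenBlnk in the ---LED--- column once it blinks in sync with the cluster. An illustrative sketch, assuming the output has been captured as text:

```python
# Illustrative only -- flag nodes whose ---LED--- column is not GreenBlnk,
# i.e. nodes not yet blinking in sync with the rest of the cluster.
# The Green state on node 1 below is a made-up example of a node still settling.
SAMPLE = """\
Node ----Name---- -State- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
   0 4UW0001489-0 OK      Yes    Yes       Off          GreenBlnk   65536   32768          100
   1 4UW0001489-1 OK      No     Yes       Off          Green       65536   32768          100
"""

def led_states(output):
    """Map node ID -> LED state from shownode output."""
    states = {}
    for line in output.splitlines():
        fields = line.split()
        if fields and fields[0].isdigit():
            states[int(fields[0])] = fields[6]  # ---LED--- column
    return states

not_synced = [node for node, led in led_states(SAMPLE).items() if led != "GreenBlnk"]
```

As the NOTE above explains, a solid green LED on a replaced node may take up to 3 minutes to change to flashing, so a non-empty result may simply mean "wait and recheck".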

Node Clock Battery CLI

Node clock battery identification



Before you begin: Connect to the service processor and start an SPMAINT session.

1. From the 3PAR Service Processor Menu, enter 7 for Interactive CLI for a StoreServ, and then select the system.

2. When prompted, enter

to enable maintenance mode for the system.

3. Enter the showsys command to verify you are on the correct array.

4. Enter checkhealth -detail to verify the current state of the system.

5. Look for the following alert:

Node node:<nodeID> "Node <nodeID> battery is <batStatus>"

6. Use the nodeID in the alert, or enter shownode and use the node number or identifier to identify which node is affected.
2 node numbering:

4 node numbering:

7. Type exit to return to the 3PAR Service Processor Menu.

8. Select option Halt a StoreServ cluster/node, select the desired node, and confirm all prompts to halt the node.

9. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.

10. If required, execute the locatesys command to identify the system:

a. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.

b. Execute the locatesys command.
NOTE: All nodes in this system flash, except for the node with the failed component, which displays a solid blue LED.

Node clock battery verification

1. After rebooting is complete, enter shownode to verify the node has joined the cluster.

2. Review the shownode output and confirm that the State of each node is OK:

4UW0001489 cli% shownode
                                                        Control    Data   Cache
Node ----Name---- -State- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
   0 4UW0001489-0 OK      Yes    Yes       Off          GreenBlnk   65536   32768          100
   1 4UW0001489-1 OK      No     Yes       Off          GreenBlnk   65536   32768          100
   2 4UW0001489-2 OK      No     Yes       Off          GreenBlnk   65536   32768          100
   3 4UW0001489-3 OK      No     Yes       Off          GreenBlnk   65536   32768          100

NOTE: Depending on the serviced component, the node might go through Node Rescue, which can take up to 10 minutes.

NOTE: The LED status for the replaced node might indicate green and could take up to 3 minutes to change to green flashing.

3. Enter shownode -d for additional detail.

4UW0001489 cli% shownode -d

---------------------------------------------Nodes---------------------------------------------
                                                         Control    Data   Cache
Node ----Name---- -State- Master InCluster -Service_LED- ---LED--- Mem(MB) Mem(MB) Available(%)
   0 4UW0001489-0 OK      Yes    Yes       Off           GreenBlnk   65536   32768          100
   1 4UW0001489-1 OK      No     Yes       Off           GreenBlnk   65536   32768          100
   2 4UW0001489-2 OK      No     Yes       Off           GreenBlnk   65536   32768          100
   3 4UW0001489-3 OK      No     Yes       Off           GreenBlnk   65536   32768          100

-----------------------------PCI Cards------------------------------



Node Slot Type -Manufacturer- -Model-- ---Serial--- -Rev- Firmware

0 0 FC EMULEX LPE16002 Onboard 30 10.4.359.0

0 1 SAS LSI 9300-2P Onboard 02 7.00.00.00

0 2 Eth HP 560SFP+ 38EAA714315C n/a 3.15.1-k

0 3 Eth Intel e1000e Onboard n/a 2.3.2-k

1 0 FC EMULEX LPE16002 Onboard 30 10.4.359.0

1 1 SAS LSI 9300-2P Onboard 02 7.00.00.00

1 2 Eth HP 560SFP+ 38EAA7150C40 n/a 3.15.1-k

1 3 Eth Intel e1000e Onboard n/a 2.3.2-k

2 0 FC EMULEX LPE16002 Onboard 30 10.4.359.0

2 1 SAS LSI 9300-2P Onboard 02 7.00.00.00

2 2 Eth HP 560SFP+ 38EAA71522AC n/a 3.15.1-k

2 3 Eth Intel e1000e Onboard n/a 2.3.2-k

3 0 FC EMULEX LPE16002 Onboard 30 10.4.359.0

3 1 SAS LSI 9300-2P Onboard 02 7.00.00.00

3 2 Eth HP 560SFP+ 38EAA7150B90 n/a 3.15.1-k

3 3 Eth Intel e1000e Onboard n/a 2.3.2-k

----------------------------CPUs----------------------------

Node CPU -Manufacturer- -Serial- CPUSpeed(MHz) BusSpeed(MHz)

0 0 GenuineIntel -- 2493 100.00

0 1 GenuineIntel -- 2493 100.00

0 2 GenuineIntel -- 2493 100.00

0 3 GenuineIntel -- 2493 100.00

0 4 GenuineIntel -- 2493 100.00

0 5 GenuineIntel -- 2493 100.00

0 6 GenuineIntel -- 2493 100.00

0 7 GenuineIntel -- 2493 100.00

0 8 GenuineIntel -- 2493 100.00

0 9 GenuineIntel -- 2493 100.00

0 10 GenuineIntel -- 2493 100.00

0 11 GenuineIntel -- 2493 100.00

0 12 GenuineIntel -- 2493 100.00

0 13 GenuineIntel -- 2493 100.00

0 14 GenuineIntel -- 2493 100.00

0 15 GenuineIntel -- 2493 100.00

1 0 GenuineIntel -- 2493 100.00

1 1 GenuineIntel -- 2493 100.00

1 2 GenuineIntel -- 2493 100.00



1 3 GenuineIntel -- 2493 100.00

1 4 GenuineIntel -- 2493 100.00

1 5 GenuineIntel -- 2493 100.00

1 6 GenuineIntel -- 2493 100.00

1 7 GenuineIntel -- 2493 100.00

1 8 GenuineIntel -- 2493 100.00

1 9 GenuineIntel -- 2493 100.00

1 10 GenuineIntel -- 2493 100.00

1 11 GenuineIntel -- 2493 100.00

1 12 GenuineIntel -- 2493 100.00

1 13 GenuineIntel -- 2493 100.00

1 14 GenuineIntel -- 2493 100.00

1 15 GenuineIntel -- 2493 100.00

2 0 GenuineIntel -- 2494 100.00

2 1 GenuineIntel -- 2494 100.00

2 2 GenuineIntel -- 2494 100.00

2 3 GenuineIntel -- 2494 100.00

2 4 GenuineIntel -- 2494 100.00

2 5 GenuineIntel -- 2494 100.00

2 6 GenuineIntel -- 2494 100.00

2 7 GenuineIntel -- 2494 100.00

2 8 GenuineIntel -- 2494 100.00

2 9 GenuineIntel -- 2494 100.00

2 10 GenuineIntel -- 2494 100.00

2 11 GenuineIntel -- 2494 100.00

2 12 GenuineIntel -- 2494 100.00

2 13 GenuineIntel -- 2494 100.00

2 14 GenuineIntel -- 2494 100.00

2 15 GenuineIntel -- 2494 100.00

3 0 GenuineIntel -- 2493 100.00

3 1 GenuineIntel -- 2493 100.00

3 2 GenuineIntel -- 2493 100.00

3 3 GenuineIntel -- 2493 100.00

3 4 GenuineIntel -- 2493 100.00

3 5 GenuineIntel -- 2493 100.00

3 6 GenuineIntel -- 2493 100.00

3 7 GenuineIntel -- 2493 100.00

3 8 GenuineIntel -- 2493 100.00



3 9 GenuineIntel -- 2493 100.00

3 10 GenuineIntel -- 2493 100.00

3 11 GenuineIntel -- 2493 100.00

3 12 GenuineIntel -- 2493 100.00

3 13 GenuineIntel -- 2493 100.00

3 14 GenuineIntel -- 2493 100.00

3 15 GenuineIntel -- 2493 100.00

--------------------------------------Physical Memory---------------------------------------

Node Slot SlotID -Name-- -Usage- ---Type--- --Manufacturer--- -Serial- -Latency-- Size(MB)

0 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 061A8323 CL5.0/9.0 32768

0 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 069DBAB1 CL5.0/9.0 32768

0 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0C2FE5 CL5.0/11.0 16384

0 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0C2FE4 CL5.0/11.0 16384

1 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 061773B9 CL5.0/9.0 32768

1 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 05FF0C20 CL5.0/9.0 32768

1 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0EAF22 CL5.0/11.0 16384

1 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0EAF48 CL5.0/11.0 16384

2 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 06B62512 CL5.0/9.0 32768

2 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 06B62501 CL5.0/9.0 32768

2 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0C3042 CL5.0/11.0 16384

2 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0C3034 CL5.0/11.0 16384

3 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 05FF0D2D CL5.0/9.0 32768

3 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 05FF0D49 CL5.0/9.0 32768

3 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0C5387 CL5.0/11.0 16384

3 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0C537F CL5.0/11.0 16384

---------------------------------------------Internal Drives-----------------------------------
-----------

Node Drive ------WWN------- -Manufacturer- -----Model------ ---Serial--- -Firmware- Size(MB)


Type SedState

0 0 5001B44E5A0DDC90 SanDisk DX300128A5xnEMLC 151640400016 X2190300 122104 SATA capable

1 0 5001B44E5A0DDCF3 SanDisk DX300128A5xnEMLC 151640400115 X2190300 122104 SATA capable

2 0 5001B44E5A0DDD14 SanDisk DX300128A5xnEMLC 151640400148 X2190300 122104 SATA capable

3 0 5001B44E5A0DDD27 SanDisk DX300128A5xnEMLC 151640400167 X2190300 122104 SATA capable

--------------------------------Power Supplies---------------------------------

Node PS -Assem_Serial- -PSState- FanState ACState DCState -BatState- ChrgLvl(%)

0,1 0 5DNSFA2436Y1UT OK OK OK OK OK 100

0,1 1 5DNSFA2436Y1UZ OK OK OK OK OK 100

2,3 0 5DNSFA2436Y1GU OK OK OK OK OK 100



2,3 1 5DNSFA2437A0MY OK OK OK OK OK 100

------------------------------MCU------------------------------

Node Model Firmware State ResetReason -------Up Since--------

0 NEMOE 4.8.28 ready cold_power_on 2000-12-31 17:59:10 CST

1 NEMOE 4.8.28 ready cold_power_on 2000-12-31 17:59:10 CST

2 NEMOE 4.8.28 ready cold_power_on 2000-12-31 17:59:11 CST

3 NEMOE 4.8.28 ready cold_power_on 2000-12-31 17:59:11 CST

-----------Uptime-----------

Node -------Up Since--------

0 2000-12-31 18:03:30 CST

1 2000-12-31 18:03:29 CST

2 2000-12-31 18:03:30 CST

3 2000-12-31 18:03:31 CST

4. Enter checkhealth -detail to verify the current state of the system.

5. Enter exit and select X to exit from the 3PAR Service Processor Menu.

Node DIMM CLI

Node DIMM identification

Before you begin: Connect to the service processor and start an SPMAINT session.

1. From the 3PAR Service Processor Menu, enter 7 for Interactive CLI for a StoreServ, and then select the system.

2. When prompted, enter

to enable maintenance mode for the system.

3. Enter the showsys command to verify you are on the correct array.

4. Enter checkhealth -detail to verify the current state of the system.

5. Enter shownode to verify the current state of the controller nodes.

                                          Control    Data   Cache
Node --Name--- -State-- Master InCluster ---LED--- Mem(MB) Mem(MB) Available(%)
   0 1001356-0 Degraded Yes    Yes       AmberBlnk    2048    8192          100
   1 1001356-1 Degraded No     Yes       AmberBlnk    2048    8192          100

6. Confirm that the memory and cache numbers are correct. If not, issue the shownode -mem command to identify the location of the
failed DIMM.

4UW0001489 cli% shownode -mem

Node Slot SlotID -Name-- -Usage- ---Type--- --Manufacturer--- -Serial- -Latency-- Size(MB)

0 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 061A8323 CL5.0/9.0 32768

0 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 069DBAB1 CL5.0/9.0 32768

0 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0C2FE5 CL5.0/11.0 16384

0 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0C2FE4 CL5.0/11.0 16384

1 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 061773B9 CL5.0/9.0 32768

1 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 05FF0C20 CL5.0/9.0 32768

1 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0EAF22 CL5.0/11.0 16384

1 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0EAF48 CL5.0/11.0 16384

2 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 06B62512 CL5.0/9.0 32768

2 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 06B62501 CL5.0/9.0 32768

2 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0C3042 CL5.0/11.0 16384

2 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0C3034 CL5.0/11.0 16384

3 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 05FF0D2D CL5.0/9.0 32768

3 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 05FF0D49 CL5.0/9.0 32768

3 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0C5387 CL5.0/11.0 16384

3 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0C537F CL5.0/11.0 16384

7. Issue the shownode -i command to display the part number. The shownode -i command displays node inventory information. Scroll
down to view the physical memory information.

4UW0001489 cli% shownode -i

------------------------------Nodes-------------------------------

Node ----Name---- -Manufacturer- ---Assem_Part--- --Assem_Serial--

0 4UW0001489-0 FXN H6Y97-63001 PDSET81LM7R0CE

1 4UW0001489-1 FXN H6Y97-63001 PDSET81LM7O03O



2 4UW0001489-2 FXN H6Y97-63001 PDSET81LM7R08T

3 4UW0001489-3 FXN H6Y97-63001 PDSET81LM7R03R

---------------------PCI Cards---------------------

Node Slot Type -Manufacturer- -Model-- ---Serial---

0 0 FC EMULEX LPE16002 Onboard

0 1 SAS LSI 9300-2P Onboard

0 2 Eth HP 560SFP+ 38EAA714315C

0 3 Eth Intel e1000e Onboard

1 0 FC EMULEX LPE16002 Onboard

1 1 SAS LSI 9300-2P Onboard

1 2 Eth HP 560SFP+ 38EAA7150C40

1 3 Eth Intel e1000e Onboard

2 0 FC EMULEX LPE16002 Onboard

2 1 SAS LSI 9300-2P Onboard

2 2 Eth HP 560SFP+ 38EAA71522AC

2 3 Eth Intel e1000e Onboard

3 0 FC EMULEX LPE16002 Onboard

3 1 SAS LSI 9300-2P Onboard

3 2 Eth HP 560SFP+ 38EAA7150B90

3 3 Eth Intel e1000e Onboard

--------------------------------------CPUs--------------------------------------

Node CPU -Manufacturer- ---------------------Model--------------------- -Serial-

0 0 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

0 1 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

0 2 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

0 3 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

0 4 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

0 5 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

0 6 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

0 7 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

0 8 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

0 9 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

0 10 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

0 11 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

0 12 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

0 13 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

0 14 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

0 15 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --



1 0 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

1 1 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

1 2 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

1 3 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

1 4 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

1 5 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

1 6 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

1 7 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

1 8 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

1 9 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

1 10 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

1 11 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

1 12 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

1 13 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

1 14 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

1 15 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

2 0 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

2 1 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

2 2 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

2 3 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

2 4 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

2 5 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

2 6 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

2 7 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

2 8 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

2 9 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

2 10 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

2 11 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

2 12 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

2 13 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

2 14 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

2 15 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

3 0 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

3 1 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

3 2 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

3 3 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

3 4 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

3 5 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --



3 6 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

3 7 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

3 8 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

3 9 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

3 10 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

3 11 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

3 12 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

3 13 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

3 14 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

3 15 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

---------------------------------------------Internal Drives-----------------------------------
-----------

Node Drive ------WWN------- -Manufacturer- -----Model------ ---Serial--- -Firmware- Size(MB)


Type SedState

0 0 5001B44E5A0DDC90 SanDisk DX300128A5xnEMLC 151640400016 X2190300 122104 SATA capable

1 0 5001B44E5A0DDCF3 SanDisk DX300128A5xnEMLC 151640400115 X2190300 122104 SATA capable

2 0 5001B44E5A0DDD14 SanDisk DX300128A5xnEMLC 151640400148 X2190300 122104 SATA capable

3 0 5001B44E5A0DDD27 SanDisk DX300128A5xnEMLC 151640400167 X2190300 122104 SATA capable

-----------------------------------------Physical Memory---------------------------------------
---

Node Slot SlotID Name Type --Manufacturer--- ----PartNumber---- -Serial- -Rev- Size(MB)

0 CC_0.0 J18000 DIMM0.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 061A8323 0011 32768

0 CC_1.0 J19000 DIMM1.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 069DBAB1 0011 32768

0 DC_0.0 J14005 DIMM0.0 DDR3_SDRAM Micron Technology 36KDZS2G72PZ-1G6E1 0D0C2FE5 4531 16384

0 DC_1.0 J16005 DIMM1.0 DDR3_SDRAM Micron Technology -- 0D0C2FE4 0000 16384

1 CC_0.0 J18000 DIMM0.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 061773B9 0011 32768

1 CC_1.0 J19000 DIMM1.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 05FF0C20 0011 32768

1 DC_0.0 J14005 DIMM0.0 DDR3_SDRAM Micron Technology -- 0D0EAF22 0000 16384

1 DC_1.0 J16005 DIMM1.0 DDR3_SDRAM Micron Technology -- 0D0EAF48 0000 16384

2 CC_0.0 J18000 DIMM0.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 06B62512 0011 32768

2 CC_1.0 J19000 DIMM1.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 06B62501 0011 32768

2 DC_0.0 J14005 DIMM0.0 DDR3_SDRAM Micron Technology 36KDZS2G72PZ-1G6E1 0D0C3042 4531 16384

2 DC_1.0 J16005 DIMM1.0 DDR3_SDRAM Micron Technology -- 0D0C3034 0000 16384

3 CC_0.0 J18000 DIMM0.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 05FF0D2D 0011 32768

3 CC_1.0 J19000 DIMM1.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 05FF0D49 0011 32768

3 DC_0.0 J14005 DIMM0.0 DDR3_SDRAM Micron Technology 36KDZS2G72PZ-1G6E1 0D0C5387 4531 16384

3 DC_1.0 J16005 DIMM1.0 DDR3_SDRAM Micron Technology -- 0D0C537F 0000 16384

------------------Power Supplies------------------

Node PS -Manufacturer- -Assem_Part- -Assem_Serial-



0,1 0 XYRATEX 726237-001 5DNSFA2436Y1UT

0,1 1 XYRATEX 726237-001 5DNSFA2436Y1UZ

2,3 0 XYRATEX 726237-001 5DNSFA2436Y1GU

2,3 1 XYRATEX 726237-001 5DNSFA2437A0MY

8. Refer to node numbering:


2 node numbering:

4 node numbering:

9. Enter exit to return to the 3PAR Service Processor Menu.

10. Select option 4 StoreServ Product Maintenance and then select the desired system.

11. Select option Halt a StoreServ cluster/node, select the desired node, and confirm all prompts to halt the node.

12. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.

13. If required, execute the locatesys command to identify the system:

a. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.

b. Execute the

locatesys

command.

NOTE: All nodes in the system flash, except for the node with the failed component, which displays a solid blue LED.

Node DIMM verification

1. After rebooting is complete, enter shownode to verify the node has joined the cluster.

2. Issue the

shownode -i

command to display the memory.

NOTE: The shownode -i command displays node inventory information; scroll down to view the physical memory information.

4UW0001489 cli% shownode -i

------------------------------Nodes-------------------------------

Node ----Name---- -Manufacturer- ---Assem_Part--- --Assem_Serial--

0 4UW0001489-0 FXN H6Y97-63001 PDSET81LM7R0CE

1 4UW0001489-1 FXN H6Y97-63001 PDSET81LM7O03O

2 4UW0001489-2 FXN H6Y97-63001 PDSET81LM7R08T

3 4UW0001489-3 FXN H6Y97-63001 PDSET81LM7R03R

---------------------PCI Cards---------------------

Node Slot Type -Manufacturer- -Model-- ---Serial---

0 0 FC EMULEX LPE16002 Onboard

0 1 SAS LSI 9300-2P Onboard

0 2 Eth HP 560SFP+ 38EAA714315C

0 3 Eth Intel e1000e Onboard

1 0 FC EMULEX LPE16002 Onboard

1 1 SAS LSI 9300-2P Onboard

1 2 Eth HP 560SFP+ 38EAA7150C40

1 3 Eth Intel e1000e Onboard

2 0 FC EMULEX LPE16002 Onboard

2 1 SAS LSI 9300-2P Onboard

2 2 Eth HP 560SFP+ 38EAA71522AC

2 3 Eth Intel e1000e Onboard

3 0 FC EMULEX LPE16002 Onboard

3 1 SAS LSI 9300-2P Onboard

3 2 Eth HP 560SFP+ 38EAA7150B90

3 3 Eth Intel e1000e Onboard

--------------------------------------CPUs--------------------------------------

Node CPU -Manufacturer- ---------------------Model--------------------- -Serial-

0 0 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

0 1 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

0 2 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

0 3 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

0 4 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

0 5 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

0 6 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

0 7 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

0 8 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

0 9 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

0 10 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

0 11 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

0 12 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

0 13 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

0 14 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

0 15 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

1 0 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

1 1 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

1 2 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

1 3 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

1 4 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

1 5 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

1 6 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

1 7 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

1 8 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

1 9 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

1 10 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

1 11 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

1 12 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

1 13 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

1 14 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

1 15 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

2 0 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

2 1 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

2 2 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

2 3 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

2 4 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

2 5 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

2 6 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

2 7 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

2 8 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

2 9 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

2 10 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

2 11 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

2 12 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

2 13 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

2 14 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

2 15 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

3 0 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

3 1 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

3 2 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

3 3 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

3 4 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

3 5 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

3 6 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

3 7 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

3 8 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

3 9 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

3 10 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

3 11 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

3 12 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

3 13 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

3 14 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

3 15 GenuineIntel 62 (Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz) --

---------------------------------------------Internal Drives----------------------------------------------

Node Drive ------WWN------- -Manufacturer- -----Model------ ---Serial--- -Firmware- Size(MB)


Type SedState

0 0 5001B44E5A0DDC90 SanDisk DX300128A5xnEMLC 151640400016 X2190300 122104 SATA capable

1 0 5001B44E5A0DDCF3 SanDisk DX300128A5xnEMLC 151640400115 X2190300 122104 SATA capable

2 0 5001B44E5A0DDD14 SanDisk DX300128A5xnEMLC 151640400148 X2190300 122104 SATA capable

3 0 5001B44E5A0DDD27 SanDisk DX300128A5xnEMLC 151640400167 X2190300 122104 SATA capable

-----------------------------------------Physical Memory------------------------------------------

Node Slot SlotID Name Type --Manufacturer--- ----PartNumber---- -Serial- -Rev- Size(MB)

0 CC_0.0 J18000 DIMM0.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 061A8323 0011 32768

0 CC_1.0 J19000 DIMM1.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 069DBAB1 0011 32768

0 DC_0.0 J14005 DIMM0.0 DDR3_SDRAM Micron Technology 36KDZS2G72PZ-1G6E1 0D0C2FE5 4531 16384

0 DC_1.0 J16005 DIMM1.0 DDR3_SDRAM Micron Technology -- 0D0C2FE4 0000 16384

1 CC_0.0 J18000 DIMM0.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 061773B9 0011 32768

1 CC_1.0 J19000 DIMM1.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 05FF0C20 0011 32768

1 DC_0.0 J14005 DIMM0.0 DDR3_SDRAM Micron Technology -- 0D0EAF22 0000 16384

1 DC_1.0 J16005 DIMM1.0 DDR3_SDRAM Micron Technology -- 0D0EAF48 0000 16384

2 CC_0.0 J18000 DIMM0.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 06B62512 0011 32768

2 CC_1.0 J19000 DIMM1.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 06B62501 0011 32768

2 DC_0.0 J14005 DIMM0.0 DDR3_SDRAM Micron Technology 36KDZS2G72PZ-1G6E1 0D0C3042 4531 16384

2 DC_1.0 J16005 DIMM1.0 DDR3_SDRAM Micron Technology -- 0D0C3034 0000 16384

3 CC_0.0 J18000 DIMM0.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 05FF0D2D 0011 32768

3 CC_1.0 J19000 DIMM1.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 05FF0D49 0011 32768

3 DC_0.0 J14005 DIMM0.0 DDR3_SDRAM Micron Technology 36KDZS2G72PZ-1G6E1 0D0C5387 4531 16384

3 DC_1.0 J16005 DIMM1.0 DDR3_SDRAM Micron Technology -- 0D0C537F 0000 16384

------------------Power Supplies------------------

Node PS -Manufacturer- -Assem_Part- -Assem_Serial-

0,1 0 XYRATEX 726237-001 5DNSFA2436Y1UT

0,1 1 XYRATEX 726237-001 5DNSFA2436Y1UZ

2,3 0 XYRATEX 726237-001 5DNSFA2436Y1GU

2,3 1 XYRATEX 726237-001 5DNSFA2437A0MY
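The physical memory listing above can also be checked programmatically. The following minimal sketch (not an HPE tool) sums the Size(MB) column of captured shownode -i output per node; the sample rows mirror the listing in this guide, and treating the first two fields as node ID and slot name is an assumption based on that listing.

```python
# Sketch: sum the Size(MB) column of the Physical Memory rows per node.
# Sample rows mirror the "shownode -i" listing shown in this guide.

SAMPLE = """\
Node Slot SlotID Name Type --Manufacturer--- ----PartNumber---- -Serial- -Rev- Size(MB)
0 CC_0.0 J18000 DIMM0.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 061A8323 0011 32768
0 CC_1.0 J19000 DIMM1.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 069DBAB1 0011 32768
0 DC_0.0 J14005 DIMM0.0 DDR3_SDRAM Micron Technology 36KDZS2G72PZ-1G6E1 0D0C2FE5 4531 16384
0 DC_1.0 J16005 DIMM1.0 DDR3_SDRAM Micron Technology -- 0D0C2FE4 0000 16384
"""

def memory_per_node(output):
    """Sum Size(MB) for every DIMM row, keyed by node ID."""
    totals = {}
    for line in output.splitlines():
        fields = line.split()
        # DIMM rows start with the node ID followed by a CC_/DC_ slot name.
        if len(fields) >= 6 and fields[0].isdigit() and fields[1].startswith(("CC_", "DC_")):
            node = int(fields[0])
            totals[node] = totals.get(node, 0) + int(fields[-1])
    return totals

print(memory_per_node(SAMPLE))  # {0: 98304} -> 64 GB control + 32 GB data
```

A per-node total of 98304 MB matches the 65536 MB control plus 32768 MB data cache reported by shownode.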

3. Enter

shownode -d

for additional detail.

4UW0001489 cli% shownode -d

---------------------------------------------Nodes---------------------------------------------

Control Data Cache

Node ----Name---- -State- Master InCluster -Service_LED- ---LED--- Mem(MB) Mem(MB)


Available(%)

0 4UW0001489-0 OK Yes Yes Off GreenBlnk 65536 32768 100

1 4UW0001489-1 OK No Yes Off GreenBlnk 65536 32768 100

2 4UW0001489-2 OK No Yes Off GreenBlnk 65536 32768 100

3 4UW0001489-3 OK No Yes Off GreenBlnk 65536 32768 100

-----------------------------PCI Cards------------------------------

Node Slot Type -Manufacturer- -Model-- ---Serial--- -Rev- Firmware

0 0 FC EMULEX LPE16002 Onboard 30 10.4.359.0

0 1 SAS LSI 9300-2P Onboard 02 7.00.00.00

0 2 Eth HP 560SFP+ 38EAA714315C n/a 3.15.1-k

0 3 Eth Intel e1000e Onboard n/a 2.3.2-k

1 0 FC EMULEX LPE16002 Onboard 30 10.4.359.0

1 1 SAS LSI 9300-2P Onboard 02 7.00.00.00

1 2 Eth HP 560SFP+ 38EAA7150C40 n/a 3.15.1-k

1 3 Eth Intel e1000e Onboard n/a 2.3.2-k

2 0 FC EMULEX LPE16002 Onboard 30 10.4.359.0

2 1 SAS LSI 9300-2P Onboard 02 7.00.00.00

2 2 Eth HP 560SFP+ 38EAA71522AC n/a 3.15.1-k

2 3 Eth Intel e1000e Onboard n/a 2.3.2-k

3 0 FC EMULEX LPE16002 Onboard 30 10.4.359.0

3 1 SAS LSI 9300-2P Onboard 02 7.00.00.00

3 2 Eth HP 560SFP+ 38EAA7150B90 n/a 3.15.1-k

3 3 Eth Intel e1000e Onboard n/a 2.3.2-k

----------------------------CPUs----------------------------

Node CPU -Manufacturer- -Serial- CPUSpeed(MHz) BusSpeed(MHz)

0 0 GenuineIntel -- 2493 100.00

0 1 GenuineIntel -- 2493 100.00

0 2 GenuineIntel -- 2493 100.00

0 3 GenuineIntel -- 2493 100.00

0 4 GenuineIntel -- 2493 100.00

0 5 GenuineIntel -- 2493 100.00

0 6 GenuineIntel -- 2493 100.00

0 7 GenuineIntel -- 2493 100.00

0 8 GenuineIntel -- 2493 100.00

0 9 GenuineIntel -- 2493 100.00

0 10 GenuineIntel -- 2493 100.00

0 11 GenuineIntel -- 2493 100.00

0 12 GenuineIntel -- 2493 100.00

0 13 GenuineIntel -- 2493 100.00

0 14 GenuineIntel -- 2493 100.00

0 15 GenuineIntel -- 2493 100.00

1 0 GenuineIntel -- 2493 100.00

1 1 GenuineIntel -- 2493 100.00

1 2 GenuineIntel -- 2493 100.00

1 3 GenuineIntel -- 2493 100.00

1 4 GenuineIntel -- 2493 100.00

1 5 GenuineIntel -- 2493 100.00

1 6 GenuineIntel -- 2493 100.00

1 7 GenuineIntel -- 2493 100.00

1 8 GenuineIntel -- 2493 100.00

1 9 GenuineIntel -- 2493 100.00

1 10 GenuineIntel -- 2493 100.00

1 11 GenuineIntel -- 2493 100.00

1 12 GenuineIntel -- 2493 100.00

1 13 GenuineIntel -- 2493 100.00

1 14 GenuineIntel -- 2493 100.00

1 15 GenuineIntel -- 2493 100.00

2 0 GenuineIntel -- 2494 100.00

2 1 GenuineIntel -- 2494 100.00

2 2 GenuineIntel -- 2494 100.00

2 3 GenuineIntel -- 2494 100.00

2 4 GenuineIntel -- 2494 100.00

2 5 GenuineIntel -- 2494 100.00

2 6 GenuineIntel -- 2494 100.00

2 7 GenuineIntel -- 2494 100.00

2 8 GenuineIntel -- 2494 100.00

2 9 GenuineIntel -- 2494 100.00

2 10 GenuineIntel -- 2494 100.00

2 11 GenuineIntel -- 2494 100.00

2 12 GenuineIntel -- 2494 100.00

2 13 GenuineIntel -- 2494 100.00

2 14 GenuineIntel -- 2494 100.00

2 15 GenuineIntel -- 2494 100.00

3 0 GenuineIntel -- 2493 100.00

3 1 GenuineIntel -- 2493 100.00

3 2 GenuineIntel -- 2493 100.00

3 3 GenuineIntel -- 2493 100.00

3 4 GenuineIntel -- 2493 100.00

3 5 GenuineIntel -- 2493 100.00

3 6 GenuineIntel -- 2493 100.00

3 7 GenuineIntel -- 2493 100.00

3 8 GenuineIntel -- 2493 100.00

3 9 GenuineIntel -- 2493 100.00

3 10 GenuineIntel -- 2493 100.00

3 11 GenuineIntel -- 2493 100.00

3 12 GenuineIntel -- 2493 100.00

3 13 GenuineIntel -- 2493 100.00

3 14 GenuineIntel -- 2493 100.00

3 15 GenuineIntel -- 2493 100.00

--------------------------------------Physical Memory---------------------------------------

Node Slot SlotID -Name-- -Usage- ---Type--- --Manufacturer--- -Serial- -Latency-- Size(MB)

0 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 061A8323 CL5.0/9.0 32768

0 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 069DBAB1 CL5.0/9.0 32768

0 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0C2FE5 CL5.0/11.0 16384

0 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0C2FE4 CL5.0/11.0 16384

1 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 061773B9 CL5.0/9.0 32768

1 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 05FF0C20 CL5.0/9.0 32768

1 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0EAF22 CL5.0/11.0 16384

1 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0EAF48 CL5.0/11.0 16384

2 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 06B62512 CL5.0/9.0 32768

2 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 06B62501 CL5.0/9.0 32768

2 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0C3042 CL5.0/11.0 16384

2 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0C3034 CL5.0/11.0 16384

3 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 05FF0D2D CL5.0/9.0 32768

3 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 05FF0D49 CL5.0/9.0 32768

3 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0C5387 CL5.0/11.0 16384

3 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0C537F CL5.0/11.0 16384

---------------------------------------------Internal Drives----------------------------------------------

Node Drive ------WWN------- -Manufacturer- -----Model------ ---Serial--- -Firmware- Size(MB)


Type SedState

0 0 5001B44E5A0DDC90 SanDisk DX300128A5xnEMLC 151640400016 X2190300 122104 SATA capable

1 0 5001B44E5A0DDCF3 SanDisk DX300128A5xnEMLC 151640400115 X2190300 122104 SATA capable

2 0 5001B44E5A0DDD14 SanDisk DX300128A5xnEMLC 151640400148 X2190300 122104 SATA capable

3 0 5001B44E5A0DDD27 SanDisk DX300128A5xnEMLC 151640400167 X2190300 122104 SATA capable

--------------------------------Power Supplies---------------------------------

Node PS -Assem_Serial- -PSState- FanState ACState DCState -BatState- ChrgLvl(%)

0,1 0 5DNSFA2436Y1UT OK OK OK OK OK 100

0,1 1 5DNSFA2436Y1UZ OK OK OK OK OK 100

2,3 0 5DNSFA2436Y1GU OK OK OK OK OK 100

2,3 1 5DNSFA2437A0MY OK OK OK OK OK 100

------------------------------MCU------------------------------

Node Model Firmware State ResetReason -------Up Since--------

0 NEMOE 4.8.28 ready cold_power_on 2000-12-31 17:59:11 CST

1 NEMOE 4.8.28 ready cold_power_on 2000-12-31 17:59:11 CST

2 NEMOE 4.8.28 ready cold_power_on 2000-12-31 17:59:12 CST

3 NEMOE 4.8.28 ready cold_power_on 2000-12-31 17:59:12 CST

-----------Uptime-----------

Node -------Up Since--------

0 2000-12-31 18:03:30 CST

1 2000-12-31 18:03:29 CST

2 2000-12-31 18:03:31 CST

3 2000-12-31 18:03:31 CST

4. Enter

checkhealth -detail

to verify the current state of the system.


IMPORTANT: If the node does not come up after DIMM replacement, use the following steps to resolve the issue:

Reseat the node.

If that does not solve the issue, then reseat the DIMM.

If the issue is still not resolved, call technical support for assistance.
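The shownode check in step 1 can be scripted against captured output. This is a minimal sketch, not an official tool; the sample text and field positions are assumptions based on the shownode output shown in this guide.

```python
# Sketch: confirm every node row of captured "shownode" output reports
# State OK and InCluster Yes before proceeding.

SAMPLE = """\
Node ----Name---- -State- Master InCluster -Service_LED ---LED---
0 4UW0001489-0 OK Yes Yes Off GreenBlnk
1 4UW0001489-1 OK No Yes Off GreenBlnk
"""

def nodes_ok(output):
    """Return True when every node row is State OK and InCluster Yes."""
    for line in output.splitlines():
        fields = line.split()
        if fields and fields[0].isdigit():  # node rows start with the node ID
            state, in_cluster = fields[2], fields[4]
            if state != "OK" or in_cluster != "Yes":
                return False
    return True

print(nodes_ok(SAMPLE))  # True for the healthy sample above
```

A False result would prompt the reseat steps in the IMPORTANT note above.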

5. Enter

exit

and select X to exit from the 3PAR Service Processor Menu.

Node PCIe Adapter And Riser Card CLI

Node PCIe adapter identification

IMPORTANT: This adapter is only supported under OS 3.3.1 / SP 5.0 or higher.


Review the ATTN: SP 5.0 Changes and the current HPE 3PAR StoreServ 8000 Storage Service and Upgrade Guide Service Edition for all software procedures.

Before you begin: Connect to the service processor.

1. From the 3PAR Service Processor Menu, enter 7 for Interactive CLI for a StoreServ, and then select the system.

2. When prompted, enter

to enable maintenance mode for the system.

3. Enter the

showsys

command to verify you are on the correct array.

4. Enter

checkhealth -detail

to verify the current state of the system.

5. Look for the following alerts:

Node node:<nodeID> "PCI card in Slot:<slot> is empty, but is not empty in Node:<pair-node>"
Node node:<nodeID> "PCI card model in Slot:<slot> is not the same as Node:<pair-node>"

6. Use the nodeID in the alert or enter

shownode -i

and use the node number or identifier to identify which node is affected. Also note which slot is listed in the alert or in the shownode -i result.
Refer to the node numbering figures (2-node numbering and 4-node numbering).

7. Enter

locatenode

to confirm the correct component is being serviced.

8. Identify the node with the failed PCIe adapter by locating the blue service LED.

9. Confirm all of the cables to the PCI adapter are clearly labeled with their location for later reconnection.

10. Enter

shutdownnode halt <nodeID>

to halt the appropriate node.

11. When prompted, enter

yes

to confirm halting of the node.

Node PCIe adapter verification

IMPORTANT: This adapter is only supported under OS 3.3.1 / SP 5.0 or higher.


Review the ATTN: SP 5.0 Changes and the current HPE 3PAR StoreServ 8000 Storage Service and Upgrade Guide Service Edition for all software procedures.

1. Enter

shownode

to verify the node has joined the cluster.

4UW0001489 cli% shownode

Control Data Cache

Node ----Name---- -State- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB)


Available(%)

0 4UW0001489-0 OK Yes Yes Off GreenBlnk 65536 32768 100

1 4UW0001489-1 OK No Yes Off GreenBlnk 65536 32768 100

2 4UW0001489-2 OK No Yes Off GreenBlnk 65536 32768 100

3 4UW0001489-3 OK No Yes Off GreenBlnk 65536 32768 100

NOTE: If the node that was halted also contained the active Ethernet session, you will need to exit and restart the CLI session.

NOTE:

Depending on the serviced component, the node might go through Node Rescue, which can take up to 10 minutes.

The LED status for the replaced node might indicate green and could take up to 3 minutes to change to green flashing.

2. Enter

showport

to verify the connected ports are ready.

4UW0001489 cli% showport

N:S:P Mode State ----Node_WWN---- -Port_WWN/HW_Addr- Type Protocol Label Partner


FailoverState

0:0:1 target loss_sync 2FF70002AC07E1E9 20010002AC07E1E9 free FC - 1:0:1 none

0:0:2 target loss_sync 2FF70002AC07E1E9 20020002AC07E1E9 free FC - 1:0:2 none

0:1:1 initiator ready 50002ACFF707E1E9 50002AC01107E1E9 disk SAS DP-1 - -

0:1:2 initiator ready 50002ACFF707E1E9 50002AC01207E1E9 disk SAS DP-2 - -

0:2:1 peer offline - 38EAA714315C free IP - - -

0:3:1 peer offline - 3464A9EA07E1 free IP IP0 - -

1:0:1 target loss_sync 2FF70002AC07E1E9 21010002AC07E1E9 free FC - 0:0:1 none

1:0:2 target loss_sync 2FF70002AC07E1E9 21020002AC07E1E9 free FC - 0:0:2 none

1:1:1 initiator ready 50002ACFF707E1E9 50002AC11107E1E9 disk SAS DP-1 - -

1:1:2 initiator ready 50002ACFF707E1E9 50002AC11207E1E9 disk SAS DP-2 - -

1:2:1 peer offline - 38EAA7150C40 free IP - - -

1:2:2 peer offline - 38EAA7150C41 free IP - - -

1:3:1 peer offline - 3464A9EA0089 free IP IP1 - -

2:0:1 target loss_sync 2FF70002AC07E1E9 22010002AC07E1E9 free FC - 3:0:1 none

2:0:2 target loss_sync 2FF70002AC07E1E9 22020002AC07E1E9 free FC - 3:0:2 none

2:1:1 initiator ready 50002ACFF707E1E9 50002AC21107E1E9 disk SAS DP-1 - -

2:1:2 initiator ready 50002ACFF707E1E9 50002AC21207E1E9 disk SAS DP-2 - -

2:2:1 peer offline - 38EAA71522AC free IP - - -

2:2:2 peer offline - 38EAA71522AD free IP - - -

2:3:1 peer offline - 3464A9EA0353 free IP IP2 - -

3:0:1 target loss_sync 2FF70002AC07E1E9 23010002AC07E1E9 free FC - 2:0:1 none

3:0:2 target loss_sync 2FF70002AC07E1E9 23020002AC07E1E9 free FC - 2:0:2 none

3:1:1 initiator ready 50002ACFF707E1E9 50002AC31107E1E9 disk SAS DP-1 - -

3:1:2 initiator ready 50002ACFF707E1E9 50002AC31207E1E9 disk SAS DP-2 - -

3:2:1 peer offline - 38EAA7150B90 free IP - - -

3:2:2 peer offline - 38EAA7150B91 free IP - - -

3:3:1 peer offline - 3464A9EA0775 free IP IP3 - -

-------------------------------------------------------------------------------------------------------

27

NOTE: A port must be connected and communicating correctly to be ready. Verify that the port State column shows ready.
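The readiness check in the NOTE can be scripted against captured showport output. A minimal sketch, not an official tool; the N:S:P row pattern and column order are assumptions based on the listing above.

```python
# Sketch: list ports whose State column is not "ready" in captured
# "showport" output. Sample rows come from the listing in this guide.

SAMPLE = """\
N:S:P Mode State ----Node_WWN---- -Port_WWN/HW_Addr- Type
0:0:1 target loss_sync 2FF70002AC07E1E9 20010002AC07E1E9 free
0:1:1 initiator ready 50002ACFF707E1E9 50002AC01107E1E9 disk
1:1:1 initiator ready 50002ACFF707E1E9 50002AC11107E1E9 disk
"""

def not_ready_ports(output):
    """Return the N:S:P IDs whose State column is not 'ready'."""
    bad = []
    for line in output.splitlines():
        fields = line.split()
        # Port rows start with a numeric N:S:P triple such as 0:0:1.
        if fields and fields[0].count(":") == 2 and fields[0][0].isdigit():
            if fields[2] != "ready":
                bad.append(fields[0])
    return bad

print(not_ready_ports(SAMPLE))  # ['0:0:1']
```

Unconnected free ports (loss_sync in the output above) are expected to appear in this list; only serviced ports need investigation.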

3. Enter

showport -i

to verify that the replaced adapter is installed in the correct slot.

4UW0001489 cli% showport -i

N:S:P Brand Model Rev Firmware Serial HWType

0:0:1 EMULEX LPE16002 30 10.4.359.0 Onboard FC

0:0:2 EMULEX LPE16002 30 10.4.359.0 Onboard FC

0:1:1 LSI 9300-2P 02 7.00.00.00 Onboard SAS

0:1:2 LSI 9300-2P 02 7.00.00.00 Onboard SAS

0:2:1 HP 560SFP+ n/a 3.15.1-k 38EAA714315C Eth

0:3:1 Intel e1000e n/a 2.3.2-k Onboard Eth

1:0:1 EMULEX LPE16002 30 10.4.359.0 Onboard FC

1:0:2 EMULEX LPE16002 30 10.4.359.0 Onboard FC

1:1:1 LSI 9300-2P 02 7.00.00.00 Onboard SAS

1:1:2 LSI 9300-2P 02 7.00.00.00 Onboard SAS

1:2:1 HP 560SFP+ n/a 3.15.1-k 38EAA7150C40 Eth

1:2:2 HP 560SFP+ n/a 3.15.1-k 38EAA7150C41 Eth

1:3:1 Intel e1000e n/a 2.3.2-k Onboard Eth

2:0:1 EMULEX LPE16002 30 10.4.359.0 Onboard FC

2:0:2 EMULEX LPE16002 30 10.4.359.0 Onboard FC

2:1:1 LSI 9300-2P 02 7.00.00.00 Onboard SAS

2:1:2 LSI 9300-2P 02 7.00.00.00 Onboard SAS

2:2:1 HP 560SFP+ n/a 3.15.1-k 38EAA71522AC Eth

2:2:2 HP 560SFP+ n/a 3.15.1-k 38EAA71522AD Eth

2:3:1 Intel e1000e n/a 2.3.2-k Onboard Eth

3:0:1 EMULEX LPE16002 30 10.4.359.0 Onboard FC

3:0:2 EMULEX LPE16002 30 10.4.359.0 Onboard FC

3:1:1 LSI 9300-2P 02 7.00.00.00 Onboard SAS

3:1:2 LSI 9300-2P 02 7.00.00.00 Onboard SAS

3:2:1 HP 560SFP+ n/a 3.15.1-k 38EAA7150B90 Eth

3:2:2 HP 560SFP+ n/a 3.15.1-k 38EAA7150B91 Eth

3:3:1 Intel e1000e n/a 2.3.2-k Onboard Eth

--------------------------------------------------------

27

4. Enter

checkhealth -detail

to verify the current state of the system.

5. Enter

exit

and select X to exit from the 3PAR Service Processor Menu.

Node Boot Drive CLI

Node boot drive identification

Before you begin: Connect to the service processor and start an SPMAINT session.

1. From the 3PAR Service Processor Menu, enter 7 for Interactive CLI for a StoreServ, and then select the system.

2. When prompted, enter

to enable maintenance mode for the system.

3. Enter the

showsys

command to verify you are on the correct array.

4. Enter

checkhealth -detail

to verify the current state of the system.

5. Enter

shownode

to verify the current state of the controller nodes.

6. Enter

exit

to return to the 3PAR Service Processor Menu.

7. Select option 4 StoreServ Product Maintenance, and then select the desired system.

8. Select option Halt a StoreServ cluster/node, select the desired node, and confirm all prompts to halt the node.

9. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.

10. If required, execute the locatesys command to identify the system:

a. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.

b. Execute the

locatesys

command.
NOTE: All nodes in the system flash, except for the node with the failed component, which displays a solid blue LED.

Node boot drive verification

1. Ensure that you are in maintenance mode.

2. Enter

shownode

to verify that the state of all nodes is OK.

4UW0001489 cli% shownode

Control Data Cache

Node ----Name---- -State- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB)


Available(%)

0 4UW0001489-0 OK Yes Yes Off GreenBlnk 65536 32768 100

1 4UW0001489-1 OK No Yes Off GreenBlnk 65536 32768 100

2 4UW0001489-2 OK No Yes Off GreenBlnk 65536 32768 100

3 4UW0001489-3 OK No Yes Off GreenBlnk 65536 32768 100

NOTE: If the node that was halted also contained the active Ethernet session, you will need to exit and restart the CLI session.

NOTE:

Depending on the serviced component, the node might go through Node Rescue, which can take up to 10 minutes.

The LED status for the replaced node might indicate green and could take up to 3 minutes to change to green flashing.

3. Enter

shownode -d

for additional detail.

---------------------------------------------Nodes---------------------------------------------

Control Data Cache

Node ----Name---- -State- Master InCluster -Service_LED- ---LED--- Mem(MB) Mem(MB)
Available(%)

0 4UW0001489-0 OK Yes Yes Off GreenBlnk 65536 32768 100

1 4UW0001489-1 OK No Yes Off GreenBlnk 65536 32768 100

2 4UW0001489-2 OK No Yes Off GreenBlnk 65536 32768 100

3 4UW0001489-3 OK No Yes Off GreenBlnk 65536 32768 100

-----------------------------PCI Cards------------------------------

Node Slot Type -Manufacturer- -Model-- ---Serial--- -Rev- Firmware

0 0 FC EMULEX LPE16002 Onboard 30 10.4.359.0

0 1 SAS LSI 9300-2P Onboard 02 7.00.00.00

0 2 Eth HP 560SFP+ 38EAA714315C n/a 3.15.1-k

0 3 Eth Intel e1000e Onboard n/a 2.3.2-k

1 0 FC EMULEX LPE16002 Onboard 30 10.4.359.0

1 1 SAS LSI 9300-2P Onboard 02 7.00.00.00

1 2 Eth HP 560SFP+ 38EAA7150C40 n/a 3.15.1-k

1 3 Eth Intel e1000e Onboard n/a 2.3.2-k

2 0 FC EMULEX LPE16002 Onboard 30 10.4.359.0

2 1 SAS LSI 9300-2P Onboard 02 7.00.00.00

2 2 Eth HP 560SFP+ 38EAA71522AC n/a 3.15.1-k

2 3 Eth Intel e1000e Onboard n/a 2.3.2-k

3 0 FC EMULEX LPE16002 Onboard 30 10.4.359.0

3 1 SAS LSI 9300-2P Onboard 02 7.00.00.00

3 2 Eth HP 560SFP+ 38EAA7150B90 n/a 3.15.1-k

3 3 Eth Intel e1000e Onboard n/a 2.3.2-k

----------------------------CPUs----------------------------

Node CPU -Manufacturer- -Serial- CPUSpeed(MHz) BusSpeed(MHz)

0 0 GenuineIntel -- 2493 100.00

0 1 GenuineIntel -- 2493 100.00

0 2 GenuineIntel -- 2493 100.00

0 3 GenuineIntel -- 2493 100.00

0 4 GenuineIntel -- 2493 100.00

0 5 GenuineIntel -- 2493 100.00

0 6 GenuineIntel -- 2493 100.00

0 7 GenuineIntel -- 2493 100.00

0 8 GenuineIntel -- 2493 100.00

0 9 GenuineIntel -- 2493 100.00

0 10 GenuineIntel -- 2493 100.00

0 11 GenuineIntel -- 2493 100.00

0 12 GenuineIntel -- 2493 100.00

0 13 GenuineIntel -- 2493 100.00

0 14 GenuineIntel -- 2493 100.00

0 15 GenuineIntel -- 2493 100.00

1 0 GenuineIntel -- 2493 100.00

1 1 GenuineIntel -- 2493 100.00

1 2 GenuineIntel -- 2493 100.00

1 3 GenuineIntel -- 2493 100.00

1 4 GenuineIntel -- 2493 100.00

1 5 GenuineIntel -- 2493 100.00

1 6 GenuineIntel -- 2493 100.00

1 7 GenuineIntel -- 2493 100.00

1 8 GenuineIntel -- 2493 100.00

1 9 GenuineIntel -- 2493 100.00

1 10 GenuineIntel -- 2493 100.00

1 11 GenuineIntel -- 2493 100.00

1 12 GenuineIntel -- 2493 100.00

1 13 GenuineIntel -- 2493 100.00

1 14 GenuineIntel -- 2493 100.00

1 15 GenuineIntel -- 2493 100.00

2 0 GenuineIntel -- 2494 100.00

2 1 GenuineIntel -- 2494 100.00

2 2 GenuineIntel -- 2494 100.00

2 3 GenuineIntel -- 2494 100.00

2 4 GenuineIntel -- 2494 100.00

2 5 GenuineIntel -- 2494 100.00

2 6 GenuineIntel -- 2494 100.00

2 7 GenuineIntel -- 2494 100.00

2 8 GenuineIntel -- 2494 100.00

2 9 GenuineIntel -- 2494 100.00

2 10 GenuineIntel -- 2494 100.00

2 11 GenuineIntel -- 2494 100.00

2 12 GenuineIntel -- 2494 100.00

2 13 GenuineIntel -- 2494 100.00

2 14 GenuineIntel -- 2494 100.00

2 15 GenuineIntel -- 2494 100.00

3 0 GenuineIntel -- 2493 100.00

3 1 GenuineIntel -- 2493 100.00

3 2 GenuineIntel -- 2493 100.00

3 3 GenuineIntel -- 2493 100.00

3 4 GenuineIntel -- 2493 100.00

3 5 GenuineIntel -- 2493 100.00

3 6 GenuineIntel -- 2493 100.00

3 7 GenuineIntel -- 2493 100.00

3 8 GenuineIntel -- 2493 100.00

3 9 GenuineIntel -- 2493 100.00

3 10 GenuineIntel -- 2493 100.00

3 11 GenuineIntel -- 2493 100.00

3 12 GenuineIntel -- 2493 100.00

3 13 GenuineIntel -- 2493 100.00

3 14 GenuineIntel -- 2493 100.00

3 15 GenuineIntel -- 2493 100.00

--------------------------------------Physical Memory---------------------------------------

Node Slot SlotID -Name-- -Usage- ---Type--- --Manufacturer--- -Serial- -Latency-- Size(MB)

0 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 061A8323 CL5.0/9.0 32768

0 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 069DBAB1 CL5.0/9.0 32768

0 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0C2FE5 CL5.0/11.0 16384

0 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0C2FE4 CL5.0/11.0 16384

1 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 061773B9 CL5.0/9.0 32768

1 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 05FF0C20 CL5.0/9.0 32768

1 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0EAF22 CL5.0/11.0 16384

1 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0EAF48 CL5.0/11.0 16384

2 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 06B62512 CL5.0/9.0 32768

2 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 06B62501 CL5.0/9.0 32768

2 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0C3042 CL5.0/11.0 16384

2 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0C3034 CL5.0/11.0 16384

3 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 05FF0D2D CL5.0/9.0 32768

3 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 05FF0D49 CL5.0/9.0 32768

3 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0C5387 CL5.0/11.0 16384

3 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0C537F CL5.0/11.0 16384

---------------------------------------------Internal Drives----------------------------------------------

Node Drive ------WWN------- -Manufacturer- -----Model------ ---Serial--- -Firmware- Size(MB)


Type SedState

0 0 5001B44E5A0DDC90 SanDisk DX300128A5xnEMLC 151640400016 X2190300 122104 SATA capable

1 0 5001B44E5A0DDCF3 SanDisk DX300128A5xnEMLC 151640400115 X2190300 122104 SATA capable

2 0 5001B44E5A0DDD14 SanDisk DX300128A5xnEMLC 151640400148 X2190300 122104 SATA capable

3 0 5001B44E5A0DDD27 SanDisk DX300128A5xnEMLC 151640400167 X2190300 122104 SATA capable

--------------------------------Power Supplies---------------------------------

Node PS -Assem_Serial- -PSState- FanState ACState DCState -BatState- ChrgLvl(%)

0,1 0 5DNSFA2436Y1UT OK OK OK OK OK 100

0,1 1 5DNSFA2436Y1UZ OK OK OK OK OK 100

2,3 0 5DNSFA2436Y1GU OK OK OK OK OK 100

2,3 1 5DNSFA2437A0MY OK OK OK OK OK 100

------------------------------MCU------------------------------

Node Model Firmware State ResetReason -------Up Since--------

0 NEMOE 4.8.28 ready cold_power_on 2000-12-31 17:59:10 CST

1 NEMOE 4.8.28 ready cold_power_on 2000-12-31 17:59:10 CST

2 NEMOE 4.8.28 ready cold_power_on 2000-12-31 17:59:11 CST

3 NEMOE 4.8.28 ready cold_power_on 2000-12-31 17:59:11 CST

-----------Uptime-----------

Node -------Up Since--------

0 2000-12-31 18:03:30 CST

1 2000-12-31 18:03:30 CST

2 2000-12-31 18:03:31 CST

3 2000-12-31 18:03:31 CST

4. Enter

checkhealth -detail

to verify the current state of the system.

5. Enter

exit

and select X to exit from the 3PAR Service Processor Menu.

Node PCM CLI

Node PCM identification

Before you begin: Connect to the service processor and start an SPMAINT session.

1. From the 3PAR Service Processor Menu, enter 7 for Interactive CLI for a StoreServ, and then select the system.

2. When prompted, enter y to enable maintenance mode for the system.



3. Enter the

showsys

command to verify you are on the correct array.

4. Enter

checkhealth -detail

to verify the current state of the system.

5. Look for the following alerts:


Node node:<nodeID> "Power supply <psID> detailed state is <status>"
Node node:<nodeID> "Power supply <psID> AC state is <acStatus>"
Node node:<nodeID> "Power supply <psID> DC state is <dcStatus>"
Node node:<nodeID> "Power supply <psID> battery is <batStatus>"
Node node:<nodeID> "Power supply <psID> fan module <fanID> is <status>"
Node node:<nodeID> "Power supply <psID> battery is expired"
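These alert strings embed the node and power-supply IDs. As an illustration only (not part of the HPE toolset), a short Python sketch can extract those fields from a checkhealth transcript for scripting; the regular expression assumes the exact alert format shown above:

```python
import re

# Matches the power-supply alert format shown above, e.g.:
#   Node node:0 "Power supply 1 DC state is Failed"
ALERT_RE = re.compile(
    r'Node node:(?P<node>\d+) "Power supply (?P<ps>\d+) (?P<detail>[^"]+)"'
)

def parse_ps_alert(line):
    """Return (nodeID, psID, detail) from an alert line, or None if the
    line is not a power-supply alert."""
    m = ALERT_RE.search(line)
    if m is None:
        return None
    return int(m.group("node")), int(m.group("ps")), m.group("detail")

print(parse_ps_alert('Node node:0 "Power supply 1 DC state is Failed"'))
# (0, 1, 'DC state is Failed')
```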

6. Additional information can be gathered with

shownode -s

and

shownode -ps

cli% shownode -s

Node -State-- -Detailed_State-

0 Degraded PS 1 Failed

1 Degraded PS 0 Failed

cli% shownode -ps

Node PS -Serial- -PSState- FanState ACState DCState -BatState- ChrgLvl(%)

0 0 FFFFFFFF OK OK OK OK OK 100

0 1 FFFFFFFF Failed -- OK Failed Degraded 100

1 0 FFFFFFFF Failed -- Failed Failed Degraded 100

1 1 FFFFFFFF OK OK OK OK OK 100
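The failed supplies can also be read mechanically from this table. The following Python sketch is an illustration only, keyed to the column layout shown above; it returns the (node, ps) pairs whose PSState is not OK:

```python
def failed_power_supplies(output):
    """Scan `shownode -ps` output and return (node, ps) pairs whose
    PSState column is not OK."""
    failed = []
    for line in output.splitlines():
        fields = line.split()
        # Data rows start with a numeric node ID and a numeric PS number.
        if len(fields) >= 4 and fields[0].isdigit() and fields[1].isdigit():
            node, ps, _serial, ps_state = fields[:4]
            if ps_state != "OK":
                failed.append((int(node), int(ps)))
    return failed

sample = """\
Node PS -Serial- -PSState- FanState ACState DCState -BatState- ChrgLvl(%)
0 0 FFFFFFFF OK OK OK OK OK 100
0 1 FFFFFFFF Failed -- OK Failed Degraded 100
1 0 FFFFFFFF Failed -- Failed Failed Degraded 100
1 1 FFFFFFFF OK OK OK OK OK 100"""
print(failed_power_supplies(sample))  # [(0, 1), (1, 0)]
```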

7. Use the nodeID in the alert or the node number or identifier from

shownode

to identify which node is affected.


2 node numbering:

4 node numbering:



8. Also note which PCM is affected by using the power supply ID number.
2 node system PCM numbering:

4 node system PCM numbering:

Node PCM verification

1. Enter

shownode -ps

to verify that the PCM has been successfully replaced.

4UW0001489 cli% shownode -ps

Node PS -Assem_Part- -Assem_Serial- ACState DCState PSState

0,1 0 726237-001 5DNSFA2436Y1UT OK OK OK

0,1 1 726237-001 5DNSFA2436Y1UZ OK OK OK

2,3 0 726237-001 5DNSFA2436Y1GU OK OK OK

2,3 1 726237-001 5DNSFA2437A0MY OK OK OK

2. Verify that the PCM Battery is still working by issuing the

showbattery

command. The State of both batteries should be OK.

4UW0001489 cli% showbattery

Node PS Bat Assem_Serial -State- ChrgLvl(%) -ExpDate- Expired Testing

0,1 0 0 6CQUBA3HN7SFRW OK 100 n/a No No



0,1 1 0 6CQUBA3HN7SFDD OK 100 n/a No No

2,3 0 0 6CQUBA3HN7SFFB OK 100 n/a No No

2,3 1 0 6CQUBA3HN7SFF6 OK 100 n/a No No

3. Enter

checkhealth -detail

to verify the current state of the system.

4. Enter

exit

and select X to exit from the 3PAR Service Processor Menu.

Node PCM Battery CLI

Node PCM battery identification

Before you begin: Connect to the service processor and start an SPMAINT session.

1. From the 3PAR Service Processor Menu, enter 7 for Interactive CLI for a StoreServ, and then select the system.

2. When prompted, enter y to enable maintenance mode for the system.

3. Enter the

showsys

command to verify you are on the correct array.

4. Enter

checkhealth -detail

to verify the current state of the system.

5. Look for the following alerts:

Node node:<nodeID> "Power supply <psID> battery is <batStatus>"
Node node:<nodeID> "Power supply <psID> battery is expired"

6. Additional information can be gathered using

shownode -ps

cli% shownode -ps

Node PS -Serial- -PSState- FanState ACState DCState -BatState- ChrgLvl(%)

2 0 FFFFFFFF OK OK OK OK OK 100

2 1 FFFFFFFF OK OK OK OK OK 100

3 0 FFFFFFFF OK OK OK OK OK 100



3 1 FFFFFFFF Degraded OK OK OK Failed 0

7. Additional information can be gathered using

showbattery

to verify that the battery has failed. Verify that at least one PCM battery in each node enclosure is functional:

Node PS Bat Serial -State-- ChrgLvl(%) -ExpDate-- Expired Testing

3 0 0 100A300B OK 100 07/01/2011 No No

3 1 0 12345310 Failed 0 04/07/2011 No No
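The requirement that at least one battery per node enclosure remains functional can be checked in software. A hypothetical Python helper, assuming the column order shown above:

```python
def battery_states(output):
    """Parse `showbattery` output into {(node, ps): state}."""
    states = {}
    for line in output.splitlines():
        f = line.split()
        # Data rows begin with a numeric node (or node-pair) identifier.
        if len(f) >= 5 and f[0][0].isdigit():
            states[(f[0], int(f[1]))] = f[4]
    return states

sample = """\
Node PS Bat Serial -State-- ChrgLvl(%) -ExpDate-- Expired Testing
3 0 0 100A300B OK 100 07/01/2011 No No
3 1 0 12345310 Failed 0 04/07/2011 No No"""
states = battery_states(sample)
print(states)                                   # {('3', 0): 'OK', ('3', 1): 'Failed'}
print(any(v == "OK" for v in states.values()))  # True
```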

8. Use the nodeID in the alert or the node number or identifier from

shownode

to identify which node is affected.


2 node numbering:

4 node numbering:

9. Also note which PCM is affected by using the power supply ID number.
2 node system PCM numbering:

4 node system PCM numbering:

NOTE: Because each battery is a backup for both nodes, nodes 0 and 1 both report a problem with a single battery. The quantity
appears as 2 in the output because two nodes are reporting the problem. Battery 0 for node 0 is in the left PCM, and battery 0 for node
1 is in the right-side PCM (when looking at the node enclosure from the rear).
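The left/right placement described in the note can be expressed as a tiny helper. This is only a restatement of the note above (rear view of the node enclosure), not HPE tooling:

```python
def battery_pcm_side(node_id):
    """Battery 0 of the even node in a pair sits in the left PCM;
    battery 0 of the odd node sits in the right-side PCM (rear view)."""
    return "left" if node_id % 2 == 0 else "right"

print(battery_pcm_side(0))  # left
print(battery_pcm_side(1))  # right
```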



Node PCM battery verification

1. Use the

showbattery

command to confirm the battery is functional and the serial ID has changed.

4UW0001489 cli% showbattery

Node PS Bat Assem_Serial -State- ChrgLvl(%) -ExpDate- Expired Testing

0,1 0 0 6CQUBA3HN7SFRW OK 100 n/a No No

0,1 1 0 6CQUBA3HN7SFDD OK 100 n/a No No

2,3 0 0 6CQUBA3HN7SFFB OK 100 n/a No No

2,3 1 0 6CQUBA3HN7SFF6 OK 100 n/a No No

2. Enter

checkhealth -detail

to verify the current state of the system.

3. Enter

exit

and select X to exit from the 3PAR Service Processor Menu.

SFP CLI

SFP identification

Before you begin: Connect to the service processor and start an SPMAINT session.

1. From the 3PAR Service Processor Menu, enter 7 for Interactive CLI for a StoreServ, and then select the system.

2. When prompted, enter y to enable maintenance mode for the system.

3. Enter the

showsys

command to verify you are on the correct array.

4. Enter

checkhealth -detail

to verify the current state of the system.

5. Look for alerts such as:

Port port:<nsp> "SFP is missing"
Port port:<nsp> "SFP is <state>" (degraded or failed)
Port port:<nsp> "SFP is disabled"
Port port:<nsp> "Receiver Power Low: Check FC Cable"
Port port:<nsp> "Transmit Power Low: Check FC Cable"
Port port:<nsp> "SFP has TX fault"

6. More information can be gathered by using the

showport

command to view the port State. Typically, the State is listed as loss_sync, the Mode as initiator, and the Connected Device Type as free.

4UW0001489 cli% showport

N:S:P Mode State ----Node_WWN---- -Port_WWN/HW_Addr- Type Protocol Label Partner


FailoverState

0:0:1 target loss_sync 2FF70002AC07E1E9 20010002AC07E1E9 free FC - 1:0:1 none

0:0:2 target loss_sync 2FF70002AC07E1E9 20020002AC07E1E9 free FC - 1:0:2 none

0:1:1 initiator ready 50002ACFF707E1E9 50002AC01107E1E9 disk SAS DP-1 - -

0:1:2 initiator ready 50002ACFF707E1E9 50002AC01207E1E9 disk SAS DP-2 - -

0:2:1 peer offline - 38EAA714315C free IP - - -

0:3:1 peer offline - 3464A9EA07E1 free IP IP0 - -

1:0:1 target loss_sync 2FF70002AC07E1E9 21010002AC07E1E9 free FC - 0:0:1 none

1:0:2 target loss_sync 2FF70002AC07E1E9 21020002AC07E1E9 free FC - 0:0:2 none

1:1:1 initiator ready 50002ACFF707E1E9 50002AC11107E1E9 disk SAS DP-1 - -

1:1:2 initiator ready 50002ACFF707E1E9 50002AC11207E1E9 disk SAS DP-2 - -

1:2:1 peer offline - 38EAA7150C40 free IP - - -

1:2:2 peer offline - 38EAA7150C41 free IP - - -

1:3:1 peer offline - 3464A9EA0089 free IP IP1 - -

2:0:1 target loss_sync 2FF70002AC07E1E9 22010002AC07E1E9 free FC - 3:0:1 none

2:0:2 target loss_sync 2FF70002AC07E1E9 22020002AC07E1E9 free FC - 3:0:2 none

2:1:1 initiator ready 50002ACFF707E1E9 50002AC21107E1E9 disk SAS DP-1 - -

2:1:2 initiator ready 50002ACFF707E1E9 50002AC21207E1E9 disk SAS DP-2 - -

2:2:1 peer offline - 38EAA71522AC free IP - - -

2:2:2 peer offline - 38EAA71522AD free IP - - -

2:3:1 peer offline - 3464A9EA0353 free IP IP2 - -

3:0:1 target loss_sync 2FF70002AC07E1E9 23010002AC07E1E9 free FC - 2:0:1 none

3:0:2 target loss_sync 2FF70002AC07E1E9 23020002AC07E1E9 free FC - 2:0:2 none

3:1:1 initiator ready 50002ACFF707E1E9 50002AC31107E1E9 disk SAS DP-1 - -

3:1:2 initiator ready 50002ACFF707E1E9 50002AC31207E1E9 disk SAS DP-2 - -

3:2:1 peer offline - 38EAA7150B90 free IP - - -

3:2:2 peer offline - 38EAA7150B91 free IP - - -

3:3:1 peer offline - 3464A9EA0775 free IP IP3 - -

-------------------------------------------------------------------------------------------------------

27

7. Issue the



showport -sfp

command to verify which SFP requires replacement:

4UW0001489 cli% showport -sfp

N:S:P -State-- -Manufacturer-- MaxSpeed(Gbps) TXDisable TXFault RXLoss DDM

0:0:1 OK HP-A 14.0 No No Yes Yes

0:0:2 OK HP-A 14.0 No No Yes Yes

0:1:1 OK 0.0 -- -- -- No

0:1:2 OK HEWLETT-PACKARD 0.0 No No No No

0:2:1 OK JDSU 10.3 No No Yes Yes

1:0:1 OK HP-A 14.0 No No Yes Yes

1:0:2 OK HP-A 14.0 No No Yes Yes

1:1:1 OK 0.0 -- -- -- No

1:1:2 OK HEWLETT-PACKARD 0.0 No No No No

1:2:1 OK JDSU 10.3 No No Yes Yes

1:2:2 OK JDSU 10.3 No No Yes Yes

2:0:1 OK HP-A 14.0 No No Yes Yes

2:0:2 OK HP-A 14.0 No No Yes Yes

2:1:1 OK 0.0 -- -- -- No

2:1:2 Degraded HEWLETT_PACKARD 0.0 No No No No

2:2:1 OK JDSU 10.3 No No Yes Yes

2:2:2 OK JDSU 10.3 No No Yes Yes

3:0:1 OK HP-A 14.0 No No Yes Yes

3:0:2 OK HP-A 14.0 No No Yes Yes

3:1:1 OK 0.0 -- -- -- No

3:1:2 OK HEWLETT-PACKARD 0.0 No No No No

3:2:1 OK JDSU 10.3 No No Yes Yes

3:2:2 OK JDSU 10.3 No No Yes Yes

--------------------------------------------------------------------------

23

8. Use the N:S:P column to identify which node, slot, and port has the problem.
2 node numbering:

4 node numbering:
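An N:S:P value such as 2:1:2 decomposes into node 2, slot 1, port 2. As an illustration only, a Python sketch keyed to the showport -sfp layout shown above can list the ports whose SFP State is not OK and split their N:S:P identifiers:

```python
def degraded_sfps(output):
    """Return (node, slot, port) tuples for rows in `showport -sfp`
    output whose State column is not OK."""
    bad = []
    for line in output.splitlines():
        f = line.split()
        # Data rows start with a numeric N:S:P identifier like 2:1:2.
        if len(f) >= 2 and f[0].count(":") == 2 and f[0][0].isdigit() and f[1] != "OK":
            node, slot, port = (int(x) for x in f[0].split(":"))
            bad.append((node, slot, port))
    return bad

sample = """\
N:S:P -State-- -Manufacturer-- MaxSpeed(Gbps) TXDisable TXFault RXLoss DDM
2:1:1 OK 0.0 -- -- -- No
2:1:2 Degraded HEWLETT_PACKARD 0.0 No No No No
2:2:1 OK JDSU 10.3 No No Yes Yes"""
print(degraded_sfps(sample))  # [(2, 1, 2)]
```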



SFP numbering and LEDs

Port Slot: Port


FC-1 N:0:1 (N = node number)
FC-2 N:0:2 (N = node number)

LED Appearance State Indicates

All No light Off Wake up failure or power not applied
1 - Port speed Amber Off Not connected
1 - Port speed Amber 3 fast flashes Connected at 8 Gb/s
1 - Port speed Amber 4 fast flashes Connected at 16 Gb/s
2 - Link status Green On Normal/Connected - link up
2 - Link status Green Flashing Link down or not connected

9. Typically, data for the failed SFP will be missing and listed as - - -.

SFP verification



1. Issue the

showport

command to verify that the ports are in good condition: the State should now be listed as ready, the Mode as target, and the Type as host. Also verify that the port mode for the serviced node did not change after replacement.

4UW0001489 cli% showport

N:S:P Mode State ----Node_WWN---- -Port_WWN/HW_Addr- Type Protocol Label Partner


FailoverState

0:0:1 target loss_sync 2FF70002AC07E1E9 20010002AC07E1E9 free FC - 1:0:1 none

0:0:2 target loss_sync 2FF70002AC07E1E9 20020002AC07E1E9 free FC - 1:0:2 none

0:1:1 initiator ready 50002ACFF707E1E9 50002AC01107E1E9 disk SAS DP-1 - -

0:1:2 initiator ready 50002ACFF707E1E9 50002AC01207E1E9 disk SAS DP-2 - -

0:2:1 peer offline - 38EAA714315C free IP - - -

0:3:1 peer offline - 3464A9EA07E1 free IP IP0 - -

1:0:1 target loss_sync 2FF70002AC07E1E9 21010002AC07E1E9 free FC - 0:0:1 none

1:0:2 target loss_sync 2FF70002AC07E1E9 21020002AC07E1E9 free FC - 0:0:2 none

1:1:1 initiator ready 50002ACFF707E1E9 50002AC11107E1E9 disk SAS DP-1 - -

1:1:2 initiator ready 50002ACFF707E1E9 50002AC11207E1E9 disk SAS DP-2 - -

1:2:1 peer offline - 38EAA7150C40 free IP - - -

1:2:2 peer offline - 38EAA7150C41 free IP - - -

1:3:1 peer offline - 3464A9EA0089 free IP IP1 - -

2:0:1 target loss_sync 2FF70002AC07E1E9 22010002AC07E1E9 free FC - 3:0:1 none

2:0:2 target loss_sync 2FF70002AC07E1E9 22020002AC07E1E9 free FC - 3:0:2 none

2:1:1 initiator ready 50002ACFF707E1E9 50002AC21107E1E9 disk SAS DP-1 - -

2:1:2 initiator ready 50002ACFF707E1E9 50002AC21207E1E9 disk SAS DP-2 - -

2:2:1 peer offline - 38EAA71522AC free IP - - -

2:2:2 peer offline - 38EAA71522AD free IP - - -

2:3:1 peer offline - 3464A9EA0353 free IP IP2 - -

3:0:1 target loss_sync 2FF70002AC07E1E9 23010002AC07E1E9 free FC - 2:0:1 none

3:0:2 target loss_sync 2FF70002AC07E1E9 23020002AC07E1E9 free FC - 2:0:2 none

3:1:1 initiator ready 50002ACFF707E1E9 50002AC31107E1E9 disk SAS DP-1 - -

3:1:2 initiator ready 50002ACFF707E1E9 50002AC31207E1E9 disk SAS DP-2 - -

3:2:1 peer offline - 38EAA7150B90 free IP - - -

3:2:2 peer offline - 38EAA7150B91 free IP - - -

3:3:1 peer offline - 3464A9EA0775 free IP IP3 - -

-------------------------------------------------------------------------------------------------------

27

2. Issue the
showport -sfp

command to verify that the replaced SFP is connected and the State is listed as OK. Data should now be populated.

4UW0001489 cli% showport -sfp

N:S:P -State-- -Manufacturer-- MaxSpeed(Gbps) TXDisable TXFault RXLoss DDM

0:0:1 OK HP-A 14.0 No No Yes Yes

0:0:2 OK HP-A 14.0 No No Yes Yes

0:1:1 OK 0.0 -- -- -- No

0:1:2 OK HEWLETT-PACKARD 0.0 No No No No

0:2:1 OK JDSU 10.3 No No Yes Yes

1:0:1 OK HP-A 14.0 No No Yes Yes

1:0:2 OK HP-A 14.0 No No Yes Yes

1:1:1 OK 0.0 -- -- -- No

1:1:2 OK HEWLETT-PACKARD 0.0 No No No No

1:2:1 OK JDSU 10.3 No No Yes Yes

1:2:2 OK JDSU 10.3 No No Yes Yes

2:0:1 OK HP-A 14.0 No No Yes Yes

2:0:2 OK HP-A 14.0 No No Yes Yes

2:1:1 OK 0.0 -- -- -- No

2:1:2 Degraded HEWLETT_PACKARD 0.0 No No No No

2:2:1 OK JDSU 10.3 No No Yes Yes

2:2:2 OK JDSU 10.3 No No Yes Yes

3:0:1 OK HP-A 14.0 No No Yes Yes

3:0:2 OK HP-A 14.0 No No Yes Yes

3:1:1 OK 0.0 -- -- -- No

3:1:2 OK HEWLETT-PACKARD 0.0 No No No No

3:2:1 OK JDSU 10.3 No No Yes Yes

3:2:2 OK JDSU 10.3 No No Yes Yes

--------------------------------------------------------------------------

23

Drive - Customer SSMC

Drives identification

1. In the SSMC main menu, select Storage Systems > Systems . A list of storage systems is displayed in the list pane.



2. In the Systems filter, select the storage system to be serviced.

3. In the detail pane, select the Configuration view.

4. In the Physical Drives panel, click the total physical drives hyperlink. The Physical Drives screen is displayed.

5. In the Status filter, select Critical and Warning. The list pane shows physical drives that have a critical or warning status.

6. In the list pane, select a physical drive to display its properties in the detail pane.
WARNING: A Degraded state indicates that the physical drive is not yet ready for replacement. It may take several hours for the data to be vacated; do not remove the physical drive until the status is Critical. Removing a degraded physical drive before all the data is vacated will cause loss of data.

CAUTION: If more than one physical drive is critical or degraded, contact your authorized service provider to determine whether the repair can be done safely, preventing downtime or data loss.

7. Locate the failed drive using the SSMC. To avoid damaging the hardware or losing data, always confirm the drive by its amber fault LED before removing it.

a. In the detail pane General panel, click the Enclosure name hyperlink. The Drive Enclosures screen is displayed with the drive
enclosure pre-selected.

b. Click the drive enclosure Start Locate icon. Follow the instructions on the dialog that opens.

Drives verification

1. In the SSMC main menu, select Storage Systems > Systems . A list of storage systems is displayed in the list pane.

2. In the Systems filter, select the storage system to be serviced.

3. In the detail pane, select the Configuration view.

4. In the Physical Drives panel, click the total physical drives hyperlink. The Physical Drives screen is displayed.

5. In the list pane, select a physical drive to display its properties in the detail pane.
NOTE: Until data has been restored, the Status might not yet be updated to Normal (green).

6. Open an HP 3PAR CLI session, and then issue the

checkhealth

command to verify the system is working properly.

Drive - Service CLI

Drives identification

1. Connect to the service processor and start an SPMAINT session.

2. From the 3PAR Service Processor Menu, enter 7 for Interactive CLI for a StoreServ, and then select the system.

3. When prompted, enter y to enable maintenance mode for the system.

4. Enter the

showsys

command to verify you are on the correct array.

5. Enter



checkhealth -detail

to verify the current state of the system.

6. Enter

showpd

, followed by

showpd -i

to help identify the drive at risk and in need of replacement. Make note of the cage ID, the first number in the cage position (CagePos) column where the pd is located.
NOTE: If a storage drive fails, the system automatically runs servicemag in the background and illuminates the blue drive LED, indicating a fault and which drive to replace. A storage drive may need to be replaced for various reasons and may not show errors in the displayed output. If the replacement is a proactive replacement prior to a failure, enter

servicemag start -pdid <pdID>

to initiate the temporary removal of data from the drive, with storage of the removed data on spare chunklets.

7. Enter

servicemag status

to monitor progress.
NOTE: When an SSD is identified as degraded, you must manually initiate the replacement process. Issue the servicemag start -pdid
<pdID> command to move the chunklets. When the SSD is replaced, the system automatically initiates servicemag resume.

NOTE: This process might take up to 10 minutes; repeat the command to refresh the status. There are four responses; Response 1 is expected when the drive is ready to be replaced.

Response 1: servicemag has successfully completed. When Succeeded displays as the last line in the output, it is safe to replace the drive. This response is expected.

Response 2: servicemag has not started. Data is being reconstructed on spares; servicemag does not start until this process is complete. Retry the command at a later time.

Response 3: servicemag has failed. Call your authorized service provider for assistance.

Response 4: servicemag is in progress. The output will inform the user of progress.
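The ready-to-replace condition (Response 1) can be checked in a script. A heuristic Python sketch, assuming Succeeded appears on the last line exactly as described above:

```python
def servicemag_ready(status_output):
    """Return True when the last non-empty line of `servicemag status`
    output reports success (Response 1), i.e. the drive is safe to pull."""
    lines = [l for l in status_output.splitlines() if l.strip()]
    return bool(lines) and "Succeeded" in lines[-1]

print(servicemag_ready("servicemag start -pdid 5\nSucceeded"))  # True
print(servicemag_ready("servicemag is still in progress"))      # False
```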

8. Locate the failed drive. To avoid damaging the hardware or losing data, always confirm the drive by its amber fault LED, before
removing it.

a. Execute the

showpd -failed

command:

b. Execute the

locatecage -t XX cageY

command, where:

XX is the appropriate number of seconds to allow service personnel to view the LED status of the drive enclosure.

Y is the cage number shown as the first number of CagePos in the output of the showpd -failed command.

This flashes all drives in this cage except the failed drive.
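The command string can be built from a CagePos value taken from showpd -failed. The helper below and its 60-second default are illustrative assumptions, not HPE tooling:

```python
def locatecage_cmd(cage_pos, seconds=60):
    """Build a locatecage command from a CagePos value such as '3:0:0';
    the cage number is the first colon-separated field."""
    cage = cage_pos.split(":")[0]
    return f"locatecage -t {seconds} cage{cage}"

print(locatecage_cmd("3:0:0"))  # locatecage -t 60 cage3
```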

Drives verification

1. Verify that the drive has been successfully replaced using

servicemag status

There are three possible responses.

Response 1: servicemag is in progress; the output describes the current state of the procedure.

Response 2: servicemag has completed. When No servicemag operations logged displays as the last line in the output, the drive has been successfully replaced.

Response 3: servicemag has failed. There can be several causes for this failure; contact your authorized service provider for assistance.

2. Enter

checkhealth -detail

to verify the current state of the system.

3. Enter

exit

and select X to exit from the 3PAR Service Processor Menu.

Drive Enclosure CLI

Drive enclosure identification and shutdown

1. Connect to the service processor and start an SPMAINT session.

2. From the 3PAR Service Processor Menu, enter 7 for Interactive CLI for a StoreServ, and then select the system.

3. When prompted, enter y to enable maintenance mode for the system.

4. Enter the



showsys

command to verify you are on the correct array.

5. Enter

checkhealth -detail

to verify the current state of the system.

6. Label the cage and power cables for later replacement.

7. Label the drives within the drive enclosure with their slot location. The drives must be populated in the same order in the
replacement cage.

8. Power down the StoreServ using shutdownsys halt. Wait approximately 10 to 15 minutes for the Hot Plug LED on the nodes to turn blue.

Drive enclosure verification

1. Verify that the node LEDs are in sync and blinking green.

2. Identify the cage ID of the failed drive enclosure that was removed. Issue the

showcage

command.

3. Remove the entry of the removed cage from the stored system information. Issue the

servicecage remove cagex

command, where x is the cage number for the failed drive enclosure.

4. Verify that the new cage and drives are online and ready by using

showcage

,

showpd

, and

checkhealth -detail cabling

cli% showcage

Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model FormFactor

0 cage0 0:1:1 0 1:1:1 0 10 24-29 4038 4038 DCN2 SFF

1 cage1 0:1:1 1 1:1:1 1 6 25-29 4038 4038 DCS7 LFF

2 cage2 0:1:2 0 1:1:2 1 10 24-29 4038 4038 DCS8 SFF

3 cage3 0:1:2 1 1:1:2 0 6 25-26 4038 4038 DCS7 LFF

cli% showpd

----Size(MB)----- ----Ports----

Id CagePos Type RPM State Total Free A B Capacity(GB)

0 0:0:0 FC 10 normal 417792 322560 0:1:1*1:1:1 450

1 0:1:0 FC 10 normal 417792 309248 0:1:1 1:1:1* 450



2 0:2:0 FC 10 normal 417792 318464 0:1:1*1:1:1 450

3 0:3:0 FC 10 normal 417792 315392 0:1:1 1:1:1* 450
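As a quick scripted check of the showpd output above, the following Python sketch (an illustration keyed to the column layout shown) confirms that every physical drive reports a normal State:

```python
def all_pds_normal(output):
    """Return True if every data row of `showpd` output has State 'normal'."""
    for line in output.splitlines():
        f = line.split()
        # Data rows start with a numeric drive Id; the fifth column is State.
        if len(f) >= 5 and f[0].isdigit() and f[4] != "normal":
            return False
    return True

sample = """\
Id CagePos Type RPM State Total Free A B Capacity(GB)
0 0:0:0 FC 10 normal 417792 322560 0:1:1*1:1:1 450
1 0:1:0 FC 10 normal 417792 309248 0:1:1 1:1:1* 450"""
print(all_pds_normal(sample))  # True
```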

5. Verify that there are no cabling errors. Checkhealth provides guidance for fixing cabling problems.

6. Instruct the host administrator to bring the host back up.

7. Enter

checkhealth -detail

to verify the current state of the system.

8. Enter

exit

and select X to exit from the 3PAR Service Processor Menu.

I/O Module CLI

I/O module identification

1. Connect to the service processor and start an SPMAINT session.

2. From the 3PAR Service Processor Menu, enter 7 for Interactive CLI for a StoreServ, and then select the system.

3. When prompted, enter y to enable maintenance mode for the system.

4. Use the

showsys

command to verify you are on the correct array.

5. Enter

checkhealth -detail

to verify the current state of the system.

6. Enter

showcage

to display the cage IDs.

cli% showcage

Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model FormFactor

0 cage0 0:1:1 0 1:1:1 0 10 24-29 4038 4038 DCN2 SFF

1 cage1 0:1:1 1 1:1:1 1 6 25-29 4038 4038 DCS7 LFF

2 cage2 0:1:2 0 1:1:2 1 10 24-29 4038 4038 DCS8 SFF

3 cage3 0:1:2 1 1:1:2 0 6 25-26 4038 4038 DCS7 LFF

7. Enter

showcage -d <cageID>

to display information about the drive cage you are working on.



cli% showcage -d cage0

Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model FormFactor

0 cage0 0:1:1 0 1:1:1 0 10 24-29 4038 4038 DCN2 SFF

-----------Cage detail info for cage0 ---------

Position: ---

Interface Board Info Card0 Card1

Firmware_status Current Current

Product_Rev 4038 4038

State(self,partner) OK,OK OK,OK

VendorId,ProductId HP,DCN2 HP,DCN2

Master_CPU Yes No

SAS_Addr 5001438030F5953E 5001438030F5953E

Link_Speed(DP1,Internal) 12.0Gbps,12.0Gbps 12.0Gbps,12.0Gbps

PS PSState ACState DCState Fan State Fan0_Speed Fan1_Speed

ps0 OK OK OK OK Low Low

ps1 OK OK OK OK Low Low

-------------Drive Info-------------- --PortA-- --PortB--

Drive DeviceName State Temp(C) LoopState LoopState

0:0 5000cca05507607f Normal 24 OK OK

1:0 5000cca055072b9f Normal 24 OK OK

2:0 5000cca05507224f Normal 24 OK OK

3:0 5000cca0550722eb Normal 24 OK OK

4:0 5000cca03901dd23 Normal 28 OK OK

5:0 5000cca039029bab Normal 29 OK OK

6:0 5000cca039016db3 Normal 28 OK OK

7:0 5000cca045026e2b Normal 27 OK OK

8:0 5000cca045027fef Normal 28 OK OK

9:0 5000cca04500d02f Normal 28 OK OK
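The PS rows in this detail output can also be checked in software. An illustrative Python sketch keyed to the ps0/ps1 line format shown above:

```python
def cage_ps_states(output):
    """Extract {ps_name: (PSState, ACState, DCState)} from the PS rows
    of `showcage -d` output."""
    states = {}
    for line in output.splitlines():
        f = line.split()
        if len(f) >= 4 and f[0].startswith("ps"):
            states[f[0]] = (f[1], f[2], f[3])
    return states

sample = """\
PS PSState ACState DCState Fan State Fan0_Speed Fan1_Speed
ps0 OK OK OK OK Low Low
ps1 OK OK OK OK Low Low"""
print(cage_ps_states(sample))
# {'ps0': ('OK', 'OK', 'OK'), 'ps1': ('OK', 'OK', 'OK')}
```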

8. Enter

locatecage <cageID>

to illuminate the blue service LED of the drive enclosure with the failed I/O module.

9. Locate the I/O module from the showcage output after you find the cage with the blue service LED. The LED must be blue indicating
the I/O module is safe to remove.
NOTE: I/O module 0 is at the bottom and I/O module 1 is on top.



I/O module verification

1. Enter

locatecage -t 0

to turn off the blue service LED of the drive enclosure.

2. Enter

showcage

to display the cage IDs.

cli% showcage

Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model FormFactor

0 cage0 0:1:1 0 1:1:1 0 10 24-29 4038 4038 DCN2 SFF

1 cage1 0:1:1 1 1:1:1 1 6 25-29 4038 4038 DCS7 LFF

2 cage2 0:1:2 0 1:1:2 1 10 24-29 4038 4038 DCS8 SFF

3 cage3 0:1:2 1 1:1:2 0 6 25-26 4038 4038 DCS7 LFF

3. Enter

showcage -d <cageID>

to display information about the serviced drive cage.


If required: enter

upgradecage <cageID>

to upgrade the firmware on the I/O module.


NOTE: The drive enclosure blue service LED blinks during the upgrade process.

Enter



showcage -d <cageID>

to display information about the drive cage.

cli% showcage -d cage0

Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model FormFactor

0 cage0 0:1:1 0 1:1:1 0 10 24-29 4038 4038 DCN2 SFF

-----------Cage detail info for cage0 ---------

Position: ---

Interface Board Info Card0 Card1

Firmware_status Current Current

Product_Rev 4038 4038

State(self,partner) OK,OK OK,OK

VendorId,ProductId HP,DCN2 HP,DCN2

Master_CPU Yes No

SAS_Addr 5001438030F5953E 5001438030F5953E

Link_Speed(DP1,Internal) 12.0Gbps,12.0Gbps 12.0Gbps,12.0Gbps

PS PSState ACState DCState Fan State Fan0_Speed Fan1_Speed

ps0 OK OK OK OK Low Low

ps1 OK OK OK OK Low Low

-------------Drive Info-------------- --PortA-- --PortB--

Drive DeviceName State Temp(C) LoopState LoopState

0:0 5000cca05507607f Normal 24 OK OK

1:0 5000cca055072b9f Normal 24 OK OK

2:0 5000cca05507224f Normal 24 OK OK

3:0 5000cca0550722eb Normal 24 OK OK

4:0 5000cca03901dd23 Normal 28 OK OK

5:0 5000cca039029bab Normal 29 OK OK

6:0 5000cca039016db3 Normal 28 OK OK

7:0 5000cca045026e2b Normal 27 OK OK

8:0 5000cca045027fef Normal 28 OK OK

9:0 5000cca04500d02f Normal 28 OK OK

4. Enter

checkhealth -detail

to verify the current state of the system.

5. Enter

exit

and select X to exit from the 3PAR Service Processor Menu.



Drive PCM CLI

Drive PCM identification

Before you begin: Connect to the service processor and start an SPMAINT session.

1. From the 3PAR Service Processor Menu, enter 7 for Interactive CLI for a StoreServ, and then select the system.

2. When prompted, enter y to enable maintenance mode for the system.

3. Enter the

showsys

command to verify you are on the correct array.

4. Enter

checkhealth -detail

to verify the current state of the system.

5. Normally, a failed PCM displays an invalid status; if that is the case, close the current window and proceed to removing the PCM. If no invalid status is displayed, use one of the following procedures:

a. If the cage has been called out in a notification, issue the

showcage -d <cageID>

command, where

<cageID>

is the name of the cage indicated in the notification. One or more of ACState, DCState, and PSState could be in a Failed state.

b. If the cage is unknown, issue the showcage -d command.

cli% showcage

Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model FormFactor

0 cage0 0:1:1 0 1:1:1 0 10 24-29 4038 4038 DCN2 SFF

1 cage1 0:1:1 1 1:1:1 1 6 25-29 4038 4038 DCS7 LFF

2 cage2 0:1:2 0 1:1:2 1 10 24-29 4038 4038 DCS8 SFF

3 cage3 0:1:2 1 1:1:2 0 6 25-26 4038 4038 DCS7 LFF

The output above is repeated for each cage; search for the failure.

6. Enter

locatecage <cageID>

to illuminate the blue service LED of the drive enclosure with the failed component. The LED must be blue indicating the drive PCM
is safe to remove.

Drive PCM verification



1. Verify that the PCM has been successfully replaced by issuing the

showcage -d <cageID>

command. ACState, DCState, and PSState should all be OK.

2. Enter

checkhealth -detail

to verify the current state of the system.

3. Enter

exit

and select X to exit from the 3PAR Service Processor Menu.

Overview

Shown here is the front of the 3PAR 8440 storage system, part of the 3PAR 8000 storage series. This is a 4-node system with two SFF (Small Form Factor) drive enclosures. Each SFF drive enclosure holds 24 SFF drives. Below are two LFF (Large Form Factor) drive enclosures attached to the 4-node system. Each of the LFF drive enclosures holds 24 LFF drives.

Shown here is the rear of the 4-node 3PAR 8440. We see two different node pairs; the nodes in a given pair are flipped in orientation with one another. On each node pair, we see interconnect cabling, a management port, host networking (either Fibre Channel or converged networking), a gray node removal rod, mini-SAS drive enclosure cabling slots, additional host connectivity through onboard FC ports, and remote copy and other management ports. On each side of the node are 764-watt PCMs (Power Cooling Modules). Each node PCM also contains a battery for the node. On the LFF drive enclosure, we see the I/O modules that connect to the nodes, and the drive enclosure's 580-watt PCM.

Interconnect cabling

Interconnect link cables are used with 4-node systems to interconnect the nodes. It is no longer necessary to shut down the StoreServ to replace the node interconnect link cables. Each interconnect cable is labeled with an 'A' side and a 'C' side. The 'A' side of the cables is inserted into either node 0 or node 1. The 'C' side of the cables is inserted into either node 2 or node 3. Once you have completed cabling, each node will be able to communicate with the other nodes in the cluster.

Mini-SAS cabling

The drive enclosure is cabled with either a passive copper or an active optical cable. The passive copper cable is used in a single-rack enclosure, and the active optical cable is used in a multi-rack environment. The cable is connected first to the appropriate DP port on the node, and then into the appropriate port on the I/O module in the drive enclosure. Refer to the service guide for determining the appropriate ports for cabling.

Installing replacement components

The service replacement steps for all parts in this system follow the same basic procedure:

1. Identify the failed component using the CLI

2. Remove the failed component

3. Install the replacement component

4. Verify the replacement component is functioning properly using the CLI

Two parts in the system are designated Customer Self Repair (CSR): the 8200 node, and the drives. Customers will use the SSMC to
identify the failed component.

Controller Node - Customer

Controller node - customer


CAUTION:



To prevent overheating, node replacement requires a maximum service time of 30 minutes.

Verify that cables are labeled before shutting down the node.

Wear an electrostatic discharge wrist strap to avoid damaging any circuitry.


When the failure notification is received, customers should contact their Authorized Service Provider (ASP) for assistance with failure verification and with identifying the exact component to be replaced and the location of the failed node if replacement is required.
CAUTION: Customers should only replace a Controller Node on StoreServ 8200 Storage; other internal components should be serviced
by ASPs.
CAUTION: Alloy (grey)-colored latches on components such as the node mean the component is warm-swappable. HP recommends that the node be shut down (with the power to the enclosure remaining on) before removing this component. Contact your ASP for node diagnosis and shutdown.
Preparation

1. Unpack the replacement node and place on an ESD safe mat.

Node identification and shutdown


Contact your ASP for assistance in completing this task.
Node removal

1. Allow 2-3 minutes for the Node to halt, then verify that the Node Status LED is flashing green and the Node UID LED is solid blue,
not flashing, indicating that the Node has been halted and is safe to remove.
NOTE: The Node Fault LED may be amber, depending on the nature of the node failure.

CAUTION: The system will not fail if the node is properly halted before removal, but data loss may occur if the replacement
procedure is not followed correctly.

NOTE: Nodes 1 and 3 are rotated with reference to Nodes 0 and 2.

2. At the rear of the rack, remove cables from the failed node.
NOTE: Confirm all of the cables are clearly labeled for later reconnection.

3. Pull the gray node rod to the extracted position, out of the component.

4. When the node is halfway out of the enclosure, use both hands to slide the node out completely.

5. While carefully supporting the controller node from underneath, remove the node and place on an ESD safe work surface.

Node replacement

1. Remove any SFPs from the failed node by opening the retaining clip and sliding the SFP out of the slot.

2. Install the SFPs into the replacement node by carefully sliding the SFP into the port until fully seated, then closing the wire handle
to secure it in place.

3. On the replacement node, ensure the gray node rod is in the extracted position, pulled out of the component.

4. Grasp each side of the replacement node and gently slide it into the enclosure. Ensure the node is aligned with the grooves in the
slot.
CAUTION: Ensure the node is correctly oriented; alternate nodes are rotated 180°.

5. Keep sliding the node in until it halts against the insertion mechanism.

6. Reconnect the cables to the node.

7. Push the extended gray node rod into the node to ensure the node is fully seated.
CAUTION: If the blue LED is flashing, which indicates that the node is not properly seated, pull out the grey node rod and push it
back in to ensure that the node is fully seated.

NOTE: Once inserted, the node should power up and go through the node rescue procedure before joining the cluster. This might
take up to 10 minutes.

8. Verify the node LED is flashing green in synchronization with other nodes, indicating that the node has joined the cluster.

9. Follow the return instructions provided with the new component.


NOTE: If a PCIe adapter is installed in the failed node, leave it installed. Do not remove and return it in the packaging for the
replacement PCIe adapter.
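The LED checks in the steps above reduce to a simple decision rule. The following Python sketch is illustrative only; the LED names and states are taken from this procedure, and the function is not part of any HPE tool:

```python
def safe_to_remove(status_led, uid_led):
    """Decision rule from the node removal steps: a halted node is safe
    to remove when the Node Status LED is flashing green and the Node
    UID LED is solid blue (not flashing). The Node Fault LED may be
    amber depending on the failure, so it is intentionally not checked."""
    return status_led == "flashing green" and uid_led == "solid blue"

print(safe_to_remove("flashing green", "solid blue"))     # True
# A flashing blue UID LED means the node is not fully halted or seated
print(safe_to_remove("flashing green", "flashing blue"))  # False
```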



Node verification
Follow the SSMC steps to verify the installation of the replacement node.

Controller Node - Service



CAUTION:

To prevent overheating, node replacement requires a maximum service time of 30 minutes.

Verify that cables are labeled before shutting down the node and removing the cover.

Wear an electrostatic discharge wrist strap to avoid damaging any circuitry.

Preparation

1. Unpack the replacement node and place on an ESD safe mat.

Node identification and shutdown


Follow the CLI steps to identify and shutdown the failed node.
Node removal
CAUTION: The system does not fail if the node is properly halted before removal, but data loss might occur if the replacement
procedure is not followed correctly.

1. Allow up to ten minutes for the node to halt, then verify that the Node Status LED is flashing green and the Node UID LED is solid
blue, not flashing, indicating that the node has been halted and is safe to remove.
NOTE: The Node Fault LED might be amber, depending on the nature of the node failure.

2. Confirm all of the cables to the node and PCI adapters are clearly labeled with their location for later reconnection.

3. Remove the cables connected directly to the controller node and to the PCI adapters.

4. Pull the gray node rod to the extracted position, out of the component.

5. When the node is halfway out of the enclosure, use both hands to slide the node out completely.

6. While carefully supporting the controller node from underneath, remove the node and place on an ESD safe work surface.

Node replacement

1. On the replacement node, ensure the gray node rod is in the extracted position, pulled out of the component.

2. Remove the node cover from both the failed node and the replacement node, by loosening two thumbscrews and lifting the cover to
remove.

3. Transfer both SFPs from the onboard adapter ports on the failed node to the onboard adapter ports on the replacement node:

a. Open the retaining clip and carefully slide the SFP out of the slot on the failed node.

b. Carefully slide the SFP into the adapter port on the replacement node until fully seated, and then close the retaining clip to
secure it in place.

4. Carefully transfer the following from the failed controller node to the exact same position in the replacement controller node:

All of the control cache DIMMs

All of the data cache DIMMs

The node boot drive

5. Remove the SFPs from the PCIe adapter in the failed node by opening the retaining clip and sliding it out of the slot.

6. Remove the PCIe adapter assembly, including the riser card, from the failed node and transfer to the replacement node.

7. Install the SFPs, removed from the PCIe adapter in the failed node, into the PCIe adapter in the replacement node and close the
retaining clip to secure it.

8. Replace the node cover on both the failed node and the replacement node by aligning the node rod and guide pins with the cutouts
and lowering into place, then tightening two thumbscrews.

9. Push in the gray node rod on the failed node to ready for packaging.

10. On the replacement node, ensure that the gray node rod is in the extracted position, pulled out of the component.

11. Grasp each side of the replacement node and gently slide it into the enclosure. Ensure the node is aligned with the grooves in the
slot.
CAUTION: Ensure the node is correctly oriented; alternate nodes are rotated 180°.

12. Keep sliding the node in until it halts against the insertion mechanism.

13. Reconnect the cables to the node.

14. Push the extended gray node rod into the node to ensure the node is fully seated.
CAUTION: If the blue LED is flashing, which indicates that the node is not properly seated, pull out the grey node rod and push it
back in to ensure that the node is fully seated.

NOTE: Once inserted, the node should power up and go through the node rescue procedure before joining the cluster. This might
take up to 10 minutes.

15. Verify the node LED is flashing green in synchronization with other nodes, indicating that the node has joined the cluster.
Node verification
Follow the CLI steps to verify the installation of the replacement node.
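Cluster membership after a replacement can be confirmed from a node listing. `shownode` is a real HPE 3PAR CLI command, but the two-column layout below is a simplified assumption for illustration; actual output has more columns, so consult the output on your system:

```python
def node_states(listing):
    """Parse a simplified shownode-style listing into {node_id: state}.
    Skips the header row and any line that does not start with a node ID."""
    states = {}
    for line in listing.strip().splitlines()[1:]:
        fields = line.split()
        if len(fields) >= 2 and fields[0].isdigit():
            states[int(fields[0])] = fields[1]
    return states

sample = """Node State
0    OK
1    OK"""

print(node_states(sample))                                   # {0: 'OK', 1: 'OK'}
# All nodes reporting OK indicates the replacement has joined the cluster
print(all(s == "OK" for s in node_states(sample).values()))  # True
```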

Node Clock Battery



CAUTION:

To prevent overheating, node replacement requires a maximum service time of 30 minutes.

Verify that cables are labeled before shutting down the node and removing the cover.

Wear an electrostatic discharge wrist strap to avoid damaging any circuitry.

CAUTION: Alloy gray-colored latches on components such as the node mean the component is warm-swappable. HP recommends
shutting down the node (with the enclosure power remaining on) before removing this component.
NOTE: The clock inside the node uses a 3-V lithium coin battery. The lithium coin battery might explode if it is incorrectly installed in
the node. Replace the clock battery only with a battery supplied by HP; do not use non-HP-supplied batteries. Dispose of used batteries
according to the manufacturer's instructions.
NOTE: If the node with the failed component is already halted, it is not necessary to shut down the node, because it is not part of the
cluster.
Node clock battery identification
Follow the CLI steps to identify the failed node clock battery.
Node clock battery removal

1. Allow 2-3 minutes for the node to halt, then verify that the Node Status LED is flashing green and the Node UID LED is solid blue,
not flashing, indicating that the node has been halted and is safe to remove.
CAUTION: The system will not fail if the node is properly halted before removal, but data loss may occur if the replacement
procedure is not followed correctly.

NOTE: The Node Fault LED may be amber, depending on the nature of the node failure.

2. Ensure that all cables on the failed node are labeled to facilitate reconnecting later.

3. At the rear of the rack, remove all cables from the node with the failed component.

4. Pull the node rod to remove the node from the enclosure.



5. When the node is halfway out of the enclosure, use both hands to slide the node completely out.

6. Set the node on the ESD safe mat for servicing.

7. Remove the node cover by loosening two thumbscrews and lifting the cover to remove.

8. Remove the Clock Battery by pulling aside the retainer clip and pulling the battery up from the battery holder.
CAUTION: Do not touch internal node components when removing or inserting the battery.

Node clock battery replacement

1. With the positive side facing the retaining clip, insert the replacement 3-V lithium coin battery into the clock battery slot.

2. Replace the node cover by aligning the node rod and guide pins with the cutouts and lowering into place, then tightening two
thumbscrews.

3. Ensure that the gray node rod is in the extracted position, pulled out of the component.

4. Grasp each side of the node and gently slide it into the enclosure. Ensure the node is aligned with the grooves in the slot.
CAUTION: Ensure the node is correctly oriented; alternate nodes are rotated by 180°.

5. Keep sliding the node in until it halts against the insertion mechanism.

6. Reconnect cables to the node.

7. Push the extended gray node rod into the node to ensure the node is correctly installed.
CAUTION: A flashing blue LED indicates that the node is not properly seated. Pull out the gray node rod and push back in to ensure
that the node is fully seated.

NOTE: Once inserted, the node should power up and rejoin the cluster; this may take up to 5 minutes.

8. Verify that the node LED is blinking green in synchronization with other nodes, indicating that the node has joined the cluster.

Node clock battery verification


Follow the CLI steps to verify the installation of the replacement node clock battery.

Node DIMM

CAUTION:

To prevent overheating, node replacement requires a maximum service time of 30 minutes.

Verify that cables are labeled before shutting down the node and removing the cover.

Wear an electrostatic discharge wrist strap to avoid damaging any circuitry.

CAUTION: Alloy gray-colored latches on components such as the node mean the component is warm-swappable. HP recommends
shutting down the node (with the enclosure power remaining on) before removing this component.
NOTE: If the node with the failed component is already halted, it is not necessary to shut down (halt) the node, because it is not part of
the cluster. The failed DIMM should be identified from the failure notification.
NOTE: Even when a DIMM is reported as failed, it still displays configuration information.
Node DIMM identification
Follow the CLI steps to identify the failed DIMM.
Node DIMM removal

1. Allow 2-3 minutes for the node to halt, then verify that the Node Status LED is flashing green and the Node UID LED is solid blue,
not flashing, indicating that the node has been halted and is safe to remove.
CAUTION: The system will not fail if the node is properly halted before removal, but data loss may occur if the replacement
procedure is not followed correctly.

NOTE: The Node Fault LED may be amber, depending on the nature of the node failure.



2. Ensure that all cables on the failed node are labeled to facilitate reconnecting later.

3. At the rear of the rack, remove all cables from the node with the failed component.

4. Pull the node rod to remove the node from the enclosure.

5. When the node is halfway out of the enclosure, use both hands to slide the node completely out.

6. Set the node on the ESD safe mat for servicing.

7. Remove the node cover by loosening two thumbscrews and lifting the cover to remove.

8. Physically identify the failed DIMM in the node. The Control Cache (CC) and Data Cache (DC) DIMMs can be identified by locating
the appropriate silk-screening on the board.

9. With your thumb or finger, press outward on the two tabs on the sides of the DIMM to remove the failed DIMM and place on the ESD
safe mat.

Node DIMM replacement

1. Align the key and insert the DIMM by pushing downward on the edge of the DIMM until the tabs on both sides snap into place.

2. Replace the node cover by aligning the node rod and guide pins with the cutouts and lowering into place, then tightening two
thumbscrews.

3. Ensure that the gray node rod is in the extracted position, pulled out of the component.

4. Grasp each side of the node and gently slide it into the enclosure. Ensure the node is aligned with the grooves in the slot.
CAUTION: Ensure the node is correctly oriented; alternate nodes are rotated by 180°.

5. Keep sliding the node in until it halts against the insertion mechanism.

6. Reconnect cables to the node.

7. Push the extended gray node rod into the node to ensure the node is correctly installed.
CAUTION: A flashing blue LED indicates that the node is not properly seated. Pull out the gray node rod and push back in to ensure
that the node is fully seated.

NOTE: Once inserted, the node should power up and rejoin the cluster; this may take up to 5 minutes.

8. Verify that the node LED is blinking green in synchronization with other nodes, indicating that the node has joined the cluster.

Node DIMM verification


Follow the CLI steps to verify the installation of the replacement DIMM.
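The identification step above locates the failed DIMM in a memory listing before matching it to the CC/DC silk-screen labels on the board. As a minimal sketch (the table format and status strings here are assumptions for illustration, not actual CLI output):

```python
def find_failed_dimms(inventory):
    """Scan (slot_label, status) pairs for non-OK DIMMs. Slot labels
    follow the Control Cache (CC) / Data Cache (DC) silk-screen naming
    described in this procedure."""
    return [slot for slot, status in inventory if status.lower() != "ok"]

# Hypothetical inventory for one node: two CC and two DC DIMM slots
inventory = [("CC0", "OK"), ("CC1", "OK"), ("DC0", "Failed"), ("DC1", "OK")]
print(find_failed_dimms(inventory))  # ['DC0']
```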

Node PCIe Adapter And Riser Card



CAUTION:

To prevent overheating, node replacement requires a maximum service time of 30 minutes.

Verify that cables are labeled before shutting down the node and removing the cover.

Wear an electrostatic discharge wrist strap to avoid damaging any circuitry.

Each node in a node pair (a node pair is composed of the two controller nodes in a single 2U enclosure) must have the same number and
type of adapters.
When installing or upgrading expansion cards, follow these guidelines:

2-node system: Install a same-type pair of cards.

4-node system: Install any combination of two same-type pairs of cards.
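The pairing rule above can be expressed as a check over the adapter inventory. This is an illustrative sketch only; the adapter type names are hypothetical, and node IDs within a 2U enclosure pair are assumed to be (0, 1) and (2, 3):

```python
def valid_adapter_config(node_adapters):
    """Check that within each node pair (nodes 0/1, nodes 2/3), both
    nodes carry the same number and type of adapters.

    node_adapters maps node ID -> list of adapter type strings."""
    pairs = {}
    for node, adapters in node_adapters.items():
        # Integer division groups the two nodes of each 2U enclosure
        pairs.setdefault(node // 2, []).append(tuple(sorted(adapters)))
    return all(len(set(members)) == 1 for members in pairs.values())

# Matched pair in a 2-node system (hypothetical type names)
print(valid_adapter_config({0: ["4p-FC"], 1: ["4p-FC"]}))     # True
# Mismatched pair violates the rule
print(valid_adapter_config({0: ["4p-FC"], 1: ["4p-iSCSI"]}))  # False
```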

CAUTION: Alloy gray-colored latches on components such as the node mean the component is warm-swappable. HP recommends
shutting down the node (with the enclosure power remaining on) before removing this component.
NOTE: If the node with the failed component is already halted, it is not necessary to shut down (halt) the node, because it is not part of
the cluster.
Node PCIe adapter identification
Follow the CLI steps to identify the failed PCIe adapter or PCIe riser card.
Node PCIe adapter removal
CAUTION: The system does not fail if the node is properly halted before removal, but data loss might occur if the replacement
procedure is not followed correctly.

1. Allow up to ten minutes for the node to halt, then verify that the Node Status LED is flashing green and the Node UID LED is solid
blue, not flashing, indicating that the node has been halted and is safe to remove.
NOTE: The Node Fault LED might be amber, depending on the nature of the node failure.

2. Ensure that all cables on the failed node are labeled to facilitate reconnecting later.

3. At the rear of the rack, remove all cables from the node with the failed component.

4. Pull the node rod to remove the node from the enclosure.

5. When the node is halfway out of the enclosure, use both hands to slide the node completely out.

6. Set the node on the ESD safe mat for servicing.

7. Remove the node cover by loosening two thumbscrews and lifting the cover to remove.

8. Disconnect all of the cables from the PCIe adapter.

9. Remove any SFPs from the PCIe Adapter.

10. Remove the PCIe Adapter assembly:

a. Press down on the blue tabs to release the assembly from the node.
NOTE: If the PCIe adapter is half-height, it may not be secured by both tabs. Some cards may have an extender to secure them under
the tabs.

b. Grasp and pull the assembly up and away from the node for removal.

Node PCIe riser card removal and replacement

1. If a failed riser card is being replaced:

a. Pull the failed riser card to the side to remove it from the PCIe adapter.

b. Insert the replacement riser card into the slot on the PCIe adapter.

Node PCIe adapter replacement

1. If a failed PCIe adapter is being replaced, remove the riser card from the failed PCIe adapter, and install onto the replacement PCIe
adapter.

2. Align the PCIe Adapter assembly with its slot in the node chassis.

3. Snap the PCIe Adapter assembly into position.

4. Reconnect the cables, removed earlier, to the PCIe adapter.

5. Replace any SFPs, removed earlier.

6. Replace the node cover by aligning the node rod and guide pins with the cutouts and lowering into place, then tightening two
thumbscrews.

7. Ensure that the gray node rod is in the extracted position, pulled out of the component.

8. Grasp each side of the node and gently slide it into the enclosure. Ensure the node is aligned with the grooves in the slot.
CAUTION: Ensure the node is correctly oriented; alternate nodes are rotated by 180°.

9. Keep sliding the node in until it halts against the insertion mechanism.

10. Reconnect cables to the node.

11. Push the extended gray node rod into the node to ensure the node is correctly installed.



CAUTION: A flashing blue LED indicates that the node is not properly seated. Pull out the gray node rod and push back in to ensure
that the node is fully seated.

NOTE: Once inserted, the node should power up and go through the node rescue procedure before joining the cluster. This may take
up to 10 minutes.

12. Verify that the node LED is blinking green in synchronization with other nodes, indicating that the node has joined the cluster.

Node PCIe adapter verification


Follow the CLI steps to verify the installation of the replacement PCIe adapter or PCIe riser card.

4-Port Fibre Combo Adapter


IMPORTANT: This adapter is only supported under OS 3.3.1 / SP 5.0 or higher.
Review the ATTN: SP 5.0 Changes and the current HPE 3PAR StoreServ 8000 Storage Service and Upgrade Guide Service Edition for all
software procedures.
IMPORTANT: This video covers the physical installation procedures and related software identification and verification steps for
installing the 4-port combo adapters. There are additional software procedures that must be performed before and after physical
installation, especially for configurations that run or intend to run HPE 3PAR File Persona to provide file services. Carefully
review the procedures related to installing, replacing, or upgrading the 4-port combo adapters in the HPE 3PAR StoreServ 8000
Storage Service and Upgrade Guide Service Edition before undertaking the procedures in this video.
CAUTION:

To prevent overheating, node replacement requires a maximum service time of 30 minutes.

Verify that cables are labeled before shutting down the node and removing the cover.

Wear an electrostatic discharge wrist strap to avoid damaging any circuitry.

Each node in a node pair (a node pair is composed of the two controller nodes in a single 2U enclosure) must have the same number and
type of adapters.
When installing or upgrading expansion cards, follow these guidelines:

2-node system: Install a same-type pair of cards.

4-node system: Install any combination of two same-type pairs of cards.

CAUTION: Alloy gray-colored latches on components such as the node mean the component is warm-swappable. HP recommends
shutting down the node (with the enclosure power remaining on) before removing this component.
NOTE: If the node with the failed component is already halted, it is not necessary to shut down (halt) the node, because it is not part of
the cluster.
Node PCIe adapter identification
Follow the CLI steps to identify the failed PCIe adapter or PCIe riser card.
4-Port Fibre Combo Adapter removal
CAUTION: The system does not fail if the node is properly halted before removal, but data loss might occur if the replacement
procedure is not followed correctly.

1. Allow up to ten minutes for the node to halt, then verify that the Node Status LED is flashing green and the Node UID LED is solid
blue, not flashing, indicating that the node has been halted and is safe to remove.
NOTE: The Node Fault LED might be amber, depending on the nature of the node failure.

2. Ensure that all cables on the failed node are labeled to facilitate reconnecting later.

3. At the rear of the rack, remove all cables from the node with the failed component.

4. Pull the node rod to remove the node from the enclosure.

5. When the node is halfway out of the enclosure, use both hands to slide the node completely out.



6. Set the node on the ESD safe mat for servicing.

7. Remove the node cover by loosening two thumbscrews and lifting the cover to remove.

8. Disconnect all of the cables from the PCIe adapter.

9. Remove any SFPs from the PCIe Adapter.

10. Remove the PCIe Adapter assembly:

a. Press down on the blue tabs to release the assembly from the node.
NOTE: If the PCIe adapter is half-height, it may not be secured by both tabs. Some cards may have an extender to secure them under
the tabs.

b. Grasp and pull the assembly up and away from the node for removal.
4-Port Fibre Combo Adapter replacement

1. If a failed PCIe adapter is being replaced, remove the riser card from the failed PCIe adapter, and install onto the replacement PCIe
adapter.

2. Align the PCIe Adapter assembly with its slot in the node chassis.

3. Snap the PCIe Adapter assembly into position.

4. Reconnect the cables, removed earlier, to the PCIe adapter.

5. Replace any SFPs, removed earlier.

6. Replace the node cover by aligning the node rod and guide pins with the cutouts and lowering into place, then tightening two
thumbscrews.

7. Ensure that the gray node rod is in the extracted position, pulled out of the component.

8. Grasp each side of the node and gently slide it into the enclosure. Ensure the node is aligned with the grooves in the slot.
CAUTION: Ensure the node is correctly oriented; alternate nodes are rotated by 180°.

9. Keep sliding the node in until it halts against the insertion mechanism.

10. Reconnect cables to the node.

11. Push the extended gray node rod into the node to ensure the node is correctly installed.
CAUTION: A flashing blue LED indicates that the node is not properly seated. Pull out the gray node rod and push back in to ensure
that the node is fully seated.

NOTE: Once inserted, the node should power up and go through the node rescue procedure before joining the cluster. This may take
up to 10 minutes.

12. Verify that the node LED is blinking green in synchronization with other nodes, indicating that the node has joined the cluster.
4-Port Fibre Combo Adapter verification
Follow the CLI steps to verify the installation of the replacement PCIe adapter or PCIe riser card.

4-Port iSCSI Combo Adapter


IMPORTANT: This adapter is only supported under OS 3.3.1 / SP 5.0 or higher.
Review the ATTN: SP 5.0 Changes and the current HPE 3PAR StoreServ 8000 Storage Service and Upgrade Guide Service Edition for
all software procedures.
IMPORTANT: This video covers the physical installation procedures and related software identification and verification steps for
installing the 4-port combo adapters. There are additional software procedures that must be performed before and after physical
installation, especially for configurations that run or intend to run HPE 3PAR File Persona to provide file services. Carefully
review the procedures related to installing, replacing, or upgrading the 4-port combo adapters in the HPE 3PAR StoreServ 8000
Storage Service and Upgrade Guide Service Edition before undertaking the procedures in this video.
CAUTION:

To prevent overheating, node replacement requires a maximum service time of 30 minutes.

Verify that cables are labeled before shutting down the node and removing the cover.

Wear an electrostatic discharge wrist strap to avoid damaging any circuitry.


Each node in a node pair (a node pair is composed of the two controller nodes in a single 2U enclosure) must have the same number and
type of adapters.
When installing or upgrading expansion cards, follow these guidelines:

2-node system: Install a same-type pair of cards.

4-node system: Install any combination of two same-type pairs of cards.


CAUTION: Alloy gray-colored latches on components such as the node mean the component is warm-swappable. HP recommends
shutting down the node (with the enclosure power remaining on) before removing this component.
NOTE: If the node with the failed component is already halted, it is not necessary to shut down (halt) the node, because it is not part of
the cluster.
Node PCIe adapter identification
Follow the CLI steps to identify the failed PCIe adapter or PCIe riser card.
4-Port iSCSI Combo Adapter removal
CAUTION: The system does not fail if the node is properly halted before removal, but data loss might occur if the replacement
procedure is not followed correctly.
1. Allow up to ten minutes for the node to halt, then verify that the Node Status LED is flashing green and the Node UID LED is solid
blue, not flashing, indicating that the node has been halted and is safe to remove.
NOTE: The Node Fault LED might be amber, depending on the nature of the node failure.

2. Ensure that all cables on the failed node are labeled to facilitate reconnecting later.

3. At the rear of the rack, remove all cables from the node with the failed component.

4. Pull the node rod to remove the node from the enclosure.

5. When the node is halfway out of the enclosure, use both hands to slide the node completely out.

6. Set the node on the ESD safe mat for servicing.

7. Remove the node cover by loosening two thumbscrews and lifting the cover to remove.

8. Disconnect all of the cables from the PCIe adapter.

9. Remove any SFPs from the PCIe Adapter.

10. Remove the PCIe Adapter assembly:

a. Press down on the blue tabs to release the assembly from the node.
NOTE: If the PCIe adapter is half-height, it may not be secured by both tabs. Some cards may have an extender to secure them under
the tabs.

b. Grasp and pull the assembly up and away from the node for removal.

4-Port iSCSI Combo Adapter replacement


1. If a failed PCIe adapter is being replaced, remove the riser card from the failed PCIe adapter, and install onto the replacement PCIe
adapter.

2. Align the PCIe Adapter assembly with its slot in the node chassis.

3. Snap the PCIe Adapter assembly into position.

4. Reconnect the cables, removed earlier, to the PCIe adapter.

5. Replace any SFPs, removed earlier.

6. Replace the node cover by aligning the node rod and guide pins with the cutouts and lowering into place, then tightening two
thumbscrews.



7. Ensure that the gray node rod is in the extracted position, pulled out of the component.

8. Grasp each side of the node and gently slide it into the enclosure. Ensure the node is aligned with the grooves in the slot.
CAUTION: Ensure the node is correctly oriented; alternate nodes are rotated by 180°.

9. Keep sliding the node in until it halts against the insertion mechanism.

10. Reconnect cables to the node.

11. Push the extended gray node rod into the node to ensure the node is correctly installed.
CAUTION: A flashing blue LED indicates that the node is not properly seated. Pull out the gray node rod and push back in to ensure
that the node is fully seated.

NOTE: Once inserted, the node should power up and go through the node rescue procedure before joining the cluster. This may take
up to 10 minutes.

12. Verify that the node LED is blinking green in synchronization with other nodes, indicating that the node has joined the cluster.
4-Port iSCSI Combo Adapter verification
Follow the CLI steps to verify the installation of the replacement PCIe adapter or PCIe riser card.

Node Boot Drive



CAUTION:
To prevent overheating, node replacement requires a maximum service time of 30 minutes.

Verify that cables are labeled before shutting down the node and removing the cover.

Wear an electrostatic discharge wrist strap to avoid damaging any circuitry.

CAUTION: Alloy gray-colored latches on components such as the node mean the component is warm-swappable. HP recommends
shutting down the node (with the enclosure power remaining on) before removing this component.
NOTE: If the node with the failed component is already halted, it is not necessary to shut down (halt) the node, because it is not part of
the cluster.
Node boot drive identification
Follow the CLI steps to identify the failed node boot drive.
Node boot drive removal

1. Allow 2-3 minutes for the node to halt, then verify that the Node Status LED is flashing green and the Node UID LED is solid blue,
not flashing, indicating that the node has been halted and is safe to remove.
CAUTION: The system will not fail if the node is properly halted before removal, but data loss may occur if the replacement
procedure is not followed correctly.

NOTE: The Node Fault LED may be amber, depending on the nature of the node failure.

2. Ensure that all cables on the failed node are labeled to facilitate reconnecting later.

3. At the rear of the rack, remove all cables from the node with the failed component.

4. Pull the node rod to remove the node from the enclosure.

5. When the node is halfway out of the enclosure, use both hands to slide the node out completely.

6. Set the node on the ESD safe mat for servicing.

7. Remove the node cover by loosening two thumbscrews and lifting the cover to remove.

8. Remove the screw securing the failed node boot drive and allow it to release to its spring tension position.

9. Grasp the failed node boot drive by its edges and remove it from its slot in the node.



Node boot drive replacement
1. Remove the replacement node boot drive from its static dissipative bag.

2. Align the notch on the node boot drive with its key in the board receptacle in the node.

3. At an angle, gently insert the node boot drive into its slot.

4. Replace the screw to secure the replacement node boot drive.

5. Replace the node cover by aligning the node rod and guide pins with the cutouts and lowering into place, then tightening two
thumbscrews.

6. Ensure that the gray node rod is in the extracted position, pulled out of the component.

7. Grasp each side of the node and gently slide it into the enclosure. Ensure the node is aligned with the grooves in the slot.
CAUTION: Ensure the node is correctly oriented, alternate nodes are rotated by 180°.

8. Keep sliding the node in until it halts against the insertion mechanism.

9. Reconnect cables to the node.

10. Push the extended gray node rod into the node to ensure the node is correctly installed.
CAUTION: A flashing blue LED indicates that the node is not properly seated. Pull out the gray node rod and push back in to ensure
that the node is fully seated.

NOTE: Once inserted, the node should power up and go through the node rescue procedure before joining the cluster. This may take
up to 10 minutes.

11. Verify that the node LED is blinking green in synchronization with other nodes, indicating that the node has joined the cluster.

Node boot drive verification


Follow the CLI steps to verify the installation of the replacement node boot drive.

Node PCM

Node PCM
NOTE: PCMs are located at the rear of the system and are located on either side of the nodes. There are two types of PCMs: the 580 W
used in the drive enclosure, and the 764 W used in the controller node. The 764 W includes a replaceable battery.
CAUTION: To prevent overheating, the node PCM bay in the enclosure should not be left open for more than 6 minutes.
CAUTION: Be sure to put on your electrostatic discharge wrist strap to avoid damaging any circuitry.
Preparation
1. Remove the replacement PCM from its packaging and place on an ESD safe mat with the empty battery compartment facing up.

Node PCM identification


Follow the CLI steps to identify the failed node PCM.
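As an illustrative sketch only (option names vary by HPE 3PAR OS release; confirm against your release's CLI reference), the failed PCM can typically be spotted from the node power supply listing:

```
# List the power supplies (PCMs) for the node enclosure; the failed PCM
# reports a State other than OK (for example, Failed or Degraded)
cli% shownode -ps
```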
Node PCM removal
CAUTION: Ensure that the PCM power switch is turned to the OFF position to disconnect power.
NOTE: Because PCMs use a common power bus, some PCM LEDs might remain illuminated after the PCM is powered off.
1. Loosen the cord clamp, and release the cable tie tab.

2. Disconnect the power cable. Ensure the power cable and cable clamp will not be in the way when the PCM is removed.

3. With your thumb and forefinger, grasp and squeeze the latch to release the handle.

4. Slide the handle away from the PCM to open it.

5. Rotate the PCM release handle and slide the PCM out of the enclosure.

6. Place the faulty PCM on the ESD safe mat.
Node PCM replacement

1. On the replacement PCM, ensure that the handle is in the open position.

2. Slide the PCM into the enclosure and push until the insertion mechanism starts to engage.
NOTE: Ensure that no cables get caught in the PCM insertion mechanism.

3. Close the handle to fully seat the PCM into the enclosure; you will hear a click as the latch engages.

4. Once inserted, pull back lightly on the PCM to ensure that it is properly engaged.

5. Reconnect the power cable, and tighten the clamp.

6. Turn the PCM on and check that the power LED is green.


Node PCM verification
Follow the CLI steps to verify the installation of the replacement node PCM.

Node PCM Battery

Node PCM battery


CAUTION: To prevent overheating, the node PCM bay in the enclosure should not be left open for more than 6 minutes.
CAUTION: Be sure to put on your electrostatic discharge wrist strap to avoid damaging any circuitry.
Node PCM battery identification
Follow the CLI steps to identify the failed node PCM battery.
Node PCM battery removal
CAUTION: Ensure that the PCM power switch is turned to the OFF position to disconnect power.
NOTE: Because PCMs use a common power bus, some PCM LEDs might remain illuminated after the PCM is powered off.
1. Loosen the cord clamp, and release the cable tie tab.

2. Disconnect the power cable. Ensure the power cable and cable clamp will not be in the way when the PCM is removed.

3. With your thumb and forefinger, grasp and squeeze the latch to release the handle.

4. Slide the handle away from the PCM to open it.

5. Rotate the PCM release handle and slide the PCM out of the enclosure.

6. Place the PCM on the ESD safe mat with the battery compartment facing up.

7. Lift the handle to eject the failed battery.


Node PCM battery replacement

1. Ensure the replacement PCM battery handle is in the upright position, insert the battery into the PCM, and then push down the handle
to install it. Check that the battery and handle are level with the surface of the PCM.

2. Slide the PCM into the enclosure and push until the insertion mechanism starts to engage.
NOTE: Ensure that no cables get caught in the PCM insertion mechanism.

3. Close the handle to fully seat the PCM into the enclosure; you will hear a click as the latch engages.

4. Once inserted, pull back lightly on the PCM to ensure that it is properly engaged.

5. Reconnect the power cable, and tighten the clamp.

6. Turn the PCM on and check that the power LED is green.


Node PCM battery verification

Follow the CLI steps to verify the installation of the replacement node PCM battery.
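As a non-authoritative sketch (the `showbattery` command is typical of the HPE 3PAR CLI, but output columns differ by release; check the CLI reference for your release), battery verification might look like:

```
# Show battery state and charge level; the replacement battery should
# report State OK and charge toward 100% (a flashing LED while charging is normal)
cli% showbattery
```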

SFP

SFP
CAUTION: Be sure to put on your electrostatic discharge wrist strap to avoid damaging any circuitry.
SFPs are located in the adapter ports on the controller node; there are two to six SFPs per node.
WARNING: When the system is on, do not look directly into the fiber-optic connectors. The laser light could damage your eyes.
SFP identification
Follow the CLI steps to identify the failed SFP.
SFP removal
NOTE: The hardware procedure shown depicts the 3PAR 7000 series. The procedure is identical for the 3PAR 8000 series.

1. After identifying the SFP that requires replacement, disconnect the cable, open the retaining clip and carefully slide the SFP out of
the slot.
CAUTION: When handling the SFP, do not touch the gold contact leads to prevent damage.
SFP replacement

1. Carefully slide the replacement SFP into the adapter until fully seated, and then close the retaining clip to secure it in place.

2. Reconnect the cable to the SFP module and verify that the link status LED is solid green.
SFP verification
Follow the CLI steps to verify the installation of the replacement SFP.
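As a hedged example (the `showport -sfp` form is typical of the HPE 3PAR CLI; confirm option spelling against your release's CLI reference), SFP verification might look like:

```
# Show the SFP fitted to each port; the replacement SFP should report State OK
cli% showport -sfp
```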

Drive - Customer

Drive customer
CAUTION: Be sure to put on your electrostatic discharge wrist strap to avoid damaging any circuitry.
NOTE: The system supports the following storage drives: Large Form Factor (LFF) drives, Small Form Factor (SFF) drives, and Solid
State Drives (SSD). The replacement procedures are essentially the same for all storage drives.
CAUTION:

If you require more than 10 minutes to replace a drive, install a drive blank cover to prevent overheating while you are working.

To avoid damage to hardware and the loss of data, never remove a drive without confirming that the drive fault LED is lit.
Before you begin: Review drive considerations.
Drive identification
Follow the SSMC steps to identify the failed drive.
Drive removal

1. Pinch the handle latch to release the handle into open position, pull the handle away from the enclosure, pull the drive only slightly
out of the enclosure, and then wait 30 seconds to allow time for the internal disk to stop rotating.

2. Slowly slide the drive out of the enclosure and place on an ESD safe mat.

Drive replacement
1. Press the handle latch to open the handle.

2. With the latch handle of the drive fully extended, align and slide the drive into the bay until the handle begins to engage, and then
push firmly until it clicks.

3. Close the handle to fully seat the drive into the drive bay.

4. Observe the newly installed drive to verify that the amber Fault LED turns off and remains off for 60 seconds.
Drive verification
Follow the SSMC steps to verify the installation of the replacement drive.

Drive - Service

Drive service
CAUTION: Be sure to put on your electrostatic discharge wrist strap to avoid damaging any circuitry.
NOTE: The system supports the following storage drives: Large Form Factor (LFF) drives, Small Form Factor (SFF) drives, and Solid
State Drives (SSD). The replacement procedures are essentially the same for all storage drives.
CAUTION:

If you require more than 10 minutes to replace a drive, install a drive blank cover to prevent overheating while you are working.

To avoid damage to hardware and the loss of data, never remove a drive without confirming that the drive fault LED is lit.
Before you begin: Review drive considerations.
Drive identification
Follow the CLI steps to identify the failed drive.
Drive removal

1. Pinch the handle latch to release the handle into open position, pull the handle away from the enclosure, pull the drive only slightly
out of the enclosure, and then wait 30 seconds to allow time for the internal disk to stop rotating.

2. Slowly slide the drive out of the enclosure and place on an ESD safe mat.
Drive replacement

1. Press the handle latch to open the handle.

2. With the latch handle of the drive fully extended, align and slide the drive into the bay until the handle begins to engage, and then
push firmly until it clicks.

3. Close the handle to fully seat the drive into the drive bay.

4. Observe the newly installed drive to verify that the amber Fault LED turns off and remains off for 60 seconds.
Drive verification
Follow the CLI steps to verify the installation of the replacement drive.
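As an illustrative, non-authoritative sketch (the `showpd` and `servicemag` commands are typical of the HPE 3PAR CLI; exact options and output vary by release, so verify against the CLI reference for your release), drive verification might look like:

```
# Confirm the replacement drive is recognized and its state is normal
cli% showpd

# Check the status of the service operation on the drive magazine,
# including any data relocation back onto the replacement drive
cli% servicemag status
```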

Drive Enclosure

Drive enclosure
CAUTION: Be sure to put on your electrostatic discharge wrist strap to avoid damaging any circuitry.
CAUTION:
Online replacement: An online replacement of a drive enclosure might be possible; contact HPE Technical Support to schedule the
replacement of the drive enclosure while the storage system is online.

Offline replacement: An offline replacement of a drive enclosure can be performed by scheduling an offline maintenance window
and using the procedure in this section.

CAUTION: Two people are required to remove the enclosure from the rack to prevent injury.
NOTE: The procedures for the LFF 4U drive enclosure and the SFF 2U drive enclosure are essentially the same, except where noted.
Before you begin: Review drive considerations.
Drive enclosure identification and shutdown
Follow the CLI steps to identify and shut down the failed drive enclosure.
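As a hedged sketch only (the cage ID `cage3` below is hypothetical, and the `showcage` and `locatecage` commands are typical of the HPE 3PAR CLI; confirm against your release's CLI reference), identifying the enclosure might look like:

```
# List drive enclosures (cages) and their states
cli% showcage

# Flash the LEDs on a specific cage to physically locate it in the rack
cli% locatecage cage3
```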
Drive enclosure removal
1. If necessary, label all the mini-SAS cables connected to the drive enclosure to be replaced.

2. Label all drives with their slot locations (0, 1, 2, and so on).

3. Power down the enclosure and disconnect all mini-SAS cables and power cables.

4. Remove the I/O modules and power supplies, and set them on an ESD safe mat.

5. Remove the drives from the enclosure, noting all drive locations. Set the drives on an ESD safe mat.

6. Remove the bezels at the sides of the enclosure.

7. Remove the screws that mount the enclosure to the front of the rack: four screws for the 4U enclosure, two screws for the 2U
enclosure.

8. Remove the two hold-down screws that secure the enclosure at the rear of the rack.

9. While supporting the enclosure from underneath, slide the failed drive enclosure out of its rack.
WARNING: Always use at least two people to lift an enclosure.

Drive enclosure replacement


1. While supporting the replacement drive enclosure from underneath, align the enclosure with the rack rails and slide the enclosure
into the rack until flush.

2. Replace the two hold-down screws at the rear of the rack.

3. Replace the screws at the front of the rack: four screws for the 4U enclosure; two screws for the 2U enclosure.

4. Replace the I/O modules and power supplies.

5. Connect the mini-SAS cables as labeled.

6. Replace the power cables.

7. Replace the bezels at the sides of the enclosure.

8. Install the drives into their same slot locations, as labeled earlier.

9. Power on the cage, and wait for all drives to spin up with a green LED status.

10. Power on the StoreServ system, and wait approximately 10 to 15 minutes.


Drive enclosure verification
Follow the CLI steps to verify the installation of the replacement drive enclosure.
IMPORTANT:
The replacement of a cage always changes the cage ID.
After completing the cage hardware replacement, review the storage configuration and update all the custom created configuration
rules for Common Provisioning Groups referring to the original cage ID. If such changes are required, tuning the system may also be
needed.

I/O Module

I/O module
CAUTION: Be sure to put on your electrostatic discharge wrist strap to avoid damaging any circuitry.
CAUTION:
To prevent overheating, the I/O module bay in the enclosure should not be left open for more than 6 minutes.

Storage systems operate using two I/O modules per drive enclosure and can temporarily operate using one I/O module when
removing the other I/O module for servicing.
Before you begin: Review drive considerations.
I/O module identification
Follow the CLI steps to identify the failed I/O module.
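As an illustrative sketch (the cage ID `cage3` is hypothetical; `showcage -d` is a typical HPE 3PAR CLI form, but verify against the CLI reference for your release), identifying the failed module might look like:

```
# Detailed view of one cage, including the state of each
# interface card (I/O module); the failed module shows a non-OK state
cli% showcage -d cage3
```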
I/O module removal

1. If required, label and then unplug the cables from the failed I/O module. There may be one or two cables.

2. Grasp the module latch between thumb and forefinger, squeeze to release the latch, and pull the latch handles open.

3. Grip the handles on both sides of the module, remove it from the enclosure, and then place the failed I/O module on an ESD safe
mat.
I/O module replacement
1. Open the replacement module latch and slide it into the enclosure until it automatically engages.

2. Once the replacement module is in the enclosure, close the latch until it engages and clicks.

3. Pull back lightly on the handle to check seating.

4. Reconnect the mini-SAS cables to the replacement I/O module in the same locations as before.

I/O module verification


Follow the CLI steps to verify the installation of the replacement I/O module.

Drive PCM

Drive PCM
NOTE: PCMs are located at the rear of the system and are located on either side of the nodes. There are two types of PCMs: the 580 W
used in the drive enclosure, and the 764 W used in the controller node. The 764 W includes a replaceable battery.
CAUTION: To prevent overheating, the drive PCM bay in the enclosure should not be left open for more than 6 minutes.
CAUTION: Be sure to put on your electrostatic discharge wrist strap to avoid damaging any circuitry.
Preparation
1. Remove the replacement PCM from its packaging and place on an ESD safe mat.

Drive PCM identification


Follow the CLI steps to identify the failed drive PCM.
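As a hedged sketch (again with a hypothetical cage ID; `showcage -d` is typical of the HPE 3PAR CLI, and the exact fields differ by release), the failed drive PCM can typically be identified from the detailed cage view:

```
# Detailed view of one cage; the power supply section reports the
# state of each PCM, with the failed PCM showing a non-OK state
cli% showcage -d cage3
```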
Drive PCM removal
CAUTION: Ensure that the PCM power switch is turned to the OFF position to disconnect power.
NOTE: Because PCMs use a common power bus, some PCM LEDs might remain illuminated after the PCM is powered off.

1. Loosen the cord clamp, and release the cable tie tab.

2. Disconnect the power cable. Ensure the power cable and cable clamp will not be in the way when the PCM is removed.

3. With your thumb and forefinger, grasp and squeeze the latch to release the handle.

4. Slide the handle away from the PCM to open it.

5. Rotate the PCM release handle and slide the PCM out of the enclosure.

6. Place the faulty PCM on the ESD safe mat.


Drive PCM replacement

1. On the replacement PCM, ensure that the handle is in the open position.

2. Slide the PCM into the enclosure and push until the insertion mechanism starts to engage.
NOTE: Ensure that no cables get caught in the PCM insertion mechanism.

3. Close the handle to fully seat the PCM into the enclosure; you will hear a click as the latch engages.

4. Once inserted, pull back lightly on the PCM to ensure that it is properly engaged.

5. Reconnect the power cable, and tighten the clamp.

6. Turn the PCM on and check that the power LED is green.


Drive PCM verification
Follow the CLI steps to verify the installation of the replacement drive PCM.

Introduction to HPE 3PAR Training by HPE Education Services


This training video created by the Hewlett Packard Enterprise Education team will guide you through the process of installing an HPE
3PAR StoreServ 8000 system. We invite you to explore this series of courses to deepen your knowledge and skills of 3PAR StoreServ
systems, because installation is just the start of the journey.
HPE Education offers 3PAR courses in instructor-led, virtual instructor-led, and self-paced web-based training modalities to match how
you prefer to learn. All of these include the same excellent access to the HPE Virtual Lab environment, maintained with the latest
equipment and software for extended 3PAR skills practice and development.
HPE Education is designed for all skill sets, with offerings at levels 1, 2, and 3. For administrators who have hands-on daily responsibility
for managing the 3PAR StoreServ systems deployed in your environment, HPE Education courses offer the foundation and advanced
skills development you need to be proficient.
As a third component, HPE Education includes essential information on the concepts at the heart of HPE 3PAR architecture. This is
helpful to enable more advanced use of all 3PAR capabilities by deepening your understanding of how the 3PAR architecture is
designed. Once you have the 3PAR foundation established, we offer courses for advanced capabilities, like Remote Copy, Quality of
Service, Performance, Adaptive Optimization, Alerts, Rebalancing, and others. Additionally, to augment your 3PAR training, HPE
Education also offers courses on RAID, SAN, Fibre Channel network, and Fabric technologies. Subscribe to these channels to stay aware
of new training opportunities and offers.
Subscribe to the HPE Learning Channel, www.hpe.com/ww/learningchannel, to see all of the latest video information. Follow us on Twitter
@HPE_Education. Find us on LinkedIn and Facebook at HPE Technology Services. Get started today by clicking the link under the video,
www.hpe.com/ww/learn3PAR, for access to course data sheet details, schedules, and registration. For information on all of the excellent
courses available from HPE Education, see our course listings at www.hpe.com/www/learn.

Welcome to the HPE 3PAR StoreServ 8000 Storage Customer Self-install Video
The list of steps on the left provides navigation and keeps track of your progress. From any screen, the Menu pull-down menu at the top
left provides video links. The Resources pull-down menu at the top right provides links for documentation and a link to the Hewlett
Packard Enterprise Information Library.
When you are ready to begin, click the Next button at the bottom right to watch Overview & Preparation.

Powering Up
Everything is now ready to power up and initialize.
If necessary, route the PDU power cables under the bracket at the bottom rear of the rack. Connect the PDU power cables to separate
power sources to achieve power redundancy.
Confirm that the circuit breakers on the PDUs are set to the ON position.
If applicable, set the switches on the power strips to the ON position and verify that the LEDs are green.
If a physical Service Processor is being used, switch it on.
Power on all of the drive enclosure PCMs. Verify each PCM OK LED is green.
Power on the controller node enclosure PCMs and verify that each PCM OK LED is green.

NOTE:
The battery LED may flash green, indicating that the battery is charging. Once fully charged, the LED will be solid
green.

For more information on LEDs, follow the link to the LED Indicators document in the "Resources" pull-down menu.
Allow up to 10 minutes for the system to boot.
Now, from the front of the system, locate and verify that the ear cap System Power green LED is on and the Drive Status amber LED is
off.

NOTE:
The cage numbers may not reflect actual cage numbers at this point.

On the rear, verify the following: the controller node status LEDs are solid green, all I/O module status LEDs are lit green, and all PCM
status LEDs are still lit green. All of the hardware is now installed, powered up, and verified to be working.
Click Next to watch Setting up the Service Processor.

Setting Up the Service Processor


Setting up the service processor
For physical SP installation, refer to the 3PAR StoreServ 8000 Storage Self-Install Guide, available from the "Resources" pull-down
menu. You can install a virtual service processor on either VMware ESXi or Microsoft Hyper-V. This video uses a Virtual Service
Processor on VMware ESXi.
Important: The Virtual Service Processor that you will be installing must be on the same subnet as the 3PAR StoreServ Storage system.
Begin by installing the VMware ESXi VSP software from either the DVD or the email that was sent with links to download the software.
This email may have gone to the account responsible for purchasing the product.
From the VMware vSphere Client window select File > Deploy OVF template. On the Source page, click Browse to locate the OVF file on
the DVD or downloaded location.
Select the OVF file, click Open and then click Next.
On the OVF Template Details page, verify that the OVF template is selected, and then click Next.
On the Name and Location page, enter the name for the VSP as you filled it out in the worksheet, and then click Next.
On the Disk Format page, select Thin Provision, and then click Next.
On the Network Mapping page, map the virtual machine to the networks in your inventory, and then click Next.
On the Ready to Complete page, follow these steps:
a. Review the deployment settings.
b. Select the Power on after Deployment check box. Selecting this option powers on the VSP after the installation is complete.
c. Click Finish. A Deployment Completed Successfully message should appear after a few minutes. Click Close.
On the left navigation pane, verify that the system is on and the green icon is displayed on the new VSP. If necessary, expand the
navigation tree to find the deployed VSP.
Click Next to watch Software Setup.

Resources

Click here to access the Self-Install Guide

Click here to access the Cabling Configuration Guide

Click here to access the Self-Install Forum

Click here to access the Support Center

Click here to access the LED Indicators Document

Click here to access the HPE Information Library

Click here to access the HPE Insight Online User Guide

Click here to access the HPE Support Center for additional information
