HPE - Psg000003aen - Us - HPE 3PAR StoreServ 8000 Storage - Parts Support Guide
Notices
The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and
services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as
constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained
herein.
Confidential computer software. Valid license from Hewlett Packard Enterprise required for possession, use, or copying. Consistent with FAR
12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed
to the U.S. Government under vendor's standard commercial license.
Links to third-party websites take you outside the Hewlett Packard Enterprise website. Hewlett Packard Enterprise has no control over and is
not responsible for information outside the Hewlett Packard Enterprise website.
Acknowledgments
Intel®, Itanium®, Optane™, Pentium®, Xeon®, Intel Inside®, and the Intel Inside logo are trademarks of Intel Corporation in the U.S. and other countries.
Microsoft® and Windows® are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.
Adobe® and Acrobat® are trademarks of Adobe Systems Incorporated.
Java® and Oracle® are registered trademarks of Oracle and/or its affiliates.
UNIX® is a registered trademark of The Open Group.
All third-party marks are property of their respective owners.
Table of contents
Part Locator
Front view
SFF 2U drive enclosure
SFF disk drive
LFF 4U drive enclosure
LFF disk drive
Back view
Node
Node PCM
PCM battery
SFP
Node interconnect cables
Drive PCM
SAS cables
IO Module
Internal view
Data cache DIMM
Control cache DIMM
Node clock battery
Node boot drive
PCIe adapter riser card
Optional host bus adapter
PCIe adapter
Remove/Replace
References
ATTN: SP 5.0 Changes
Precautions
Tools And Materials
Connecting To The Service Processor
Servicing The Storage System
Drive Considerations
Node Rescue
Controller Node - Customer SSMC
Controller Node - Service CLI
Node Clock Battery CLI
Node DIMM CLI
Node PCIe Adapter And Riser Card CLI
Node Boot Drive CLI
Node PCM CLI
Node PCM Battery CLI
SFP CLI
Drive - Customer SSMC
Drive - Service CLI
Drive Enclosure CLI
I/O Module CLI
Drive PCM CLI
Procedures
Overview
Controller Node - Customer
Controller Node - Service
Node Clock Battery
Node DIMM
Node PCIe Adapter And Riser Card
4-Port Fibre Combo Adapter
4-Port iSCSI Combo Adapter
Node Boot Drive
Node PCM
Node PCM Battery
SFP
Drive - Customer
Drive - Service
Drive Enclosure
I/O Module
Drive PCM
HPE 3PAR StoreServ 8000 Storage Customer Self-Install Video
Introduction to HPE 3PAR Training by HPE Education Services
Welcome to the HPE 3PAR StoreServ 8000 Storage Customer Self-install Video
Overview & Preparation
Setting up the system
Cabling the Storage System
Powering UP
Setting UP the Service Processor
Software Setup
Post-Installation Tasks
Resources
ATTN: SP 5.0 Changes
ATTENTION: SP 5.0 Changes
On HPE 3PAR StoreServ Storage systems running OS 3.3.1/SP 5.0 and higher, the software procedures for part replacement have
changed. While the physical procedures shown in the accompanying videos are correct, be aware that the interfaces shown for running
software procedures, SPOCC and SPMAINT, are no longer available. Most software procedures are now run from the Service Console, a
new interface introduced with SP 5.0.
To understand the changes, watch the videos at the OS 3.3.1/SP 5.0 Part Replacement Procedures link.
This link can be found within the SML under Media Selection and Additional Resources for this product.
The Service Console (SC), a browser-based tool, is now the primary interface for managing part replacement, among many other
functions. Access the SC by browsing to: https://<Service Processor IP address>:8443
Accounts: admin and hpepartner (password set during SP initialization; can be changed by the customer); hpesupport (password
obtained through HPE StoreFront Remote and is either time-based or encrypted)
Notification of a degraded or failed part can come through an alert to the SC or SSMC, or an email, if enabled. Notification
includes corrective action and often a replacement part number, if appropriate.
Identification of the degraded or failed part is facilitated by the SC's Map, Activity, and, where supported, Schematic views.
Setting Maintenance mode must now be done through the SC's Action > Set Maintenance Mode, and should be done before every
part replacement.
Check Health and Locate can be run through the SC's Action pull-down menu.
Non-interactive CLI commands can be run through the SC's Action > Start CLI Session.
Interactive CLI commands must be run through the TUI (Text-based User Interface) over a terminal-console SSH session (port 22):
physically through the service port (Port 2) using 10.255.155.54, through iLO using 10.255.155.52, or using <Service Processor IP>
if accessed over the customer's network.
Verification that a replaced part is functioning properly is done through the SC's Map, Activity, and, where supported, Schematic views,
as well as physical inspection of the appropriate LEDs.
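As an illustration of the access paths listed above, here is a minimal sketch of opening an interactive session from a maintenance laptop. The addresses and account names are those given above; substitute the values for your environment, and use hpepartner or hpesupport in place of admin where appropriate.
Through the service port (Port 2): ssh admin@10.255.155.54
Over the customer network: ssh admin@<Service Processor IP address>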
Precautions
To prevent damage to the unit, protect data, and avoid personal injury, review and follow these precautions.
2. Put on your electrostatic discharge (ESD) wrist or shoe strap to avoid damaging any circuitry.
3. Place an ESD mat on a suitably grounded surface, and then place the temporary and replacement units on the mat.
Static electricity
Static electricity can damage electrical components. Before removing or replacing a component, observe the following precautions to
prevent damage to electric components and accessories:
Remove all ESD-generating materials from your work area.
To avoid hand contact, transport and store all ESD-sensitive parts and assemblies in conductive or approved ESD packaging such as
ESD tubes, bags, or boxes.
Keep electrostatic-sensitive parts in their containers until they arrive at static-free stations. Before removing items from their
containers, place the containers on a grounded surface.
Do not take the new component out of its ESD package or handle any component before connecting your ESD wrist or shoe strap to
a suitably grounded surface.
Use the ESD package provided with the new part to return the old part.
Before servicing any component in the storage system, prepare an Electrostatic Discharge-safe (ESD) work surface by placing an
antistatic mat on the floor or table near the storage system.
Attach the ground lead of the mat to an unpainted surface of the rack.
Attach the grounding strap clip directly to an unpainted surface of the rack.
Disassembly
Ensure that you take the following precautions when disassembling a unit:
Label each cable as you remove it, noting its position and routing. This will make replacing the cables much easier and will ensure
that the cables are rerouted properly.
If applicable, keep all screws with the component removed. The screws used in each component can be of different thread sizes and
lengths. Using the wrong screw in a component could damage the unit.
Phillips-head screwdriver
Torx screwdrivers
Service Processor Onsite Customer Care (SPOCC): A web-based graphical user interface that is available for support of the HP 3PAR
storage system and its SP. SPOCC is the web-based alternative to accessing most of the features and functionality that are
available through SPMAINT.
SPMAINT: The SPMAINT utility is an interface for the support (configuration and maintenance) of both the storage system and its
SP. Use SPMAINT as a backup method for accessing the SP; SPOCC is the preferred access method. Only one SPMAINT session is
allowed at a time through SPOCC or a CLI session.
CAUTION: Many of the features and functions that are available through SPMAINT can adversely affect a running system. To prevent
potential damage to the system and irrecoverable loss of data, do not attempt the procedures described in this manual until you have
taken all necessary safeguards.
NOTE: You connect to the SP using a standard web browser or a Secure Shell (SSH) session connection. If you do not have SSH, connect
to the serial port of the SP.
NOTE: HP recommends using a small private switch between the SP and the laptop to ensure that the laptop does not lose its network
connection during the build process. When the SP resets, the NIC port resets and drops the link. This connection loss can result in the
failure of the software load process. Any personal switch with four to eight ports is supported, such as the HP 1405-5G Switch
(J97982A), which is available as a non-catalog item from HP SmartBuy.
Use one of the following methods to establish a connection to the service processor (SP).
Method 1 - Maintenance PC (laptop) LAN Setup
1. Connect the laptop to the SP Service/iLO port via a cross-over cable or a private network.
2. Configure the LAN connection of the laptop:
a. Click Start > Control Panel > Network and Internet > Network and Sharing Center.
e. Record the current settings for restoring purposes after the SP maintenance is complete.
f. Choose Use the following IP address, set the following network parameters, and click OK.
IP address: 10.255.155.49
NOTE: The iLO session username is administrator and the password is listed on a label located on the top left corner of the server.
Log on to the SP with the applicable login credential. A password is not required. Follow the prompts to perform the Moment of
Birth (MOB) procedure.
Method 2 - SSH Connection Setup
Use a terminal emulator application to establish a Secure Shell (SSH) connection to the SP.
1. Connect the laptop to the SP Service/iLO port via cross-over cable or a private network.
Host Name (or IP address): 10.255.155.52 for iLO or 10.255.155.54 for Service (SPMAINT)
Login Name: administrator
3. Log on as root and follow the prompts to begin the MOB process.
Laptop serial settings Use the guidelines listed in this table to adjust the serial settings of the laptop before using a terminal emulator to
communicate with the SP and perform various tasks to support the storage system.
NOTE: Make sure to choose 115200 when rebuilding an SP or the rebuild will fail to complete.
Setting Value
Parity None
Word Length 8
Stop Bits 1
Transmit Xon/Xoff
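For example, a minimal sketch of opening a matching serial session from a Windows laptop with PuTTY (COM3 is a placeholder for the laptop's serial port; any terminal emulator configured with the same settings works):
putty -serial COM3 -sercfg 115200,8,n,1,X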
SPOCC
Use SPOCC to access the SPMAINT interface (Service Processor Maintenance) in the Command Line Interface (CLI), where you perform
various administrative and diagnostic tasks to support both the storage system and the SP.
To open SPOCC, enter the SP IP address in a web browser and enter your user name and password.
Use the SPMAINT interface if you are servicing a storage system component or when you need to run a CLI command. The SPMAINT
interface allows you to affect the current status and configuration of both the system and the SP. For this reason, only one instance of
the SPMAINT interface can be run at a time on a given system.
To access the SPMAINT interface:
1. From the left side of the SPOCC home page, click Support.
2. From the Service Processor - Support page, under Service Processor, click SPMAINT on the Web in the Action column.
3. From the 3PAR Service Processor Menu, enter 7 for Interactive CLI for a StoreServ, and then select the system.
HP 3PAR StoreServ Management Console (SSMC) software provides contemporary browser-based interfaces for monitoring HP 3PAR
storage systems.
The HP 3PAR StoreServ Management Console (SSMC) procedures in this guide assume that the storage system to be serviced has
already been added to an instance of SSMC and is available for management by logging in to the SSMC Main Console. If that is not the
case, you must first add the storage system to an instance of SSMC by logging in to the SSMC Administrator Console.
Direct login
1. Browse to the server on which HP 3PAR StoreServ Management Console software is installed, https://<IP address or FQDN>:8443.
The login screen opens. Tip: The default port number is 8443. Another port might have been assigned during installation of the software.
1. When already logged in to the Main Console, click the Session icon in the banner, and then select Administrator Console. The login
screen opens with the Administrator Console check box preselected.
1. Browse to the server on which HP 3PAR StoreServ Management Console software is installed, https://<IP address or FQDN>:8443.
The login screen opens. Tip: The default port number is 8443. Another port might have been assigned during installation of the software.
Parts have a nine-character spare part number on their labels. For some spare parts, the part number is available in the system.
Alternatively, the HP call center can assist in identifying the correct spare part number.
Swappable Components
Colored touch points on a storage system component (such as a lever or latch) identify whether the system should be powered on or off
during a part replacement:
Hot-swappable Parts are identified by red-colored touch points. The system can remain powered on and active during replacement.
NOTE: Drives are only hot-swappable if they have been properly prepared using servicemag.
Warm-swappable Parts are identified by gray touch points. The system does not fail if the part is removed, but data loss might
occur if the replacement procedure is not followed correctly.
Cold-swappable Parts are identified by blue touch points. The system must be powered off or otherwise suspended before replacing
the part.
CAUTION:
Do not replace cold-swappable components while power is applied to the product. Power off the device and then disconnect all AC
power cords.
Power off the equipment and disconnect power to all AC power cords before removing any access covers for cold-swappable areas.
When replacing hot-swappable components, allow approximately 30 seconds between removing the failed component and installing
the replacement. This time is needed to ensure that configuration data about the removed component is cleared from the system
registry. To prevent overheating due to an empty enclosure or bay, install a blank or leave the failed component slightly disengaged in
the enclosure until the replacement can be made. Drives must be replaced within 10 minutes, nodes within 30 minutes, and all other
parts within 6 minutes.
Before replacing a hot-swappable component, ensure that steps have been taken to prevent loss of data.
The following describes how to power the storage system on and off.
WARNING! Do not power off the system unless a service procedure requires the system to be powered off. Before you power off the
system to perform maintenance procedures, first verify with a system administrator. Powering off the system will result in loss of access
to the data from all attached hosts.
Powering Off
Before you begin, unmount volumes exported from the StoreServ from all hosts, or shut down all hosts that have volumes provisioned.
1. From the 3PAR Service Processor Menu, enter 4 for StoreServ Product Maintenance.
3. Follow the prompts to shut down a cluster. Do not shut down individual nodes.
2. Allow 2-3 minutes for the node to halt, then verify that the node Status LED is flashing green and the node hotplug LED is blue,
indicating that the node has been halted.
Powering On
Disengaging the PDU Pivot Brackets
To access the vertically mounted power distribution units (PDUs) or the servicing area, the PDUs can be lowered out of the rack.
Blank drive carriers are provided and must be used if all slots in the enclosure are not filled with drives.
To avoid potential damage to equipment and loss of data, handle drives carefully.
NOTE: The system supports the following storage drives: Large Form Factor (LFF) drives, Small Form Factor (SFF) drives, and Solid
State Drives (SSD). The replacement procedures are essentially the same for all storage drives.
Example drive-bay layout (drive type by slot):
NL 21 22 23
NL 17 18 19
NL 13 14 15
NL 9 10 11
SSD 5 6 7
SSD 1 2 3
Node Rescue
Always perform the automatic node rescue procedures, unless otherwise instructed.
When performing automatic node-to-node rescue, there might be instances where a node is to be rescued by another node that has
been inserted but has not been detected. If this happens, issue the CLI command to start the node rescue manually.
Use the showtask -d command to monitor the progress of the node-rescue task.
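For example, a minimal sketch of monitoring a node-rescue task from the interactive CLI (the task ID is a placeholder taken from the task list):
cli% showtask
cli% showtask -d <task_ID>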
Only use this SP-to-Node rescue procedure if all nodes in the HP 3PAR system are down and need to be rebuilt from the HP 3PAR OS
image on the service processor.
To perform an SP-to-node rescue:
1. At the rear of the storage system, uncoil the red crossover Ethernet cable connected to the SP Service network connection and
connect this cross-over cable to the MGMT Ethernet port of the node that is being rescued.
2. Connect the maintenance laptop to the Physical SP using the serial connection and start an SPMAINT interface session.
3. On the SPMAINT interface home page, enter 3 for StoreServ Configuration Management, then enter 1 for Display StoreServ
information to perform the pre-rescue task of obtaining the following information:
HP 3PAR OS Level on the StoreServ system
StoreServ system network parameters, including netmask and gateway information. Then return to the main menu.
NOTE: Copy the network information on to a separate document for reference to complete the subsequent steps of configuring
the system network.
d. Enter 1 for Configure Node Rescue, and then select the system. At this point, you will be prompted for the node rescue
configuration information. Complete the following procedure:
[/dev/tpddev/vvb/0]
Enter y to specify the time zone. Continue to follow the time zone setup prompts.
6. Establish a serial connection to the node being rescued. If necessary, disconnect the serial cable from SP.
7. Connect a serial cable from the laptop to the service port on the node being rescued.
NOTE: Check the baud rate settings before establishing a connection. The baud rate of the node is 57600.
8. Connect the crossover cable from the MGMT Ethernet port on the node being rescued to the Service Ethernet port of the SP.
9. Turn on the power to the node to begin the boot process. Closely monitor the serial console output.
10. Monitor the console output until the node stops at the whack> prompt.
a. Enter
b. Enter boot rescue and press Enter.
c. Monitor the console output process. The node will continue to run POST until it stops and displays instructions for running
node-rescue (see output below). Enter y to continue. The SP installs the OS. This process takes approximately 10 to 15 minutes
(rescue and rebuild of disk = 5 minutes, reboot = 5-10 minutes). When complete, the node restarts and tries to become part
of the cluster. Continue until all nodes are rescued. After the last node boots, the cluster should form and the system should start.
12. Remove the crossover cable from the recently rescued node and connect it to the next node if additional nodes need rescuing.
NOTE: Reconnect the public network (Ethernet) cable to the recently rescued node.
15. From the node 3PAR Console Menu, select option 11 Finish SP-to-Node rescue procedure to set the network configuration for the
system. Follow the prompts to complete the network configuration.
NOTE:
The cluster must be active and the admin volume must be mounted before changing the network configuration.
If necessary, access STATs to obtain the network information or request it from the system administrator.
16. PressEnter .
17. Before deconfiguring the node rescue, disconnect the crossover cables and reconnect the public network cable.
c. From the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ, then choose the system.
19. Enter shownode.
20. Enter shutdownsys.
21. Reconnect the host and host cables if previously removed or shut down.
22. Enter checkhealth -detail.
Enter the exit command and select X to exit from the 3PAR Service Processor Menu and to log out of the session.
25. Disconnect the serial cable from the maintenance laptop and the red cross-over Ethernet cable from the node and coil and replace
the cable behind the SP. If applicable, reconnect the customer's network cable and any other cables that may have been
disconnected.
26. Enter checkhealth -detail. After any issues reported by checkhealth are resolved, instruct the customer to restart the hosts and
applications and resume the host IO.
Node verification
3. In the detail pane Health panel, check that the State is Normal (green). If the state is not normal, contact your authorized service
provider for further assistance.
4. On the SSMC main menu, select Storage Systems > Controller Nodes . Select the replaced controller node and verify that the State is
Normal (green).
3. Enter the showsys command.
4. Enter checkhealth -detail.
Node node:<nodeID> "Node is not online"Node node:<nodeID> "Power supply <psID> detailed
state is <status>Node node:<nodeID> "Power supply <psID> AC state is <acStatus>"Node node:
<nodeID> "Power supply <psID> DC state is <dcStatus>"Node node:<nodeID> "Power supply <psID>
battery is <batStatus>"Node node:<nodeID> "Node <nodeID> battery is <batStatus>"Node node:
<nodeID> "Power supply <psID> battery is expired"Node node:<nodeID> "Fan is <fanID> is
<status>"Node node:<nodeID> "Power supply <psID> fan module <fanID> is <status>"Node node:
<nodeID> "Fan module <fanID> is <status>Node node:<nodeID> "Detailed State <state>"
(degraded or failed)Node node:<nodeID> "PCI card in Slot:<slot> is empty, but is not empty
in Node:<pair-node>"Node node:<nodeID> "PCI card model in Slot:<slot> is not the same as
Node:<pair-node>"Node -- "There is at least one active servicenode operation in
progress"Node node:<nodeID> "Process <ps_name> has reached 90% of maximum size"Node node:
<nodeID> "Node has less than 100MB of free memory"Node node:<nodeID> "Environmental factor
<item_string> is <state>"Node node:<nodeID> "Flusher speed set incorrectly to: <flush_speed>
(should be 0)"Node node:<nodeID> "ioload is running"Node node:<nodeID> "VV <vvid> has
<stuck_num> outstanding <command> with a maximum wait time of <slptime>"Node node:<nodeID> "
<item> is not the same on all nodes"
(where <item> is: BIOS version, NEMOE version, Control memory, Data memory, CPU Speed, CPU Bus Speed, HP 3PAR OS
version,Package list)
Issue the shownode command and use the node number or identifier to identify which node is affected.
2 node numbering:
4 node numbering:
yes
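As a minimal sketch, the identification sequence described above can be run from the interactive CLI as follows:
cli% checkhealth -detail
cli% shownode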
Node verification
1. Enter shownode.
2. Reboot the replacement node to synchronize the software and hardware values.
NOTE: The reboot process may take approximately ten to fifteen minutes. The node becomes part of the cluster when the process
is complete.
3. Verify the node status LED is blinking green in sync with other node LEDs. The uniform blinking indicates the node has joined the
cluster.
4. Enter shownode.
NOTE: Depending on the serviced component, the node might go through Node Rescue, which can take up to 10 minutes.
NOTE: The LED status for the replaced node might indicate green and could take up to 3 minutes to change to green flashing.
5. Enter shownode -d.
---------------------------------------------Nodes---------------------------------------------
-----------------------------PCI Cards------------------------------
----------------------------CPUs----------------------------
--------------------------------------Physical Memory---------------------------------------
Node Slot SlotID -Name-- -Usage- ---Type--- --Manufacturer--- -Serial- -Latency-- Size(MB)
0 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 061A8323 CL5.0/9.0 32768
0 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 069DBAB1 CL5.0/9.0 32768
0 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0C2FE5 CL5.0/11.0 16384
0 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0C2FE4 CL5.0/11.0 16384
1 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 061773B9 CL5.0/9.0 32768
1 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 05FF0C20 CL5.0/9.0 32768
1 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0EAF22 CL5.0/11.0 16384
1 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0EAF48 CL5.0/11.0 16384
2 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 06B62512 CL5.0/9.0 32768
2 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 06B62501 CL5.0/9.0 32768
2 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0C3042 CL5.0/11.0 16384
2 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0C3034 CL5.0/11.0 16384
3 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 05FF0D2D CL5.0/9.0 32768
3 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 05FF0D49 CL5.0/9.0 32768
3 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0C5387 CL5.0/11.0 16384
3 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0C537F CL5.0/11.0 16384
--------------------------------Power Supplies---------------------------------
------------------------------MCU------------------------------
-----------Uptime-----------
6. Issue the checkhealth command.
7. Enter exit.
1. From the 3PAR Service Processor Menu, enter 7 for Interactive CLI for a StoreServ, and then select the system.
3. Enter the showsys command.
4. Enter checkhealth -detail.
Issue the shownode command and use the node number or identifier to identify which node is affected.
2 node numbering:
4 node numbering:
7. Type exit.
8. Select option Halt a StoreServ cluster/node, select the desired node, and confirm all prompts to halt the node.
9. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
a. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
b. Execute the locatesys command.
1. Enter shownode.
2. Enter shownode -d.
NOTE: Depending on the serviced component, the node might go through Node Rescue, which can take up to 10 minutes.
NOTE: The LED status for the replaced node might indicate green and could take up to 3 minutes to change to green flashing.
3. Enter shownode -d.
---------------------------------------------Nodes---------------------------------------------
-----------------------------PCI Cards------------------------------
----------------------------CPUs----------------------------
--------------------------------------Physical Memory---------------------------------------
Node Slot SlotID -Name-- -Usage- ---Type--- --Manufacturer--- -Serial- -Latency-- Size(MB)
0 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 061A8323 CL5.0/9.0 32768
0 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 069DBAB1 CL5.0/9.0 32768
0 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0C2FE5 CL5.0/11.0 16384
0 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0C2FE4 CL5.0/11.0 16384
1 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 061773B9 CL5.0/9.0 32768
1 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 05FF0C20 CL5.0/9.0 32768
1 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0EAF22 CL5.0/11.0 16384
1 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0EAF48 CL5.0/11.0 16384
2 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 06B62512 CL5.0/9.0 32768
2 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 06B62501 CL5.0/9.0 32768
2 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0C3042 CL5.0/11.0 16384
2 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0C3034 CL5.0/11.0 16384
3 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 05FF0D2D CL5.0/9.0 32768
3 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 05FF0D49 CL5.0/9.0 32768
3 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0C5387 CL5.0/11.0 16384
3 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0C537F CL5.0/11.0 16384
---------------------------------------------Internal Drives---------------------------------------------
--------------------------------Power Supplies---------------------------------
------------------------------MCU------------------------------
-----------Uptime-----------
4. Enter checkhealth -detail.
5. Enter exit.
Before you begin: Connect to the service processor and start an SPMAINT session.
1. From the 3PAR Service Processor Menu, enter 7 for Interactive CLI for a StoreServ, and then select the system.
3. Enter the showsys command.
4. Enter checkhealth -detail.
5. Enter
6. Confirm the memory and cache numbers look right. If not, issue the shownode -mem command.
Node Slot SlotID -Name-- -Usage- ---Type--- --Manufacturer--- -Serial- -Latency-- Size(MB)
0 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 061A8323 CL5.0/9.0 32768
0 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 069DBAB1 CL5.0/9.0 32768
0 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0C2FE5 CL5.0/11.0 16384
0 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0C2FE4 CL5.0/11.0 16384
1 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 061773B9 CL5.0/9.0 32768
1 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 05FF0C20 CL5.0/9.0 32768
1 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0EAF22 CL5.0/11.0 16384
1 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0EAF48 CL5.0/11.0 16384
2 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 06B62512 CL5.0/9.0 32768
2 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 06B62501 CL5.0/9.0 32768
2 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0C3042 CL5.0/11.0 16384
2 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0C3034 CL5.0/11.0 16384
3 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 05FF0D2D CL5.0/9.0 32768
3 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 05FF0D49 CL5.0/9.0 32768
3 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0C5387 CL5.0/11.0 16384
3 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0C537F CL5.0/11.0 16384
7. Issue the shownode -i command. The shownode -i command displays node inventory information. Scroll down to view physical memory information.
------------------------------Nodes-------------------------------
---------------------PCI Cards---------------------
--------------------------------------CPUs--------------------------------------
---------------------------------------------Internal Drives---------------------------------------------
-----------------------------------------Physical Memory------------------------------------------
Node Slot SlotID Name Type --Manufacturer--- ----PartNumber---- -Serial- -Rev- Size(MB)
0 CC_0.0 J18000 DIMM0.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 061A8323 0011 32768
0 CC_1.0 J19000 DIMM1.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 069DBAB1 0011 32768
0 DC_0.0 J14005 DIMM0.0 DDR3_SDRAM Micron Technology 36KDZS2G72PZ-1G6E1 0D0C2FE5 4531 16384
1 CC_0.0 J18000 DIMM0.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 061773B9 0011 32768
1 CC_1.0 J19000 DIMM1.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 05FF0C20 0011 32768
2 CC_0.0 J18000 DIMM0.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 06B62512 0011 32768
2 CC_1.0 J19000 DIMM1.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 06B62501 0011 32768
2 DC_0.0 J14005 DIMM0.0 DDR3_SDRAM Micron Technology 36KDZS2G72PZ-1G6E1 0D0C3042 4531 16384
3 CC_0.0 J18000 DIMM0.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 05FF0D2D 0011 32768
3 CC_1.0 J19000 DIMM1.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 05FF0D49 0011 32768
3 DC_0.0 J14005 DIMM0.0 DDR3_SDRAM Micron Technology 36KDZS2G72PZ-1G6E1 0D0C5387 4531 16384
------------------Power Supplies------------------
4 node numbering:
9. Enter exit.
10. Select option 4 StoreServ Product Maintenance and then select the desired system.
11. Select option Halt a StoreServ cluster/node, select the desired node, and confirm all prompts to halt the node.
12. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
a. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
b. Execute the locatesys command.
NOTE: All nodes in this system flash, except for the node with the failed component, which displays a solid blue LED.
1. After rebooting is complete, enter shownode to verify the node has joined the cluster.
2. Issue the shownode -i command. The shownode -i command displays node inventory information; scroll down to view physical memory information.
------------------------------Nodes-------------------------------
---------------------PCI Cards---------------------
--------------------------------------CPUs--------------------------------------
---------------------------------------------Internal Drives---------------------------------------------
-----------------------------------------Physical Memory------------------------------------------
Node Slot SlotID Name Type --Manufacturer--- ----PartNumber---- -Serial- -Rev- Size(MB)
0 CC_0.0 J18000 DIMM0.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 061A8323 0011 32768
0 CC_1.0 J19000 DIMM1.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 069DBAB1 0011 32768
0 DC_0.0 J14005 DIMM0.0 DDR3_SDRAM Micron Technology 36KDZS2G72PZ-1G6E1 0D0C2FE5 4531 16384
1 CC_0.0 J18000 DIMM0.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 061773B9 0011 32768
1 CC_1.0 J19000 DIMM1.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 05FF0C20 0011 32768
2 CC_0.0 J18000 DIMM0.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 06B62512 0011 32768
2 CC_1.0 J19000 DIMM1.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 06B62501 0011 32768
2 DC_0.0 J14005 DIMM0.0 DDR3_SDRAM Micron Technology 36KDZS2G72PZ-1G6E1 0D0C3042 4531 16384
3 CC_0.0 J18000 DIMM0.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 05FF0D2D 0011 32768
3 CC_1.0 J19000 DIMM1.0 DDR3_SDRAM Smart Modular SH4097RV310493SDV 05FF0D49 0011 32768
3 DC_0.0 J14005 DIMM0.0 DDR3_SDRAM Micron Technology 36KDZS2G72PZ-1G6E1 0D0C5387 4531 16384
------------------Power Supplies------------------
3. Enter shownode -d.
---------------------------------------------Nodes---------------------------------------------
-----------------------------PCI Cards------------------------------
----------------------------CPUs----------------------------
--------------------------------------Physical Memory---------------------------------------
Node Slot SlotID -Name-- -Usage- ---Type--- --Manufacturer--- -Serial- -Latency-- Size(MB)
0 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 061A8323 CL5.0/9.0 32768
0 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0C2FE5 CL5.0/11.0 16384
0 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0C2FE4 CL5.0/11.0 16384
1 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 061773B9 CL5.0/9.0 32768
1 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 05FF0C20 CL5.0/9.0 32768
1 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0EAF22 CL5.0/11.0 16384
1 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0EAF48 CL5.0/11.0 16384
2 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 06B62512 CL5.0/9.0 32768
2 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 06B62501 CL5.0/9.0 32768
2 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0C3042 CL5.0/11.0 16384
2 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0C3034 CL5.0/11.0 16384
3 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 05FF0D2D CL5.0/9.0 32768
3 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 05FF0D49 CL5.0/9.0 32768
3 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0C5387 CL5.0/11.0 16384
3 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0C537F CL5.0/11.0 16384
---------------------------------------------Internal Drives---------------------------------------------
--------------------------------Power Supplies---------------------------------
------------------------------MCU------------------------------
-----------Uptime-----------
4. Enter checkhealth -detail.
If that does not solve the issue, then reseat the DIMM.
If the issue is still not resolved, call technical support for assistance.
5. Enter exit.
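For reference, a minimal sketch of the DIMM verification sequence described above, run from the interactive CLI:
cli% shownode -d
cli% shownode -mem
cli% checkhealth -detail
cli% exit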
1. From the 3PAR Service Processor Menu, enter 7 for Interactive CLI for a StoreServ, and then select the system.
3. Enter the showsys command.
4. Enter checkhealth -detail.
Node node:<nodeID> "PCI card in Slot:<slot> is empty, but is not empty in Node:<pair-
node>"Node node:<nodeID> "PCI card model in Slot:<slot> is not the same as Node:<pair-node>"
shownode -i
and use the node number or identifier to identify which node is affected. Also note which slot is listed in the alert or the
HPE 3PAR StoreServ 8000 Storage - Parts Support Guide 51
shownode -i
result.
2 node numbering:
4 node numbering:
7. Enter locatenode.
8. Identify the node with the failed PCIe adapter by locating the blue service LED.
9. Confirm all of the cables to the PCI adapter are clearly labeled with their location for later reconnection.
10. Enter yes.
1. Enter shownode.
NOTE: If the node that was halted also contained the active Ethernet session, you will need to exit and restart the CLI session.
NOTE:
Depending on the serviced component, the node might go through Node Rescue, which can take up to 10 minutes.
The LED status for the replaced node might indicate green and could take up to 3 minutes to change to green flashing.
2. Enter showport.
NOTE: A port must be connected and correctly communicating to be ready. Verify that the port (State column) is ready.
3. Enter showport -i.
4. Enter checkhealth -detail.
5. Enter exit.
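A minimal sketch of the adapter and port verification sequence described above:
cli% shownode
cli% showport
cli% showport -i
cli% checkhealth -detail
cli% exit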
Before you begin: Connect to the service processor and start an SPMAINT session.
1. From the 3PAR Service Processor Menu, enter 7 for Interactive CLI for a StoreServ, and then select the system.
3. Enter the showsys command.
4. Enter checkhealth -detail.
5. Enter shownode.
6. Enter exit.
8. Select option Halt a StoreServ cluster/node, select the desired node, and confirm all prompts to halt the node.
9. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
a. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
b. Execute the locatesys command.
NOTE: All nodes in this system flash, except for the node with the failed component, which displays a solid blue LED.
2. Enter shownode.
NOTE: If the node that was halted also contained the active Ethernet session, you will need to exit and restart the CLI session.
NOTE:
Depending on the serviced component, the node might go through Node Rescue, which can take up to 10 minutes.
The LED status for the replaced node might indicate green and could take up to 3 minutes to change to green flashing.
3. Enter shownode -d.
---------------------------------------------Nodes---------------------------------------------
-----------------------------PCI Cards------------------------------
----------------------------CPUs----------------------------
--------------------------------------Physical Memory---------------------------------------
Node Slot SlotID -Name-- -Usage- ---Type--- --Manufacturer--- -Serial- -Latency-- Size(MB)
0 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 061A8323 CL5.0/9.0 32768
0 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 069DBAB1 CL5.0/9.0 32768
0 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0C2FE5 CL5.0/11.0 16384
0 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0C2FE4 CL5.0/11.0 16384
1 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 061773B9 CL5.0/9.0 32768
1 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 05FF0C20 CL5.0/9.0 32768
1 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0EAF22 CL5.0/11.0 16384
1 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0EAF48 CL5.0/11.0 16384
2 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 06B62512 CL5.0/9.0 32768
2 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 06B62501 CL5.0/9.0 32768
2 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0C3042 CL5.0/11.0 16384
2 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0C3034 CL5.0/11.0 16384
3 CC_0.0 J18000 DIMM0.0 Control DDR3_SDRAM Smart Modular 05FF0D2D CL5.0/9.0 32768
3 CC_1.0 J19000 DIMM1.0 Control DDR3_SDRAM Smart Modular 05FF0D49 CL5.0/9.0 32768
3 DC_0.0 J14005 DIMM0.0 Data DDR3_SDRAM Micron Technology 0D0C5387 CL5.0/11.0 16384
3 DC_1.0 J16005 DIMM1.0 Data DDR3_SDRAM Micron Technology 0D0C537F CL5.0/11.0 16384
---------------------------------------------Internal Drives---------------------------------------------
--------------------------------Power Supplies---------------------------------
------------------------------MCU------------------------------
-----------Uptime-----------
4. Enter checkhealth -detail.
5. Enter exit.
Before you begin: Connect to the service processor and start an SPMAINT session.
1. From the 3PAR Service Processor Menu, enter 7 for Interactive CLI for a StoreServ, and then select the system.
3. Enter the showsys command.
4. Enter checkhealth -detail.
Issue the shownode -s and shownode -ps commands.
cli% shownode -s
0 Degraded PS 1 Failed
1 Degraded PS 0 Failed
0 0 FFFFFFFF OK OK OK OK OK 100
1 1 FFFFFFFF OK OK OK OK OK 100
7. Use the nodeID in the alert or the node number or identifier from shownode.
4 node numbering:
1. Enter shownode -ps.
2. Enter showbattery.
3. Enter checkhealth -detail.
4. Enter exit.
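For reference, a minimal sketch of the node PCM verification sequence described above:
cli% shownode -ps
cli% showbattery
cli% checkhealth -detail
cli% exit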
Before you begin: Connect to the service processor and start an SPMAINT session.
1. From the 3PAR Service Processor Menu, enter 7 for Interactive CLI for a StoreServ, and then select the system.
3. Enter the showsys command.
4. Enter checkhealth -detail.
shownode
cli% shownode -ps
2 0 FFFFFFFF OK OK OK OK OK 100
2 1 FFFFFFFF OK OK OK OK OK 100
3 0 FFFFFFFF OK OK OK OK OK 100
Issue the showbattery command to verify that the battery has failed. Verify that at least one PCM battery in each node enclosure is functional:
8. Use the nodeID in the alert or the node number or identifier from shownode.
4 node numbering:
9. Also note which PCM is affected by using the power supply ID number.
2 node system PCM numbering:
NOTE: Because each battery is a backup for both nodes, node 0 and 1 both report a problem with a single battery. The quantity
appears as 2 in output because two nodes are reporting the problem. Battery 0 for node 0 is in the left PCM, and battery 0 for node
1 is in the right side PCM (when looking at the node enclosure from the rear).
1. Use the showbattery command to confirm the battery is functional and the serial ID has changed.
2. Enter checkhealth -detail.
3. Enter exit.
SFP CLI
SFP identification
Before you begin: Connect to the service processor and start an SPMAINT session.
1. From the 3PAR Service Processor Menu, enter 7 for Interactive CLI for a StoreServ, and then select the system.
3. Enter the showsys command.
4. Enter checkhealth -detail.
Issue the showport command to view the port State. Typically, the State is listed as loss_sync, the Mode as initiator, and the Connected
Device Type as free.
7. Issue the showport -sfp command:
0:1:1 OK 0.0 -- -- -- No
1:1:1 OK 0.0 -- -- -- No
2:1:1 OK 0.0 -- -- -- No
3:1:1 OK 0.0 -- -- -- No
8. Use the N:S:P column to identify which node, slot and port has the problem.
2 node numbering:
4 node numbering:
SFP verification
1. Issue the showport command to verify that the ports are in good condition and the State is listed as ready. The State should now be
listed as ready, the Mode as target, and the Type as host. Also verify that the port mode for the serviced node did not get changed
after replacement.
2. Issue the showport -sfp command to verify that the replaced SFP is connected and the State is listed as OK. Data should now be populated.
0:1:1 OK 0.0 -- -- -- No
1:1:1 OK 0.0 -- -- -- No
2:1:1 OK 0.0 -- -- -- No
3:1:1 OK 0.0 -- -- -- No
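For reference, a minimal sketch of the SFP verification sequence described above:
cli% showport
cli% showport -sfp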
Drives identification
1. In the SSMC main menu, select Storage Systems > Systems . A list of storage systems is displayed in the list pane.
4. In the Physical Drives panel, click the total physical drives hyperlink. The Physical Drives screen is displayed.
5. In the Status filter, select Critical and Warning. The list pane shows physical drives that have a critical or warning status.
6. In the list pane, select a physical drive to display its properties in the detail pane.
WARNING: The state may indicate Degraded, which indicates that the physical drive is not ready for replacement. It may take
several hours for the data to be vacated; do not remove the physical drive until the status is Critical. Removing a degraded physical
drive before all the data is vacated will cause loss of data.
CAUTION: If more than one physical drive is critical or degraded, contact your authorized service provider to determine if the repair
can be done in a safe manner, preventing down time or data loss.
7. Locate the failed drive using the SSMC. To avoid damaging the hardware or losing data, always confirm the drive by its amber fault
LED before removing it.
a. In the detail pane General panel, click the Enclosure name hyperlink. The Drive Enclosures screen is displayed with the drive
enclosure pre-selected.
b. Click the drive enclosure Start Locate icon. Follow the instructions on the dialog that opens.
Drives verification
1. In the SSMC main menu, select Storage Systems > Systems . A list of storage systems is displayed in the list pane.
4. In the Physical Drives panel, click the total physical drives hyperlink. The Physical Drives screen is displayed.
5. In the list pane, select a physical drive to display its properties in the detail pane.
NOTE: Until data has been restored, the Status might not yet be updated to Normal (green).
checkhealth
Drives identification
2. From the 3PAR Service Processor Menu, enter 7 for Interactive CLI for a StoreServ, and then select the system.
4. Enter the showsys command.
5. Enter
6. Enter showpd, followed by showpd -i, to help identify the drive at risk and in need of replacement. Make note of the cage ID, the first
digit in the cage position (CagePos) column where the pd is located.
NOTE: If a storage drive fails, the system will automatically run servicemag in the background and illuminate the blue drive LED,
indicating a fault and which drive to replace. A storage drive may need to be replaced for various reasons and may not show errors
in the displayed output. If the replacement is a proactive replacement prior to a failure, enter servicemag start -pdid <pdID>
to initiate the temporary removal of data from the drive, with storage of the removed data on spare chunklets.
7. Enter servicemag status to monitor progress.
NOTE: When an SSD is identified as degraded, you must manually initiate the replacement process. Issue the servicemag start -pdid
<pdID> command to move the chunklets. When the SSD is replaced, the system automatically initiates servicemag resume.
NOTE: This process might take up to 10 minutes; repeat the command to refresh the status. There are four responses. Response 1 is
expected when the drive is ready to be replaced.
Response 1: servicemag has successfully completed. When Succeeded displays as the last line in the output, it is safe to replace the
drive. This response is expected.
Response 2: servicemag has not started. Data is being reconstructed on spares; servicemag does not start until this process is
complete. Retry the command at a later time.
Response 3: servicemag
8. Locate the failed drive. To avoid damaging the hardware or losing data, always confirm the drive by its amber fault LED before
removing it.
a. Execute the showpd -failed command.
b. Execute the locatecage -t XX cageY command, where XX is the appropriate number of seconds to allow service personnel to view
the LED status of the drive enclosure, and cageY is the name of the drive cage.
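A minimal sketch of the drive identification and preparation sequence described above (<pdID>, XX, and cageY are placeholders as defined above):
cli% showpd -failed
cli% servicemag start -pdid <pdID>
cli% servicemag status
cli% locatecage -t XX cageY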
Drives verification
1. Verify that the replacement drive has been successfully replaced using servicemag status.
servicemag has completed: When No servicemag operations logged displays as the last line in the output, the drive has successfully
been replaced.
Response 3: servicemag has failed: There can be several causes for this failure; contact your authorized service provider for assistance.
2. Enter checkhealth -detail.
3. Enter exit.
2. From the 3PAR Service Processor Menu, enter 7 for Interactive CLI for a StoreServ, and then select the system.
4. Enter the showsys command.
5. Enter checkhealth -detail.
7. Label the drives within the drive enclosure with their slot location. The drives must be populated in the same order in the
replacement cage.
8. Power down StoreServ using shutdownsys halt. Wait for the Hot Plug LED on the nodes to turn blue in approximately 10 to 15
minutes.
1. Verify that the node LEDs are in sync and blinking green.
2. Identify the cage ID of the failed drive enclosure that was removed. Issue the showcage command.
3. Remove the entry of the removed cage from the stored system information. Issue the command, where x is the cage number for the
failed drive enclosure.
4. Verify that the new cage and drives are online and ready by using showcage and showpd:
cli% showcage
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model FormFactor
cli% showpd
----Size(MB)----- ----Ports----
5. Verify that there are no cabling errors. Checkhealth provides guidance for fixing cabling problems.
7. Enter checkhealth -detail.
8. Enter exit.
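For reference, a minimal sketch of the drive enclosure verification sequence described above:
cli% showcage
cli% showpd
cli% checkhealth -detail
cli% exit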
2. From the 3PAR Service Processor Menu, enter 7 for Interactive CLI for a StoreServ, and then select the system.
4. Use the showsys command.
5. Enter checkhealth -detail.
6. Enter showcage.
cli% showcage
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model FormFactor
7. Enter showcage -d <cageID> to display information about the drive cage you are working on.
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model FormFactor
Position: ---
Master_CPU Yes No
8. Enter locatecage <cageID> to illuminate the blue service LED of the drive enclosure with the failed I/O module.
9. Locate the I/O module from the showcage output after you find the cage with the blue service LED. The LED must be blue indicating
the I/O module is safe to remove.
NOTE: I/O module 0 is at the bottom and I/O module 1 is on top.
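A minimal sketch of the I/O module identification sequence described above (<cageID> is the cage name reported by showcage):
cli% showcage
cli% showcage -d <cageID>
cli% locatecage <cageID>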
1. Enter the locatecage -t 0 command.
2. Enter showcage.
cli% showcage
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model FormFactor
3. Enter showcage -d <cageID>.
Enter upgradecage <cageID>.
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model FormFactor
Position: ---
Master_CPU Yes No
4. Enter checkhealth -detail.
5. Enter exit.
Before you begin: Connect to the service processor and start an SPMAINT session.
1. From the 3PAR Service Processor Menu, enter 7 for Interactive CLI for a StoreServ, and then select the system.
3. Enter the showsys command.
4. Enter checkhealth -detail.
5. Normally, a failed PCM displays an invalid status; if that is the case, close the current window and proceed to the remove-the-PCM
step. If no invalid status is displayed, use one of the following procedures:
Issue the showcage -d <cageID> command, where <cageID> is the name of the cage indicated in the notification. One or more of
ACState, DCState, and PSState could be in a Failed state.
cli% showcage
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model FormFactor
The output above is repeated for each cage; search for the failure.
6. Enter locatecage <cageID> to illuminate the blue service LED of the drive enclosure with the failed component. The LED must be blue,
indicating the drive PCM is safe to remove.
1. Enter showcage -d <cageID>.
2. Enter checkhealth -detail.
3. Enter exit.
Overview
Shown here is the front of the 3PAR 8440 storage system, part of the 3PAR 8000 storage series. This is a 4-node system with two SFF
(Small Form Factor) drive enclosures. Each SFF drive enclosure holds 24 SFF drives. Below are two LFF (Large Form Factor) drive
enclosures attached to the 4-node system. Each of the LFF drive enclosures holds 24 LFF drives. Shown here is the rear of the 4-node
3PAR 8440. We see two different node pairs. The nodes in a given pair are flipped in orientation with one another. On each node pair,
we see interconnect cabling, a management port, host networking (either Fibre Channel or converged networking), a gray node removal
rod, mini-SAS drive enclosure cabling slots, additional host connectivity through onboard FC ports, and remote copy and other
management ports. On each side of the node are 764-watt PCMs (Power Cooling Modules). Each node PCM also contains a battery for
the node. On the LFF drive enclosure, we see the I/O modules that connect to the nodes, and the drive enclosure's 580-watt PCMs.
Interconnect cabling
Interconnect link cables are used with 4-node systems to interconnect the nodes. It is no longer necessary to shut down the StoreServ
to replace the node interconnect link cables. Each interconnect cable is labeled with an 'A' side and a 'C' side. The 'A' side of the cables
is inserted into either node 0 or node 1. The 'C' side of the cables is inserted into either node 2 or node 3. Once you have completed
cabling, each node will be able to communicate with the other nodes in the cluster.
Mini-SAS cabling
The drive enclosure is cabled with either a passive copper or an active optical cable. The passive copper cable is used in a single-rack
enclosure, and the active optical cable is used in a multi-rack environment. The cable is connected first to the appropriate DP port on
the node, and then to the appropriate port on the I/O module in the drive enclosure. Refer to the service guide to determine the
appropriate ports for cabling.
Installing replacement components
The service replacement steps for all parts in this system follow the same basic procedure:
Two parts in the system are designated Customer Self Repair (CSR): the 8200 node, and the drives. Customers will use the SSMC to
identify the failed component.
Verify that cables are labeled before shutting down the node.
1. Allow 2-3 minutes for the Node to halt, then verify that the Node Status LED is flashing green and the Node UID LED is solid blue,
not flashing, indicating that the Node has been halted and is safe to remove.
NOTE: The Node Fault LED may be amber, depending on the nature of the node failure.
CAUTION: The system will not fail if the node is properly halted before removal but data loss may occur if the replacement
procedure is not followed correctly.
2. At the rear of the rack, remove cables from the failed node.
NOTE: Confirm all of the cables are clearly labeled for later reconnection.
3. Pull the gray node rod to the extracted position, out of the component.
4. When the node is halfway out of the enclosure, use both hands to slide the node out completely.
5. While carefully supporting the controller node from underneath, remove the node and place on an ESD safe work surface.
Node replacement
1. Remove any SFPs from the failed node by opening the retaining clip and sliding the SFP out of the slot.
2. Install the SFPs into the replacement node by carefully sliding the SFP into the port until fully seated, then closing the wire handle
to secure it in place.
3. On the replacement node, ensure the gray node rod is in the extracted position, pulled out of the component.
4. Grasp each side of the replacement node and gently slide it into the enclosure. Ensure the node is aligned with the grooves in the
slot.
CAUTION: Ensure the node is correctly oriented; alternate nodes are rotated 180°.
5. Keep sliding the node in until it halts against the insertion mechanism.
7. Push the extended gray node rod into the node to ensure the node is fully seated.
CAUTION: If the blue LED is flashing, which indicates that the node is not properly seated, pull out the grey node rod and push it
back in to ensure that the node is fully seated.
NOTE: Once inserted, the node should power up and go through the node rescue procedure before joining the cluster. This might
take up to 10 minutes.
8. Verify the node LED is flashing green in synchronization with other nodes, indicating that the node has joined the cluster.
Verify that cables are labeled before shutting down the node and removing the cover.
Preparation
1. Allow up to ten minutes for the node to halt, then verify that the Node Status LED is flashing green and the Node UID LED is solid
blue, not flashing, indicating that the node has been halted and is safe to remove.
NOTE: The Node Fault LED might be amber, depending on the nature of the node failure.
2. Confirm all of the cables to the node and PCI adapters are clearly labeled with their location for later reconnection.
3. Remove the cables connected directly to the controller node and to the PCI adapters.
4. Pull the gray node rod to the extracted position, out of the component.
5. When the node is halfway out of the enclosure, use both hands to slide the node out completely.
6. While carefully supporting the controller node from underneath, remove the node and place on an ESD safe work surface.
Node replacement
1. On the replacement node, ensure the gray node rod is in the extracted position, pulled out of the component.
2. Remove the node cover from both the failed node and the replacement node, by loosening two thumbscrews and lifting the cover to
remove.
3. Transfer both SFPs from the onboard adapter ports on the failed node to the onboard adapter ports on the replacement node:
a. Open the retaining clip and carefully slide the SFP out of the slot on the failed node.
b. Carefully slide the SFP into the adapter port on the replacement node until fully seated, and then close the retaining clip to
secure it in place.
4. Carefully transfer the following from the failed controller node to the exact same position in the replacement controller node:
- All of the control cache DIMMs
- All of the data cache DIMMs
- The node boot drive
5. Remove the SFPs from the PCIe adapter in the failed node by opening the retaining clip and sliding it out of the slot.
6. Remove the PCIe adapter assembly, including the riser card, from the failed node and transfer to the replacement node.
7. Install the SFPs, removed from the PCIe adapter in the failed node, into the PCIe adapter in the replacement node and close the
retaining clip to secure it.
8. Replace the node cover on both the failed node and the replacement node by aligning the node rod and guide pins with the cutouts and lowering into place, then tightening two thumbscrews.
9. Push in the gray node rod on the failed node to ready for packaging.
10. On the replacement node, ensure that the gray node rod is in the extracted position, pulled out of the component.
11. Grasp each side of the replacement node and gently slide it into the enclosure. Ensure the node is aligned with the grooves in the
slot.
CAUTION: Ensure the node is correctly oriented; alternate nodes are rotated 180°.
12. Keep sliding the node in until it halts against the insertion mechanism.
14. Push the extended gray node rod into the node to ensure the node is fully seated.
CAUTION: If the blue LED is flashing, which indicates that the node is not properly seated, pull out the gray node rod and push it
back in to ensure that the node is fully seated.
NOTE: Once inserted, the node should power up and go through the node rescue procedure before joining the cluster. This might
take up to 10 minutes.
15. Verify the node LED is flashing green in synchronization with other nodes, indicating that the node has joined the cluster.
Node verification
Follow the CLI steps to verify the installation of the replacement node.
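As an illustrative sketch of that verification (command options and output vary by HPE 3PAR OS version; the component argument to checkhealth is shown as an example):
   cli% shownode            confirms the replacement node is in an OK state and has rejoined the cluster
   cli% shownode -d         detailed node display, including DIMMs, adapters, and the boot drive
   cli% checkhealth node    reports any alerts still raised against the node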
Verify that cables are labeled before shutting down the node and removing the cover.
CAUTION: Alloy gray-colored latches on components such as the node mean the component is warm-swappable. HP recommends
shutting down the node (with the enclosure power remaining on) before removing this component.
NOTE: The clock inside the node uses a 3-V lithium coin battery. The lithium coin battery might explode if it is incorrectly installed in
the node. Replace the clock battery only with a battery supplied by HP. Do not use non-HP supplied batteries. Dispose of used batteries
according to the manufacturer's instructions.
NOTE: If the node with the failed component is already halted, it is not necessary to shut down the node, because it is not part of the
cluster.
Node clock battery identification
Follow the CLI steps to identify the failed node clock battery.
Node clock battery removal
1. Allow 2-3 minutes for the node to halt, then verify that the Node Status LED is flashing green and the Node UID LED is solid blue,
not flashing, indicating that the node has been halted and is safe to remove.
CAUTION: The system will not fail if the node is properly halted before removal, but data loss may occur if the replacement
procedure is not followed correctly.
NOTE: The Node Fault LED may be amber, depending on the nature of the node failure.
2. Ensure that all cables on the failed node are labeled to facilitate reconnecting later.
3. At the rear of the rack, remove all cables from the node with the failed component.
4. Pull the node rod to remove the node from the enclosure.
7. Remove the node cover by loosening two thumbscrews and lifting the cover to remove.
8. Remove the Clock Battery by pulling aside the retainer clip and pulling the battery up from the battery holder.
CAUTION: Do not touch internal node components when removing or inserting the battery.
Node clock battery replacement
1. With the positive side facing the retaining clip, insert the replacement 3-V lithium coin battery into the Clock Battery slot.
2. Replace the node cover by aligning the node rod and guide pins with the cutouts and lowering into place, then tightening two
thumbscrews.
3. Ensure that the gray node rod is in the extracted position, pulled out of the component.
4. Grasp each side of the node and gently slide it into the enclosure. Ensure the node is aligned with the grooves in the slot.
CAUTION: Ensure the node is correctly oriented, alternate nodes are rotated by 180°.
5. Keep sliding the node in until it halts against the insertion mechanism.
7. Push the extended gray node rod into the node to ensure the node is correctly installed.
CAUTION: A flashing blue LED indicates that the node is not properly seated. Pull out the gray node rod and push back in to ensure
that the node is fully seated.
NOTE: Once inserted, the node should power up and rejoin the cluster; this may take up to 5 minutes.
8. Verify that the node LED is blinking green in synchronization with other nodes, indicating that the node has joined the cluster.
Node DIMM
CAUTION: Verify that cables are labeled before shutting down the node and removing the cover.
CAUTION: Alloy gray-colored latches on components such as the node mean the component is warm-swappable. HP recommends
shutting down the node (with the enclosure power remaining on) before removing this component.
NOTE: If the node with the failed component is already halted, it is not necessary to shut down (halt) the node, because it is not part of
the cluster. The failed DIMM should be identified from the failure notification.
NOTE: Even when a DIMM is reported as failed, it still displays configuration information.
Node DIMM identification
Follow the CLI steps to identify the failed DIMM.
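For illustration only, the failed DIMM is normally named in the failure alert and can be cross-checked with commands such as the following (the -mem option is cited as an assumption; confirm against your CLI reference):
   cli% shownode -mem       lists the DIMM slots in each node with size, manufacturer, and serial number
   cli% checkhealth node    reports memory alerts that identify the slot to be replaced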
Node DIMM removal
1. Allow 2-3 minutes for the node to halt, then verify that the Node Status LED is flashing green and the Node UID LED is solid blue,
not flashing, indicating that the node has been halted and is safe to remove.
CAUTION: The system will not fail if the node is properly halted before removal, but data loss may occur if the replacement
procedure is not followed correctly.
NOTE: The Node Fault LED may be amber, depending on the nature of the node failure.
3. At the rear of the rack, remove all cables from the node with the failed component.
4. Pull the node rod to remove the node from the enclosure.
5. When the node is halfway out of the enclosure, use both hands to slide the node completely out.
7. Remove the node cover by loosening two thumbscrews and lifting the cover to remove.
8. Physically identify the failed DIMM in the node. The Control Cache (CC) and Data Cache (DC) DIMMs can be identified by locating
the appropriate silk-screening on the board.
9. With your thumb or finger, press outward on the two tabs on the sides of the DIMM to remove the failed DIMM and place on the ESD
safe mat.
Node DIMM replacement
1. Align the key and insert the DIMM by pushing downward on the edge of the DIMM until the tabs on both sides snap into place.
2. Replace the node cover by aligning the node rod and guide pins with the cutouts and lowering into place, then tightening two
thumbscrews.
3. Ensure that the gray node rod is in the extracted position, pulled out of the component.
4. Grasp each side of the node and gently slide it into the enclosure. Ensure the node is aligned with the grooves in the slot.
CAUTION: Ensure the node is correctly oriented, alternate nodes are rotated by 180°.
5. Keep sliding the node in until it halts against the insertion mechanism.
7. Push the extended gray node rod into the node to ensure the node is correctly installed.
CAUTION: A flashing blue LED indicates that the node is not properly seated. Pull out the gray node rod and push back in to ensure
that the node is fully seated.
NOTE: Once inserted, the node should power up and rejoin the cluster; this may take up to 5 minutes.
8. Verify that the node LED is blinking green in synchronization with other nodes, indicating that the node has joined the cluster.
Verify that cables are labeled before shutting down the node and removing the cover.
Each node in a node pair (a node pair is composed of the two controller nodes in a single 2U enclosure) must have the same number and
type of adapters.
When installing or upgrading expansion cards, follow these guidelines:
CAUTION: Alloy gray-colored latches on components such as the node mean the component is warm-swappable. HP recommends shutting down the node (with the enclosure power remaining on) before removing this component.
1. Allow up to ten minutes for the node to halt, then verify that the Node Status LED is flashing green and the Node UID LED is solid
blue, not flashing, indicating that the node has been halted and is safe to remove.
NOTE: The Node Fault LED might be amber, depending on the nature of the node failure.
2. Ensure that all cables on the failed node are labeled to facilitate reconnecting later.
3. At the rear of the rack, remove all cables from the node with the failed component.
4. Pull the node rod to remove the node from the enclosure.
5. When the node is halfway out of the enclosure, use both hands to slide the node completely out.
7. Remove the node cover by loosening two thumbscrews and lifting the cover to remove.
a. Press down on the blue tabs to release the assembly from the node.
NOTE: If the PCIe adapter is half-height, it may not be secured by both tabs. Some cards may have an extender to secure it under
the tabs, as shown here.
b. Grasp and pull the assembly up and away from the node for removal.
a. Pull the failed riser card to the side to remove it from the PCIe adapter.
b. Insert the replacement riser card into the slot on the PCIe adapter.
1. If a failed PCIe adapter is being replaced, remove the riser card from the failed PCIe adapter, and install onto the replacement PCIe
adapter.
2. Align the PCIe Adapter assembly with its slot in the node chassis.
6. Replace the node cover by aligning the node rod and guide pins with the cutouts and lowering into place, then tightening two
thumbscrews.
7. Ensure that the gray node rod is in the extracted position, pulled out of the component.
8. Grasp each side of the node and gently slide it into the enclosure. Ensure the node is aligned with the grooves in the slot.
CAUTION: Ensure the node is correctly oriented, alternate nodes are rotated by 180°.
9. Keep sliding the node in until it halts against the insertion mechanism.
11. Push the extended gray node rod into the node to ensure the node is correctly installed.
NOTE: Once inserted, the node should power up and go through the node rescue procedure before joining the cluster. This may take
up to 10 minutes.
12. Verify that the node LED is blinking green in synchronization with other nodes, indicating that the node has joined the cluster.
Verify that cables are labeled before shutting down the node and removing the cover.
Each node in a node pair (a node pair is composed of the two controller nodes in a single 2U enclosure) must have the same number and
type of adapters.
When installing or upgrading expansion cards, follow these guidelines:
CAUTION: Alloy gray-colored latches on components such as the node mean the component is warm-swappable. HP recommends
shutting down the node (with the enclosure power remaining on) before removing this component.
NOTE: If the node with the failed component is already halted, it is not necessary to shut down (halt) the node, because it is not part of
the cluster.
Node PCIe adapter identification
Follow the CLI steps to identify the failed PCIe adapter or PCIe riser card.
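A hedged sketch of that identification step (the -pci option is cited as an assumption; confirm against your CLI reference):
   cli% shownode -pci       lists the PCIe adapter installed in each node slot
   cli% showport            shows the state of the adapter ports, which helps confirm the failed adapter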
4-Port Fibre Combo Adapter removal
CAUTION: The system does not fail if the node is properly halted before removal, but data loss might occur if the replacement
procedure is not followed correctly.
1. Allow up to ten minutes for the node to halt, then verify that the Node Status LED is flashing green and the Node UID LED is solid
blue, not flashing, indicating that the node has been halted and is safe to remove.
NOTE: The Node Fault LED might be amber, depending on the nature of the node failure.
2. Ensure that all cables on the failed node are labeled to facilitate reconnecting later.
3. At the rear of the rack, remove all cables from the node with the failed component.
4. Pull the node rod to remove the node from the enclosure.
5. When the node is halfway out of the enclosure, use both hands to slide the node completely out.
7. Remove the node cover by loosening two thumbscrews and lifting the cover to remove.
a. Press down on the blue tabs to release the assembly from the node.
NOTE: If the PCIe adapter is half-height, it may not be secured by both tabs. Some cards may have an extender to secure it under
the tabs, as shown here.
b. Grasp and pull the assembly up and away from the node for removal.
4-Port Fibre Combo Adapter replacement
1. If a failed PCIe adapter is being replaced, remove the riser card from the failed PCIe adapter, and install onto the replacement PCIe
adapter.
2. Align the PCIe Adapter assembly with its slot in the node chassis.
6. Replace the node cover by aligning the node rod and guide pins with the cutouts and lowering into place, then tightening two
thumbscrews.
7. Ensure that the gray node rod is in the extracted position, pulled out of the component.
8. Grasp each side of the node and gently slide it into the enclosure. Ensure the node is aligned with the grooves in the slot.
CAUTION: Ensure the node is correctly oriented, alternate nodes are rotated by 180°.
9. Keep sliding the node in until it halts against the insertion mechanism.
11. Push the extended gray node rod into the node to ensure the node is correctly installed.
CAUTION: A flashing blue LED indicates that the node is not properly seated. Pull out the gray node rod and push back in to ensure
that the node is fully seated.
NOTE: Once inserted, the node should power up and go through the node rescue procedure before joining the cluster. This may take
up to 10 minutes.
12. Verify that the node LED is blinking green in synchronization with other nodes, indicating that the node has joined the cluster.
4-Port Fibre Combo Adapter verification
Follow the CLI steps to verify the installation of the replacement PCIe adapter or PCIe riser card.
Verify that cables are labeled before shutting down the node and removing the cover.
2. Ensure that all cables on the failed node are labeled to facilitate reconnecting later.
3. At the rear of the rack, remove all cables from the node with the failed component.
4. Pull the node rod to remove the node from the enclosure.
5. When the node is halfway out of the enclosure, use both hands to slide the node completely out.
7. Remove the node cover by loosening two thumbscrews and lifting the cover to remove.
a. Press down on the blue tabs to release the assembly from the node.
NOTE: If the PCIe adapter is half-height, it may not be secured by both tabs. Some cards may have an extender to secure it under
the tabs, as shown here.
b. Grasp and pull the assembly up and away from the node for removal.
2. Align the PCIe Adapter assembly with its slot in the node chassis.
6. Replace the node cover by aligning the node rod and guide pins with the cutouts and lowering into place, then tightening two
thumbscrews.
8. Grasp each side of the node and gently slide it into the enclosure. Ensure the node is aligned with the grooves in the slot.
CAUTION: Ensure the node is correctly oriented, alternate nodes are rotated by 180°.
9. Keep sliding the node in until it halts against the insertion mechanism.
11. Push the extended gray node rod into the node to ensure the node is correctly installed.
CAUTION: A flashing blue LED indicates that the node is not properly seated. Pull out the gray node rod and push back in to ensure
that the node is fully seated.
NOTE: Once inserted, the node should power up and go through the node rescue procedure before joining the cluster. This may take
up to 10 minutes.
12. Verify that the node LED is blinking green in synchronization with other nodes, indicating that the node has joined the cluster.
4-Port iSCSI Combo Adapter verification
Follow the CLI steps to verify the installation of the replacement PCIe adapter or PCIe riser card.
Verify that cables are labeled before shutting down the node and removing the cover.
CAUTION: Alloy gray-colored latches on components such as the node mean the component is warm-swappable. HP recommends
shutting down the node (with the enclosure power remaining on) before removing this component.
NOTE: If the node with the failed component is already halted, it is not necessary to shut down (halt) the node, because it is not part of
the cluster.
Node boot drive identification
Follow the CLI steps to identify the failed node boot drive.
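A minimal illustrative sketch; the failure alert itself identifies the node whose boot drive must be replaced, and the exact fields shown depend on the HPE 3PAR OS version:
   cli% checkhealth node    reports node-level alerts, including internal boot drive faults
   cli% shownode -d         detailed node display used to confirm which node contains the failed boot drive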
Node boot drive removal
1. Allow 2-3 minutes for the node to halt, then verify that the Node Status LED is flashing green and the Node UID LED is solid blue,
not flashing, indicating that the node has been halted and is safe to remove.
CAUTION: The system will not fail if the node is properly halted before removal, but data loss may occur if the replacement
procedure is not followed correctly.
NOTE: The Node Fault LED may be amber, depending on the nature of the node failure.
2. Ensure that all cables on the failed node are labeled to facilitate reconnecting later.
3. At the rear of the rack, remove all cables from the node with the failed component.
4. Pull the node rod to remove the node from the enclosure.
5. When the node is halfway out of the enclosure, use both hands to slide the node out completely.
7. Remove the node cover by loosening two thumbscrews and lifting the cover to remove.
8. Remove the screw securing the failed node boot drive and allow it to release to its spring tension position.
9. Grasp the failed node boot drive by its edges and remove it from its slot in the node.
Node boot drive replacement
2. Align the notch on the node boot drive with its key in the board receptacle in the node.
3. At an angle, gently insert the node boot drive into its slot.
5. Replace the node cover by aligning the node rod and guide pins with the cutouts and lowering into place, then tightening two
thumbscrews.
6. Ensure that the gray node rod is in the extracted position, pulled out of the component.
7. Grasp each side of the node and gently slide it into the enclosure. Ensure the node is aligned with the grooves in the slot.
CAUTION: Ensure the node is correctly oriented, alternate nodes are rotated by 180°.
8. Keep sliding the node in until it halts against the insertion mechanism.
10. Push the extended gray node rod into the node to ensure the node is correctly installed.
CAUTION: A flashing blue LED indicates that the node is not properly seated. Pull out the gray node rod and push back in to ensure
that the node is fully seated.
NOTE: Once inserted, the node should power up and go through the node rescue procedure before joining the cluster. This may take
up to 10 minutes.
11. Verify that the node LED is blinking green in synchronization with other nodes, indicating that the node has joined the cluster.
Node PCM
NOTE: PCMs are located at the rear of the system, on either side of the nodes. There are two types of PCMs: the 580 W
used in the drive enclosure, and the 764 W used in the controller node. The 764 W PCM includes a replaceable battery.
CAUTION: To prevent overheating, the node PCM bay in the enclosure should not be left open for more than 6 minutes.
CAUTION: Be sure to put on your electrostatic discharge wrist strap to avoid damaging any circuitry.
Preparation
1. Remove the replacement PCM from its packaging and place on an ESD safe mat with the empty battery compartment facing up.
2. Disconnect the power cable. Ensure the power cable and cable clamp will not be in the way when the PCM is removed.
3. With your thumb and forefinger, grasp and squeeze the latch to release the handle.
5. Rotate the PCM release handle and slide the PCM out of the enclosure.
1. On the replacement PCM, ensure that the handle is in the open position.
2. Slide the PCM into the enclosure and push until the insertion mechanism starts to engage.
NOTE: Ensure that no cables get caught in the PCM insertion mechanism.
3. Close the handle to fully seat the PCM into the enclosure; you will hear a click as the latch engages.
4. Once inserted, pull back lightly on the PCM to ensure that it is properly engaged.
2. Disconnect the power cable. Ensure the power cable and cable clamp will not be in the way when the PCM is removed.
3. With your thumb and forefinger, grasp and squeeze the latch to release the handle.
5. Rotate the PCM release handle and slide the PCM out of the enclosure.
6. Place the PCM on the ESD safe mat with the battery compartment facing up.
1. Ensure that the replacement PCM battery handle is in the upright position, insert the battery into the PCM, and then push down the handle
to install it. Check that the battery and handle are level with the surface of the PCM.
2. Slide the PCM into the enclosure and push until the insertion mechanism starts to engage.
NOTE: Ensure that no cables get caught in the PCM insertion mechanism.
3. Close the handle to fully seat the PCM into the enclosure; you will hear a click as the latch engages.
4. Once inserted, pull back lightly on the PCM to ensure that it is properly engaged.
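After the PCM is reseated, the battery state can be checked from the CLI; a brief illustrative example (output varies by HPE 3PAR OS version):
   cli% showbattery         lists each node PCM battery with its state and charge level
   cli% checkhealth         runs an overall health check to confirm that no PCM or battery alerts remain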
SFP
CAUTION: Be sure to put on your electrostatic discharge wrist strap to avoid damaging any circuitry.
SFPs are located in the adapter ports on the controller node; there are two to six SFPs per node.
WARNING: When the system is on, do not stare at the fibers. Doing so could damage your eyes.
SFP identification
Follow the CLI steps to identify the failed SFP.
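As an illustrative example of that identification (output columns vary by HPE 3PAR OS version):
   cli% showport -sfp       lists the SFP in each adapter port with its state and part information
   cli% showport            shows the corresponding port states once cables are reconnected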
SFP removal
NOTE: The hardware procedure shown depicts the 3PAR 7000 series. The procedure is identical for the 3PAR 8000 series.
1. After identifying the SFP that requires replacement, disconnect the cable, open the retaining clip and carefully slide the SFP out of
the slot.
CAUTION: When handling the SFP, do not touch the gold contact leads to prevent damage.
SFP replacement
1. Carefully slide the replacement SFP into the adapter until fully seated, and then close the retaining clip to secure it in place.
2. Reconnect the cable to the SFP module and verify that the link status LED is solid green.
SFP verification
Follow the CLI steps to verify the installation of the replacement SFP.
Drive - Customer
CAUTION: Be sure to put on your electrostatic discharge wrist strap to avoid damaging any circuitry.
NOTE: The system supports the following storage drives: Large Form Factor (LFF) drives, Small Form Factor (SFF) drives, and Solid
State Drives (SSD). The replacement procedures are essentially the same for all storage drives.
CAUTION:
If you require more than 10 minutes to replace a drive, install a drive blank cover to prevent overheating while you are working.
To avoid damage to hardware and the loss of data, never remove a drive without confirming that the drive fault LED is lit.
Before you begin: Review drive considerations .
Drive identification
Follow the SSMC steps to identify the failed drive.
Drive removal
1. Pinch the handle latch to release the handle into open position, pull the handle away from the enclosure, pull the drive only slightly
out of the enclosure, and then wait 30 seconds to allow time for the internal disk to stop rotating.
2. Slowly slide the drive out of the enclosure and place on an ESD safe mat.
Drive replacement
2. With the latch handle of the drive fully extended, align and slide the drive into the bay until the handle begins to engage, and then
push firmly until it clicks.
3. Close the handle to fully seat the drive into the drive bay.
4. Observe the newly installed drive for 60 seconds to verify the amber Fault LED turns off and remains off for 60 seconds.
Drive verification
Follow the SSMC steps to verify the installation of the replacement drive.
Drive - Service
CAUTION: Be sure to put on your electrostatic discharge wrist strap to avoid damaging any circuitry.
NOTE: The system supports the following storage drives: Large Form Factor (LFF) drives, Small Form Factor (SFF) drives, and Solid
State Drives (SSD). The replacement procedures are essentially the same for all storage drives.
CAUTION:
If you require more than 10 minutes to replace a drive, install a drive blank cover to prevent overheating while you are working.
To avoid damage to hardware and the loss of data, never remove a drive without confirming that the drive fault LED is lit.
Before you begin: Review drive considerations .
Drive identification
Follow the CLI steps to identify the failed drive.
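A hedged sketch of that identification (the servicemag usage is an example only; follow the service guide and CLI prompts for the exact sequence):
   cli% showpd -failed -degraded     lists failed or degraded physical drives with their cage and position
   cli% servicemag status            shows whether a magazine service operation is already in progress
   cli% servicemag start -pdid <ID>  prepares the magazine holding the failed drive for removal, where <ID> is the physical drive ID reported above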
Drive removal
1. Pinch the handle latch to release the handle into open position, pull the handle away from the enclosure, pull the drive only slightly
out of the enclosure, and then wait 30 seconds to allow time for the internal disk to stop rotating.
2. Slowly slide the drive out of the enclosure and place on an ESD safe mat.
Drive replacement
2. With the latch handle of the drive fully extended, align and slide the drive into the bay until the handle begins to engage, and then
push firmly until it clicks.
3. Close the handle to fully seat the drive into the drive bay.
4. Observe the newly installed drive for 60 seconds to verify the amber Fault LED turns off and remains off for 60 seconds.
Drive verification
Follow the CLI steps to verify the installation of the replacement drive.
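For illustration, that verification typically resembles the following (exact output varies by HPE 3PAR OS version):
   cli% servicemag status    confirms the magazine service operation has completed successfully
   cli% showpd               confirms the replacement drive is admitted and reports a normal state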
Drive Enclosure
CAUTION: Be sure to put on your electrostatic discharge wrist strap to avoid damaging any circuitry.
CAUTION:
Online replacement: An online replacement of a drive enclosure might be possible; contact HP Tech Support to schedule the replacement of the
drive enclosure while the storage system is online.
Offline replacement: An offline replacement of a drive enclosure can be performed by scheduling an offline maintenance window
and using the procedure in this section.
CAUTION: Two people are required to remove the enclosure from the rack to prevent injury.
NOTE: The procedures for the LFF 4U drive enclosure and the SFF 2U drive enclosure are essentially the same, except where noted.
Before you begin: Review drive considerations .
Drive enclosure identification and shutdown
Follow the CLI steps to identify and shut down the failed drive enclosure.
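A brief illustrative example of that identification (the cage name is an example only):
   cli% showcage             lists all drive enclosures (cages) with their state and position
   cli% locatecage cage2     flashes the LEDs on the specified cage so the correct enclosure can be confirmed physically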
Drive enclosure removal
1. Label, if necessary, all the mini-SAS cables from the drive enclosure to be replaced.
3. Power down the enclosure and disconnect all mini-SAS cables and power cables.
4. Remove the I/O modules and power supplies, and set them on an ESD safe mat.
5. Remove the drives from the enclosure, noting all drive locations. Set the drives on an ESD safe mat.
7. Remove the screws that mount the enclosure to the front of the rack: four screws for the 4U enclosure, two screws for the 2U
enclosure.
8. Remove the two hold-down screws that secure the enclosure at the rear of the rack.
9. While supporting the enclosure from underneath, slide the failed drive enclosure out of its rack.
WARNING: Always use at least two people to lift an enclosure.
3. Replace the screws at the front of the rack: four screws for the 4U enclosure; two screws for the 2U enclosure.
8. Install the drives into their same slot locations, as labeled earlier.
9. Power on the cage, and wait for all drives to spin up with a green LED status.
I/O module
CAUTION: Be sure to put on your electrostatic discharge wrist strap to avoid damaging any circuitry.
CAUTION:
To prevent overheating, the I/O module bay in the enclosure should not be left open for more than 6 minutes.
Storage systems operate using two I/O modules per drive enclosure and can temporarily operate using one I/O module when
removing the other I/O module for servicing.
Before you begin: Review drive considerations .
I/O module identification
Follow the CLI steps to identify the failed I/O module.
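As a hedged example of that identification (the cage name is an example only):
   cli% showcage             lists each cage and its overall state
   cli% showcage -d cage2    detailed cage display, including the state and firmware level of each I/O module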
I/O module removal
1. If required, label and then unplug the cable(s) from the failed I/O module. There can be one cable or two.
2. Grasp the module latch between thumb and forefinger, squeeze to release the latch, and pull the latch handles open.
3. Grip the handles on both sides of the module, remove it from the enclosure, and then place the failed I/O module on an ESD safe
mat.
I/O module replacement
1. Open the replacement module latch and slide it into the enclosure until it automatically engages.
2. Once the replacement module is in the enclosure, close the latch until it engages and clicks.
4. Reconnect the mini-SAS cables to the replacement I/O module in the same locations as before.
Drive PCM
NOTE: PCMs are located at the rear of the system, on either side of the nodes. There are two types of PCMs: the 580 W
used in the drive enclosure, and the 764 W used in the controller node. The 764 W PCM includes a replaceable battery.
CAUTION: To prevent overheating, the drive PCM bay in the enclosure should not be left open for more than 6 minutes.
CAUTION: Be sure to put on your electrostatic discharge wrist strap to avoid damaging any circuitry.
Preparation
1. Remove the replacement PCM from its packaging and place on an ESD safe mat.
2. Disconnect the power cable. Ensure the power cable and cable clamp will not be in the way when the PCM is removed.
3. With your thumb and forefinger, grasp and squeeze the latch to release the handle.
5. Rotate the PCM release handle and slide the PCM out of the enclosure.
1. On the replacement PCM, ensure that the handle is in the open position.
2. Slide the PCM into the enclosure and push until the insertion mechanism starts to engage.
NOTE: Ensure that no cables get caught in the PCM insertion mechanism.
3. Close the handle to fully seat the PCM into the enclosure; you will hear a click as the latch engages.
4. Once inserted, pull back lightly on the PCM to ensure that it is properly engaged.
Welcome to the HPE 3PAR StoreServ 8000 Storage Customer Self-install Video
The list of steps on the left provides navigation and keeps track of your progress. From any screen, the Menu pull-down menu at the top
left provides video links. The Resources pull-down menu at the top right provides links for documentation and a link to the Hewlett
Packard Enterprise Information Library.
When you are ready to begin, click the Next button at the bottom right to watch Overview & Preparation.
NOTE:
The battery LED may flash green, indicating that the battery is charging. Once fully charged, the LED will be solid
green.
For more information on LEDs, follow the link to the LED Indicators in the "Resources" pull-down menu.
Allow up to 10 minutes for the system to boot.
Now, from the front of the system, locate and verify that the ear cap System Power green LED is on and the Drive Status amber LED is
off.
NOTE:
The cage numbers may not reflect actual cage numbers at this point.
On the rear, verify the following: the controller node status LEDs are solid green, all I/O module status LEDs are lit green, and all PCM
status LEDs are still lit green. All of the hardware is now installed, powered up, and verified to be working.
Click Next to watch Setting up the Service Processor.
Resources
Click here to access the HPE Support Center for additional information