
Front cover

IBM System Storage DS8800


Architecture and Implementation

High Density Storage Enclosure

8 Gbps Host Adapters

4-Port Device Adapter

Bertrand Dufrasne
Doug Acuff
Pat Atkinson
Urban Biel
Hans Paul Drumm
Jana Jamsek
Peter Kimmel
Gero Schmidt
Alexander Warmuth

ibm.com/redbooks
International Technical Support Organization

IBM System Storage DS8800: Architecture and Implementation

January 2011

SG24-8886-00
Note: Before using this information and the product it supports, read the information in “Notices” on
page xi.

First Edition (January 2011)

This edition applies to the IBM System Storage DS8800 with DS8000 Licensed Machine Code (LMC) level
6.6.xxx.xx (bundle version 86.0.xxx.xx).

© Copyright International Business Machines Corporation 2011. All rights reserved.


Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule
Contract with IBM Corp.
Contents

Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
The team who wrote this book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvi
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvi

Part 1. Concepts and architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

Chapter 1. Introduction to the IBM System Storage DS8800 series. . . . . . . . . . . . . . . . 3


1.1 The DS8800: A member of the DS family . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 DS8800 features and functions overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.2.1 Overall architecture and components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.2.2 Storage capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.2.3 Supported environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.2.4 Copy Services functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.2.5 Service and setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.2.6 Configuration flexibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.2.7 IBM Certified Secure Data Overwrite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.3 Performance features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.3.1 Sophisticated caching algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.3.2 Solid State Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.3.3 Multipath Subsystem Device Driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.3.4 Performance for System z. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.3.5 Performance enhancements for IBM Power Systems . . . . . . . . . . . . . . . . . . . . . 18
1.3.6 Performance enhancements for z/OS Global Mirror . . . . . . . . . . . . . . . . . . . . . . . 18

Chapter 2. IBM System Storage DS8800 models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21


2.1 DS8800 model overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.1.1 DS8800 Model 951 overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

Chapter 3. Hardware components and architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . 29


3.1 Frames . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.1.1 Base frame . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.1.2 Expansion frame . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.1.3 Rack operator window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.2 DS8800 architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.2.1 POWER6+ processor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.2.2 Peripheral Component Interconnect Express (PCI Express) . . . . . . . . . . . . . . . . 36
3.2.3 Device adapters and host adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.2.4 Storage facility architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.2.5 Server-based SMP design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.3 Storage facility processor complex (CEC). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.3.1 Processor memory and cache management . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.3.2 RIO-G . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.3.3 I/O enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.4 Disk subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

3.4.1 Device adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.4.2 Disk enclosures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.4.3 Disk drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.5 Host adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.5.1 Fibre Channel/FICON host adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.6 Power and cooling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.7 Management console network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.8 System Storage Productivity Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.9 Isolated Tivoli Key Lifecycle Manager server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54

Chapter 4. RAS on IBM System Storage DS8800 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55


4.1 Names and terms for the DS8800 storage system . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.2 RAS features of DS8800 CEC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.2.1 POWER6 Hypervisor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.2.2 POWER6 processor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.2.3 AIX operating system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.2.4 CEC dual hard drive rebuild . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.2.5 RIO-G interconnect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.2.6 Environmental monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.2.7 Resource deallocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
4.3 CEC failover and failback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
4.3.1 Dual operational . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
4.3.2 Failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.3.3 Failback. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.3.4 NVS and power outages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.4 Data flow in DS8800 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.4.1 I/O enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.4.2 Host connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.4.3 Metadata checks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
4.5 RAS on the HMC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
4.5.1 Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
4.5.2 Microcode updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
4.5.3 Call Home and Remote Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
4.6 RAS on the disk subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
4.6.1 RAID configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
4.6.2 Disk path redundancy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
4.6.3 Predictive Failure Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.6.4 Disk scrubbing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.6.5 RAID 5 overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.6.6 RAID 6 overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
4.6.7 RAID 10 overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.6.8 Spare creation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.7 RAS on the power subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
4.7.1 Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
4.7.2 Line power loss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
4.7.3 Line power fluctuation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
4.7.4 Power control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
4.7.5 Emergency power off . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
4.8 RAS and Full Disk Encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.8.1 Deadlock recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.8.2 Dual platform TKLM servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
4.9 Other features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
4.9.1 Internal network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83

4.9.2 Remote support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
4.9.3 Earthquake resistance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84

Chapter 5. Virtualization concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85


5.1 Virtualization definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
5.2 The abstraction layers for disk virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
5.2.1 Array sites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
5.2.2 Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
5.2.3 Ranks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
5.2.4 Extent Pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
5.2.5 Logical volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
5.2.6 Track Space Efficient volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
5.2.7 Allocation, deletion, and modification of LUNs/CKD volumes. . . . . . . . . . . . . . . . 96
5.2.8 Logical subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
5.2.9 Volume access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
5.2.10 Virtualization hierarchy summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
5.3 Benefits of virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104

Chapter 6. IBM System Storage DS8800 Copy Services overview. . . . . . . . . . . . . . . 107


6.1 Copy Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
6.2 FlashCopy and FlashCopy SE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
6.2.1 Basic concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
6.2.2 Benefits and use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
6.2.3 FlashCopy options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
6.2.4 FlashCopy SE-specific options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
6.3 Remote Mirror and Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
6.3.1 Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
6.3.2 Global Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
6.3.3 Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
6.3.4 Metro/Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
6.3.5 z/OS Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
6.3.6 z/OS Metro/Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
6.3.7 Summary of Remote Mirror and Copy function characteristics. . . . . . . . . . . . . . 119

Chapter 7. Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121


7.1 DS8800 hardware: performance characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
7.1.1 Fibre Channel switched interconnection at the back-end . . . . . . . . . . . . . . . . . . 122
7.1.2 Fibre Channel device adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
7.1.3 Eight-port and four-port host adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
7.1.4 POWER6+: heart of the DS8800 dual-cluster design . . . . . . . . . . . . . . . . . . . . . 126
7.1.5 Vertical growth and scalability. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
7.2 Software performance: synergy items. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
7.2.1 End-to-end I/O priority: synergy with AIX and DB2 on System p . . . . . . . . . . . . 128
7.2.2 Cooperative caching: Synergy with AIX and DB2 on System p . . . . . . . . . . . . . 128
7.2.3 Long busy wait host tolerance: Synergy with AIX on System p . . . . . . . . . . . . . 128
7.2.4 PowerHA Extended distance extensions: synergy with AIX on System p . . . . . 128
7.3 Performance considerations for disk drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
7.4 DS8000 superior caching algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
7.4.1 Sequential Adaptive Replacement Cache. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
7.4.2 Adaptive Multi-stream Prefetching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
7.4.3 Intelligent Write Caching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
7.5 Performance considerations for logical configuration . . . . . . . . . . . . . . . . . . . . . . . . . 134
7.5.1 Workload characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
7.5.2 Data placement in the DS8000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134

7.5.3 Data placement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
7.6 Performance and sizing considerations for open systems . . . . . . . . . . . . . . . . . . . . . 140
7.6.1 Determining the number of paths to a LUN. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
7.6.2 Dynamic I/O load-balancing: Subsystem Device Driver . . . . . . . . . . . . . . . . . . . 140
7.6.3 Automatic port queues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
7.6.4 Determining where to attach the host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
7.7 Performance and sizing considerations for System z . . . . . . . . . . . . . . . . . . . . . . . . . 142
7.7.1 Host connections to System z servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
7.7.2 Parallel Access Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
7.7.3 z/OS Workload Manager: Dynamic PAV tuning . . . . . . . . . . . . . . . . . . . . . . . . . 145
7.7.4 HyperPAV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
7.7.5 PAV in z/VM environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
7.7.6 Multiple Allegiance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
7.7.7 I/O priority queuing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
7.7.8 Performance considerations on Extended Distance FICON . . . . . . . . . . . . . . . . 151
7.7.9 High Performance FICON for z . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
7.7.10 Extended distance High Performance FICON . . . . . . . . . . . . . . . . . . . . . . . . . 154

Part 2. Planning and installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155

Chapter 8. Physical planning and installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157


8.1 Considerations prior to installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
8.1.1 Who should be involved . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
8.1.2 What information is required . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
8.2 Planning for the physical installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
8.2.1 Delivery and staging area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
8.2.2 Floor type and loading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
8.2.3 Room space and service clearance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
8.2.4 Power requirements and operating environment . . . . . . . . . . . . . . . . . . . . . . . . 163
8.2.5 Host interface and cables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
8.3 Network connectivity planning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
8.3.1 Hardware Management Console and network access . . . . . . . . . . . . . . . . . . . . 166
8.3.2 System Storage Productivity Center and network access . . . . . . . . . . . . . . . . . 166
8.3.3 DSCLI console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
8.3.4 DSCIMCLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
8.3.5 Remote support connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
8.3.6 Business-to-Business VPN connection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
8.3.7 Remote power control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
8.3.8 Storage area network connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
8.3.9 Tivoli Key Lifecycle Manager server for encryption. . . . . . . . . . . . . . . . . . . . . . . 169
8.3.10 Lightweight Directory Access Protocol server for single sign-on . . . . . . . . . . . 171
8.4 Remote mirror and copy connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
8.5 Disk capacity considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
8.5.1 Disk sparing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
8.5.2 Disk capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
8.5.3 Solid State Drive (SSD) considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
8.5.4 Full Disk Encryption disk considerations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
8.6 Planning for growth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175

Chapter 9. DS8800 HMC planning and setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177


9.1 Hardware Management Console overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
9.1.1 Storage Hardware Management Console hardware. . . . . . . . . . . . . . . . . . . . . . 178
9.1.2 Private Ethernet networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
9.2 Hardware Management Console software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179

9.2.1 DS Storage Manager GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
9.2.2 Command-line interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
9.2.3 DS Open Application Programming Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
9.2.4 Web-based user interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
9.3 HMC activities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
9.3.1 HMC planning tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
9.3.2 Planning for microcode upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
9.3.3 Time synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
9.3.4 Monitoring with the HMC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
9.3.5 Call Home and remote support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
9.4 HMC and IPv6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
9.5 HMC user management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
9.5.1 User management using the DS CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
9.5.2 User management using the DS GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
9.6 External HMC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
9.6.1 External HMC benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
9.6.2 Configuring DS CLI to use a second HMC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200

Chapter 10. IBM System Storage DS8800 features and license keys . . . . . . . . . . . . 203
10.1 IBM System Storage DS8800 licensed functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
10.2 Activation of licensed functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
10.2.1 Obtaining DS8800 machine information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
10.2.2 Obtaining activation codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
10.2.3 Applying activation codes using the GUI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
10.2.4 Applying activation codes using the DS CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
10.3 Licensed scope considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
10.3.1 Why you get a choice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
10.3.2 Using a feature for which you are not licensed . . . . . . . . . . . . . . . . . . . . . . . . . 218
10.3.3 Changing the scope to All . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
10.3.4 Changing the scope from All to FB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
10.3.5 Applying an insufficient license feature key . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
10.3.6 Calculating how much capacity is used for CKD or FB. . . . . . . . . . . . . . . . . . . 221

Part 3. Storage configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223

Chapter 11. Configuration flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225


11.1 Configuration worksheets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
11.2 Configuration flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226

Chapter 12. System Storage Productivity Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229


12.1 System Storage Productivity Center overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
12.1.1 SSPC components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
12.1.2 SSPC capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
12.1.3 SSPC upgrade options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
12.2 SSPC setup and configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
12.2.1 Configuring SSPC for DS8800 remote GUI access . . . . . . . . . . . . . . . . . . . . . 233
12.2.2 Manage embedded CIMOM on DS8000. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
12.2.3 Set up SSPC user management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
12.2.4 Set up and discover DS8000 using native device interface . . . . . . . . . . . . . . . 243
12.3 Working with a DS8000 system in TPC-BE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
12.3.1 Manually recover CIM Agent connectivity after HMC shutdown . . . . . . . . . . . . 244
12.3.2 Display disks and volumes of DS8000 Extent Pools. . . . . . . . . . . . . . . . . . . . . 245
12.3.3 Display the physical paths between systems . . . . . . . . . . . . . . . . . . . . . . . . . . 247
12.3.4 Storage health management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249

12.3.5 Display host volumes through SVC to the assigned DS8000 volume. . . . . . . . 249

Chapter 13. Configuration using the DS Storage Manager GUI . . . . . . . . . . . . . . . . . 251


13.1 DS Storage Manager GUI overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
13.1.1 Accessing the DS GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
13.1.2 DS GUI Welcome window. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
13.2 Logical configuration process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
13.3 Examples of configuring DS8000 storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
13.3.1 Define storage complex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
13.3.2 Create arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
13.3.3 Create ranks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
13.3.4 Create Extent Pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
13.3.5 Configure I/O ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
13.3.6 Configure logical host systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
13.3.7 Create fixed block volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
13.3.8 Create volume groups. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
13.3.9 Create LCUs and CKD volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
13.3.10 Additional actions on LCUs and CKD volumes . . . . . . . . . . . . . . . . . . . . . . . . 299
13.4 Other DS GUI functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
13.4.1 Check the status of the DS8000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
13.4.2 Explore the DS8800 hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304

Chapter 14. Configuration with the DS Command-Line Interface . . . . . . . . . . . . . . . 307


14.1 DS Command-Line Interface overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
14.1.1 Supported operating systems for the DS CLI . . . . . . . . . . . . . . . . . . . . . . . . . . 308
14.1.2 User accounts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
14.1.3 DS CLI profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
14.1.4 Command structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
14.1.5 Using the DS CLI application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
14.1.6 Return codes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
14.1.7 User assistance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
14.2 Configuring the I/O ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
14.3 Monitoring the I/O ports. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
14.4 Configuring the DS8000 storage for FB volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
14.4.1 Create arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
14.4.2 Create ranks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
14.4.3 Create Extent Pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
14.4.4 Creating FB volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
14.4.5 Creating volume groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
14.4.6 Creating host connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
14.4.7 Mapping open systems host disks to storage unit volumes . . . . . . . . . . . . . . . 329
14.5 Configuring DS8000 Storage for Count Key Data Volumes . . . . . . . . . . . . . . . . . . . 331
14.5.1 Create arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
14.5.2 Ranks and Extent Pool creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
14.5.3 Logical control unit creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
14.5.4 Create CKD volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334

Part 4. Maintenance and upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341

Chapter 15. Licensed machine code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343


15.1 How new microcode is released . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
15.2 Bundle installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
15.3 Concurrent and non-concurrent updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
15.4 Code updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346

15.5 Host adapter firmware updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
15.6 Loading the code bundle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
15.7 Post-installation activities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
15.8 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347

Chapter 16. Monitoring with Simple Network Management Protocol . . . . . . . . . . . . 349


16.1 Simple Network Management Protocol overview . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
16.1.1 SNMP agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
16.1.2 SNMP manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
16.1.3 SNMP trap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
16.1.4 SNMP communication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
16.1.5 Generic SNMP security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
16.1.6 Message Information Base . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
16.1.7 SNMP trap request . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
16.1.8 DS8000 SNMP configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
16.2 SNMP notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
16.2.1 Serviceable event using specific trap 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
16.2.2 Copy Services event traps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
16.3 SNMP configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360

Chapter 17. Remote support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363


17.1 Introduction to remote support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
17.1.1 Suggested reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
17.1.2 Organization of this chapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
17.1.3 Terminology and definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
17.2 IBM policies for remote support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
17.3 Remote connection types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
17.3.1 Modem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
17.3.2 IP network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
17.3.3 IP network with traditional VPN. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
17.3.4 IP network with Business-to-Business VPN . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
17.4 DS8000 support tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
17.4.1 Call Home and heartbeat (outbound) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
17.4.2 Data offload (outbound) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
17.4.3 Code download (inbound) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
17.4.4 Remote support (inbound and two-way) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
17.5 Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
17.5.1 No connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
17.5.2 Modem only . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
17.5.3 Modem and network with no VPN. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
17.5.4 Modem and traditional VPN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
17.5.5 Modem and Business-to-Business VPN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
17.6 Audit logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377

Chapter 18. Capacity upgrades and CoD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379


18.1 Installing capacity upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
18.1.1 Installation order of upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
18.1.2 Checking how much total capacity is installed . . . . . . . . . . . . . . . . . . . . . . . . . 382
18.2 Using Capacity on Demand (CoD) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
18.2.1 What is Capacity on Demand . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
18.2.2 Determining if a DS8800 has CoD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
18.2.3 Using the CoD storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387

Appendix A. Tools and service offerings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389

Capacity Magic. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
Disk Magic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
HyperPAV analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
FLASHDA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
IBM i SSD Analyzer Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
IBM Tivoli Storage Productivity Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
IBM Certified Secure Data Overwrite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
IBM Global Technology Services: service offerings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
IBM STG Lab Services: Service offerings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397

Abbreviations and acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399

Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403


IBM Redbooks publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
Other publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
How to get IBM Redbooks publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405

Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.

Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines
Corporation in the United States, other countries, or both. These and other IBM trademarked terms are
marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US
registered or common law trademarks owned by IBM at the time this information was published. Such
trademarks may also be registered or common law trademarks in other countries. A current list of IBM
trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml

The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX 5L™ Power Architecture® System p®
AIX® Power Systems™ System Storage DS®
DB2® POWER5™ System Storage®
DS4000® POWER5+™ System x®
DS6000™ POWER6+™ System z10®
DS8000® POWER6® System z®
Enterprise Storage Server® PowerHA™ Tivoli®
ESCON® PowerPC® TotalStorage®
FICON® POWER® WebSphere®
FlashCopy® Redbooks® XIV®
GPFS™ Redpapers™ z/OS®
HACMP™ Redbooks (logo) ® z/VM®
HyperSwap® RMF™ z10™
i5/OS® S/390® z9®
IBM® System i®

The following terms are trademarks of other companies:

AMD, the AMD Arrow logo, and combinations thereof, are trademarks of Advanced Micro Devices, Inc.

Disk Magic, and the IntelliMagic logo are trademarks of IntelliMagic BV in the United States, other countries,
or both.

Novell, SUSE, the Novell logo, and the N logo are registered trademarks of Novell, Inc. in the United States
and other countries.

Oracle, JD Edwards, PeopleSoft, Siebel, and TopLink are registered trademarks of Oracle Corporation and/or
its affiliates.

Java, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other
countries, or both.

Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.

Intel Xeon, Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks
of Intel Corporation or its subsidiaries in the United States and other countries.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.

Preface

This IBM® Redbooks® publication describes the concepts, architecture, and implementation
of the IBM System Storage® DS8800 storage subsystem. The book provides reference
information to assist readers who need to plan for, install, and configure the DS8800.

The IBM System Storage DS8800 is the most advanced model in the IBM DS8000® lineup. It
introduces IBM POWER6+™-based controllers, with dual two-way or dual four-way processor
complex implementations. It also features enhanced 8 Gbps device adapters and host
adapters.

The DS8800 is equipped with high-density storage enclosures populated with 24 small form
factor SAS-2 drives. Solid State Drives are also available, as well as support for the Full Disk
Encryption (FDE) feature.

Its switched Fibre Channel architecture, dual processor complex implementation, high
availability design, and incorporated advanced Point-in-Time Copy and Remote Mirror and
Copy functions make the DS8800 system suitable for mission-critical business functions.

Host attachment and interoperability topics for the DS8000 series including the DS8800 are
now covered in the IBM Redbooks publication, IBM System Storage DS8000: Host
Attachment and Interoperability, SG24-8887.

To read about the DS8000 Copy Services functions, see the IBM Redbooks publications IBM System
Storage DS8000: Copy Services for Open Environments, SG24-6788, and DS8000 Copy Services for
IBM System z, SG24-6787.

For information related to specific features, see IBM System Storage DS8700: Disk
Encryption Implementation and Usage Guidelines, REDP-4500, IBM System Storage
DS8000: LDAP Authentication, REDP-4505, and DS8000: Introducing Solid State Drives,
REDP-4522.

The team who wrote this book


This book was produced by a team of specialists from around the world working at the
International Technical Support Organization, San Jose Center.

Bertrand Dufrasne is an IBM Certified Consulting IT Specialist and Project Leader for IBM
System Storage disk products at the International Technical Support Organization, San Jose
Center. He has worked at IBM in various IT areas. Bertrand has written many IBM Redbooks
publications and has also developed and taught technical workshops. Before joining the
ITSO, he worked for IBM Global Services as an Application Architect in the retail, banking,
telecommunication, and health care industries. He holds a Masters degree in Electrical
Engineering from the Polytechnic Faculty of Mons, Belgium.

Doug Acuff is an Advisory Software Engineer for the DS8000 System Level Serviceability
team in Tucson, Arizona. He has been with IBM for ten years as a member of both test and
development teams for IBM System Storage products, including ESS, DS8000, and
DS6000™ models. His responsibilities include testing DS8000 hardware and firmware, and
he has led multiple hardware test teams in both Functional Verification and System Level
Test. Doug holds a Masters degree in Information Systems from the University of New
Mexico.

Pat Atkinson has been involved with IBM storage for more than 10 years, focusing on
Australian Federal Government clients. During this time he has worked extensively with IBM
storage products including ESS, DS8000, DS4000®, and IBM tape systems. His areas of
expertise include SAN Infrastructure design and implementation and SAN storage problem
resolution. Initially a member of the IBM Support & Delivery team for the IBM Australia
Federal accounts, Pat is now a Storage Architect in Canberra.

Urban Biel is an IT Specialist in IBM GTS Slovakia. His areas of expertise include System
p®, AIX®, Linux®, PowerHA™, DS6000/DS8000/SVC, Softek, and GPFS™. He has been
involved in various projects that typically include HA/DR solutions implementation using
DS8000 copy services with AIX/PowerHA. He also executed several storage and server
migrations. Urban holds a second degree in Electrical Engineering and Informatics from the
Technical University of Kosice.

Hans Paul Drumm is an IT Specialist in IBM Germany. He has 25 years of experience in the
IT industry, and has worked at IBM for nine years. Hans holds a degree in Computer Science
from the University of Kaiserslautern. His areas of expertise include Solaris, HP-UX and
z/OS®, with a special focus on disk subsystem attachment.

Jana Jamsek is an IT Specialist with IBM Slovenia. She works in Storage Advanced
Technical Support for Europe as a specialist for IBM Storage Systems and the IBM i (i5/OS®)
operating system. Jana has eight years of experience working with the IBM System i®
platform and its predecessor models, as well as eight years of experience working with
storage. She holds a Masters degree in Computer Science and a degree in Mathematics from
the University of Ljubljana, Slovenia.

Peter Kimmel is an IT Specialist and ATS team lead of the Enterprise Disk Solutions team at
the European Storage Competence Center (ESCC) in Mainz, Germany. He joined IBM
Storage in 1999 and since then has worked with all Enterprise Storage Server® (ESS) and
System Storage DS8000 generations, with a focus on architecture and performance. He has
been involved in the Early Shipment Programs (ESPs) and early installs of these products, and has
co-authored several DS8000 IBM Redbooks publications. Peter holds a Diploma (MSc)
degree in Physics from the University of Kaiserslautern.

Gero Schmidt is an IT Specialist in the IBM ATS technical sales support organization in
Germany. He joined IBM in 2001, working at the European Storage Competence Center
(ESCC) in Mainz and providing technical support for a broad range of IBM storage products
(SSA, ESS, DS4000, DS6000, and DS8000) in Open Systems environments with a primary
focus on storage subsystem performance. He participated in the product rollout and beta test
program for the DS6000/DS8000 series. Gero holds a degree in Physics (Dipl.-Phys.) from
the Technical University of Braunschweig, Germany.

Alexander Warmuth is a Senior IT Specialist in the IBM European Storage Competence Center. He joined IBM in 1993 and has worked in technical sales support since 2001. He
designs and promotes new and complex storage solutions, drives the introduction of new
products and provides advice to clients, Business Partners, and sales. His main areas of
expertise include high-end storage solutions, business resiliency, Linux, and storage.
Alexander holds a diploma in Electrical Engineering from the University of Erlangen,
Germany.



Figure 1 The team: Doug, Hans-Paul, Jana, Pat, Urban, Alexander, Gero, Peter and Bertrand

Many thanks to the following people at the IBM Systems Lab Europe in Mainz, Germany, who
helped with equipment provisioning and preparation:

Uwe Heinrich Müller, Uwe Schweikhard, Günter Schmitt, Jörg Zahn, Mike Schneider, Hartmut
Bohnacker, Stephan Weyrich, Uwe Höpfne, Werner Wendler

Special thanks to the Enterprise Disk team manager Bernd Müller and the ESCC director
Klaus-Jürgen Rünger for their continuous interest and support regarding the ITSO Redbooks
projects.

Thanks to the following people for their contributions to this project:

John Bynum
Worldwide Technical Support Management

James Davison, Dale Anderson, Brian Cagno, Stephen Blinick, Brian Rinaldi, John Elliott,
Kavitah Shah, Rick Ripberger, Denise Luzar, Stephen Manthorpe, Jeff Steffan

Now you can become a published author, too!


Here's an opportunity to spotlight your skills, grow your career, and become a published
author—all at the same time! Join an ITSO residency project and help write a book in your
area of expertise, while honing your experience using leading-edge technologies. Your efforts
will help to increase product acceptance and customer satisfaction, as you expand your
network of technical contacts and relationships. Residencies run from two to six weeks in
length, and you can participate either in person or as a remote resident working from your
home base.

Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html

Comments welcome
Your comments are important to us!

We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
򐂰 Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
򐂰 Send your comments in an email to:
[email protected]
򐂰 Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400

Stay connected to IBM Redbooks


򐂰 Find us on Facebook:
http://www.facebook.com/IBMRedbooks
򐂰 Follow us on Twitter:
http://twitter.com/ibmredbooks
򐂰 Look for us on LinkedIn:
http://www.linkedin.com/groups?home=&gid=2130806
򐂰 Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks
weekly newsletter:
https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
򐂰 Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html



Part 1. Concepts and architecture

This part gives an overview of the IBM System Storage DS8800 concepts and architecture.
The topics covered include:
򐂰 Introduction to the IBM System Storage DS8800 series
򐂰 IBM System Storage DS8800 models
򐂰 Hardware components and architecture
򐂰 RAS on IBM System Storage DS8800
򐂰 Virtualization concepts
򐂰 IBM System Storage DS8800 Copy Services overview



Chapter 1. Introduction to the IBM System Storage DS8800 series

This chapter introduces the features, functions, and benefits of the IBM System Storage
DS8800 and provides a high-level overview of its capabilities.

More detailed information about those functions and features is provided in subsequent
chapters.

The topics covered here include:


򐂰 The DS8800: A member of the DS family
򐂰 DS8800 features and functions overview
򐂰 Performance features



1.1 The DS8800: A member of the DS family
The System Storage DS® family is designed as a high performance, high capacity, and
resilient series of disk storage systems. It offers high availability, multiplatform support, and
simplified management tools to help provide a cost-effective path to an on demand world.

The IBM System Storage DS8000 series encompasses the flagship enterprise disk storage
products in the IBM System Storage portfolio. The DS8800, the fourth-generation IBM
high-end disk system, is the latest member of this series, introducing small form factor
2.5-inch SAS disk drive technology, POWER6+ processors, and new 8 Gbps device
adapter (DA) and host adapter (HA) cards.

The IBM System Storage DS8800, shown in Figure 1-1, is designed to support the most
demanding business applications with its exceptional all-around performance and data
throughput. Combined with the world-class business resiliency and encryption features of the
DS8800, this provides a unique combination of high availability, performance, and security. Its
tremendous scalability, broad server support, and virtualization capabilities can help simplify
the storage environment by consolidating multiple storage systems onto a single DS8800.
Introducing new high density storage enclosures, the DS8800 model offers a considerable
reduction in footprint and energy consumption, thus making it the most space- and
energy-efficient model in the DS8000 series.

Figure 1-1 IBM System Storage DS8800, the IBM fourth generation high-end disk system

The IBM System Storage DS8800 adds Models 951 (base frame) and 95E (expansion unit) to
the 242x machine type family, delivering cutting edge technology, improved space and energy
efficiency, and increased performance. Compared with its predecessors, the IBM System
Storage DS8100, DS8300, and DS8700, the DS8800 is designed to provide new capabilities
at a combination of price and efficiency suited to a wide range of application needs.
Enhancements include:
򐂰 IBM POWER6+ processor technology
The DS8800 features the IBM POWER6+ server technology to help support high
performance. Compared to the POWER5+™ processor in previous DS8300 models, the
POWER6+ processor can deliver more than 50% performance improvement in I/O
operations per second in transaction processing workload environments and as much as
200% bandwidth improvement for sequential workloads. Compared to the performance of
the DS8700 (based on POWER6®), the new processor aids the DS8800 in achieving
sequential read throughput performance improvement of up to 20% and sequential write
throughput performance improvement of up to 40%. The DS8800 offers either a dual
2-way processor complex or a dual 4-way processor complex.
򐂰 Processor memory offerings
The DS8800 model with 2-way configuration offers up to 128 GB of processor memory,
and the DS8800 model with 4-way configuration offers up to 384 GB of processor memory.
In addition, the non-volatile storage scales with the processor memory size selected to
help optimize performance.
򐂰 Improved configuration options
The DS8800 standard cabling is optimized for performance and highly scalable
configurations with capacity for large long-term growth. The DS8800 with standard cabling
allows for up to three frames and up to sixteen 8-port host adapters, providing a high
performance and scalable storage environment.
The DS8800 also provides a business class configuration option. The business class
option allows a system to be configured with more drives per device adapter, thereby
helping to reduce configuration cost and increasing adapter utilization. Business class
cabling can change the configuration options available. This configuration option is
intended for configurations where capacity and high resource utilization are most
important.
򐂰 Non-disruptive upgrade path for Model 951 (standard cabling), and additional Model 95E
expansion frames
The DS8800 provides a non-disruptive upgrade path for the DS8800 Model 951 (standard
cabling), and additional Model 95E expansion frames allowing processor, cache, and
storage enhancements to be performed concurrently without disrupting applications.
򐂰 High density storage enclosures
The DS8800 provides storage enclosure support for 24 small form factor (SFF, 2.5-inch)
SAS drives in 2U of rack space. This option helps improve the storage density for disk
drive modules (DDMs) as compared to previous enclosures, which only supported 16
DDMs in 3.5U of rack space.
򐂰 Improved high density frame design
The DS8800 can support a total of 1056 drives in a smaller footprint (three frames),
supporting higher density and helping to preserve valuable raised floor space in data
center environments. DS8800 is designed to leverage best practices with hot aisle and
cold aisle data center design, drawing air for cooling from the front of the rack and
exhausting hot air at the rear of the rack.
Coupled with this improved cooling implementation, the reduced system footprint, and
small form factor SAS-2 drives, a fully configured DS8800 consumes up to 40% less
power than previous generations of DS8000. The DS8800 base model supports up to 240
drives, with the first expansion frame supporting up to 336 drives and second expansion
frame supporting up to 480 drives.
򐂰 8 Gbps host adapters (HAs)
The DS8800 model offers enhanced connectivity with 4-port and 8-port Fibre
Channel/FICON® host adapters (8x Gen2 PCIe) located in the PCIe-based IO enclosures
that are directly connected to the internal processor complexes, with point-to-point
Peripheral Component Interconnect Express ((PCIe) cables delivering improved I/O
Operations Per Second (IOPS) and sequential read/write throughput.
The DS8800 model offers 8 Gbps Fibre Channel/FICON host support designed to offer up
to 100% improvement in the single-port MBps throughput performance and up to 400%
improvement in single adapter throughput performance. This can help deliver cost savings
with potential reduction in the number of host ports needed. The DS8800 8 Gbps Fibre
Channel/FICON host adapter supports FICON attachment to FICON Express8 on
zEnterprise 196 (z196) and System z10® (z10 EC, z10 BC), allowing full exploitation of
zHPF. The DS8800 8 Gbps Fibre Channel/FICON host adapter also provides support for
FICON Express2-attached and FICON Express4-attached systems.
򐂰 8 Gbps device adapters
The DS8800 offers 8 Gbps device adapters (8x Gen2 PCIe) located in the PCIe-based I/O
enclosures that are directly connected to the internal processor complexes with
point-to-point PCIe (PCI Express) cables delivering improved I/O Operations Per Second
(IOPS) and sequential read/write throughput.
These adapters provide improved IOPS performance, throughput, and scalability. They
are optimized for SSD technology and architected for long-term support for scalability
growth. These capabilities complement the POWER6+ server family to provide significant
performance enhancements allowing up to 400% improvement in single adapter
throughput performance.
򐂰 6 Gbps SAS-2 Drive support, available as:
– 146 GB 15,000 rpm SAS
– 450 GB 10,000 rpm SAS
– 600 GB 10,000 rpm SAS
– 300 GB SSD drive
The 450 GB 10000 rpm SAS and 600 GB 10000 rpm SAS drives are additionally offered
with Full Disk Encryption.
򐂰 Value-based pricing and licensing
An operating environment license is priced based on the performance, capacity, speed,
and other characteristics that provide value in client environments.

The IBM System Storage DS8800 supports DS8000 Licensed Machine Code (LMC)
6.6.xxx.xx (bundle version 86.0.xxx.xx), or later.

The DS8800 inherits most of the features of its predecessors in the DS8000 series including:
򐂰 Storage virtualization offered by the DS8000 series allows organizations to allocate
system resources more effectively and better control application quality of service. The
DS8000 series improves the cost structure of operations and lowers energy consumption
through a tiered storage environment.
򐂰 The Dynamic Volume Expansion simplifies management by enabling easier, online
volume expansion to support application data growth, and to support data center
migrations to larger volumes to ease addressing constraints.



򐂰 The FlashCopy® SE capability enables more space efficient utilization of capacity for
copies, enabling improved cost effectiveness.
򐂰 System Storage Productivity Center (SSPC) single pane control and management
integrates the power of the Tivoli® Storage Productivity Center (TPC) and the DS Storage
Manager user interfaces into a single view.
򐂰 LDAP authentication support, which allows single sign-on functionality, can simplify user
management by allowing the DS8800 to rely on a centralized LDAP directory rather than a
local user repository. See IBM System Storage DS8000: LDAP Authentication,
REDP-4505, for more information.
򐂰 Storage Pool Striping helps maximize performance without special tuning. With the
DS8800 and Release 6.0 LMC, Storage Pool Striping (rotate extents) is the default extent
allocation method (EAM) when new volumes are created without explicitly specifying
another method.
򐂰 Adaptive Multi-stream Prefetching (AMP) is a breakthrough caching technology that can
dramatically improve sequential performance, thereby reducing times for backup,
processing for business intelligence, and streaming media.
򐂰 RAID 6 allows for additional fault tolerance by using a second, independent distributed
parity scheme (dual parity).
򐂰 The DS8000 series has been certified as meeting the requirements of the IPv6 Ready Logo
program, indicating its implementation of IPv6 mandatory core protocols and the ability to
interoperate with other IPv6 implementations. The IBM DS8000 can be configured in
native IPv6 environments. The logo program provides conformance and interoperability
test specifications based on open standards to support IPv6 deployment globally.
򐂰 Extended Address Volume support extends the addressing capability of IBM System z®
environments. Volumes can scale up to approximately 223 GB (262,668 cylinders). This
capability can help relieve address constraints to support large storage capacity needs.
Extended Address Volumes are supported by z/OS 1.10 or later versions.
򐂰 Optional Solid State Drives provide extremely fast access to data, energy efficiency, and
higher system availability.
򐂰 Intelligent Write Caching (IWC) improves the Cache Algorithm for random writes.
Specifically, database applications would benefit from the new IWC technology.
򐂰 The Full Disk Encryption (FDE) feature can protect business-sensitive data by providing
disk-based hardware encryption combined with a sophisticated key management software
(Tivoli Key Lifecycle Manager). The Full Disk Encryption support feature is available only
as a plant order. See IBM System Storage DS8700 Disk Encryption Implementation and
Usage Guidelines, REDP-4500, for more information about this topic.
򐂰 Business continuity
The DS8000 series is designed for the most demanding, mission-critical environments
requiring extremely high availability. It is designed to avoid single points of failure. With the
advanced Copy Services functions that the DS8000 series integrates, data availability can
be enhanced even further. FlashCopy and FlashCopy SE allow production workloads to
continue execution concurrently with data backups.
Metro Mirror, Global Copy, Global Mirror, Metro/Global Mirror, z/OS Global Mirror, and
z/OS Metro/Global Mirror business continuity solutions are designed to provide the
advanced functionality and flexibility needed to tailor a business continuity environment for
almost any recovery point or recovery time objective.
The DS8000 also offers three-site solutions with Metro/Global Mirror and z/OS
Metro/Global Mirror for additional high availability and disaster protection. z/OS Global
Mirror offers Incremental Resync, which can significantly reduce the time needed to
restore a D/R environment after a HyperSwap® in a three-site Metro/z/OS Global Mirror
configuration. With Incremental Resync, it is possible to change the copy target destination
of a copy relation without requiring a full copy of the data. Another important feature for
z/OS Global Mirror (2-site) and z/OS Metro/Global Mirror (3-site) is Extended Distance
FICON, which can help reduce the need for channel extender configurations by
increasing the number of read commands in flight.
The Copy Services can be managed and automated with IBM Tivoli Storage Productivity
Center for Replication (TPC-R).

1.2 DS8800 features and functions overview


The IBM System Storage DS8800 is a high performance, high capacity series of disk storage
systems. It offers balanced performance and storage capacity that scales linearly up to
hundreds of terabytes.

The IBM System Storage DS8800 highlights include:


򐂰 Robust, flexible, enterprise class, and cost-effective disk storage
򐂰 Exceptionally high system availability for continuous operations
򐂰 Cutting edge technology with small form factor (2.5-inch) SAS-2 drives, 6 Gbps SAS-2
high density storage enclosures
򐂰 8 Gbps Fibre Channel host and device adapters providing improved space and energy
efficiency, and increased performance
򐂰 IBM POWER6+ processor technology
򐂰 Capacities currently from 2.3 TB (16 x 146 GB 15k rpm SAS drives) to 633 TB
(1056 x 600 GB 10k rpm SAS drives)
򐂰 Point-in-time copy function with FlashCopy, FlashCopy SE
򐂰 Remote Mirror and Copy functions with Metro Mirror, Global Copy, Global Mirror,
Metro/Global Mirror, z/OS Global Mirror, and z/OS Metro/Global Mirror with Incremental
Resync capability
򐂰 Support for a wide variety and intermix of operating systems, including IBM i and System z
򐂰 Designed to increase storage efficiency and utilization, ideal for green data centers

1.2.1 Overall architecture and components


From an architectural point of view, the DS8800 offers continuity with respect to the
fundamental architecture of the predecessor DS8100, DS8300, and DS8700 models. This
ensures that the DS8800 can use a stable and well-proven operating environment, offering
the optimum in availability. The hardware is optimized to provide higher performance,
connectivity, and reliability.

The DS8800 is available with different models and configurations, which are discussed in
detail in Chapter 2, “IBM System Storage DS8800 models” on page 21.

Figure 1-2 and Figure 1-3 show the front and rear view of a DS8800 base frame (model 951)
with two expansion frames (model 95E), which is the current maximum DS8800 system
configuration.



Figure 1-2 DS8800 base frame with two expansion frames (front view, 2-way, no PLD option)

Figure 1-3 DS8800 base frame with two expansion frames (rear view, 2-way, no PLD option)



IBM POWER6+ processor technology
The DS8800 exploits IBM POWER6+ technology. The symmetric multiprocessing (SMP)
system features 2-way or 4-way, copper-based, silicon-on-insulator (SOI)-based POWER6+
microprocessors running at 5.0 GHz.

Compared to the POWER5+ processor in the DS8300 models, the POWER6+ processor can
enable over a 50% performance improvement in I/O operations per second in transaction
processing workload environments. Additionally, sequential workloads can receive as much
as 200% bandwidth improvement. The DS8800 offers either a dual 2-way processor complex
or a dual 4-way processor complex. A processor complex is also referred to as a storage
server or central processor complex (CPC).

Internal PCIe-based fabric


DS8800 uses direct point-to-point high speed PCI Express (PCIe) connections to the I/O
enclosures to communicate with the device and host adapters. Each single PCI Express
connection operates at a speed of 2 GB/s in each direction. There are up to 16 PCI Express
connections from the processor complexes to the I/O enclosures.

Device adapters
The DS8800 offers 8 Gbps device adapters. These adapters provide improved IOPS
performance, throughput, and scalability. They are optimized for SSD technology and
architected for long-term support for scalability growth. These capabilities complement the
POWER6+ server family to provide significant performance enhancements allowing up to
400% improvement in single adapter throughput performance.

Switched Fibre Channel backbone with SAS-2 drives


The DS8800 uses a switched FC-AL / SAS-2 based architecture as the backend for its disk
interconnection. The device adapters (DAs) connect to the controller cards in the storage
enclosures using 8 Gbps Fibre Channel arbitrated loop (FC-AL) with optical short wave
multimode interconnect. The controller cards in the storage enclosures convert to 6 Gbps
SAS-2 protocol on the disk side, offering a point-to-point connection to each drive and device
adapter, so that there are four paths available from the DS8800 processor complexes to each
disk drive.

Serial Attached SCSI generation 2 disk drives (SAS-2)


The DS8800 offers a selection of industry standard Serial Attached SCSI second generation
drives (SAS-2) that communicate using a 6 Gbps interface including 146 GB (15 K rpm),
450 GB (10 K rpm) and 600 GB (10 K rpm) drives. The 600 GB 10 K rpm SAS-2 drives allow
a single system to scale up to 633 TB of total capacity with 1056 drives and three frames. The
DS8800 series also allows clients to install Full Disk Encryption drive sets.

Solid State Drives (SSDs)


With 300 GB Solid State Drives (SSD), the DS8800 offers opportunities for ultra-high
performance applications. The SSD drives are the best choice for I/O-intensive workloads.
They provide up to 100 times the throughput and 10 times lower response time than 15K rpm
spinning disks. Additionally, they also consume much less power. For more information about
SSDs, see DS8000: Introducing Solid State Drives, REDP-4522.

Host adapters
The DS8800 series offers host connectivity with four-port or eight-port 8 Gbps Fibre
Channel/FICON host adapters. The 8 Gbps Fibre Channel/FICON Host Adapters are offered
in longwave and shortwave versions, and auto-negotiate to 8 Gbps, 4 Gbps, or 2 Gbps link speeds.
Each port on the adapter can be individually configured to operate with Fibre Channel
Protocol (FCP) (also used for remote mirroring) or FICON. A DS8800 with the dual 4-way
feature can support up to a maximum of 16 host adapters, which provide up to 128 Fibre
Channel/FICON ports.
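
As an illustration, each installed I/O port can be listed and its protocol set with the DS
Command-Line Interface (DS CLI). The following is a minimal sketch only; the port IDs are
placeholders, and the exact topology values accepted depend on the DS CLI level installed:

   dscli> lsioport -l
   dscli> setioport -topology ficon I0010
   dscli> setioport -topology scsi-fcp I0011

Ports configured for FCP can be used for open systems attachment and remote mirroring
links, while FICON ports connect to System z hosts.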

Note: ESCON® adapters are no longer supported.

IBM System Storage Productivity Center management console


As the main focal point for configuration and management, the DS8800 leverages the IBM
System Storage Productivity Center (SSPC), an advanced management console that can
provide a view of both IBM and non-IBM storage environments. The SSPC can enable a
greater degree of simplification for organizations confronted with the growing number of
element managers in their environment. The SSPC is an external System x® server with
preinstalled software, including IBM Tivoli Storage Productivity Center Basic Edition.

Utilizing IBM Tivoli Storage Productivity Center (TPC) Basic Edition software, SSPC extends
the capabilities available through the IBM DS Storage Manager. SSPC offers the unique
capability to manage a variety of storage devices connected across the storage area network
(SAN). The rich, user-friendly graphical user interface provides a comprehensive view of the
storage topology, from which the administrator can explore the health of the environment at
an aggregate or in-depth view. Moreover, the TPC Basic Edition, which is pre-installed on the
SSPC, can be optionally upgraded to TPC Standard Edition, which includes enhanced
functionality including monitoring and reporting capabilities that may be used to enable more
in-depth performance reporting, asset and capacity reporting, automation for the DS8000,
and to manage other resources, such as other storage devices, server file systems, tape
drives, tape libraries, and SAN environments.

Storage Hardware Management Console for the DS8800


The Hardware Management Console (HMC) is the focal point for maintenance activities. The
management console is a dedicated workstation (mobile computer) that is physically located
(installed) inside the DS8800 and can proactively monitor the state of your system, notifying
you and IBM when service is required. It can also be connected to your network to enable
centralized management of your system using the IBM System Storage DS Command-Line
Interface or storage management software utilizing the IBM System Storage DS Open API.
The HMC supports the IPv4 and IPv6 standards. For further information about IPv4 and IPv6,
see 8.3, “Network connectivity planning” on page 165.
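
For example, an administrator workstation with network access to the HMC can start a DS CLI
session as sketched below; the HMC address, user ID, and password are placeholders for
your installation:

   dscli -hmc1 10.10.10.1 -user admin -passwd <password>
   dscli> lssi
   dscli> lsuser

The lssi command lists the storage images managed by this HMC, and lsuser shows the
defined administrative users.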

An external management console is available as an optional feature and can be used as a
redundant management console for environments with high availability requirements.

Tivoli Key Lifecycle Manager (TKLM) isolated key server


The Tivoli Key Lifecycle Manager software performs key management tasks for IBM
encryption-enabled hardware, such as the IBM System Storage DS8000 Series family and
IBM encryption-enabled tape drives, by providing, protecting, storing, and maintaining
encryption keys that are used to encrypt information being written to, and decrypt information
being read from, encryption-enabled disks.

For DS8800 storage systems shipped with Full Disk Encryption (FDE) drives, two TKLM key
servers are required. An isolated key server (IKS) with dedicated hardware and
non-encrypted storage resources is required and can be ordered from IBM.

1.2.2 Storage capacity


The physical capacity for the DS8800 is purchased through disk drive sets. A disk drive set
contains sixteen identical disk drive modules (DDMs), which have the same capacity and the
same revolutions per minute (rpm). In addition, Solid State Drives (SSDs) are available in disk
drive sets of 16 DDMs. The available drive options provide industry class capacity and
price/performance to address enterprise application and business requirements.

There is space for a maximum of 240 disk drive modules (DDMs) in the base frame (951),
336 DDMs in the first expansion frame (95E) and another 480 DDMs in the second expansion
frame (95E). With a maximum of 1056 DDMs, the DS8800 model 951 with the dual 4-way
feature, using 600 GB drives, currently provides up to 633 TB of physical storage capacity
with two expansion frames (95E) in a considerably smaller footprint and up to 40% less power
consumption than previous generations of DS8000.

The DS8800 storage capacity can be configured as RAID 5, RAID 6, RAID 10, or as a
combination (some restrictions apply for Full Disk Encryption (FDE) and Solid State Drives).

IBM Standby Capacity on Demand offering for the DS8800


Standby Capacity on Demand (Standby CoD) provides standby on demand storage for the
DS8800 that allows you to access the extra storage capacity whenever the need arises. With
CoD, IBM installs up to six additional Standby CoD disk drive sets (96 disk drives) in your
DS8800. At any time, you can logically configure your CoD drives, concurrently with
production, and you are automatically be charged for the capacity.

1.2.3 Supported environments


The DS8800 offers connectivity support across a broad range of server environments,
including IBM Power Systems™, System z, System p, System i, and System x servers,
servers from Sun and Hewlett-Packard, and non-IBM Intel®- and AMD-based servers.

The DS8800 supports over 90 platforms. For the most current list of supported platforms, see
the DS8000 System Storage Interoperation Center at:
http://www.ibm.com/systems/support/storage/config/ssic/index.jsp

This rich support of heterogeneous environments and attachments, along with the flexibility to
easily partition the DS8800 storage capacity among the attached environments, can help
support storage consolidation requirements and dynamic, changing environments.

1.2.4 Copy Services functions


For IT environments that cannot afford to stop their systems for backups, the DS8800
provides a fast replication technique that can provide a point-in-time copy of the data in a few
seconds or even less. This function is called FlashCopy.

For data protection and availability needs, the DS8800 provides Metro Mirror, Global Mirror,
Global Copy, Metro/Global Mirror, and z/OS Global Mirror, which are Remote Mirror and Copy
functions. These functions are also available and are fully interoperable with previous models
of the DS8000 family and even the ESS 800 and 750 models. These functions provide
storage mirroring and copying over large distances for disaster recovery or availability
purposes.

We briefly discuss Copy Services in Chapter 6, “IBM System Storage DS8800 Copy Services
overview” on page 107. For detailed information on Copy Services, see the Redbooks IBM
System Storage DS8000: Copy Services for Open Systems, SG24-6788, and IBM System
Storage DS8000: Copy Services for IBM System z, SG24-6787.



FlashCopy
The primary objective of FlashCopy is to quickly create a point-in-time copy of a source
volume on a target volume. The benefits of FlashCopy are that the point-in-time target copy is
immediately available for use for backups or testing and that the source volume is
immediately released so that applications can continue processing with minimal application
downtime. The target volume can be either a logical or physical copy of the data, with the
latter copying the data as a background process. In a z/OS environment, FlashCopy can also
operate at a data set level.

The following sections summarize the options available with FlashCopy.

Multiple Relationship FlashCopy


Multiple Relationship FlashCopy allows a source to have FlashCopy relationships with up to
12 targets simultaneously.

Incremental FlashCopy
Incremental FlashCopy provides the capability to refresh a LUN or volume involved in a
FlashCopy relationship. When a subsequent FlashCopy is initiated, only the data required to
make the target current with the source's newly established point-in-time is copied.
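
The following DS CLI sketch illustrates the idea; the volume IDs are arbitrary examples, and
the relationship must be created with change recording and persistence enabled before it
can be incrementally refreshed:

   dscli> mkflash -record -persist 1000:1100
   dscli> resyncflash -record -persist 1000:1100

The resyncflash command copies only the tracks that changed since the previous
point-in-time copy was established.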

Remote Mirror Primary FlashCopy


Remote Mirror primary FlashCopy allows a FlashCopy relationship to be established where
the target is also a remote mirror primary volume. This enables a full or incremental
point-in-time copy to be created at a local site, and then use remote mirroring commands to
copy the data to the remote site. While the background copy task is copying data from the
source to the target, the remote mirror pair goes into a copy pending state.

Consistency Groups
FlashCopy Consistency Groups can be used to maintain a consistent point-in-time copy
across multiple LUNs or volumes, or even multiple DS8000, ESS 800, and ESS 750 systems.

Inband commands over remote mirror link


In a remote mirror environment, inband FlashCopy allows commands to be issued from the
local or intermediate site and transmitted over the remote mirror Fibre Channel links for
execution on the remote DS8000. This eliminates the need for a network connection to the
remote site solely for the management of FlashCopy.

IBM FlashCopy SE
The IBM FlashCopy SE feature provides a “space efficient” copy capability that can greatly
reduce the storage capacity needed for point-in-time copies. Only the capacity needed to
save pre-change images of the source data is allocated in a copy repository. This enables
more space efficient utilization than is possible with the standard FlashCopy function.
Furthermore, less capacity can mean fewer disk drives and lower power and cooling
requirements, which can help reduce costs and complexity. FlashCopy SE can be especially
useful in the creation of temporary copies for tape backup, online application checkpoints, or
copies for disaster recovery testing. For more information about FlashCopy SE, see IBM
System Storage DS8000 Series: IBM FlashCopy SE, REDP-4368.
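
As a hedged example, a space efficient target volume is created with a track space efficient
storage allocation method and then used as the FlashCopy target. This assumes that a space
efficient repository has already been defined in the extent pool, and the pool and volume IDs
shown are placeholders:

   dscli> mkfbvol -extpool P1 -cap 100 -sam tse -name fcse_tgt 1100
   dscli> mkflash -nocp 1000:1100

The -nocp option suppresses the background copy, which is the normal mode of operation
for FlashCopy SE.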

Remote Mirror and Copy functions


The Remote Mirror and Copy functions include Metro Mirror, Global Copy, Global Mirror, and
Metro/Global Mirror. There is also z/OS Global Mirror for the System z environments. As with
FlashCopy, Remote Mirror and Copy functions can also be established between DS8000
systems and ESS 800/750 systems.



The following sections summarize the Remote Mirror and Copy options available with the
DS8000 series.

Metro Mirror
Metro Mirror, previously called Peer-to-Peer Remote Copy (PPRC), provides a synchronous
mirror copy of LUNs or volumes at a remote site within 300 km. Metro Mirror Consistency
Groups, when used with a supporting application, can be used to maintain data and
transaction consistency across multiple LUNs or volumes, or even multiple DS8000,
ESS 800, and ESS 750 systems.
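
A minimal DS CLI sketch of establishing a Metro Mirror pair is shown below, assuming that
Fibre Channel remote mirroring links exist between the two systems; the device IDs, WWNN,
and port pairs are placeholders:

   dscli> mkpprcpath -remotedev IBM.2107-75XXXXX -remotewwnn <secondary_wwnn> -srclss 10 -tgtlss 10 I0030:I0100
   dscli> mkpprc -remotedev IBM.2107-75XXXXX -type mmir 1000:1000
   dscli> lspprc 1000

A Global Copy relationship is established in the same way with -type gcp instead of
-type mmir.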

Global Copy
Global Copy, previously called Extended Distance Peer-to-Peer Remote Copy (PPRC-XD), is
a non-synchronous long distance copy option for data migration and backup.

Global Mirror
Global Mirror provides an asynchronous mirror copy of LUNs or volumes over virtually
unlimited distances. The distance is typically limited only by the capabilities of the network
and channel extension technology being used. A Global Mirror Consistency Group is used to
maintain data consistency across multiple LUNs or volumes, or even multiple DS8000,
ESS 800, and ESS 750 systems.

Metro/Global Mirror
Metro/Global Mirror is a three-site data replication solution for both Open Systems and the
System z environments. Local site (site a) to intermediate site (site b) provides high
availability replication using synchronous Metro Mirror, and intermediate site (site b) to remote
site (site c) provides long distance disaster recovery replication using asynchronous Global
Mirror.

z/OS Global Mirror


z/OS Global Mirror, previously called Extended Remote Copy (XRC), provides an
asynchronous mirror copy of volumes over virtually unlimited distances for the System z. It
now provides increased parallelism through multiple SDM readers (Multiple Reader
capability).

z/OS Metro/Global Mirror


This is a combination of Copy Services for System z environments that uses z/OS Global
Mirror to mirror primary site data to a remote location that is at a long distance and also that
uses Metro Mirror to mirror the primary site data to a location within the metropolitan area.
This enables a z/OS three-site high availability and disaster recovery solution.

z/OS Global Mirror also offers Incremental Resync, which can significantly reduce the time
needed to restore a DR environment after a HyperSwap in a three-site z/OS Metro/Global
Mirror configuration, because it is possible to change the copy target destination of a copy relation
without requiring a full copy of the data.

1.2.5 Service and setup


The installation of the DS8800 is performed by IBM in accordance with the installation
procedure for this machine. The client’s responsibility is the installation planning, retrieval and
installation of feature activation codes, and logical configuration planning and execution.

For maintenance and service operations, the Storage Hardware Management Console
(HMC) is the focal point. The management console is a dedicated workstation that is
physically located (installed) inside the DS8800 storage system and that can automatically
monitor the state of your system, notifying you and IBM when service is required. We suggest
having a dual HMC configuration, particularly when using Full Disk Encryption.

The HMC is also the interface for remote services (Call Home and Call Back), which can be
configured to meet client requirements. It is possible to allow one or more of the following:
򐂰 Call on error (machine-detected)
򐂰 Connection for a few days (client-initiated)
򐂰 Remote error investigation (service-initiated)

The remote connection between the management console and the IBM Service organization
is done using a virtual private network (VPN) point-to-point connection over the internet or
modem. A new secure SSL connection protocol option is now available for call home support
and additional audit logging.

The DS8800 storage system can be ordered with an outstanding four-year warranty, an
industry first, on both hardware and software.

1.2.6 Configuration flexibility


The DS8000 series uses virtualization techniques to separate the logical view of hosts onto
LUNs from the underlying physical layer, thus providing high configuration flexibility.
Virtualization is discussed in Chapter 5, “Virtualization concepts” on page 85.

Dynamic LUN/volume creation, deletion, and expansion


The DS8000 gives a high degree of flexibility in managing storage, allowing LUNs to be
created and deleted non-disruptively. Also, when a LUN is deleted, the freed capacity can be
used with other free space to form a LUN of a different size. A LUN can also be dynamically
increased in size.
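
The following DS CLI sketch illustrates these operations; the extent pool, capacities, and
volume IDs are placeholders, and -eam rotateexts (Storage Pool Striping) is already the
default allocation method at this code level:

   dscli> mkfbvol -extpool P0 -cap 100 -eam rotateexts -name itso_#h 2000-2003
   dscli> chfbvol -cap 200 2000
   dscli> rmfbvol 2003

The chfbvol command performs the Dynamic Volume Expansion, and rmfbvol returns the
freed extents to the extent pool.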

Large LUN and large count key data (CKD) volume support
You can configure LUNs and volumes to span arrays, allowing for larger LUN sizes up to
2 TB. The maximum CKD volume size is 262,668 cylinders (about 223 GB), greatly reducing
the number of volumes to be managed and creating a new volume type on z/OS called 3390
Model A. This capability is referred to as Extended Address Volumes and requires z/OS 1.10
or later.
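
As a hedged sketch, an Extended Address Volume can be created with the DS CLI as shown
below; the extent pool and volume ID are placeholders, the target logical control unit must
already be defined, and the exact data type parameter can vary by code level:

   dscli> mkckdvol -extpool P2 -cap 262668 -datatype 3390-A -name eav_#h 0B00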

Flexible LUN-to-LSS association


With no predefined association of arrays to LSSs on the DS8000 series, users are free to put
LUNs or CKD volumes into LSSs and make best use of the 256 address range, overcoming
previous ESS limitations, particularly for System z.

Simplified LUN masking


The implementation of volume group-based LUN masking (as opposed to adapter-based
masking, as on the ESS) simplifies storage management by grouping all or some WWPNs of
a host into a Host Attachment. Associating the Host Attachment to a Volume Group allows all
adapters within it access to all of the storage in the Volume Group.
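
A minimal DS CLI sketch of this volume group based masking is shown below; the WWPN,
host type, and names are placeholders, and V0 stands for the volume group ID returned by
mkvolgrp:

   dscli> mkvolgrp -type scsimask -volume 2000-2003 aix_prod_vg
   dscli> mkhostconnect -wwname <host_wwpn> -hosttype pSeries -volgrp V0 aix_host_port0

Repeating mkhostconnect for each WWPN of the host and pointing all of them at the same
volume group gives every adapter access to the same set of LUNs.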

Logical definitions: maximum values


Here is a list of the current DS8000 maximum values for the major logical definitions:
򐂰 Up to 255 logical subsystems (LSS)
򐂰 Up to 65280 logical devices
򐂰 Up to 2 TB LUNs
򐂰 Up to 262668 cylinder CKD volumes (Extended Address Volumes)
򐂰 Up to 130560 FICON logical paths (512 logical paths per control unit image)
򐂰 Up to 1280 logical paths per FC port
򐂰 Up to 8000 process logins (509 per SCSI-FCP port)

1.2.7 IBM Certified Secure Data Overwrite


Sometimes regulations and business prudence require that the data actually be removed
when the media is no longer needed.

An STG Lab Services Offering for all the DS8000 series and the ESS models 800 and 750
includes the following services:
򐂰 Multi-pass overwrite of the data disks in the storage system
򐂰 Purging of client data from the server and HMC disks

Note: The secure overwrite functionality is offered as a service exclusively and is not
intended for use by clients, IBM Business Partners, or IBM field support personnel.

To discover more about the IBM Certified Secure Data Overwrite service offerings, contact
your IBM sales representative or IBM Business Partner.

1.3 Performance features


The IBM System Storage DS8800 offers optimally balanced performance. This is possible
because the DS8800 incorporates many performance enhancements, such as the dual 2-way
and dual 4-way POWER6+ processor complex implementation, fast 8 Gbps Fibre
Channel/FICON host adapter cards, fast 8 Gbps Fibre Channel protocol device adapter
cards, latest Serial Attached SCSI generation 2 (SAS-2) disk drive technology with 6 Gbps,
Solid State Drives, and the high bandwidth, fault-tolerant point-to-point PCI Express internal
interconnections.

With all these components, the DS8800 is positioned at the top of the high performance
category.

1.3.1 Sophisticated caching algorithms


IBM Research conducts extensive investigations into improved algorithms for cache
management and overall system performance improvements.

Sequential Prefetching in Adaptive Replacement Cache


One of the performance enhancers of the DS8800 is its self-learning cache algorithm, which
improves cache efficiency and enhances cache hit ratios. This algorithm, which is used in the
DS8000 series, is called Sequential Prefetching in Adaptive Replacement Cache (SARC).

SARC provides the following abilities:


򐂰 Sophisticated, patented algorithms to determine what data should be stored in cache
based upon the recent access and frequency needs of the hosts
򐂰 Pre-fetching, which anticipates data prior to a host request and loads it into cache
򐂰 Self-learning algorithms to adaptively and dynamically learn what data should be stored in
cache based upon the frequency needs of the hosts



Adaptive Multi-stream Prefetching
Adaptive Multi-stream Prefetching (AMP) is a breakthrough caching technology that improves
performance for common sequential and batch processing workloads on the DS8000. AMP
optimizes cache efficiency by incorporating an autonomic, workload-responsive, and
self-optimizing prefetching technology.

Intelligent Write Caching


Intelligent Write Caching (IWC) improves performance through better write cache
management and destaging order of writes. It can double the throughput for random write
workloads. Specifically, database workloads benefit from this new IWC Cache algorithm.

SARC, AMP, and IWC play complementary roles. While SARC is carefully dividing the cache
between the RANDOM and the SEQ lists to maximize the overall hit ratio, AMP is managing
the contents of the SEQ list to maximize the throughput obtained for the sequential
workloads. IWC manages the write cache and decides what order and rate to destage to disk.

1.3.2 Solid State Drives


To improve data transfer rate (IOPS) and response time, the DS8000 series provides support
for Solid State Drives (SSDs). Solid State Drives have improved I/O transaction-based
performance over traditional platter-based drives. The DS8800 initially offers 300 GB Solid
State Drives with exceptional performance.

Solid State Drives are high-IOPS class enterprise storage devices targeted at Tier 0,
I/O-intensive workload applications that can use a high level of fast-access storage. Solid
State Drives offer a number of potential benefits over Hard Disk Drives, including better IOPS
performance, lower power consumption, less heat generation, and lower acoustical noise.

The DS8800 can take even better advantage of Solid State Drives due to its faster 8 Gbps
Fibre Channel protocol device adapters (DAs) compared to previous models of the DS8000
family.

1.3.3 Multipath Subsystem Device Driver


The Multipath Subsystem Device Driver (SDD) is a pseudo-device driver on the host system
designed to support the multipath configuration environments in IBM products. It provides
load balancing and enhanced data availability capability. By distributing the I/O workload over
multiple active paths, SDD provides dynamic load balancing and eliminates data-flow
bottlenecks. SDD also helps eliminate a potential single point of failure by automatically
rerouting I/O operations when a path failure occurs.
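
For example, on hosts where SDD is installed, the state of the adapters and paths can be
checked with the datapath utility; the output details vary by platform and SDD level:

   # datapath query adapter
   # datapath query device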

SDD is provided with the DS8000 series at no additional charge. Fibre Channel (SCSI-FCP)
attachment configurations are supported in the AIX, HP-UX, Linux, Windows®, Novell
NetWare, and Oracle Solaris environments.

Note: Support for multipath is included in an IBM i server as part of Licensed Internal Code
and the IBM i operating system (including i5/OS).

For more information about SDD, see IBM System Storage DS8000: Host Attachment and
Interoperability, SG24-8887.



1.3.4 Performance for System z
The DS8000 series supports the following IBM performance enhancements for System z
environments:
򐂰 Parallel Access Volumes (PAVs) enable a single System z server to simultaneously
process multiple I/O operations to the same logical volume, which can help to significantly
reduce device queue delays. This is achieved by defining multiple addresses per volume.
With Dynamic PAV, the assignment of addresses to volumes can be automatically
managed to help the workload meet its performance objectives and reduce overall
queuing. PAV is an optional feature on the DS8000 series.
򐂰 HyperPAV is designed to enable applications to achieve equal or better performance than
with PAV alone, while also using fewer Unit Control Blocks (UCBs) and eliminating the
latency in targeting an alias to a base. With HyperPAV, the system can react immediately
to changing I/O workloads.
򐂰 Multiple Allegiance expands the simultaneous logical volume access capability across
multiple System z servers. This function, along with PAV, enables the DS8000 series to
process more I/Os in parallel, helping to improve performance and enabling greater use of
large volumes.
򐂰 I/O priority queuing allows the DS8000 series to use I/O priority information provided by
the z/OS Workload Manager to manage the processing sequence of I/O operations.
򐂰 High Performance FICON for z (zHPF) reduces the impact associated with supported
commands on current adapter hardware, thereby improving FICON throughput on the
DS8000 I/O ports. The DS8800 also supports the new zHPF I/O commands for multi-track
I/O operations.

Chapter 7, “Performance” on page 121, gives you more information about the performance
aspects of the DS8000 family.

1.3.5 Performance enhancements for IBM Power Systems


Many IBM Power Systems users can benefit from the following DS8000 features:
򐂰 End-to-end I/O priorities
򐂰 Cooperative caching
򐂰 Long busy wait host tolerance
򐂰 Automatic Port Queues

Chapter 7, “Performance” on page 121, gives you more information about these performance
enhancements.

1.3.6 Performance enhancements for z/OS Global Mirror


Many users of z/OS Global Mirror, which is the System z-based asynchronous disk mirroring
capability, will benefit from the DS8000 enhancement “z/OS Global Mirror suspend instead of
long busy option”. In the event of high workload peaks, which can temporarily overload the
z/OS Global Mirror configuration bandwidth, the DS8000 can initiate a z/OS Global Mirror
SUSPEND, preserving primary site application performance, which is an improvement over
the previous LONG BUSY status.

Consider the following points:


򐂰 All users of z/OS Global Mirror benefit from the DS8000’s “z/OS Global Mirror Multiple
Reader” support. This recent enhancement spreads the z/OS Global Mirror workload
across more than a single reader. In the event of high workload peaks restricted to a few
volumes, which can mean restricted to a single reader, the peak demand can now be
balanced across a set of up to 16 readers. This enhancement provides more efficient use
of the site-to-site network capacity, a higher single volume throughput capability, and an
environment that can effectively use Parallel Access Volumes.
򐂰 Extended Distance FICON is a capability that can help reduce the need for channel
extenders in z/OS Global Mirror configurations by increasing the numbers of read
commands in flight.

See IBM System Storage DS8000: Copy Services for IBM System z, SG24-6787, for a
detailed discussion of z/OS Global Mirror and related enhancements.



Chapter 2. IBM System Storage DS8800 models

This chapter provides an overview of the DS8800 storage subsystem and describes the
different models and how well they scale regarding capacity and performance.



2.1 DS8800 model overview
The DS8800 family includes the DS8800 Model 951 base frame and the associated DS8800
Expansion Unit Model 95E.

The DS8800 is available in either of the following configurations:


򐂰 DS8800 Model 951 Standard Cabling
This model is available as either a dual 2-way processor complex with installation
enclosures for up to 144 DDMs and 4 FC host adapter cards, or as a dual 4-way processor
complex with enclosures for up to 240 DDMs and 8 FC host adapter cards. Standard
cabling is optimized for performance and highly scalable configurations, allowing large
long-term growth.

򐂰 DS8800 Model 951 Business Class Cabling


This configuration of the Model 951 is available as a dual 2-way processor complex with
installation enclosures for up to 240 DDMs and 4 FC host adapter cards. The business
class option allows a system to be configured with more drives per device adapter, thus
reducing configuration cost and increasing adapter utilization. Scalability is limited with
this option.

Note: Model 951 supports nondisruptive upgrades from dual 2-way to dual 4-way.

򐂰 DS8800 Model 95E


This expansion frame for the 951 model includes enclosures for additional DDMs and
additional FC adapter cards to allow a maximum configuration of 16 FC adapter cards.
The Expansion Unit 95E can only be attached to the 951 4-way processor complex. Up to
two expansion frames can be attached to a Model 951. FC adapter cards can only be
installed in the first expansion frame.

򐂰 Former 92E and 94E expansion frames cannot be reused in the DS8800.
򐂰 A Model 951 supports nondisruptive upgrades from a 48 DDM installation to a full
configuration with two expansion racks.
򐂰 Only one expansion frame can be added concurrently to the business class
configuration.
򐂰 Addition of a second expansion frame to a business class configuration requires
recabling as standard class. Note that this is disruptive, and is available by RPQ only.

Table 2-1 provides a comparison of the DS8800 model 951 and its available combination of
resources.

Table 2-1 DS8800 series model comparison 951 and additional resources
Base model  Cabling   Expansion model  Processor type  Max DDMs  Max processor memory  Max host adapters
951         Standard  None             2-way 5.0 GHz   144       128 GB                4
951         Business  None             2-way 5.0 GHz   240       128 GB                4
951         Standard  None             4-way 5.0 GHz   240       384 GB                8
951         Standard  1 x 95E          4-way 5.0 GHz   576       384 GB                16
951         Standard  2 x 95E          4-way 5.0 GHz   1056      384 GB                16
951         Business  1 x 95E          4-way 5.0 GHz   576       384 GB                12

Depending on the DDM sizes (which can be different within a 951 or 95E) and the number of
DDMs, the total capacity is calculated accordingly.

Each Fibre Channel/FICON host adapter has four or eight Fibre Channel ports, providing up
to 128 Fibre Channel ports for a maximum configuration.

Machine type 242x


DS8800 models are associated to machine type 242x, exclusively. This machine type
corresponds to the “Enterprise Choice” length of warranty offer that allows a 1-year, 2-year,
3-year, or 4-year warranty period (x=1, 2, 3, or 4, respectively). The 95E expansion frame has
the same 242x machine type as the base unit.

2.1.1 DS8800 Model 951 overview


The DS8800 Model 951, shown in Figure 2-1, has the following features:
򐂰 A base frame with up to 240 DDMs for a maximum base frame disk storage capacity of
140 TB in high density storage enclosures. 1
򐂰 Two processor complexes, each with an IBM System p POWER6+ 5.0 GHz, 2-way or 4-way
Central Electronic Complex (CEC). 2
򐂰 Up to 128 GB (2-way) or 384 GB (4-way) of processor memory, also referred to as the
cache. Note that the DS8800 supports concurrent cache upgrades.
򐂰 Up to 8 four-port or eight-port Fibre Channel/FICON host adapters (HAs) of 8 Gbps. 3
Each port can be independently configured as either:
– FCP port to open systems hosts attachment
– FCP port for Metro Mirror, Global Copy, Global Mirror, and Metro/Global Mirror
connectivity
– FICON port to connect to System z hosts
– FICON port for z/OS Global Mirror connectivity
This totals up to 64 ports with any mix of FCP and FICON ports.
򐂰 A 2-way configuration requires two battery packs. A 4-way configuration requires three
battery packs. 4
򐂰 The DS8800 has redundant primary power supplies (PPS) 5, located on the side of the frame.
They provide a redundant 208 VDC power distribution to the rack. The processor nodes,
I/O drawers, and storage enclosures have dual power supplies that are connected to the
rack power distribution units (PDUs). 6



򐂰 The DS8800 Model 951 can connect up to two expansion frames (Model 95E). Figure 2-1
displays a front and rear view of a DS8800 Model 951 with the covers off, displaying the
indicated components.

Figure 2-1 DS8800 base frame with covers removed: front and rear

Figure 2-2 shows the maximum configuration for a DS8800 Model 951 base frame with one
95E expansion frame.

Figure 2-2 DS8800 configuration: 951 base unit with one 95E expansion frame: front

There are no additional I/O enclosures installed for the second expansion frame. The result of
installing all possible 1056 DDMs is that they will be distributed nearly evenly over all the
device adapter (DA) pairs (for an explanation of DA pairs, refer to 3.4.1, “Device adapters” on
page 44). The second 95E expansion frame is displayed in Figure 2-3.

Figure 2-3 DS8800 models 951/95E maximum configuration with 1056 disk drives: front

The DS8800 business cabling class option of the Model 951 is available as a dual 2-way
processor complex with installation enclosures for up to 240 DDMs and 4 FC adapter cards.
Figure 2-4 shows the maximum configuration of 2-way standard versus 2-way business class
cabling.

Figure 2-4 DS8800 2-way processor standard cabling versus business cabling: front

To connect an expansion frame to the business cabling configuration, an upgrade from a
2-way to a 4-way processor complex is a prerequisite. The upgrade of the processor complex
and the addition of the expansion frame are both concurrent operations. Figure 2-5 shows the
maximum configuration for business class cabling.

Figure 2-5 Business class cabling with expansion frame: front

The DS8800 offers a selection of disk drives, including Serial Attached SCSI second
generation drives (SAS) that feature a 6 Gbps interface. These drives, using a 2.5-inch form
factor, provide increased density and thus increased performance per frame. The SAS drives
are available in 146 GB (15K RPM) as well as 450 GB (10K RPM) and 600 GB (10K RPM)
capacities.

Besides SAS hard disk drives (HDDs), it is also possible to install 300 GB Solid State Drives
(SSDs) in the DS8800. There are restrictions regarding how many SSDs drives are supported
and how configurations are intermixed. SSDs are only supported in RAID 5 arrays. Solid
State Drives can be ordered in drive groups of sixteen. The suggested configuration of SSDs
is sixteen drives per DA pair. The maximum configuration of SSDs is 48 per DA pair.

Important: The maximum configuration of SSDs is 48 per DA pair.

SSDs may be intermixed on a DA pair with spinning drives, but intermix between SSDs and
spinning drives is not supported within storage enclosure pairs.

The DS8800 can be ordered with Full Disk Encryption (FDE) drives, with a choice of 450 GB
(10K RPM) and 600 GB (10K RPM) SAS drives. You cannot intermix FDE drives with other
drives in a DS8800 system. For additional information about FDE drives, see IBM Encrypted
Storage Overview and Customer Requirements at the following site:

http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101479
򐂰 The DS8800 model 951 can have up to 144 DDMs and 4 FC adapter cards in the 2-way
standard configuration.



򐂰 The DS8800 model 951 can have up to 240 DDMs and 4 FC adapter cards in the 2-way
business class configuration.
򐂰 The DS8800 model 951 can have up to 240 DDMs and 8 FC adapter cards in the 4-way
configuration in the base frame. The DS8800 model 951 4-way configuration supports up
to 1056 DDMs and 16 FC adapter cards with two expansion frames.

A summary of the capacity characteristics is listed in Table 2-2. The minimum capacity is
achieved by installing one eight-drive group of 300 GB SSD drives.

Table 2-2 Capacity comparison of device adapters, DDMs, and storage capacity

Component           2-way base frame,    2-way base frame,    4-way base,          4-way                4-way
                    one I/O enclosure    business class       two I/O enclosure    (one expansion       (two expansion
                    pair                 cabling              pairs                frame)               frames)

DA pairs            1 or 2               1 or 2               1 to 4               1 to 8               1 to 8

HDDs                Up to 144,           Up to 240,           Up to 240,           Up to 576,           Up to 1056,
                    increments of 16     increments of 16     increments of 16     increments of 16     increments of 16

SSDs                Up to 96,            Up to 96,            Up to 192,           Up to 384,           Up to 384,
                    increments of 16     increments of 16     increments of 16     increments of 16     increments of 16

Physical capacity   2.3 to 86 TB         2.3 to 144 TB        2.3 to 144 TB        0.6 to 346 TB        0.6 to 633 TB

Adding DDMs and Capacity on Demand


The DS8800 series has a linear capacity growth up to 633 TB.
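As a quick check on that figure, the maximum physical capacity follows from the largest
configuration: 1056 drives populated entirely with the largest 600 GB SAS drives. The short
Python sketch below is illustrative only (it is not an IBM sizing tool) and simply reproduces
the arithmetic.

# Illustrative check of the maximum physical capacity figure
max_drives = 1056          # base frame plus two expansion frames
drive_capacity_gb = 600    # largest SAS drive capacity offered
total_tb = max_drives * drive_capacity_gb / 1000.0
print("Maximum physical capacity: %.1f TB" % total_tb)   # 633.6 TB, quoted as 633 TB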

A significant benefit of the DS8800 series is the ability to add DDMs without disruption for
maintenance. IBM offers capacity on demand solutions that are designed to meet the
changing storage needs of rapidly growing e-business. The Standby Capacity on Demand
(CoD) offering is designed to provide you with the ability to tap into additional storage and is
particularly attractive if you have rapid or unpredictable storage growth.

Up to six standby CoD disk drive sets (96 disk drives) can be concurrently field-installed into
your system. To activate, you simply logically configure the disk drives for use, which is a
nondisruptive activity that does not require intervention from IBM.

Upon activation of any portion of a standby CoD disk drive set, you must place an order with
IBM to initiate billing for the activated set. At that time, you can also order replacement
standby CoD disk drive sets. For more information about the standby CoD offering, refer to
the DS8800 series announcement letter, which can be found at the following address:
http://www.ibm.com/common/ssi/index.wss

Device Adapters and performance


By default, the DS8800 comes with an additional pair of Device Adapters per 48 DDMs. If
you order a system with, for example, 96 drives, you will get two Device Adapter (DA) pairs.

When ordering 432 disk drives, you get eight DA pairs, which is the maximum number of DA
pairs. Adding more drives will not add DA pairs.
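A minimal Python sketch of this sizing rule follows, assuming one DA pair for every 48 DDMs,
capped at eight pairs. The actual assignment is determined by the configuration and ordering
process and can differ for specific feature combinations.

import math

def estimate_da_pairs(ddm_count):
    # Rough estimate only: one DA pair per 48 DDMs, capped at the maximum of 8 pairs
    if ddm_count <= 0:
        return 0
    return min(math.ceil(ddm_count / 48), 8)

print(estimate_da_pairs(96))    # 2 DA pairs
print(estimate_da_pairs(432))   # 8 DA pairs (the maximum)
print(estimate_da_pairs(1056))  # still 8; adding more drives does not add DA pairs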



Having many DA pairs is important to achieving a higher throughput level required by some
sequential workloads, such as data warehouse installations requiring a throughput of 1 GBps
or more.

Adding I/O enclosures


With the DS8800, it is now possible to start with a 2-way configuration with disk enclosures for
48 DDMs, and grow to a full scale, 3-frame configuration concurrently.
򐂰 2-way base with one I/O enclosure pair
– Enables lower entry price by not requiring second I/O enclosure pair
򐂰 4-way = 2-way base + processor card feature + second I/O enclosure pair feature
– Enables improved performance on base rack
򐂰 4-way base + first expansion frame
– Enables 4 I/O enclosure pairs and 16 host adapters and 8 device adapter pairs
򐂰 4-way base with first expansion frame + second expansion frame
– Enables up to 1056 drives
򐂰 2-way base (business class cabling) with one I/O enclosure pair
– Enables lower entry price by not requiring second I/O enclosure pair or device adapters
򐂰 4-way base (business class cabling) + processor card feature
– Required for expansion frame
򐂰 4-way base (business class cabling) + first expansion frame
– Enables 3 I/O enclosure pairs
򐂰 4-way base (business class cabling) + first expansion frame + second I/O enclosure pair
– Enables 4 I/O enclosure pairs
– Enables 16 host adapters and 6 device adapter pairs



Chapter 3. Hardware components and architecture
This chapter describes the hardware components of the IBM System Storage DS8800. It
provides readers with more insight into individual components and the architecture that holds
them together.

The following topics are covered in this chapter:


򐂰 Frames
򐂰 DS8800 architecture
򐂰 Storage facility processor complex (CEC)
򐂰 Disk subsystem
򐂰 Host adapters
򐂰 Power and cooling
򐂰 Management console network
򐂰 System Storage Productivity Center
򐂰 Isolated Tivoli Key Lifecycle Manager (TKLM) server



3.1 Frames
The DS8800 is designed for modular expansion. From a high-level view, there appear to be
three types of frames available for the DS8800. However, on closer inspection, the frames
themselves are almost identical. The only variations are the combinations of processors, I/O
enclosures, storage enclosures, batteries, and disks that the frames contain.

Figure 3-1 shows some of the frame variations that are possible with the DS8800. The left
frame is a base frame that contains the processors. In this example, it has two 4-way IBM
System p POWER6+ servers, because only 4-way systems can have expansion frames. The
center frame is an expansion frame that contains additional I/O enclosures but no additional
processors. The right frame is an expansion frame that contains simply disks and no
processors, I/O enclosures, or batteries. Each frame contains a frame power area with power
supplies and other power-related hardware. A DS8800 can consist of up to three frames.

Figure 3-1 DS8800 frame types

3.1.1 Base frame


The left side of the base frame, viewed from the front of the machine, is the frame power area.
Only the base frame contains rack power control cards (RPCs) to control power sequencing
for the storage unit. The base frame contains two primary power supplies (PPSs) to convert
input AC into DC power. The power area also contains two or three battery backup units
(BBUs). A base frame with a two-way processor contains two BBUs, and a base frame with a
four-way processor contains three BBUs.

The base frame can contain up to ten disk enclosures, each of which can contain up to 24
disk drives. In a maximum configuration, the base frame can hold 240 disk drives. Disk drives
are either hard disk drives (HDD) with real spinning disks or Solid State Drives (SSD), which
have no moving parts and enable a significant increase in random transactional processing.

A disk enclosure contains either HDDs or SSDs. A disk enclosure with SSDs contains either
16 drives or is partially populated with eight SSD drives. A disk enclosure populated with
HDDs contains 16 or 24 drives. Note that only up to 48 SSDs can be configured per Device
Adapter (DA) pair. It is inadvisable to configure more than 16 SSDs per DA pair.

The base frame can be configured using either standard or business class cabling. Standard
cabling is optimized for performance and allows for highly scalable configurations with large
long-term growth. The business class option allows a system to be configured with more
drives per device adapter, thereby reducing configuration cost and increasing adapter
utilization. This option is intended for configurations where capacity and high resource
utilization are of the greatest importance. Scalability is limited with the business class
option.

Standard cabling supports either two-way processors with one I/O enclosure pair or four-way
processors with two I/O enclosure pairs. Standard cabling with one I/O enclosure pair
supports up to two DA pairs and six storage enclosures (144 DDMs). Standard cabling with
two I/O enclosure pairs supports up to four DA pairs and ten storage enclosures (240 DDMs).

Business class cabling utilizes two-way processors and one I/O enclosure pair. Business
class cabling supports two DA pairs and up to ten storage enclosures (240 DDMs).

Notes:
򐂰 Business class cabling is available as an initial order only. A business class cabling
configuration can only be ordered as a base frame with no expansion frames.
򐂰 DS8800s do not support model conversion, that is, business class and standard class
cabling conversions are not supported. Re-cabling is available as an RPQ only and is
disruptive.

Inside the disk enclosures are cooling fans located in the storage enclosure power supply
units. These fans blow exhaust to the rear of the frame.

Between the disk enclosures and the processor complexes are two Ethernet switches and a
Storage Hardware Management Console (HMC).

The base frame contains two processor complexes (CPCs). These System p POWER6+
servers contain the processor and memory that drive all functions within the DS8800.

Finally, the base frame contains two or four I/O enclosures. These I/O enclosures provide
connectivity between the adapters and the processors. Each I/O enclosure can contain up to
two device adapters and up to two host adapters.

The communication path used for adapter-to-processor complex communication in the
DS8800 consists of four-lane (x4) PCI Express Generation 2 connections, providing a
bandwidth of 2 GBps for each connection.

The interprocessor complex communication still utilizes the RIO-G loop as in previous models
of the DS8000 family. However, this RIO-G loop no longer has to handle data traffic, which
greatly improves performance.

3.1.2 Expansion frame


The left side of each expansion frame, viewed from the front of the machine, is the frame
power area. The expansion frames do not contain rack power control cards; these cards are
only present in the base frame. Each expansion frame contains two primary power supplies
(PPSs) to convert the AC input into DC power. Finally, the power area can contain zero or two
battery backup units (BBUs), depending on the model and configuration. A first expansion
rack requires two BBUs. A second expansion rack requires two BBUs if the Extended Power
Line Disturbance (EPLD) feature is installed and no BBUs without EPLD. If the EPLD feature
is installed, the modules must be installed on the base frame and all expansion frames. The
EPLD feature is highly recommended as an additional safeguard against environmental
power fluctuations.
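The battery backup unit counts described here and in 3.1.1, “Base frame” can be summarized
as a simple rule set. The following Python sketch is purely illustrative; it ignores ordering
feature codes and any configurations not described in this chapter.

def bbu_count(frame, processors=4, epld=False):
    # Battery backup units per frame, as described in the text (illustrative only)
    if frame == "base":
        return 2 if processors == 2 else 3
    if frame == "first_expansion":
        return 2
    if frame == "second_expansion":
        return 2 if epld else 0
    raise ValueError("unknown frame type")

print(bbu_count("base", processors=2))            # 2 BBUs
print(bbu_count("base", processors=4))            # 3 BBUs
print(bbu_count("second_expansion", epld=True))   # 2 BBUs only with the EPLD feature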

The first expansion frame can hold up to 14 storage enclosures, which contain the disk drives.
They are described as 24-packs, because each enclosure can hold 24 small form factor (SFF)
disks. In a maximum configuration, the first expansion frame can hold 336 disk drives.

Disk drives are either hard disk drives (HDDs) with real spinning disks or Solid State Drives
(SSDs), which have no moving parts and enable a significant increase in random
transactional processing. A disk enclosure contains either HDDs or SSDs but not both.

Within a disk enclosure pair, an enclosure with SSDs contains either 16 drives or is half
populated with eight SSD drives. A disk enclosure populated with HDDs always contains 16
or 24 drives. Note that only up to 48 SSDs can be configured per Device Adapter (DA) pair,
and it is not advisable to configure more than 16 SSDs per DA pair. Example 3-1 shows the
configuration required to reach the maximum SSD configuration (384 DDMs).

Example 3-1 Configuration for maximum number of SSDs


In order to get to the maximum SSD installation:

Installing from the bottom up on the base frame:

DA pair 2 (bottom 2 enclosures) gets 16 SSDs
DA pair 0 (next 2 enclosures) gets 16 SSDs
DA pair 3 (next 2 enclosures) gets 16 SSDs
DA pair 1 (next 2 enclosures) gets 16 SSDs
DA pair 2 (top 2 enclosures) gets HDDs

Installing from the bottom of the first expansion frame:

DA pairs 6, 4, 7, 5 follow the same pattern (16 SSDs each in the first pair of
storage enclosures), and then the last six storage enclosures (DA pairs 6, 4, and 7)
would all be HDDs.

The next 16 SSDs would go into DA pair 2 (first frame, bottom 2 enclosures), then
the next 16 SSDs into DA pair 0 (first frame, next 2 enclosures), and so on, until all 8
DA pairs had 32 SSDs.

Then the next 16 SSDs would go into DA pair 2 again (first frame, bottom 2
enclosures, filling the enclosures), then the next 16 SSDs into DA pair 0 (first
frame, next 2 enclosures, filling the enclosures), and so on, until all 8 DA pairs had 48
SSDs.

The second expansion frame would be all HDDs, for a system total of 384 SSDs and
672 HDDs.
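The fill pattern in Example 3-1 is essentially a round-robin over the DA pairs in their
installation order, 16 SSDs at a time, until each DA pair reaches the 48-SSD limit. The
following Python sketch only illustrates that pattern; the DA-pair order is taken from the
example, and the actual placement is determined by the DS8800 configuration process.

# Illustrative round-robin SSD placement, following the pattern of Example 3-1
DA_PAIR_ORDER = [2, 0, 3, 1, 6, 4, 7, 5]   # install order across base and first expansion frame
SSDS_PER_GROUP = 16                        # SSDs are added in groups of 16
MAX_SSDS_PER_DA_PAIR = 48

ssds = {da: 0 for da in DA_PAIR_ORDER}
placement = []
while any(count < MAX_SSDS_PER_DA_PAIR for count in ssds.values()):
    for da in DA_PAIR_ORDER:
        if ssds[da] < MAX_SSDS_PER_DA_PAIR:
            ssds[da] += SSDS_PER_GROUP
            placement.append((da, ssds[da]))

print(sum(ssds.values()))   # 384, the system maximum for SSDs
print(placement[:4])        # the first groups go to DA pairs 2, 0, 3, and 1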

Figure 3-2 shows a diagram of the SSD placement.



Figure 3-2 SSD installation locations

The second expansion frame can hold up to 20 storage enclosures. In the maximum
configuration, the second expansion frame can hold 480 disk drives. The maximum
configuration with the base frame and two expansion frames is 1056 drives.

An expansion frame contains I/O enclosures and adapters if it is the first expansion frame that
is attached to a DS8800 4-way system. Note that you cannot add any expansion frame to a
DS8800 2-way system.

Note: Adding an expansion frame to a business class cabling configuration requires an
upgrade from a 2-way to 4-way system as a prerequisite.

The second expansion frame cannot have I/O enclosures and adapters. If the expansion
frame contains I/O enclosures, the enclosures provide connectivity between the adapters and
the processors. The adapters contained in the I/O enclosures can be either device adapters
or host adapters, or both. The expansion frame model is called 95E. You cannot use
expansion frames from previous DS8000 models as expansion frames for a DS8800 storage
system.

Note: The business class cabling configuration is limited to one expansion frame and a
maximum of 576 DDMs. This restriction can be removed by converting to standard cabling,
which is disruptive. This conversion is only available via RPQ.



3.1.3 Rack operator window
Each DS8800 frame features some status indicators. The status indicators can be seen when
the doors are closed. When the doors are open, the emergency power off switch (an EPO
switch) is also accessible. Figure 3-3 shows the operator panel.

Figure 3-3 Rack operator window

Each panel has two line cord indicators, one for each line cord. For normal operation, both of
these indicators are illuminated if each line cord is supplying correct power to the frame.
There is also a fault indicator on the bottom. If this indicator is illuminated, use the DS Storage
Manager GUI or the HMC Manage Serviceable Events menu to determine why this indicator
is illuminated.

There is also an EPO switch near the top of the primary power supplies (PPS). This switch is
only for emergencies. Tripping the EPO switch will bypass all power sequencing control and
result in immediate removal of system power. Do not trip this switch unless the DS8000 is
creating a safety hazard or is placing human life at risk. Data in non-volatile storage (NVS) will
not be destaged and will be lost. Figure 3-4 shows the EPO switch.



Figure 3-4 Emergency power off (EPO) switch

There is no power on/off switch on the operator window because power sequencing is
managed through the HMC. This ensures that all data in nonvolatile storage, known as
modified data, is destaged properly to disk prior to power down. It is not possible to shut
down or power off the DS8800 from the operator window, except in an emergency by using the
EPO switch.

3.2 DS8800 architecture


Now that the frames have been described, the rest of this chapter explores the technical
details of each component. The overall architecture that connects the components of a
storage facility is shown in Figure 3-7 on page 38.

In effect, the DS8800 consists of two processor complexes. Each processor complex has
access to multiple host adapters to connect to Fibre Channel or FICON hosts. A DS8800 can
have up to 16 host adapters with 4 or 8 I/O ports on each adapter.

Fibre Channel adapters are also used to connect to internal fabrics, which are Fibre Channel
switches to which the disk drives are connected.

3.2.1 POWER6+ processor


The DS8800 is based on POWER6 p570 server technology. The 64-bit POWER6+
processors in the p570 server are integrated into a dual-core single chip module or a
dual-core dual chip module, with 32 MB of L3 cache, 8 MB of L2 cache, and 12 DDR2
memory DIMM slots. This enables operating at a high data rate for large memory
configurations. Each new processor card can support up to 12 DDR2 DIMMs running at
speeds of up to 667 MHz.

The Symmetric Multi-Processing (SMP) system features 2-way or 4-way, copper-based,
Silicon-on-Insulator-based (SOI-based) POWER6+ microprocessors running at 5.0 GHz.



Each POWER6+ processor provides a GX+ bus that is used to connect to an I/O subsystem
or fabric interface card. GX+ is a Host Channel Adapter used in POWER6 systems. For more
information, see IBM System p 570 Technical Overview and Introduction, REDP-4405.

Also see Chapter 4, “RAS on IBM System Storage DS8800” on page 55 and 7.1.4,
“POWER6+: heart of the DS8800 dual-cluster design” on page 126 for additional information
about the POWER6 processor.

3.2.2 Peripheral Component Interconnect Express (PCI Express)


The DS8800 processor complex utilizes a PCI Express infrastructure to access the I/O
subsystem, which provides a great improvement in performance.

PCI Express was designed to replace the general-purpose PCI expansion bus, the high-end
PCI-X bus, and the Accelerated Graphics Port (AGP) graphics card interface.

PCI Express is a serial I/O interconnect. Transfers are bidirectional, which means data can
flow to and from a device simultaneously. The PCI Express infrastructure involves a switch so
that more than one device can transfer data at the same time.

Unlike previous PC expansion interfaces, rather than being a bus, it is structured around
point-to-point full duplex serial links called lanes. Lanes can be grouped by 1x, 4x, 8x, 16x, or
32x, and each lane is high speed, using an 8b/10b encoding that results in 2.5 Gbps = 250
MBps per lane in a generation 1 implementation. Bytes are distributed across the lanes to
provide a high throughput (see Figure 3-5).

Figure 3-5 PCI Express architecture



There are two generations of PCI Express in use today:
򐂰 PCI Express 1.1 (Gen 1) = 250 MBps per lane (current P6 processor I/O)
򐂰 PCI Express 2.0 (Gen 2) = 500 MBps per lane (used in the DS8800 I/O drawer)
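The per-lane numbers follow from the 8b/10b line encoding, in which only 8 of every 10
transmitted bits carry payload. The short Python sketch below reproduces the arithmetic; it
assumes the usual 2.5 GT/s and 5 GT/s signaling rates for Gen 1 and Gen 2, respectively.

def lane_bandwidth_mbps(gigatransfers_per_sec):
    # Payload bandwidth of one PCIe Gen 1/Gen 2 lane with 8b/10b encoding, in MB/s
    line_bits_per_sec = gigatransfers_per_sec * 1e9
    payload_bits_per_sec = line_bits_per_sec * 8 / 10   # 8 payload bits per 10 line bits
    return payload_bits_per_sec / 8 / 1e6               # bits -> bytes -> MB/s

gen1 = lane_bandwidth_mbps(2.5)    # 250 MB/s per lane
gen2 = lane_bandwidth_mbps(5.0)    # 500 MB/s per lane
print(gen1, gen2, 4 * gen2)        # a x4 Gen 2 connection gives about 2000 MB/s = 2 GBps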

As shown in Figure 3-6, a bridge is used to translate the x8 Gen 1 lanes from the processor to
the x4 Gen 2 lanes used by the I/O enclosures.

Figure 3-6 GX+ to PCI Express adapter

You can learn more about PCI Express at the following site:
http://www.redbooks.ibm.com/Redbooks.nsf/RedbookAbstracts/tips0456.html?Open

3.2.3 Device adapters and host adapters


To access the disk subsystem, each complex (CEC) uses several 4-port Fibre Channel
arbitrated loop (FC-AL) device adapters. A DS8800 can have up to 16 of these adapters
arranged into eight pairs. Each adapter connects the complex to two separate switched Fibre
Channel networks. Each network attaches disk enclosures that each contain up to 24 disks.
Each storage enclosure pair contains two 32-port bridges. Of these 32 ports, 24 are used to
attach to the 24 disks in the enclosure and 4 are used to either interconnect with other
enclosures or to the device adapters. Each disk is attached to both switches. Whenever the
device adapter connects to a disk, it uses a bridged connection to transfer data. This means
that all data travels through the shortest possible path. Additional information is available in
3.4.1, “Device adapters” on page 44.

The attached hosts interact with software running on the complexes to access data on logical
volumes. The servers manage all read and write requests to the logical volumes on the disk
arrays. During write requests, the servers use fast-write, in which the data is written to volatile
memory on one complex and persistent memory on the other complex. The server then
reports the write as complete before it has been written to disk. This provides much faster
write performance. Persistent memory is also called nonvolatile storage (NVS). Additional
information about this topic is available in 3.5, “Host adapters” on page 49.



3.2.4 Storage facility architecture
As already mentioned, the DS8800 storage facility consists of two POWER6 p570 servers.
They form processor complexes that utilize a RIO-G loop for inter-processor communication
and a PCI Express infrastructure to communicate with the I/O subsystem; see Figure 3-7.

Figure 3-7 DS8000 series architecture

When a host performs a read operation, the processor complexes, also called CPCs, fetch
the data from the disk arrays using the high performance switched disk architecture. The data
is then cached in volatile memory in case it is required again. The servers attempt to
anticipate future reads by an algorithm known as Sequential prefetching in Adaptive
Replacement Cache (SARC). Data is held in cache as long as possible using this smart
caching algorithm. If a cache hit occurs where requested data is already in cache, then the
host does not have to wait for it to be fetched from the disks. The cache management has
been enhanced by breakthrough caching technologies from IBM Research, such as the
Adaptive Multi-stream Prefetching (AMP) and Intelligent Write Caching (IWC) (see 7.4,
“DS8000 superior caching algorithms” on page 130).

Both the device and host adapters operate on high bandwidth fault-tolerant point-to-point
4-lane Generation 2 PCI Express interconnections. The device adapters feature an 8 Gb
Fibre Channel interconnect speed with a 6 Gb SAS connection to the disk drives for each
connection and direction. On a DS8800, as on a DS8700, the data traffic is isolated from the
processor complex communication that utilizes the RIO-G loop.

Figure 3-7 on page 38 shows how the DS8800 hardware is shared between the servers. On
the left side is one processor complex (CPC). The CPC uses the N-way symmetric
multiprocessor (SMP) of the complex to perform its operations. It records its write data and
caches its read data in the volatile memory of the left complex. For fast-write data, it has a
persistent memory area on the right processor complex. To access the disk arrays under its
management, it has its own device adapter. The server on the right operates in an identical
fashion. The host adapters are shared between both servers.

3.2.5 Server-based SMP design


The DS8000 series, which includes the DS8800, benefits from a fully assembled, leading
edge processor and memory system. The DS8000 systems use DDR2 memory DIMMs.
Using SMPs as the primary processing engine sets the DS8000 systems apart from other
disk storage systems on the market.

Additionally, the System p POWER6+ processors used in the DS8800 support the execution
of two independent threads concurrently. This capability is referred to as simultaneous
multi-threading (SMT). The two threads running on the single processor share a common L1
cache. The SMP/SMT design minimizes the likelihood of idle or overworked processors, while
a distributed processor design is more susceptible to an unbalanced relationship of tasks to
processors.

The design decision to use SMP memory as an I/O cache is a key element of the IBM storage
architecture. Although a separate I/O cache could provide fast access, it cannot match the
access speed of the SMP main memory.

All memory installed on a processor complex is accessible to all processors in that
complex. The addresses assigned to the memory are common across all processors in the
same complex. However, using the main memory of the SMP as the cache leads to a
partitioned cache: each processor complex has access to its own main memory, but not to
that of the other complex. You should keep this in mind with respect to load balancing
between processor complexes.



3.3 Storage facility processor complex (CEC)
The DS8800 base frame contains two processor complexes. The 951 model can have the
2-way processor feature or the 4-way processor feature (2-way means that each processor
complex has two CPUs; 4-way means that each processor complex has four CPUs).
Figure 3-8 show the DS8800 storage subsystem with the 2-way processor feature. There can
be two or four I/O enclosures.

Figure 3-8 DS8800 2-way architecture

Figure 3-9 shows the DS8800 with the 4-way feature. In this case, four I/O enclosures are
required.

Figure 3-9 DS8800 4-way architecture



Figure 3-10 shows the DS8800 with an expansion frame using standard cabling. In this case,
eight I/O enclosures are required.

Figure 3-10 DS8800 with expansion frame and eight I/O enclosures

The DS8800 features IBM POWER6+ server technology. Compared to the POWER5+ based
processor models in DS8100 and DS8300, the POWER6 processor can achieve up to a 50%
performance improvement in I/O operations per second in transaction processing workload
environments and up to 150% throughput improvement for sequential workloads.

For details about the server hardware used in the DS8800, see IBM System p 570 Technical
Overview and Introduction, REDP-4405, found at:
http://www.redbooks.ibm.com/redpieces/pdfs/redp4405.pdf

Figure 3-11 shows a rear view of the DS8800 processor complex.

Figure 3-11 Processor complex



3.3.1 Processor memory and cache management
The DS8800 offers up to 384 GB of processor memory. Half of this will be located in each
processor complex. Caching is a fundamental technique for hiding I/O latency. Like other
modern caches, DS8800 contains volatile memory used as a read cache and non-volatile
memory used as a write cache. The non-volatile storage (NVS) scales to the processor
memory size selected, which can also help optimize performance.

The effectiveness of a read cache depends upon the hit ratio, which is the fraction of requests
that are served from the cache without necessitating a read from the disk (read miss).
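Expressed as a small worked example (illustrative only), the hit ratio is simply the number
of cache hits divided by the total number of read requests, as the Python sketch below shows.

def hit_ratio(cache_hits, read_misses):
    # Fraction of read requests served from cache without a disk read
    total_reads = cache_hits + read_misses
    return cache_hits / total_reads if total_reads else 0.0

print(hit_ratio(900, 100))   # 0.9 -> 90% of the reads avoided a disk access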

To help achieve dramatically greater throughput and faster response times, the DS8800 uses
Sequential-prefetching in Adaptive Replacement Cache (SARC). SARC is an efficient
adaptive algorithm for managing read caches with both:
򐂰 Demand-paged data: It finds recently used data in the cache.
򐂰 Prefetched data: It copies data speculatively into the cache before it is even requested.

The decision of when and what to prefetch is made in accordance with the Adaptive
Multi-stream Prefetching (AMP), a cache management algorithm.

The Intelligent Write Caching (IWC) manages the write cache and decides in what order and
at what rate to destage.

For details about cache management, see 7.4, “DS8000 superior caching algorithms” on
page 130.

Service processor and system power control network


The service processor (SP) is an embedded controller that is based on a PowerPC®
processor. The system power control network (SPCN) is used to control the power of the
attached I/O subsystem. The SPCN control software and the service processor software are
run on the same PowerPC processor.

The SP performs predictive failure analysis based on any recoverable processor errors. The
SP can monitor the operation of the firmware during the boot process, and it can monitor the
operating system for loss of control. This enables the service processor to take appropriate
action.

The SPCN monitors environmentals such as power, fans, and temperature. Environmental
critical and noncritical conditions can generate Early Power-Off Warning (EPOW) events.
Critical events trigger appropriate signals from the hardware to the affected components to
prevent any data loss without operating system or firmware involvement. Non-critical
environmental events are also logged and reported.

3.3.2 RIO-G
In a DS8800, the RIO-G ports are used for inter-processor communication only. RIO stands
for remote I/O. The RIO-G has evolved from earlier versions of the RIO interconnect.

Each RIO-G port can operate at 1 GHz in bidirectional mode and is capable of passing data in
each direction on each cycle of the port. It is designed as a high performance, self-healing
interconnect.



3.3.3 I/O enclosures
The DS8800 base frame contains I/O enclosures and adapters. There can be two or four I/O
enclosures in a DS8800 base frame. The I/O enclosures hold the adapters and provide
connectivity between the adapters and the processors. Device adapters and host adapters
are installed in the I/O enclosures. Each I/O enclosure has six slots. The DS8800 can use up
to four of these slots, two for device adapters and two for host adapters.

In a 4-way configuration, the two GX+ buses connect to different I/O enclosures through the
PCI Express connections. One GX+ bus is connected to the lower I/O enclosures in each
frame. The other GX+ bus is connected to the upper I/O enclosures in each frame. This is to
optimize performance in a single-frame 4-way configuration. The connections are displayed
in Figure 3-12.

A 2-way configuration would only use the lower pairs of I/O enclosures. A 4-way configuration
is always required for an expansion frame. Slots 3 and 6 are used for the device adapters.
Slots 1 and 4 are available to install up to two host adapters per I/O enclosure. There can be
a total of 8 host adapters in a 4-I/O enclosure configuration and 16 host adapters in an 8-I/O
enclosure configuration using an expansion frame.

Figure 3-12 DS8800 I/O enclosure connections to CEC

Each I/O enclosure has the following attributes:


򐂰 5U rack-mountable enclosure
򐂰 Six PCI Express slots
򐂰 Default redundant hot plug power and cooling devices



3.4 Disk subsystem
The disk subsystem consists of three components:
1. The device adapters, which are located in the I/O enclosures, are RAID controllers that
are used by the storage images to access the RAID arrays.
2. The switched controller cards in the disk enclosures, to which the device adapters
connect. This creates a switched Fibre Channel disk network.
3. The disks themselves, which are commonly referred to as disk drive modules (DDMs).

We describe the disk subsystem components in the remainder of this section. Also see 4.6,
“RAS on the disk subsystem” on page 72 for additional information.

3.4.1 Device adapters


In the DS8800, a faster application-specific integrated circuit (ASIC) and a faster processor
are used on the device adapter cards compared to the adapters of other members of the DS8000
family. This leads to higher throughput rates. The DS8800 replaces the PCI-X device and host
adapters with native PCIe 8 Gbps FC adapters. This is an improvement over all previous
DS8000 models (including the DS8700).

Each DS8800 device adapter (DA) card offers four FC-AL ports. These ports are used to
connect the processor complexes through the I/O enclosures to the disk enclosures. The
adapter is responsible for managing, monitoring, and rebuilding the RAID arrays. The adapter
provides remarkable performance thanks to a high function/high performance ASIC. To
ensure maximum data integrity, it supports metadata creation and checking.

The DAs are installed in pairs for redundancy in connecting to each disk enclosure. This is
why we refer to them as pairs.

3.4.2 Disk enclosures


Each DS8800 frame contains a maximum of either 10, 14 or 20 disk enclosures, depending
on whether it is a base or expansion frame. Each DS8800 disk enclosure contains a total of
24 small form factor (SFF) DDMs or dummy carriers. A dummy carrier looks similar to a DDM
in appearance, but contains no electronics. The enclosure is shown in Figure 3-13.

Note: If a DDM is not present, its slot must be occupied by a dummy carrier. Without a
drive or a dummy, cooling air does not circulate properly.

The DS8800 also supports Solid State Drives (SSDs). SSDs also come in disk enclosures
either partially populated with 8 disks, 16 disks, or fully populated with 24 disks. They have
the same form factor as the traditional disks. SSDs and other disks cannot be intermixed
within the same enclosure pair.

Each DDM is an industry standard Serial Attached SCSI (SAS) disk. The DDMs are 2.5-inch
small form factor disks. This size allows 24 disk drives to be installed in each storage
enclosure. Each disk plugs into the disk enclosure backplane. The backplane is the electronic
and physical backbone of the disk enclosure.

The enclosure has a redundant pair of interface control cards (IC) that provide the
interconnect logic for disk access and a SES processor for enclosure services. The ASIC
is an 8 Gbps FC-AL switch with Fibre Channel (FC) to SAS conversion logic on each disk
port. The FC to SAS conversion function provides speed aggregation on the FC
interconnection ports. The FC trunking connection provides full 8 Gbps transfer rates from a
group of drives with lower interface speeds.

Figure 3-13 DS8800 disk enclosure

Switched FC-AL advantages


The DS8000 uses switched FC-AL technology to link the DA pairs and the DDMs. Switched
FC-AL uses the standard FC-AL protocol, but the physical implementation is different. The
key features of switched FC-AL technology are:
򐂰 Standard FC-AL communication protocol from DA to DDMs
򐂰 Direct point-to-point links are established between DA and DDM
򐂰 Isolation capabilities in case of DDM failures, providing easy problem determination
򐂰 Predictive failure statistics
򐂰 Simplified expansion, where no cable rerouting is required when adding another disk
enclosure

The DS8000 architecture employs dual redundant switched FC-AL access to each of the disk
enclosures. The key benefits of doing this are:
򐂰 Two independent networks to access the disk enclosures.
򐂰 Four access paths to each DDM.
򐂰 Each device adapter port operates independently.
򐂰 Double the bandwidth over traditional FC-AL loop implementations.



In Figure 3-14, each DDM is depicted as being attached to two separate interface connectors
with bridges to the SAS disk drives. This means that with two device adapters, we have four
effective data paths to each disk. Each DA can support two networks.

Figure 3-14 DS8000 Storage enclosure

When a connection is made between the device adapter and a disk, the storage enclosure
uses backbone cabling at 8 Gbps, which is translated from Fibre Channel to SAS at the disk
drives. This means that a mini-loop is created between the device adapter port and the disk;
see Figure 3-15.

DS8000 Series switched FC-AL implementation


For a more detailed look at how the switched disk architecture expands in the DS8800, see
Figure 3-15, which depicts how each DS8800 DA connects to two disk networks called loops.

Figure 3-15 DS8000 switched disk expansion



Expansion is achieved by adding enclosures to the expansion ports of each switch. Each loop
can potentially have up to six enclosures.

Expansion
Storage enclosures are added in pairs and disks are added in groups of 16. It takes three
orders of 16 DDMs to fully populate a disk enclosure pair (top and bottom).

For example, if a DS8800 had six disk enclosures total and all the enclosures were fully
populated with disks, there would be 144 DDMs in three enclosure pairs. If an additional order
of 16 DDMs were purchased, then two new disk enclosures would be added. The switched
networks do not need to be broken to add these enclosures. They are simply added to the
end of the loop; eight DDMs will go in the upper enclosure and the remaining eight DDMs will
go in the lower enclosure.

If an additional 16 DDMs are subsequently ordered, they will be added to the same (upper
and lower) enclosure pair. If a third set of 16 DDMs is ordered, it is used to fill up
that pair of disk enclosures. These additional DDMs have to be of the same type as the
DDMs already residing in the two enclosures.
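A simple Python sketch of this fill pattern follows, assuming only the 8/8 split across the
upper and lower enclosure and the 48-slot limit of an enclosure pair described above.

def add_drive_group(enclosure_pair):
    # Add one 16-DDM group, split evenly across the upper and lower enclosure of the pair
    if enclosure_pair["upper"] + enclosure_pair["lower"] >= 48:
        raise ValueError("enclosure pair is already fully populated")
    enclosure_pair["upper"] += 8
    enclosure_pair["lower"] += 8
    return enclosure_pair

pair = {"upper": 0, "lower": 0}
for _ in range(3):       # three 16-drive orders fill the enclosure pair
    add_drive_group(pair)
print(pair)              # {'upper': 24, 'lower': 24}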

Arrays and spares


Array sites, containing eight DDMs, are created as DDMs are installed. During the
configuration, you have the choice of creating a RAID 5, RAID 6, or RAID 10 array by
choosing one array site. Note that for SSDs, only RAID 5 is supported. The first four array
sites created on a DA pair each contribute one DDM to be a spare. So at least four spares are
created per DA pair, depending on the disk intermix.

The intention is to only have four spares per DA pair, but this number can increase depending
on DDM intermix. Four DDMs of the largest capacity and at least two DDMs of the fastest
RPM are needed. If all DDMs are the same size and RPM, four spares are sufficient.
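Two of the rules above, the RAID levels allowed per drive type and the first four array sites
on a DA pair each donating a spare, can be sketched in a few lines of Python. This is only an
illustration of the policy as described here, not the actual microcode logic.

SUPPORTED_RAID = {"HDD": {5, 6, 10}, "SSD": {5}}   # SSD array sites support RAID 5 only

def validate_raid(drive_type, raid_level):
    if raid_level not in SUPPORTED_RAID[drive_type]:
        raise ValueError("RAID %d is not supported for %s array sites" % (raid_level, drive_type))

def spares_for_da_pair(array_site_count):
    # The first four array sites created on a DA pair each contribute one spare DDM
    return min(array_site_count, 4)

validate_raid("SSD", 5)        # accepted
print(spares_for_da_pair(6))   # 4 spares when a DA pair holds six array sites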

Arrays across loops


Each array site consists of eight DDMs. Four DDMs are taken from the upper enclosure in an
enclosure pair, and four are taken from the lower enclosure in the pair. This means that when
a RAID array is created on the array site, half of the array is on each enclosure. Because the
upper enclosures are on one switched loop, and the lower enclosures are on a second
switched loop, this splits the array across two loops, known as array across loops (AAL). To
better understand AAL, refer to Figure 3-17 on page 48. To make the diagram clearer, only 16
DDMs are shown, eight in each disk enclosure. When fully populated, there would be 24
DDMs in each enclosure.

Figure 3-16 is used to show the DA pair layout. One DA pair creates two switched loops. The
upper enclosures populate one loop, and the lower enclosures populate the other loop. Each
enclosure places two switches onto each loop. Each enclosure can hold up to 24 DDMs.
DDMs are purchased in groups of 16. Half of the new DDMs go into the upper enclosure and
half of the new DDMs go into the lower enclosure.



Figure 3-16 DS8800 switched loop layout

Having established the physical layout, the diagram is now changed to reflect the layout of the
array sites, as shown in Figure 3-17. Array site 1 in green (the darker disks) uses the four left
DDMs in each enclosure. Array site 2 in yellow (the lighter disks), uses the four right DDMs in
each enclosure. When an array is created on each array site, half of the array is placed on
each loop. A fully populated enclosure pair would have six array sites.

Figure 3-17 Array across loop



AAL benefits
AAL is used to increase performance. When the device adapter writes a stripe of data to a
RAID 5 array, it sends half of the write to each switched loop. By splitting the workload in this
manner, each loop is worked evenly, which improves performance. If RAID 10 is used, two
RAID 0 arrays are created. Each loop hosts one RAID 0 array. When servicing read I/O, half
of the reads can be sent to each loop, again improving performance by balancing workload
across loops.

3.4.3 Disk drives


Each disk drive module (DDM) is hot pluggable and has two indicators. The green indicator
shows disk activity, while the amber indicator is used with light path diagnostics to allow for
easy identification and replacement of a failed DDM.

The DS8800 supports 146 GB (15K rpm), 450 GB (10K rpm), and 600 GB (10K rpm) SAS disk
drive sets. The DS8800 also supports 450 GB (10K rpm) and 600 GB (10K rpm) encrypting
SAS drive sets. The DS8800 supports Solid State Drives (SSD) with a capacity of 300 GB. For
more information about Solid State Drives, see DS8000: Introducing Solid State Drives,
REDP-4522.

For information about encrypted drives and inherent restrictions, see IBM System Storage
DS8700 Disk Encryption Implementation and Usage Guidelines, REDP-4500.

3.5 Host adapters


The DS8800 supports 8 Gbps Fibre Channel adapters. Each adapter can have four or eight
ports. Each port can be configured to operate as a Fibre Channel port or as a FICON port.

3.5.1 Fibre Channel/FICON host adapters


Fibre Channel is a technology standard that allows data to be transferred from one node to
another at high speeds and great distances (up to 10 km). The DS8800 uses the Fibre
Channel protocol to transmit SCSI traffic inside Fibre Channel frames. It also uses Fibre
Channel to transmit FICON traffic, which uses Fibre Channel frames to carry System z I/O.

Each DS8800 Fibre Channel card offers four or eight 8 Gbps Fibre Channel ports. The cable
connector required to attach to this card is an LC type. Each 8 Gbps port independently
auto-negotiates to either 2, 4, or 8 Gbps link speed. Each of the ports on one DS8800 host
adapter can also independently be either Fibre Channel protocol (FCP) or FICON. The type
of the port can be changed through the DS Storage Manager GUI or by using DSCLI
commands. A port cannot be both FICON and FCP simultaneously, but it can be changed as
required.
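As a small illustration of that port-mode rule, the conceptual Python sketch below models a
port that runs either FCP or FICON and can be switched between the two. In practice the mode
is set through the DS Storage Manager GUI or DSCLI, not through a programming interface like
this one.

class HostPort:
    # Conceptual model of a DS8800 host adapter port mode (FCP or FICON)
    VALID_PROTOCOLS = {"FCP", "FICON"}

    def __init__(self, protocol="FCP"):
        self.protocol = None
        self.set_protocol(protocol)

    def set_protocol(self, protocol):
        # A port runs either FCP or FICON, never both at once, but it can be changed later
        if protocol not in self.VALID_PROTOCOLS:
            raise ValueError("protocol must be FCP or FICON")
        self.protocol = protocol

port = HostPort("FCP")
port.set_protocol("FICON")   # reconfigure the same port for System z FICON traffic
print(port.protocol)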

The card itself is PCIe Gen 2. The card is driven by a new high function, high performance
ASIC. To ensure maximum data integrity, it supports metadata creation and checking. Each
Fibre Channel port supports a maximum of 509 host login IDs and 1280 paths. This allows for
the creation of very large storage area networks (SANs).



Fibre Channel supported servers
The current list of servers supported by Fibre Channel attachment can be found at:
http://www.ibm.com/systems/support/storage/config/ssic/index.jsp

Consult these documents regularly, because they contain the most current information about
server attachment support.

Fibre Channel distances


There are two types of host adapter cards you can select:
򐂰 Longwave
򐂰 Shortwave

With longwave, you can connect nodes at distances of up to 10 km (non-repeated). With
shortwave, you are limited to a distance of 500 meters (non-repeated). All ports on each card
must be either longwave or shortwave. There can be no intermixing of the two types within a
card.

3.6 Power and cooling


The DS8800 series power and cooling system is highly redundant. The components are
described in this section. Also see 4.7, “RAS on the power subsystem” on page 79, for more
information about this topic.

Rack Power Control cards


The DS8800 has a pair of redundant Rack Power Control (RPC) cards that are used to
control certain aspects of power sequencing throughout the DS8800. These cards are
attached to the Service Processor (SP) card in each processor, which allows them to
communicate both with the Storage Hardware Management Console (HMC) and the storage
facility. The RPCs also communicate with each primary power supply and indirectly with each
rack’s fan sense cards and the disk enclosures in each frame.

Primary power supply


The DS8000 primary power supply (PPS) is a wide range PPS that converts input AC voltage
into DC voltage. The line cord needs to be ordered specifically for the operating voltage to
meet special requirements. The line cord connector requirements vary widely throughout the
world; for example, the line cord might not come with a suitable connector for your nation’s
preferred outlet. This connector might need to be replaced by an electrician after the machine
is delivered.
There are two redundant PPSs in each frame of the DS8800. Each PPS is capable of
powering the frame by itself. The PPS supplies 208V output power to four or six power
distribution units (PDUs) depending on the rack type (base, first expansion, second
expansion). Each PDU accesses both PPSs in each frame for redundancy. The PDUs feed
power to the processor complex, the I/O enclosures, and the storage enclosure power
supplies. In the base rack and the first expansion rack each PPS supplies power to four PDUs
supplying power to the storage enclosures and two PDUs supplying power to the I/O
enclosures and the processor complex (base rack only). Figure 3-18 shows the base rack
PDUs.



Figure 3-18 DS8800 Base frame power distribution units (PDUs)

In the second expansion frame, each PPS supplies power to six PDUs supplying power to
storage enclosures. Each PDU supplies power to 4 or 5 storage enclosure pairs (gigapacks).
Each storage enclosure has two power supply units (PSU). The storage enclosure PSU pairs
are connected to two separate PDUs for redundancy. There can also be an optional booster
module that will allow the PPSs to temporarily run the disk enclosures off of a battery, if the
extended power line disturbance feature has been purchased (see Chapter 4, “RAS on IBM
System Storage DS8800” on page 55 for a complete explanation of why this feature might be
necessary for your installation).

Each PPS has internal fans to supply cooling for that power supply.

Processor and I/O enclosure power supplies


Each processor and I/O enclosure has dual redundant power supplies to convert 208V DC
into the required voltages for that enclosure or complex. Each enclosure also has its own
cooling fans.

Disk enclosure power and cooling


The storage enclosures have two power supply units (PSUs) for each storage enclosure. These
PSUs draw power from the PPSs through the PDUs. There are cooling fans located in each
PSU. These fans draw cooling air through the front of each enclosure and exhaust air out of
the back of the frame.

Battery backup assemblies


The backup battery assemblies help protect data in the event of a loss of external power. In
the event of a complete loss of input AC power, the battery assemblies are used to allow the
contents of NVS memory to be written to a number of DDMs internal to the processor
complex prior to power off.

The FC-AL DDMs are not protected from power loss unless the extended power line
disturbance feature has been purchased.

3.7 Management console network


All base models ship with one Storage Hardware Management Console (HMC) and two
Ethernet switches. A mobile computer HMC (Lenovo ThinkPad), shown in Figure 3-19, will be
shipped with a DS8800.

Changes performed by the storage administrator to a DS8800 configuration, using the GUI or
DSCLI, are passed to the storage system through the HMC.

Figure 3-19 Mobile computer HMC

Note: The DS8800 HMC supports IPv6, the next generation of the Internet Protocol. The
HMC continues to support the IPv4 standard and mixed IPv4 and IPv6 environments.

Ethernet switches
In addition to the Fibre Channel switches installed in each disk enclosure, the DS8000 base
frame contains two 8-port Ethernet switches. Two switches are supplied to allow the creation
of a fully redundant management network. Each processor complex has multiple connections
to each switch to allow each server to access each switch. These switches cannot be used for
any equipment not associated with the DS8800. The switches get power from the internal
power bus and thus do not require separate power outlets. The switches are shown in
Figure 3-20.



Figure 3-20 Ethernet switches

See 4.5, “RAS on the HMC” on page 71 for more information.

3.8 System Storage Productivity Center


System Storage Productivity Center (SSPC) is a console (machine type 2805, hardware and
software) that enables Storage Administrators to centralize and standardize the management
of various storage network resources by using IBM storage management software. With
SSPC, it is possible to manage and fully configure multiple DS8000 storage systems from a
single point of control.

Note: An SSPC is required to remotely access the DS8800 Storage Manager GUI.

SSPC consists of three components:


򐂰 IBM Tivoli Storage Productivity Center Basic Edition (TPC BE), which has an integrated
DS8000 Storage Manager
򐂰 SVC GUI back-end (can manage up to two clusters)
򐂰 SVC CIM Agent (can manage up to two clusters)

TPC BE enables you to perform:


򐂰 Disk management: Discovery, health monitoring, capacity reporting, and configuration
operations
򐂰 Fabric management: Discovery, health monitoring, reporting, and configuration operations

Without installing additional software, clients have the option to upgrade their licenses of:
򐂰 TPC for Disk (to add performance monitoring capabilities)
򐂰 TPC for Fabric (to add performance monitoring capabilities)
򐂰 TPC for Data (to add storage management for open system hosts)
򐂰 TPC for Replication (to manage Copy Services sessions and support open systems and
z/OS-attached volumes)
򐂰 TPC Standard Edition (TPC SE) (to add all of these features)



SSPC can be ordered as a software (SW) package to be installed on the client’s hardware, or
it can be ordered as Model 2805, which has the software preinstalled on a System x3550 with
a Quad Core Intel processor (2.4 GHz) and 8 GB of memory running Windows Server 2008
(see Figure 3-21).

Figure 3-21 SSPC hardware

Important: Any DS8800 shipped requires a minimum of one SSPC per data center to
enable the launch of the DS8000 Storage Manager other than from the HMC.

SSPC is described in detail in Chapter 12, “System Storage Productivity Center” on
page 229.

3.9 Isolated Tivoli Key Lifecycle Manager server


The Tivoli Key Lifecycle Manager (TKLM) software performs key management tasks for IBM
encryption enabled hardware, such as the IBM System Storage DS8000 series and IBM
encryption-enabled tape drives by providing, protecting, storing, and maintaining encryption
keys that are used to encrypt information being written to, and decrypt information being read
from, encryption enabled disks. TKLM operates on a variety of operating systems.

For DS8800 storage subsystems shipped with Full Disk Encryption (FDE) drives, two TKLM
key servers are required. An isolated key server (IKS) with dedicated hardware and
non-encrypted storage resources is required.

The isolated TKLM key server can be ordered from IBM. It is the same hardware as used for
the SSPC. The following software is used on the isolated key server:
򐂰 Linux operating system
򐂰 Tivoli Key Lifecycle Manager V, which includes DB2® V9.1 FB4

No other hardware or software is allowed on the IKS.

See 4.8, “RAS and Full Disk Encryption” on page 82 for more information.

For more information, see IBM System Storage DS8700 Disk Encryption Implementation and
Usage Guidelines, REDP-4500.



Chapter 4. RAS on IBM System Storage DS8800
This chapter describes the reliability, availability, and serviceability (RAS) characteristics of
the IBM System Storage DS8800. The following topics are covered in this chapter:
򐂰 Names and terms for DS8800
򐂰 RAS features of the DS8800 Central Electronic Complex (CEC)
򐂰 CEC failover and failback
򐂰 Data flow in the DS8800
򐂰 RAS on the HMC
򐂰 RAS on the disk subsystem
򐂰 RAS on the power subsystem
򐂰 RAS and Full Disk Encryption
򐂰 Other features



4.1 Names and terms for the DS8800 storage system
It is important to understand the naming conventions used to describe DS8800 components
and constructs to fully appreciate the discussion of RAS concepts. Although most terms have
been introduced in previous chapters of this book, they are repeated and summarized here,
because the rest of this chapter will use these terms frequently.

Storage unit
The term storage unit describes a single DS8800 (base frame plus optional expansion
frames). If your organization has one DS8800, then you have a single storage complex that
contains a single storage unit.

Base frame or primary frame


The DS8800 is available as a single model type (951), which includes a complete storage unit
contained in a single primary frame, also called a base frame. To increase the storage
capacity, up to two expansion frames may be added to the primary frame.

Note: A DS8800 with business class cabling can only add a single expansion frame.

A primary frame contains the following components:


򐂰 Power and cooling components (power supplies, batteries, and fans).
򐂰 Power control cards - Rack Power Control (RPC) and System Power Control Network
(SPCN).
򐂰 Two POWER6+ CECs.
򐂰 Two or four I/O Enclosures for Host Adapters and Device Adapters.
򐂰 2 Gigabit Ethernet switches for internal network.
򐂰 Storage Hardware Management Console.
򐂰 Up to five pairs (10 total) Storage Enclosures for storage disks.
– Each Storage Enclosure has up to 24 disk drive modules (DDM).
– The primary frame can have a maximum of 240 disk drive modules.

Expansion frame
Expansion frames can be added one at a time to increase the overall capacity of the storage
unit. All expansion frames contain the power and cooling components needed to run the
frame. The first expansion frame contains storage disks and I/O enclosures for the Fibre
Channel loops. The second expansion frame contains storage disks only. Because the Fibre
Channel loops are switched, the addition of an expansion frame is a concurrent operation for
the DS8800.
򐂰 Each Gigapack has up to 24 disk drive modules (DDM).
򐂰 The first expansion frame can have a maximum of 336 disk drive modules in 14
Gigapacks.
򐂰 The second expansion frame can have a maximum of 480 disk drive modules in 20
Gigapacks.

Storage complex
This term storage complex describes a group of DS8000s (that is, DS8300s, DS8700s or
DS8800s) managed by a single management console. A storage complex can, and usually
does, consist of simply a single DS8800 storage unit (primary frame plus optional expansion
frames).



Central Electronic Complex/processor complex/storage server
In the DS8800, a Central Electronic Complex (CEC) is an IBM System p server built on the
POWER6 architecture. The CECs run the AIX V6.1 operating system and storage-specific
microcode.

The DS8800 contains two CECs as a redundant pair so that if either fails, the remaining CEC
can continue to run the storage unit. Each CEC can have up to 192 GB of memory and 1 or
2 POWER6+ processor cards. In other models of the DS8000 family, a CEC was also referred
to as a processor complex or a storage server. The CECs are identified as CEC0 and CEC1.
Some chapters and illustrations in this publication refer to Server 0 and Server 1; these are
the same as CEC0 and CEC1 for the DS8800.

Storage HMC
The Storage Hardware Management Console (HMC) is the master console for the DS8800
unit. With connections to the CECs, the client network, the SSPC, and other management
systems, the HMC becomes the focal point for most operations on the DS8800. All storage
configuration and service actions are run through the HMC. Although many other IBM
products also use an HMC, the Storage HMC is unique to the DS8000 family. Throughout this
chapter, it will be referred to as the HMC, but keep in mind we are referring to the Storage
HMC that is cabled to the internal network of the DS8800.

System Storage Productivity Center


The DS8800 utilizes the IBM System Storage Productivity Center (SSPC), which is a
management system that integrates the power of the IBM Tivoli Storage Productivity Center
(TPC) and the DS Storage Manager user interfaces (residing at the HMC) into a single view.
The SSPC (machine type 2805-MC5) is an integrated hardware and software solution for
centralized management of IBM storage products with IBM storage management software.
The SSPC is described in detail in Chapter 12, “System Storage Productivity Center” on
page 229.

Storage facility images and logical partitions


A logical partition (LPAR) is a virtual server within a physical processor complex. A storage
facility image (SFI) consists of two logical partitions acting together as a virtual storage
server. Earlier DS8000 models supported more than one SFI, meaning that the two physical
CECs could be divided into four logical servers, each having control of some of the physical
resources of the DS8000. The DS8800 does not divide the CECs into logical partitions. There
is only one SFI, which owns 100% of the physical resources. So for the DS8800, the term
storage facility image can be considered synonymous with storage unit.

Important: The DS8800 does not divide the CECs into logical partitions. There is only one
SFI, which owns 100% of the physical resources. Information regarding multi-SFI or
LPARs does not apply to the IBM System Storage DS8800.

4.2 RAS features of DS8800 CEC


Reliability, availability, and serviceability (RAS) are important concepts in the design of the
IBM System Storage DS8800. Hardware features, software features, design considerations,
and operational guidelines all contribute to make the DS8800 extremely reliable. At the heart
of the DS8800 is a pair of POWER6+-based System p servers known as CECs. These two
servers share the load of receiving and moving data between the attached hosts and the disk
arrays. However, they are also redundant so that if either CEC fails, the remaining CEC can



continue to run the DS8800 without any host interruption. This section looks at the RAS
features of the CECs, including the hardware, the operating system, and the interconnect.

4.2.1 POWER6 Hypervisor


The POWER6 Hypervisor (PHYP) is a component of system firmware that will always be
installed and activated, regardless of the system configuration. It operates as a hidden
partition, with no processor resources assigned to it.

The Hypervisor provides the following capabilities:


򐂰 Reserved memory partitions allow the setting aside of a certain portion of memory to use
as cache and a certain portion to use as NVS.
򐂰 Preserved memory support allows the contents of the NVS and cache memory areas to
be protected in the event of a server reboot.
򐂰 I/O enclosure initialization control, so that when one server is being initialized, it does not
initialize an I/O adapter that is in use by another server.
򐂰 Automatic reboot of a frozen partition or Hypervisor.

The AIX operating system uses PHYP services to manage the translation control entry (TCE)
tables. The operating system communicates the desired I/O-bus-address-to-logical-address
mapping, and the Hypervisor translates that into the I/O-bus-address-to-physical-address mapping within the
specific TCE table. The Hypervisor needs a dedicated memory region for the TCE tables to
translate the I/O address to the partition memory address, and then the Hypervisor can
perform direct memory access (DMA) transfers to the PCI adapters.

4.2.2 POWER6 processor


IBM POWER6 systems have a number of new features that enable systems to dynamically
adjust when issues arise that threaten availability. Most notably, POWER6 systems introduce
the POWER6 Processor Instruction Retry suite of tools, which includes Processor Instruction
Retry, Alternate Processor Recovery, Partition Availability Prioritization, and Single Processor
Checkstop. Taken together, in many failure scenarios these features allow a POWER6
processor-based system to recover with no impact from the failing core. The DS8800 uses a
POWER6+ processor running at 5.0 GHz.

The POWER6 processor implements the 64-bit IBM Power Architecture® technology and
capitalizes on all the enhancements brought by the POWER5™ processor. Each POWER6
chip incorporates two dual-threaded Simultaneous Multithreading processor cores, a private
4 MB level 2 cache (L2) for each processor, a 36 MB L3 cache controller shared by the two
processors, an integrated memory controller, and a data interconnect switch. It is designed to
provide an extensive set of RAS features that include improved fault isolation, recovery from
errors without stopping the processor complex, avoidance of recurring failures, and predictive
failure analysis.

POWER6 RAS features


The following sections describe the RAS leadership features of IBM POWER6 systems in
more detail.

POWER6 processor instruction retry


Soft failures in the processor core are transient errors. When an error is encountered in the
core, the POWER6 processor will first automatically retry the instruction. If the source of the
error was truly transient, the instruction will succeed and the system will continue as before.
On predecessor IBM systems, this error would have caused a checkstop.



POWER6 alternate processor retry
Hard failures are more difficult, being true logical errors that will be replicated each time the
instruction is repeated. Retrying the instruction will not help in this situation because the
instruction will continue to fail. Systems with POWER6 processors introduce the ability to
extract the failing instruction from the faulty core and retry it elsewhere in the system, after
which the failing core is dynamically deconfigured and called out for replacement. The entire
process is transparent to the partition owning the failing instruction. Systems with POWER6
processors are designed to avoid what would have been a full system outage.

POWER6 cache availability


In the event that an uncorrectable error occurs in L2 or L3 cache, the system will be able to
dynamically remove the offending line of cache without requiring a reboot. In addition,
POWER6+ utilizes an L1/L2 cache design and a write-through cache policy on all levels,
helping to ensure that data is written to main memory as soon as possible.

POWER6 single processor checkstopping


Another major advancement in POWER6 processors is single processor checkstopping. On
earlier systems, a processor checkstop would result in a system checkstop. A new feature in
the System p 570 is the ability to contain most processor checkstops to the partition that was
using the processor at the time. This significantly reduces the probability of any one processor affecting total system
availability.

POWER6 fault avoidance


POWER6 systems are built to keep errors from ever happening. This quality-based design
includes such features as reduced power consumption and cooler operating temperatures for
increased reliability, enabled by the use of copper chip circuitry, silicon on insulator (SOI), and
dynamic clock-gating. It also uses mainframe-inspired components and technologies.

POWER6 First Failure Data Capture


If a problem should occur, the ability to diagnose it correctly is a fundamental requirement
upon which improved availability is based. The POWER6 incorporates advanced capability in
startup diagnostics and in runtime First Failure Data Capture (FFDC) based on strategic error
checkers built into the chips. Any errors that are detected by the pervasive error checkers are
captured into Fault Isolation Registers (FIRs), which can be interrogated by the service
processor (SP). The SP has the capability to access system components using
special-purpose service processor ports or by access to the error registers.

The FIRs are important because they enable an error to be uniquely identified, thus enabling
the appropriate action to be taken. Appropriate actions might include such things as a bus
retry, error checking and correction (ECC), or system firmware recovery routines. Recovery
routines could include dynamic deallocation of potentially failing components.

Errors are logged into the system nonvolatile random access memory (NVRAM) and the SP
event history log, along with a notification of the event to AIX for capture in the operating
system error log. Diagnostic Error Log Analysis (diagela) routines analyze the error log
entries and invoke a suitable action, such as issuing a warning message. If the error can be
recovered, or after suitable maintenance, the service processor resets the FIRs so that they
can accurately record any future errors.



N+1 redundancy
High-opportunity components, or those that most affect system availability, are protected with
redundancy and the ability to be repaired concurrently. The use of redundant parts allows the
system to remain operational. Among them are:
򐂰 Redundant spare memory bits in cache, directories, and main memory
򐂰 Redundant and hot-swap cooling
򐂰 Redundant and hot-swap power supplies

Self-healing
For a system to be self-healing, it must be able to recover from a failing component by first
detecting and isolating the failed component. It should then be able to take it offline, fix or
isolate it, and then reintroduce the fixed or replaced component into service without any
application disruption. Examples include:
򐂰 Bit steering to redundant memory in the event of a failed memory module to keep the
server operational
򐂰 Bit scattering, thus allowing for error correction and continued operation in the presence of
a complete chip failure (Chipkill recovery)
򐂰 Single-bit error correction using Error Checking and Correcting (ECC) without reaching
error thresholds for main, L2, and L3 cache memory
򐂰 L3 cache line deletes extended from 2 to 10 for additional self-healing
򐂰 ECC extended to inter-chip connections on fabric and processor bus
򐂰 Memory scrubbing to help prevent soft-error memory faults
򐂰 Dynamic processor deallocation

Memory reliability, fault tolerance, and integrity


POWER6 uses Error Checking and Correcting (ECC) circuitry for system memory to correct
single-bit memory failures and to detect double-bit memory failures. Detection of double-bit
memory failures helps maintain data integrity. Furthermore, the memory chips are organized
such that the failure of any specific memory module only affects a single bit within a four-bit
ECC word (bit-scattering), thus allowing for error correction and continued operation in the
presence of a complete chip failure (Chipkill recovery).

The memory DIMMs also utilize memory scrubbing and thresholding to determine when
memory modules within each bank of memory should be used to replace ones that have
exceeded their threshold of error count (dynamic bit-steering). Memory scrubbing is the
process of reading the contents of the memory during idle time and checking and correcting
any single-bit errors that have accumulated by passing the data through the ECC logic. This
function is a hardware function on the memory controller chip and does not influence normal
system memory performance.

Fault masking
If corrections and retries succeed and do not exceed threshold limits, the system remains
operational with full resources and no client or IBM service representative intervention is
required.

Mutual surveillance
The SP can monitor the operation of the firmware during the boot process, and it can monitor
the operating system for loss of control. This enables the service processor to take
appropriate action when it detects that the firmware or the operating system has lost control.
Mutual surveillance also enables the operating system to monitor for service processor
activity and can request a service processor repair action if necessary.



Additional memory keys in the POWER6+
The DS8800 utilizes the POWER6+ processor, which delivers improved performance over the
POWER6 processor. Additionally, the POWER6+ processor has 16 memory keys
compared to 8 memory keys in the POWER6 processor. This doubling of keys (8 for the
kernel, 7 for the user, and 1 for the Hypervisor) provides enhanced key resiliency that is
important for virtualization environments. This feature helps prevent accidental memory
overwrites that could cause critical applications to crash.

4.2.3 AIX operating system


Each CEC runs the IBM AIX Version 6.1 operating system. This is the latest generation of
IBM's well-proven, scalable, and open standards-based UNIX®-like operating system. This
version of AIX includes support for Failure Recovery Routines (FRRs).

With AIX V6.1, the kernel has been enhanced with the ability to recover from unexpected
errors. Kernel components and extensions can provide failure recovery routines to gather
serviceability data, diagnose, repair, and recover from errors. In previous AIX versions, kernel
errors always resulted in an unexpected system halt.

See IBM AIX Version 6.1 Differences Guide, SG24-7559, for more information about how
AIX V6.1 adds to the RAS features of AIX 5L™ V5.3.

You can also reference the IBM website for a more thorough review of the features of the IBM
AIX operating system at:
http://www.ibm.com/systems/power/software/aix/index.html

4.2.4 CEC dual hard drive rebuild


If a simultaneous failure of the dual hard drives in a CEC occurs, they need to be replaced
and then have the AIX OS and DS8800 microcode reloaded. The DS8700 introduced a
significant improvement in RAS for this process, known as a rebuild. Any fault that causes the
CEC to be unable to load the operating system from its internal hard drives would lead to this
service action. This function is also supported on the DS8800.

For a rebuild on previous DS8000 models, the IBM service representative would have to load
multiple CDs/DVDs directly onto the CEC being serviced. For the DS8800, there are no
optical drives on the CECs; only the HMC has a DVD drive. For a CEC dual hard drive rebuild,
the service representative acquires the needed code bundles on the HMC, which then runs
as a Network Installation Management on Linux (NIMoL) server. The HMC provides the
operating system and microcode to the CEC over the DS8800 internal network, which is
much faster than reading and verifying from an optical disc.

All of the tasks and status updates for a CEC dual hard drive rebuild are done from the HMC,
which is also aware of the overall service action that necessitated the rebuild. If the rebuild
fails, the HMC manages the errors, including error data, and allows the service representative
to address the problem and restart the rebuild. When the rebuild completes, the server is
automatically brought up with an initial microcode load (IML). Once the IML is successful, the service
representative can resume operations on the CEC.

Overall, the rebuild process on a DS8800 is more robust and straightforward, thereby
reducing the time needed to perform this critical service action.



4.2.5 RIO-G interconnect
The RIO-G interconnect is a high speed loop between the two CECs. Each RIO-G port can
operate at 1 GHz in bidirectional mode and is capable of passing data in each direction on
each cycle of the port. In previous generations of the DS8000, the I/O Enclosures were on the
RIO-G loops between the two CECs. The RIO-G bus carried the CEC-to-DDM data (host I/O)
and all CEC-to-CEC communications.

For the DS8700 and DS8800, the I/O Enclosures are wired point-to-point with each CEC
using a PCI Express architecture. This means that only the CEC-to-CEC (XC)
communications are now carried on the RIO-G and the RIO loop configuration is greatly
simplified. Figure 4-1 shows the new fabric design of the DS8800.

Figure 4-1 DS8800 design of RIO-G loop and I/O enclosures

4.2.6 Environmental monitoring


Environmental monitoring related to power, fans, and temperature is performed by the
System Power Control Network (SPCN). Environmental critical and non-critical conditions
generate Early Power-Off Warning (EPOW) events. Critical events (for example, a Class 5 AC
power loss) trigger appropriate signals from hardware to the affected components to prevent
any data loss without operating system or firmware involvement. Non-critical environmental
events are logged and reported using Event Scan.

Temperature monitoring is also performed. If the ambient temperature goes above a preset
operating range, then the rotation speed of the cooling fans can be increased. Temperature
monitoring also warns the internal microcode of potential environment-related problems. An
orderly system shutdown will occur when the operating temperature exceeds a critical level.

Voltage monitoring provides warning and an orderly system shutdown when the voltage is out
of operational specification.



4.2.7 Resource deallocation
If recoverable errors exceed threshold limits, resources can be deallocated with the system
remaining operational, allowing deferred maintenance at a convenient time. Dynamic
deallocation of potentially failing components is nondisruptive, allowing the system to
continue to run. Persistent deallocation occurs when a failed component is detected; it is then
deactivated at a subsequent reboot.

Dynamic deallocation functions include the following components:


򐂰 Processor
򐂰 L3 cache lines
򐂰 Partial L2 cache deallocation
򐂰 PCIe bus and slots

Persistent deallocation functions include the following components:


򐂰 Processor
򐂰 Memory
򐂰 Deconfigure or bypass failing I/O adapters
򐂰 L2 cache

Following a hardware error that has been flagged by the service processor, the subsequent
reboot of the server invokes extended diagnostics. If a processor or cache has been marked
for deconfiguration by persistent processor deallocation, the boot process will attempt to
proceed to completion with the faulty device automatically deconfigured. Failing I/O adapters
will be deconfigured or bypassed during the boot process.

4.3 CEC failover and failback


To understand the process of CEC failover and failback, we have to review the logical
construction of the DS8800. For more complete explanations, you might want to see
Chapter 5, “Virtualization concepts” on page 85.

Creating logical volumes on the DS8800 works through the following constructs:
򐂰 Storage DDMs are installed into predefined array sites.
򐂰 These array sites are used to form arrays, structured as RAID 5, RAID 6, or RAID 10
(restrictions apply for Solid State Drives).
򐂰 These RAID arrays then become members of a rank.
򐂰 Each rank then becomes a member of an Extent Pool. Each Extent Pool has an affinity to
either server 0 or server 1. Each Extent Pool is either open systems fixed block (FB) or
System z count key data (CKD).
򐂰 Within each Extent Pool, we create logical volumes. For open systems, these are called
LUNs. For System z, these are called volumes. LUN stands for logical unit number, which
is used for SCSI addressing. Each logical volume belongs to a logical subsystem (LSS).

For open systems, the LSS membership is really only significant for Copy Services. But for
System z, the LSS is the logical control unit (LCU), which equates to a 3990 (a System z disk
controller which the DS8800 emulates). It is important to remember that LSSs that have an
even identifying number have an affinity with CEC 0, and LSSs that have an odd identifying
number have an affinity with CEC 1. When a host operating system issues a write to a logical
volume, the DS8800 host adapter directs that write to the CEC that owns the LSS of which
that logical volume is a member.
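As a minimal illustration of this even/odd affinity (a sketch only; the function names are invented for this example, and real host adapter firmware also handles failover), a write can be routed to its owning CEC simply by looking at the parity of the LSS number:

def owning_cec(lss_id):
    # Even-numbered LSSs have an affinity with CEC 0, odd-numbered LSSs with CEC 1.
    return 0 if lss_id % 2 == 0 else 1

def route_write(lss_id, data):
    cec = owning_cec(lss_id)
    return "write of %d bytes directed to CEC %d (LSS 0x%02X)" % (len(data), cec, lss_id)

print(route_write(0x14, b"payload"))   # even LSS, handled by CEC 0
print(route_write(0x15, b"payload"))   # odd LSS, handled by CEC 1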



4.3.1 Dual operational
One of the basic premises of RAS in respect to processing host data is that the DS8800 will
always try to maintain two copies of the data while it is moving through the storage system.
The CECs have two areas of their primary memory used for holding host data: cache memory
and non-volatile storage (NVS). NVS is an area of the system RAM that is persistent across a
server reboot.

Note: For the previous generations of DS8000, the maximum available NVS was 4 GB per
server. For the DS8700 and DS8800, that maximum has been increased to 6 GB per
server.

When a write is issued to a volume and the CECs are both operational, this write data gets
directed to the CEC that owns this volume. The data flow begins with the write data being
placed into the cache memory of the owning CEC. The write data is also placed into the NVS
of the other CEC. The NVS copy of the write data is accessed only if a write failure should
occur and the cache memory is empty or possibly invalid; otherwise, it will be discarded after
the destaging is complete, as shown in Figure 4-2.

Figure 4-2 Write data when CECs are dual operational

Figure 4-2 shows how the cache memory of CEC 0 is used for all logical volumes that are
members of the even LSSs. Likewise, the cache memory of CEC 1 supports all logical
volumes that are members of odd LSSs. For every write that gets placed into cache, a second
copy gets placed into the NVS memory located in the alternate CEC. Thus, the normal flow of
data for a write when both CECs are operational is as follows (see the sketch after this list):
1. Data is written to cache memory in the owning CEC.
2. Data is written to NVS memory of the alternate CEC.
3. The write operation is reported to the attached host as completed.
4. The write data is destaged from the cache memory to a disk array.
5. The write data is discarded from the NVS memory of the alternate CEC.
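The following Python sketch condenses these five steps. It is purely illustrative: the class and method names are invented for this example and do not correspond to DS8800 microcode interfaces.

class CEC:
    def __init__(self, name):
        self.name = name
        self.cache = {}   # volatile cache memory
        self.nvs = {}     # non-volatile storage, protected by the battery backup units

def write_io(volume, data, owning_cec, alternate_cec, disk_array):
    owning_cec.cache[volume] = data                  # 1. data into cache of the owning CEC
    alternate_cec.nvs[volume] = data                 # 2. second copy into NVS of the alternate CEC
    write_complete = True                            # 3. completion reported to the attached host
    disk_array[volume] = owning_cec.cache[volume]    # 4. data destaged from cache to the disk array
    del alternate_cec.nvs[volume]                    # 5. NVS copy in the alternate CEC discarded
    return write_complete

cec0, cec1, array = CEC("CEC 0"), CEC("CEC 1"), {}
write_io("volume_on_even_LSS", b"host data", cec0, cec1, array)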



Under normal operation, both DS8800 CECs are actively processing I/O requests. The
following sections describe the failover and failback procedures that occur between the CECs
when an abnormal condition has affected one of them.

4.3.2 Failover
In the example shown in Figure 4-3, CEC 0 has failed. CEC 1 needs to take over all of the
CEC 0 functions. Because the RAID arrays are on Fibre Channel Loops that reach both
CECs, they can still be accessed via the Device Adapters owned by CEC 1. See 4.6.1, “RAID
configurations” on page 72 for more information about the Fibre Channel Loops.

Figure 4-3 CEC 0 failover to CEC 1

At the moment of failure, CEC 1 has a backup copy of the CEC 0 write data in its own NVS.
From a data integrity perspective, the concern is for the backup copy of the CEC 1 write data,
which was in the NVS of CEC 0 when it failed. Because the DS8800 now has only one copy
of that data (active in the cache memory of CEC 1), it will perform the following steps:
1. CEC 1 destages the contents of its NVS (the CEC 0 write data) to the disk subsystem.
However, before the actual destage and at the beginning of the failover:
a. The working CEC starts by preserving the data in cache that was backed by the failed
CEC NVS. If a reboot of the single working CEC occurs before the cache data had
been destaged, the write data remains available for subsequent destaging.
b. In addition, the existing data in cache (for which there is still only a single volatile copy)
is added to the NVS so that it remains available if the attempt to destage fails or a
server reboot occurs. This functionality is limited so that it cannot consume more than
85% of NVS space.

2. The NVS and cache of CEC 1 are divided in two, half for the odd LSSs and half for the
even LSSs.
3. CEC 1 now begins processing the I/O for all the LSSs.



This entire process is known as a failover. After failover, the DS8800 now operates as shown
in Figure 4-3. CEC 1 now owns all the LSSs, which means all reads and writes will be
serviced by CEC 1. The NVS inside CEC 1 is now used for both odd and even LSSs. The
entire failover process should be invisible to the attached hosts.

The DS8800 can continue to operate in this state indefinitely. There has not been any loss of
functionality, but there has been a loss of redundancy. Any critical failure in the working CEC
would render the DS8800 unable to serve I/O for the arrays, so IBM support should begin
work right away to determine the scope of the failure and to build an action plan to restore the
failed CEC to an operational state.

4.3.3 Failback
The failback process always begins automatically as soon as the DS8800 microcode
determines that the failed CEC has been resumed to an operational state. If the failure was
relatively minor and recoverable by the operating system or DS8800 microcode, then the
resume action will be initiated by the software. If there was a service action with hardware
components replaced, then the IBM service representative or remote support will resume the
failed CEC.

For this example where CEC 0 has failed, we should now assume that CEC 0 has been
repaired and has been resumed. The failback begins with CEC 1 starting to use the NVS in
CEC 0 again, and the ownership of the even LSSs being transferred back to CEC 0. Normal
I/O processing with both CECs operational then resumes. Just like the failover process, the
failback process is invisible to the attached hosts.

In general, recovery actions (failover/failback) on the DS8800 do not impact I/O operation
latency by more than 15 seconds. With certain limitations on configurations and advanced
functions, this impact to latency is often limited to just 8 seconds or less. If you have real-time
response requirements in this area, contact IBM to determine the latest information about
how to manage your storage to meet your requirements.

4.3.4 NVS and power outages


During normal operation, the DS8800 preserves write data by storing a duplicate in the NVS
of the alternate CEC. To ensure that this write data is not lost due to a power event, the
DS8800 contains battery backup units (BBUs). The single purpose of the BBUs is to preserve
the NVS area of CEC memory in the event of a complete loss of input power to the DS8800.
The design is to not move the data from NVS to the disk arrays. Instead, each CEC has dual
internal SCSI disks, which are available to store the contents of NVS.

Important: Unless the power line disturbance feature (PLD) has been purchased, the
BBUs are not used to keep the storage disks in operation. They keep the CECs in
operation long enough to dump NVS contents to internal hard disks.

If both power supplies in the primary frame should stop receiving input power, the CECs
would be informed that they are running on batteries and immediately begin a shutdown
procedure. It is during this shutdown that the entire contents of NVS memory are written to
the CEC hard drives so that the data will be available for destaging after the CECs are
operational again. If power is lost to a single primary power supply (PPS), the ability of the
other power supply to keep all batteries charged is not impacted, so the CECs would remain
online.



If all the batteries were to fail (which is extremely unlikely because the batteries are in an N+1
redundant configuration), the DS8800 would lose this NVS protection and consequently
would take all CECs offline because reliability and availability of host data are compromised.

The following sections show the steps followed in the event of complete power interruption.

Power loss
When an on-battery condition shutdown begins, the following events occur:
1. All host adapter I/O is blocked.
2. Each CEC begins copying its NVS data to internal disk (not the storage DDMs). For each
CEC, two copies are made of the NVS data.
3. When the copy process is complete, each CEC shuts down.
4. When shutdown in each CEC is complete (or a timer expires), the DS8800 is powered
down.

Power restored
When power is restored to the DS8800, the following events occur:
1. The CECs power on and perform power on self tests and PHYP functions.
2. Each CEC then begins boot up (IML).
3. At a certain stage in the boot process, the CEC detects NVS data on its internal SCSI
disks and begins to destage it to the storage DDMs.
4. When the battery units reach a certain level of charge, the CECs come online and begin to
process host I/O.

Battery charging
In many cases, sufficient charging will occur during the power on self test, operating system
boot, and microcode boot. However, if a complete discharge of the batteries has occurred,
which can happen if multiple power outages occur in a short period of time, then recharging
might take up to two hours.

Note: The CECs will not come online (process host I/O) until the batteries are sufficiently
charged to handle at least one outage.

4.4 Data flow in DS8800


One of the significant hardware changes for the DS8700 was the way in which host I/O was
brought into the storage unit. The DS8800 continues this design for the I/O enclosures, which
house the device adapter and host adapter cards. Connectivity between the CECs and the
I/O Enclosures was also improved. These changes use the many strengths of the PCI
Express architecture.

See 3.2.2, “Peripheral Component Interconnect Express (PCI Express)” on page 36, for more
information about this topic.

You can also discover more about PCI Express at the following site:
http://www.redbooks.ibm.com/Redbooks.nsf/RedbookAbstracts/tips0456.html?Open



4.4.1 I/O enclosures
The DS8800 I/O enclosure, also called a bay, is a design introduced in the DS8700. The older
DS8000 bay consisted of multiple parts that required removal of the bay and disassembly for
service. In the DS8800, the switch card can be replaced without removing the I/O cards,
reducing time and effort in servicing the enclosure. As shown in Figure 4-1 on page 62, each
CEC is connected to all four bays via PCI Express cables. This makes each bay an extension
of each server.

The DS8800 I/O enclosures use hot-swap adapters with PCI Express connectors. These
adapters are in blind-swap hot plug cassettes, which allow them to be replaced concurrently.
Each slot can be independently powered off for concurrent replacement of a failed adapter,
installation of a new adapter, or removal of an old one.

In addition, each I/O enclosure has N+1 power and cooling in the form of two power supplies
with integrated fans. The power supplies can be concurrently replaced and a single power
supply is capable of supplying DC power to the whole I/O enclosure.

4.4.2 Host connections


Each DS8800 Fibre Channel host adapter card provides four or eight ports for connection
either directly to a host, or to a Fibre Channel SAN switch.

Single or multiple path


In DS8800, the host adapters are shared between the CECs. To illustrate this concept,
Figure 4-4 shows a potential machine configuration. In this example, two I/O enclosures are
shown. Each enclosure has a pair of Fibre Channel host adapters. If a host only has a single
path to a DS8800, as shown in Figure 4-4, then it would still be able to access volumes
belonging to all LSSs because the host adapter will direct the I/O to the correct CEC.
However, if an error were to occur on the host adapter (HA), host port (HP), or I/O enclosure,
or in the SAN, then all connectivity would be lost. Clearly, the host bus adapter (HBA) in the
attached host is also a single point of failure.

Figure 4-4 A single-path host connection



A more robust design is shown in Figure 4-5 where the host is attached to different Fibre
Channel host adapters in different I/O enclosures. This is also important because during a
microcode update, an I/O enclosure might need to be taken offline. This configuration allows
the host to survive a hardware failure on any component on either path.

Figure 4-5 A dual-path host connection

Important: Best practice is that hosts accessing the DS8800 have at least two
connections to separate host ports in separate host adapters on separate I/O enclosures.

SAN/FICON switches
Because a large number of hosts can be connected to the DS8800, each using multiple
paths, the number of host adapter ports that are available in the DS8800 might not be
sufficient to accommodate all the connections. The solution to this problem is the use of SAN
switches or directors to switch logical connections from multiple hosts. In a System z
environment, you will need to select a SAN switch or director that also supports FICON.

A logic or power failure in a switch or director can interrupt communication between hosts and
the DS8800. Provide more than one switch or director to ensure continued availability. Ports
from two different host adapters in two different I/O enclosures should be configured to go
through each of two directors. The complete failure of either director leaves half the paths still
operating.



Multipathing software
Each attached host operating system requires a mechanism to allow it to manage multiple
paths to the same device, and to preferably load balance these requests. Also, when a failure
occurs on one redundant path, then the attached host must have a mechanism to allow it to
detect that one path is gone and route all I/O requests for those logical devices to an
alternative path. Finally, it should be able to detect when the path has been restored so that
the I/O can again be load-balanced. The mechanism that will be used varies by attached host
operating system and environment, as detailed in the next two sections.
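Conceptually, every multipathing package described in the next two sections implements logic similar to the following sketch. This is a generic illustration of path selection, failover, and path restoration; it is not how SDD or any other driver is actually written, and the names used are invented.

import random

class MultipathDevice:
    """Toy model of multipath handling for one logical device."""
    def __init__(self, paths):
        self.state = {path: "online" for path in paths}

    def submit_io(self, request, send):
        # send(path, request) is a placeholder for the real I/O transport call.
        while True:
            online = [p for p, s in self.state.items() if s == "online"]
            if not online:
                raise IOError("no paths left to the logical device")
            path = random.choice(online)        # simple load balancing over active paths
            try:
                return send(path, request)
            except IOError:
                self.state[path] = "failed"     # fail the path and retry on another one

    def path_restored(self, path):
        self.state[path] = "online"             # repaired path rejoins the rotation

def send(path, request):                        # stand-in transport for this example
    return "%s completed via %s" % (request, path)

device = MultipathDevice(["I/O enclosure 2, HA port 0", "I/O enclosure 3, HA port 0"])
print(device.submit_io("read LBA 2048", send))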

Open systems and SDD


In the majority of open systems environments, the Subsystem Device Driver (SDD) is useful
to manage both path failover and preferred path determination. SDD is a software product
that IBM supplies as a no charge option with the DS8800.

SDD provides availability through automatic I/O path failover. If a failure occurs in the data
path between the host and the DS8800, SDD automatically switches the I/O to another path.
SDD will also automatically set the failed path back online after a repair is made. SDD also
improves performance by sharing I/O operations to a common disk over multiple active paths
to distribute and balance the I/O workload.

SDD is not available for every supported operating system. See IBM System Storage DS8000
Host Systems Attachment Guide, SC26-7917, and also the interoperability website for
guidance about which multipathing software might be required. Refer to the IBM System
Storage Interoperability Center (SSIC), found at:
http://www.ibm.com/systems/support/storage/config/ssic/index.jsp

For more information about the SDD, see the Redbooks publication IBM System Storage
DS8000: Host attachment and Interoperability, SG24-8887.

System z
In the System z environment, normal practice is to provide multiple paths from each host to a
disk subsystem. Typically, four paths are installed. The channels in each host that can access
each logical control unit (LCU) in the DS8800 are defined in the hardware configuration
definition (HCD) or I/O configuration data set (IOCDS) for that host. Dynamic Path Selection
(DPS) allows the channel subsystem to select any available (non-busy) path to initiate an
operation to the disk subsystem. Dynamic Path Reconnect (DPR) allows the DS8800 to
select any available path to a host to reconnect and resume a disconnected operation, for
example, to transfer data after disconnection due to a cache miss.

These functions are part of the System z architecture and are managed by the channel
subsystem on the host and the DS8800.

A physical FICON path is established when the DS8800 port sees light on the fiber (for
example, a cable is plugged in to a DS8800 host adapter, a processor or the DS8800 is
powered on, or a path is configured online by z/OS). At this time, logical paths are established
through the port between the host and some or all of the LCUs in the DS8800, controlled by
the HCD definition for that host. This happens for each physical path between a System z
CPU and the DS8800. There may be multiple system images in a CPU. Logical paths are
established for each system image. The DS8800 then knows which paths can be used to
communicate between each LCU and each host.



Control Unit Initiated Reconfiguration
Control Unit Initiated Reconfiguration (CUIR) prevents loss of access to volumes in System z
environments due to wrong path handling. This function automates channel path
management in System z environments, in support of selected DS8800 service actions.

CUIR is available for the DS8800 when operated in the z/OS and z/VM® environments. CUIR
provides automatic channel path vary on and vary off actions to minimize manual operator
intervention during selected DS8800 service actions.

CUIR also allows the DS8800 to request that all attached system images set all paths
required for a particular service action to the offline state. System images with the appropriate
level of software support respond to such requests by varying off the affected paths, and
either notifying the DS8800 subsystem that the paths are offline, or that it cannot take the
paths offline. CUIR reduces manual operator intervention and the possibility of human error
during maintenance actions, while at the same time reducing the time required for the
maintenance. This is particularly useful in environments where there are many z/OS or z/VM
systems attached to a DS8800.

4.4.3 Metadata checks


When application data enters the DS8800, special codes or metadata, also known as
redundancy checks, are appended to that data. This metadata remains associated with the
application data as it is transferred throughout the DS8800. The metadata is checked by
various internal components to validate the integrity of the data as it moves throughout the
disk system. It is also checked by the DS8800 before the data is sent to the host in response
to a read I/O request. Further, the metadata also contains information used as an additional
level of verification to confirm that the data returned to the host is coming from the desired
location on the disk.
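The following sketch shows the general idea of such end-to-end checking: a checksum and an intended-location tag travel with each block and are verified on the way back to the host. The field layout and the use of CRC-32 are assumptions made for this illustration; they are not the DS8800's actual metadata format.

import zlib

def append_metadata(block, volume_id, lba):
    # Attach a location tag (volume and logical block address) plus a checksum.
    meta = volume_id.to_bytes(4, "big") + lba.to_bytes(8, "big")
    crc = zlib.crc32(block + meta).to_bytes(4, "big")
    return block + meta + crc

def verify_metadata(framed, volume_id, lba):
    block, meta, crc = framed[:-16], framed[-16:-4], framed[-4:]
    if zlib.crc32(block + meta).to_bytes(4, "big") != crc:
        raise IOError("redundancy check failed: data was corrupted in flight")
    if meta != volume_id.to_bytes(4, "big") + lba.to_bytes(8, "big"):
        raise IOError("location check failed: data came from the wrong place on disk")
    return block

framed = append_metadata(b"\x00" * 512, volume_id=0x1401, lba=2048)
data = verify_metadata(framed, volume_id=0x1401, lba=2048)   # returns the original 512 bytes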

4.5 RAS on the HMC


The HMC is used to perform configuration, management, and maintenance activities on the
DS8800. It can be ordered to be located either physically inside the base frame or external for
mounting in a client-supplied rack. The DS8800 HMC is able to work with IPv4, IPv6, or a
combination of both IP standards. For further information, see 8.3, “Network connectivity
planning” on page 165.

Important: The HMC described here is the Storage HMC, not to be confused with the
SSPC console, which is also required with any new DS8800. SSPC is described in 3.8,
“System Storage Productivity Center” on page 53.

If the HMC is not operational, then it is not possible to perform maintenance, power the
DS8800 up or down, perform modifications to the logical configuration, or perform Copy
Services tasks, such as the establishment of FlashCopies using the DSCLI or DS GUI. Best
practice is to order two management consoles to act as a redundant pair. Alternatively, if
Tivoli Storage Productivity Center for Replication (TPC-R) is used, Copy Services tasks can
be managed by that tool if the HMC is unavailable.

Note: The preceding alternative is only available if you have purchased and configured the
TPC-R management solution.



4.5.1 Hardware
The DS8800 ships with a mobile computer HMC (Lenovo ThinkPad Model T510). Best
practice is to order a second HMC to provide redundancy. The second HMC is external to the
DS8800 rack(s). For more information about the HMC and network connections, see 9.1.1,
“Storage Hardware Management Console hardware” on page 178.

4.5.2 Microcode updates


The DS8800 contains many discrete redundant components. Most of these components have
firmware that can be updated. This includes the primary power supplies (PPS), Interface
Control cards (IC), device adapters, and host adapters. Both DS8800 CECs also have an
operating system (AIX) and Licensed Machine Code (LMC) that can be updated. As IBM
continues to develop and improve the DS8800, new releases of firmware and licensed
machine code become available to offer improvements in both function and reliability.

For a detailed discussion about microcode updates, refer to Chapter 15, “Licensed machine
code” on page 343.

Concurrent code updates


The architecture of the DS8800 allows for concurrent code updates. This is achieved by using
the redundant design of the DS8800. In general, redundancy is lost for a short period as each
component in a redundant pair is updated.

4.5.3 Call Home and Remote Support


Call Home is the capability of the HMC to contact IBM support services to report a problem.
This is referred to as Call Home for service. The HMC will also provide machine-reported
product data (MRPD) to IBM by way of the Call Home facility.

IBM Service personnel located outside of the client facility log in to the HMC to provide
remote service and support. Remote support and the Call Home option are described in
detail in Chapter 17, “Remote support” on page 363.

4.6 RAS on the disk subsystem


The reason for the DS8800’s existence is to safely store and retrieve large amounts of data.
Redundant Array of Independent Disks (RAID) is an industry-wide implementation of
methods to store data on multiple physical disks to enhance the availability of that data. There
are many variants of RAID in use today. The DS8800 supports RAID 5, RAID 6, and RAID 10.
It does not support the non-RAID configuration of disks better known as JBOD (just a bunch
of disks).

Note: Solid State Drives (SSD) support only RAID 5.

4.6.1 RAID configurations


The following RAID configurations are possible for the DS8800:
򐂰 6+P RAID 5 configuration: The array consists of six data drives and one parity drive. The
remaining drive on the array site is used as a spare.
򐂰 7+P RAID 5 configuration: The array consists of seven data drives and one parity drive.



򐂰 5+P+Q RAID 6 configuration: The array consists of five data drives and two parity drives.
The remaining drive on the array site is used as a spare.
򐂰 6+P+Q RAID 6 configuration: The array consists of six data drives and two parity drives.
򐂰 3+3 RAID 10 configuration: The array consists of three data drives that are mirrored to
three copy drives. Two drives on the array site are used as spares.
򐂰 4+4 RAID 10 configuration: The array consists of four data drives that are mirrored to four
copy drives.

For information regarding the effective capacity of these configurations, refer to Table 8-9 on
page 173. A rough rule-of-thumb calculation of usable capacity is sketched below.
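The sketch applies the rule that the usable capacity of an array is the number of data drives multiplied by the drive capacity. It ignores spares held elsewhere on the DA pair, extent rounding, and metadata overhead, so it only approximates the values in Table 8-9.

RAID_LAYOUTS = {
    "RAID 5 6+P":   6,   # six data drives, one parity drive (plus a spare on the array site)
    "RAID 5 7+P":   7,
    "RAID 6 5+P+Q": 5,   # five data drives, two parity drives (plus a spare)
    "RAID 6 6+P+Q": 6,
    "RAID 10 3+3":  3,   # three data drives mirrored to three copy drives (plus two spares)
    "RAID 10 4+4":  4,
}

def approximate_capacity_gb(layout, drive_gb):
    return RAID_LAYOUTS[layout] * drive_gb

for layout in RAID_LAYOUTS:
    print("%s: about %d GB with 600 GB drives" % (layout, approximate_capacity_gb(layout, 600)))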

4.6.2 Disk path redundancy


Each DDM in the DS8800 is attached to two SAN switches. These switches are built into the
disk enclosure controller cards. Figure 4-6 shows the redundancy features of the DS8800
switched disk architecture.

Each disk has two separate connections to the backplane. This allows it to be simultaneously
attached to both switches. If either disk enclosure controller card is removed from the
enclosure, the switch that is included in that card is also removed. However, the switch in the
remaining controller card retains the ability to communicate with all the disks and both device
adapters (DAs) in a pair. Equally, each DA has a path to each switch, so it also can tolerate
the loss of a single path. If both paths from one DA fail, then it cannot access the switches;
however, the partner DA retains connection.

Figure 4-6 Switched disk path connections (DS8800 storage enclosure with switched dual loops)



Figure 4-6 on page 73 also shows the connection paths to the neighboring Storage
Enclosures. Because expansion is done in this linear fashion, the addition of more enclosures
is completely nondisruptive.

See 3.4, “Disk subsystem” on page 44 for more information about the disk subsystem of the
DS8800.

4.6.3 Predictive Failure Analysis


The drives used in the DS8800 incorporate Predictive Failure Analysis (PFA) and can
anticipate certain forms of failures by keeping internal statistics of read and write errors. If the
error rates exceed predetermined threshold values, the drive will be nominated for
replacement. Because the drive has not yet failed, data can be copied directly to a spare
drive. This avoids using RAID recovery to reconstruct all of the data onto the spare drive.

4.6.4 Disk scrubbing


The DS8800 will periodically read all sectors on a disk. This is designed to occur without any
interference with application performance. If error correcting code (ECC)-correctable bad bits
are identified, the bits are corrected immediately by the DS8800. This reduces the possibility
of multiple bad bits accumulating in a sector beyond the ability of ECC to correct them. If a
sector contains data that is beyond ECC's ability to correct, then RAID is used to regenerate
the data and write a new copy onto a spare sector of the disk. This scrubbing process applies
to both array members and spare DDMs.
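The scrubbing logic amounts to the loop sketched below. The Disk class and its methods are invented stand-ins for device adapter and RAID microcode functions; they are not DS8800 interfaces.

class Disk:
    def __init__(self, sector_count):
        self.sector_count = sector_count

    def read_sector(self, lba):
        return b"\x00" * 512, "clean"        # returns (data, ecc_status); all clean in this toy model

    def rewrite_sector(self, lba, data):
        pass                                 # placeholder: write corrected data back in place

    def relocate_to_spare_sector(self, lba, data):
        pass                                 # placeholder: reassign the LBA to a spare sector

def raid_regenerate(disk, lba):
    return b"\x00" * 512                     # placeholder: rebuild the data from the other array members

def scrub_disk(disk):
    """Read every sector during idle time and repair what ECC or RAID can fix."""
    for lba in range(disk.sector_count):
        data, ecc_status = disk.read_sector(lba)
        if ecc_status == "correctable":
            disk.rewrite_sector(lba, data)                 # correct the bad bits immediately
        elif ecc_status == "uncorrectable":
            data = raid_regenerate(disk, lba)              # data is beyond ECC; use RAID to regenerate it
            disk.relocate_to_spare_sector(lba, data)       # write a new copy onto a spare sector

scrub_disk(Disk(sector_count=1024))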

4.6.5 RAID 5 overview


The DS8800 series supports RAID 5 arrays. RAID 5 is a method of spreading volume data
plus parity data across multiple disk drives. RAID 5 provides faster performance by striping
data across a defined set of DDMs. Data protection is provided by the generation of parity
information for every stripe of data. If an array member fails, then its contents can be
regenerated by using the parity data.

RAID 5 implementation in DS8800


In a DS8800, a RAID 5 array built on one array site will contain either seven or eight disks,
depending on whether the array site is supplying a spare. A 7-disk array effectively uses one
disk for parity, so it is referred to as a 6+P array (where the P stands for parity). The reason
only seven disks are available to a 6+P array is that the eighth disk in the array site used to
build the array was used as a spare. We then refer to this as a 6+P+S array site (where the S
stands for spare). An 8-disk array also effectively uses one disk for parity, so it is referred to as
a 7+P array.

Drive failure with RAID 5


When a disk drive module fails in a RAID 5 array, the device adapter starts an operation to
reconstruct the data that was on the failed drive onto one of the spare drives. The spare that
is used will be chosen based on a smart algorithm that looks at the location of the spares and
the size and location of the failed DDM. The rebuild is performed by reading the
corresponding data and parity in each stripe from the remaining drives in the array, then
performing an exclusive-OR operation to recreate the data, and then writing this data to the
spare drive.
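The exclusive-OR reconstruction itself can be shown in a few lines of Python (a simplified sketch operating on small byte strings; the device adapter of course works on whole strips and in hardware):

def xor_strips(strips):
    # XOR the corresponding bytes of every surviving strip in the stripe.
    out = bytearray(len(strips[0]))
    for strip in strips:
        for i, byte in enumerate(strip):
            out[i] ^= byte
    return bytes(out)

d0, d1, d2 = b"\x11" * 8, b"\x22" * 8, b"\x44" * 8
parity = xor_strips([d0, d1, d2])           # parity written when the stripe was created
rebuilt_d1 = xor_strips([d0, d2, parity])   # drive holding d1 fails: rebuild it from the survivors
assert rebuilt_d1 == d1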



While this data reconstruction is going on, the device adapter can still service read and write
requests to the array from the hosts. There might be some degradation in performance while
the sparing operation is in progress because some DA and switched network resources are
used to do the reconstruction. Due to the switch-based architecture, this effect will be
minimal. Additionally, any read requests for data on the failed drive require data to be read
from the other drives in the array, and then the DA performs an operation to reconstruct the
data.

Performance of the RAID 5 array returns to normal when the data reconstruction onto the
spare device completes. The time taken for sparing can vary, depending on the size of the
failed DDM and the workload on the array, the switched network, and the DA. The use of
arrays across loops (AAL) both speeds up rebuild time and decreases the impact of a rebuild.

4.6.6 RAID 6 overview


The DS8800 supports RAID 6 protection. RAID 6 presents an efficient method of data
protection in case of double disk errors, such as two drive failures, two coincident medium
errors, or a drive failure and a medium error. RAID 6 protection provides more fault tolerance
than RAID 5 in the case of disk failures and uses less raw disk capacity than RAID 10.

RAID 6 allows for additional fault tolerance by using a second independent distributed parity
scheme (dual parity). Data is striped on a block level across a set of drives, similar to RAID 5
configurations, and a second set of parity is calculated and written across all the drives, as
shown in Figure 4-7.

One stripe with 5 data drives (5 + P + Q):

Drive 1  Drive 2  Drive 3  Drive 4  Drive 5  P    Q
0        1        2        3        4        P00  P01
5        6        7        8        9        P10  P11
10       11       12       13       14       P20  P21
15       16       17       18       19       P30  P31
                                                  P41

P00 = 0+1+2+3+4; P10 = 5+6+7+8+9; ... (parity on block level across a set of drives)
P01 = 9+13+17+0; P11 = 14+18+1+5; ... (parity across all drives)
P41 = 4+8+12+16

Note: For illustrative purposes only; implementation details may vary.

Figure 4-7 Illustration of one RAID 6 stripe
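To make the dual-parity idea concrete, the sketch below computes a P parity (plain exclusive-OR) and a Reed-Solomon style Q parity over GF(2^8) for one stripe. This is a generic RAID 6 illustration; the figure above shows a diagonal second parity, and the DS8800's internal parity algorithm is not documented here, so the polynomial and generator used are assumptions.

def gf_mul(a, b):
    # Multiply two bytes in GF(2^8) modulo the polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11D).
    product = 0
    for _ in range(8):
        if b & 1:
            product ^= a
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D
    return product

def raid6_parity(data_strips):
    length = len(data_strips[0])
    p, q = bytearray(length), bytearray(length)
    for index, strip in enumerate(data_strips):
        coefficient = 1
        for _ in range(index):                 # coefficient = 2 ** index in GF(2^8)
            coefficient = gf_mul(coefficient, 2)
        for j, byte in enumerate(strip):
            p[j] ^= byte                       # first parity: straight XOR across the data strips
            q[j] ^= gf_mul(coefficient, byte)  # second, independent parity
    return bytes(p), bytes(q)

p, q = raid6_parity([b"\x01" * 4, b"\x02" * 4, b"\x03" * 4, b"\x04" * 4, b"\x05" * 4])

Because P and Q are computed independently, any two missing strips in a stripe can be solved for, which is what allows a RAID 6 array to survive a double failure.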



RAID 6 is best used in combination with large capacity disk drives, such as 2 TB SATA drives,
because they have a longer rebuild time. Comparing RAID 6 to RAID 5 performance gives
about the same results on reads. For random writes, the throughput of a RAID 6 array is
around only two thirds of a RAID 5, given the additional parity handling. Workload planning is
especially important before implementing RAID 6 for write intensive applications, including
copy services targets and FlashCopy SE repositories. Yet, when properly sized for the I/O
demand, RAID 6 is a considerable reliability enhancement.
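The two-thirds figure follows from the classic small-write penalty, assuming each random write updates parity with a read-modify-write sequence (a back-of-the-envelope calculation that ignores caching effects):

RAID 5 small random write: read old data + read old parity + write new data + write new parity = 4 disk operations
RAID 6 small random write: the same, plus a read and a write of the second parity (Q) = 6 disk operations
Relative random-write throughput of RAID 6 versus RAID 5: 4/6, or roughly two thirds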

RAID 6 implementation in the DS8800


A RAID 6 array in one array site of a DS8800 can be built on either seven or eight disks:
򐂰 In a 7-disk array, two disks are always used for parity, while the eighth disk of the array site
is needed as a spare. This kind of a RAID 6 array is hereafter referred to as a 5+P+Q+S
array, where P and Q stand for parity and S stands for spare.
򐂰 A RAID 6 array, consisting of eight disks, will be built when all necessary spare drives are
available. An 8-disk RAID 6 array also always uses two disks for parity, so it is referred to
as a 6+P+Q array.

Drive failure with RAID 6


When a disk drive module (DDM) fails in a RAID 6 array, the device adapter (DA) starts to
reconstruct the data of the failing drive onto one of the available spare drives. A smart
algorithm determines the location of the spare drive to be used, depending on the size and
the location of the failed DDM. After the spare drive has replaced a failed one in a redundant
array, the recalculation of the entire contents of the new drive is performed by reading the
corresponding data and parity in each stripe from the remaining drives in the array and then
writing this data to the spare drive.

During the rebuild of the data on the new drive, the device adapter can still handle I/O
requests of the connected hosts to the affected array. Some performance degradation could
occur during the reconstruction because some device adapters and switched network
resources are used to do the rebuild. Due to the switch-based architecture of the DS8800,
this effect will be minimal. Additionally, any read requests for data on the failed drive require
data to be read from the other drives in the array, and then the DA performs an operation to
reconstruct the data. Any subsequent failure during the reconstruction within the same array
(second drive failure, second coincident medium errors, or a drive failure and a medium error)
can be recovered without loss of data.

Performance of the RAID 6 array returns to normal when the data reconstruction on the spare
device has completed. The rebuild time will vary, depending on the size of the failed DDM and
the workload on the array and the DA. The completion time is comparable to a RAID 5 rebuild,
but slower than rebuilding a RAID 10 array in the case of a single drive failure.

4.6.7 RAID 10 overview


RAID 10 provides high availability by combining features of RAID 0 and RAID 1. RAID 0
optimizes performance by striping volume data across multiple disk drives at a time. RAID 1
provides disk mirroring, which duplicates data between two disk drives. By combining the
features of RAID 0 and RAID 1, RAID 10 provides a second optimization for fault tolerance.
Data is striped across half of the disk drives in the RAID 1 array. The same data is also striped
across the other half of the array, creating a mirror. Access to data is preserved if one disk in
each mirrored pair remains available. RAID 10 offers faster data reads and writes than
RAID 5 because it does not need to manage parity. However, with half of the DDMs in the
group used for data and the other half to mirror that data, RAID 10 disk groups have less
capacity than RAID 5 disk groups.



RAID 10 is not as commonly used as RAID 5, mainly because more raw disk capacity is
needed for every gigabyte of effective capacity. A typical area of operation for RAID 10 is
workloads with a high random write ratio.

RAID 10 implementation in DS8800


In the DS8800, the RAID 10 implementation is achieved by using either six or eight DDMs. If
spares need to be allocated on the array site, then six DDMs are used to make a 3-disk RAID
0 array, which is then mirrored. If spares do not need to be allocated, then eight DDMs are
used to make a 4-disk RAID 0 array, which is then mirrored.

Drive failure with RAID 10


When a disk drive module (DDM) fails in a RAID 10 array, the controller starts an operation to
reconstruct the data from the failed drive onto one of the hot spare drives. The spare that is
used will be chosen based on a smart algorithm that looks at the location of the spares and
the size and location of the failed DDM. Remember a RAID 10 array is effectively a RAID 0
array that is mirrored. Thus, when a drive fails in one of the RAID 0 arrays, we can rebuild the
failed drive by reading the data from the equivalent drive in the other RAID 0 array.

While this data reconstruction is going on, the DA can still service read and write requests to
the array from the hosts. There might be some degradation in performance while the sparing
operation is in progress, because some DA and switched network resources are used to do
the reconstruction. Due to the switch-based architecture of the DS8800, this effect will be
minimal. Read requests for data on the failed drive should not be affected because they can
all be directed to the good RAID 1 array.

Write operations will not be affected. Performance of the RAID 10 array returns to normal
when the data reconstruction onto the spare device completes. The time taken for sparing
can vary, depending on the size of the failed DDM and the workload on the array and the DA.
In relation to a RAID 5, RAID 10 sparing completion time is a little faster. This is because
rebuilding a RAID 5 6+P configuration requires six reads plus one parity operation for each
write, while a RAID 10 3+3 configuration requires one read and one write (essentially a direct
copy).

Arrays across loops and RAID 10


The DS8800 implements the concept of arrays across loops (AAL). With AAL, an array site is
actually split into two halves. Half of the site is located on the first disk loop of a DA pair and
the other half is located on the second disk loop of that DA pair. AAL is implemented primarily
to maximize performance and it is used for all the RAID types in the DS8800. However, in
RAID 10, we are able to take advantage of AAL to provide a higher level of redundancy. The
DS8800 RAS code will deliberately ensure that one RAID 0 array is maintained on each of the
two loops created by a DA pair. This means that in the extremely unlikely event of a complete
loop outage, the DS8800 would not lose access to the RAID 10 array. This is because while
one RAID 0 array is offline, the other remains available to service disk I/O. Figure 3-16 on
page 48 shows a diagram of this strategy.

4.6.8 Spare creation


When the arrays are created on a DS8800, the microcode determines which array sites will
contain spares. The first array sites in each DA pair that are assigned to arrays will contribute
one or two spares (depending on the RAID option), until the DA pair has access to at least
four spares, with two spares being placed on each loop.



A minimum of one spare is created for each array site assigned to an array until the following
conditions are met:
򐂰 There are a minimum of four spares per DA pair.
򐂰 There are a minimum of four spares for the largest capacity array site on the DA pair.
򐂰 There are a minimum of two spares of capacity and RPM greater than or equal to the
fastest array site of any given capacity on the DA pair.

Floating spares
The DS8800 implements a smart floating technique for spare DDMs. A floating spare is
defined as follows: When a DDM fails and the data it contained is rebuilt onto a spare, then
when the disk is replaced, the replacement disk becomes the spare. The data is not migrated
to another DDM, such as the DDM in the original position the failed DDM occupied.

The DS8800 microcode takes this idea one step further. It might choose to allow the hot
spare to remain where it has been moved, but it can instead choose to migrate the spare to a
more optimum position. This will be done to better balance the spares across the DA pairs,
the loops, and the enclosures. It might be preferable that a DDM that is currently in use as an
array member is converted to a spare. In this case, the data on that DDM will be migrated in
the background onto an existing spare. This process does not fail the disk that is being
migrated, though it does reduce the number of available spares in the DS8800 until the
migration process is complete.

The DS8800 uses this smart floating technique so that the larger or higher RPM DDMs are
allocated as spares, which guarantees that a spare can provide at least the same capacity
and performance as the replaced drive. If we were to rebuild the contents of a 450 GB DDM
onto a 600 GB DDM, then approximately one-fourth of the 600 GB DDM will be wasted,
because that space is not needed. When the failed 450 GB DDM is replaced with a new
450 GB DDM, the DS8800 microcode will most likely migrate the data back onto the recently
replaced 450 GB DDM. When this process completes, the 450 GB DDM will rejoin the array
and the 600 GB DDM will become the spare again.
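
The "approximately one-fourth" figure from the example above can be checked with a couple
of lines of Python; the numbers are the same 450 GB and 600 GB capacities used in the
example.

# Unused capacity when a failed 450 GB DDM is rebuilt onto a 600 GB spare.
failed_gb, spare_gb = 450, 600
unused_gb = spare_gb - failed_gb
print(f"{unused_gb} GB ({unused_gb / spare_gb:.0%}) of the spare is left unused")
# -> 150 GB (25%) of the spare is left unused, that is, roughly one-fourth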

Another example would be a 146 GB 15 K RPM DDM that fails and is rebuilt onto a 600 GB
10 K RPM DDM. The data has now moved to a slower DDM and is wasting a lot of space, and
the array will have a mix of RPMs, which is not desirable. When the failed disk is replaced, the
replacement will be the same type as the failed 15 K RPM disk. Again, a smart migration of
the data will be performed after suitable spares have become available.

Hot pluggable DDMs


Replacement of a failed drive does not affect the operation of the DS8800, because the drives
are fully hot pluggable. Because each disk plugs into a switch, there is no loop break
associated with the removal or replacement of a disk. In addition, there is no potentially
disruptive loop initialization process.

Overconfiguration of spares
The DDM sparing policies support the overconfiguration of spares. This possibility might be of
interest to some installations, because it allows the repair of some DDM failures to be
deferred until a later repair action is required.



4.7 RAS on the power subsystem
The DS8800 has completely redundant power and cooling. Every power supply and cooling
fan in the DS8800 operates in what is known as N+1 mode. This means that there is always
at least one more power supply, cooling fan, or battery than is required for normal operation.
In most cases, this simply means duplication.

4.7.1 Components
Here we discuss the power subsystem components.

Primary power supplies


Each frame has two primary power supplies (PPSs). Each PPS produces voltages for two
different areas of the machine, as explained here.

Each PPS produces 208 V, which it places onto Power Distribution Units (PDUs). The PDUs
supply this voltage to each I/O enclosure, each processor complex, and the disk enclosures.

With the introduction of Gigapack enclosures on DS8800, some changes have been made to
the PPS to support Gigapack power requirements. The 5 V/12 V DDM power module in the
PPS is replaced with a 208 V module. The current 208 V modules are required to support the
Gigapacks. The 208 V input for the Gigapacks is from a Power Distribution Unit (PDU). The
Gigapack enclosure takes 208 V input and provides 5 V/12 V for the DDMs. The PDUs also
distribute power to the CECs and I/O enclosures.

If either PPS fails, the other can continue to supply all required voltage to all power buses in
that frame. The PPSs can be replaced concurrently.

Important: If you install the DS8800 so that both primary power supplies are attached to
the same circuit breaker or the same switchboard, then the DS8800 will not be
well-protected from external power failures. This is a common cause of unplanned
outages.

Battery backup units


Each frame with I/O enclosures, or every frame if the power line disturbance (PLD) feature is
installed, will have battery backup units (BBUs). Each BBU can be replaced concurrently, if no
more than one BBU is unavailable at any one time. The DS8800 BBUs have a planned
working life of at least four years.

Power distribution unit


The power distribution units (PDUs) are used to distribute 208 V from the PPS to the
Gigapack, CECs, and I/O enclosures. Each of the PDU modules can be replaced
concurrently.

Gigapack power supply


The Gigapack power supply unit provides 5 V and 12 V power for the DDMs, as well as
housing the fans for the Gigapack enclosure. DDM cooling on the DS8800 is provided by
these integrated fans in the Gigapack enclosures. The fans draw air from the front of the
DDMs and then move it out through the back of the frame. The entire rack cools from front to
back, enabling “hot aisles and cold aisles”. There are redundant fans in each power supply
unit and redundant power supply units in each Gigapack. The Gigapack power supply can be
replaced concurrently.



Rack Power Control card
The rack power control cards (RPCs) are part of the power management infrastructure of the
DS8800. There are two RPC cards for redundancy. Each card can independently control
power for the entire DS8800.

System Power Control Network


The System Power Control Network (SPCN) is used to control the power of the attached I/O
subsystem. The SPCN monitors environmental components such as power, fans, and
temperature. Environmental critical and noncritical conditions can generate Early Power-Off
Warning (EPOW) events. Critical events trigger appropriate signals from the hardware to the
affected components to prevent any data loss without operating system or firmware
involvement. Non-critical environmental events are also logged and reported.

4.7.2 Line power loss


The DS8800 uses an area of server memory as nonvolatile storage (NVS). This area of
memory is used to hold data that has not been written to the disk subsystem. If line power
were to fail, where both primary power supplies (PPS) in the primary frame were to report a
loss of AC input power, then the DS8800 must take action to protect that data. See 4.3, “CEC
failover and failback” on page 63 for a full explanation of the NVS Cache operation.

4.7.3 Line power fluctuation


The DS8800 primary frame contains battery backup units that are intended to protect
modified data in the event of a complete power loss. If a power fluctuation occurs that causes
a momentary interruption to power (often called a brownout), then the DS8800 will tolerate
this for approximately 30 ms. If the power line disturbance feature is not present on the
DS8800, then after that time, the DDMs will stop spinning and the servers will begin copying
the contents of NVS to the internal SCSI disks in the processor complexes. For many clients,
who use uninterruptible power supply (UPS) technology, this is not an issue. UPS-regulated
power is in general reliable, so additional redundancy in the attached devices is often
completely unnecessary.

Power line disturbance


If line power is not considered reliable, then consider adding the extended power line
disturbance (PLD) feature. This feature adds two separate pieces of hardware to the DS8800:
򐂰 For each primary power supply in each frame of the DS8800, a booster module is added.
This supplies the storage DDM enclosures with power directly from the batteries.
򐂰 Batteries will be added to expansion frames that did not already have them. Primary
frames (1) and expansion frames with I/O enclosures (2) get batteries by default.
Expansion frames that do not have I/O enclosures (3, 4, and 5) normally do not get
batteries.

With the addition of this hardware, the DS8800 will be able to run for up to 50 seconds on
battery power before the CECs begin to copy NVS to internal disk and then shut down. This
would allow for a 50-second interruption to line power with no outage to the DS8800.

4.7.4 Power control


The DS8800 does not possess a white power switch to turn the DS8800 storage unit off and
on, as was the case with previous storage models. All power sequencing is done using the
System Power Control Network (SPCN) and RPCs. If you want to power the DS8800 off,
you must do so by using the management tools provided by the Hardware Management
Console (HMC). If the HMC is not functional, then it will not be possible to control the power
sequencing of the DS8800 until the HMC function is restored. This is one of the benefits that
is gained by purchasing a redundant HMC.

4.7.5 Emergency power off


Each DS8800 frame has an operator panel with three LEDs that show the line power status
and the system fault indicator. The LEDs can be seen when the front door of the frame is
closed. See Figure 4-8 for an illustration of the operator panel. On the side of the operator
panel is an emergency power off (EPO) switch. This switch is red and is located inside the
front door protecting the frame; it can only be seen when the front door is open. This switch is
intended purely to remove power from the DS8800 in the following extreme cases:
򐂰 The DS8800 has developed a fault that is placing the environment at risk, such as a fire.
򐂰 The DS8800 is placing human life at risk, such as the electrocution of a person.

Apart from these two contingencies (which are highly unlikely), the EPO switch should never
be used. The reason for this is that when the EPO switch is used, the battery protection for
the NVS storage area is bypassed. Normally, if line power is lost, the DS8800 can use its
internal batteries to destage the write data from NVS memory to persistent storage so that
the data is preserved until power is restored. However, the EPO switch does not allow this
destage process to happen and all NVS cache data is immediately lost. This will most likely
result in data loss.

Figure 4-8 DS8800 EPO switch

If the DS8800 needs to be powered off for building maintenance or to relocate it, always use
the HMC to shut it down properly.



4.8 RAS and Full Disk Encryption
Like previous DS8000 models, the DS8800 can be ordered with disk drive modules (DDMs)
that support Full Disk Encryption (FDE). These DDMs are available as 10 K RPM drives in
capacities of 450 GB and 600 GB. The purpose of FDE drives is to encrypt all data at rest
within the storage system for increased data security.

The DS8800 provides two important reliability, availability, and serviceability enhancements to
Full Disk Encryption storage: deadlock recovery and support for dual-platform key servers.

For current considerations and best practices regarding DS8000 encryption, see IBM
Encrypted Storage Overview and Customer Requirements, found at:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101479

This link also includes the IBM Notice for Storage Encryption, which must be read by all
clients acquiring an IBM storage device that includes encryption technology.

4.8.1 Deadlock recovery


The DS8000 family of storage servers with Full Disk Encryption drives can utilize a System z
key server running the Tivoli Key Lifecycle Manager (TKLM) solution. A TKLM server provides
a robust platform for managing the multiple levels of encryption keys needed for a secure
storage operation. System z mainframes do not have local storage; their operating system,
applications, and application data are often stored on an enterprise-class storage server,
such as a DS8000 storage subsystem.

Thus it becomes possible, due to a planning error or even the use of automatically managed
storage provisioning, for the System z TKLM server storage to end up residing on the very
DS8000 that is a client for encryption keys. After a power interruption event, the DS8000
cannot become operational because it must first retrieve the Data Key (DK) from the TKLM
database on the System z server, while the TKLM database is unavailable because the
System z server has its OS or application data on the DS8000. This represents a deadlock
situation. Figure 4-9 depicts this scenario.

Figure 4-9 Deadlock scenario

The DS8800 mitigates this problem by implementing a Recovery Key (RK). The Recovery Key
allows the DS8800 to decrypt the Group Key (GK) that it needs to come up to full operation. A
new client role is defined in this process: the security administrator. The security administrator
should be someone other than the storage administrator so that no single user can perform
recovery key actions. Setting up the Recovery Key and using the Recovery Key to boot a
DS8800 requires both people to take action. Use of a Recovery Key is entirely within the
client’s control; no IBM service representative needs to be involved. The DS8000 never stores
a copy of the Recovery Key on the encrypted disks, and it is never included in any service
data.

See IBM System Storage DS8700 Disk Encryption Implementation and Usage Guidelines,
REDP-4500, for a more complete review of the deadlock recovery process and further
information about working with a Recovery Key.

Note: Use the storage HMC to enter a Recovery Key. The Security Administrator and the
Storage Administrator might need to be physically present at the DS8800 to perform the
recovery.

4.8.2 Dual platform TKLM servers


The current DS8800 Full Disk Encryption solution requires the use of an IBM System x SUSE
Linux-based key server (IKS), which operates in “clear key mode”. Clients have expressed a
desire to run key servers that are hardware security module-based (HSM), which operate in
“secure key mode”. Key servers like the IKS, which implement a clear key design, can import
and export their public and private key pair to other key servers. Servers that implement
secure key design can only import and export their public key to other key servers.

To meet this request, the DS8800 allows propagation of keys across two different key server
platforms. The current IKS is still supported to address the standing requirement for an
isolated key server. Adding a z/OS Tivoli Key Lifecycle Manager (TKLM) Secure Key Mode
server, which is common in Tape Storage environments, is concurrently supported by the
DS8800.

After the key servers are set up, they will each have two public keys. They are each capable of
generating and wrapping two symmetric keys for the DS8800. The DS8800 stores both
wrapped symmetric keys in the key repository. Now either key server is capable of
unwrapping these keys upon a DS8800 retrieval exchange.

See IBM System Storage DS8700 Disk Encryption Implementation and Usage Guidelines,
REDP-4500, for more information regarding the dual-platform TKLM solution. Visit the
following site for further information regarding planning and deployment of TKLM servers:
http://www.ibm.com/developerworks/wikis/display/tivolidoccentral/Tivoli+Key+Lifecy
cle+Manager

4.9 Other features


There are many more features of the DS8800 that enhance reliability, availability, and
serviceability.

4.9.1 Internal network


Each DS8800 base frame contains two Gigabit Ethernet switches to allow the creation of a
fully redundant management network. Each CEC in the DS8800 has a connection to each
switch. Each HMC also has a connection to each switch. This means that if a single Ethernet
switch fails, then all traffic can successfully travel from either HMC to other components in the
storage unit using the alternate switch.



There are also Ethernet connections for the service processor card within each CEC. If two
DS8800 storage complexes are connected together, they will also use ports on the Ethernet
switches. See 9.1.2, “Private Ethernet networks” on page 179 for more information about the
DS8800 internal network.

Note: Connections to the client’s network are made at the Storage HMC. No client network
connection should ever be made to the DS8800 internal Ethernet switches.

4.9.2 Remote support


The DS8800 HMC has the ability to be accessed remotely by IBM service personnel for many
service actions. IBM support can offload service data, change some configuration settings,
and initiate repair actions over a remote connection. The client decides which type of
connection they want to allow for remote support. Options include:
򐂰 Modem only for access to the HMC command line
򐂰 VPN only for access to the HMC GUI (WebUI)
򐂰 Modem and VPN
򐂰 No access (secure account)

Remote support is a critical topic for clients investing in the DS8800. Refer to Chapter 17,
“Remote support” on page 363 for a more thorough discussion of remote support operations.
Refer to Chapter 9, “DS8800 HMC planning and setup” on page 177 for more information
about planning the connections needed for HMC installation.

4.9.3 Earthquake resistance


The Earthquake Resistance Kit is an optional seismic kit for stabilizing the storage unit rack,
so that the rack complies with IBM earthquake resistance standards. It helps to prevent
human injury and ensures that the system will be available following the earthquake by
limiting potential damage to critical system components, such as hard drives.

A storage unit rack with this optional seismic kit includes cross-braces on the front and rear of
the rack which prevent the rack from twisting. Hardware at the bottom of the rack secures it to
the floor. Depending on the flooring in your environment, specifically non-raised floors,
installation of required floor mounting hardware might be disruptive.

This kit must be special ordered for the DS8800; contact your sales representative for further
information.




Chapter 5. Virtualization concepts


This chapter describes virtualization concepts as they apply to the IBM System Storage
DS8800.

This chapter covers the following topics:


򐂰 Virtualization definition
򐂰 The abstraction layers for disk virtualization:
– Array sites
– Arrays
– Ranks
– Extent Pools
– Logical volumes
– Track Space Efficient volumes
– Logical subsystems (LSSs)
– Volume access
– Virtualization hierarchy summary
򐂰 Benefits of virtualization



5.1 Virtualization definition
In a fast-changing world, to react quickly to changing business conditions, IT infrastructure
must allow for on demand changes. Virtualization is key to an on demand infrastructure.
However, when talking about virtualization, many vendors are talking about different things.

For this chapter, the definition of virtualization is the abstraction process from the physical
disk drives to a logical volume that is presented to hosts and servers in a way that they see it
as though it were a physical disk.

5.2 The abstraction layers for disk virtualization


In this chapter, when talking about virtualization, we mean the process of preparing physical
disk drives (DDMs) to become an entity that can be used by an operating system, which
means we are talking about the creation of LUNs.

The DS8800 uses a switched point-to-point topology utilizing Serial Attached SCSI (SAS) disk
drives that are mounted in disk enclosures. The disk drives can be accessed by a pair of
device adapters. Each device adapter has four paths to the disk drives. One device interface
from each device adapter is connected to a set of FC-AL devices so that either device adapter
has access to any disk drive through two independent switched fabrics (the device adapters
and switches are redundant).

Figure 5-1 shows the physical layer on which virtualization is based.

Figure 5-1 Physical layer as the base for virtualization (diagram: server 0 and server 1 with
I/O enclosures, host adapters and device adapters connected through PCIe, and a storage
enclosure pair reached through switched loop 1 and switched loop 2)



Each device adapter has four ports, and because device adapters operate in pairs, there are
eight ports or paths to the disk drives. All eight paths can operate concurrently and could
access all disk drives on the attached fabric. In normal operation, however, disk drives are
typically accessed by one device adapter. Which device adapter owns the disk is defined
during the logical configuration process. This avoids any contention between the two device
adapters for access to the disks.

Because of the switching design, each drive is in close reach of the device adapter, although
some drives will require a few more hops through the Fibre Channel switch. So, it is not really
a loop, but a switched topology that uses the FC-AL addressing schema, that is, Arbitrated
Loop Physical Addressing (AL-PA).

5.2.1 Array sites


An array site is a group of eight DDMs. Which DDMs form an array site is
predetermined automatically by the DS8000. The DDMs selected may be from any location
within the disk enclosures; also note that there is no predetermined server affinity for array
sites. The DDMs selected for an array site are chosen from two disk enclosures on different
loops; see Figure 5-2.

Figure 5-2 Array site

The DDMs in the array site are of the same DDM type, which means the same capacity and
the same speed (rpm).

As you can see from Figure 5-2, array sites span loops. Four DDMs are taken from loop 1 and
another four DDMs from loop 2. Array sites are the building blocks used to define arrays.

5.2.2 Arrays
An array is created from one array site. Forming an array means defining it as a specific
RAID type. The supported RAID types are RAID 5, RAID 6, and RAID 10 (see “RAID 5
implementation in DS8800” on page 74, “RAID 6 implementation in the DS8800” on page 76,
and “RAID 10 implementation in DS8800” on page 77). For each array site, you can select a
RAID type (remember that Solid State Drives can only be configured as RAID 5). The process
of selecting the RAID type for an array is also called defining an array.

Note: In a DS8000 series implementation, one array is defined using one array site.



According to the DS8000 series sparing algorithm, from zero to two spares can be taken from
the array site. This is discussed further in 4.6.8, “Spare creation” on page 77.

Figure 5-3 shows the creation of a RAID 5 array with one spare, also called a 6+P+S array (it
has a capacity of 6 DDMs for data, capacity of one DDM for parity, and a spare drive).
According to the RAID 5 rules, parity is distributed across all seven drives in this example.

On the right side in Figure 5-3, the terms D1, D2, D3, and so on stand for the set of data
contained on one disk within a stripe on the array. If, for example, 1 GB of data is written, it is
distributed across all the disks of the array.

Figure 5-3 Creation of an array

So, an array is formed using one array site, and although the array could be accessed by
each adapter of the device adapter pair, it is managed by one device adapter. You define
which server is managing this array later on in the configuration path.

5.2.3 Ranks
In the DS8000 virtualization hierarchy, there is another logical construct called a rank. When
defining a new rank, its name is chosen by the DS Storage Manager, for example, R1, R2, or
R3, and so on. You have to add an array to a rank.

Note: In the DS8000 implementation, a rank is built using just one array.

The available space on each rank will be divided into extents. The extents are the building
blocks of the logical volumes. An extent is striped across all disks of an array as shown in
Figure 5-4 and indicated by the small squares in Figure 5-5 on page 91.



The process of forming a rank does two things:
򐂰 The array is formatted for either fixed block (FB) data for open systems or count key data
(CKD) for System z data. This determines the size of the set of data contained on one disk
within a stripe on the array.
򐂰 The capacity of the array is subdivided into equal-sized partitions, called extents. The
extent size depends on the extent type, FB or CKD.

An FB rank has an extent size of 1 GB (more precisely 1 GiB, a gibibyte or binary gigabyte,
equal to 2^30 bytes).

IBM System z users or administrators typically do not deal with gigabytes or gibibytes, and
instead they think of storage in terms of the original 3390 volume sizes. A 3390 Model 3 is
three times the size of a Model 1 and a Model 1 has 1113 cylinders, which is about 0.94 GB.
The extent size of a CKD rank is one 3390 Model 1 or 1113 cylinders.
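
As a back-of-the-envelope check of the 0.94 GB figure, the following sketch multiplies out the
standard 3390 geometry (15 tracks per cylinder, 56,664 bytes per track). It is only an
approximation, not a value reported by any DS8000 interface.

# Back-of-the-envelope size of one CKD extent (one 3390 Model 1).
TRACKS_PER_CYLINDER = 15
BYTES_PER_TRACK = 56_664        # standard 3390 track capacity
CYLINDERS_PER_EXTENT = 1_113    # 3390 Model 1

extent_bytes = CYLINDERS_PER_EXTENT * TRACKS_PER_CYLINDER * BYTES_PER_TRACK
print(f"{extent_bytes / 10**9:.2f} GB (decimal)")   # ~0.95 GB, close to the 0.94 GB above
print(f"{extent_bytes / 2**30:.2f} GiB (binary)")   # ~0.88 GiB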

Figure 5-4 shows an example of an array that is formatted for FB data with 1 GB extents (the
squares in the rank just indicate that the extent is composed of several blocks from different
DDMs).

Figure 5-4 Forming an FB rank with 1 GB extents

It is still possible to define a CKD volume with a capacity that is an integral multiple of one
cylinder or a fixed block LUN with a capacity that is an integral multiple of 128 logical blocks
(64 KB). However, if the defined capacity is not an integral multiple of the capacity of one
extent, the unused capacity in the last extent is wasted. For example, you could define a one
cylinder CKD volume, but 1113 cylinders (1 extent) will be allocated and 1112 cylinders would
be wasted.

Encryption group
A DS8000 series can be ordered with encryption capable disk drives. If you plan to use
encryption, before creating a rank, you must define an encryption group (for more
information, see IBM System Storage DS8700 Disk Encryption Implementation and Usage
Guidelines, REDP-4500). Currently, the DS8000 series supports only one encryption group.
All ranks must be in this encryption group. The encryption group is an attribute of a rank. So,
your choice is to encrypt everything or nothing. You can switch encryption on later (by
creating an encryption group), but then all ranks must be deleted and re-created, which
means your data is also deleted.

5.2.4 Extent Pools


An Extent Pool is a logical construct to aggregate the extents from a set of ranks, forming a
domain for extent allocation to a logical volume. Typically, the ranks in an Extent Pool should
have the same RAID type and the same disk RPM characteristics so that the extents in the
Extent Pool have homogeneous characteristics.

Important: Do not mix ranks with different RAID types or disk rpm in an Extent Pool. Do
not mix SSD and HDD ranks in the same Extent Pool.

There is no predefined affinity of ranks or arrays to a storage server. The affinity of the rank
(and its associated array) to a given server is determined at the point it is assigned to an
Extent Pool.

One or more ranks with the same extent type (FB or CKD) can be assigned to an Extent Pool.
One rank can be assigned to only one Extent Pool. There can be as many Extent Pools as
there are ranks.

There are considerations regarding how many ranks should be added in an Extent Pool.
Storage Pool Striping allows you to create logical volumes striped across multiple ranks. This
will typically enhance performance. To benefit from Storage Pool Striping (see “Storage Pool
Striping: Extent rotation” on page 96), more than one rank in an Extent Pool is required.

Storage Pool Striping can enhance performance significantly, but when you lose one rank (in
the unlikely event that a whole RAID array failed due to a scenario with multiple failures at the
same time), not only is the data of this rank lost, but also all data in this Extent Pool because
data is striped across all ranks. To avoid data loss, mirror your data to a remote DS8000.

The DS Storage Manager GUI prompts you to use the same RAID types in an Extent Pool. As
such, when an Extent Pool is defined, it must be assigned with the following attributes:
򐂰 Server affinity
򐂰 Extent type
򐂰 RAID type
򐂰 Drive Class
򐂰 Encryption group

Just like ranks, Extent Pools also belong to an encryption group. When defining an Extent
Pool, you have to specify an encryption group. Encryption group 0 means no encryption.
Encryption group 1 means encryption. Currently, the DS8000 series supports only one
encryption group and encryption is on for all Extent Pools or off for all Extent Pools.

The minimum number of Extent Pools is two, with one assigned to server 0 and the other to
server 1 so that both servers are active. In an environment where FB and CKD are to go onto
the DS8000 series storage system, four Extent Pools would provide one FB pool for each
server, and one CKD pool for each server, to balance the capacity between the two servers.
Figure 5-5 is an example of a mixed environment with CKD and FB Extent Pools. Additional
Extent Pools might also be desirable to segregate ranks with different DDM types. Extent
Pools are expanded by adding more ranks to the pool. Ranks are organized in two rank
groups; rank group 0 is controlled by server 0 and rank group 1 is controlled by server 1.



Important: For best performance, balance capacity between the two servers and create at
least two Extent Pools, with one per server.

Figure 5-5 Extent Pools

5.2.5 Logical volumes


A logical volume is composed of a set of extents from one Extent Pool.

On a DS8000, up to 65280 (we use the abbreviation 64 K in this discussion, even though it is
actually 65536 - 256, which is not quite 64 K in binary) volumes can be created (either 64 K
CKD, or 64 K FB volumes, or a mixture of both types with a maximum of 64 K volumes in
total).

Fixed block LUNs


A logical volume composed of fixed block extents is called a LUN. A fixed block LUN is
composed of one or more 1 GiB (2^30 bytes) extents from one FB Extent Pool. A LUN cannot
span multiple Extent Pools, but a LUN can have extents from different ranks within the same
Extent Pool. You can construct LUNs up to a size of 2 TiB (where 1 TiB equals 2^40 bytes).

LUNs can be allocated in binary GiB (2^30 bytes), decimal GB (10^9 bytes), or 512 or 520 byte
blocks. However, the physical capacity that is allocated for a LUN is always a multiple of
1 GiB, so it is a good idea to have LUN sizes that are a multiple of a gibibyte. If you define a
LUN with a LUN size that is not a multiple of 1 GiB, for example, 25.5 GiB, the LUN size is
25.5 GiB, but 26 GiB are physically allocated, of which 0.5 GiB of the physical storage remains
unusable.
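
Because allocation is always rounded up to whole 1 GiB extents, the unusable remainder for
any requested LUN size can be computed as in this small sketch; the 25.5 GiB value is the
example from the text.

import math

def fb_allocation(requested_gib: float):
    """Return (extents allocated, unusable GiB) for a fixed block LUN."""
    extents = math.ceil(requested_gib)      # extents are 1 GiB each
    return extents, extents - requested_gib

extents, wasted = fb_allocation(25.5)
print(f"25.5 GiB LUN -> {extents} extents allocated, {wasted} GiB unusable")
# -> 26 extents allocated, 0.5 GiB unusable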



CKD volumes
A System z CKD volume is composed of one or more extents from one CKD Extent Pool.
CKD extents are of the size of 3390 Model 1, which has 1113 cylinders. However, when you
define a System z CKD volume, you do not specify the number of 3390 Model 1 extents but
the number of cylinders you want for the volume.

The DS8000 and z/OS limit the size of CKD extended address volumes (EAV). You can define
CKD volumes with up to 262,668 cylinders, which is about 223 GB. This large volume capacity
is called an Extended Address Volume (EAV) and is supported by the 3390 Model A.

Important: EAV volumes can only be used by IBM z/OS 1.10 or later versions.

If the number of cylinders specified is not an exact multiple of 1113 cylinders, then some
space in the last allocated extent is wasted. For example, if you define 1114 or 3340
cylinders, 1112 cylinders are wasted. For maximum storage efficiency, you should consider
allocating volumes that are exact multiples of 1113 cylinders. In fact, multiples of 3339
cylinders should be considered for future compatibility.

If you want to use the maximum number of cylinders for a volume on a DS8800 (that is
262,668 cylinders), you are not wasting cylinders, because it is an exact multiple of 1113
(262,668 divided by 1113 is exactly 236). For even better future compatibility, you should use
a size of 260,442 cylinders, which is an exact multiple (78) of 3339, a model 3 size. On
DS8000s running older Licensed Machine Codes, the maximum number of cylinders was
65,520 and it is not a multiple of 1113. You can use 65,520 cylinders and waste 147 cylinders
for each volume (the difference to the next multiple of 1113), or you might be better off with a
volume size of 64,554 cylinders, which is a multiple of 1113 (factor of 58), or even better, with
63,441 cylinders, which is a multiple of 3339, a model 3 size. See Figure 5-6 on page 92.
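
The cylinder arithmetic above is easy to double-check; the following sketch reports, for a few
of the sizes mentioned, how many cylinders would be wasted in the last 1113-cylinder extent
and whether the size is also a multiple of the 3339-cylinder model 3 size.

EXTENT_CYLS = 1_113             # one CKD extent (3390 Model 1)
MODEL3_CYLS = 3 * EXTENT_CYLS   # 3339 cylinders (3390 Model 3)

def ckd_waste(cylinders: int) -> int:
    """Cylinders lost in the last partially used 1113-cylinder extent."""
    return (-cylinders) % EXTENT_CYLS

for size in (262_668, 260_442, 65_520, 64_554, 63_441, 1_114):
    print(f"{size:>7} cylinders: {ckd_waste(size):>4} wasted, "
          f"multiple of 3339: {size % MODEL3_CYLS == 0}")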

Figure 5-6 Allocation of a CKD logical volume

A CKD volume cannot span multiple Extent Pools, but a volume can have extents from
different ranks in the same Extent Pool or you can stripe a volume across the ranks (see
“Storage Pool Striping: Extent rotation” on page 96). Figure 5-6 shows how a logical volume is
allocated with a CKD volume as an example. The allocation process for FB volumes is similar
and is shown in Figure 5-7.

Figure 5-7 Creation of an FB LUN

IBM i LUNs
IBM i LUNs are also composed of fixed block 1 GiB extents. There are, however, some
special aspects with System i LUNs. LUNs created on a DS8000 are always RAID-protected.
LUNs are based on RAID 5, RAID 6, or RAID 10 arrays. However, you might want to deceive
i5/OS and tell it that the LUN is not RAID-protected. This causes the i5/OS to do its own
mirroring. System i LUNs can have the attribute unprotected, in which case, the DS8000 will
report that the LUN is not RAID-protected.

The i5/OS only supports certain fixed volume sizes, for example, model sizes of 8.5 GB,
17.5 GB, and 35.1 GB. These sizes are not multiples of 1 GB, and hence, depending on the
model chosen, some space is wasted. IBM i LUNs expose a 520 Byte block to the host. The
operating system uses 8 of these Bytes so the usable space is still 512 Bytes like other SCSI
LUNs. The capacities quoted for the IBM i LUNs are in terms of the 512 Byte block capacity
and are expressed in GB (10^9 bytes). These capacities should be converted to GiB (2^30
bytes) when considering effective utilization of extents that are 1 GiB (2^30 bytes). For more
information about
this topic, see IBM Redbooks publication IBM System Storage DS8000: Host Attachment and
Interoperability, SG24-8887.
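
Because the quoted IBM i sizes are decimal GB of 512-byte blocks while extents are binary
GiB, a quick conversion shows roughly how much of the last extent is left over. The model
sizes below are the ones quoted above; the left-over figures are only approximations.

import math

def gb_to_gib(gb: float) -> float:
    """Convert decimal GB (10**9 bytes) to binary GiB (2**30 bytes)."""
    return gb * 10**9 / 2**30

for model_gb in (8.5, 17.5, 35.1):          # i5/OS model sizes quoted above
    gib = gb_to_gib(model_gb)
    extents = math.ceil(gib)
    print(f"{model_gb} GB is about {gib:.2f} GiB -> {extents} extents, "
          f"{extents - gib:.2f} GiB of the last extent unused")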

5.2.6 Track Space Efficient volumes


Track Space Efficient volumes are supported as FlashCopy SE target volumes only.

When a standard FB LUN or CKD volume is created on the physical drive, it will occupy as
many extents as necessary for the defined capacity.



For the DS8800 with Licensed Machine Code 6.6.0.xx, there is the capability of creating Track
Space Efficient Volumes for use with FlashCopy SE. Track Space Efficient Volumes and
Extent Space Efficient Volumes, used by Thin Provisioned LUNs, are described in detail in
DS8000 Thin Provisioning, REDP-4554.

Note: The DS8800 at Licensed Machine Code 6.6.0.xx does not support Extent Space
Efficient Volumes.

A Space Efficient volume does not occupy physical capacity when it is created. Space gets
allocated when data is actually written to the volume. The amount of space that gets
physically allocated is a function of the amount of data written to or changes performed on the
volume. The sum of capacities of all defined Space Efficient volumes can be larger than the
physical capacity available. This function is also called over-provisioning or thin provisioning.

Space Efficient volumes can be created on the DS8800 when the IBM FlashCopy SE feature
is enabled (licensing is required).

The general idea behind Space Efficient volumes is to use or allocate physical storage when
it is only potentially or temporarily needed.

Repository for Track Space Efficient volumes


The definition of Track Space Efficient (TSE) volumes begins at the Extent Pool level. TSE
volumes are defined from virtual space in that the size of the TSE volume does not initially
use physical storage. However, any data written to a TSE volume must have enough physical
storage to contain this write activity. This physical storage is provided by the repository.

Note: The TSE repository cannot be created on SATA Drives.

The repository is an object within an Extent Pool. In some sense it is similar to a volume
within the Extent Pool. The repository has a physical size and a logical size. The physical size
of the repository is the amount of space that is allocated in the Extent Pool. It is the physical
space that is available for all Space Efficient volumes in total in this Extent Pool. The
repository is striped across all ranks within the Extent Pool. There can only be one repository
per Extent Pool.

Important: The size of the repository and the virtual space it utilizes are part of the Extent
Pool definition. Each Extent Pool may have a TSE volume repository, but this physical
space cannot be shared between Extent Pools.

The logical size of the repository is limited by the available virtual capacity for Space Efficient
volumes. As an example, there could be a repository of 100 GB reserved physical storage
and you defined a virtual capacity of 200 GB. In this case, you could define 10 TSE-LUNs
with 20 GB each. So the logical capacity can be larger than the physical capacity. Of course,
you cannot fill all the volumes with data because the total physical capacity is limited by the
repository size, that is, to 100 GB in this example.
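
The repository example above amounts to a simple over-provisioning calculation, sketched
here with the same figures (they are illustrative values from the example, not product limits).

# Over-provisioning example from the text: 100 GB physical repository,
# 200 GB virtual capacity, Track Space Efficient volumes of 20 GB each.
repository_gb = 100      # physical space reserved in the Extent Pool
virtual_gb = 200         # virtual capacity defined for TSE volumes
tse_volume_gb = 20

print(f"{virtual_gb // tse_volume_gb} TSE volumes of {tse_volume_gb} GB can be defined")
print(f"over-provisioning ratio {virtual_gb / repository_gb:.1f}:1; "
      f"no more than {repository_gb} GB of data can be physically stored in total")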

Note: In the current implementation of Track Space Efficient volumes, it is not possible to
expand the physical size of the repository. Therefore, careful planning for the size of the
repository is required before it is used. If a repository needs to be expanded, all Track
Space Efficient volumes within this Extent Pool must be deleted, and then the repository
must be deleted and recreated with the required size.



Space for a Space Efficient volume is allocated when a write occurs, more precisely, when a
destage from the cache occurs and there is not enough free space left on the currently
allocated extent or track. The TSE allocation unit is a track (64 KB for open systems LUNs or
57 KB for CKD volumes).

Because space is allocated in extents or tracks, the system needs to maintain tables
indicating their mapping to the logical volumes, so there is some impact involved with Space
Efficient volumes. The smaller the allocation unit, the larger the tables and the impact.

Summary: Virtual space is created as part of the Extent Pool definition. This virtual space
is mapped onto TSE volumes in the repository (physical space) as needed. No actual
storage is allocated until write activity occurs to the TSE volumes.

Figure 5-8 illustrates the concept of Track Space Efficient volumes.

Figure 5-8 Concept of Track Space Efficient volumes for FlashCopy SE

The lifetime of data on Track Space Efficient volumes is expected to be short because they
are used as FlashCopy targets only. Physical storage gets allocated when data is written to
Track Space Efficient volumes and we need some mechanism to free up physical space in the
repository when the data is no longer needed.

The FlashCopy commands have options to release the space of Track Space Efficient
volumes when the FlashCopy relationship is established or removed.

The CLI commands initfbvol and initckdvol can also release the space for Space
Efficient volumes.



5.2.7 Allocation, deletion, and modification of LUNs/CKD volumes
All extents of the ranks assigned to an Extent Pool are independently available for allocation
to logical volumes. The extents for a LUN/volume are logically ordered, but they do not have
to come from one rank and the extents do not have to be contiguous on a rank.

This construction method of using fixed extents to form a logical volume in the DS8000 series
allows flexibility in the management of the logical volumes. We can delete LUNs/CKD
volumes, resize LUNs/volumes, and reuse the extents of those LUNs to create other
LUNs/volumes, maybe of different sizes. One logical volume can be removed without
affecting the other logical volumes defined on the same Extent Pool.

Because the extents are cleaned after you have deleted a LUN or CKD volume, it can take
some time until these extents are available for reallocation. The reformatting of the extents is
a background process.

There are two extent allocation algorithms for the DS8000: Rotate volumes and Storage Pool
Striping (Rotate extents).

Note: The default for extent allocation method is Storage Pool Striping (Rotate extents) for
Licensed Machine Code 6.6.0.xx. In prior releases of Licensed Machine Code the default
allocation method was Rotate volumes.

Storage Pool Striping: Extent rotation


The preferred storage allocation method is Storage Pool Striping. Storage Pool Striping is an
option when a LUN/volume is created. The extents of a volume can be striped across several
ranks. An Extent Pool with more than one rank is needed to use this storage allocation
method.

The DS8000 maintains a sequence of ranks. The first rank in the list is randomly picked at
each power on of the storage subsystem. The DS8000 keeps track of the rank in which the
last allocation started. The allocation of the first extent for the next volume starts from the next
rank in that sequence. The next extent for that volume is taken from the next rank in sequence
and so on. Thus, the system rotates the extents across the ranks.
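
The rotate extents behavior can be illustrated with a toy model; the rank names, the starting
rank, and the volume sizes below are arbitrary, and the real microcode logic is more involved
(for example, it also skips full ranks, as noted later).

def allocate_striped(ranks, start_index, extents):
    """Toy rotate-extents model: extents are placed round robin over the ranks;
    the next volume starts one rank after the rank where this one started.
    Full ranks and non-striped volumes are ignored for simplicity."""
    placement = [ranks[(start_index + i) % len(ranks)] for i in range(extents)]
    return placement, (start_index + 1) % len(ranks)

ranks = ["R1", "R2", "R3"]
start = 1                                   # rank picked at power on (here R2)
for name, size in (("vol_A", 2), ("vol_B", 5)):
    placement, start = allocate_striped(ranks, start, size)
    print(name, "->", placement)
# vol_A -> ['R2', 'R3']
# vol_B -> ['R3', 'R1', 'R2', 'R3', 'R1']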

Rotate volumes allocation method


Extents can be allocated sequentially. In this case all extents are taken from the same rank
until we have enough extents for the requested volume size or the rank is full, in which case
the allocation continues with the next rank in the Extent Pool.

If more than one volume is created in one operation, the allocation for each volume starts in
another rank. When allocating several volumes, we rotate through the ranks.

You might want to consider this allocation method when you prefer to manage performance
manually. The workload of one volume is going to one rank. This makes the identification of
performance bottlenecks easier; however, by putting all the volumes data onto just one rank,
you might introduce a bottleneck, depending on your actual workload.

Tip: Rotate extents and rotate volume EAMs provide distribution of volumes over ranks.
Rotate extents does this at a granular (1 GB extent) level, which is the preferred method to
minimize hot spots and improve overall performance.



Figure 5-9 shows an example of how volumes are allocated within the Extent Pool.

Figure 5-9 Extent allocation methods (figure annotations: the rank to start with for the first
volume is determined at power on, here R2; a striped volume with two extents is created; the
next striped volume, five extents in this example, starts at the next rank, R3, counting from the
rank at which the previous volume was started; a non-striped volume created next starts at the
next rank, R1, going round robin; a further striped volume then starts at rank R2)

When you create striped volumes and non-striped volumes in an Extent Pool, a rank could be
filled before the others. A full rank is skipped when you create new striped volumes.

Tip: If you have to add capacity to an Extent Pool because it is nearly full, it is better to add
several ranks at the same time, not just one. This allows new volumes to be striped across
the newly added ranks.

By using striped volumes, you distribute the I/O load of a LUN/CKD volume to more than just
one set of eight disk drives. The ability to distribute a workload to many physical drives can
greatly enhance performance for a logical volume. In particular, operating systems that do not
have a volume manager that can do striping will benefit most from this allocation method.

However, if you have Extent Pools with many ranks and all volumes are striped across the
ranks and you lose just one rank, for example, because there are two disk drives in the same
rank that fail at the same time and it is not a RAID 6 rank, you will lose much of your data.

On the other hand, if you already use host-level striping, for example, Physical Partition
striping in AIX, double striping probably will not improve performance any further. The same
can be expected when the DS8000 LUNs are used by an SVC that stripes data across LUNs.

If you decide to use Storage Pool Striping it is probably better to use this allocation method for
all volumes in the Extent Pool to keep the ranks equally filled and utilized.

Tip: When configuring a new DS8000, do not mix volumes using the storage pool striping
method and volumes using the rotate volumes method in the same Extent Pool.



For more information about how to configure Extent Pools and volumes for optimal
performance see Chapter 7, “Performance” on page 121.

Logical volume configuration states


Each logical volume has a configuration state attribute. The configuration state reflects the
condition of the logical volume relative to user requested configuration operations, as shown
in Figure 5-10.

When a logical volume creation request is received, a logical volume object is created and the
logical volume's configuration state attribute is placed in the configuring configuration state.
Once the logical volume is created and available for host access, it is placed in the normal
configuration state. If a volume deletion request is received, the logical volume is placed in the
deconfiguring configuration state until all capacity associated with the logical volume is
deallocated and the logical volume object is deleted.

The reconfiguring configuration state is associated with a volume expansion request (refer to
“Dynamic Volume Expansion” for more information). As shown, the configuration state
serializes user requests with the exception that a volume deletion request can be initiated
from any configuration state.

Figure 5-10 Logical volume configuration states (state diagram: Create Volume places the
volume in the configuring state, and it moves to the normal state when the volume is online;
Expand Volume moves it to the reconfiguring state and back to normal when the expansion is
online; Delete Volume, allowed from any state, moves it to the deconfiguring state until the
volume is deleted)

Dynamic Volume Expansion


The size of a LUN or CKD volume can be expanded without destroying the data. On the
DS8000, you simply add extents to the volume. The operating system will have to support this
re-sizing.

A logical volume has the attribute of being striped across the ranks or not. If the volume was
created as striped across the ranks of the Extent Pool, then the extents that are used to
increase the size of the volume are striped. If a volume was created without striping, the
system tries to allocate the additional extents within the same rank that the volume was
created from originally.

Because most operating systems have no means of moving data from the end of the physical
disk off to some unused space at the beginning of the disk, and because of the risk of data
corruption, IBM does not support shrinking a volume. The DS8000 configuration interfaces
DS CLI and DS GUI will not allow you to change a volume to a smaller size.

Consideration: Before you can expand a volume, you have to delete any copy services
relationship involving that volume.

5.2.8 Logical subsystem


A logical subsystem (LSS) is another logical construct. It groups logical volumes and LUNs, in
groups of up to 256 logical volumes.

On the DS8000 series, there is no fixed binding between any rank and any logical subsystem.
The capacity of one or more ranks can be aggregated into an Extent Pool and logical volumes
configured in that Extent Pool are not bound to any specific rank. Different logical volumes on
the same logical subsystem can be configured in different Extent Pools. As such, the
available capacity of the storage facility can be flexibly allocated across the set of defined
logical subsystems and logical volumes. You can now define up to 255 LSSs for the DS8000
series.

For each LUN or CKD volume, you can now choose an LSS. You can have up to 256 volumes
in one LSS. There is, however, one restriction. We already have seen that volumes are
formed from a number of extents from an Extent Pool. Extent Pools, however, belong to one
server (CEC), server 0 or server 1, respectively. LSSs also have an affinity to the servers. All
even-numbered LSSs (X’00’, X’02’, X’04’, up to X’FE’) belong to server 0 and all
odd-numbered LSSs (X’01’, X’03’, X’05’, up to X’FD’) belong to server 1. LSS X’FF’ is
reserved.

System z users are familiar with a logical control unit (LCU). System z operating systems
configure LCUs to create device addresses. There is a one to one relationship between an
LCU and a CKD LSS (LSS X'ab' maps to LCU X'ab'). Logical volumes have a logical volume
number X'abcd' where X'ab' identifies the LSS and X'cd' is one of the 256 logical volumes on
the LSS. This logical volume number is assigned to a logical volume when a logical volume is
created and determines the LSS that it is associated with. The 256 possible logical volumes
associated with an LSS are mapped to the 256 possible device addresses on an LCU (logical
volume X'abcd' maps to device address X'cd' on LCU X'ab'). When creating CKD logical
volumes and assigning their logical volume numbers, consider whether Parallel Access
Volumes (PAVs) are required on the LCU and reserve some of the addresses on the LCU for
alias addresses.

For open systems, LSSs do not play an important role except in determining which server
manages the LUN (and in which Extent Pool it must be allocated) and in certain aspects
related to Metro Mirror, Global Mirror, or any of the other remote copy implementations.

Some management actions in Metro Mirror, Global Mirror, or Global Copy, operate at the LSS
level. For example, the freezing of pairs to preserve data consistency across all pairs, in case
you have a problem with one of the pairs, is done at the LSS level. With the option to put all or
most of the volumes of a certain application in just one LSS, makes the management of
remote copy operations easier; see Figure 5-11.



Figure 5-11 Grouping of volumes in LSSs

Fixed block LSSs are created automatically when the first fixed block logical volume on the
LSS is created, and deleted automatically when the last fixed block logical volume on the LSS
is deleted. CKD LSSs require user parameters to be specified and must be created before the
first CKD logical volume can be created on the LSS; they must be deleted manually after the
last CKD logical volume on the LSS is deleted.

Address groups
Address groups are created automatically when the first LSS associated with the address
group is created, and deleted automatically when the last LSS in the address group is
deleted.

All devices in an LSS must be either CKD or FB. This restriction goes even further. LSSs are
grouped into address groups of 16 LSSs. LSSs are numbered X'ab', where a is the address
group and b denotes an LSS within the address group. So, for example, X'10' to X'1F' are
LSSs in address group 1.

All LSSs within one address group have to be of the same type, CKD or FB. The first LSS
defined in an address group sets the type of that address group.

Important: System z users who still want to use ESCON to attach hosts to the DS8000
series should be aware that ESCON supports only the 16 LSSs of address group 0 (LSS
X'00' to X'0F'). Therefore, this address group should be reserved for ESCON-attached
CKD devices in this case and not used as FB LSSs. The DS8800 does not support
ESCON channels. ESCON devices can only be attached by using FICON/ESCON
converters.

Figure 5-12 shows the concept of LSSs and address groups.



Figure 5-12 Logical storage subsystems

The LUN identifications X'gabb' are composed of the address group X'g', and the LSS
number within the address group X'a', and the position of the LUN within the LSS X'bb'. For
example, FB LUN X'2101' denotes the second (X'01') LUN in LSS X'21' of address group 2.
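
The decomposition of a volume ID, together with the even/odd server affinity rule described
earlier in this section, can be sketched as follows; this is a simple illustration, not a DS8000
interface.

def decode_volume_id(vol_id: str):
    """Decode a four-digit hex volume ID X'gabb' (for example '2101')."""
    address_group = int(vol_id[0], 16)       # X'g'
    lss = int(vol_id[0:2], 16)               # X'ga' is the LSS number
    position = int(vol_id[2:4], 16)          # X'bb' within the LSS
    server = lss % 2                         # even LSS -> server 0, odd LSS -> server 1
    return address_group, lss, position, server

g, lss, pos, server = decode_volume_id("2101")
print(f"address group {g}, LSS X'{lss:02X}', volume X'{pos:02X}', server {server}")
# -> address group 2, LSS X'21', volume X'01', server 1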

5.2.9 Volume access


A DS8000 provides mechanisms to control host access to LUNs. In most cases, a server has
two or more HBAs and the server needs access to a group of LUNs. For easy management of
server access to logical volumes, the DS8000 introduced the concept of host attachments
and volume groups.

Host attachment
Host bus adapters (HBAs) are identified to the DS8000 in a host attachment construct that
specifies the HBAs’ World Wide Port Names (WWPNs). A set of host ports can be associated
through a port group attribute that allows a set of HBAs to be managed collectively. This port
group is referred to as a host attachment within the GUI.

Each host attachment can be associated with a volume group to define which LUNs that HBA
is allowed to access. Multiple host attachments can share the same volume group. The host
attachment can also specify a port mask that controls which DS8800 I/O ports the HBA is
allowed to log in to. Whichever ports the HBA logs in on, it sees the same volume group that
is defined on the host attachment associated with this HBA.

The maximum number of host attachments on a DS8800 is 8192.



Volume group
A volume group is a named construct that defines a set of logical volumes. When used in
conjunction with CKD hosts, there is a default volume group that contains all CKD volumes
and any CKD host that logs in to a FICON I/O port has access to the volumes in this volume
group. CKD logical volumes are automatically added to this volume group when they are
created and automatically removed from this volume group when they are deleted.

When used in conjunction with open systems hosts, a host attachment object that identifies
the HBA is linked to a specific volume group. You must define the volume group by indicating
which fixed block logical volumes are to be placed in the volume group. Logical volumes can
be added to or removed from any volume group dynamically.

There are two types of volume groups used with open systems hosts and the type determines
how the logical volume number is converted to a host addressable LUN_ID on the Fibre
Channel SCSI interface. A map volume group type is used in conjunction with FC SCSI host
types that poll for LUNs by walking the address range on the SCSI interface. This type of
volume group can map any FB logical volume numbers to 256 LUN_IDs that have zeroes in
the last six Bytes and the first two Bytes in the range of X'0000' to X'00FF'.

A mask volume group type is used in conjunction with FC SCSI host types that use the Report
LUNs command to determine the LUN_IDs that are accessible. This type of volume group
can allow any and all FB logical volume numbers to be accessed by the host where the mask
is a bitmap that specifies which LUNs are accessible. For this volume group type, the logical
volume number X'abcd' is mapped to LUN_ID X'40ab40cd00000000'. The volume group type
also controls whether 512 Byte block LUNs or 520 Byte block LUNs can be configured in the
volume group.
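As a small illustration of this mapping rule (again, not a DS8000 interface; the function name is an assumption used only here), the following Python sketch builds the LUN_ID for a mask-type volume group:

# Illustrative sketch of the mask-type volume group address mapping:
# logical volume number X'abcd' -> LUN_ID X'40ab40cd00000000'.
# Not a DS8000 API.
def mask_type_lun_id(volume_number):
    """volume_number is a four-digit hexadecimal string such as '2101'."""
    ab, cd = volume_number[0:2], volume_number[2:4]
    return "40" + ab + "40" + cd + "00000000"

print(mask_type_lun_id("2101"))   # '4021400100000000'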

When associating a host attachment with a volume group, the host attachment contains
attributes that define the logical block size and the Address Discovery Method (LUN Polling or
Report LUNs) that are used by the host HBA. These attributes must be consistent with the
volume group type of the volume group that is assigned to the host attachment so that HBAs
that share a volume group have a consistent interpretation of the volume group definition and
have access to a consistent set of logical volume types. The GUI typically sets these values
appropriately for the HBA based on your specification of a host type. You must consider what
volume group type to create when setting up a volume group for a particular HBA.

FB logical volumes can be defined in one or more volume groups. This allows a LUN to be
shared by host HBAs configured to different volume groups. An FB logical volume is
automatically removed from all volume groups when it is deleted.

The maximum number of volume groups is 8320 for the DS8800.

Figure 5-13 shows the relationships between host attachments and volume groups. Host
AIXprod1 has two HBAs, which are grouped together in one host attachment and both are
granted access to volume group DB2-1. Most of the volumes in volume group DB2-1 are also
in volume group DB2-2, accessed by server AIXprod2. In our example, there is, however, one
volume in each group that is not shared. The server in the lower left part has four HBAs and
they are divided into two distinct host attachments. One can access some volumes shared
with AIXprod1 and AIXprod2. The other HBAs have access to a volume group called “docs”.



Figure 5-13 Host attachments and volume groups

5.2.10 Virtualization hierarchy summary


Going through the virtualization hierarchy, we start with just a bunch of disks that are grouped
in array sites. An array site is transformed into an array, with spare disks. The array is further
transformed into a rank with extents formatted for FB data or CKD.
򐂰 The extents from selected ranks are added to an Extent Pool. The combined extents from
those ranks in the Extent Pool are used for subsequent allocation to one or more logical
volumes. Within the Extent Pool, we can reserve some space for Track Space Efficient
(TSE) volumes by means of creating a repository. TSE volumes require virtual capacity to
be available in the Extent Pool.
򐂰 Next, we create logical volumes within the Extent Pools (by default, striping the volumes),
assigning them a logical volume number that determines which logical subsystem they
would be associated with and which server would manage them. Track Space Efficient
volumes for use with FlashCopy SE can only be created within the repository of the Extent
Pool.
򐂰 The LUNs are then assigned to one or more volume groups.
򐂰 Finally, the host HBAs are configured into a host attachment that is associated with a
volume group.

This virtualization concept provides much more flexibility than in previous products. Logical
volumes can dynamically be created, deleted, and resized. They can be grouped logically to
simplify storage management. Large LUNs and CKD volumes reduce the total number of
volumes, which contributes to the reduction of management effort.
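The layering can also be sketched as a simple data model. The following Python classes are purely illustrative assumptions (they are not DS8000 structures or APIs); they only restate the containment relationships described above. Array sites and arrays are omitted for brevity: an array site becomes an array, which becomes a rank.

# Illustrative data model of the virtualization hierarchy; all class and
# attribute names are assumptions chosen for readability, not DS8000 APIs.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Rank:                   # an array formatted into FB or CKD extents
    extent_type: str          # "FB" or "CKD"
    extents: int

@dataclass
class ExtentPool:             # aggregates extents from one or more ranks
    server: int               # 0 or 1, the server that manages the pool
    ranks: List[Rank]

@dataclass
class LogicalVolume:          # allocated from extents of one Extent Pool
    volume_id: str            # X'gabb'; determines LSS and managing server
    extents: int

@dataclass
class VolumeGroup:            # the set of volumes a host may access
    name: str
    volumes: List[LogicalVolume] = field(default_factory=list)

@dataclass
class HostAttachment:         # host WWPNs associated with a volume group
    wwpns: List[str]
    volume_group: VolumeGroup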



Figure 5-14 summarizes the virtualization hierarchy.

Figure 5-14 Virtualization hierarchy

5.3 Benefits of virtualization


The DS8000 physical and logical architecture defines new standards for enterprise storage
virtualization. The main benefits of the virtualization layers are:
򐂰 Flexible LSS definition allows maximization and optimization of the number of devices per
LSS.
򐂰 No strict relationship between RAID ranks and LSSs.
򐂰 No connection of LSS performance to underlying storage.
򐂰 Number of LSSs can be defined based upon device number requirements:
– With larger devices, significantly fewer LSSs might be used.
– Volumes for a particular application can be kept in a single LSS.
– Smaller LSSs can be defined if required (for applications requiring less storage).
– Test systems can have their own LSSs with fewer volumes than production systems.
򐂰 Increased number of logical volumes:
– Up to 65280 (CKD)
– Up to 65280 (FB)
– 65280 total for CKD + FB
򐂰 Any mixture of CKD and FB address groups (each address group spans 4096 addresses).



򐂰 Increased logical volume size:
– CKD: 223 GB (262,668 cylinders), architected for 219 TB
– FB: 2 TB, architected for 1 PB
򐂰 Flexible logical volume configuration:
– Multiple RAID types (RAID 5, RAID 6, and RAID 10)
– Storage types (CKD and FB) aggregated into Extent Pools
– Volumes allocated from extents of Extent Pool
– Storage pool striping
– Dynamically add and remove volumes
– Logical Volume Configuration States
– Dynamic Volume Expansion
– Track Space Efficient volumes for FlashCopy SE
– Extended Address Volumes (CKD)
򐂰 Virtualization reduces storage management requirements.



Chapter 6. IBM System Storage DS8800 Copy Services overview
This chapter provides an overview of the Copy Services functions that are available with the
DS8800 series models, including Remote Mirror and Copy functions and Point-in-Time Copy
functions.

These functions make the DS8800 series a key component for disaster recovery solutions,
data migration activities, and for data duplication and backup solutions.

This chapter covers the following topics:


򐂰 Introduction to Copy Services
򐂰 FlashCopy and FlashCopy SE
򐂰 Remote Mirror and Copy:
– Metro Mirror
– Global Copy
– Global Mirror
– Metro/Global Mirror
– z/OS Global Mirror
– z/OS Metro/Global Mirror

The information provided in this chapter is only an overview. It is covered to a greater extent
and in more detail in the following IBM Redbooks and IBM Redpapers™ publications:
򐂰 IBM System Storage DS8000: Copy Services for Open Systems, SG24-6788
򐂰 IBM System Storage IBM System Storage DS8000: Copy Services for IBM System z,
SG24-6787
򐂰 IBM System Storage DS8000 Series: IBM FlashCopy SE, REDP-4368



6.1 Copy Services
Copy Services is a collection of functions that provide disaster recovery, data migration, and
data duplication functions. With the Copy Services functions, for example, you can create
backup data with little or no disruption to your application, and you can back up your
application data to the remote site for disaster recovery.

The Copy Services functions run on the DS8800 storage unit and support open systems and
System z environments. They are also supported on other DS8000 family models.

DS8800 Copy Services functions


Copy Services in the DS8800 includes the following optional licensed functions:
򐂰 IBM System Storage FlashCopy and IBM FlashCopy SE, which are point-in-time copy
functions
򐂰 Remote mirror and copy functions, which include:
– IBM System Storage Metro Mirror, previously known as synchronous PPRC
– IBM System Storage Global Copy, previously known as PPRC eXtended Distance
– IBM System Storage Global Mirror, previously known as asynchronous PPRC
– IBM System Storage Metro/Global Mirror, a three-site solution to meet the most
rigorous business resiliency needs
– For migration purposes on an RPQ basis, consider IBM System Storage Metro/Global Copy. Understand that this combination of Metro Mirror and Global Copy is not suited for disaster recovery solutions; it is only intended for migration purposes.
򐂰 Additionally for IBM System z users, the following options are available:
– z/OS Global Mirror, previously known as eXtended Remote Copy (XRC)
– z/OS Metro/Global Mirror, a three-site solution that combines z/OS Global Mirror and
Metro Mirror

Many design characteristics of the DS8800 and its data copy and mirror capabilities and
features contribute to the protection of your data, 24 hours a day and seven days a week.

Copy Services management interfaces


You control and manage the DS8800 Copy Services functions by means of the following
interfaces:
򐂰 DS Storage Manager, the graphical user interface of the DS8800 (DS GUI)
򐂰 DS Command-Line Interface (DS CLI), which provides a set of commands that covers all Copy Services functions and options
򐂰 Tivoli Storage Productivity Center for Replication (TPC-R), which allows you to manage
large Copy Services implementations easily and provides data consistency across
multiple systems
򐂰 DS Open Application Programming Interface (DS Open API)

System z users can also use the following interfaces:


򐂰 TSO commands
򐂰 ICKDSF utility commands
򐂰 ANTRQST application programming interface (API)
򐂰 DFSMSdss utility



6.2 FlashCopy and FlashCopy SE
FlashCopy and FlashCopy SE provide the capability to create copies of logical volumes with
the ability to access both the source and target copies immediately. Such copies are called point-in-time copies.

FlashCopy is an optional licensed feature of the DS8800. Two variations of FlashCopy are
available:
򐂰 Standard FlashCopy, also referred to as the Point-in-Time Copy (PTC) licensed function
򐂰 FlashCopy SE licensed function

To use FlashCopy, you must have the corresponding licensed function indicator feature in the
DS8800, and you must acquire the corresponding DS8800 function authorization with the
adequate feature number license in terms of physical capacity. For details about feature and
function requirements, see 10.1, “IBM System Storage DS8800 licensed functions” on
page 204.

In this section, we discuss the FlashCopy and FlashCopy SE basic characteristics and
options.

6.2.1 Basic concepts


FlashCopy creates a point-in-time copy of the data. When a FlashCopy operation is invoked,
it takes only a few seconds to establish the FlashCopy relationship, consisting of the source
and target volume pairing and the necessary control bitmaps. Thereafter, a copy of the source
volume is available as though all the data had been copied. As soon as the pair has been
established, you can read and write to both the source and target volumes.

Two variations of FlashCopy are available:


򐂰 Standard FlashCopy uses a normal volume as target volume. This target volume has to
have at least the same size as the source volume and that space is fully allocated in the
storage system.
򐂰 FlashCopy Space Efficient (SE) uses Space Efficient volumes (see 5.2.6, “Track Space Efficient volumes” on page 93) as FlashCopy targets. A Space Efficient target volume has a virtual size that is at least that of the source volume. However, no space is allocated for this volume when the volume is created and the FlashCopy initiated. Space is allocated only for updated tracks, when the source or target volume is written to.

Be aware that both FlashCopy and FlashCopy SE can coexist on a DS8800.

Note: In this chapter, track means a piece of data in the DS8800; the DS8800 uses the
concept of logical tracks to manage Copy Services functions.

Figure 6-1 and the subsequent section explain the basic concepts of a standard FlashCopy.

If you access the source or the target volumes while the FlashCopy relation exists, I/O requests are handled as follows (a small sketch of this bitmap logic follows the list):
򐂰 Read from the source volume
When a read request goes to the source, data is directly read from there.
򐂰 Read from the target volume
When a read request goes to the target volume, FlashCopy checks the bitmap and:



– If the requested data has already been copied to the target, it is read from there.
– If the requested data has not been copied yet, it is read from the source.
򐂰 Write to the source volume
When a write request goes to the source, the data is first written to the cache and
persistent memory (write cache). Later, when the data is destaged to the physical extents
of the source volume, FlashCopy checks the bitmap for the location that is to be
overwritten and:
– If the point-in-time data was already copied to the target, the update is written to the
source directly.
– If the point-in-time data has not been copied to the target yet, it is now copied
immediately and only then is the update written to the source.

Figure 6-1 FlashCopy concepts

򐂰 Write to the target volume


Whenever data is written to the target volume while the FlashCopy relationship exists, the
storage system checks the bitmap and updates it if necessary. This way, FlashCopy does
not overwrite data that was written to the target with point-in-time data.
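The following Python sketch is a highly simplified model of this bitmap-driven behavior (it is not the DS8800 implementation; tracks are modeled as dictionary entries and caching is ignored):

# Simplified model of standard FlashCopy bitmap handling.
# Not the DS8800 microcode; tracks are dictionary entries, caching is ignored.
class FlashCopyRelation:
    def __init__(self, source, target, num_tracks):
        self.source, self.target = source, target
        self.copied = [False] * num_tracks   # True = point-in-time track already on target

    def read_target(self, track):
        # read from the target if the track was copied, otherwise from the source
        return (self.target if self.copied[track] else self.source)[track]

    def write_source(self, track, data):
        if not self.copied[track]:
            self.target[track] = self.source[track]   # preserve point-in-time data first
            self.copied[track] = True
        self.source[track] = data

    def write_target(self, track, data):
        self.target[track] = data                     # never overwritten by later copies
        self.copied[track] = True

source, target = {0: "A", 1: "B"}, {0: None, 1: None}
fc = FlashCopyRelation(source, target, num_tracks=2)
fc.write_source(0, "A-updated")      # the point-in-time "A" is copied to the target first
print(fc.read_target(0))             # prints "A", the point-in-time data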

The FlashCopy background copy


By default, standard FlashCopy invokes a background copy process that copies all
point-in-time data to the target volume. After the completion of this process, the FlashCopy
relation ends and the target volume becomes independent of the source.

The background copy can slightly impact application performance because the physical copy
needs some storage resources. The impact is minimal because host I/O always has higher
priority than the background copy.



No background copy option
A standard FlashCopy relationship can also be established with the NOCOPY option. With this option, FlashCopy does not initiate a background copy. Point-in-time data is copied only when required due to an update to either source or target. This eliminates the impact of the background copy.

This option is useful in the following situations:


򐂰 When the target will not be needed as an independent volume
򐂰 When repeated FlashCopy operations to the same target are expected

FlashCopy SE is automatically invoked with the NOCOPY option, because the target space is
not allocated and the available physical space is smaller than the size of the volume. A full
background copy would contradict the concept of space efficiency.

6.2.2 Benefits and use


The point-in-time copy created by FlashCopy is typically used where you need a copy of the
production data produced with little or no application downtime. Use cases for the
point-in-time copy created by FlashCopy include online backup, testing new applications, or
creating a copy of transactional data for data mining purposes. To the host or application, the
target looks exactly like the original source. It is an instantly available, binary copy.

IBM FlashCopy SE is designed for temporary copies. FlashCopy SE is optimized for use
cases where only about 5% of the source volume data is updated during the life of the
relationship. If more than 20% of the source data is expected to change, standard FlashCopy
would likely be the better choice.

Standard FlashCopy will generally have superior performance to FlashCopy SE. If


performance on the source or target volumes is important, using standard FlashCopy is a
more desirable choice.

Scenarios where using IBM FlashCopy SE is a good choice include:


򐂰 Creating a temporary copy and backing it up to tape.
򐂰 Creating temporary point-in-time copies for application development or DR testing.
򐂰 Performing regular online backup for different points in time.
򐂰 FlashCopy target volumes in a Global Mirror (GM) environment. Global Mirror is explained
in 6.3.3, “Global Mirror” on page 115.

In all scenarios, the write activity to both source and target is the crucial factor that decides
whether FlashCopy SE can be used.

6.2.3 FlashCopy options


FlashCopy provides many additional options and functions. We explain some of the options
and capabilities in this section:
򐂰 Incremental FlashCopy (refresh target volume)
򐂰 Persistent FlashCopy
򐂰 Data Set FlashCopy
򐂰 Multiple Relationship
򐂰 Consistency Group FlashCopy
򐂰 FlashCopy on existing Metro Mirror or Global Copy primary
򐂰 Inband commands over remote mirror link



Incremental FlashCopy (refresh target volume)
Refresh target volume provides the ability to refresh a FlashCopy relation without copying all
data from source to target again. When a subsequent FlashCopy operation is initiated, only
the changed tracks on both the source and target need to be copied from the source to the
target. The direction of the refresh can also be reversed, from (former) target to source.

In many cases only a small percentage of the entire data is changed in a day. In this situation,
you can use this function for daily backups and save the time for the physical copy of
FlashCopy.

Incremental FlashCopy requires the background copy and the Persistent FlashCopy option to
be enabled.
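As an illustrative sketch only (not the DS8800 change-recording implementation), the refresh decision can be expressed as the union of the tracks changed on either side since the last FlashCopy:

# Illustrative sketch of the incremental refresh decision; change recording
# is modeled as sets of changed track numbers. Not the DS8800 implementation.
def tracks_to_refresh(changed_on_source, changed_on_target):
    # a track is recopied if it changed on the source (new point-in-time data)
    # or on the target (the target no longer holds the point-in-time data)
    return changed_on_source | changed_on_target

print(tracks_to_refresh({1, 5}, {5, 9}))   # {1, 5, 9}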

Persistent FlashCopy
Persistent FlashCopy allows the FlashCopy relationship to remain even after the copy
operation completes. You must explicitly delete the relationship to terminate it.

Data Set FlashCopy


Data Set FlashCopy allows you to create a point-in-time copy of individual data sets instead of
complete volumes in an IBM System z environment.

Multiple Relationship FlashCopy


FlashCopy allows a source to have relationships with up to 12 targets simultaneously. A
usage case for this feature is to create regular point-in-time copies as online backups or time stamps. Only one of the multiple relationships can be incremental.

Consistency Group FlashCopy


Consistency Group FlashCopy allows you to freeze and temporarily queue I/O activity to a
volume. Consistency Group FlashCopy helps you to create a consistent point-in-time copy
without quiescing the application across multiple volumes, and even across multiple storage
units.

Consistency Group FlashCopy ensures that the order of dependent writes is always
maintained and thus creates host-consistent copies, not application-consistent copies. The
copies have power-fail or crash level consistency. To recover an application from Consistency
Group FlashCopy target volumes, you need to perform the same kind of recovery as after a
system crash.

FlashCopy on existing Metro Mirror or Global Copy primary


This option allows you to establish a FlashCopy relationship where the target is a Metro Mirror
or Global Copy primary volume. This enables you to create full or incremental point-in-time
copies at a local site and then use remote mirroring to copy the data to the remote site.

Note: You cannot FlashCopy from a source to a target if the target is also a Global Mirror
primary volume.

Metro Mirror and Global Copy are explained in 6.3.1, “Metro Mirror” on page 114 and in 6.3.2,
“Global Copy” on page 114.

Inband commands over remote mirror link


In a remote mirror environment, commands to manage FlashCopy at the remote site can be
issued from the local or intermediate site and transmitted over the remote mirror Fibre



Channel links. This eliminates the need for a network connection to the remote site solely for
the management of FlashCopy.

Note: This function is available by using the DS CLI, TSO, and ICKDSF commands, but
not by using the DS Storage Manager GUI.

6.2.4 FlashCopy SE-specific options


Most options for standard FlashCopy (see 6.2.3, “FlashCopy options” on page 111) work
identically for FlashCopy SE. The options that differ are discussed in this section.

Incremental FlashCopy
Because Incremental FlashCopy implies an initial full volume copy and a full volume copy is
not possible in an IBM FlashCopy SE relationship, Incremental FlashCopy is not possible with
IBM FlashCopy SE.

Data Set FlashCopy


FlashCopy SE relationships are limited to full volume relationships. As a result, data set level
FlashCopy is not supported within FlashCopy SE.

Multiple Relationship FlashCopy SE


Standard FlashCopy supports up to 12 relationships per source volume and one of these
relationships can be incremental. A FlashCopy onto a Space Efficient volume has a certain
overhead because additional tables and pointers have to be maintained. Therefore, it might be advisable to avoid using all 12 possible relationships.

6.3 Remote Mirror and Copy


The Remote Mirror and Copy functions of the DS8800 are a set of flexible data mirroring
solutions that allow replication between volumes on two or more disk storage systems. These
functions are used to implement remote data backup and disaster recovery solutions.

The Remote Mirror and Copy functions are optional licensed functions of the DS8800 that
include:
򐂰 Metro Mirror
򐂰 Global Copy
򐂰 Global Mirror
򐂰 Metro/Global Mirror

In addition, System z users can use the DS8800 for:


򐂰 z/OS Global Mirror
򐂰 z/OS Metro/Global Mirror

In the following sections, we discuss these Remote Mirror and Copy functions.

For a more detailed and extensive discussion about these topics, refer to the IBM Redbooks
publications listed on page 107.

Licensing requirements
To use any of these Remote Mirror and Copy optional licensed functions, you must have the
corresponding licensed function indicator feature in the DS8800, and you must acquire the



corresponding DS8800 function authorization with the adequate feature number license in
terms of physical capacity. For details about feature and function requirements, see 10.1,
“IBM System Storage DS8800 licensed functions” on page 204.

Also, consider that some of the remote mirror solutions, such as Global Mirror, Metro/Global
Mirror, or z/OS Metro/Global Mirror, integrate more than one licensed function. In this case,
you need to have all of the required licensed functions.

6.3.1 Metro Mirror


Metro Mirror, previously known as Synchronous Peer-to-Peer Remote Copy (PPRC), provides
real-time mirroring of logical volumes between two DS8800s or any other combination of
DS8100, DS8300, DS6800, and ESS800 that can be located up to 300 km from each other. It
is a synchronous copy solution where a write operation must be carried out on both copies, at
the local and remote sites, before it is considered complete.

Figure 6-2 illustrates the basic operational characteristics of Metro Mirror.

Figure 6-2 Metro Mirror basic operation
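The numbered write sequence shown in Figure 6-2 can be sketched as follows (a conceptual Python model only, not DS8800 code): the host acknowledgement is returned only after the secondary has confirmed the write.

# Conceptual sketch of a Metro Mirror (synchronous) write sequence.
# Not DS8800 code; volumes are modeled as dictionaries.
def metro_mirror_write(primary, secondary, track, data):
    primary[track] = data        # 1. host write arrives at the primary (local) volume
    secondary[track] = data      # 2./3. write is sent to the secondary and confirmed
    return "acknowledged"        # 4. only now is the host write acknowledged

primary, secondary = {}, {}
print(metro_mirror_write(primary, secondary, 0, "record"))   # acknowledged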

6.3.2 Global Copy


Global Copy, previously known as Peer-to-Peer Remote Copy eXtended Distance
(PPRC-XD), copies data non-synchronously and over longer distances than is possible with
Metro Mirror. When operating in Global Copy mode, the source does not wait for copy
completion on the target before acknowledging a host write operation. Therefore, the host is
not impacted by the Global Copy operation. Write data is sent to the target as the connecting network allows, independent of the order of the host writes. As a result, the target data lags behind and is inconsistent during normal operation.

You have to take extra steps to make Global Copy target data usable at specific points in time.
These steps depend on the purpose of the copy.



Here are two examples:
򐂰 Data migration
You can use Global Copy to migrate data over very long distances. When you want to
switch from old to new data, you have to stop the applications on the old site, tell Global
Copy to synchronize the data, and wait until it is finished.
򐂰 Asynchronous mirroring
Global Mirror, the IBM solution for asynchronous data replication, uses Global Copy to
transport the data over the long distance network. Periodic FlashCopies are used to
provide consistent data points. Global Mirror is described in the next section.

6.3.3 Global Mirror


Global Mirror, previously known as Asynchronous PPRC, is a two-site, long distance,
asynchronous, remote copy technology for both System z and Open Systems data. This
solution integrates the Global Copy and FlashCopy technologies. With Global Mirror, the data
that the host writes at the local site is asynchronously mirrored to the storage unit at the
remote site. With special management steps, under control of the local master storage unit, a
consistent copy of the data is automatically maintained and periodically updated on the
storage unit at the remote site.

Global Mirror benefits


򐂰 Support for virtually unlimited distances between the local and remote sites, with the
distance typically limited only by the capabilities of the network and the channel extension
technology. This unlimited distance enables you to choose your remote site location based
on business needs and enables site separation to add protection from localized disasters.
򐂰 A consistent and restartable copy of the data at the remote site, created with minimal
impact to applications at the local site.
򐂰 Data currency where, for many environments, the remote site lags behind the local site
typically 3 to 5 seconds, minimizing the amount of data exposure in the event of an
unplanned outage. The actual lag in data currency that you experience will depend upon a
number of factors, including specific workload characteristics and bandwidth between the
local and remote sites.
򐂰 Dynamic selection of the desired recovery point objective (RPO), based upon business
requirements and optimization of available bandwidth.
򐂰 Session support: data consistency at the remote site is internally managed across up to
eight storage units located at both the local site and the remote site.
򐂰 Efficient synchronization of the local and remote sites with support for failover and failback
operations, helping to reduce the time that is required to switch back to the local site after
a planned or unplanned outage.



How Global Mirror works
Figure 6-3 illustrates the basic operational characteristics of Global Mirror.

Figure 6-3 Global Mirror basic operation

The A volumes at the local site are the production volumes and are used as Global Copy
primaries. The data from the A volumes is replicated to the B volumes using Global Copy. At a
certain point in time, a Consistency Group is created from all the A volumes, even if they are
located in different storage units. This has very little application impact, because the creation
of the Consistency Group is quick (on the order of a few milliseconds).

After the Consistency Group is created, the application writes can continue updating the A
volumes. The missing increment of the consistent data is sent to the B volumes using the
existing Global Copy relations. After all data has reached the B volumes, Global Copy is halted for a brief period, while Global Mirror creates a FlashCopy from the B to the C volumes.
These now contain a consistent set of data at the secondary site.
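The cycle described above can be summarized in a few lines of illustrative pseudologic (a conceptual sketch only; the real cycle is coordinated by the master storage unit across all participating volumes):

# Conceptual sketch of one Global Mirror consistency-group cycle.
# Volumes are modeled as dictionaries; not the DS8800 implementation.
def global_mirror_cycle(a_volumes, b_volumes, c_volumes):
    consistency_point = dict(a_volumes)   # 1. create the consistency group (very brief impact)
    b_volumes.update(consistency_point)   # 2. drain the consistent increment to B (Global Copy)
    c_volumes.update(b_volumes)           # 3. FlashCopy B -> C: C now holds a consistent image

a, b, c = {"vol1": "data@t1"}, {}, {}
global_mirror_cycle(a, b, c)
print(c)   # {'vol1': 'data@t1'} -- consistent copy at the remote site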

The data at the remote site is current within 3 to 5 seconds, but this recovery point depends
on the workload and bandwidth available to the remote site.

With its efficient and autonomic implementation, Global Mirror is a solution for disaster
recovery implementations where a consistent copy of the data needs to be available at all
times at a remote location that can be separated by a long distance from the production site.

6.3.4 Metro/Global Mirror


Metro/Global Mirror is a three-site, multi-purpose, replication solution for both System z and
Open Systems data. Local site (site A) to intermediate site (site B) provides high availability
replication using Metro Mirror, and intermediate site (site B) to remote site (site C) supports
long distance disaster recovery replication with Global Mirror. See Figure 6-4.



Figure 6-4 Metro/Global Mirror elements

Both Metro Mirror and Global Mirror are well established replication solutions. Metro/Global
Mirror combines Metro Mirror and Global Mirror to incorporate the best features of the two
solutions:
򐂰 Metro Mirror
– Synchronous operation supports zero data loss.
– The opportunity to locate the intermediate site disk systems close to the local site
allows use of intermediate site disk systems in a high availability configuration.

Note: Metro Mirror can be used for distances of up to 300 km, but when used in a
Metro/Global Mirror implementation, a shorter distance might be more appropriate in
support of the high availability functionality.

򐂰 Global Mirror
– Asynchronous operation supports long distance replication for disaster recovery.
– The Global Mirror methodology has no impact to applications at the local site.
– This solution provides a recoverable, restartable, and consistent image at the remote
site with an RPO, typically in the 3 to 5 second range.

6.3.5 z/OS Global Mirror


z/OS Global Mirror, previously known as eXtended Remote Copy (XRC), is a copy function
available for the z/OS operating systems. It involves a System Data Mover (SDM) that is found
only in z/OS. z/OS Global Mirror maintains a consistent copy of the data asynchronously at a
remote location, and can be implemented over unlimited distances. It is a combined hardware
and software solution that offers data integrity and data availability and can be used as part of



business continuance solutions, for workload movement, and for data migration. z/OS Global
Mirror function is an optional licensed function of the DS8800.

Figure 6-5 illustrates the basic operational characteristics of z/OS Global Mirror.

Figure 6-5 z/OS Global Mirror basic operations

z/OS Global Mirror on zIIP


The IBM z9® Integrated Information Processor (zIIP) is a special engine available for
System z since the z9 generation. z/OS now provides the ability to utilize these processors to
handle eligible workloads from the System Data Mover (SDM) in an z/OS Global Mirror (zGM)
environment.

Given the appropriate hardware and software, a range of zGM workload can be offloaded to
zIIP processors. The z/OS software must be at V1.8 and above with APAR OA23174,
specifying zGM PARMLIB parameter zIIPEnable(YES).

6.3.6 z/OS Metro/Global Mirror


This mirroring capability implements z/OS Global Mirror to mirror primary site data to a
location that is a long distance away and also uses Metro Mirror to mirror primary site data to
a location within the metropolitan area. This enables a z/OS three-site high availability and
disaster recovery solution for even greater protection against unplanned outages.

Figure 6-6 illustrates the basic operational characteristics of a z/OS Metro/Global Mirror
implementation.



Figure 6-6 z/OS Metro/Global Mirror

6.3.7 Summary of Remote Mirror and Copy function characteristics


In this section, we summarize the use of and considerations for the set of Remote Mirror and
Copy functions available with the DS8800 series.

Metro Mirror
Metro Mirror is a function for synchronous data copy at a limited distance. The following
considerations apply:
򐂰 There is no data loss, and it allows for rapid recovery for distances up to 300 km.
򐂰 There will be a slight performance impact for write operations.

Global Copy
Global Copy is a function for non-synchronous data copy at long distances, which is only
limited by the network implementation. The following considerations apply:
򐂰 It can copy your data at nearly an unlimited distance, making it suitable for data migration
and daily backup to a remote distant site.
򐂰 The copy is normally fuzzy but can be made consistent through a synchronization
procedure.

Global Mirror
Global Mirror is an asynchronous copy technique; you can create a consistent copy in the
secondary site with an adaptable Recovery Point Objective (RPO). RPO specifies how much
data you can afford to recreate if the system needs to be recovered. The following
considerations apply:
򐂰 Global Mirror can copy to nearly an unlimited distance.
򐂰 It is scalable across multiple storage units.



򐂰 It can realize a low RPO if there is enough link bandwidth; when the link bandwidth
capability is exceeded with a heavy workload, the RPO will grow.
򐂰 Global Mirror causes only a slight impact to your application system.

z/OS Global Mirror


z/OS Global Mirror is an asynchronous copy technique controlled by z/OS host software
called System Data Mover. The following considerations apply:
򐂰 It can copy to nearly unlimited distances.
򐂰 It is highly scalable.
򐂰 It has low RPO; the RPO might grow if the bandwidth capability is exceeded, or host
performance might be impacted.
򐂰 Additional host server hardware and software is required.



Chapter 7. Performance
This chapter discusses the performance characteristics of the IBM System Storage DS8800
regarding physical and logical configuration. The considerations presented in this chapter can
help you plan the physical and logical setup.

For a detailed discussion about performance, see DS8000 Performance Monitoring and
Tuning, SG24-7146.

This chapter covers the following topics:


򐂰 DS8800 hardware: performance characteristics
򐂰 Software performance: synergy items
򐂰 Performance and sizing considerations for open systems
򐂰 Performance and sizing considerations for System z



7.1 DS8800 hardware: performance characteristics
The DS8800 features IBM POWER6+ server technology and a PCI Express I/O infrastructure
to help support high performance. Compared to the POWER5+ processor in previous models,
the POWER6 and POWER6+ processors can enable a more than 50% performance
improvement in I/O operations per second in transaction processing workload environments.
Additionally, peak large-block sequential workloads can receive as much as 200% bandwidth
improvement, which is an improvement factor of 3 compared to the DS8300 models.

The DS8800 offers either a dual 2-way processor complex or a dual 4-way processor
complex. The DS8800 overcomes many of the architectural limits of the predecessor disk
subsystems.

In this section, we go through the different architectural layers of the DS8000 and discuss the
performance characteristics that differentiate the DS8000 from other disk subsystems.

7.1.1 Fibre Channel switched interconnection at the back-end


The DS8800 works with SAS disks. Fibre Channel switching is used in the DS8800 back-end, up to the point where the FC-to-SAS conversion is made.

The FC technology is commonly used to connect a group of disks in a daisy-chained fashion


in a Fibre Channel Arbitrated Loop (FC-AL). To overcome the arbitration issue within FC-AL,
the DS8800 architecture is enhanced by adding a switch-based approach and creating FC-AL
switched loops, as shown in Figure 4-6 on page 73. It is called a Fibre Channel switched disk
system.

These switches use the FC-AL protocol and attach to the SAS drives (bridging to SAS
protocol) through a point-to-point connection. The arbitration message of a drive is captured
in the switch, processed, and propagated back to the drive, without routing it through all the
other drives in the loop.

Performance is enhanced because both device adapters (DAs) connect to the switched Fibre
Channel subsystem back-end, as shown in Figure 7-1. Note that each DA port can
concurrently send and receive data.

These two switched point-to-point connections to each drive, which also connect both DAs to
each switch, mean the following:
򐂰 There is no arbitration competition and interference between one drive and all the other
drives, because there is no hardware in common for all the drives in the FC-AL loop. This
leads to an increased bandwidth, which goes with the full 8 Gbps FC speed up to the
back-end place where the FC-to-SAS conversion is made, and which utilizes the full
SAS 2.0 speed for each individual drive.
򐂰 This architecture doubles the bandwidth over conventional FC-AL implementations due to
two simultaneous operations from each DA to allow for two concurrent read operations
and two concurrent write operations at the same time.
򐂰 In addition to superior performance, note the improved reliability, availability, and
serviceability (RAS) that this setup has over conventional FC-AL. The failure of a drive is
detected and reported by the switch. The switch ports distinguish between intermittent
failures and permanent failures. The ports understand intermittent failures, which are
recoverable, and collect data for predictive failure statistics. If one of the switches fails, a
disk enclosure service processor detects the failing switch and reports the failure using the
other loop. All drives can still connect through the remaining switch.



Figure 7-1 High availability and increased bandwidth connect both DAs to two logical loops

This discussion outlines the physical structure. A virtualization approach built on top of the
high performance architectural design contributes even further to enhanced performance, as
discussed in Chapter 5, “Virtualization concepts” on page 85.

7.1.2 Fibre Channel device adapter


The DS8000 relies on eight disk drive modules (DDMs) to form a RAID 5, RAID 6, or RAID 10
array. These DDMs are spread over two Fibre Channel fabrics. With the virtualization
approach and the concept of extents, the DS8000 device adapters (DAs) are mapping the
virtualization scheme over the disk subsystem back-end, as shown in Figure 7-2. For a
detailed discussion about disk subsystem virtualization, refer to Chapter 5, “Virtualization
concepts” on page 85.

The RAID device adapter is built on PowerPC technology with four Fibre Channel ports and high function and high performance ASICs. It is PCIe Gen2-based and runs at 8 Gbps.

Note that each DA performs the RAID logic and frees up the processors from this task. The
actual throughput and performance of a DA is not only determined by the port speed and
hardware used, but also by the firmware efficiency.



Figure 7-2 Fibre Channel device adapter

Figure 7-3 shows the detailed cabling between the Device Adapters and the 24-drive
Gigapacks. The ASICs seen there provide the FC-to-SAS bridging function from the external
SFP connectors to each of the ports on the SAS disk drives. The processor is the controlling
element in the system.

Figure 7-3 Detailed DA-disk back-end diagram

For the DS8700, the device adapters had already been upgraded with a processor on the adapter card twice as fast as that of the DS8100 and DS8300, providing much higher throughput on the device adapter. For the DS8800, additional enhancements to the DA bring a major performance improvement compared to the DS8700: for DA-limited workloads, the maximum IOPS throughput (small blocks) per DA has been increased by 40% to 80%, and DA sequential throughput in MB/s (large blocks) has increased by approximately 85% to 210% from DS8700 to DS8800. For instance, a single DA under ideal workload conditions can process a sequential large-block read throughput of up to 1600 MB/s. These improvements are of particular value when using Solid-State Drives (SSDs), but they also give the DS8800 system very high sustained sequential throughput, for instance in High-Performance Computing configurations.



Technically, the improvements (processor, architecture) are similar to those designed for the
Host Adapters, and are described in 7.1.3, “Eight-port and four-port host adapters”.

7.1.3 Eight-port and four-port host adapters


Before looking into the heart of the DS8000 series, we briefly review the host adapters and
their enhancements to address performance. Figure 7-4 shows the host adapters. These
adapters are designed to hold either eight or four Fibre Channel (FC) ports, which can be
configured to support either FCP or FICON.

Each port provides industry-leading throughput and I/O rates for FICON and FCP.

Figure 7-4 Host adapter with four Fibre Channel ports

With FC adapters that are configured for FICON, the DS8000 series provides the following
configuration capabilities:
򐂰 Either fabric or point-to-point topologies
򐂰 A maximum of 128 host adapter ports, depending on the DS8800 processor feature
򐂰 A maximum of 509 logins per Fibre Channel port
򐂰 A maximum of 8192 logins per storage unit
򐂰 A maximum of 1280 logical paths on each Fibre Channel port
򐂰 Access to all control-unit images over each FICON port
򐂰 A maximum of 512 logical paths per control unit image

FICON host channels limit the number of devices per channel to 16,384. To fully access
65,280 devices on a storage unit, it is necessary to connect a minimum of four FICON host
channels to the storage unit. This way, by using a switched configuration, you can expose 64
control-unit images (16,384 devices) to each host channel.
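As a quick check of this arithmetic (illustrative only):

# Simple arithmetic behind the FICON addressing statement above.
import math
devices_per_cu_image = 256
cu_images_per_channel = 64
devices_per_channel = devices_per_cu_image * cu_images_per_channel            # 16,384
total_devices = 65280
print(devices_per_channel, math.ceil(total_devices / devices_per_channel))    # 16384 4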

The front-end with the 8 Gbps ports scales up to 128 ports for a DS8800, using the 8-port
HBAs. This results in a theoretical aggregated host I/O bandwidth of 128 times 8 Gbps.



The following improvements have been implemented on the architecture of the Host Adapter,
leading to HA throughputs which are more than double compared to DS8700:
򐂰 The architecture is fully on 8 Gbps.
򐂰 x8 Gen2 PCIe interface; no PCI-X-to-PCIe bridge carrier is needed.
򐂰 The single-core 1 GHz PowerPC processor (750 GX) has been replaced by a dual-core
1.5 GHz (Freescale MPC8572).
򐂰 Adapter memory has increased fourfold.

The 8 Gbps adapter ports can negotiate to 8, 4, or 2 Gbps (1 Gbps not possible). For
attachments to 1-Gbps hosts, use a switch in between.

7.1.4 POWER6+: heart of the DS8800 dual-cluster design


The new DS8800 model incorporates POWER6+ processor technology. The DS8800 model
can be equipped with the 2-way processor feature or the 4-way processor feature for highest
performance requirements.

While the DS8100 and DS8300 used the RIO-G connection between the clusters as a
high-bandwidth interconnection to the device adapters, the DS8800 and DS8700 use
dedicated PCI Express connections to the I/O enclosures and the device adapters. This
increases the bandwidth to the storage subsystem back-end by a factor of up to 16 times to a
theoretical bandwidth of 64 GB/s.

High performance and high availability interconnect to the disk


subsystem
Figure 7-5 shows how the I/O enclosures connect to the processor complex.

Figure 7-5 PCI Express connections to I/O enclosures
All I/O enclosures are equally served from either processor complex.



Each I/O enclosure contains two DAs. Each DA, with its four ports, connects to four switches
to reach out to two sets of 16 drives or disk drive modules (DDMs) each. Note that each
switch interface card has two ports to connect to the next card with 24 DDMs when vertically
growing within a DS8000. As outlined before, this dual two-logical loop approach allows for
multiple concurrent I/O operations to individual DDMs or sets of DDMs and minimizes
arbitration through the DDM/switch port mini-loop communication.

7.1.5 Vertical growth and scalability


Figure 7-6 shows a simplified view of the basic DS8800 structure and how it accounts for
scalability.

Figure 7-6 DS8800 scale performance linearly: view without disk subsystems

Although Figure 7-6 does not display the back-end part, it can be derived from the number of
I/O enclosures, which suggests that the disk subsystem also doubles, as does everything
else, when switching from a DS8800 2-way system with four I/O enclosures to a DS8800 4-way system with eight I/O enclosures. Doubling the number of processors and I/O enclosures also accounts for doubling the potential throughput.

Again, note that a virtualization layer on top of this physical layout contributes to additional
performance potential.



7.2 Software performance: synergy items
There are a number of performance features in the DS8000 that work together with the
software on the host and are collectively referred to as synergy items. These items allow the
DS8000 to cooperate with the host systems in manners beneficial to the overall performance
of the systems.

7.2.1 End-to-end I/O priority: synergy with AIX and DB2 on System p
End-to-end I/O priority is a new addition, requested by IBM, to the SCSI T10 standard. This
feature allows trusted applications to override the priority given to each I/O by the operating
system. This is only applicable to raw volumes (no file system) and with the 64-bit kernel.
Currently, AIX supports this feature in conjunction with DB2. The priority is delivered to
the storage subsystem in the FCP Transport Header.

The priority of an AIX process can be 0 (no assigned priority) or any integer value from 1
(highest priority) to 15 (lowest priority). All I/O requests associated with a given process
inherit its priority value, but with end to end I/O priority, DB2 can change this value for critical
data transfers. At the DS8000, the host adapter will give preferential treatment to higher
priority I/O, improving performance for specific requests deemed important by the application,
such as requests that might be prerequisites for others, for example, DB2 logs.

7.2.2 Cooperative caching: Synergy with AIX and DB2 on System p


Another software-related performance item is cooperative caching, a feature which provides
a way for the host to send cache management hints to the storage facility. Currently, the host
can indicate that the information just accessed is unlikely to be accessed again soon. This
decreases the retention period of the cached data, allowing the subsystem to conserve its
cache for data that is more likely to be reaccessed, improving the cache hit ratio.

With the implementation of cooperative caching, the AIX operating system allows trusted
applications, such as DB2, to provide cache hints to the DS8000. This improves the
performance of the subsystem by keeping more of the repeatedly accessed data within the
cache. Cooperative caching is supported in System p AIX with the Multipath I/O (MPIO) Path
Control Module (PCM) that is provided with the Subsystem Device Driver (SDD). It is only
applicable to raw volumes (no file system) and with the 64-bit kernel.

7.2.3 Long busy wait host tolerance: Synergy with AIX on System p
Another new addition to the SCSI T10 standard is SCSI long busy wait, which provides a way
for the target system to specify that it is busy and how long the initiator should wait before
retrying an I/O.

This information, provided in the Fibre Channel Protocol (FCP) status response, prevents the
initiator from retrying too soon. This in turn reduces unnecessary requests and potential I/O
failures due to exceeding a set threshold for the number of retries. IBM System p AIX
supports SCSI long busy wait with MPIO, and it is also supported by the DS8000.

7.2.4 PowerHA Extended distance extensions: synergy with AIX on System p


The PowerHA SystemMirror Enterprise Edition (former HACMP/XD) provides server and
LPAR failover capability over extended distances. It can also take advantage of the Metro
Mirror or Global Mirror functions of the DS8000 as a data replication mechanism between the
primary and remote site. PowerHA System Mirror with Metro Mirror supports distances of up
to 300 km. The DS8000 requires no changes to be used in this fashion.



7.3 Performance considerations for disk drives
You can determine the number and type of ranks required based on the needed capacity and
on the workload characteristics in terms of access density, read to write ratio, and hit rates.

You can approach this task from the disk side and look at some basic disk figures. Current
SAS 15K RPM disks, for example, provide an average seek time of approximately 3.1 ms and
an average latency of 2 ms. For transferring only a small block, the transfer time can be
neglected. This is an average 5.1 ms per random disk I/O operation or 196 IOPS. A combined
number of eight disks (as is the case for a DS8000 array) will thus potentially sustain 1568
IOPS when spinning at 15 K RPM. Reduce the number by 12.5% when you assume a spare
drive in the eight pack.

Back on the host side, consider an example with 1000 IOPS from the host, a read-to-write
ratio of 3 to 1, and 50% read cache hits. This leads to the following IOPS numbers:
򐂰 750 read IOPS.
򐂰 375 read I/Os must be read from disk (based on the 50% read cache hit ratio).
򐂰 250 writes with RAID 5 results in 1,000 disk operations due to the RAID 5 write penalty
(read old data and parity, write new data and parity).
򐂰 This totals to 1375 disk I/Os.

With 15K RPM DDMs doing 1000 random IOPS from the server, we actually do 1375 I/O
operations on disk compared to a maximum of 1440 operations for 7+P configurations or
1260 operations for 6+P+S configurations. Thus, 1000 random I/Os from a server with a
standard read-to-write ratio and a standard cache hit ratio saturate the disk drives. We made
the assumption that server I/O is purely random. When there are sequential I/Os,
track-to-track seek times are much lower and higher I/O rates are possible. We also assumed
that reads have a hit ratio of only 50%. With higher hit ratios, higher workloads are possible.
This shows the importance of intelligent caching algorithms as used in the DS8000.
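The back-end calculation in this example can be reproduced with a few lines of arithmetic (illustrative only; the values are the ones assumed above):

# Reproduces the sizing arithmetic from the example above (illustrative only).
seek_ms, latency_ms = 3.1, 2.0
iops_per_drive = int(1000 / (seek_ms + latency_ms))      # 196 IOPS for a 15K RPM drive
iops_per_array = 8 * iops_per_drive                      # 1568 IOPS for eight drives

host_iops, read_ratio, read_hit = 1000, 0.75, 0.50
disk_reads = host_iops * read_ratio * (1 - read_hit)     # 375 reads that miss the cache
disk_writes = host_iops * (1 - read_ratio) * 4           # RAID 5 write penalty: 4 disk ops per write
print(iops_per_array, disk_reads + disk_writes)          # 1568 1375.0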

Important: When sizing a storage subsystem, you should consider the capacity and the
number of disk drives needed to satisfy the performance requirements.

For a single disk drive, various disk vendors provide the disk specifications on their websites.
Because the access times for the disks are the same for same RPM speeds, but they have
different capacities, the I/O density is different. 146 GB 15K RPM disk drives can be used for
access densities up to, and slightly over, 1 I/O per second per GB. For 450 GB drives, it is approximately 0.5 I/O per second per GB. While this discussion is theoretical in approach, it provides a
first estimate.

After the speed of the disk has been decided, the capacity can be calculated based on your
storage capacity needs and the effective capacity of the RAID configuration you will use.
Refer to Table 8-9 on page 173 for information about calculating these needs.

Solid State Drives


From a performance point of view, the best choice for your DS8800 disks would be the new
Solid State Drives (SSDs). SSDs have no moving parts (no spinning platters and no actuator
arm). The performance advantages are fast seek and average access times. They are targeted at applications with heavy IOPS, poor cache hit rates, and random-access workloads, which require fast response times. Database applications with their random and
intensive I/O workloads are prime candidates for deployment on SSDs.



For detailed recommendations about SSD usage and performance, refer to DS8000:
Introducing Solid State Drives, REDP-4522.

Differences between SATA/Nearline-SAS and SAS/FC disk drives


SAS or Fibre Channel disk drives provide higher performance, reliability, availability, and
serviceability when compared to Nearline-SAS or SATA disk drives. SAS or Fibre Channel
disk drives rotate at 15,000 or 10,000 RPM, but SATA or Nearline-SAS drives rotate only at
7200 RPM. If an application requires high performance data throughput and almost
continuous, intensive I/O operations, SAS/FC disk drives are the suggested option.

Important: SATA drives are not the appropriate option for every storage requirement. For
many enterprise applications, and certainly mission-critical and production applications,
SAS (or Fibre Channel) disks remain the best choice.

SATA disk drives are a cost-efficient storage option for lower intensity storage workloads and
are available with the DS8700.

7.4 DS8000 superior caching algorithms


Most, if not all, high-end disk systems have an internal cache integrated into the system
design, and some amount of system cache is required for operation. Over time, cache sizes
have dramatically increased, but the ratio of cache size to system disk capacity has remained
nearly the same. The DS8800 can be equipped with up to 384 GB of cache.

7.4.1 Sequential Adaptive Replacement Cache


The DS8000 series uses the Sequential Adaptive Replacement Cache (SARC) algorithm,
which was developed by IBM Storage Development in partnership with IBM Research. It is a
self-tuning, self-optimizing solution for a wide range of workloads with a varying mix of
sequential and random I/O streams. SARC is inspired by the Adaptive Replacement Cache
(ARC) algorithm and inherits many features of it. For a detailed description about ARC, see
“Outperforming LRU with an adaptive replacement cache algorithm” by N. Megiddo et al. in
IEEE Computer, volume 37, number 4, pages 58–65, 2004. For a detailed description about
SARC, see “SARC: Sequential Prefetching in Adaptive Replacement Cache” by Binny Gill,
et al, in the Proceedings of the USENIX 2005 Annual Technical Conference, pages 293–308.

SARC basically attempts to determine four things:


򐂰 When data is copied into the cache.
򐂰 Which data is copied into the cache.
򐂰 Which data is evicted when the cache becomes full.
򐂰 How the algorithm dynamically adapts to different workloads.

The DS8000 series cache is organized in 4 KB pages called cache pages or slots. This unit of
allocation (which is smaller than the values used in other storage systems) ensures that small
I/Os do not waste cache memory.

The decision to copy some amount of data into the DS8000 cache can be triggered from two
policies: demand paging and prefetching.
򐂰 Demand paging means that eight disk blocks (a 4K cache page) are brought in only on a
cache miss. Demand paging is always active for all volumes and ensures that I/O patterns
with some locality discover at least some recently used data in the cache.

򐂰 Prefetching means that data is copied into the cache speculatively even before it is
requested. To prefetch, a prediction of likely future data accesses is needed. Because
effective, sophisticated prediction schemes need an extensive history of page accesses
(which is not feasible in real systems), SARC uses prefetching for sequential workloads.
Sequential access patterns naturally arise in video-on-demand, database scans, copy,
backup, and recovery. The goal of sequential prefetching is to detect sequential access
and effectively preload the cache with data so as to minimize cache misses. Today
prefetching is ubiquitously applied in web servers and clients, databases, file servers,
on-disk caches, and multimedia servers.

For prefetching, the cache management uses tracks. A track is a set of 128 disk blocks (16
cache pages). To detect a sequential access pattern, counters are maintained with every
track to record whether a track has been accessed together with its predecessor. Sequential
prefetching becomes active only when these counters suggest a sequential access pattern. In
this manner, the DS8000 monitors application read-I/O patterns and dynamically determines
whether it is optimal to stage into cache:
򐂰 Just the page requested
򐂰 That page requested plus the remaining data on the disk track
򐂰 An entire disk track (or a set of disk tracks), which has not yet been requested

The decision of when and what to prefetch is made in accordance with the Adaptive
Multi-stream Prefetching (AMP) algorithm, which dynamically adapts the amount and timing
of prefetches optimally on a per-application basis (rather than a system-wide basis). AMP is
described further in 7.4.2, “Adaptive Multi-stream Prefetching” on page 132.

To decide which pages are evicted when the cache is full, sequential and random
(non-sequential) data is separated into different lists. Figure 7-7 illustrates the SARC
algorithm for random and sequential data.

Figure 7-7 Sequential Adaptive Replacement Cache (the RANDOM and SEQ lists, each running from an MRU head to an LRU bottom, with a desired size boundary for the SEQ list)

A page that has been brought into the cache by simple demand paging is added to the Most Recently Used (MRU) head of the RANDOM list. Without further I/O access, it migrates down to the Least Recently Used (LRU) bottom of that list. A page that has been brought into the cache by a sequential access or by sequential prefetching is added to the MRU head of the SEQ list and then moves down that list in the same way. Additional rules control the migration of pages between the lists so as to not keep the same pages in memory twice.

To follow workload changes, the algorithm trades cache space between the RANDOM and
SEQ lists dynamically and adaptively. This makes SARC scan-resistant, so that one-time
sequential requests do not pollute the whole cache. SARC maintains a desired size
parameter for the sequential list. The desired size is continually adapted in response to the
workload. Specifically, if the bottom portion of the SEQ list is found to be more valuable than
the bottom portion of the RANDOM list, then the desired size is increased; otherwise, the
desired size is decreased. The constant adaptation strives to make optimal use of limited
cache space and delivers greater throughput and faster response times for a given cache
size.
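
The adaptation rule can be pictured in a few lines of Python. This is a deliberately simplified sketch of the idea described above, not the actual DS8000 cache code; the hit counters, step size, and bounds are assumptions made for illustration:

def adapt_desired_seq_size(desired, seq_bottom_hits, random_bottom_hits,
                           step, cache_size):
    """Conceptual sketch only -- not the DS8000 implementation.

    Nudge the target (desired) size of the SEQ list depending on which
    list's bottom (LRU end) is currently producing more hits, that is,
    which kind of data is currently more valuable to keep in cache.
    """
    if seq_bottom_hits > random_bottom_hits:
        desired = min(desired + step, cache_size)   # grow the SEQ list
    else:
        desired = max(desired - step, 0)            # shrink it, favor RANDOM
    return desired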

Additionally, the algorithm dynamically modifies the sizes of the two lists and the rate at which
the sizes are adapted. In a steady state, pages are evicted from the cache at the rate of
cache misses. A larger (respectively, a smaller) rate of misses effects a faster (respectively, a
slower) rate of adaptation.

Other implementation details take into account the relationship of read and write (NVS)
cache, efficient destaging, and the cooperation with Copy Services. In this manner, the
DS8000 cache management goes far beyond the usual variants of the Least Recently
Used/Least Frequently Used (LRU/LFU) approaches.

7.4.2 Adaptive Multi-stream Prefetching


As described previously, SARC dynamically divides the cache between the RANDOM and
SEQ lists, where the SEQ list maintains pages brought into the cache by sequential access or
sequential prefetching.

In the DS8800 and DS8700, the SEQ list is managed by Adaptive Multi-stream Prefetching (AMP), an algorithm developed by IBM Research. AMP is an autonomic, workload-responsive, self-optimizing
prefetching technology that adapts both the amount of prefetch and the timing of prefetch on
a per-application basis to maximize the performance of the system. The AMP algorithm
solves two problems that plague most other prefetching algorithms:
򐂰 Prefetch wastage occurs when prefetched data is evicted from the cache before it can be
used.
򐂰 Cache pollution occurs when less useful data is prefetched instead of more useful data.

By wisely choosing the prefetching parameters, AMP provides optimal sequential read
performance and maximizes the aggregate sequential read throughput of the system. The
amount prefetched for each stream is dynamically adapted according to the application's
needs and the space available in the SEQ list. The timing of the prefetches is also
continuously adapted for each stream to avoid misses and at the same time avoid any cache
pollution.

SARC and AMP play complementary roles. While SARC is carefully dividing the cache
between the RANDOM and the SEQ lists so as to maximize the overall hit ratio, AMP is
managing the contents of the SEQ list to maximize the throughput obtained for the sequential
workloads. While SARC impacts cases that involve both random and sequential workloads,
AMP helps any workload that has a sequential read component, including pure sequential
read workloads.

AMP dramatically improves performance for common sequential and batch processing
workloads. It also provides excellent performance synergy with DB2 by preventing table
scans from being I/O bound and improves performance of index scans and DB2 utilities like
Copy and Recover. Furthermore, AMP reduces the potential for array hot spots, which result
from extreme sequential workload demands.

For a detailed description about AMP and the theoretical analysis for its optimal usage, see
“AMP: Adaptive Multi-stream Prefetching in a Shared Cache” by Binny Gill, et al. in USENIX
File and Storage Technologies (FAST), February 13 - 16, 2007, San Jose, CA. For a more
detailed description, see “Optimal Multistream Sequential Prefetching in a Shared Cache” by
Binny Gill, et al, in the ACM Journal of Transactions on Storage, October 2007.

7.4.3 Intelligent Write Caching


Another additional cache algorithm, referred to as Intelligent Write Caching (IWC), has been
implemented in the DS8000 series. IWC improves performance through better write cache
management and a better destaging order of writes. This new algorithm is a combination of
CLOCK, a predominantly read cache algorithm, and CSCAN, an efficient write cache
algorithm. Out of this combination, IBM produced a powerful and widely applicable write
cache algorithm.

The CLOCK algorithm exploits temporal ordering. It keeps a circular list of pages in memory, with the “hand” pointing to the oldest page in the list. When a page needs to be inserted into the cache, the R (recency) bit at the hand's location is inspected. If R is zero, the new page is put in place of the page the hand points to and its R bit is set to 1; otherwise, the R bit is cleared to zero, the clock hand moves one step clockwise, and the process is repeated until a page is replaced.

The CSCAN algorithm exploits spatial ordering. CSCAN is the circular variation of the SCAN algorithm, which tries to minimize disk head movement when servicing read and write requests by maintaining a sorted list of pending requests together with their positions on the drive. Requests are processed in the current direction of the disk head until it reaches the edge of the disk; at that point, the direction changes. In the CSCAN algorithm, requests are always served in the same direction: once the head has arrived at the outer edge of the disk, it returns to the beginning of the disk and services the new requests in this one direction only. This results in more even performance for all head positions.

The basic idea of IWC is to maintain a sorted list of write groups, as in the CSCAN algorithm. The smallest and the highest write groups are joined, forming a circular queue. The additional new idea is to maintain a recency bit for each write group, as in the CLOCK algorithm. A write group is always inserted in its correct sorted position, with its recency bit initially set to zero. When a write hit occurs, the recency bit is set to one. During destage, a destage pointer scans the circular list looking for destage victims, and only write groups whose recency bit is zero are destaged. Write groups with a recency bit of one are skipped; their recency bit is reset to zero, which gives an “extra life” to those write groups that have been hit since the last time the destage pointer visited them. Figure 7-8 gives an idea of how this mechanism works.
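
The destage scan just described can be sketched as follows. This is a conceptual illustration of combining CSCAN ordering with CLOCK-style recency bits, not the actual DS8000 microcode; the write-group positions are hypothetical:

class WriteGroup:
    """A modified (dirty) group of data, kept sorted by its location."""
    def __init__(self, position):
        self.position = position   # sort key: location on the rank
        self.recency = 0           # set to 1 again on a write hit

def next_destage_victim(groups, pointer):
    """Scan the circular, position-sorted list for the next write group
    whose recency bit is zero. Skipped groups get their bit cleared, so a
    group that was hit since the last visit survives exactly one more pass.
    Returns the chosen group and the new pointer position.
    """
    n = len(groups)
    while True:
        group = groups[pointer]
        if group.recency == 0:
            return group, (pointer + 1) % n   # destage this group next
        group.recency = 0                      # give it an "extra life"
        pointer = (pointer + 1) % n

groups = sorted((WriteGroup(p) for p in (40, 10, 30, 20)),
                key=lambda g: g.position)
groups[2].recency = 1                          # this group was hit recently
victim, ptr = next_destage_victim(groups, 0)
print(victim.position)                         # 10: lowest position with recency 0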

In the DS8000 implementation, an IWC list is maintained for each rank. The dynamically
adapted size of each IWC list is based on workload intensity on each rank. The rate of
destage is proportional to the portion of NVS occupied by an IWC list (the NVS is shared
across all ranks in a cluster). Furthermore, destages are smoothed out so that write bursts
are not translated into destage bursts.

In summary, IWC has better or comparable peak throughput to the best of CSCAN and
CLOCK across a wide gamut of write cache sizes and workload configurations. In addition,
even at lower throughputs, IWC has lower average response times than CSCAN and CLOCK.

Figure 7-8 Intelligent Write Caching

7.5 Performance considerations for logical configuration


To determine the optimal DS8000 layout, the I/O performance requirements of the different
servers and applications should be defined up front, because they will play a large part in
dictating both the physical and logical configuration of the disk subsystem. Prior to designing
the disk subsystem, the disk space requirements of the application should be well
understood.

7.5.1 Workload characteristics


The answers to questions such as “How many host connections do I need?” and “How much
cache do I need?” always depend on the workload requirements, such as how many I/Os per
second per server, I/Os per second per gigabyte of storage, and so on.

The information you need to conduct detailed modeling includes:


򐂰 Number of I/Os per second
򐂰 I/O density
򐂰 Megabytes per second
򐂰 Relative percentage of reads and writes
򐂰 Random or sequential access characteristics
򐂰 Cache hit ratio

7.5.2 Data placement in the DS8000


Once you have determined the disk subsystem throughput, the disk space, and the number of
disks required by your different hosts and applications, you have to make a decision regarding
data placement.

As is common for data placement, and to optimize DS8000 resource utilization, follow these
guidelines:
򐂰 Equally spread the LUNs and volumes across the DS8000 servers. Spreading the
volumes equally on rank group 0 and 1 will balance the load across the DS8000 units.
򐂰 Use as many disks as possible. Avoid idle disks, even if all storage capacity will not be
initially utilized.
򐂰 Distribute capacity and workload across DA pairs.
򐂰 Use multirank Extent Pools.
򐂰 Stripe your logical volume across several ranks (the default for large Extent Pools).
򐂰 Consider placing specific database objects (such as logs) on different ranks.
򐂰 For an application, use volumes from both even and odd numbered Extent Pools (even-numbered pools are managed by server 0, and odd-numbered pools are managed by server 1).
򐂰 For large, performance-sensitive applications, consider using two dedicated Extent Pools
(one managed by server 0, the other managed by server 1).
򐂰 Consider using different Extent Pools for 6+P+S arrays and 7+P arrays. If you use the
default Storage Pool Striping, this will ensure that your ranks are equally filled.

Important: Balance your ranks and Extent Pools between the two DS8000 servers. Half of
the ranks should be managed by each server (see Figure 7-9).

Figure 7-9 Ranks in a multirank Extent Pool configuration balanced across DS8000 servers (ExtPool 0 on server 0 and ExtPool 1 on server 1, each using DA pairs DA0 through DA3)

Note: Database logging usually consists of sequences of synchronous sequential writes.


Log archiving functions (copying an active log to an archived space) also tend to consist of
simple sequential read and write sequences. Consider isolating log files on separate
arrays.

All disks in the storage disk subsystem should have roughly equivalent utilization. Any disk
that is used more than the other disks will become a bottleneck to performance. A practical
method is to use predefined Storage Pool Striping. Alternatively, make extensive use of
volume-level striping across disk drives.

7.5.3 Data placement
There are several options for creating logical volumes. You can select an Extent Pool that is
owned by one server. There could be just one Extent Pool per server or you could have
several. The ranks of Extent Pools can come from arrays on different device adapter pairs.

For optimal performance, your data should be spread across as many hardware resources as
possible. RAID 5, RAID 6, or RAID 10 already spreads the data across the drives of an array,
but this is not always enough. There are two approaches to spreading your data across even
more disk drives:
򐂰 Storage Pool Striping
򐂰 Striping at the host level

Storage Pool Striping


Striping is a technique for spreading the data across several disk drives in such a way that the
I/O capacity of the disk drives can be used in parallel to access data on the logical volume.

The easiest way to stripe is to use Extent Pools with more than one rank and use Storage
Pool Striping when allocating a new volume (see Figure 7-10). This striping method is
independent of the operating system.

Figure 7-10 Storage Pool Striping (an 8 GB LUN allocated in 1 GB extents rotated across the four ranks of an Extent Pool)
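
A minimal sketch of the effect of Storage Pool Striping follows. The round-robin placement is a simplification of the real extent allocation, and the rank names are hypothetical:

def allocate_striped_volume(extents, ranks):
    """Assign a volume's extents to ranks in round-robin order, which is
    the effect of Storage Pool Striping (rotate extents) in a multirank
    Extent Pool."""
    return [ranks[i % len(ranks)] for i in range(extents)]

# An 8-extent (8 GB) LUN in a four-rank Extent Pool, as in Figure 7-10
print(allocate_striped_volume(8, ["R1", "R2", "R3", "R4"]))
# ['R1', 'R2', 'R3', 'R4', 'R1', 'R2', 'R3', 'R4']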

In 7.3, “Performance considerations for disk drives” on page 129, we discuss how many
random I/Os can be performed for a standard workload on a rank. If a volume resides on just
one rank, this rank’s I/O capability also applies to the volume. However, if this volume is
striped across several ranks, the I/O rate to this volume can be much higher.

The total number of I/Os that can be performed on a given set of ranks does not change with
Storage Pool Striping.

On the other hand, if you stripe all your data across all ranks and you lose just one rank, for
example, because you lose two drives at the same time in a RAID 5 array, all your data is
gone. Remember that with RAID 6 you can increase reliability and survive two drive failures,
but the better choice is to mirror your data to a remote DS8000.

Tip: Use Storage Pool Striping and Extent Pools with a minimum of four to eight ranks of
the same characteristics (RAID type and disk RPM) to avoid hot spots on the disk drives.

Figure 7-11 shows a good configuration. The ranks are attached to DS8000 server 0 and
server 1 in a half and half configuration, ranks on different device adapters are used in a
multi-rank Extent Pool, and there are separate Extent Pools for 6+P+S and 7+P ranks.

Figure 7-11 Balanced Extent Pool configuration (Extent Pools P0 and P2 attached to server 0 and P1 and P3 to server 1; 6+P+S and 7+P ranks kept in separate pools and spread across DA pairs DA0 through DA3)

There is no reorg function for Storage Pool Striping. If you have to expand an Extent Pool, the
extents are not rearranged.

Tip: If you have to expand a nearly full Extent Pool, it is better to add several ranks at the
same time instead of just one rank, to benefit from striping across the newly added ranks.

Striping at the host level


Many operating systems have the option to stripe data across several (logical) volumes. An
example is AIX’s Logical Volume Manager (LVM).

Other examples for applications that stripe data across the volumes include the SAN Volume
Controller (SVC) and IBM System Storage N series Gateways.

Do not expect that double striping (at the storage subsystem level and at the host level) will
enhance performance any further.

LVM striping is a technique for spreading the data in a logical volume across several disk
drives in such a way that the I/O capacity of the disk drives can be used in parallel to access
data on the logical volume. The primary objective of striping is high performance reading and
writing of large sequential files, but there are also benefits for random access.

If you use a logical volume manager (such as LVM on AIX) on your host, you can create a
host logical volume from several DS8000 logical volumes (LUNs). You can select LUNs from
different DS8000 servers and device adapter pairs, as shown in Figure 7-12. By striping your
host logical volume across the LUNs, you will get the best performance for this LVM volume.

Figure 7-12 Optimal placement of data (a host LVM volume striped across LUNs from Extent Pools FB-0a through FB-0d on server 0 and FB-1a through FB-1d on server 1, spread over two DA pairs)

Figure 7-12 shows an optimal distribution of eight logical volumes within a DS8000. Of
course, you could have more Extent Pools and ranks, but when you want to distribute your
data for optimal performance, you should make sure that you spread it across the two
servers, across different device adapter pairs, and across several ranks.

To be able to create very large logical volumes or to be able to use Extent Pool striping, you
must consider having Extent Pools with more than one rank.

If you use multirank Extent Pools and you do not use Storage Pool Striping, you have to be
careful where to put your data, or you can easily unbalance your system (see the right side of
Figure 7-13).

Figure 7-13 Spreading data across ranks (left: balanced implementation, an LVM volume striped across four 2 GB LUNs taken from single-rank Extent Pools; right: non-balanced implementation, an 8 GB LUN allocated in a multirank Extent Pool)

Combining Extent Pools made up of one rank and then LVM striping over LUNs created on
each Extent Pool will offer a balanced method to evenly spread data across the DS8000
without using Extent Pool striping, as shown on the left side of Figure 7-13.

The stripe size


Each striped logical volume that is created by the host’s logical volume manager has a stripe
size that specifies the fixed amount of data stored on each DS8000 logical volume (LUN) at
one time.

The stripe size has to be large enough to keep sequential data relatively close together, but not so large that the data is kept on a single array.

We recommend that you define stripe sizes using your host’s logical volume manager in the
range of 4 MB to 64 MB. You should choose a stripe size close to 4 MB if you have a large
number of applications sharing the arrays and a larger size when you have few servers or
applications sharing the arrays.
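
The effect of the stripe size can be pictured with a small sketch that maps a logical offset to the LUN holding it. This is conceptual only; the exact mapping depends on the logical volume manager in use:

def lun_for_offset(offset_mb, stripe_size_mb, num_luns):
    """Return the index of the LUN (0 .. num_luns - 1) that holds the data
    at a given logical offset when the host LVM stripes the volume across
    num_luns LUNs with the given stripe size."""
    return (offset_mb // stripe_size_mb) % num_luns

# With a 16 MB stripe size across four LUNs, a 1 GB sequential read
# touches all four LUNs in turn
touched = {lun_for_offset(mb, 16, 4) for mb in range(0, 1024, 16)}
print(sorted(touched))   # [0, 1, 2, 3]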

Combining Extent Pool striping and logical volume manager striping


Striping by a logical volume manager is done on a stripe size in the MB range (about 64 MB).
Extent Pool striping is done at a 1 GiB stripe size. Both methods could be combined. LVM
striping can stripe across Extent Pools and use volumes from Extent Pools that are attached
to server 0 and server 1 of the DS8000 series. If you already use LVM Physical Partition (PP)
striping, you might want to continue to use that striping. Double striping will probably not
increase performance.

7.6 Performance and sizing considerations for open systems
In these sections, we discuss topics that are particularly relevant to open systems.

7.6.1 Determining the number of paths to a LUN


When configuring an IBM System Storage DS8000 for an open systems host, a decision must
be made regarding the number of paths to a particular LUN, because the multipathing
software allows (and manages) multiple paths to a LUN. There are two opposing factors to
consider when deciding on the number of paths to a LUN:
򐂰 Increasing the number of paths increases availability of the data, protecting against
outages.
򐂰 Increasing the number of paths increases the amount of CPU used because the
multipathing software must choose among all available paths each time an I/O is issued.

A good compromise is between two and four paths per LUN.

7.6.2 Dynamic I/O load-balancing: Subsystem Device Driver


The Subsystem Device Driver (SDD) is an IBM-provided pseudo-device driver that is designed to support the multipath configuration environments in the DS8000. It resides in the host system together with the native disk device driver.

The dynamic I/O load-balancing option (default) of SDD is recommended to ensure better
performance because:
򐂰 SDD automatically adjusts data routing for optimum performance. Multipath load
balancing of data flow prevents a single path from becoming overloaded, causing
input/output congestion that occurs when many I/O operations are directed to common
devices along the same input/output path.
򐂰 The path to use for an I/O operation is chosen by estimating the load on each adapter to
which each path is attached. The load is a function of the number of I/O operations
currently in process. If multiple paths have the same load, a path is chosen at random from
those paths.

For more information about the SDD, see the IBM Redbooks publication DS8000: Host
Attachment and Interoperability, SG24-8887.
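
The load-balancing policy described above can be illustrated with a few lines of Python. This is a conceptual sketch, not SDD source code; the path names and in-flight counters are hypothetical:

import random

def choose_path(paths, inflight):
    """Pick the path whose adapter has the fewest I/Os in process and
    break ties at random, as in the load-balancing policy described above.

    paths    -- list of available path names (hypothetical identifiers)
    inflight -- dict mapping each path to its current number of I/Os
    """
    lowest = min(inflight[p] for p in paths)
    candidates = [p for p in paths if inflight[p] == lowest]
    return random.choice(candidates)

print(choose_path(["path0", "path1"], {"path0": 4, "path1": 2}))   # path1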

7.6.3 Automatic port queues


When there is I/O between a server and a DS8800 Fibre Channel port, both the server host adapter and the DS8800 host bus adapter support queuing I/Os. The length of this queue is called the queue depth. Because several servers can, and usually do, communicate with only a few DS8800 ports, the queue depth of a storage host bus adapter should be larger than the one on the server side. This is also true for the DS8800, which supports 2048 FC commands queued on a port. However, sometimes the port queue in the DS8800 host bus adapter can be flooded.

When the number of commands sent to the DS8000 port has exceeded the maximum
number of commands that the port can queue, the port has to discard these additional
commands.

This operation is a normal error recovery operation in the Fibre Channel protocol to prevent further damage. The normal recovery is a 30-second timeout on the server; after that time, the command is resent. The server has a command retry count before it will fail the command. Command Timeout entries will be seen in the server logs.

Automatic Port Queues is a mechanism the DS8800 uses to self-adjust the queue based on
the workload. This allows higher port queue oversubscription while maintaining a fair share
for the servers and the accessed LUNs.

A port whose queue is filling up goes into SCSI Queue Full mode, where it accepts no additional commands in order to slow down the I/Os.

By avoiding error recovery and the 30-second blocking SCSI Queue Full recovery interval, the overall performance is better with Automatic Port Queues.

7.6.4 Determining where to attach the host


When determining where to attach multiple paths from a single host system to I/O ports on a
host adapter to the storage facility image, the following considerations apply:
򐂰 Choose the attached I/O ports on different host adapters.
򐂰 Spread the attached I/O ports evenly between the four I/O enclosure groups.

The DS8000 host adapters have no server affinity, but the device adapters and the rank have
server affinity. Figure 7-14 shows a host that is connected through two FC adapters to two
DS8000 host adapters located in different I/O enclosures.

Figure 7-14 Dual-port host attachment (host adapters have no DS8000 server affinity; LUN1 resides in Extent Pool 1, which is controlled by server 0, while the device adapters retain their server affinity)

The host has access to LUN1, which is created in the Extent Pool 1 controlled by the DS8000
server 0. The host system sends read commands to the storage server.

When a read command is executed, one or more logical blocks are transferred from the
selected logical drive through a host adapter over an I/O interface to a host. In this case, the
logical device is managed by server 0, and the data is handled by server 0. The read data to
be transferred to the host must first be present in server 0's cache. When the data is in the
cache, it is then transferred through the host adapters to the host.

7.7 Performance and sizing considerations for System z


Here we discuss several System z specific topics regarding the performance potential of the
DS8000 series. We also discuss the considerations you must have when you configure and
size a DS8000 that replaces older storage hardware in System z environments.

7.7.1 Host connections to System z servers


Figure 7-15 partially shows a configuration where a DS8000 connects to FICON hosts. Note
that this figure only indicates the connectivity to the Fibre Channel switched disk subsystem
through its I/O enclosure, symbolized by the rectangles.

Each I/O enclosure can hold up to four HAs. The example in Figure 7-15 shows only eight
FICON channels connected to the first two I/O enclosures. Not shown is a second FICON
director, which connects in the same fashion to the remaining two I/O enclosures to provide a
total of 16 FICON channels in this particular example. The DS8800 disk storage system
provides up to 128 FICON channel ports. Again, note the efficient FICON implementation in
the DS8000 FICON ports.

Figure 7-15 DS8800 front-end connectivity example (partial view): a z/OS Parallel Sysplex attached through a FICON director to host adapters in the I/O enclosures of the two POWER6 2-way SMP servers

Note the following performance factors:
򐂰 Do not mix ports connected to a FICON channel with a port connected to a PPRC link in
the same Host Adapter.
򐂰 For very large sequential loads (and with large block sizes), only use two ports per Host
Adapter.

7.7.2 Parallel Access Volume


Parallel Access Volume (PAV) is one of the features that was originally introduced with the
IBM TotalStorage® Enterprise Storage Server (ESS) and that the DS8000 series has
inherited. PAV is an optional licensed function of the DS8000 for the z/OS and z/VM operating
systems, helping the System z servers that are running applications to concurrently share the
same logical volumes.

The ability to do multiple I/O requests to the same volume nearly eliminates I/O supervisor
queue delay (IOSQ) time, one of the major components in z/OS response time. Traditionally,
access to highly active volumes has involved manual tuning, splitting data across multiple
volumes, and more. With PAV and the Workload Manager (WLM), you can almost forget
about manual performance tuning. WLM manages PAVs across all the members of a Sysplex
too. In this way, the DS8000, in conjunction with z/OS, has the ability to meet the performance requirements on its own.

Traditional z/OS behavior without PAV


Traditional storage disk subsystems have allowed for only one channel program to be active
to a volume at a time to ensure that data being accessed by one channel program cannot be
altered by the activities of some other channel program.

Figure 7-16 illustrates the traditional z/OS behavior without PAV, where subsequent
simultaneous I/Os to volume 100 are queued while volume 100 is still busy with a preceding
I/O.

Figure 7-16 Traditional z/OS behavior: one I/O to one volume at a time (subsequent I/Os to volume 100 encounter UCB busy and device busy conditions)

From a performance standpoint, it did not make sense to send more than one I/O at a time to
the storage disk subsystem, because the hardware could process only one I/O at a time.
Knowing this, the z/OS systems did not try to issue another I/O to a volume, which, in z/OS, is
represented by a Unit Control Block (UCB), while an I/O was already active for that volume,
as indicated by a UCB busy flag; see Figure 7-16 on page 143.

Not only were the z/OS systems limited to processing only one I/O at a time, but also the
storage subsystems accepted only one I/O at a time from different system images to a shared
volume, for the same reasons previously mentioned; see Figure 7-16 on page 143.

Figure 7-17 z/OS behavior with PAV: concurrent I/Os to volume 100 using different UCBs (base UCB 100 plus aliases 1FF and 1FE), so no I/O is queued

Parallel I/O capability: z/OS behavior with PAV


The DS8000 has the ability to perform more than one I/O to a CKD volume. Using the alias
address in addition to the conventional base address, a z/OS host can use several UCBs for
the same logical volume instead of one UCB per logical volume. For example, base address
100 might have alias addresses 1FF and 1FE, which allows for three parallel I/O operations to
the same volume; see Figure 7-17.

This feature that allows parallel I/Os to a volume from one host is called Parallel Access
Volume (PAV).

There are two concepts that are basic in PAV functionality:


򐂰 Base address
The base device address is the conventional unit address of a logical volume. There is
only one base address associated with any volume.
򐂰 Alias address
An alias device address is mapped to a base address. I/O operations to an alias run
against the associated base address storage space. There is no physical space
associated with an alias address. You can define more than one alias per base.

Alias addresses have to be defined to the DS8000 and to the I/O definition file (IODF). This
association is predefined, and you can add new aliases nondisruptively. Still, the association
between base and alias is not fixed; the alias address can be assigned to a different base
address by the z/OS Workload Manager.

For guidelines about PAV definition and support, see DS8000: Host Attachment and Interoperability, SG24-8887.

PAV is an optional licensed function on the DS8000 series. PAV also requires the purchase of
the FICON Attachment feature.

7.7.3 z/OS Workload Manager: Dynamic PAV tuning


It is not always easy to predict which volumes should have an alias address assigned, and
how many. Your software can automatically manage the aliases according to your goals. z/OS
can exploit automatic PAV tuning if you are using the z/OS Workload Manager (WLM) in Goal
mode. The WLM can dynamically tune the assignment of alias addresses. The Workload
Manager monitors the device performance and is able to dynamically reassign alias
addresses from one base to another if predefined goals for a workload are not met.

z/OS recognizes the aliases that are initially assigned to a base during the Nucleus
Initialization Program (NIP) phase. If dynamic PAVs are enabled, the WLM can reassign an
alias to another base by instructing the IOS to do so when necessary; see Figure 7-18.

Figure 7-18 WLM assignment of alias addresses (WLM can dynamically reassign an alias to another base by instructing the IOS)

z/OS Workload Manager in Goal mode tracks system workloads and checks whether
workloads are meeting their goals as established by the installation.

WLM also keeps track of the devices utilized by the different workloads, accumulates this
information over time, and broadcasts it to the other systems in the same sysplex. If WLM
determines that any workload is not meeting its goal due to IOS queue (IOSQ) time, WLM will
attempt to find an alias device that can be reallocated to help this workload achieve its goal;
see Figure 7-19.

Figure 7-19 Dynamic PAVs in a sysplex (the WLMs exchange performance information and decide which system can donate an alias when goals are not met because of IOSQ time)

7.7.4 HyperPAV
Dynamic PAV requires the WLM to monitor the workload and goals. It takes some time until
the WLM detects an I/O bottleneck. Then the WLM must coordinate the reassignment of alias
addresses within the sysplex and the DS8000. All of this takes time, and if the workload is
fluctuating or has a burst character, the job that caused the overload of one volume could
have ended before the WLM had reacted. In these cases, the IOSQ time was not eliminated
completely.

With HyperPAV, an on demand proactive assignment of aliases is possible, as shown in Figure 7-20.

Figure 7-20 HyperPAV: Basic operational characteristics (aliases are kept in a pool per Logical Subsystem and bound to base UCBs only as needed for each I/O)

With HyperPAV, the WLM is no longer involved in managing alias addresses. For each I/O, an
alias address can be picked from a pool of alias addresses within the same LCU.

This capability also allows different HyperPAV hosts to use one alias to access different
bases, which reduces the number of alias addresses required to support a set of bases in an
IBM System z environment, with no latency in assigning an alias to a base. This functionality
is also designed to enable applications to achieve better performance than is possible with
the original PAV feature alone, while also using the same or fewer operating system
resources.

Benefits of HyperPAV
HyperPAV has been designed to:
򐂰 Provide an even more efficient Parallel Access Volumes (PAV) function
򐂰 Help clients who implement larger volumes to scale I/O rates without the need for
additional PAV alias definitions
򐂰 Exploit the FICON architecture to reduce impact, improve addressing efficiencies, and
provide storage capacity and performance improvements:
– More dynamic assignment of PAV aliases improves efficiency
– Number of PAV aliases needed might be reduced, taking fewer from the 64 K device
limitation and leaving more storage for capacity use
򐂰 Enable a more dynamic response to changing workloads
򐂰 Simplify the management of aliases
򐂰 Make it easier for users to make a decision to migrate to larger volume sizes

Optional licensed function


HyperPAV is an optional licensed function of the DS8000 series. It is required in addition to
the normal PAV license, which is capacity dependent. The HyperPAV license is independent
of the capacity.

HyperPAV alias consideration on EAV


HyperPAV provides a far more agile alias management algorithm, as aliases are dynamically
bound to a base for the duration of the I/O for the z/OS image that issued the I/O. When I/O
completes, the alias is returned to the pool in the LCU. It then becomes available to
subsequent I/Os.

Our rule of thumb is that the number of aliases required can be approximated by the peak of
the following multiplication: I/O rate multiplied by the average response time. For example, if
the peak of the above calculation happened when the I/O rate is 2000 I/O per second and the
average response time is 4 ms (which is 0.004 sec), then the result of the above calculation will be:

2000 IO/sec x 0.004 sec/IO = 8

This means that the average number of I/O operations executing at one time for that LCU
during the peak period is eight. Therefore, eight aliases should be able to handle the peak I/O
rate for that LCU. However, because this calculation is based on the average during the
RMF™ period, you should multiply the result by two, to accommodate higher peaks within
that RMF interval. So in this case, the recommended number of aliases would be:

2 x 8 = 16
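
The rule of thumb can be captured in a small helper function. It is illustrative only; the peak I/O rate and average response time are taken from RMF reports:

import math

def hyperpav_aliases(peak_io_rate, avg_response_time_sec, headroom=2):
    """Apply the rule of thumb above: the average number of concurrent
    I/Os during the peak RMF interval, doubled to absorb peaks within the
    interval, gives a starting point for the number of aliases per LCU."""
    concurrent = peak_io_rate * avg_response_time_sec
    return int(math.ceil(concurrent)) * headroom

print(hyperpav_aliases(2000, 0.004))   # 16, as in the example above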

Depending on the kind of workload, there is a huge reduction in PAV-alias UCBs with
HyperPAV. The combination of HyperPAV and EAV allows you to significantly reduce the
constraint on the 64 K device address limit and in turn increase the amount of addressable
storage available on z/OS. In conjunction with Multiple Subchannel Sets (MSS) on IBM
System z196 (zEnterprise), z10, and z9, you have even more flexibility in device
configuration. Keep in mind that the EAV volumes will be supported only on IBM z/OS V1.10
and later. Refer to the IBM Redbooks publication IBM System Storage DS8000: Host
Attachment and Interoperability, SG24-8887, for more details about EAV specifications and
considerations.

Note: For more details about MSS, see Multiple Subchannel Sets: An Implementation
View, REDP-4387, found at:
http://www.redbooks.ibm.com/abstracts/redp4387.html?Open

HyperPAV implementation and system requirements


For support and implementation guidance, see DS8000: Host Attachment and
Interoperability, SG24-8887.

RMF reporting on PAV


RMF reports the number of exposures for each device in its Monitor/DASD Activity report and
in its Monitor II and Monitor III Device reports. If the device is a HyperPAV base device, the
number is followed by an 'H', for example, 5.4H. This value is the average number of
HyperPAV volumes (base and alias) in that interval. RMF reports all I/O activity against the
base address, not by the individual base and associated aliases. The performance
information for the base includes all base and alias I/O activity.

HyperPAV would help minimize the Input/Output Supervisor Queue (IOSQ) Time. If you still
see IOSQ Time, then there are two possible reasons:
򐂰 There are more aliases required to handle the I/O load compared to the number of aliases
defined in the LCU.
򐂰 There is Device Reserve issued against the volume. A Device Reserve would make the
volume unavailable to the next I/O, causing the next I/O to be queued. This delay will be
recorded as IOSQ Time.

7.7.5 PAV in z/VM environments


z/VM provides PAV support in the following ways:
򐂰 As traditionally supported, for VM guests as dedicated guests through the CP ATTACH
command or DEDICATE user directory statement.
򐂰 Starting with z/VM 5.2.0, with APAR VM63952, VM supports PAV minidisks.

Figure 7-21 and Figure 7-22 illustrate PAV in a z/VM environment.

Figure 7-21 z/VM support of PAV volumes dedicated to a single guest virtual machine (base and alias RDEVs E100-E102 for DASD DSK001, accessed as virtual devices 9800-9802)

Figure 7-22 Linkable minidisks for guests that exploit PAV (guests 1 to 3 each access DSK001 through virtual devices 9800-9802)

In this way, PAV provides to the z/VM environments the benefits of a greater I/O performance
(throughput) by reducing I/O queuing.

With the small programming enhancement (SPE) introduced with z/VM 5.2.0 and APAR
VM63952, additional enhancements are available when using PAV with z/VM. For more
information, see 10.4, “z/VM considerations” in DS8000: Host Attachment and
Interoperability, SG24-8887.

7.7.6 Multiple Allegiance


Normally, if any System z host image (server or LPAR) does an I/O request to a device
address for which the storage disk subsystem is already processing an I/O that came from
another System z host image, then the storage disk subsystem will send back a device busy
indication, as shown in Figure 7-16 on page 143. This delays the new request and adds to the
overall response time of the I/O; this delay is shown in the Device Busy Delay (AVG DB DLY)
column in the RMF DASD Activity Report. Device Busy Delay is part of the Pend time.

The DS8000 series accepts multiple I/O requests from different hosts to the same device
address, increasing parallelism and reducing channel impact. In older storage disk
subsystems, a device had an implicit allegiance, that is, a relationship created in the control
unit between the device and a channel path group when an I/O operation is accepted by the
device. The allegiance causes the control unit to guarantee access (no busy status
presented) to the device for the remainder of the channel program over the set of paths
associated with the allegiance.

With Multiple Allegiance, the requests are accepted by the DS8000 and all requests are
processed in parallel, unless there is a conflict when writing to the same data portion of the
CKD logical volume, as shown in Figure 7-23.

Figure 7-23 Parallel I/O capability with Multiple Allegiance (I/Os from different System z hosts to logical volume 100 are processed in parallel)

Nevertheless, good application software access patterns can improve global parallelism by
avoiding reserves, limiting the extent scope to a minimum, and setting an appropriate file
mask, for example, if no write is intended.

In systems without Multiple Allegiance, all except the first I/O request to a shared volume are
rejected, and the I/Os are queued in the System z channel subsystem, showing up in Device
Busy Delay and PEND time in the RMF DASD Activity reports. Multiple Allegiance will allow
multiple I/Os to a single volume to be serviced concurrently. However, a device busy condition
can still happen. This will occur when an active I/O is writing a certain data portion on the
volume and another I/O request comes in and tries to either read or write to that same data.
To ensure data integrity, those subsequent I/Os will get a busy condition until that previous I/O
is finished with the write operation.

Multiple Allegiance provides significant benefits for environments running a sysplex, or System z systems sharing access to data volumes. Multiple Allegiance and PAV can operate
together to handle multiple requests from multiple hosts.

7.7.7 I/O priority queuing


The concurrent I/O capability of the DS8000 allows it to execute multiple channel programs
concurrently, as long as the data accessed by one channel program is not altered by another
channel program.

Queuing of channel programs
When the channel programs conflict with each other and must be serialized to ensure data
consistency, the DS8000 will internally queue channel programs. This subsystem I/O queuing
capability provides significant benefits:
򐂰 Compared to the traditional approach of responding with a device busy status to an
attempt to start a second I/O operation to a device, I/O queuing in the storage disk
subsystem eliminates the impact associated with posting status indicators and redriving
the queued channel programs.
򐂰 Contention in a shared environment is eliminated. Channel programs that cannot execute
in parallel are processed in the order that they are queued. A fast system cannot
monopolize access to a volume also accessed from a slower system. Each system gets a
fair share.

Priority queuing
I/Os from different z/OS system images can be queued in a priority order. It is the z/OS
Workload Manager that makes use of this priority to privilege I/Os from one system against
the others. You can activate I/O priority queuing in WLM Service Definition settings. WLM has
to run in Goal mode.

When a channel program with a higher priority comes in and is put in front of the queue of
channel programs with lower priority, the priority of the low-priority programs will be
increased; see Figure 7-24. This prevents high-priority channel programs from dominating
lower priority ones and gives each system a fair share.

Figure 7-24 I/O priority queuing (queued channel programs from systems A and B are executed in priority order)

7.7.8 Performance considerations on Extended Distance FICON


The function known as Extended Distance FICON produces performance results similar to
z/OS Global Mirror (zGM) Emulation/XRC Emulation at long distances. Extended Distance
FICON does not really extend the distance supported by FICON, but can provide the same
benefits as XRC Emulation. In other words, with Extended Distance FICON, there is no need
to have XRC Emulation running on the Channel extender.

For support and implementation discussions, see 10.6, “Extended Distance FICON” in
DS8000: Host Attachment and Interoperability, SG24-8887.

Figure 7-25 shows Extended Distance FICON (EDF) performance comparisons for a
sequential write workload. The workload consists of 64 jobs performing 4 KB sequential
writes to 64 data sets with 1113 cylinders each, which all reside on one large disk volume.
There is one SDM configured with a single, non-enhanced reader to handle the updates.
When turning the XRC Emulation off (Brocade emulation in the diagram), the performance
drops significantly, especially at longer distances. However, after the Extended Distance
FICON (Persistent IU Pacing) function is installed, the performance returns to where it was
with XRC Emulation on.

Figure 7-25 Extended Distance FICON with small data blocks sequential writes on one SDM reader

Figure 7-26 shows EDF performance, this time used in conjunction with Multiple Reader
support. There is one SDM configured with four enhanced readers.

Figure 7-26 Extended Distance FICON with small data blocks sequential writes on four SDM readers

These results again show that when the XRC Emulation is turned off, performance drops
significantly at long distances. When the Extended Distance FICON function is installed, the
performance again improves significantly.

7.7.9 High Performance FICON for z


The FICON protocol involved several exchanges between the channel and the control unit.
This led to unnecessary overhead. With High Performance FICON, the protocol has been
streamlined and the number of exchanges has been reduced; see Figure 7-27.

High Performance FICON for z (zHPF) is an enhanced FICON protocol and system I/O
architecture that results in improvements for small block transfers (a track or less) to disk
using the device independent random access method. Instead of Channel Command Word
(CCWs), Transport Control Words (TCWs) can be used. I/O that is using the Media Manager,
like DB2, PDSE, VSAM, zFS, VTOC Index (CVAF), Catalog BCS/VVDS, or Extended Format
SAM, will benefit from zHPF.

Figure 7-27 zHPF protocol (Channel Command Words versus Transport Control Words)

High Performance FICON for z (zHPF) is an optional licensed feature.

In situations where zHPF is the exclusive access method in use, it can improve FICON I/O throughput on a single DS8000 port by 100%. Realistic workloads with a mix of data set transfer sizes can see a 30% to 70% increase in FICON I/Os utilizing zHPF, resulting in up to a 10% to 30% savings in channel utilization.

Although clients can see I/Os complete faster as the result of implementing zHPF, the real
benefit is expected to be obtained by using fewer channels to support existing disk volumes,
or increasing the number of disk volumes supported by existing channels.

Additionally, the changes in architecture offer end-to-end system enhancements to improve reliability, availability, and serviceability (RAS).

Only the System z196 (zEnterprise) or z10 processors support zHPF, and only on the FICON Express8, FICON Express4, or FICON Express2 adapters. The original FICON Express adapters are not supported. The required software is z/OS V1.7 with IBM Lifecycle Extension for z/OS
V1.7 (5637-A01), z/OS V1.8, z/OS V1.9, or z/OS V1.10 with PTFs, or z/OS 1.11 and higher.

IBM Laboratory testing and measurements are available at the following website:
http://www.ibm.com/systems/z/hardware/connectivity/ficon_performance.html

zHPF is transparent to applications. However, z/OS configuration changes are required:


Hardware Configuration Definition (HCD) must have Channel path ID (CHPID) type FC
defined for all the CHPIDs that are defined to the 2107 control unit, which also supports zHPF.
For the DS8000, installation of the Licensed Feature Key for the zHPF Feature is required.
After these items are addressed, existing FICON port definitions in the DS8000 will function in
either FICON or zHPF protocols in response to the type of request being performed. These
are nondisruptive changes.

For z/OS, after the PTFs are installed in the LPAR, you must then set ZHPF=YES in
IECIOSxx in SYS1.PARMLIB or issue the SETIOS ZHPF=YES command. ZHPF=NO is the
default setting.

IBM suggests that clients use the ZHPF=YES setting after the required configuration changes
and prerequisites are met. For more information about zHPF in general, refer to:
http://www.ibm.com/systems/z/resources/faq/index.html

zHPF multitrack support


Although the original zHPF implementation supported the new Transport Control Words only
for I/O that did not span more than a track, the DS8800 supports TCW also for I/O operations
on multiple tracks.

7.7.10 Extended distance High Performance FICON


This feature allows clients to achieve equivalent FICON write performance at a distance,
because some existing clients running multiple sites at long distances (10 to 100 km) cannot
exploit zHPF due to the large impact to the write I/O service time.

Figure 7-28 shows that on the base code, without this feature, going from 0 km to 20 km will
increase the service time by 0.4 ms. With the extended distance High Performance FICON,
the service time increase will be reduced to 0.2 ms.

Figure 7-28 Single port 4 KB write hit (response time in ms versus I/O rate, comparing base code and Extended Distance capability, each at 0 km and 20 km)

Part 2 Planning and installation
In this part, we discuss matters related to the installation planning process for the IBM System
Storage DS8000 series. We cover the following topics:
򐂰 Physical planning and installation
򐂰 DS8800 HMC planning and setup
򐂰 IBM System Storage DS8800 features and license keys

Chapter 8. Physical planning and installation
This chapter discusses the various steps involved in the planning and installation of the IBM
System Storage DS8800, including a reference listing of the information required for the setup
and where to find detailed technical reference material. The topics covered include:
򐂰 Considerations prior to installation
򐂰 Planning for the physical installation
򐂰 Network connectivity planning
򐂰 Secondary HMC, SSPC, TKLM, LDAP, and Business-to-Business VPN planning
򐂰 Remote mirror and copy connectivity
򐂰 Disk capacity considerations
򐂰 Planning for growth

Review IBM System Storage DS8000 Introduction and Planning Guide, GC27-2297, for
additional information and details that you will need during the configuration and installation
process.

8.1 Considerations prior to installation
Start by developing and following a project plan to address the many topics needed for a
successful implementation. In general, the following items should be considered for your
installation planning checklist:
򐂰 Plan for growth to minimize disruption to operations. Expansion frames can only be placed
to the right (from the front) of the DS8800.
򐂰 Location suitability, floor loading, access constraints, elevators, doorways, and so on.
򐂰 Power requirements: Redundancy and the use of an Uninterruptible Power Supply (UPS).
򐂰 Environmental requirements: Adequate cooling capacity.
򐂰 A place and connection for the secondary HMC.
򐂰 A plan for encryption integration if FDE drives are considered for the configuration.
򐂰 A place and connection for the TKLM server.
򐂰 Integration of LDAP to allow single user ID and password management.
򐂰 Business-to-Business VPN for the DS8800 to allow fast data offload and service
connections.
򐂰 A plan detailing the desired logical configuration of the storage.
򐂰 Consider TPC monitoring for your environment.
򐂰 Review the services available from IBM for microcode compatibility verification and
configuration checks.
򐂰 Available Copy Services and backup technologies.
򐂰 Staff education and availability to implement the storage plan. Alternatively, IBM or IBM
Business Partner services.

Client responsibilities for the installation


The DS8800 series is specified as an IBM or IBM Business Partner installation and setup
system. However, at a high level, the client remains responsible for the following planning
and installation activities:
򐂰 Physical configuration planning is a client responsibility. Your disk Marketing Specialist can
help you plan and select the DS8800 series physical configuration and features.
򐂰 Installation planning is a client responsibility.
򐂰 Integration of LDAP and Business-to-Business VPN connectivity is a client responsibility.
IBM can provide services to set up and integrate these components.
򐂰 Integration of TPC and SNMP into the client environment for monitoring of performance
and configuration is a client responsibility. IBM can provide services to set up and
integrate these components.
򐂰 Configuration and integration of TKLM servers and DS8800 Encryption for extended data
security is a client responsibility. IBM provides services to set up and integrate these
components.
򐂰 Logical configuration planning and application is a client responsibility. Logical
configuration refers to the creation of RAID ranks, volumes, and LUNs, and the
assignment of the configured capacity to servers. Application of the initial logical
configuration and all subsequent modifications to the logical configuration are client
responsibilities. The logical configuration can be created, applied, and modified using the
DS Storage Manager, DS CLI, or DS Open API.



IBM Global Services will also apply or modify your logical configuration (these are
fee-based services).

In this chapter, you will find information that will assist you with the planning and installation
activities. Additional information can be found in IBM System Storage DS8000 Introduction
and Planning Guide, GC27-2297.

8.1.1 Who should be involved


We suggest having a project manager to coordinate the many tasks necessary for a
successful installation. Installation will require close cooperation with the user community, the
IT support staff, and the technical resources responsible for floor space, power, and cooling.

A Storage Administrator should also coordinate requirements from the user applications and
systems to build a storage plan for the installation. This will be needed to configure the
storage after the initial hardware installation is complete.

The following people should be briefed and engaged in the planning process for the physical
installation:
򐂰 Systems and Storage Administrators
򐂰 Installation Planning Engineer
򐂰 Building Engineer for floor loading and air conditioning and Location Electrical Engineer
򐂰 Security Engineers for Business-to-Business VPN, LDAP, TKLM, and encryption
򐂰 Administrator and Operator for monitoring and handling considerations
򐂰 IBM or Business Partner Installation Engineer

8.1.2 What information is required


A validation list to assist in the installation process should include:
򐂰 Drawings detailing the positioning as specified and agreed upon with a building engineer,
ensuring the weight is within limits for the route to the final installation position.
򐂰 Approval to use elevators if the weight and size are acceptable.
򐂰 Connectivity information, servers, and SAN, and mandatory LAN connections.
򐂰 Agreement on the security structure of the installed DS8800 with all security engineers.
򐂰 Ensure that you have a detailed storage plan agreed upon, with the client available to
understand how the storage is to be configured. Ensure that the configuration specialist
has all the information to configure all the arrays and set up the environment as required.
򐂰 License keys for the Operating Environment License (OEL), which are mandatory, and
any optional license keys.

Note that IBM System Storage DS8000 Introduction and Planning Guide, GC27-2297,
contains additional information about physical planning. You can download it from the
following address:
http://www.ibm.com/systems/storage/disk/ds8000/index.html



8.2 Planning for the physical installation
This section discusses the physical installation planning process and gives some important
tips and considerations.

8.2.1 Delivery and staging area


The shipping carrier is responsible for delivering and unloading the DS8800 as close to its
final destination as possible. Inform your carrier of the weight and size of the packages to be
delivered and inspect the site and the areas where the packages will be moved (for example,
hallways, floor protection, elevator size and loading, and so on).

Table 8-1 lists the final packaged dimensions and maximum packaged weight of the DS8800
storage unit shipgroup.

Table 8-1 Packaged dimensions and weight for DS8800 models


Model 951 pallet or crate: Height 207.5 cm (81.7 in.), Width 101.5 cm (40 in.), Depth 137.5 cm (54.2 in.); maximum packaged weight 1336 kg (2940 lb)

Model 951 (4-way) pallet or crate: Height 207.5 cm (81.7 in.), Width 101.5 cm (40 in.), Depth 137.5 cm (54.2 in.); maximum packaged weight 1378 kg (3036 lb)

Model 95E expansion unit pallet or crate: Height 207.5 cm (81.7 in.), Width 101.5 cm (40 in.), Depth 137.5 cm (54.2 in.); maximum packaged weight 1277 kg (2810 lb)

Shipgroup (height may be lower and weight may be less): Height 105.0 cm (41.3 in.), Width 65.0 cm (25.6 in.), Depth 105.0 cm (41.3 in.); maximum packaged weight up to 90 kg (199 lb)

System Storage Productivity Center (SSPC), PSU (if ordered): Height 68.0 cm (26.8 in.), Width 65.0 cm (25.6 in.), Depth 105.0 cm (41.3 in.); maximum packaged weight 47 kg (104 lb)

System Storage Productivity Center (SSPC), PSU, External HMC (if ordered): Height 68.0 cm (26.8 in.), Width 65.0 cm (25.6 in.), Depth 105.0 cm (41.3 in.); maximum packaged weight 62 kg (137 lb)

External HMC container (if ordered as MES): Height 40.0 cm (17.7 in.), Width 65.0 cm (25.6 in.), Depth 105.0 cm (41.3 in.); maximum packaged weight 32 kg (71 lb)

Attention: A fully configured model in the packaging can weigh over 1416 kg (3120 lbs).
Use of fewer than three persons to move it can result in injury.

8.2.2 Floor type and loading


The DS8800 can be installed on a raised or nonraised floor. It is best practice to install the
unit on a raised floor, because a raised floor provides better cooling efficiency and protection
for the cabling layout.



The total weight and space requirements of the storage unit depend on the configuration
features that you ordered. Consider calculating the weight of the unit and the expansion
frame (if ordered) at their maximum capacity to allow for the addition of new features.

Table 8-2 lists the weights of the various DS8800 models.

Table 8-2 DS8800 weights


Model Maximum weight

Model 951 (2-way) 1200 kg (2640 lb)

Model 951 (4-way) 1256 kg (2770 lb)

Model 951 (with Model 95E expansion model) 2354 kg (5190 lb)

Important: Verify with the building engineer or other appropriate personnel that the floor
loading has been properly considered.

Raised floors can better accommodate cabling layout. The power and interface cables enter
the storage unit through the rear side.

Figure 8-1 shows the location of the cable cutouts. You may use the following measurements
when you cut the floor tile:
򐂰 Width: 45.7 cm (18.0 in.)
򐂰 Depth: 16 cm (6.3 in.)

Figure 8-1 Floor tile cable cutout for DS8800



8.2.3 Room space and service clearance
The total amount of space needed by the storage units can be calculated using the
dimensions in Table 8-3.

Table 8-3 DS8800 dimensions


Dimension with covers   Model 951 (base frame only)   Model 95E

Height                  76 in. (193.4 cm)             76 in. (193.4 cm)
Width                   33.3 in. (84.7 cm)            33.3 in. (84.7 cm)
Depth                   46.6 in. (118.3 cm)           46.6 in. (118.3 cm)

The storage unit location area should also cover the service clearance needed by IBM service
representatives when accessing the front and rear of the storage unit. You can use the
following minimum service clearances; the dimensions are also shown in Figure 8-2:
1. For the front of the unit, allow a minimum of 121.9 cm (48 in.) for the service clearance.
2. For the rear of the unit, allow a minimum of 76.2 cm (30 in.) for the service clearance.
3. For the sides of the unit, allow a minimum of 5.1 cm (2 in.) for the service clearance.

Figure 8-2 Service clearance requirements



8.2.4 Power requirements and operating environment
Consider the following basic items when planning for the DS8800 power requirements:
򐂰 Power connectors
򐂰 Input voltage
򐂰 Power consumption and environment
򐂰 Power control features
򐂰 Power Line Disturbance (PLD) feature

Power connectors
Each DS8800 base and expansion unit has redundant power supply systems. The two line
cords to each frame should be supplied by separate AC power distribution systems. Use a
60 A rating for the low voltage feature and a 25 A rating for the high voltage feature.

For more details regarding power connectors and line cords, see IBM System Storage
DS8000 Introduction and Planning Guide, GC27-2297.

Input voltage
The DS8800 supports a three-phase input voltage source. Table 8-4 lists the power
specifications for each feature code.

Table 8-4 DS8800 input voltages and frequencies


Characteristic                      Low voltage (Feature 9090)       High voltage (Feature 9091)

Nominal input voltage (3-phase)     200, 208, 220, or 240 RMS Vac    380, 400, 415, or 480 RMS Vac
Minimum input voltage (3-phase)     180 RMS Vac                      333 RMS Vac
Maximum input voltage (3-phase)     264 RMS Vac                      508 RMS Vac
Steady-state input frequency        50 ± 3 Hz or 60 ± 3.0 Hz         50 ± 3 Hz or 60 ± 3.0 Hz

Power consumption and environment


Table 8-5 lists the power consumption specifications of the DS8800. The power estimates
given here are on the conservative side and assume a high transaction rate workload.

Table 8-5 DS8800 power consumption


Measurement                 Model 951 (4-way)    Model 95E with I/O

Peak electric power         7.3 kVA              7.2 kVA
Thermal load (BTU/hr)       25,000               24,600

The values represent data that was obtained from typical systems configured as follows:
򐂰 Base models that contain 15 disk drive sets (240 disk drives) and Fibre Channel adapters
򐂰 Expansion models that contain 21 disk drive sets (336 disk drives) and Fibre Channel
adapters
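
As a rough cross-check, assuming a power factor close to 1, the thermal load follows directly from the electrical power: 7,300 W × 3.412 BTU/hr per watt ≈ 24,900 BTU/hr, which is consistent with the 25,000 BTU/hr listed in Table 8-5.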

Air circulation for the DS8800 is provided by the various fans installed throughout the frame.
All of the fans on the DS8800 direct air flow from the front of the frame to the rear of the
frame. No air exhausts to the top of the machine. Using a directional air flow in this manner
allows for “cool aisles” and “hot aisles” to the front and rear of the machine.



The recommended operating temperature for the DS8800 is between 20° and 25° C (68° and
78° F) at a relative humidity range of 40% to 50%.

Important: Make sure that air circulation for the DS8800 base unit and expansion units is
maintained free from obstruction to keep the unit operating in the specified temperature
range.

Power control features


The DS8800 has remote power control features that allow you to control the power of the
storage complex through the DS Storage Manager console. Another power control feature is
available for the System z environment.

For more details regarding power control features, see IBM System Storage DS8000
Introduction and Planning Guide, GC27-2297.

Power Line Disturbance feature


The Power Line Disturbance (PLD) feature extends the time that the DS8800 can ride through
a power line disturbance from 30 milliseconds to 30-50 seconds. It is best practice to install
this feature, especially in environments that have no UPS. No additional physical connection
planning is needed for the client with or without the PLD feature.

8.2.5 Host interface and cables


The DS8800 Model 951 supports a maximum of 16 host adapters and eight device adapter
pairs.

The DS8800 supports one type of fiber adapter, the 8 Gb Fibre Channel/FICON PCI Express
adapter, which is offered in shortwave and longwave versions.

Fibre Channel/FICON
The DS8800 Fibre Channel/FICON adapter has four or eight ports per card. Each port
supports FCP or FICON, but not simultaneously. FCP is supported on point-to-point, fabric,
and arbitrated loop topologies. FICON is supported on point-to-point and fabric topologies.
Fabric components from various vendors, including IBM, CNT, McDATA, Brocade, and Cisco,
are supported by both environments.

The Fibre Channel/FICON shortwave Host Adapter, feature 3153, when used with 50 micron
multi-mode fibre cable, supports point-to-point distances of up to 300 meters. The Fibre
Channel/FICON longwave Host Adapter, when used with 9 micron single-mode fibre cable,
extends the point-to-point distance to 10 km for feature 3245 (4 Gb 10 km LW Host Adapter).
Feature 3243 (4 Gb LW Host Adapter) supports point-to-point distances up to 4 km.
Additional distance can be achieved with the use of appropriate SAN fabric components.

A 31-meter fiber optic cable or a 2-meter jumper cable can be ordered for each Fibre Channel
adapter port.



Table 8-6 lists the fiber optic cable features for the FCP/FICON adapters.

Table 8-6 FCP/FICON cable features


Feature Length Connector Characteristic

1410 31 m LC/LC 50 micron, multimode

1411 31 m LC/SC 50 micron, multimode

1412 2m SC to LC adapter 50 micron, multimode

1420 31 m LC/LC 9 micron, single mode

1421 31 m LC/SC 9 micron, single mode

1422 2m SC to LC adapter 9 micron, single mode

Note: The Remote Mirror and Copy functions use FCP as the communication link between
the IBM System Storage DS8000 series, DS6000s, and ESS Models 800 and 750.

For more details about IBM-supported attachments, see IBM System Storage DS8000 Host
Systems Attachment Guide, SC26-7917.

For the most up-to-date details about host types, models, adapters, and operating systems
supported by the DS8800 unit, refer to the DS8800 System Storage Interoperability Center at
the following address:
http://www.ibm.com/systems/support/storage/ssic/interoperability.wss

8.3 Network connectivity planning


Implementing the DS8800 requires that you consider the physical network connectivity of the
storage adapters and the Hardware Management Console (HMC) within your local area
network.

Check your local environment for the following DS8800 unit connections:
򐂰 Hardware Management Console and network access
򐂰 System Storage Productivity Center and network access
򐂰 DSCLI console
򐂰 DSCIMCLI console
򐂰 Remote support connection
򐂰 Remote power control
򐂰 Storage area network connection
򐂰 TKLM connection
򐂰 LDAP connection

For more details about physical network connectivity, see IBM System Storage DS8000
User´s Guide, SC26-7915, and IBM System Storage DS8000 Introduction and Planning
Guide, GC27-2297.



8.3.1 Hardware Management Console and network access
Hardware Management Consoles (HMCs) are the focal point for configuration, Copy Services
management, and maintenance for a DS8800 unit. The internal HMC included with every
primary rack is mounted in a pull-out tray for convenience and security. The HMC consists of
a mobile workstation (Lenovo Thinkpad T510) with adapters for modem and 10/100/1000 Mb
Ethernet. Ethernet cables connect the HMC to the storage unit.

A second, redundant external HMC is orderable and highly recommended for environments
that use TKLM encryption management and Advanced Copy Services functions. The second
HMC is external to the DS8800 rack(s) and consists of a similar mobile workstation as the
primary HMC.

Tip: To ensure that the IBM service representative can quickly and easily access an
external HMC, place the external HMC rack within 15.2 m (50 ft) of the storage units that
are connected to it.

The management console can be connected to your network for remote management of your
system by using the DS Storage Manager web-based graphical user interface (GUI), the DS
Command-Line Interface (DS CLI), or storage management software through the DS Open
API. To use the DS CLI to manage your storage unit, you must connect the management
console to your LAN, because the DS CLI runs on a LAN-attached workstation and cannot be
run locally on the HMC. The DS8800 can be managed from the HMC, or remotely using
SSPC. Connecting the System Storage Productivity Center (SSPC) to your LAN allows you to
access the DS Storage Manager GUI from any location that has network access.

To connect the management consoles (internal, and external if present) to your network, you
need to provide the following settings to your IBM service representative so that the consoles
can be configured for attachment to your LAN:
򐂰 Management console network IDs, host names, and domain name
򐂰 Domain Name Server (DNS) settings (if you plan to use DNS to resolve network names)
򐂰 Routing information

For additional information regarding the HMC planning, see Chapter 9, “DS8800 HMC
planning and setup” on page 177.

8.3.2 System Storage Productivity Center and network access


SSPC is a solution consisting of hardware and software elements.

SSPC hardware
The SSPC (IBM model 2805-MC5) server contains the following hardware components:
򐂰 1U rack-mounted x86 server
򐂰 Quad-core Intel Xeon processor running at 2.53 GHz
򐂰 8 GB of RAM
򐂰 Two hard disk drives
򐂰 Dual port Gigabit Ethernet

Optional components are:


򐂰 KVM Unit
򐂰 8 Gb Fibre Channel Dual Port HBA (this feature enables you to move the Tivoli Storage
Productivity Center database from the SSPC server to the IBM System Storage DS8000).
򐂰 Secondary power supply



򐂰 Additional hard disk drives
򐂰 CD media to recover image for 2805-MC5

SSPC software
The IBM System Storage Productivity Center includes the following preinstalled (separately
purchased) software, running under a licensed Microsoft® Windows Server 2008 Enterprise
Edition R2 64-bit (included):
򐂰 IBM Tivoli Storage Productivity Center V4.2.1 licensed as TPC Basic Edition (includes the
Tivoli Integrated Portal). A TPC upgrade requires that you purchase and add additional
TPC licenses.
򐂰 DS CIM Agent Command-Line Interface (DSCIMCLI) 5.5
򐂰 IBM Tivoli Storage Productivity Center for Replication (TPC-R) V4.2.1. To run TPC-R on
SSPC, you must purchase and add TPC-R licenses.
򐂰 IBM DB2 Enterprise Server Edition 9.7 (64-bit).

Optionally, the following components can be installed on the SSPC:


򐂰 Software components included since SSPC V1.3 but not on previous SSPC versions
(TPC-R, DSCIMCLI, and Version 10.70 of the IBM System Storage DS Storage Manager for
DS3000, DS4000, or DS5000). IBM Java™ 1.6 is preinstalled and can be used with DS
Storage Manager 10.70; you do not need to download Java from Sun Microsystems.
򐂰 DS8000 Command-Line Interface (DSCLI).
򐂰 Antivirus software.

Clients have the option to purchase and install the individual software components to create
their own SSPC server.

For details, see Chapter 12, “System Storage Productivity Center” on page 229, and IBM
System Storage Productivity Center Deployment Guide, SG24-7560.

Network connectivity
To connect the System Storage Productivity Center (SSPC) to your network, you need to
provide the following settings to your IBM service representative:
򐂰 SSPC network IDs, host names, and domain name
򐂰 Domain Name Server (DNS) settings (if you plan to use DNS to resolve network names)

Routing information
There are several network ports that need to be opened between the SSPC console, the
DS8800, and the LDAP server if the SSPC is installed behind a firewall.

8.3.3 DSCLI console


The DSCLI provides a command-line interface for managing and configuring the DS8800
storage system. The DSCLI can be installed on and used from a LAN-connected system,
such as the storage administrator’s mobile computer. You might consider installing the DSCLI
on a separate workstation connected to the storage unit’s LAN.

For details about the hardware and software requirements for the DSCLI, see IBM System
Storage DS8000: Command-Line Interface User´s Guide, SC26-7916.
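
As a minimal sketch, a DS CLI session can be started from such a workstation as shown below; the IP address, user ID, and password are placeholders for your own values (see Chapter 9, “DS8800 HMC planning and setup” on page 177 for detailed examples):

C:\Program Files\IBM\dscli>dscli -hmc1 10.0.0.1 -user admin -passwd <password>
dscli> lssi
dscli> exit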



8.3.4 DSCIMCLI
The DSCIMCLI must be used to configure the CIM agent running on the HMC. The DS8800
can be managed either by the CIM agent that is bundled with the HMC or with a separately
installed CIM agent. The DSCIMCLI utility, which configures the CIM agent, is available from
the DS CIM agent website as part of the DS CIM agent installation bundle, and also as a
separate installation bundle.

For details about the configuration of the DSCIMCLI, see IBM DS Open Application
Programming Interface Reference, GC35-0516.

8.3.5 Remote support connection


Remote support connection is available from the HMC using a modem (dial-up), a Virtual
Private Network (VPN) over the Internet through the client LAN, or both.

You can take advantage of the DS8800 remote support feature for outbound calls (Call Home
function) or inbound calls (remote service access by an IBM technical support
representative). You need to provide an analog telephone line for the HMC modem.

Figure 8-3 shows a typical remote support connection.

Figure 8-3 DS8800 HMC remote support connection

Note the following guidelines to assist in the preparation for attaching the DS8800 to the
client’s LAN:
1. Assign a TCP/IP address and host name to the HMC in the DS8800.
2. If email notification of service alert is allowed, enable the support on the mail server for the
TCP/IP addresses assigned to the DS8800.
3. Use the information that was entered on the installation worksheets during your planning.

It is best practice to use a service connection through the high-speed VPN network utilizing a
secure Internet connection. You need to provide the network parameters for your HMC



through the installation worksheet prior to actual configuration of the console. See Chapter 9,
“DS8800 HMC planning and setup” on page 177 for more details.

Your IBM System Support Representative (SSR) will need the configuration worksheet during
the configuration of your HMC. A worksheet is available in IBM System Storage DS8000
Introduction and Planning Guide, GC27-2297.

See Chapter 17, “Remote support” on page 363 for further discussion about remote support
connection.

8.3.6 Business-to-Business VPN connection


The Business-to-Business VPN connection allows faster data communications between IBM
support and the client environment. This is helpful when new microcode needs to be sent to
the DS8800 or problem determination data needs to be offloaded. The data transfer is secure
and traceable by the client. All activities performed by IBM personnel in the client environment
can be monitored and documented. See Chapter 17, “Remote support” on page 363 for more
information.

8.3.7 Remote power control


The System z remote power control setting allows you to power the storage unit on and off
from a System z interface. If you plan to use this capability, be sure to order the System z
power control feature, which comes with four power control cables.

In a System z environment, the host must have the Power Sequence Controller (PSC) feature
installed to have the ability to turn on and off specific control units, such as the DS8800. The
control unit is controlled by the host through the power control cable. The power control cable
comes with a standard length of 31 meters, so be sure to consider the physical distance
between the host and DS8800.

8.3.8 Storage area network connection


The DS8800 can be attached to a SAN environment through its Fibre Channel ports. SANs
provide the capability to interconnect open systems hosts, S/390® and System z hosts, and
other storage systems.

A SAN allows your single Fibre Channel host port to have physical access to multiple Fibre
Channel ports on the storage unit. You might need to establish zones to limit the access (and
provide access security) of host ports to your storage ports. Take note that shared access to a
storage unit Fibre Channel port might come from hosts that support a combination of bus
adapter types and operating systems.

8.3.9 Tivoli Key Lifecycle Manager server for encryption


If the DS8800 is configured with FDE drives and enabled for encryption, an isolated Tivoli Key
Lifecycle Manager (TKLM) server is also required.

The isolated TKLM server consists of the following hardware and software:
򐂰 IBM System x3650 with L5420 processor
– Quad-core Intel Xeon® processor L5420 (2.5 GHz, 12 MB L2, 1.0 GHz FSB, 50 W)
– 6 GB memory



– 146 GB SAS RAID 1 storage
– SUSE Linux v10
– Dual Gigabit Ethernet ports (standard)
– Power supply
򐂰 Tivoli Key Lifecycle Manager V2 (includes DB2 9.5 FB2)

Note: No other hardware or software is allowed on this server. An isolated server must
only use internal disk for all files necessary to boot and have the TKLM key server
become operational.

Table 8-7 lists the general hardware requirements.

Table 8-7 TKLM hardware requirements


System components                                   Minimum values               Suggested values

System memory (RAM)                                 4 GB                         4 GB

Processor speed (Linux and Windows systems)         2.66 GHz single processor    3.0 GHz dual processors
Processor speed (AIX and Sun Solaris systems)       1.5 GHz (2-way)              1.5 GHz (4-way)

Disk space free for the product and prerequisite
products, such as DB2 Database and keystore files   15 GB                        30 GB

Operating system requirement and software prerequisites


Table 8-8 lists the operating systems requirements for installation.

Table 8-8 TKLM software requirements


Operating system                                           Patch and maintenance level at time of initial publication

AIX Version 5.3 (64-bit) and Version 6.1                   For Version 5.3, use Technology Level 5300-04 and Service Pack 5300-04-02

Sun Server Solaris 10 (SPARC 64-bit)                       None (note: Tivoli Key Lifecycle Manager runs in a 32-bit JVM)

Windows Server 2003 R2 (32-bit Intel)                      None

Red Hat Enterprise Linux AS Version 4.0 on x86 (32-bit)    None

SUSE Linux Enterprise Server Version 9 on x86 (32-bit)
and Version 10 on x86 (32-bit)                             None

On Linux platforms, Tivoli Key Lifecycle Manager requires the following package:
compat-libstdc++-33-3.2.3-61 or higher



On Red Hat systems, to determine if you have the package, run the following command:
rpm -qa | grep -i "libstdc"

For more information regarding the required TKLM server and other requirements and
guidelines, see IBM System Storage DS8700 Disk Encryption Implementation and Usage
Guidelines, REDP-4500.

TKLM connectivity and routing information


To connect the Tivoli Key Lifecycle Manager to your network, you need to provide the
following settings to your IBM service representative:
򐂰 TKLM server network IDs, host names, and domain name
򐂰 Domain Name Server (DNS) settings (if you plan to use DNS to resolve network names)

Two network ports need to be opened on the firewall to allow the DS8800 connection and to
provide an administrative management interface to the TKLM server. These ports are defined
by the TKLM administrator.

8.3.10 Lightweight Directory Access Protocol server for single sign-on


A Lightweight Directory Access Protocol (LDAP) server can be used to provide directory
services to the DS8800 via the SSPC TIP (Tivoli Integrated Portal). This can enable a single
sign-on interface to all DS8800s in the client environment.

Typically, one LDAP server is installed in the client environment to provide directory services.
For details, see IBM System Storage DS8000: LDAP Authentication, REDP-4505.

LDAP connectivity and routing information


To connect the Lightweight Directory Access Protocol (LDAP) server to the System Storage
Productivity Center (SSPC), you need to provide the following settings to your IBM service
representative:
򐂰 LDAP network IDs, host names, domain name, and port
򐂰 User ID and password of the LDAP server

If the LDAP server is isolated from the SSPC by a firewall, the LDAP port needs to be opened
in that firewall. There might also be a firewall between the SSPC and the DS8800 that needs
to be opened to allow LDAP traffic between them.

8.4 Remote mirror and copy connectivity


The DS8800 uses the high speed Fibre Channel protocol (FCP) for Remote Mirror and Copy
connectivity.

Make sure that you have a sufficient number of FCP paths assigned for your remote mirroring
between your source and target sites to address performance and redundancy issues. When
you plan to use both Metro Mirror and Global Copy modes between a pair of storage units, we
recommend that you use separate logical and physical paths for the Metro Mirror and another
set of logical and physical paths for the Global Copy.

Plan the distance between the primary and secondary storage units so that you can acquire
fiber optic cables of the necessary length, and determine whether your Copy Services solution
requires separate hardware, such as channel extenders or dense wavelength division
multiplexing (DWDM).



For detailed information, refer to the IBM Redbooks publications IBM System Storage
DS8000: Copy Services for Open Systems, SG24-6788 and IBM System Storage DS8000:
Copy Services for IBM System z, SG24-6787.

8.5 Disk capacity considerations


The effective capacity of the DS8800 is determined by several factors:
򐂰 The spares configuration
򐂰 The size of the installed disk drives
򐂰 The selected RAID configuration: RAID 5, RAID 6, or RAID 10, in two sparing
combinations
򐂰 The storage type: Fixed Block (FB) or Count Key Data (CKD)

8.5.1 Disk sparing


The DS8800 assigns spare disks automatically. The first four array sites (a set of eight disk
drives) on a Device Adapter (DA) pair will normally each contribute one spare to the DA pair.
A minimum of one spare is created for each array site defined until the following conditions
are met:
򐂰 A minimum of four spares per DA pair
򐂰 A minimum of four spares of the largest capacity array site on the DA pair
򐂰 A minimum of two spares of capacity and RPM greater than or equal to the fastest array
site of any given capacity on the DA pair

The DDM sparing policies support the overconfiguration of spares. This possibility might be
useful for some installations, because it allows the repair of some DDM failures to be deferred
until a later repair action is required. See IBM System Storage DS8000 Introduction and
Planning Guide, GC27-2297 and 4.6.8, “Spare creation” on page 77 for more details about
the DS8800 sparing concepts.

8.5.2 Disk capacity


The DS8800 operates in either a RAID 5, RAID 6, or RAID 10 configuration. The following
RAID configurations are possible:
򐂰 6+P RAID 5 configuration: The array consists of six data drives and one parity drive. The
remaining drive on the array site is used as a spare.
򐂰 7+P RAID 5 configuration: The array consists of seven data drives and one parity drive.
򐂰 5+P+Q RAID 6 configuration: The array consists of five data drives and two parity drives.
The remaining drive on the array site is used as a spare.
򐂰 6+P+Q RAID 6 configuration: The array consists of six data drives and two parity drives.
򐂰 3+3 RAID 10 configuration: The array consists of three data drives that are mirrored to
three copy drives. Two drives on the array site are used as spares.
򐂰 4+4 RAID 10 configuration: The array consists of four data drives that are mirrored to four
copy drives.



Table 8-9 helps you plan the capacity of your DS8800 system. It shows the effective capacity
of one rank in the different possible configurations. A disk drive set contains
16 drives, which form two array sites. Hard Disk Drive capacity is added in increments of one
disk drive set. Solid State Drive capacity can be added in increments of a half disk drive set
(eight drives). The capacities in the table are expressed in decimal gigabytes and as the
number of extents.

Table 8-9 Disk drive set capacity for open systems and System z environments

Effective capacity of one rank in decimal GB (number of extents)

Disk size /          RAID 10            RAID 10            RAID 6             RAID 6             RAID 5             RAID 5
rank type            3+3                4+4                5+P+Q              6+P+Q              6+P                7+P

146 GB (SAS) / FB    413.39 (385)       551.90 (514)       674.31 (628)       811.75 (756)       828.93 (772)       966.37 (900)
146 GB (SAS) / CKD   364.89 (431)       544.89 (576)       665.99 (704)       801.27 (847)       818.30 (865)       954.52 (1009)
450 GB (SAS) / FB    1275.60 (1188)     1702.95 (1586)     2077.69 (1935)     2500.74 (2329)     2553.35 (2378)     2977.48 (2773)
450 GB (SAS) / CKD   1259.13 (1331)     1679.15 (1775)     2051.87 (2169)     2468.11 (2609)     2520.14 (2664)     2938.28 (3106)
600 GB (SAS) / FB    1701.82 (1585)     2270.88 (2115)     2771.22 (2581)     3335.99 (3107)     3406.85 (3173)     3970.54 (3698)
600 GB (SAS) / CKD   1679.15 (1775)     2240.13 (2368)     2736.78 (2893)     3293.03 (3481)     3361.14 (3553)     3919.28 (4143)
300 GB (SSD) / FB    N/A                N/A                N/A                N/A                1690.07 (1574)     1970.31 (1835)
300 GB (SSD) / CKD   N/A                N/A                N/A                N/A                1667.80 (1763)     1944.98 (2056)

Notes:
1. Effective capacities are in decimal gigabytes (GB). One GB is 1,000,000,000 bytes.
2. Although disk drive sets contain 16 drives, arrays use only eight drives. The effective
capacity assumes that you have two arrays for each disk drive set.
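
As a simple worked example based on Table 8-9, one 450 GB disk drive set (16 drives) configured for Fixed Block storage as two 7+P RAID 5 ranks (assuming no spares are taken from these arrays) yields approximately 2 × 2977.48 GB ≈ 5,955 GB (2 × 2773 = 5,546 extents) of effective capacity; configured as two 6+P+Q RAID 6 ranks, the same drive set yields approximately 2 × 2500.74 GB ≈ 5,001 GB.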

An updated version of Capacity Magic (see “Capacity Magic” on page 580) will aid you in
determining the raw and net storage capacities and the number of extents required for each
available RAID type.



8.5.3 Solid State Drive (SSD) considerations
SSD drives follow special rules for their physical installation. An RPQ is needed to change
these rules. These rules are as follows:
򐂰 SSD drives can be ordered in sixteen drive install groups. SSD drives can be installed in
eight drive or sixteen drive increments.
Note also that an eight drive install increment means that the SSD rank added is assigned
to only one DS8800 server (CEC).
򐂰 SSD disks are installed in their preferred locations, these being the first disk enclosure pair
on each device adapter (DA) pair. This is done to spread SSD disks over as many DA
pairs as possible for improved performance. The preferred locations are split among eight
locations in the first two frames, four in the first frame and four in the second.
򐂰 An eight drive SSD half drive set is always upgraded to a full 16 drive set when SSD
capacity is added. A system can contain at most one SSD half drive set.
򐂰 A SSD feature (drive set or half drive set) is first installed in the first enclosure of a
preferred enclosure pair. A second SSD feature can be installed in the same DA pair only
after the system contains at least eight SSD features, that is, after each of the eight DA
pairs contains at least one SSD disk drive set. This means that you can have more than 16
SSD disks in a DA pair only if the system has two or more frames. The second SSD
feature in the DA pair must be a full drive set.
򐂰 A DA pair can contain at most three SSD drive sets (48 drives). With the maximum of eight
DA pairs, two of which only support two drive sets, a DS8800 system can contain up to 22
SSD drive sets (352 disks in 44 arrays).
򐂰 Limiting the number of SSD drives to 16 per DA pair is preferred. This configuration
maintains adequate performance without saturating the DA.
򐂰 SSD disks are available in 300 GB capacity.
򐂰 SSDs can be intermixed with HDDs within the same DA pair, but not on the same disk
enclosure pair.
򐂰 The DS8300 was limited to a total of 512 drives if it had both SSD and HDD drives
installed, limiting the system to three frames. The DS8800 does not have this limitation.
򐂰 RPQ 8S1027 is no longer required to order SSD drives on a new DS8800 (plant order). An
RPQ process is still required to order SSD feature(s) on an existing DS8800 (field
upgrade). The RPQ is needed to ensure that SSDs can be placed in proper locations.
򐂰 RAID 6 and RAID 10 implementations are not supported for SSD arrays, only RAID 5.
SSD drives follow normal sparing rules. The array configuration is either 6+P+S or 7+P.

8.5.4 Full Disk Encryption disk considerations


New systems can be ordered equipped with Full Disk Encryption (FDE) drive sets. An RPQ
process is required (RPQ 8S1028), a waiver must be signed by the client, and there are also
specific technical requirements, such as an isolated TKLM server. FDE drives cannot be
intermixed with other drive types within the same storage facility image.

For more information about encrypted drives and inherent restrictions, see IBM System
Storage DS8700 Disk Encryption Implementation and Usage Guidelines, REDP-4500.



8.6 Planning for growth
The DS8800 storage unit is a highly scalable storage solution. Features such as total storage
capacity, cache size, and host adapters can be easily increased by physically adding the
necessary hardware or by changing the licensed keys for Advanced Copy Services features
(as ordered).

Planning for future growth normally suggests an increase in physical requirements, in your
installation area (including floor loading), electrical power, and environmental cooling.

A key feature that you can order for your dynamic storage requirement is the Standby
Capacity on Demand (CoD). This offering is designed to provide you with the ability to tap into
additional storage and is particularly attractive if you have rapid or unpredictable growth, or if
you simply want the knowledge that the extra storage will be there when you need it. Standby
CoD allows you to access the extra storage capacity when you need it through a
nondisruptive activity. For more information about Capacity on Demand, see 18.2, “Using
Capacity on Demand (CoD)” on page 383.




Chapter 9. DS8800 HMC planning and setup


This chapter discusses the planning activities needed for the setup of the required DS
Hardware Management Console (HMC). This chapter covers the following topics:
򐂰 Hardware Management Console overview
򐂰 Hardware Management Console software
򐂰 HMC activities
򐂰 HMC and IPv6
򐂰 HMC user management
򐂰 External HMC



9.1 Hardware Management Console overview
The HMC is the focal point for DS8800 management with multiple functions, including:
򐂰 DS8800 power control
򐂰 Storage provisioning
򐂰 Advanced Copy Services management
򐂰 Interface for onsite service personnel
򐂰 Call Home and problem management
򐂰 Remote support
򐂰 Connection to TKLM for encryption functions

The HMC is the point where the DS8800 is connected to the client network. It provides the
services that the client needs to configure and manage the storage, and it also provides the
interface where service personnel will perform diagnostics and repair actions. The HMC is the
contact point for remote support, both modem and VPN.

9.1.1 Storage Hardware Management Console hardware


The HMC consists of a mobile workstation (Lenovo Thinkpad T510) with adapters for modem
and 10/100/1000 Mb Ethernet. The internal HMC included with every Primary Rack is
mounted in a pull-out tray for convenience and security. A second, redundant mobile
workstation HMC is orderable and highly recommended for environments that use TKLM
encryption management and Advanced Copy Services functions. A second HMC is external
to the DS8800 rack(s). See 9.6, “External HMC” on page 199 for more information regarding
adding an external HMC. Figure 9-1 shows a sketch of the mobile computer HMC and the
network connections. This drawing also applies to an external HMC.

Figure 9-1 DS8800 mobile workstation HMC and connections



9.1.2 Private Ethernet networks
The HMC is connected to the storage facility by way of redundant private Ethernet networks.
Figure 9-2 shows the pair of Ethernet switches internal to the DS8800.

[Diagram: SW1, the left switch (black network), and SW2, the right switch (gray network); each switch has ports for the lower and upper server en0 and service processor connections, the external HMC or a second DS8800, and the internal HMC]

Figure 9-2 Rear view of DS8800 Ethernet switches

The HMC’s public Ethernet port, shown as eth2 in Figure 9-1 on page 178, is where the client
connects to its network. The HMC’s private Ethernet ports, eth0 and eth3, are configured into
port 1 of each Ethernet switch to form the private DS8800 network. To interconnect two
DS8800 primary frames, FC1190 provides a pair of 31 m Ethernet cables to connect each
switch in the second base frame into port 2 of switches in the first frame. Depending on the
machine configuration, one or more ports might be unused on each switch.

Important: The internal Ethernet switches pictured in Figure 9-2 are for the DS8800
private network only. No client network connection should ever be made to the internal
switches. Client networks are connected to the HMCs directly.

9.2 Hardware Management Console software


The Linux-based HMC includes two application servers that run within a WebSphere®
environment: DS Storage Management server and Enterprise Storage Server Network
Interface server:
򐂰 DS Storage Management server
The DS Storage Management server is the logical server that communicates with the
outside world to perform DS8800-specific tasks.
򐂰 Enterprise Storage Server Network Interface server (ESSNI)
ESSNI is the logical server that communicates with the DS Storage Management server
and interacts with the two CECs of the DS8800.



The DS8800 HMC provides several management interfaces. These include:
򐂰 DS Storage Manager graphical user interface (GUI)
򐂰 DS Command-Line Interface (DS CLI)
򐂰 DS Open Application Programming Interface (DS Open API)
򐂰 Web-based user interface (WebUI), specifically for use by support personnel

The GUI and the CLI are comprehensive, easy-to-use interfaces for a storage administrator to
perform DS8800 management tasks to provision the storage arrays, manage application
users, and change some HMC options. The two can be used interchangeably, depending on
the particular task.

The DS Open API provides an interface for external storage management programs, such as
Tivoli Productivity Center (TPC), to communicate with the DS8800. It channels traffic through
the IBM System Storage Common Information Model (CIM) agent, a middleware application
that provides a CIM-compliant interface.

Older DS8000 family products used a service interface called WebSM. The DS8800 uses a
newer, faster interface called WebUI that can be used remotely over a VPN by support
personnel to check the health status or to perform service tasks.

9.2.1 DS Storage Manager GUI


DS Storage Manager can be accessed via the TPC Element Manager of the SSPC from any
network-connected workstation with a supported browser. It can also be accessed directly
from the DS8800 management console by using the browser on the HMC. Login procedures
are explained in the following sections.

SSPC login to DS Storage Manager GUI


The DS Storage Manager graphical user interface (GUI) can be launched via the TPC
Element Manager of the SSPC from any supported network-connected workstation.

To access the DS Storage Manager GUI through the SSPC, open a new browser window or
tab and type the following address:
http://<SSPC ipaddress>:9550/ITSRM/app/welcome.html

A more thorough description of setting up and logging into SSPC can be found in 12.2.1,
“Configuring SSPC for DS8800 remote GUI access” on page 233.

Console login to DS Storage Manager GUI


The following procedure can be used to log in to the management console and access the DS
Storage Manager GUI using the browser that is preinstalled on the HMC:
1. Open and turn on the management console. The Hardware Management Console login
window displays.



2. Move the mouse pointer to an empty area of the desktop background. Right-click with the
mouse to open a Fluxbox menu, as shown in Figure 9-3. Select Net → HMC Browser.

[Screen capture: Fluxbox menu showing Terminals and Net entries, with Net > HMC Browser selected]

Figure 9-3 DS8800 console welcome window

3. The web browser starts with no address bar and a web page titled WELCOME TO THE
DS8000 MANAGEMENT CONSOLE appears, as shown in Figure 9-4.

Figure 9-4 Management Console welcome window

4. On the Welcome window, click IBM TOTALSTORAGE DS STORAGE MANAGER.



5. A certificate window opens. Click Accept.
6. The IBM System Storage DS8000 SignOn window opens. Proceed by entering a user ID
and password. The predefined user ID and password are:
– User ID: admin
– Password: admin
The user will be required to change the password at first login. If someone has already
logged on, check with that person to obtain the new password.
7. A Wand (password manager) window opens. Select OK.

9.2.2 Command-line interface


The DS Command-Line Interface (DS CLI), which must be executed in the command
environment of an external workstation, is a second option to communicate with the HMC.
The DS CLI might be a good choice for configuration tasks when there are many updates to
be done. This avoids the web page load time for each window in the DS Storage Manager
GUI.

See Chapter 14, “Configuration with the DS Command-Line Interface” on page 307 for more
information about using DS CLI, as only a few commands are covered in this section. See
IBM System Storage DS8000: Command-Line Interface User´s Guide, SC26-7916, for a
complete list of DS CLI commands.

Note: The DS CLI cannot be used locally at the DS8800 Hardware Management Console.

After the DS CLI has been installed on a workstation, you can use it by typing dscli in a
command prompt window. The DS CLI provides three command modes:
򐂰 Interactive command mode
򐂰 Script command mode
򐂰 Single-shot command mode

Interactive mode
To enter the interactive mode of the DS CLI, type dscli in a command prompt window and
follow the prompts to log in, as shown in Example 9-1. After you are logged on, you can enter
DS CLI commands one at a time.

Example 9-1 DS CLI interactive mode


C:\Program Files\IBM\dscli>dscli
Enter the primary management console IP address: 10.0.0.1
Enter the secondary management console IP address: 10.0.0.1
Enter your username: StevenJ
Enter your password:
Date/Time: October 12, 2009 9:47:13 AM MST IBM DSCLI Version: 5.4.30.253 DS:
IBM.2107-7502241
dscli> lssi
Date/Time: October 12, 2009 9:47:23 AM MST IBM DSCLI Version: 5.4.30.253 DS: -
Name ID Storage Unit Model WWNN State ESSNet
============================================================================
- IBM.2107-7502241 IBM.2107-7502240 941 5005076303FFC076 Online Enabled
dscli> exit



Tip: Commands in the DS CLI are not case sensitive. lssi is the same as Lssi. However,
user names for logging in to the DS8800 are case sensitive.

The information required to connect to a DS8800 by DS CLI can be predefined as a profile.


Example 9-2 shows editing the lines for “hmc1” and “devid” in a profile file using HMC IP
Address 9.155.62.102 and for the SFI of serial number 7520280. For the DS8800, there is
only one SFI, so it will be the DS8800 serial number with a 1 at the end instead of a 0. The file
dscli.profile is the default profile used if a profile is not specified on the command line.

Example 9-2 Modifying dscli.profile


# hmc1 and hmc2 are equivalent to -hmc1 and -hmc2 command options.
hmc1:9.155.62.102
# Default target Storage Image ID
devid:IBM.2107-7520281

To prepare a custom DS CLI profile, the file dscli.profile can be copied and then modified,
as shown in Example 9-2 on page 183. On a Windows workstation, save the file in the
directory C:\Program Files\IBM\dscli with the name lab8700.profile. The -cfg flag is used
at the dscli prompt to call this profile. Example 9-3 shows how to connect DS CLI to the
DS8800 HMC using this custom profile.

Example 9-3 DS CLI command to use a saved profile


C:\Program Files\IBM\dscli>dscli -cfg lab8700.profile
Date/Time: October 12, 2009 2:47:26 PM CEST IBM DSCLI Version: 5.4.30.253 DS:
IBM.2107-75ABTV1
dscli>

Script mode
If you already know exactly what commands you want to issue on the DS8800, multiple DS
CLI commands can be integrated into a script that can be executed by launching dscli with
the -script parameter. To call a script with DS CLI commands, use the following syntax in a
command prompt window of a Windows workstation:
dscli -script <script_filename> -hmc1 <ip-address> -user <userid> -passwd
<password>

In Example 9-4, the script file lssi.cli contains just one CLI command, that is, the lssi
command.

Example 9-4 CLI script mode


C:\Program Files\IBM\dscli>dscli -script c:\DS8800\lssi.cli -hmc1 10.0.0.1 -user
StevenJ -passwd temp4now
Date/Time: October 12, 2009 9:33:25 AM MST IBM DSCLI Version: 5.4.30.253
IBM.2107-75ABTV1
Name ID Storage Unit Model WWNN State ESSNet
============================================================================
- IBM.2107-75ABTV1 IBM.2107-75ABTV0 951 5005076303FFC663 Online Enabled

Note: A script contains commands to run against a DS8800. A profile contains


instructions on which HMC to connect to and what settings to use.



Single-shot mode
A single-shot is a single command that is executed upon successful login to the DS8800
HMC. Example 9-5 shows how to run a single-shot command from a workstation prompt.

Example 9-5 CLI single-shot mode


C:\Program Files\IBM\dscli>dscli -cfg 75abtv1.profile lssi
Date/Time: October 12, 2009 3:31:02 PM MST IBM DSCLI Version: 5.4.30.253 DS: -
Name ID Storage Unit Model WWNN State ESSNet
==================================================================================
- IBM.2107-75ABTV1 IBM.2107-75ABTV0 951 5005076303FFC663 Online Enabled
C:\ProgramFiles\ibm\dscli>

9.2.3 DS Open Application Programming Interface


Calling DS Open Application Programming Interfaces (DS Open APIs) from within a program
is a third option to implement communication with the HMC. Both DS CLI and DS Open API
communicate directly with the ESSNI server software running on the HMC.

The Common Information Model (CIM) Agent for the DS8800 is Storage Management
Initiative Specification (SMI-S) 1.1-compliant. This agent is used by storage management
applications, such as Tivoli Productivity Center (TPC), Tivoli Storage Manager, and
VSS/VDS. Also, to comply with more open standards, the agent can be accessed by software
from third-party vendors, including VERITAS/Symantec, HP/AppIQ, EMC, and many other
applications at the SNIA Interoperability Lab. For more information, visit the following address:
http://www.snia.org/forums/smi/tech_programs/lab_program/

For the DS8800, the CIM agent is preloaded with the HMC code and is started when the HMC
boots. An active CIM agent only allows access to the DS8800s managed by the HMC on
which it is running. Configuration of the CIM agent must be performed by an IBM Service
representative using the DS CIM Command Line Interface (DSCIMCLI).

9.2.4 Web-based user interface


The Web User Interface (WebUI) is an Internet browser-based interface used for remote
access to system utilities. If a VPN connection has been set up, WebUI can be used by
support personnel for DS8800 diagnostic tasks, data offloading, and many service actions.
The connection uses SSL over port 443, providing a secure and full interface to the utilities
running at the HMC.

Important: Use a secure Virtual Private Network (VPN) or Business-to-Business VPN,


which allows service personnel to quickly respond to client needs using the WebUI.



The following procedure can be used to log in to the Hardware Management Console:
1. Open your browser and connect to the HMC using the URL https://<HMC
ipaddress>/preloginmonitor/index.jsp. The browser might need your approval
regarding the HMC security certificate upon first connection; each browser is different in
how it handles security exceptions. The Hardware Management Console login window
displays, as shown in Figure 9-3 on page 181.
2. Click Log on and launch the Hardware Management Console web application to open
the login window and log in. The default user ID is customer and the default password is
cust0mer.
3. If you are successfully logged in, you will see the Hardware Management console window,
where you can select Status Overview to see the status of the DS8800. Other areas of
interest are illustrated in Figure 9-5.

[Screen capture: WebUI main window, with areas for HMC Management, Service Management, Status Overview, Extra Information, Help, and Logoff]

Figure 9-5 WebUI main window

Because the HMC WebUI is mainly a services interface, it will not be covered here. Further
information can be obtained through the Help menu.



9.3 HMC activities
This section covers some of the planning and maintenance activities for the DS8800 HMC.
See Chapter 8, “Physical planning and installation” on page 157 as well, which contains
overall planning information.

9.3.1 HMC planning tasks


The following activities are needed to plan the installation or configuration:
򐂰 The installation activities for the optional external HMC need to be identified as part of the
overall project plan and agreed upon with the responsible IBM personnel.
򐂰 A connection to the client network will be needed at the primary frame for the internal
HMC. Another connection will also be needed at the location of the second, external HMC.
The connections should be standard CAT5/6 Ethernet cabling with RJ45 connectors.
򐂰 IP addresses for the internal and external HMCs will be needed. The DS8800 can work
with both IPv4 and IPv6 networks. See 9.4, “HMC and IPv6” on page 189 for procedures
to configure the DS8800 HMC for IPv6.
򐂰 A phone line will be needed at the primary frame for the internal HMC. Another line will
also be needed at the location of the second, external HMC. The connections should be
standard phone cabling with RJ11 connectors.
򐂰 The SSPC (machine type 2805-MC5) is an integrated hardware and software solution for
centralized management of IBM storage products with IBM storage management
software. Alternatively, you can use an existing TPC server in your environment to access
the DS GUI on the HMC. SSPC is described in detail in Chapter 12, “System Storage
Productivity Center” on page 229.
򐂰 The web browser to be used on any administration workstation should be a supported
one, as mentioned in DS8000 Introduction and Planning Guide, GC35-0515 or in the
Information Center for the DS8800, which can be found at the following address:
http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000ic/index.jsp
A decision should be made as to which web browser should be used. The web browser is
the only software that is needed on workstations that will do configuration tasks online
using the DS Storage Manager GUI (through the SSPC).
򐂰 The IP addresses of SNMP recipients need to be identified if the client wants the DS8800
HMC to send SNMP traps to a network station.
򐂰 E-mail accounts need to be identified if the client wants the DS8800 HMC to send e-mail
messages for problem conditions.
򐂰 The IP addresses of NTP servers need to be identified if the client wants the DS8800
HMC to utilize Network Time Protocol for time synchronization.
򐂰 When ordering a DS8800, the license and some optional features need activation as part
of the customization of the DS8800. See Chapter 10, “IBM System Storage DS8800
features and license keys” on page 203 for details.

Note: Applying increased feature activation codes is a concurrent action, but a license
reduction or deactivation is a disruptive action.



9.3.2 Planning for microcode upgrades
The following activities need to be considered in regard to the microcode upgrades on the
DS8800:
򐂰 Microcode changes
IBM might release changes to the DS8800 series Licensed Machine Code. IBM plans to
make most DS8800 series Licensed Machine Code changes available for download by the
HMC from the IBM System Storage technical support website. Note that not all Licensed
Machine Code changes are made available through the support website.
򐂰 Microcode install
An IBM service representative can install the changes that IBM does not make available
for you to download. If the machine does not function as warranted and your problem can
be resolved through your application of downloadable Licensed Machine Code, you are
responsible for downloading and installing these designated Licensed Machine Code
changes as IBM specifies. Check whether the new microcode requires new levels of DS
Storage Manager, DS CLI and DS Open API and plan on upgrading them on the relevant
workstations if necessary.
򐂰 Host prerequisites
When planning for initial installation or for microcode updates, make sure that all
prerequisites for the hosts are identified correctly. Sometimes a new level is required for
the SDD as well. The Interoperability Matrix should be the primary source to identify
supported operating systems, HBAs, and hardware of hosts. View this online at:
http://www-03.ibm.com/systems/storage/product/interop.html
To prepare for the download of drivers, refer to the HBA Support Matrix referenced in the
Interoperability Matrix and ensure that the drivers are downloaded from the IBM website.
This ensures that the drivers are used with the settings that correspond to the DS8800
rather than to another IBM storage subsystem.
DS8800 interoperability information can also be found at the IBM System Storage
Interoperability Center (SSIC) at the following website:
http://www.ibm.com/systems/support/storage/config/ssic

Important: The Interoperability Center reflects information regarding the latest
supported code levels. This does not necessarily mean that former levels of HBA
firmware or drivers are no longer supported. If in doubt about any supported levels,
contact your IBM representative.

򐂰 Maintenance windows
Even though the microcode update of the DS8800 is a nondisruptive action, any
prerequisites identified for the hosts (for example, patches, new maintenance levels, or
new drivers) could make it necessary to schedule a maintenance window. The host
environments can then be upgraded to the level needed in parallel to the microcode
update of the DS8800 taking place.

For more information about microcode upgrades, see Chapter 15, “Licensed machine code”
on page 343.



9.3.3 Time synchronization
For proper error analysis, it is important to have the date and time information synchronized
as much as possible on all components in the DS8800 environment. This includes the
DS8800 HMCs, the SSPC, and the DS Storage Manager and DS CLI workstations.

With the DS8800, the HMC has the ability to utilize the Network Time Protocol (NTP) service.
Customers can specify NTP servers on their internal network to provide the time to the HMC.
It is a client responsibility to ensure that the NTP servers are working, stable, and accurate.
An IBM service representative will enable the HMC to use NTP servers, ideally at the time of
the initial DS8800 installation.

Note: Because of the many components and operating systems within the DS8800, time
and date setting is a maintenance activity that can only be done by the IBM service
representative.

9.3.4 Monitoring with the HMC


A client can receive notifications from the HMC through SNMP traps and email messages.
Notifications contain information about your storage complex, such as open serviceable
events. You can choose one or both notification methods:
򐂰 Simple Network Management Protocol (SNMP) traps
For monitoring purposes, the DS8800 uses SNMP traps. An SNMP trap can be sent to a
server in the client’s environment, perhaps with System Management Software, which
handles the trap based on the MIB delivered with the DS8800 software. A MIB containing
all traps can be used for integration purposes into System Management Software. The
supported traps are described in more detail in the documentation that comes with the
microcode on the CDs provided by the IBM service representative. The IP address to
which the traps should be sent needs to be configured during initial installation of the
DS8800. For more information about the DS8800 and SNMP, see Chapter 16, “Monitoring
with Simple Network Management Protocol” on page 349.
򐂰 Email
When you choose to enable email notifications, email messages are sent to all the
addresses that are defined on the HMC whenever the storage complex encounters a
serviceable event or must alert you to other information.
During the planning process, create a list of who needs to be notified.

SIM notification is applicable only to System z servers; it allows you to receive a notification
on the system console in case of a serviceable event. Apart from SIM, SNMP traps and email
messages are the only notification options for the DS8800.

9.3.5 Call Home and remote support


The HMC uses both outbound (Call Home) and inbound (remote service) support.

Call Home is the capability of the HMC to contact IBM support to report a serviceable event.
Remote Services is the capability of IBM service representatives to connect to the HMC to
perform service tasks remotely. If allowed to do so by the setup of the client’s environment, an
IBM service support representative could connect to the HMC to perform detailed problem
analysis. The IBM service support representative can view error logs and problem logs, and
initiate trace or dump retrievals.



Remote support can be configured for dial-up connection through a modem or high-speed
virtual private network (VPN) Internet connection. Setup of the remote support environment is
done by the IBM service representative during initial installation. For more complete
information, see Chapter 17, “Remote support” on page 363.

9.4 HMC and IPv6


The DS8800 Hardware Management Console (HMC) can be configured for an IPv6 client
network. Note that IPv4 is still also supported.

Configuring the HMC in an IPv6 environment


Usually, the configuration will be done by the IBM service representative during the DS8800
initial installation. See 8.3.2, “System Storage Productivity Center and network access” on
page 166 for a thorough discussion about the formatting of IPv6 addresses and subnet
masks.

In the remainder of this section, we illustrate the steps required to configure the DS8800 HMC
eth2 port for IPv6:
1. Launch and log in to WebUI; refer to 9.2.4, “Web-based user interface” on page 184 for the
procedure.
2. In the HMC welcome window, select HMC Management, as shown in Figure 9-6.

Figure 9-6 WebUI welcome window



3. In the HMC Management window, select Change Network Settings, as shown in
Figure 9-7.

Figure 9-7 WebUI HMC management window

4. Select the LAN Adapters tab.


5. Only eth2 is shown; the private network ports are not editable. Click the Details... button.
6. Select the IPv6 Settings tab.
7. Click the Add... button to add a static IP address to this adapter. Figure 9-8 shows the
LAN Adapter Details window where you can configure the IPv6 values.

Figure 9-8 WebUI IPv6 settings window



9.5 HMC user management
User management can be performed using the DS CLI or the DS GUI. An administrator user
ID is preconfigured during the installation of the DS8800, using the following defaults:
User ID admin
Password admin

The password of the admin user ID must be changed before the ID can be used. The GUI
forces you to change the password when you first log in. The DS CLI allows you to log in
but does not allow you to issue any other commands until you have changed the password. As
an example, to change the admin user's password to passw0rd, use the following DS CLI
command:
chuser -pw passw0rd admin

After you have issued that command, you can then issue other commands.
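As an illustration, the following minimal session sketch shows a first login with the default
credentials, followed by the password change. The HMC address is a placeholder, and the console
output is abbreviated to the standard chuser success message:

C:\Program Files\IBM\dscli>dscli -hmc1 <HMC ip address>
Enter your username: admin
Enter your password:
dscli> chuser -pw passw0rd admin
CMUC00134I chuser: User admin successfully modified.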

Note: The DS8800 supports the capability to use a Single Point of Authentication function
for the GUI and CLI through a centralized LDAP server. This capability requires a TPC
Version 4.1 server. For detailed information about LDAP based authentication, refer to IBM
System Storage DS8000: LDAP Authentication, REDP-4505.

User roles
During the planning phase of the project, a worksheet or a script file should have been
established with a list of all people who need access to the DS GUI or DS CLI. Note that a
user can be assigned to more than one group. At least one person should be assigned to each
of the following roles (the corresponding user group ID is shown in parentheses where applicable):
򐂰 The Administrator (admin) has access to all HMC service methods and all storage image
resources, except for encryption functionality. This user authorizes the actions of the
Security Administrator during the encryption deadlock prevention and resolution process.
򐂰 The Security Administrator (secadmin) has access to all encryption functions. secadmin
requires an Administrator user to confirm the actions taken during the encryption deadlock
prevention and resolution process.
򐂰 The Physical operator (op_storage) has access to physical configuration service methods
and resources, such as managing storage complex, storage image, Rank, array, and
Extent Pool objects.
򐂰 The Logical operator (op_volume) has access to all service methods and resources that
relate to logical volumes, hosts, host ports, logical subsystems, and Volume Groups,
excluding security methods.
򐂰 The Monitor group has access to all read-only, nonsecurity HMC service methods, such
as list and show commands.
򐂰 The Service group has access to all HMC service methods and resources, such as
performing code loads and retrieving problem logs, plus the privileges of the Monitor
group, excluding security methods.
򐂰 The Copy Services operator has access to all Copy Services methods and resources, plus
the privileges of the Monitor group, excluding security methods.
򐂰 No access prevents access to any service method or storage image resources. This group
is used by an administrator to temporarily deactivate a user ID. By default, this user group
is assigned to any user account in the security repository that is not associated with any
other user group.



Password policies
Whenever a user is added, a password is entered by the administrator. During the first login,
this password must be changed. Password settings include the time period in days after
which passwords expire and a number that identifies how many failed logins are allowed. The
user ID is deactivated if an invalid password is entered more times than the limit. Only a user
with administrator rights can then reset the user ID with a new initial password.

Best practice: Do not set the values of chpass to 0, as this indicates that passwords never
expire and unlimited login attempts are allowed.

If access is denied for the administrator due to the number of invalid login attempts, a
procedure can be obtained from your IBM representative to reset the administrator’s
password. The password for each user account is forced to adhere to the following rules:
򐂰 The length of the password must be between 6 and 16 characters.
򐂰 It must begin and end with a letter.
򐂰 It must have at least five letters.
򐂰 It must contain at least one number.
򐂰 It cannot be identical to the user ID.
򐂰 It cannot be a previous password.
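As a hypothetical example, a password such as tempw3st satisfies these rules: it is eight
characters long, it begins and ends with a letter, it contains at least five letters and one
number, and (assuming it differs from the user ID and any previous password) it would be accepted.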

Note: User names and passwords are case sensitive. If you create a user name called
Anthony, you cannot log in using the user name anthony.

9.5.1 User management using the DS CLI


The exact syntax for any DS CLI command can be found in the IBM System Storage DS8000:
Command-Line Interface User´s Guide, SC26-7916. You can also use the DS CLI help
command to get further assistance.

The commands to manage user IDs using the DS CLI are:


򐂰 mkuser
This command creates a user account that can be used with both DS CLI and the DS GUI.
In Example 9-6, we create a user called RolandW, who is in the op_storage group. His
temporary password is tempw0rd. He will have to use the chpass command when he logs
in for the first time.

Example 9-6 Using the mkuser command to create a new user


dscli> mkuser -pw tempw0rd -group op_storage RolandW
Date/Time: October 12, 2009 9:47:13 AM MST IBM DSCLI Version: 5.4.30.253
CMUC00133I mkuser: User RolandW successfully created.



򐂰 rmuser
This command removes an existing user ID. In Example 9-7, we remove a user called
JaneSmith.

Example 9-7 Removing a user


dscli> rmuser JaneSmith
Date/Time: October 12, 2009 9:47:13 AM MST IBM DSCLI Version: 5.4.30.253
CMUC00135W rmuser: Are you sure you want to delete user JaneSmith? [y/n]:y
CMUC00136I rmuser: User JaneSmith successfully deleted.

򐂰 chuser
This command changes the password or group (or both) of an existing user ID. It is also
used to unlock a user ID that has been locked by exceeding the allowable login retry
count. The administrator could also use this command to lock a user ID. In Example 9-8,
we unlock the user, change the password, and change the group membership for a user
called JensW. He must use the chpass command when he logs in the next time.

Example 9-8 Changing a user with chuser


dscli> chuser -unlock -pw passw0rd -group monitor JensW
Date/Time: October 12, 2009 9:47:13 AM MST IBM DSCLI Version: 5.4.30.253
CMUC00134I chuser: User JensW successfully modified.

򐂰 lsuser
With this command, a list of all user IDs can be generated. In Example 9-9, we can see
three users and the admin account.

Example 9-9 Using the lsuser command to list users


dscli> lsuser
Date/Time: October 12, 2009 9:47:13 AM MST IBM DSCLI Version: 5.4.30.253
Name Group State
==========================
StevenJ op_storage active
admin admin active
JensW op_volume active
RolandW monitor active

򐂰 showuser
The account details of a user ID can be displayed with this command. In Example 9-10,
we list the details of the user Robert.

Example 9-10 Using the showuser command to list user information


dscli> showuser Robert
Date/Time: October 12, 2009 9:47:13 AM MST IBM DSCLI Version: 5.4.30.253
Name Robert
Group op_volume
State active
FailedLogin 0



򐂰 managepwfile
This command creates or adds to an encrypted password file that will be placed onto the
local machine. This file can be referred to in a DS CLI profile. This allows you to run scripts
without specifying a DS CLI user password in clear text. If manually starting DS CLI, you
can also refer to a password file with the -pwfile parameter. By default, the file is located in
the following locations:
Windows C:\Documents and Settings\<User>\DSCLI\security.dat
Non-Windows $HOME/dscli/security.dat
In Example 9-11, we manage our password file by adding the user ID SJoseph. The
password is now saved in an encrypted file called security.dat.

Example 9-11 Using the managepwfile command


dscli> managepwfile -action add -name SJoseph -pw passw0rd
Date/Time: October 12, 2009 9:47:13 AM MST IBM DSCLI Version: 5.4.30.253
CMUC00206I managepwfile: Record 10.0.0.1/SJoseph successfully added to password
file C:\Documents and Settings\StevenJ\DSCLI\security.dat.

򐂰 chpass
This command lets you change two password policies: password expiration (days) and
failed logins allowed. In Example 9-12, we change the expiration to 365 days and five
failed login attempts.

Example 9-12 Changing rules using the chpass command


dscli> chpass -expire 365 -fail 5
Date/Time: October 12, 2009 9:47:13 AM MST IBM DSCLI Version: 5.4.30.253
CMUC00195I chpass: Security properties successfully set.

򐂰 showpass
This command lists the properties for passwords (Password Expiration days and Failed
Logins Allowed). In Example 9-13, we can see that passwords have been set to expire in
90 days and that four login attempts are allowed before a user ID is locked.

Example 9-13 Using the showpass command


dscli> showpass
Date/Time: October 12, 2009 9:47:13 AM MST IBM DSCLI Version: 5.4.30.253
Password Expiration 90 days
Failed Logins Allowed 4



9.5.2 User management using the DS GUI
To work with user administration, sign on to the DS GUI. See 12.2.1, “Configuring SSPC for
DS8800 remote GUI access” on page 233 for procedures about using the SSPC to launch the
DS Storage Manager GUI. From the categories in the left sidebar, select User
Administration under the section Monitor System, as shown in Figure 9-9.

Figure 9-9 DS Storage Manager GUI main window



You are presented with a list of the storage complexes and their active security policies.
Select the complex that you want to modify by checking the check box under the Select
column. You can choose to either create a new security policy or manage one of the existing
policies. Do this by selecting Create Storage Authentication Service Policy or Manage
Authentication Policy from the Select action drop-down menu, as shown in Figure 9-10.

Figure 9-10 Selecting a storage complex



The next window displays all of the security policies on the HMC for the storage complex you
chose. Note that you can create many policies, but only one at a time can be active. Select a
policy by checking the check box under the Select column. Then select Properties from the
Select action drop-down menu, as shown in Figure 9-11.

Figure 9-11 Selecting a security policy

The next window shows you the users defined on the HMC. You can choose to add a new
user (select Select action → Add user) or modify the properties of an existing user. The
administrator can perform several tasks from this window:
򐂰 Add User (The DS CLI equivalent is mkuser.)
򐂰 Modify User (The DS CLI equivalent is chuser.)
򐂰 Lock or Unlock User: Choice will toggle (The DS CLI equivalent is chuser.)
򐂰 Delete User (The DS CLI equivalent is rmuser.)
򐂰 Password Settings (The DS CLI equivalent is chpass.)



The Password Settings window is where you can modify the number of days before a
password expires, as well as the number of login retries that a user gets before the account
becomes locked, as shown in Figure 9-12.

Figure 9-12 Password Settings window

Note: If a user who is not in the Administrator group logs on to the DS GUI and goes to the
User Administration window, the user will only be able to see their own user ID in the list.
The only action they will be able to perform is to change their password.



Selecting Add user displays a window in which a user can be added by entering the user ID,
the temporary password, and the role. See Figure 9-13 for an example. The role will decide
what type of activities can be performed by this user. In this window, the user ID can also be
temporarily deactivated by selecting only the No access option.

Figure 9-13 Adding a new user to the HMC

Take special note of the new role of the Security Administrator (secadmin). This role was
created to separate the duties of managing the storage from managing the encryption for
DS8800 units that are shipped with Full Disk Encryption storage drives.

If you are logged in to the GUI as a Storage Administrator, you cannot create, modify, or
delete users of the Security Administrator role. Notice how the Security Administrator option
is disabled in the Add/Modify User window in Figure 9-13. Similarly, Security Administrators
cannot create, modify, or delete Storage Administrators. This is a new feature of the
microcode for the DS8800.

9.6 External HMC


An external, redundant HMC can be ordered for the DS8800. The external HMC is an optional
purchase, but one that is highly useful. The two HMCs run in a dual-active configuration, so
either HMC can be used at any time. For this book, the distinction between the internal and
external HMC is only for the purposes of clarity and explanation, because they are identical in
functionality.

The DS8800 is capable of performing all storage duties while the HMC is down, but
configuration, error reporting, and maintenance capabilities become severely restricted. Any
organization with extremely high availability requirements should consider deploying an
external HMC.

Note: To help preserve Data Storage functionality, the internal and external HMCs are not
available to be used as general purpose computing resources.



9.6.1 External HMC benefits
Having an external HMC provides a number of advantages. Among these are:
򐂰 Enhanced maintenance capability
Because the HMC is the only interface available for service personnel, an external HMC
will provide maintenance operational capabilities if the internal HMC fails.

򐂰 Greater availability for power management


Using the HMC is the only way to safely power on or power off the DS8800. An external
HMC is necessary to shut down the DS8800 in the event of a failure with the internal HMC.
򐂰 Greater availability for remote support over modem
A second HMC with a phone line on the modem provides IBM with a way to perform
remote support if an error occurs that prevents access to the first HMC. If network offload
(FTP) is not allowed, one HMC can be used to offload data over the modem line while the
other HMC is used for troubleshooting. See Chapter 17, “Remote support” on page 363
for more information regarding HMC modems.
򐂰 Greater availability of encryption deadlock recovery
If the DS8800 is configured for full disk encryption and an encryption deadlock scenario
occurs, then using the HMC is the only way to input a Recovery Key to allow the DS8800
to become operational. See 4.8.1, “Deadlock recovery” on page 82 for more information
regarding encryption deadlock.
򐂰 Greater availability for Advanced Copy Services
Since all Copy Services functions are driven by the HMC, any environment using
Advanced Copy Services should have dual HMCs for operations continuance.
򐂰 Greater availability for configuration operations
All configuration commands must go through the HMC. This is true regardless of whether
access is through the SSPC, DS CLI, the DS Storage Manager, or DS Open API with
another management program. An external HMC will allow these operations to continue in
the event of a failure with the internal HMC.

When a configuration or Copy Services command is issued, the DS CLI or DS Storage


Manager will send the command to the first HMC. If the first HMC is not available, it will
automatically send the command to the second HMC instead. Typically, you do not have to
reissue the command.

Any changes made using one HMC are instantly reflected in the other HMC. There is no
caching of host data done within the HMC, so there are no cache coherency issues.

9.6.2 Configuring DS CLI to use a second HMC


The second HMC can either be specified on the command line or in the profile file used by the
DS CLI. To specify the second HMC in a command, use the -hmc2 parameter, as shown in
Example 9-14.

Example 9-14 Using the -hmc2 parameter


C:\Program Files\IBM\dscli>dscli -hmc1 hmcalpha.ibm.com -hmc2 hmcbravo.ibm.com
Enter your username: stevenj
Enter your password:
Date/Time: October 12, 2009 9:47:13 AM MST IBM DSCLI Version: 5.4.30.253 DS:
IBM.2107-7503461



Alternatively, you can modify the following lines in the dscli.profile (or any profile) file:
# Management Console/Node IP Address(es)
# hmc1 and hmc2 are equivalent to -hmc1 and -hmc2 command options.
hmc1:hmcalpha.ibm.com
hmc2:hmcbravo.ibm.com

After you make these changes and save the profile, the DS CLI will be able to automatically
communicate through HMC2 in the event that HMC1 becomes unreachable. This change will
allow you to perform both configuration and Copy Services commands with full redundancy.
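For scripted use, this redundancy can be combined with an encrypted password file created with
the managepwfile command (see 9.5.1, “User management using the DS CLI”). The following
single-shot invocation is only a sketch: the host names and user ID are placeholders taken from
the example above, and the password file is assumed to be in its default location. If the first
HMC cannot be reached when the command is issued, the DS CLI automatically routes the request
to the second HMC.

C:\Program Files\IBM\dscli>dscli -hmc1 hmcalpha.ibm.com -hmc2 hmcbravo.ibm.com -user stevenj -pwfile "C:\Documents and Settings\stevenj\DSCLI\security.dat" lsuser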




Chapter 10. IBM System Storage DS8800


features and license keys
This chapter discusses the activation of licensed functions and the following topics:
򐂰 IBM System Storage DS8800 licensed functions
򐂰 Activation of licensed functions
򐂰 Licensed scope considerations



10.1 IBM System Storage DS8800 licensed functions
Many of the functions of the DS8800 that we have discussed so far are optional licensed
functions that must be enabled before they can be used. The licensed functions are enabled through a
242x licensed function indicator feature, plus a 239x licensed function authorization feature
number, in the following way:
򐂰 The licensed functions for DS8800 are enabled through a pair of 242x-951 licensed
function indicator feature numbers (FC07xx and FC7xxx), plus a Licensed Function
Authorization (239x-LFA), feature number (FC7xxx). These functions and numbers are
listed in Table 10-1.

Table 10-1 DS8800 model 951 licensed functions


Licensed function for DS8800 model 951 with Enterprise Choice warranty    IBM 242x indicator feature numbers    IBM 239x function authorization model and feature numbers

Operating Environment License 0700 and 70xx 239x Model LFA, 703x/706x

FICON Attachment 0703 and 7091 239x Model LFA, 7091

Database Protection 0708 and 7080 239x Model LFA, 7080

High Performance FICON 0709 and 7092 239x Model LFA, 7092

FlashCopy 0720 and 72xx 239x Model LFA, 725x-726x

Space Efficient FlashCopy 0730 and 73xx 239x Model LFA, 735x-736x

Metro/Global Mirror 0742 and 74xx 239x Model LFA, 748x-749x

Metro Mirror 0744 and 75xx 239x Model LFA, 750x-751x

Global Mirror 0746 and 75xx 239x Model LFA, 752x-753x

z/OS Global Mirror 0760 and 76xx 239x Model LFA, 765x-766x

z/OS Metro/Global Mirror 0763 and 76xx 239x Model LFA, 768x-769x
Incremental Resync

Parallel Access Volumes 0780 and 78xx 239x Model LFA, 782x-783x

HyperPAV 0782 and 7899 239x Model LFA, 7899

򐂰 The DS8800 provides Enterprise Choice warranty options associated with a specific
machine type. The x in 242x designates the machine type according to its warranty period,
where x can be either 1, 2, 3, or 4. For example, a 2424-951 machine type designates a
DS8800 storage system with a four-year warranty period.
򐂰 The x in 239x can either be 6, 7, 8, or 9, according to the associated 242x base unit
model. 2396 function authorizations apply to 2421 base units, 2397 to 2422, and so on.
For example, a 2399-LFA machine type designates a DS8800 Licensed Function
Authorization for a 2424 machine with a four-year warranty period.
򐂰 The 242x licensed function indicator feature numbers enable the technical activation of the
function, subject to the client applying a feature activation code made available by IBM.
The 239x licensed function authorization feature numbers establish the extent of
authorization for that function on the 242x machine for which it was acquired.

With the DS8800 storage system, IBM offers value-based licensing for the Operating
Environment License. It is priced based on the disk drive performance, capacity, speed, and



other characteristics that provide more flexible and optimal price/performance configurations.
As shown in Table 10-2, each feature indicates a certain number of value units.

Table 10-2 Operating Environment License (OEL): value unit indicators


Feature number Description

7050 OEL - inactive indicator

7051 OEL - 1 value unit indicator

7052 OEL - 5 value unit indicator

7053 OEL - 10 value unit indicator

7054 OEL - 25 value unit indicator

7055 OEL - 50 value unit indicator

7060 OEL - 100 value unit indicator

7065 OEL - 200 value unit indicator

These features are required in addition to the per TB OEL features (#703x-704x). For each
disk drive set, the corresponding number of value units must be configured, as shown in
Table 10-3.

Table 10-3 Value unit requirements based on drive size, type, and speed
Drive set feature number    Drive size    Drive type    Drive speed    Encryption drive    Value units required

2208 146 GB SAS 15K RPM No 4.8

2608 450 GB SAS 10K RPM No 8.6

2708 600 GB SAS 10K RPM No 10.9

5608 450 GB SAS 10K RPM Yes 8.6

5708 600 GB SAS 10K RPM Yes 10.9

6008 300 GB SSD N/A No 6.8
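As a hypothetical illustration of how these value units add up (the drive configuration below is
an assumption chosen only for the arithmetic), consider a DS8800 ordered with three drive sets of
600 GB 10K RPM SAS drives (feature 2708) and one drive set of 300 GB SSDs (feature 6008):

3 x 10.9 + 1 x 6.8 = 39.5 value units

At least 40 value units of OEL would therefore need to be active, which could be covered, for
example, by one feature 7054 (25 value units), one feature 7053 (10), and one feature 7052 (5).
The exact feature combination for an order is determined together with your IBM representative.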

The HyperPAV license is a flat-fee, add-on license that requires the Parallel Access Volumes
(PAV) license to be installed.

The license for Space-Efficient FlashCopy does not require the ordinary FlashCopy (PTC)
license. As with ordinary FlashCopy, FlashCopy SE is licensed in tiers by the gross amount
of TB installed. FlashCopy (PTC) and FlashCopy SE can be complementary licenses:
FlashCopy SE is used to create FlashCopies with Track Space-Efficient (TSE) target volumes,
and when FlashCopies to standard target volumes are also needed, the PTC license is required in addition.

Metro Mirror (MM license) and Global Mirror (GM) can be complementary features as well.

Note: For a detailed explanation of the features involved and the considerations you must
have when ordering DS8800 licensed functions, refer to these announcement letters:
򐂰 IBM System Storage DS8800 series (IBM 242x)
򐂰 IBM System Storage DS8800 series (M/T 239x) high performance flagship - Function
Authorizations.

IBM announcement letters can be found at the following address:


http://www.ibm.com/common/ssi/index.wss

Use the DS8800 keyword as a search criteria in the Contents field.

10.2 Activation of licensed functions


Activating the license keys of the DS8800 can be done after the IBM service representative
has completed the storage complex installation. Based on your 239x licensed function order,
you need to obtain the necessary keys from the IBM Disk Storage Feature Activation (DSFA)
website at the following address:
http://www.ibm.com/storage/dsfa

Important: There is a special procedure to obtain the license key for the Full Disk
Encryption feature. It cannot be obtained from the DSFA website. Refer to IBM System
Storage DS8700 Disk Encryption, REDP-4500, for more information.

You can activate all license keys at the same time (for example, on initial activation of the
storage unit) or they can be activated individually (for example, additional ordered keys).

Before connecting to the IBM DSFA website to obtain your feature activation codes, ensure
that you have the following items:
򐂰 The IBM License Function Authorization documents. If you are activating codes for a new
storage unit, these documents are included in the shipment of the storage unit. If you are
activating codes for an existing storage unit, IBM will send the documents to you in an
envelope.
򐂰 A USB memory device can be used for downloading your activation codes if you cannot
access the DS Storage Manager from the system that you are using to access the DSFA
website. Instead of downloading the activation codes in softcopy format, you can also print
the activation codes and manually enter them using the DS Storage Manager GUI.
However, this is slow and error prone, because the activation keys are 32-character long
strings.



10.2.1 Obtaining DS8800 machine information
To obtain license activation keys from the DFSA website, you need to know the serial number
and machine signature of your DS8800 unit.

To obtain the required information, perform the following steps:


1. Start the DS Storage Manager application. Log in using a user ID with administrator
access. If this is the first time you are accessing the machine, contact your IBM service
representative for the user ID and password. After a successful login, the DS8800 Storage
Manager Welcome window opens. In the My Work navigation window on the left side,
select Manage Hardware (Figure 10-1).

Figure 10-1 DS8800 Storage Manager GUI: Welcome window

2. Select Storage Complexes to open the Storage Complexes Summary window, as shown
in Figure 10-2. From here, you can obtain the serial number of your DS8800 storage
image.

Figure 10-2 DS8800 Storage Manager: Storage Complexes Summary

3. In the Storage Complexes Summary window, select the storage image by checking the
box to the left of it, and select Properties from the drop-down Select action menu in the
Storage Unit section (Figure 10-3).

Figure 10-3 Select Storage Unit Properties

4. The Storage Unit Properties window opens. Click the Advanced tab to display more
detailed information about the DS8800 storage image, as shown in Figure 10-4.

Figure 10-4 Storage Unit Properties window

Gather the following information about your storage unit:


– The MTMS (Machine Type - Model Number - Serial Number) is a string that contains
the machine type, model number, and serial number. The machine type is 242x and the
machine model is 951. The last seven characters of the string are the machine's serial
number (XYABCDE).
– From the Machine signature field, note the machine signature:
(ABCD-EFGH-IJKL-MNOP).



Use Table 10-4 to document this information, which will be entered in the IBM DSFA website
to retrieve the activation codes.

Table 10-4 DS8800 machine information


Property Your storage unit’s information

Machine type and model

Machine’s serial number

Machine signature

10.2.2 Obtaining activation codes


Perform the following steps to obtain the activation codes:
1. Connect to the IBM Disk Storage Feature Activation (DSFA) website at the following
address:
http://www.ibm.com/storage/dsfa
Figure 10-5 shows the DSFA website.

Figure 10-5 IBM DSFA website

2. Click IBM System Storage DS8000 series. This brings you to the Select DS8000 series
machine window (Figure 10-6). Select the appropriate 242x Machine Type.

Figure 10-6 DS8800 DSFA machine information entry window

3. Enter the machine information collected in Table 10-4 on page 209 and click Submit. The
View machine summary window opens (Figure 10-7).

Figure 10-7 DSFA View machine summary window



The View machine summary window shows the total purchased licenses and how many of
them are currently assigned. The example in Figure 10-7 shows a storage unit where all
licenses have already been assigned. When assigning licenses for the first time, the
Assigned field shows 0.0 TB.
4. Click Manage activations. The Manage activations window opens. Figure 10-8 shows the
Manage activations window for your storage images. For each license type and storage
image, enter the license scope (fixed block data (FB), count key data (CKD), or All) and a
capacity value (in TB) to assign to the storage image. The capacity values are expressed
in decimal terabytes with 0.1 TB increments. The sum of the storage image capacity
values for a license cannot exceed the total license value.

Figure 10-8 DSFA Manage activations window

5. When you have entered the values, click Submit. The View activation codes window
opens, showing the license activation codes for the storage images (Figure 10-9). Print the
activation codes or click Download to save the activation codes in a file that you can later
import into the DS8800. The downloaded file contains activation codes for two storage images
(a dual storage image configuration is currently not offered for the DS8800).

Figure 10-9 DSFA View activation codes window

Note: In most situations, the DSFA application can locate your 239x licensed function
authorization record when you enter the DS8800 (242x) serial number and signature.
However, if the 239x licensed function authorization record is not attached to the 242x
record, you must assign it to the 242x record using the Assign function authorization link
on the DSFA application. In this case, you need the 239x serial number (which you can find
on the License Function Authorization document).

10.2.3 Applying activation codes using the GUI


Use this process to apply the activation codes on your DS8800 storage images using the DS
Storage Manager GUI. After they are applied, the codes enable you to begin configuring
storage on a storage image.



Important: The initial enablement of any optional DS8800 licensed function is a
concurrent activity (assuming the appropriate level of microcode is installed on the
machine for the given function).

The following activation activities are disruptive and require a machine IML or reboot of the
affected image:
򐂰 Removal of a DS8800 licensed function to deactivate the function.
򐂰 A lateral change or reduction in the license scope. A lateral change is defined as
changing the license scope from fixed block (FB) to count key data (CKD) or from CKD
to FB. A reduction is defined as changing the license scope from all physical capacity
(ALL) to only FB or only CKD capacity.

Attention: Before you begin this task, you must resolve any current DS8800 problems.
Contact IBM support for assistance in resolving these problems.

The easiest way to apply the feature activation codes is to download the activation codes from
the IBM Disk Storage Feature Activation (DSFA) website to your local computer and import
the file into the DS Storage Manager. If you can access the DS Storage Manager from the
same computer that you use to access the DSFA website, you can copy the activation codes
from the DSFA window and paste them into the DS Storage Manager window. The third
option is to manually enter the activation codes in the DS Storage Manager from a printed
copy of the codes.

Perform the following steps to apply the activation codes:


1. In the My Work navigation pane on the DS Storage Manager Welcome window, select
Manage hardware → Storage Complexes, and from the drop-down Select action menu,
click Apply Activation Codes in the Storage Image section, as shown in Figure 10-10.

Figure 10-10 DS8800 Storage Manager GUI: select Apply Activation Codes

2. The Apply Activation Codes window opens (Figure 10-11). If this is the first time that you
are applying the activation codes, the fields in the window are empty. In our example, there
is only a 19 TB Operating Environment License (OEL) for FB volumes. You have an option
to manually add an activation key by selecting Add Activation Key from the drop-down
Select action menu. The other option is to select Import Key File, which you used when
you downloaded a file with the activation key from the IBM DSFA site, as explained in
10.2.2, “Obtaining activation codes” on page 209.

Figure 10-11 Apply Activation Codes window

3. The easiest way is to import the activation key from the file, as shown in Figure 10-12.

Figure 10-12 Apply Activation Codes by importing the key from the file



4. After the file has been selected, click Next to continue. The Confirmation window displays
the key name. Click Finish to complete the new key activation procedure (Figure 10-13).

Figure 10-13 Apply Activation Codes: Confirmation window

5. Your license is now listed in the table. In our example, there is one OEL license active, as
shown in Figure 10-14.

Figure 10-14 Apply Activation Codes window

6. Click OK to exit Apply Activation Codes wizard.

7. To view all the activation codes that have been applied, from the My Work navigation pane on
the DS Storage Manager Welcome window, select Manage hardware → Storage
Complexes, and from the drop-down Select action menu, click Apply Activation Codes.
The activation codes are displayed, as shown in Figure 10-15.

Figure 10-15 Activation codes applied

10.2.4 Applying activation codes using the DS CLI


The license keys can also be activated using the DS CLI. This is available only if the machine
Operating Environment License (OEL) has previously been activated and you have a console
with a compatible DS CLI program installed.

Perform the following steps:


1. Use the showsi command to display the DS8800 machine signature, as shown in
Example 10-1.

Example 10-1 DS CLI showsi command


dscli> showsi ibm.2107-75tv181
Date/Time: 10 September 2010 13:22:53 CEST IBM DSCLI Version: 6.6.0.288 DS:
ibm.2107-75tv181
Name ATS_04
desc DS8000-R6
ID IBM.2107-75TV181
Storage Unit IBM.2107-75TV180
Model 951
WWNN 500507630AFFC29F
Signature 633b-1234-5678-5baa
State Online
ESSNet Enabled
Volume Group V0
os400Serial 29F
NVS Memory 8.0 GB
Cache Memory 238.3 GB



Processor Memory 253.4 GB
MTS IBM.2421-75TV180
numegsupported 0

2. Obtain your license activation codes from the IBM DSFA website, as discussed in 10.2.2,
“Obtaining activation codes” on page 209.
3. Use the applykey command to activate the codes and the lskey command to verify which
type of licensed features are activated for your storage unit.
a. Enter an applykey command at the dscli command prompt as follows. The -file
parameter specifies the key file. The second parameter specifies the storage image.
dscli> applykey -file c:\2107_7520780.xml IBM.2107-7520781
b. Verify that the keys have been activated for your storage unit by issuing the DS CLI
lskey command, as shown in Example 10-2.

Example 10-2 Using lskey to list installed licenses


dscli> lskey ibm.2107-7520781
Date/Time: 05 October 2010 14:19:17 CET IBM DSCLI Version: 6.6.0.220 DS:
ibm.2107-7520781
Activation Key Authorization Level (TB) Scope
=========================================================================
Global mirror (GM) 70 FB
High Performance FICON for System z (zHPF) on CKD
IBM FlashCopy SE 100 All
IBM HyperPAV on CKD
IBM database protection on FB
Metro mirror (MM) 70 FB
Metro/Global mirror (MGM) 70 FB
Operating environment (OEL) 100 All
Parallel access volumes (PAV) 30 CKD
Point in time copy (PTC) 100 All
RMZ Resync 30 CKD
Remote mirror for z/OS (RMZ) 30 CKD

For more details about the DS CLI, refer to IBM System Storage DS: Command-Line
Interface User’s Guide, GC53-1127.

10.3 Licensed scope considerations


For the Point-in-Time Copy (PTC) function and the Remote Mirror and Copy functions, you
have the ability to set the scope of these functions to be FB, CKD, or All. You need to decide
what scope to set, as shown in Figure 10-8 on page 211. In that example, Image One has
16 TB of RMC, and the user has currently decided to set the scope to All. If the scope was set
to FB instead, then you cannot use RMC with any CKD volumes that are later configured.
However, it is possible to return to the DSFA website at a later time and change the scope
from CKD or FB to All, or from All to either CKD or FB. In every case, a new activation code is
generated, which you can download and apply.

10.3.1 Why you get a choice
Let us imagine a simple scenario where a machine has 20 TB of capacity. Of this capacity,
15 TB is configured as FB and 5 TB is configured as CKD. If we only want to use
Point-in-Time Copy for the CKD volumes, then we can purchase just 5 TB of Point-in-Time
Copy and set the scope of the Point-in-Time Copy activation code to CKD. There is no need
to buy a new PTC license in case you do not need Point-in-Time Copy for CKD anymore, but
you would like to use it for FB only. Simply obtain a new activation code from DSFA website by
changing the scope to FB.

When deciding which scope to set, there are several scenarios to consider. Use Table 10-5 to
guide you in your choice. This table applies to both Point-in-Time Copy and Remote Mirror
and Copy functions.

Table 10-5 Deciding which scope to use


Scenario   Point-in-Time Copy or Remote Mirror and Copy function usage consideration   Suggested scope setting

1   This function is only used by open systems hosts.   Select FB.

2   This function is only used by System z hosts.   Select CKD.

3   This function is used by both open systems and System z hosts.   Select All.

4   This function is currently only needed by open systems hosts, but we might use it for System z at some point in the future.   Select FB and change to scope All if and when the System z requirement occurs.

5   This function is currently only needed by System z hosts, but we might use it for open systems hosts at some point in the future.   Select CKD and change to scope All if and when the open systems requirement occurs.

6   This function has already been set to All.   Leave the scope set to All. Changing the scope to CKD or FB at this point requires a disruptive outage.

Any scenario that changes from FB or CKD to All does not require an outage. If you choose to
change from All to either CKD or FB, then you must have a disruptive outage. If you are
absolutely certain that your machine will only ever be used for one storage type (for example,
only CKD or only FB), then you can also quite safely just use the All scope.

10.3.2 Using a feature for which you are not licensed


In Example 10-3, we have a machine where the scope of the Point-in-Time Copy license is
set to FB. This means we cannot use Point-in-Time Copy to create CKD FlashCopies. When
we try, the command fails. We can, however, create CKD volumes, because the Operating
Environment License (OEL) key scope is All.

Example 10-3 Trying to use a feature for which you are not licensed
dscli> lskey IBM.2107-7520391
Date/Time: 05 October 2010 14:19:17 CET IBM DSCLI Version: 6.6.0.220 DS: IBM.2107-7520391
Activation Key Authorization Level (TB) Scope
============================================================
Metro mirror (MM) 5 All
Operating environment (OEL) 5 All
Point in time copy (PTC) 5 FB



The FlashCopy scope is currently set to FB.

dscli> lsckdvol
Date/Time: 05 October 2010 14:19:17 CET IBM DSCLI Version: 6.6.0.220 DS: IBM.2107-7520391
Name ID accstate datastate configstate deviceMTM voltype orgbvols extpool cap (cyl)
=========================================================================================
- 0000 Online Normal Normal 3390-3 CKD Base - P2 3339
- 0001 Online Normal Normal 3390-3 CKD Base - P2 3339

dscli> mkflash 0000:0001 We are not able to create CKD FlashCopies


Date/Time: 05 October 2009 14:19:17 CET IBM DSCLI Version: 6.5.0.220 DS: IBM.2107-7520391
CMUN03035E mkflash: 0000:0001: Copy Services operation failure: feature not installed

10.3.3 Changing the scope to All


As a follow-on to the previous example, in Example 10-4 we have logged onto DSFA and
changed the scope for the PTC license to All. We then apply this new activation code. We are
now able to perform a CKD FlashCopy.

Example 10-4 Changing the scope from FB to All


dscli> lskey IBM.2107-7520391
Date/Time: 05 October 2010 14:19:17 CET IBM DSCLI Version: 6.6.0.220 DS: IBM.2107-7520391
Activation Key Authorization Level (TB) Scope
============================================================
Metro mirror (MM) 5 All
Operating environment (OEL) 5 All
Point in time copy (PTC) 5 FB
The FlashCopy scope is currently set to FB

dscli> applykey -key 1234-5678-9FEF-C232-51A7-429C-1234-5678 IBM.2107-7520391


Date/Time: 05 October 2010 14:19:17 CET IBM DSCLI Version: 6.6.0.220 DS: IBM.2107-7520391
CMUC00199I applykey: Licensed Machine Code successfully applied to storage image
IBM.2107-7520391.

dscli> lskey IBM.2107-7520391


Date/Time: 05 October 2010 14:19:17 CET IBM DSCLI Version: 6.6.0.220 DS: IBM.2107-7520391
Activation Key Authorization Level (TB) Scope
============================================================
Metro mirror (MM) 5 All
Operating environment (OEL) 5 All
Point in time copy (PTC) 5 All
The FlashCopy scope is now set to All

dscli> lsckdvol
Date/Time: 04 October 2010 15:51:53 CET IBM DSCLI Version: 6.6.0.220 DS: IBM.2107-7520391
Name ID accstate datastate configstate deviceMTM voltype orgbvols extpool cap (cyl)
=========================================================================================
- 0000 Online Normal Normal 3390-3 CKD Base - P2 3339
- 0001 Online Normal Normal 3390-3 CKD Base - P2 3339

dscli> mkflash 0000:0001 We are now able to create CKD FlashCopies


Date/Time: 05 October 2010 14:19:17 CET IBM DSCLI Version: 6.6.0.220 DS: IBM.2107-7520391
CMUC00137I mkflash: FlashCopy pair 0000:0001 successfully created.

10.3.4 Changing the scope from All to FB
In Example 10-5, we decide to increase storage capacity for the entire machine. However, we
do not want to purchase any more PTC licenses, because PTC is only used by open systems
hosts and this new capacity is only to be used for CKD storage. We therefore decide to
change the scope to FB, so we log on to the DSFA website and create a new activation code.
We then apply it, but discover that because this is effectively a downward change (decreasing
the scope), it does not apply until we have a disruptive outage on the DS8800.

Example 10-5 Changing the scope from All to FB


dscli> lskey IBM.2107-7520391
Date/Time: 05 October 2010 14:19:17 CET IBM DSCLI Version: 6.6.0.220 DS: IBM.2107-7520391
Activation Key Authorization Level (TB) Scope
============================================================
Metro mirror (MM) 5 All
Operating environment (OEL) 5 All
Point in time copy (PTC) 5 All
The FlashCopy scope is currently set to All

dscli> applykey -key ABCD-EFAB-EF9E-6B30-51A7-429C-1234-5678 IBM.2107-7520391


Date/Time: 05 October 2010 14:19:17 CET IBM DSCLI Version: 6.6.0.220 DS: IBM.2107-7520391
CMUC00199I applykey: Licensed Machine Code successfully applied to storage image
IBM.2107-7520391.

dscli> lskey IBM.2107-7520391


Date/Time: 05 October 2010 14:19:17 CET IBM DSCLI Version: 6.6.0.220 DS: IBM.2107-7520391
Activation Key Authorization Level (TB) Scope
============================================================
Metro mirror (MM) 5 All
Operating environment (OEL) 5 All
Point in time copy (PTC) 5 FB
The FlashCopy scope is now set to FB

dscli> lsckdvol
Date/Time: 05 October 2010 14:19:17 CET IBM DSCLI Version: 6.6.0.220 DS: IBM.2107-7520391
Name ID accstate datastate configstate deviceMTM voltype orgbvols extpool cap (cyl)
=========================================================================================
- 0000 Online Normal Normal 3390-3 CKD Base - P2 3339
- 0001 Online Normal Normal 3390-3 CKD Base - P2 3339

dscli> mkflash 0000:0001 But we are still able to create CKD FlashCopies
Date/Time: 05 October 2010 14:19:17 CET IBM DSCLI Version: 6.6.0.220 DS: IBM.2107-7520391
CMUC00137I mkflash: FlashCopy pair 0000:0001 successfully created.

In this scenario, we have made a downward license feature key change. We must schedule
an outage of the storage image. We should in fact only make the downward license key
change immediately before taking this outage.

Consideration: Making a downward license change and then not immediately performing
a reboot of the storage image is not supported. Do not allow your machine to be in a
position where the applied key is different than the reported key.



10.3.5 Applying an insufficient license feature key
In this example, we have a scenario where a DS8800 has a 5 TB Operating Environment
License (OEL), FlashCopy (PTC), and Metro Mirror (MM) license. We increased storage
capacity and therefore increased the license key for OEL and MM. However, we forgot to
increase the license key for FlashCopy (PTC). In Example 10-6, we can see the FlashCopy
license is only 5 TB. However, we are still able to create FlashCopies.

Example 10-6 Insufficient FlashCopy license


dscli> lskey IBM.2107-7520391
Date/Time: 05 October 2010 14:19:17 CET IBM DSCLI Version: 6.6.0.220 DS:
IBM.2107-7520391
Activation Key Authorization Level (TB) Scope
============================================================
Metro mirror (MM) 10 All
Operating environment (OEL) 10 All
Point in time copy (PTC) 5 All

dscli> mkflash 1800:1801


Date/Time: 04 October 2010 17:46:14 CET IBM DSCLI Version: 6.6.0.220 DS:
IBM.2107-7520391
CMUC00137I mkflash: FlashCopy pair 1800:1801 successfully created.

At this point, this is still a valid configuration, because the configured ranks on the machine
total less than 5 TB of storage. In Example 10-7, we then try to create a new rank that brings
the total rank capacity above 5 TB. This command fails.

Example 10-7 Creating a rank when we are exceeding a license key


dscli> mkrank -array A1 -stgtype CKD
Date/Time: 05 October 2010 14:19:17 CET IBM DSCLI Version: 6.6.0.220 DS:
IBM.2107-7520391
CMUN02403E mkrank: Unable to create rank: licensed storage amount has been
exceeded

To configure the additional ranks, we must first increase the license key capacity of every
installed license. In this example, that is the FlashCopy license.

10.3.6 Calculating how much capacity is used for CKD or FB


To calculate how much disk space is currently used for CKD or FB storage, we need to
combine the output of two commands. There are some simple rules:
򐂰 License key values are decimal numbers. So, 5 TB of license is 5000 GB.
򐂰 License calculations use the disk size number shown by the lsarray command.
򐂰 License calculations include the capacity of all DDMs in each array site.
򐂰 Each array site is eight DDMs.

To make the calculation, we use the lsrank command to determine how many arrays the rank
contains, and whether those ranks are used for FB or CKD storage. We use the lsarray
command to obtain the disk size used by each array. Then, we multiply the disk size (146,
300, 450, or 600) by eight (for eight DDMs in each array site).

In Example 10-8 on page 222, lsrank tells us that rank R0 uses array A0 for CKD storage.
Then, lsarray tells us that array A0 uses 300 GB DDMs. So we multiply 300 (the DDM size)
by 8, giving us 300 x 8 = 2400 GB. This means we are using 2400 GB for CKD storage.

Now, rank R4 in Example 10-8 is based on array A6. Array A6 uses 146 GB DDMs, so we
multiply 146 by 8, giving us 146 x 8 = 1168 GB. This means we are using 1168 GB for FB
storage.

Example 10-8 Displaying array site and rank usage


dscli> lsrank
Date/Time: 05 October 2010 14:19:17 CET IBM DSCLI Version: 6.6.0.220 DS:
IBM.2107-75ABTV1
ID Group State datastate Array RAIDtype extpoolID stgtype
==========================================================
R0 0 Normal Normal A0 5 P0 ckd
R4 0 Normal Normal A6 5 P4 fb

dscli> lsarray
Date/Time: 05 October 2010 14:19:17 CET IBM DSCLI Version: 6.6.0.220 DS:
IBM.2107-75ABTV1
Array State Data RAIDtype arsite Rank DA Pair DDMcap (10^9B)
====================================================================
A0 Assigned Normal 5 (6+P+S) S1 R0 0 300.0
A1 Unassigned Normal 5 (6+P+S) S2 - 0 300.0
A2 Unassigned Normal 5 (6+P+S) S3 - 0 300.0
A3 Unassigned Normal 5 (6+P+S) S4 - 0 300.0
A4 Unassigned Normal 5 (7+P) S5 - 0 146.0
A5 Unassigned Normal 5 (7+P) S6 - 0 146.0
A6 Assigned Normal 5 (7+P) S7 R4 0 146.0
A7 Assigned Normal 5 (7+P) S8 R5 0 146.0

So for CKD scope licenses, we currently use 2,400 GB. For FB scope licenses, we currently
use 1168 GB. For licenses with a scope of All, we currently use 3568 GB. Using the limits
shown in Example 10-6 on page 221, we are within scope for all licenses.

If we combine Example 10-6 on page 221, Example 10-7 on page 221, and Example 10-8,
we can also see why the mkrank command in Example 10-7 on page 221 failed. In
Example 10-7 on page 221, we tried to create a rank using array A1. Now, array A1 uses 300
GB DDMs. Because the new rank is a CKD rank, this means that for CKD scope and All scope
licenses, we would use 300 x 8 = 2400 GB more licensed capacity.

In Example 10-6 on page 221, we had only 5 TB of FlashCopy license with a scope of All.
This means that we cannot have a total configured capacity that exceeds
5000 GB. Because we already use 3568 GB, the attempt to use 2400 more GB will fail,
because 3568 plus 2400 equals 5968 GB, which is more than 5000 GB. If we increase the
size of the FlashCopy license to 10 TB, then we can have 10,000 GB of total configured
capacity, so the rank creation will then succeed.



Part 3. Storage configuration
In this part, we discuss the configuration tasks required on your IBM System Storage
DS8800. We cover the following topics:
򐂰 System Storage Productivity Center (SSPC)
򐂰 Configuration using the DS Storage Manager GUI
򐂰 Configuration with the DS Command-Line Interface




Chapter 11. Configuration flow


This chapter gives a brief overview of the tasks required to configure the storage in an IBM
System Storage DS8800.



11.1 Configuration worksheets
During the installation of the DS8800, your IBM service representative customizes the setup
of your storage complex based on information that you provide in a set of customization
worksheets. Each time that you install a new storage unit or management console, you must
complete the customization worksheets before the IBM service representatives can perform
the installation.

The customization worksheets must be completed before the installation. Entering this
information into the machine ensures that preventive maintenance and high availability of the
machine can be maintained. You can find the
customization worksheets in IBM System Storage DS8000 Introduction and Planning Guide,
GC27-2297.

The customization worksheets allow you to specify the initial setup for the following items:
򐂰 Company information: This information allows IBM service representatives to contact you
as quickly as possible when they need to access your storage complex.
򐂰 Management console network settings: This allows you to specify the IP address and LAN
settings for your management console (MC).
򐂰 Remote support (includes Call Home and remote service settings): This allows you to
specify whether you want outbound (Call Home) or inbound (remote services) remote
support.
򐂰 Notifications (includes SNMP trap and e-mail notification settings): This allows you to
specify the types of notifications that you want and that others might want to receive.
򐂰 Power control: This allows you to select and control the various power modes for the
storage complex.
򐂰 Control Switch settings: This allows you to specify certain DS8800 settings that affect host
connectivity. You need to enter these choices on the control switch settings worksheet so
that the service representative can set them during the installation of the DS8800.

Important: IBM service representatives cannot install a storage unit or management
console until you provide them with the completed customization worksheets.

11.2 Configuration flow


The following list shows the tasks that need to be done when configuring storage in the
DS8800. The order of the tasks does not have to be exactly as shown here, and some of the
individual tasks can be done in a different order.

Important: The configuration flow changes when you use the Full Disk Encryption Feature
for the DS8800. For details, refer to IBM System Storage DS8700: Disk Encryption
Implementation and Usage Guidelines, REDP-4500.

1. Install license keys: Activate the license keys for the storage unit.
2. Create arrays: Configure the installed disk drives as either RAID 5, RAID 6, or RAID 10
arrays.
3. Create ranks: Assign each array to either a fixed block (FB) rank or a count key data
(CKD) rank.



4. Create Extent Pools: Define Extent Pools, associate each one with either Server 0 or
Server 1, and assign at least one rank to each Extent Pool. If you want to take advantage
of Storage Pool Striping, you must assign multiple ranks to an Extent Pool. Note that with
current versions of the DS GUI, you can start directly with the creation of Extent Pools
(arrays and ranks will be automatically and implicitly defined).
5. Create a repository for Space Efficient volumes.
6. Configure I/O ports: Define the type of the Fibre Channel/FICON ports. The port type can
be either Switched Fabric, Arbitrated Loop, or FICON.
7. Create host connections for open systems: Define open systems hosts and their Fibre
Channel (FC) host bus adapter (HBA) worldwide port names.
8. Create volume groups for open systems: Create volume groups where FB volumes will be
assigned and select the host attachments for the volume groups.
9. Create open systems volumes: Create striped open systems FB volumes and assign them
to one or more volume groups.
10.Create System z logical control units (LCUs): Define their type and other attributes, such
as subsystem identifiers (SSIDs).
11.Create striped System z volumes: Create System z CKD base volumes and Parallel
Access Volume (PAV) aliases for them.

The actual configuration can be done using either the DS Storage Manager GUI or DS
Command-Line Interface, or a mixture of both. A novice user might prefer to use the GUI,
while a more experienced user might use the CLI, particularly for some of the more repetitive
tasks, such as creating large numbers of volumes.
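For orientation, the following sketch maps the previous steps to typical DS CLI commands. This is only an outline: all IDs, names, capacities, and worldwide port names shown here are placeholders, and the exact parameters for your configuration are described in Chapter 14, “Configuration with the DS Command-Line Interface” on page 307.

dscli> applykey -key xxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxxx IBM.2107-75ABCD1
dscli> mkarray -raidtype 5 -arsite S1
dscli> mkrank -array A0 -stgtype fb
dscli> mkextpool -rankgrp 0 -stgtype fb ITSO_FB_0
dscli> chrank -extpool P0 R0
dscli> setioport -topology scsi-fc I0000
dscli> mkvolgrp -type scsimask ITSO_VG
dscli> mkhostconnect -wwname 10000000C9AAAA01 -hosttype pSeries -volgrp V0 aix_host1
dscli> mkfbvol -extpool P0 -cap 100 -name itso_#h -volgrp V0 1000-100F
dscli> mklcu -qty 1 -id 00 -ss FF00
dscli> mkckdvol -extpool P1 -cap 3339 -name ckd_#h 0000-0003

Related commands, such as mksestg (creates a repository for Space Efficient volumes) and mkaliasvol (creates PAV aliases for CKD base volumes), follow the same pattern; check the DS CLI command reference for the exact parameters before using any of these commands.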

For a more detailed discussion about how to perform the specific tasks, refer to:
򐂰 Chapter 10, “IBM System Storage DS8800 features and license keys” on page 203
򐂰 Chapter 13, “Configuration using the DS Storage Manager GUI” on page 251
򐂰 Chapter 14, “Configuration with the DS Command-Line Interface” on page 307

General guidelines when configuring storage


Remember the following general guidelines when configuring storage on the DS8800:
򐂰 To achieve a well-balanced load distribution, use at least two Extent Pools, each assigned
to one DS8800 internal server (Extent Pool 0 and Extent Pool 1). If CKD and FB volumes
are required, use at least four Extent Pools.
򐂰 Address groups (16 LCUs/logical subsystems (LSSs)) are all for CKD or all for FB.
򐂰 Volumes of one LCU/LSS can be allocated on multiple Extent Pools.
򐂰 An Extent Pool cannot contain all three RAID 5, RAID 6, and RAID 10 ranks. Each Extent
Pool pair should have the same characteristics in terms of RAID type, RPM, and DDM
size.
򐂰 Ranks in one Extent Pool should belong to different Device Adapters.
򐂰 Assign multiple ranks to Extent Pools to take advantage of Storage Pool Striping.
򐂰 CKD:
– 3380 and 3390 type volumes can be intermixed in an LCU and an Extent Pool.



򐂰 FB:
– Create a volume group for each server unless LUN sharing is required.
– Place all ports for a single server in one volume group.
– If LUN sharing is required, there are two options:
• Use separate volumes for servers and place LUNs in multiple volume groups.
• Place servers (clusters) and volumes to be shared in a single volume group.
򐂰 I/O ports:
– Distribute host connections of each type (FICON and FCP) evenly across the I/O
enclosure.
– Typically, “access any” is used for I/O ports, with access to the ports controlled by SAN
zoning.


Chapter 12. System Storage Productivity Center
This chapter discusses how to set up and manage the System Storage Productivity Center
(SSPC) to work with the IBM System Storage DS8800 series.

The chapter covers the following topics:


򐂰 System Storage Productivity Center overview
򐂰 System Storage Productivity Center components
򐂰 System Storage Productivity Center setup and configuration
򐂰 Working with a DS8800 system and Tivoli Storage Productivity Center Basic Edition



12.1 System Storage Productivity Center overview
The System Storage Productivity Center (SSPC) (machine type 2805-MC5) is an integrated
hardware and software solution for centralized management of IBM storage products with
IBM storage management software. It is designed to reduce the number of management
servers. Through the integration of software and hardware on a single platform, the client can
start to consolidate the storage management infrastructure and manage the storage network,
hosts, and physical disks in context rather than by device.

IBM System Storage Productivity Center simplifies storage management by:


򐂰 Centralizing the management of storage network resources with IBM storage
management software.
򐂰 Providing greater synergy between storage management software and IBM storage
devices.
򐂰 Reducing the number of servers that are required to manage the storage infrastructure.
The goal is to have one SSPC per data center.
򐂰 Providing a simple migration path from basic device management to using storage
management applications that provide higher-level functions.

Taking full advantage of the available and optional functions usable with SSPC can result in:
򐂰 Fewer resources required to manage the growing storage infrastructure
򐂰 Reduced configuration errors
򐂰 Decreased troubleshooting time and improved accuracy

12.1.1 SSPC components


SSPC is a solution consisting of hardware and software elements.

SSPC hardware
The SSPC (IBM model 2805-MC5) server contains the following hardware components:
򐂰 x86 server 1U rack installed
򐂰 Intel Quadcore Xeon processor running at 2.53 GHz
򐂰 8 GB of RAM
򐂰 Two hard disk drives
򐂰 Dual port Gigabit Ethernet

Optional components are:


򐂰 KVM Unit
򐂰 Secondary power supply
򐂰 Additional hard disk drives
򐂰 CD media to recover image for 2805-MC5
򐂰 8 Gb Fibre Channel Dual Port HBA (this feature enables you to move the Tivoli Storage
Productivity Center database from the SSPC server to the IBM System Storage DS8000).

SSPC software
The IBM System Storage Productivity Center 1.5 includes the following preinstalled
(separately purchased) software, running under a licensed Microsoft Windows Server 2008
Enterprise Edition R2, 64-bit (included):
򐂰 IBM Tivoli Storage Productivity Center V4.2.1 licensed as TPC Basic Edition (includes the
Tivoli Integrated Portal). A TPC upgrade requires that you purchase and add additional
TPC licenses.



򐂰 DS CIM Agent Command-Line Interface (DSCIMCLI) 5.5
򐂰 IBM Tivoli Storage Productivity Center for Replication (TPC-R) V4.2.1. To run TPC-R on
SSPC, you must purchase and add TPC-R licenses.
򐂰 IBM DB2 Enterprise Server Edition 9.7, 64-bit.
򐂰 IBM Java 1.6 is preinstalled. You do not need to download Java from Sun Microsystems.

Optionally, the following components can be installed on the SSPC:


򐂰 Software components included since SSPC V1.3 but not in previous SSPC versions
(TPC-R, DSCIMCLI, Version 10.70 of the IBM System Storage DS Storage Manager for
DS3000, DS4000, or DS5000).
򐂰 DS8000 Command-Line Interface (DSCLI).
򐂰 Antivirus software.

Customers have the option to purchase and install the individual software components to
create their own SSPC server.

12.1.2 SSPC capabilities


SSPC, as shipped to the client, offers the following capabilities:
򐂰 Preinstalled and tested console: IBM has designed and tested SSPC to support
interoperability between server, software, and supported storage devices.
򐂰 IBM Tivoli Storage Productivity Center Basic Edition is the core software that drives SSPC
and brings together the ability to manage the SAN, storage devices, and host resources
from a single control point by providing the following features:
– Remote access to the IBM System Storage DS8000 Storage Manager GUI. The
DS8000 Storage Manager GUI itself resides and is usable on the DS8000 HMC.
– Ability to discover, monitor, alert, report, and provision storage, including:
• Automated discovery of supported storage systems in the environment.
• Asset and capacity reporting from storage devices in the SAN.
• Monitor status change of the storage devices.
• Alert for storage device status change.
• Report and display findings.
• Basic end-to-end storage provisioning.
– Advanced topology viewer showing a graphical and detailed view of the overall SAN,
including device relationships and visual notifications.
– A status dashboard.

12.1.3 SSPC upgrade options


You can upgrade some of the software included with the standard SSPC.

Tivoli Storage Productivity Center Standard Edition


Tivoli Storage Productivity Center Basic Edition can be easily upgraded with one or all of the
advanced capabilities found in IBM Tivoli Storage Productivity Center Standard Edition. IBM
Tivoli Storage Productivity Center Standard Edition (TPC-SE) includes IBM Tivoli Storage
Productivity Center for Disk, IBM Tivoli Storage Productivity Center for Data, and IBM Tivoli
Storage Productivity Center for Fabric. In addition to the capabilities provided by TPC Basic
Edition, the Standard Edition can monitor performance and connectivity from the host file
system to the physical disk, including in-depth performance monitoring and analysis of SAN



fabric performance. This provides you with a main storage management application to
monitor, plan, configure, report, and do problem determination on a heterogeneous storage
infrastructure.

TPC-SE offers the following capabilities:


򐂰 Device configuration and management of SAN-attached devices from a single console. It
also allows users to gather and analyze historical and near real-time performance metrics.
򐂰 Management of file systems and databases, thus enabling enterprise-wide reports,
monitoring and alerts, policy-based action, and file system capacity automation in
heterogeneous environments.
򐂰 Management, monitoring, and control of SAN fabric to help automate device discovery,
topology rendering, error detection, fault isolation, SAN error predictor, zone control,
real-time monitoring and alerts, and event management for heterogeneous enterprise
SAN environments. In addition, it allows collection of performance statistics from IBM
Tivoli Storage, Brocade, Cisco, and McDATA fabric switches and directors that implement
the SNIA SMI-S specification.

Tivoli Storage Productivity Center for Replication


Optionally on SSPC V1.1, Tivoli Storage Productivity Center for Replication (TPC-R) can be
installed. SSPC V1.2 and above come with TPC-R preinstalled. To use the preinstalled
TPC-R, licenses need to be applied first to TPC-R on the SSPC. IBM Tivoli Storage
Productivity Center for Replication provides management of IBM copy services capabilities
(see IBM System Storage DS8000: Copy Services for Open Environments, SG24-6788, for
more details).

With version 4.2, IBM Tivoli Storage Productivity Center for Replication no longer supports
DB2 as the datastore for its operational data. IBM Tivoli Storage Productivity Center for
Replication uses an embedded repository for its operational data.

IBM Tivoli Storage Productivity Center for Replication is designed to:


򐂰 Simplify and improve the configuration and management of replication on your network
storage devices by performing advanced copy operations in a single action.
򐂰 Manage advanced storage replication services, such as Metro Mirror, Global Mirror,
Metro/Global Mirror, FlashCopy, and FlashCopy SE. TPC-R can also monitor copy
services.
򐂰 Enable multiple pairing options for source and target volumes.
򐂰 Define session pairs using target and source volume groups, confirm path definitions, and
create consistency sets for replication operations.
򐂰 IBM Tivoli Storage Productivity Center for Replication Three Site BC is an addition to the
Tivoli Storage Productivity Center for Replication V4 family. Three Site BC provides:
– Support for three-site IBM DS8000 family Metro Global Mirror configurations.
– Disaster recovery configurations that can be set up to indicate copy service type
(FlashCopy, Metro Mirror, Global Mirror) and the number of separate copies and sites
to be set.
򐂰 Manage Open HyperSwap, a high availability solution for storage systems based on Metro
Mirror. For more information, see the IBM Redbooks publication IBM System Storage
DS8000: Copy Services for Open Systems, SG24-6788, or the IBM Tivoli Storage
Productivity Center V4.2 Information Center at:
http://publib.boulder.ibm.com/infocenter/tivihelp/v4r1/index.jsp?topic=/com.ibm.tpc_V42.doc/frg_t_manage_hs.html



12.2 SSPC setup and configuration
This section summarizes the tasks and sequence of steps required to set up and configure
the DS8800 system and the SSPC used to manage the DS8800 system.

For detailed information, and additional considerations, see the TPC/SSPC Information
Center at the following address:
http://publib.boulder.ibm.com/infocenter/tivihelp/v4r1/index.jsp

12.2.1 Configuring SSPC for DS8800 remote GUI access


The steps to configure the SSPC can be found in System Storage Productivity Center User’s
Guide, SC27-2336, which is shipped with the SSPC. The document is also available at the
following address:
http://publib.boulder.ibm.com/infocenter/tivihelp/v4r1/index.jsp

After IBM Support physically installs the 2805-MC5 and tests IP connectivity with the
DS8000, SSPC is ready for configuration by either the client or IBM Services.

After the initial configuration of SSPC is done, the SSPC user is able to configure the remote
GUI access to all DS8000 systems in the TPC Element Manager, as described in
“Configuring TPC to access the DS8000 GUI” on page 235.

Accessing the TPC on SSPC


The following methods can be used to access the TPC on SSPC console:
򐂰 Access the complete SSPC:
– Launch TPC directly at the SSPC Terminal.
– Launch TPC by Remote Desktop to the SSPC.
򐂰 Install the TPC V4.2 GUI by using the TPC installation procedure. The GUI will then
connect to the TPC running on the SSPC.
򐂰 Launch the TPC GUI front-end as a Java Webstart session.
– In a browser, enter http://<SSPC ipaddress>:9550/ITSRM/app/welcome.html.
– Download the correct IBM Java version if it is not installed yet.
– Select TPC GUI and open it with the Java Webstart executable.
For the initial setup, a Java Webstart session will be opened and the TSRMGUI.jar file will
be downloaded to the workstation on which the browser was started. After the user agrees to
unrestricted access to the workstation for the TPC-GUI, the system will ask if a shortcut
should be created.



Figure 12-1 shows the entry window for installing TPC GUI access through a browser.

Figure 12-1 Entry window for installing TPC-GUI access through a browser

After you click TPC GUI (Java Web Start), a login window appears, as shown in Figure 12-2.

Figure 12-2 TPC GUI login window

If “Change password at first login” was specified by the SSPC administrator for the Windows
user account, the user must first log on to the SSPC to change the password. The logon can
be done at the SSPC terminal itself or by Windows Remote Desktop authentication to SSPC.

Change the field Server to <SSPC ipaddress>:9549 if there is no nameserver resolution
between User Terminal and SSPC.



Tip: Set the SSPC IP address in the TPC-GUI entry window.
򐂰 On the SSPC server, go to:
%ProgramFiles%\IBM\TPC\device\apps\was\profiles\deviceServer\installedApps\DefaultNod
e\DeviceServer.ear\DeviceServer.war\app
򐂰 Open the file tpcgui.jnlp and change the setting from:
<argument>SSPC_Name:9549</argument>
to:
<argument>SSPC_ipaddress:9549</argument>

Configuring TPC to access the DS8000 GUI


The TPC Element Manager is a single point of access to the GUI for all the DS8000 systems
in your environment. Using the TPC Element Manager for DS8000 remote GUI access allows
you to:
򐂰 View a list of Elements (DS8000 GUIs within your environment).
򐂰 Access all DS8000 GUIs by launching an Element with a single action.
򐂰 Add and remove DS8000 Elements. The DS8000 GUI front-end can be accessed by http
or https.
򐂰 Save the user and password to access the DS8000 GUI. This option to access the
DS8000 GUI without reentering the password allows you to configure SSPC as a single
password logon to all DS8000s in the client environment.

Since DS8700 LIC Release 5, remote access to the DS8000 GUI is bundled with SSPC.

Note: For storage systems discovered or configured for CIMOM or native device interface,
TPC automatically defines Element Manager access. You need to specify the correct
username and password in the Element Manager to use it.

To access the DS8000 GUI in TPC, complete the following steps:


1. Launch Internet Explorer.
2. Select Tools and then Internet options on the Internet Explorer toolbar.
3. Click the Security tab.
4. Click the Trusted sites icon and then the Sites button.
5. Type the URL with HTTP or HTTPS, whichever you intend to use in the Add Element
Manager window. For example, type https://<hmc_ip_address>, where
<hmc_ip_address> is the HMC IP address that will be entered into the Add this website to
the zone field.
6. Click Add.
7. Click Close.
8. Click OK.
9. Close Internet Explorer.
10.Launch TPC. You can double-click the Productivity Center icon on the desktop or select
Start → Programs → IBM Tivoli Storage Productivity Center → Productivity Center.
The other option is to use the Web user interface, as described previously.



11.Log on to the TPC GUI with the default user ID and password. The default user ID is
db2admin. The default password is passw0rd. If you log in for the first time, the TPC
welcome window opens, as shown in Figure 12-3.

Figure 12-3 TPC welcome window

12.Click the Element Management button in the Welcome window. If this is not the first login
to the TPC and you removed the Welcome window, then click the Element Management
button above the Navigation Tree section. The new window displays all storage systems
(Element Managers) already defined to TPC. From the Select action drop-down menu,
select Add Element Manager, as shown in Figure 12-4.

Figure 12-4 TPC Element Manager view: Options to add and launch Elements



13.In the Add Element Manager window (Figure 12-5), you have to provide the following
information:
a. Host: Enter the Domain Name System (DNS) name or IP address of the DS8000 HMC.
b. Port: The port number on which the DS8000 HMC listens for requests.
c. User name and associated Password already defined on DS8000: The default DS8000
user name is admin and password is admin. If this is the first time you try to log on into
DS8000 with the admin user name, you are prompted to change the password. Be
prepared to enter a new password and record it in a safe place.
d. Protocol: HTTPS or HTTP
e. Display Name: Specify a meaningful name of each Element Manager to identify each
DS8000 system in the Element Manager table. This is useful, particularly when you
have more than one DS8000 system managed by a single SSPC console.
Click Save to add the Element Manager.

Figure 12-5 Configure a new DS8800 Element in the TPC Element Manager view

TPC tests the connection to the DS8000 Element Manager. If the connection was
successful, the new DS8000 Element Manager is displayed in the Element Manager table.
14.After the DS8000 GUI has been added to the Element Manager, select the Element
Manager you want to work with and, from the Select Action drop-down menu, click
Launch Default Element Manager, as shown in Figure 12-6.

Figure 12-6 Launch the Element Manager



15.If the user credentials used in step 13 on page 237 need to be changed, you need to
modify the DS8000 Element Manager accordingly. Otherwise, you will not be able to
access the DS8000 GUI via the TPC Element Manager for this DS8000 system.

To modify the password and re-enable remote GUI access through TPC:
1. Launch the TPC and navigate to Element Manager.
2. Select the DS8000 system for which the password needs to be modified and, from the
Select action drop-down menu, click Modify Element Manager.
3. Enter a modified password in the Modify Element Manager window matching the DS8000
system security rules, as documented in the DS8000 Information Center (go to
http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000ic/index.jsp, search for
User administration, and select Defining password rules). The password and its use
must meet the following criteria:
– Be six to 16 characters long.
– Must contain five or more letters, and it must begin and end with a letter.
– Must contain one or more numbers.
– Cannot contain the user's user ID.
– Is case-sensitive.
– Four unique new passwords must be issued before an old password can be reused.
– Allowable characters are a-z, A-Z, and 0-9.

After the password has been changed, the access to the DS8000 GUI is reenabled.

If SSPC will be used for access to the DS8000 Storage Manager only, the configuration of
SSPC in regards to DS8000 system administration and monitoring is completed.

If the SSPC user wants to use the advanced function of TPC-BE, further configuration will be
required, as described in the next sections.

12.2.2 Manage embedded CIMOM on DS8000


With DS8000 and LIC Release 6, the embedded CIMOM on DS8000 HMC is enabled by
default after the HMC is started. Starting with IBM Tivoli Storage Productivity Center V4.2
and SSPC V1.5, TPC supports the ESSNI API and connects to the DS8000 HMC without a
CIMOM.

There is an option to enable or disable the embedded CIMOM manually through the DS8000
HMC Web User Interface (WUI).

To enable or disable the embedded CIMOM, perform these tasks:


1. Log into the DS8000 Web User Interface (WUI) by directing your browser to
https://<DS8000 HMC IP address>. The HMC WUI window appears.
2. Click Log on and Launch the Hardware Management Console Web application. The
HMC welcome window opens.
3. Select HMC Managementand under the Storage Server HMC Tasks section, click
Start/Stop CIM Agent (Figure 12-7).



Figure 12-7 HMC WUI: Start/Stop CIM Agent

Test connectivity to DS8000 Embedded CIMOM using DSCIMCLI


On the SSPC desktop, double-click the Launch DSCIMCLI icon. A DSCIMCLI command
prompt window opens. At the prompt, enter the DS8000 HMC IP address and then the
DS8000 Element Manager user name and password. Use the lsdev command shown in
Example 12-1 to verify connectivity between the CIMOM agent and the primary and
secondary DS8000 HMCs. Specify the correct HMC IP address/port and HMC credentials.
The Status column in the lsdev command output indicates a successful connection.

Example 12-1 DSCIMCLI commands to check CIMOM connectivity to primary and secondary HMC

> dscimcli lsdev -l -s https://9.155.70.27:6989 -u <ESSNI user> -p <ESSNI password>


Type IP IP2 Username Storage Image Status Code Level Min Codelevel
===== ============ ======= ========= ================ ========== ========= ==============
DS 9.155.70.27 - * IBM.2107-1305081 successful 5.4.2.540 5.1.0.309

> dscimcli lsdev -l -s https://9.155.70.28:6989 -u <ESSNI user> -p <ESSNI password>


Type IP IP2 Username Storage Image Status Code Level Min Codelevel
===== ============ ======= ========= ================ ========== =========== ==============

DS 9.155.70.28 - * IBM.2107-1305081 successful 5.4.2.540 5.1.0.309

Offload embedded CIMOM logs through DSCIMCLI


For problem determination purposes, there is an option to offload the embedded CIMOM logs
to the SSPC console using DSCIMCLI commands, as shown in Example 12-2. The file will be
offloaded as a compressed file to the SSPC.

Example 12-2 DSCIMCLI commands to offload DSCIMCLI logs from DS8000 HMC onto SSPC
C:\Program Files\IBM\DSCIMCLI\Windows> dscimcli collectlog -s
https://<<DS8800_HMC_IP_addr.>:6989 -u <valid ESSNI user> -p <associated ESSNI
password>
Old remote log files were successfully listed.
No one old log file on the DS Agent side.
New remote log file was successfully created.



getting log file dscim-logs-2009.3.1-16.57.17.zip from DS Agent: complete 100%
Local log file was successfully created and saved as C:\Program
Files\IBM\DSCIMCLI\WINDOWS\/dscim-logs-2009.3.1-16.57.17.zip.
The new created log file dscim-logs-2009.3.1-16.57.17.zip was successfully got
from DS Agent side.
The new created log file dscim-logs-2009.3.1-16.57.17.zip was successfully deleted
on DS Agent
side

12.2.3 Set up SSPC user management


When the HMC CIM agent is configured from TPC-BE, the administrator can configure and
change many aspects of the storage environment. If set up this way by the user, SSPC supports
password cascading, which allows the logical configuration of the DS8000 system to be
changed after entering only the Windows user credentials on SSPC. Therefore, it is good
practice for the SSPC administrator to ensure that, in multiple user environments, all users
have appropriate access permissions configured.

Figure 12-8 shows the cascade of authentication from SSPC (on the Windows operating
system) to the DS8000 storage configuration.

Figure 12-8 Cascade of authentication from SSPC (on the Windows operating system) to the DS8000
storage configuration

The procedure to add users to the SSPC requires TPC Administrator credentials and can be
split into two parts:
򐂰 Set up a user at the operating system level and then add this user to a group.
򐂰 Set up TPC-BE to map the operating system group to a TPC Role.



Tivoli Storage Productivity Center (TPC) supports mainly two types of users: the operator and
the administrator users. For the DS8000 system, the following roles are used:
򐂰 Disk Administrator
򐂰 Disk Operator (Physical, Logical, or Copy Services)
򐂰 Security Administrator

Set up users at the OS level


To set up a new SSPC user, the SSPC administrator needs to first grant appropriate user
permissions at the operating system level, using the following steps, which are also illustrated
in Figure 12-9.

Figure 12-9 Set up a new user on the SSPC

1. From the SSPC Desktop, select My Computer → Manage → Configuration → Local
Users and Groups → Users.
2. Select Action → New User and:
– Set the user name and password.
– If appropriate, check User must change password at next logon. Note that if this box
is checked, further actions are required for the new user to log on.
– If appropriate, check Password never expires.
3. Click Create to add the new user.
4. Go back to Local Users and Groups → Groups.
– Right-click Groups to add a new group or select an existing group.
– Select Add → Advanced → Find Now and select the user to be added to the group.
5. Click OK to add the user to the group, then click OK again to exit user management.



Tip: To simplify user administration, use the same name for the Windows user group and
the user groups role in TPC. For example, create the Windows user group “Disk
Administrator” and assign this group to the TPC role “Disk Administrator”.

Set up user roles in TPC


The group defined at the OS level now needs to be mapped to a role defined in TPC. To do
this. perform these steps:
1. Log into TPC with Administrator permissions and select Administrative Services →
Configuration → Role-to-Group Mapping.
2. Add the Group you created in Windows to a Role, as shown in Figure 12-10. For DS8000
system management, the recommended roles are Disk Operator, Disk Administrator, and
Security Administrator.
After the SSPC administrator has defined the user role, the operator is able to access the
TPC GUI.
3. The authorization level in TPC depends on the role assigned to a user group. Table 12-1
shows the association between job roles and authorization levels.

Table 12-1 TPC roles and TPC administration levels


TPC Role TPC administration level

Superuser Has full access to all TPC functions

TPC Administrator Has full access to all operations in the TPC GUI

Disk Administrator • Has full access to TPC GUI disk functions, including tape devices
• Can launch DS8000 GUI by using stored passwords in TPC Element
Manager
• Can add and delete volumes by TPC

Disk Operator • Has access to reports of disk functions and tape devices
• Has to enter user name/password to launch the DS8000 GUI
• Cannot start CIMOM discoveries or probes
• Cannot take actions in TPC, for example, delete and add volumes

Figure 12-10 Assigning the Windows user group Disk Administrator to the TPC Role Disk Administrator



12.2.4 Set up and discover DS8000 using native device interface
Starting with IBM Tivoli Storage Productivity Center V4.2, the DS8000 can only be
managed using the native device interface. If you define a DS8000 CIMOM connection in TPC,
a discovery process will fail with an error such as HWN021724W CIMOM
https://9.155.54.30:6989 manages Device(s) of type DS8000 which is supported
through the native device interface only.

To set up a native device interface, in the left menu panel navigate to Administrative
services → Data sources → Storage subsystems and click the Add button. In the
Configure Devices panel shown in Figure 12-11, fill in the required information, then click
Add.

Figure 12-11 Dialog for adding DS8000 using native device interface in TPC 4.2.1

The connection will be verified and the device will be added to the list. Click Next (at the
bottom of the dialog panel) to discover the new device. On the next panel, you can specify
how the device will be probed. There are predefined templates. Choose the option that best
fits your environment and click Next. A summary page will appear and after confirmation the
Result page is displayed. Click Finish. Device discovery will start immediately and depending
on your settings a pop-up window, as shown in Figure 12-12, might appear allowing you to
view the discovery logs.

Figure 12-12 Pop-up window will bring you to the jobs list

With TPC V4.2, there is a significant change in log management: now you can access all TPC
logs in one place through the Job Management window as shown in Figure 12-13.



Figure 12-13 Job management interface

Note: If not already defined, during this process TPC automatically defines GUI access for
the discovered DS8000 in the Element Manager.

12.3 Working with a DS8000 system in TPC-BE


Perform a number of tasks regularly to ensure that operational performance is maintained. In
this section, we describe actions to maintain the relationship between TPC and the DS8000
system.

12.3.1 Manually recover CIM Agent connectivity after HMC shutdown


If a CIMOM discovery or probe fails because none of the HMCs are available, the device will
be flagged as “unavailable” in the topology view and “failed” in the CIMOM discovery and
probe. The unavailability of the HMC can be caused by various factors, such as IP network
problems, the CIM Agent being stopped, an HMC hardware error in a single HMC setup, or a
DS8000 codeload in a single HMC setup.

In cases where CIMOM discovery and probes are not scheduled to run continuously or the
time frame until the next scheduled run is not as desired, the TPC user can manually run the
CIMOM discovery and probe to re-enable TPC ahead of the next scheduled probe to display
the health status and to continue the optional performance monitoring. To do this task, the
TPC user must have an Administrator role. The steps to perform are:
1. In the TPC Enterprise Manager view, select Administrative Services → Data
Sources → CIMOM Agents. Then select the CIMOM connections reported as failed and
execute Test CIMOM Connection. If the connection status of one or more CIMOM
connections changes to SUCCESS, continue with step 2.



2. Perform a CIMOM Discovery, as described in “Working with a DS8000 system in TPC-BE”
on page 244.
3. Perform a probe.

12.3.2 Display disks and volumes of DS8000 Extent Pools


To display the volumes and Disk Drive Modules (DDMs) used by an Extent Pool, double-click
that Extent Pool in the Topology viewer. Underneath this topology image, a table view
provides further information about the DS8000 devices, as shown in Figure 12-14,
Figure 12-15, Figure 12-16 on page 246, and Figure 12-17 on page 246. Details about the
displayed health status are discussed in 12.3.4, “Storage health management” on page 249.

Figure 12-14 Drill-down of the topology viewer for a DS8000 Extent Pool

Figure 12-15 shows a graphical and tabular view with more information.

Figure 12-15 Graphical and tabular view of a broken DS8000 DDM set to deferred maintenance

Figure 12-16 on page 246 shows a TPC graphical and tabular view to an Extent Pool.



Figure 12-16 TPC graphical and tabular view to an Extent Pool configured out of one rank

The DDM displayed as green in Figure 12-16 is a spare DDM, and is not part of the RAID 5
configuration process that is currently in progress.

The two additional DDMs displayed in Figure 12-17 in the missing state have been replaced,
but are still displayed due to the settings configured in historic data retention.

Figure 12-17 TPC graphical view to an Extent Pool configured out of three ranks (3x8=24 DDMs)



12.3.3 Display the physical paths between systems
If Out of Band Fabric agents are configured, TPC-BE can display physical paths between
SAN components. The view consists of four windows (computer information, switch
information, subsystem information, and other systems) that show the physical paths through
a fabric or set of fabrics (host-to-subsystem or subsystem-to-subsystem).

To display the path information shown in Figure 12-18 and Figure 12-19, execute these steps:
1. In the topology view, select Overview → Fabrics → Fabric.
2. Expand the Connectivity view of the devices for which you would like to see the physical
connectivity.
3. Click the first device.
4. Press Ctrl and click any additional devices to which you would like to display the physical
path (Figure 12-18).
5. To obtain more details about the connectivity of dedicated systems, as shown in
Figure 12-19, double-click the system of interest and expand the details of the system
view.

Figure 12-18 Topology view of physical paths between one host and one DS8000 system

In Figure 12-18, the display of the Topology view points out physical paths between the hosts
and their volumes located on the DS8000 system. In this view, there are only WWPNs shown
in the left box labeled Other. To interpret the WWPNs of a host in the fabric, data agents must be
placed on that host. Upgrading TPC-BE with additional TPC licenses will enable TPC to
assess and also warn you about lack of redundancy.

Figure 12-19 Topology view: detailed view of the DS8000 host ports assigned to one of its two switches



In Figure 12-19, the display of the Topology viewer points out that the switch connectivity
does not match one of the recommendations given by the DS8000 Information Center on host
attachment path considerations for a storage image. In this example, we have two I/O
enclosures in each I/O enclosure pair (I1/I2 or I3/I4) located on different RIO loop halves (the
DS8000 Information Center1 mentions that “you can place two host attachments, one in each
of the two I/O enclosures of any I/O enclosure pair”). In the example, all switch connections
are assigned to one DS8000 RIO loop only (R1-I1 and R1-I2).

As shown in Figure 12-20, the health status function of TPC-BE Topology Viewer allows you
to display the individual FC port health inside a DS8000 system.

Figure 12-20 TPC graphical view of a broken DS8000 host adapter Card R1-I1-C5 and the associated
WWNN as displayed in the tabular view of the topology viewer

As shown in Figure 12-21, the TPC-BE Topology viewer allows you to display the connectivity
and path health status of one DS8000 system into the SAN by providing a view that can be
broken down to the switch ports and their WWPNs.

Figure 12-21 Connectivity of a DS8000 system drilled down to the ports of a SAN switch
1 http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000ic/index.jsp



12.3.4 Storage health management
TPC provides a graphical storage health overlay function. This function allows the user to
easily spot unhealthy areas through color coding. If the SSPC is monitored on a regular basis,
TPC can be configured to show new alerts when the GUI is launched. This can be done by
selecting Preferences → Edit General → On Login Show → All Active Alerts.

However, if the TPC console is not regularly monitored for health status changes, configure
alerts to avoid health issues going unrecognized for a significant amount of time. To configure
alerts for the DS8000 system, in the navigation tree, select Disk Manager → Alerting and
right-click Storage Subsystem Alerts. In the window displayed, the predefined alert trigger
conditions and the Storage Subsystems can be selected. Regarding the DS8000 system, the
predefined alert triggers can be categorized into:
򐂰 Capacity changes applied to cache, volumes, and Extent Pools
򐂰 Status changes to online/offline of storage subsystems, volumes, Extent Pools, and disks
򐂰 Device not found for storage subsystems, volumes, Extent Pools, and disks
򐂰 Device newly discovered for storage subsystems, volumes, Extent Pools, and disks
򐂰 Version of storage subsystems changed

12.3.5 Display host volumes through SVC to the assigned DS8000 volume
With SSPC’s TPC-BE, you can create a table that displays the name of the host volumes
assigned to an SVC vDisk and the DS8000 volume IDs associated with this vDisk. For a fast
view, select SVC VDisks → MDisk → DS8000 Volume ID. To populate this host volume
name → SVC → DS8000 volume ID view, a TPC-BE SVC and DS8000 probe setup is
required. To display the table, as demonstrated in Figure 12-22, select TPC → Disk
Manager → Reporting → Storage Subsystems → Volume to Backend Volume
Assignment → By Volume and select Generate Report.

Figure 12-22 Example of three DS8000 volumes assigned to one vDisk and the name associated to
this vDisk (tabular view split into two pieces for better overview of the columns)




Chapter 13. Configuration using the DS Storage Manager GUI
The DS Storage Manager provides a graphical user interface (GUI) to configure the IBM
System Storage DS8000 series and manage DS8000 Copy Services. The DS Storage
Manager GUI (DS GUI) is invoked from SSPC. In this chapter, we explain the possible ways
to access the DS GUI, and how to use it to configure the storage on the DS8000.

This chapter includes the following sections:


򐂰 DS Storage Manager GUI overview
򐂰 Logical configuration process
򐂰 Examples of configuring DS8000 storage
򐂰 Examples of exploring DS8000 storage status and hardware

For information about Copy Services configuration in the DS8000 family using the DS GUI,
see the following IBM Redbooks publications:
򐂰 IBM System Storage DS8000: Copy Services for Open Systems, SG24-6788
򐂰 IBM System Storage DS8000: Copy Services for IBM System z, SG24-6787

For information about DS GUI changes related to disk encryption, see IBM System Storage
DS8700: Disk Encryption Implementation and Usage Guidelines, REDP-4500.

For information about DS GUI changes related to LDAP authentication, see IBM System
Storage DS8000: LDAP Authentication, REDP-4505.

Note: Some of the screen captures in this chapter might not reflect the latest version of the
DS GUI code.



13.1 DS Storage Manager GUI overview
In this section, we describe the DS Storage Manager GUI (DS GUI) access method design.
The DS GUI code resides on the DS8000 Hardware Management Console (HMC) and we
discuss different access methodologies.

13.1.1 Accessing the DS GUI


The DS GUI code at the DS8000 HMC is invoked at the SSPC from the Tivoli Storage
Productivity Center (TPC) GUI. The DS Storage Manager communicates with the DS
Network Interface Server, which is responsible for communication with the two controllers of
the DS8000.

The DS8000 HMC also supports access using the new Internet protocol IPv6.

You can access the DS GUI in any of the following ways:


򐂰 Through the System Storage Productivity Center (SSPC)
򐂰 From TPC on a workstation connected to the HMC
򐂰 From a browser connected to SSPC or TPC on any server
򐂰 Using Microsoft Windows Remote Desktop through the SSPC
򐂰 Directly at the HMC

These different access capabilities, using Basic authentication, are shown in Figure 13-1. In
our illustration, SSPC connects to two HMCs managing two DS8000 storage complexes.
Although you have different options to access the DS GUI, SSPC is the preferred access method.

Figure 13-1 Accessing the DS8800 GUI (user authentication is managed by the ESSNI Server, regardless of the type of connection)

The DS8800 supports the ability to use a Single Point of Authentication function for the GUI
and CLI through a centralized LDAP server. This capability is supported with SSPC running
on 2805-MC5 hardware that has TPC Version 4.2.1 (or later) preloaded. If you have an older



SSPC hardware version with a lower TPC version, you have to upgrade TPC to V4.2.1 to take
advantage of the Single Point of Authentication function for the GUI and CLI through a
centralized LDAP server.

The different access capabilities of the LDAP authentication are shown in Figure 13-2. In this
illustration, TPC connects to two HMCs managing two DS8800 storage complexes.

Note: For detailed information about LDAP-based authentication, see IBM System Storage
DS8000: LDAP Authentication, REDP-4505.

Figure 13-2 LDAP authentication to access the DS8800 GUI and CLI (authentication is managed through the Authentication Server, a TPC component, and an Authentication Client at the HMC)



Accessing the DS GUI through SSPC
As previously stated, the recommended method for accessing the DS GUI is through SSPC.

To access the DS GUI through SSPC, perform the following steps:


1. Log in to your SSPC server and launch the IBM Tivoli Storage Productivity Center.
2. Type in your Tivoli Storage Productivity Center user ID and password.
3. In the Tivoli Storage Productivity Center window shown in Figure 13-3, click Element
Management (above the Navigation Tree) to launch the Element Manager.

Figure 13-3 SSPC: Launch Element Manager

Note: Here we assume that the DS8000 storage subsystem (Element Manager) is already
configured in TPC, as described in 12.2, “SSPC setup and configuration” on page 233.



4. After the Element Manager is launched, click the disk system you want to access, as
shown in Figure 13-4.

Figure 13-4 SSPC: Select the DS8000

5. You are presented with the DS GUI Welcome window for the selected disk system, as
shown in Figure 13-5 on page 255.

Figure 13-5 SSPC: DS GUI Welcome window



Accessing the DS GUI from a browser connected to SSPC
To access the DS GUI, you can connect to SSPC using a web browser, and then use the
instructions given in “Accessing the DS GUI through SSPC” on page 254.

Accessing the DS GUI from a browser connected to a TPC workstation


To access the DS GUI, you can connect to a TPC workstation using a web browser, and then
use the instructions in “Accessing the DS GUI through SSPC” on page 254. For information
about how to access a TPC workstation through a web browser, see “Accessing the TPC on
SSPC” on page 233.

Accessing the DS GUI through a remote desktop connection to SSPC


You can use remote desktop connection to SSPC. After you are connected to SSPC, follow
the instructions in “Accessing the DS GUI through SSPC” on page 254 to access the DS GUI.
For information how to connect to SSPC using remote desktop, see “Accessing the TPC on
SSPC” on page 233.

13.1.2 DS GUI Welcome window


After you log on, you see the DS Storage Manager Welcome window, as shown in
Figure 13-6.

Figure 13-6 DS GUI Welcome window

In the Welcome window of the DS8000 Storage Manager GUI, you see buttons for accessing
the following options:
򐂰 Show all tasks: Opens the Task Manager window, where you can end a task or switch to
another task.
򐂰 Hide Task List: Hides the Task list and expands your work area.
򐂰 Toggle Banner: Removes the banner with the IBM System Storage name and expands the
working space.



򐂰 Information Center: Launches the Information Center. The Information Center is the online
help for the DS8000. The Information Center provides contextual help for each window,
but also is independently accessible from the Internet.
򐂰 Close Task: Closes the active task.

The left side of the window is the navigation pane.

DS GUI window options


Figure 13-7 shows an example of the Disk Configuration - Extentpools window. Several
important options on this page are also available on many of the other windows of the DS
Storage Manager. We explain several of these options (see Figure 13-7 for an overview).

Figure 13-7 Example of the Extentpools window

The DS GUI displays the configuration of your DS8000 in tables. There are several options
you can use:
򐂰 To download the information from the table, click Download. This can be useful if you
want to document your configuration. The file is in comma-separated value (.csv) format
and you can open the file with a spreadsheet program. This function is also useful if the
table on the DS8000 Manager consists of several pages; the .csv file includes all pages.
򐂰 The Print report option opens a new window with the table in HTML format and starts the
printer dialog box if you want to print the table.
򐂰 The Select Action drop-down menu provides you with specific actions that you can
perform. Select the object you want to access and then the appropriate action (for
example, Create or Delete).
򐂰 There are also buttons to set and clear filters so that only specific items are displayed in
the table (for example, show only FB extentpools in the table). This can be useful if you
have tables with a large number of items.



13.2 Logical configuration process
When performing the initial logical configuration, the first step is to create the storage
complex (processor complex) along with the definition of the hardware of the storage unit.
The storage unit can have one or more storage images (storage facility images).

When performing the logical configuration, the following approach is likely to be the most
straightforward:
1. Start by defining the storage complex.
2. Create arrays, ranks, and Extent Pools.
3. Create open system volumes.
4. Create count key data (CKD) LSSs and volumes.
5. Create host connections and volume groups.

Long Running Tasks Summary window


Some logical configuration tasks have dependencies on the successful completion of other
tasks, for example, you cannot create ranks on arrays until the array creation is complete.
The Long Running Tasks Summary window assists you in this process by reporting the
progress and status of these long-running tasks.

Figure 13-8 shows the successful completion of the different tasks (adding capacity and
creating new volumes). Click the specific task link to get more information about the task.

Figure 13-8 Long Running Task Summary window



13.3 Examples of configuring DS8000 storage
In the following sections, we show an example of a DS8000 configuration made through the
DS GUI.

For each configuration task (for example, creating an array), the process guides you through
windows where you enter the necessary information. During this process, you have the ability
to go back to make modifications or cancel the process. At the end of each process, you get a
verification window where you can verify the information that you entered before you submit
the task.

13.3.1 Define storage complex


During the DS8000 installation, your IBM service representative customizes the setup of your
storage complex based on information that you provide in the customization worksheets. After
you log into the DS GUI and before you start the logical configuration, check the status of your
storage system.

In the My Work section of the DS GUI welcome window, navigate to Manage Hardware →
Storage Complexes. The Storage Complexes Summary window opens, as shown in
Figure 13-9.

Figure 13-9 Storage Complexes Summary window

You should have at least one storage complex listed in the table. If you have more than one
DS8800 system or any other DS8000 family model in your environment connected to the
same network, you can define it here by adding a new storage complex. Select Add from the
Select action drop-down menu in order to add a new storage complex (see Figure 13-10 on
page 260).



Figure 13-10 Select Storage Complex Add window

The Add Storage Complex window opens, as shown in Figure 13-11.

Figure 13-11 Add Storage Complex window

Provide the IP address of the Hardware Management Console (HMC) connected to the new
storage complex that you wish to add and click OK to continue. A new storage complex is
added to the table, as shown in Figure 13-12 on page 261.



Figure 13-12 New storage complex is added

Having all the DS8000 storage complexes defined together provides flexible control and
management. The status information indicates the health of each storage complex. By
clicking the status description link of any storage complex, you can obtain more detailed
health check information for various vital DS8000 components (see Figure 13-13).

Figure 13-13 Check the status details

Different status descriptions may be reported for your storage complexes. These descriptions
depend on the availability of the vital storage complexes components. In Figure 13-14, we
show an example of different status states.



Figure 13-14 Different Storage Complex Status states

A Critical status indicates unavailable vital storage complex resources. An Attention status
may be triggered by some resources being unavailable. Because they are redundant, the
storage complex is still operational. One example is when only one storage server inside a
storage image is offline, as shown in Figure 13-15.

Figure 13-15 One storage server is offline

We recommend checking the status of your storage complex and proceeding with logical
configuration (create arrays, ranks, Extent Pools, or volumes) only when your HMC consoles
are connected to the storage complex and both storage servers inside the storage image are
online and operational.



13.3.2 Create arrays

Tip: You do not necessarily need to create arrays first and then ranks. You can proceed
directly with the creation of Extent Pools, as explained in 13.3.4, “Create Extent Pools” on
page 274.

To create an array, perform the following steps in the DS GUI:


1. In the DS GUI welcome window, from the My Work section, expand Configure Storage
and click Disk Configuration. This brings up the Disk Configuration window
(Figure 13-16).

Figure 13-16 Disk Configuration window

Note: If you have defined more storage complexes or storage images, be sure to select the
right storage image before you start creating arrays. From the Storage image drop-down
menu, select the desired storage image you want to access.

In our example, the DS8000 capacity is still not assigned to open systems or System z.



2. Click the Array Sites tab to check the available storage that is required to create the array
(see Figure 13-17).

Figure 13-17 Array sites

3. In our example, all array sites are unassigned and therefore eligible to be used for array
creation. Each array site has eight physical disk drives. In order to discover more details
about each array site, select the desired array site and click Properties in the Select
Action drop-down menu. The Single Array Site Properties window opens. It provides
general array site characteristics, as shown in Figure 13-18.

Figure 13-18 Select Array Site Properties



4. Click the Status tab to get more information about the Disk Drive Modules (DDMs) and
the state of each DDM, as shown in Figure 13-19.

Figure 13-19 Single Array Site Properties: Status

5. All DDMs in this array site are in the Normal state. Click OK to close the Single Array Site
Properties window and go back to the Disk Configuration main window.
6. After we identify the unassigned and available storage, we can create an array. Click the
Array tab in the Manage Disk Configuration section and select Create Arrays in the
Select action drop-down menu, as shown in Figure 13-20.

Figure 13-20 Select Create Arrays



7. The Create New Arrays window opens, as shown in Figure 13-21.

Figure 13-21 Create New Arrays window

You need to provide the following information:


– RAID Type: The supported or available RAID types are RAID 5, RAID 6, and RAID 10.
– Type of configuration: There are two options available:
• Automatic is the default, and it allows the system to choose the best array sites
configuration based on your capacity and DDM type selection.
• The Manual option can be used if you want to have more control over the
resources. When you select this option, a table of available array sites is displayed.
You have to manually select array sites from the table.
– If you select the Automatic configuration type, you need to provide additional
information:
• From the DA Pair Usage drop-down menu, select the appropriate action. The
Spread among all pairs option balances arrays evenly across all available Device
Adapter (DA) pairs. There are another two options available: Spread among least
used pairs and Sequentially fill all pairs. The bar graph displays, in real-time, the
effect of your choice.
• From the Drive Class drop-down menu, select the DDM type you wish to use for the
new array.
• From the Select capacity to configure drop-down menu, click the desired total
capacity.
If you want to create many arrays with different characteristics (RAID and DDM type) in
one task, select Add Another Array as many times as required.



In our example (see Figure 13-22), we created two RAID 6 arrays on 146 GB 15 K DDMs
and two RAID 5 arrays on 300 GB 10 K DDMs.
Click OK to continue.

Figure 13-22 Creating new arrays

8. The Create array verification window is displayed (Figure 13-23). It lists all array sites
chosen for the new arrays we want to create. At this stage, you can still change your
configuration by deleting the array sites from the lists and adding new array sites if
required. Click Create All once you decide to continue with the proposed configuration.

Figure 13-23 Create array verification window



9. Wait for the message in Figure 13-24 to appear and then click Close.

Figure 13-24 Creating arrays completed

10.The window in Figure 13-25 shows the newly created arrays. You can see that the graph in
the Disk Configuration Summary section has changed accordingly and now includes the
new capacity we used for creating arrays.

Figure 13-25 List of all created arrays
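
The same arrays can also be created with the DS CLI (described in Chapter 14). The following
commands are only a minimal sketch; they assume that the storage image ID and HMC address
are already set in the DS CLI profile, and the array site IDs and RAID types are hypothetical and
must be taken from your own lsarraysite output:

dscli> lsarraysite
dscli> mkarray -raidtype 6 -arsite S1
dscli> mkarray -raidtype 5 -arsite S13
dscli> lsarray

Each mkarray command turns one unassigned array site into an array with the specified RAID type.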



13.3.3 Create ranks

Tip: You do not necessarily need to create arrays first and then ranks. You can proceed
directly with the creation of Extent Pools (see 13.3.4, “Create Extent Pools” on page 274).

To create a rank, perform the following steps in the DS GUI:


1. In the DS GUI welcome window, from the My Work section, expand Configure Storage
and click Disk Configuration. This brings up the Disk Configuration window shown in
Figure 13-25 on page 268. Click the Ranks tab to start working with ranks. Select Create
Rank from the Select action drop-down menu, as shown in Figure 13-26.

Note: If you have defined more storage complexes/storage images, be sure to select
the right storage image before you start creating ranks. From the Storage image
drop-down menu, select the desired storage image you want to access.

Figure 13-26 Select Create Ranks

2. The Create New Ranks window opens (see Figure 13-27).

Figure 13-27 Create New Rank window



In order to create a rank, you have to provide the following information:
– Storage Type: The type of extent for which the rank is to be configured. The storage
type can be set to one of the following values:
• Fixed block (FB) extents = 1 GB. In fixed block architecture, the data (the logical
volumes) is mapped over fixed-size blocks or sectors.
• Count key data (CKD) extents = CKD Mod 1. In count-key-data architecture, the
data field stores the user data.
– RAID Type: The supported or available RAID types are RAID 5, RAID 6, and RAID 10.
– Type of configuration: There are two options available:
• Automatic is the default and it allows the system to choose the best configuration of
the physical resources based on your capacity and DDM type selection.
• The Manual option can be used if you want to have more control over the resources.
When you select this option, a table of available array sites is displayed. You have
to manually select resources from the table.
– Encryption Group indicates if encryption is enabled or disabled for ranks. Select 1 from
the Encryption Group drop-down menu if the encryption feature is enabled on this
machine. Otherwise, select None.
– If you select the Automatic configuration type, you need to provide additional
information:
• From the DA Pair Usage drop-down menu, select the appropriate action. The
Spread among all pairs option balances ranks evenly across all available Device
Adapter (DA) pairs. There are another two options available: Spread among least
used pairs and Sequentially fill all pairs. The bar graph displays, in real-time, the
effect of your choice.
• From the Drive Class drop-down menu, select the DDM type you wish to use for the
new array.
• From the Select capacity to configure drop-down menu, click the desired total
capacity.
If you want to create many ranks with different characteristics (Storage, RAID, and DDM
type) in one task, select Add Another Rank as many times as required.
In our example, we create two CKD ranks with RAID 6 using 146 GB 15 K DDMs and two
FB ranks on 300 GB 10 K DDMs with RAID 5.
Click OK to continue.



3. The Create rank verification window is displayed (Figure 13-28). Each array site listed in
the table is assigned to the corresponding array we created in 13.3.2, “Create arrays” on
page 263. At this stage, you can still change your configuration by deleting the ranks from
the lists and adding new ranks if required. Click Create All after you decide to continue
with the proposed configuration.

Figure 13-28 Create rank verification window

4. The message in Figure 13-29 appears.

Figure 13-29 Creating ranks: In progress message

The duration of the Create rank task is longer than the Create array task. Click the View
Details button in order to check the overall progress. It takes you to the Long Running
Task Summary window, which shows all tasks executed on this DS8800 storage
subsystem. Click the task link name (which has an In progress state) or select it and click
Properties from the Select action drop-down menu, as shown in Figure 13-30.



Figure 13-30 Long Running Task Summary: Select task properties

In the task properties window, you can see the progress and task details, as shown in
Figure 13-31.

Figure 13-31 Long Running Task Summary: Task properties



5. After the task is completed, go back to Disk Configuration and, under the Rank tab, check
the list of newly created ranks (see Figure 13-32).

Figure 13-32 List of all created ranks

The bar graph in the Disk Configuration Summary section has changed. There are ranks for
both CKD and FB, but they are not assigned to Extent Pools.
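
Ranks can also be created with the DS CLI. The following commands are only a sketch, under
the assumption that arrays A0 and A12 already exist and that the storage types match your
planned use of each array; adjust the array IDs to your own configuration:

dscli> mkrank -array A0 -stgtype ckd
dscli> mkrank -array A12 -stgtype fb
dscli> lsrank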



13.3.4 Create Extent Pools
To create an Extent Pool, perform the following steps in the DS GUI:
1. In the DS GUI welcome window, from the My Work section, expand Configure Storage
and click Disk Configuration. This opens the Disk Configuration window and the Extent
Pool information (see Figure 13-33).
The bar graph in the Disk Configuration Summary section provides information about
unassigned and assigned capacity. In our example, there are ranks defined, but still not
assigned to any Extent Pool.
Select Create Extent Pools from the Select action drop-down menu, as shown in
Figure 13-33.

Figure 13-33 Select Create Extent Pools

Note: If you have defined more storage complexes or storage images, be sure to select the
right storage image before you create Extent Pools. From the Storage image drop-down
menu, select the desired storage image you want to access.



2. The Create New Extent Pools window opens, as shown in Figure 13-34. Scroll down to
see the rest of the window and provide input for all the fields, as shown in Figure 13-35.

Figure 13-34 Create New Extent Pools window: part 1

Figure 13-35 Create New Extent Pools window: part 2



To create an Extent Pool, you have to provide the following information:
– Storage Type: The type of extent for which the rank is to be configured. The storage
type can be set to one of the following values:
• Fixed block (FB) extents = 1 GB. In the fixed block architecture, the data (the logical
volumes) is mapped over fixed-size blocks or sectors.
• Count key data (CKD) extents = CKD Mod 1. In the count-key-data architecture, the
data field stores the user data.
– RAID Type: The supported or available RAID types are RAID 5, RAID 6, and RAID 10.
– Type of configuration: There are two options available:
• Automatic is the default and it allows the system to choose the best configuration of
physical resources based on your capacity and DDM type selection.
• The Manual option can be used if you want to have more control over the resources.
When you select this option, a table of available array sites is displayed. You have
to manually select resources from the table.
– Encryption Group indicates if encryption is enabled or disabled for ranks. Select 1 from
the Encryption Group drop-down menu if the encryption feature is enabled on this
machine. Otherwise, select None.
– If you select the Automatic configuration type, you need to provide additional
information:
• From the DA Pair Usage drop-down menu, select the appropriate action. The
Spread among all pairs option balances ranks evenly across all available Device
Adapter (DA) pairs. For example, no more than half of the ranks attached to a DA
pair is assigned to each server, so that each server's DA within the DA pair has the
same number of ranks. There are another two options available: Spread among
least used pairs and Sequentially fill all pairs. The bar graph displays, in real-time,
the effect of your choice.
• From Drive Class drop-down menu, select the DDM type you wish to use for the
new array.
• From the Select capacity to configure drop-down menu, click the desired total
capacity.
– Number of Extent Pools: Choose the number of Extent Pools using previously selected
ranks. The ideal configuration creates two Extent Pools per storage type, dividing all
ranks equally among each pool. There are three available options:
• Two Extent Pools (ease of management)
• Single Extent Pool
• Extent Pool for each rank (physical isolation)
– Nickname Prefix and Suffix: Provides a unique name for each Extent Pool. This setup
is very useful if you have many Extent Pools, each assigned to different hosts and
platforms.
– Server assignment: The Automatic option allows the system to determine the best
server for each Extent Pool. It is the only choice when you select the Two Extent Pool
option as the number of Extent Pools.
– Storage Threshold: Specify the maximum percentage of the storage threshold to
generate an alert. This allows you to make adjustments before a storage full condition
occurs.



– Storage reserved: Specifies the percentage of the total Extent Pool capacity that is
reserved. This percentage is prevented from being allocated to volumes or
space-efficient storage.
3. If you have both the FB and CKD storage type, or you have different types of DDMs
installed, you need to create more Extent Pools accordingly. In order to create all the
required Extent Pools in one task, select Add Another Pool as many times as required.
In our example (Figure 13-36), we create, for each storage type, two Extent Pools.
Click OK to continue.

Figure 13-36 Create New Extent Pools for FB and CKD



4. The Create Extent Pool verification window opens (Figure 13-37), where you check the
names of the Extent Pools that are going to be created, their capacity, server assignments,
RAID protection and other information. If you want to add capacity to the Extent Pools or
add another Extent Pool, select the appropriate action from the Select action drop-down
list. After you are satisfied with the specified values, click Create all to create the Extent
Pools.

Figure 13-37 Create Extent Pool verification window

5. The message shown in Figure 13-38 appears.

Figure 13-38 Creating Extent Pools: In progress message

Click the View Details button to check the overall progress. It takes you to the Long
Running Task Summary window, where you can see all tasks executed on this DS8800
storage subsystem. Click your task link name (which has the In progress state) to see the
task progress, as shown in Figure 13-39.



Figure 13-39 Long Running Task Summary: Task properties

6. After the task is completed, go back to Disk Configuration and, under the Extent Pools tab,
check the list of newly created Extent Pools (see Figure 13-40).

Figure 13-40 List of all created Extent Pools



The bar graph in the Disk Configuration Summary section has changed. There are ranks
assigned to Extent Pools and you can create new volumes from each Extent Pool capacity.
7. Before you start allocating space on each Extent Pool, you can check its definitions and
verify if all the settings match the ones for your planned logical configuration design. There
are many options available from the Select action drop-down menu, such as add or
remove capacity to the pool, view Extent Pool or DDM properties, or dynamically merge
two Extent Pools, as shown in Figure 13-41. To check the Extent Pool properties, select
the desired Extent Pool and, from the Select action drop-down menu, click Properties.

Figure 13-41 Select Extent Pools properties

8. The Single Pool properties window opens (Figure 13-42). Basic Extent Pool information is
provided here as well as volume relocation related information. You can, if necessary,
change the Extent Pool Name, Storage Threshold, and Storage Reserved values and
select Apply to commit all the changes.

Figure 13-42 Single Pool Properties: General tab

9. For more information about drive types or ranks included in the Extent Pool, select the
appropriate tab. Click OK to return to the Disk Configuration window.
10.To discover more details about the DDMs, select the desired Extent Pool from the Disk
Configuration window and, from the Select action drop-down menu, click DDM
Properties, as shown in Figure 13-43.



Figure 13-43 Extent Pool: DDM Properties

Use the DDM Properties window to view all the DDMs that are associated with the
selected Extent Pool and to determine the DDMs state. You can print the table, download
it in .csv format, and modify the table view by selecting the appropriate icon at the top of
the table.
Click OK to return to the Disk Configuration window.
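
For reference, a comparable DS CLI sequence is shown below. It is a sketch only: the pool
names, rank group assignments, and rank IDs are examples and must be adapted to your own
layout. The mkextpool command creates an empty pool assigned to server 0 or server 1 (rank
group 0 or 1), and chrank assigns a rank to the pool:

dscli> mkextpool -rankgrp 0 -stgtype fb ITSO_FB_0
dscli> mkextpool -rankgrp 1 -stgtype fb ITSO_FB_1
dscli> chrank -extpool P0 R0
dscli> chrank -extpool P1 R1
dscli> lsextpool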

13.3.5 Configure I/O ports


Before you can assign host attachments to I/O ports, you must define the format of the I/O
ports. There are four FCP/FICON ports on each card, and each port is independently
configurable using the following steps:
1. Expand Manage hardware.
2. Select Storage Complexes. The Storage Complexes Summary window opens, as shown
in Figure 13-44.

Figure 13-44 Storage Complexes Summary window

3. Select the storage image for which you want to configure the ports and, from the Select
action drop-down menu, click Configure I/O Ports (under the Storage Image section of
the menu).
4. The Configure I/O Port window opens, as shown in Figure 13-45.



Here you select the ports that you want to format and then click the desired port format
(FcSf, FC-AL, or FICON) from the Select action drop-down menu.

Figure 13-45 Select I/O port format

You get a warning message that the ports might become unusable by the hosts that are
currently connected to them.
5. You can repeat this step to format all ports to their required function. Multiple port selection
is supported.
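
The equivalent DS CLI command is setioport. The example below is only a sketch: the port IDs
are hypothetical, and you should verify the exact topology keywords for your DS CLI level (the
values shown here are intended to correspond to the FICON and FcSf port formats):

dscli> lsioport
dscli> setioport -topology ficon I0000
dscli> setioport -topology scsi-fcp I0001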

13.3.6 Configure logical host systems


In this section, we show you how to configure host systems. This applies only to open
systems hosts. A default FICON host definition is automatically created after you define an
I/O port to be a FICON port.

To create a new host system, do the following:


1. Expand Manage hardware.
2. Select Host Connections. The Host systems window displays, as shown in Figure 13-46.

Figure 13-46 Host Connections summary



In our example, we do not have any host connections defined yet. Under the Tasks
section, there are shortcut links for different actions. If you want to modify the I/O port
configuration previously defined, you can click the Configure I/O ports link.

Tip: You can use the View host port login status link to query the host that is logged
into the system or use this window to debug host access and switch configuration
issues.

If you have more than one storage image, you have to select the right one and then, to
create a new host, select the Create new host connection link in the Tasks section.
3. The resulting windows guide you through the host configuration, beginning with the
window in Figure 13-47.

Figure 13-47 Define Host Ports window

In the General host information window, enter the following information:


a. Host Connection Nickname: Name of the host.
b. Port Type: You must specify whether the host is attached over an FC Switch fabric
(P-P) or direct FC arbitrated loop to the DS8000.
c. Host Type: In our example, we create a Windows host. The drop-down menu gives you
a list of host types from which to select.
d. Enter the Host WWPN numbers of the host or select the WWPN from the drop-down
menu and click the Add button next to it.



Once the host entry is added into the table, you can manually add a description of each
host. When you have entered the necessary information, click Next.
4. The Map Host Ports to a Volume Group window appears, as shown in Figure 13-48 on
page 284. In this window, you can choose the following options:
– Select the option Map at a later time to create a host connection without mapping host
ports to a volume group.
– Select the option Map to a new volume group to create a new volume group to use in
this host connection.
– Select the option Map to an existing volume group to map to a volume group that is
already defined. Choose an existing volume group from the menu. Only volume groups
that are compatible with the host type that you selected from the previous window are
displayed.
In our example, we have only the first two options available, because we have not created any
volume groups on this machine. Therefore, we will map to a new volume group.
Click Next once you select the appropriate option.

Figure 13-48 Map Host Ports to a Volume Group window

The Define I/O Ports window opens, as shown in Figure 13-49.

Figure 13-49 Define I/O Ports window



5. From the Define I/O ports window, you can choose if you want to automatically assign
your I/O ports or manually select them from the table. Defining I/O ports determines which
I/O ports can be used by the host ports in this host connection. If specific I/O ports are
chosen, then the host ports are only able to access the volume group on those specific I/O
ports. After defining I/O ports, selecting the Next button directs you to the verification
window where you can approve your choices before you commit them.
The Verification window opens, as shown in Figure 13-50.

Figure 13-50 Verification window

6. In the Verification window, check the information that you entered during the process. If
you want to make modifications, select Back, or you can cancel the process. After you
have verified the information, click Finish to create the host system. This action takes you
back to the Host Connection window and Manage Host Connections table, where you can
see the list of all created host connections.

If you need to make changes to a host system definition, select your host in the Manage Host
Connections table and choose the appropriate action from the drop-down menu, as shown in
Figure 13-51.

Note: Be aware that you have other selection possibilities. We show only one way here.



Figure 13-51 Modify host connections
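
Host connections can also be defined with the DS CLI. The following sketch assumes a
hypothetical WWPN, an already existing volume group V0, and an AIX (pSeries) host type;
replace these values with the ones that apply to your host:

dscli> mkhostconnect -wwname 10000000C9123456 -hosttype pSeries -volgrp V0 ITSO_AIX_01
dscli> lshostconnect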

13.3.7 Create fixed block volumes


This section explains the creation of fixed block (FB) volumes:
1. Expand Configure Storage.
2. Select Open Systems Volumes to open the Open Systems Storage Summary window
shown in Figure 13-52.

Figure 13-52 Open Systems Volumes window



3. If you have more than one storage image, you have to select the appropriate one.
In the Tasks pane at the bottom of the window, click Create new volumes. The Create
Volumes window shown in Figure 13-53 appears.

Figure 13-53 Create Volumes: Select Extent Pools

4. The table in the Create Volumes window contains all the Extent Pools that were previously
created for the FB storage type. To ensure a balanced configuration, select Extent Pools in
pairs (one from each server). If you select multiple pools, the new volumes are assigned to
the pools based on the assignment option that you select on this window.
Click Next to continue. The Define Volume Characteristics window appears, as shown in
Figure 13-54.



Figure 13-54 Add Volumes: Define Volume Characteristics

5. Select the Volume type, Size, Volume quantity, Storage allocation method, Extent
allocation method, Nickname prefix, Nickname suffix, and one or more volume groups (if
you want to add this new volume to a previously created volume group).
When your selections are complete, click Add Another if you want to create more
volumes with different characteristics or click OK to continue. The Create Volumes window
opens, as shown in Figure 13-55 on page 288.

Figure 13-55 Create Volumes window

6. If you need to make any further modifications to the volumes in the table, select the
volumes you are about to modify and choose the appropriate action from the Select action
drop-down menu. Otherwise, click Next to continue the process.
7. We need to select an LSS for all created volumes. In our example, we select the Automatic
assignment method, where the system assigns five volume addresses to LSS 00 and five
volume addresses to LSS 01 (see Figure 13-56).



Figure 13-56 Select LSS

8. Click Finish to continue.



9. The Create Volumes Verification window shown in Figure 13-57 opens, listing all the
volumes that are going to be created. If you want to add more volumes or modify the
existing volumes, you can do so by selecting the appropriate action from the Select action
drop-down list. Once you are satisfied with the specified values, click Create all to create
the volumes.

Figure 13-57 Create Volumes Verification window

10.The Creating Volumes information window opens. Depending on the number of volumes,
the process can take a while to complete. Optionally, click the View Details button in order
to check the overall progress. It takes you to the Long Running Task Properties window,
where you can see the task progress.
11.After the creation is complete, a final window opens. You can select View detail or Close.
If you click Close, you return to the main Open system Volumes window, as shown in
Figure 13-58.

Figure 13-58 Open Systems Volumes: Summary



12.The bar graph in the Open Systems - Storage Summary section has changed. From there,
you can now select other actions, such as Manage existing volumes. The Manage
Volumes window is shown in Figure 13-59.

Figure 13-59 Open Systems Volumes: Manage Volumes
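
The corresponding DS CLI command is mkfbvol. The following is only a sketch: the Extent Pool
IDs, capacity, nickname, and volume IDs are examples and must match your own configuration
and address plan. Because the first two digits of a volume ID identify its LSS, this example
spreads the volumes across LSS 10 and LSS 11:

dscli> mkfbvol -extpool P0 -cap 12 -name ITSO_AIX 1000-1004
dscli> mkfbvol -extpool P1 -cap 12 -name ITSO_AIX 1100-1104
dscli> lsfbvol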

13.3.8 Create volume groups


To create a volume group, perform this procedure:
1. Expand Configure Storage.
2. Select Open Systems Volume Groups.
3. To create a new volume group, select Create from the Select action drop-down menu, as
shown in Figure 13-60.

Figure 13-60 Open Systems Volume Groups window: Select Create



The Define Volume Group Properties window shown in Figure 13-61 opens.

Figure 13-61 Create Volume Group Properties window

4. In the Define Volume Group Properties window, enter the nickname for the volume group
and select the host type from which you want to access the volume group. If you select
one host (for example, IBM System p), all other host types with the same addressing
method are automatically selected. This does not affect the functionality of the volume
group; it supports the host type selected.
5. Select the volumes to include in the volume group. If you have to select a large number of
volumes, you can specify the LSS so that only these volumes display in the list, and then
you can select all.
6. Click Next to open the Verification window shown in Figure 13-62.

Figure 13-62 Create New Volume Group Verification window



7. In the Verification window, check the information you entered during the process. If you
want to make modifications, select Back, or you can cancel the process altogether. After
you verify the information, click Finish to create the volume group. After the
creation completes, a last window appears, where you can select View detail or Close.
8. After you select Close, you will see the new volume group in the Volume Group window.
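
A volume group can also be created with the DS CLI, as in the following sketch. The volume
group type, volume IDs, and nickname are examples only; scsimask is typically used for host
types, such as pSeries, that support volume masking:

dscli> mkvolgrp -type scsimask -volume 1000-1004 ITSO_AIX_VG
dscli> lsvolgrp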

13.3.9 Create LCUs and CKD volumes


In this section, we show how to create logical control units (LCUs) and CKD volumes. This is
only necessary for IBM System z.

Important: The LCUs you create must match the logical control unit definitions on the host
I/O configuration. More precisely, each LCU ID number you select during the create
process must correspond to a CNTLUNIT definition in the HCD/IOCP with the same
CUADD number. It is vital that the two configurations match each other.

Perform the following steps:


1. Select Configure Storage  System z Volumes and LCUs in the My Work task list. The
System z Storage Summary window shown in Figure 13-63 opens.

Figure 13-63 System z Volumes and LCUs window

2. Select a storage image from the Select storage image drop-down menu if you have more
than one. The window is refreshed to show the LCUs in the storage image.



3. To create new LCUs, select Create new LCUs with volumes from the tasks list.
The Define LCUs (Figure 13-64) window opens.

Figure 13-64 Create LCUs window

4. Select the LCUs you want to create. You can select them from the list displayed on the left
by clicking the number, or you can use the map. When using the map, click the available
LCU square. You have to enter all the other necessary parameters for the selected LCUs.
– Starting SSID: Enter a Subsystem ID (SSID) for the LCU. The SSID is a four character
hexadecimal number. If you create multiple LCUs at one time, the SSID number is
incremented by one for each LCU. The LCUs attached to the same operating system
image must have different SSIDs. We recommend that you use unique SSID numbers
across your whole environment.
– LCU type: Select the LCU type you want to create. Select 3990 Mod 6 unless your
operating system does not support Mod 6. The options are:
• 3990 Mod 3
• 3990 Mod 3 for TPF
• 3990 Mod 6
The following parameters affect the operation of certain Copy Services functions:
– Concurrent copy session timeout: The time in seconds that any logical device on this
LCU in a concurrent copy session stays in a long busy state before suspending a
concurrent copy session.



– z/OS Global Mirror Session timeout: The time in seconds that any logical device in a
z/OS Global Mirror session (XRC session) stays in long busy before suspending the
XRC session. The long busy occurs because the data mover has not offloaded data
when the logical device (or XRC session) is no longer able to accept additional data.
With recent enhancements to z/OS Global Mirror, there is now an option to suspend
the z/OS Global Mirror session instead of presenting the long busy status to the
applications.
– Consistency group timeout: The time in seconds that remote mirror and copy
consistency group volumes on this LCU stay extended long busy after an error that
causes a consistency group volume to suspend. While in the extended long busy state,
I/O is prevented from updating the volume.
– Consistency group timeout enabled: Check the box to enable remote mirror and copy
consistency group timeout option on the LCU.
– Critical mode enabled: Check the box to enable critical heavy mode. Critical heavy
mode controls the behavior of the remote copy and mirror pairs that have a primary
logical volume on this LCU.
When all necessary selections have been made, click Next to proceed to the next window.
5. In the next window (Figure 13-65), you must configure your base volumes and, optionally,
assign alias volumes. The Parallel Access Volume (PAV) license function should be
activated in order to use alias volumes.

Figure 13-65 Create Volumes window



Define the base volume characteristics in the first third of this window with the following
information:
– Base type:
• 3380 Mod 2
• 3380 Mod 3
• 3390 Custom
• 3390 Standard Mod 3
• 3390 Standard Mod 9
• 3390 Mod A (used for Extended Address Volumes - EAV)
– Volume size: This field must be changed if you use the volume type 3390 Custom or
3390 Mod A.
– Size format: This format only has to be changed if you want to enter a specific number
of cylinders. It can only be used with the 3390 Custom or 3390 Mod A volume
types.
– Volume quantity: Here you must enter the number of volumes you want to create.
– Base start address: The starting address of volumes you are about to create. Specify a
decimal number in the range of 0 - 255. This defaults to the value specified in the
Address Allocation Policy definition.
– Order: Select the address allocation order for the base volumes. The volume
addresses are allocated sequentially, starting from the base start address in the
selected order. If an address is already allocated, the next free address is used.
– Storage allocation method: This field only appears on systems that have the FlashCopy
SE function activated. The options are:
• Standard: Allocate standard volumes.
• Track Space Efficient (TSE): Allocate Space Efficient volumes to be used as
FlashCopy SE target volumes.
– Extent allocation method: Defines how volume extents are allocated on the ranks in the
Extent Pool. This field is not applicable for TSE volumes. The options are:
• Rotate extents: The extents of a volume are allocated on all ranks in the Extent Pool
in a round-robin fashion. This function is called Storage Pool Striping. This
allocation method can improve performance because the volume is allocated on
multiple ranks. It also helps to avoid hotspots by spreading the workload more
evenly on the ranks. This is the default allocation method.
• Rotate volumes: All extents of a volume are allocated on the rank that contains
most free extents. If the volume does not fit on any one rank, it can span multiple
ranks in the Extent Pool.
Select Assign the alias volume to these base volumes if you use PAV or HyperPAV and
provide the following information:
– Alias start address: Enter the first alias address as a decimal number between 0 - 255.
– Order: Select the address allocation order for the alias volumes. The volume
addresses are allocated sequentially starting from the alias start address in the
selected order.
– Evenly assign alias volumes among bases: When you select this option, you have to
enter the number of aliases you want to assign to each base volume.



– Assign aliases using a ratio of aliases to base volumes: This option gives you the ability
to assign alias volumes using a ratio of alias volumes to base volumes. The first value
specifies how many aliases are assigned, and the second value selects which base
volumes receive an alias. If you select 1, each base volume gets an alias volume. If you
select 2, every second base volume gets an alias volume. If you select 3, every third
base volume gets an alias volume. The selection always starts with the first volume.

Note: You can assign all aliases in the LCU to just one base volume if you have
implemented HyperPAV or Dynamic alias management. With HyperPAV, the alias
devices are not permanently assigned to any base volume even though you initially
assign each to a certain base volume. Rather, they reside in a common pool and are
assigned to base volumes as needed on a per I/O basis. With Dynamic alias
management, WLM will eventually move the aliases from the initial base volume to
other volumes as needed.

If your host system is using Static alias management, you need to assign aliases to all
base volumes on this window, because the alias assignments made here are
permanent in nature. To change the assignments later, you have to delete and
re-create aliases.

In the last section of this window, you can optionally assign the alias nicknames for your
volumes:
– Nickname prefix: If you select a nickname suffix of None, you must enter a nickname
prefix in this field. Blanks are not allowed. If you select a nickname suffix of Volume ID
or Custom, you can leave this field blank.
– Nickname suffix: You can select None as described above. If you select Volume ID,
you have to enter a four character volume ID for the suffix, and if you select Custom,
you have to enter a four digit hexadecimal number or a five digit decimal number for the
suffix.
– Start: If you select Hexadecimal sequence, you have to enter a number in this field.

Note: The nickname is not the System z VOLSER of the volume. The VOLSER is
created later when the volume is initialized by the ICKDSF INIT command.

Click OK to proceed. The Create Volumes window shown in Figure 13-66 appears.

Figure 13-66 Create Volumes window



6. In the Create Volumes window (Figure 13-66 on page 297), you can select the just created
volumes in order to modify or delete them. You can also create more volumes if necessary.
Select Next if you do not need to create more volumes at this time.
7. In the next window (Figure 13-67), you can change the Extent Pool assignment to your
LCU. Select Finish if you do not want to make any changes here.

Figure 13-67 LCU to Extent Pool Assignment window

8. The Create LCUs Verification window appears, as shown in Figure 13-68, where you can
see a list of all the volumes that are going to be created. If you want to add more volumes or
modify the existing ones, you can do so by selecting the appropriate action from the Select
action drop-down list. Once you are satisfied with the specified values, click Create all to
create the volumes.

Figure 13-68 Create LCUs Verification window

9. The Creating Volumes information window opens. Depending on the number of volumes,
the process can take a while to complete. Optionally, click the View Details button in order
to check the overall progress. This action takes you to the Long Running Task Properties
window, where you can see the task progress.



10.After the creation is complete, a final window is displayed. You can select View detail or
Close. If you click Close, you are returned to the main System z Volumes and LCUs window, as
shown in Figure 13-69.

Figure 13-69 System z Volumes and LCUs: Summary

The bar graph in the System z Storage Summary section has changed.
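
LCUs and CKD volumes can also be created with the DS CLI. The following sketch assumes
that CKD Extent Pools P2 (server 0) and P3 (server 1) already exist and uses hypothetical LCU
IDs, SSIDs, nicknames, and volume addresses; a capacity of 3339 cylinders corresponds to a
3390 Mod 3:

dscli> mklcu -qty 2 -id 00 -ss FF00
dscli> mkckdvol -extpool P2 -cap 3339 -name ITSO_CKD 0000-0003
dscli> mkckdvol -extpool P3 -cap 3339 -name ITSO_CKD 0100-0103
dscli> lsckdvol

Alias (PAV) volumes can then be added with the mkaliasvol command.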

13.3.10 Additional actions on LCUs and CKD volumes


When you select Manage existing LCUs and Volumes (Figure 13-69), you can perform
additional actions at the LCU or volume level.

As shown in Figure 13-70, you have the following options:


򐂰 Create: See 13.3.9, “Create LCUs and CKD volumes” on page 293 for information about
this option.
򐂰 Clone LCU: See 13.3.9, “Create LCUs and CKD volumes” on page 293 for more
information about this option. Here all properties from the selected LCU will be cloned.
򐂰 Add Volumes: Here you can add base volumes to the selected LCU. See 13.3.9, “Create
LCUs and CKD volumes” on page 293 for more information about this option.
򐂰 Add Aliases: Here you can add alias volumes without creating additional base volumes.
򐂰 Properties: Here you can display additional properties of the selected LCU. You can also
change some of them, such as the timeout values.
򐂰 Delete: Here you can delete the selected LCU. This action must be confirmed, because it
also deletes all of the LCU's volumes, including volumes that contain data.



Figure 13-70 Manage LCUs and Volumes window 1

The next window (Figure 13-71) shows that you can take actions at the volume level once you
have selected an LCU:
򐂰 Increase capacity: Use this action to increase the size of a volume. The capacity of a 3380
volume cannot be increased. After the operation completes, you need to use the ICKDSF
REFORMAT REFVTOC command to adjust the volume VTOC to reflect the additional
cylinders. Note that the capacity of a volume cannot be decreased.
򐂰 Add Aliases: Use this action when you want to define additional aliases without creating
new base volumes.
򐂰 View properties: Here you can view the volume's properties. The only value you can
change is the nickname. You can also see whether the volume is online from the DS8800
side.
򐂰 Delete: Here you can delete the selected volume. This action must be confirmed, because
it also deletes all alias volumes and all data on this volume.



Figure 13-71 Manage LCUs and Volumes window 2

Tip: After initializing the volumes using the ICKDSF INIT command, you also will see the
VOLSERs in this window. This is not done in this example.

The Increase capacity action can be used to dynamically expand volume capacity without
needing to bring the volume offline in z/OS. It is good practice to start using 3390 Mod A
volumes, because you can expand the capacity and change the device type of your existing
3390 Mod 3, 3390 Mod 9, and 3390 Custom volumes. Keep in mind that 3390 Mod A volumes
can only be used on z/OS V1.10 or later and that, after the capacity has been increased on the
DS8800, you need to run ICKDSF to rebuild the VTOC index, allowing it to recognize the new
volume size.
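
In the DS CLI, dynamic volume expansion is performed with the chckdvol (or, for open systems
volumes, chfbvol) command. The following is a sketch with hypothetical values, expanding CKD
volume 0001 from a Mod 3 size to 10017 cylinders (a 3390 Mod 9 size) and FB volume 1000 to
20 GB; after the CKD expansion, rebuild the VTOC index with ICKDSF as described above:

dscli> chckdvol -cap 10017 0001
dscli> chfbvol -cap 20 1000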



13.4 Other DS GUI functions
In this section, we discuss additional DS GUI functions introduced with the DS8000 series.

13.4.1 Check the status of the DS8000


Perform these steps in order to display and explore the overall status of your DS8800 system:
1. In the My Work section in the DS GUI welcome window, navigate to Manage Hardware 
Storage Complexes. The Storage Complexes Summary window opens. Select your
storage complex and, from the Select action drop-down menu, click System Summary, as
shown in Figure 13-72.

Figure 13-72 Select Storage Unit System Summary

2. The new Storage Complex window provides general DS8800 system information. It is
divided into four sections (see Figure 13-73):
a. System Summary: You can quickly identify the percentage of capacity that is currently
used, and the available and used capacity for open systems and System z. In
addition, you can check the system state and obtain more information by clicking the
state link.
b. Management Console information.
c. Performance: Provides performance graphs for host MBps, host KIOps, rank MBps,
and rank KIOps. This information is periodically updated every 60 seconds.
d. Racks: Represents the physical configuration.



Figure 13-73 System Summary overview

3. In the Rack section, the number of racks shown matches the racks physically installed in
the storage unit. If you position the mouse pointer over the rack, additional rack
information is displayed, such as the rack number, the number of DDMs, and the number
of host adapters (see Figure 13-74).

Figure 13-74 System Summary: rack information



13.4.2 Explore the DS8800 hardware
The DS8800 GUI allows you to explore the hardware installed in your DS8800 system by locating
specific physical and logical resources (arrays, ranks, Extent Pools, and others). Hardware
Explorer shows system hardware and a mapping between logical configuration objects and
DDMs.

You can explore the DS8800 hardware components and discover the correlation between
logical and physical configuration by performing the following steps:
1. In the My Work section in the DS GUI welcome window, navigate to Manage Hardware 
Storage Complexes.
2. The Storage Complexes Summary window opens. Select your storage complex and, from
the Select action drop-down menu, click System Summary.
3. Select the Hardware Explorer tab to switch to the Hardware Explorer window (see
Figure 13-75).

Figure 13-75 Hardware Explorer window

4. In this window, you can explore the specific hardware resources installed by selecting the
appropriate component under the Search rack criteria by resources drop-down menu. In
the Rack section of the window, there is a front and rear view of the DS8800 rack. You can
interact with the rack image to locate resources. To view a larger image of a specific
location (displayed in the right pane of the window), use your mouse to move the yellow
box to the desired location across the DS8800 front and rear view.



5. In order to check where the physical disks of arrays are located, change the search criteria
to Array and from the Available Resources section, click one or more array IDs that you
want to explore. After you click the array ID, the location of each DDM is highlighted in the
rack image. Each disk has an appropriate array ID label. Use your mouse to move the
yellow box in the rack image on the left to the desired location across the DS8800 front
and rear view to view the magnified view of this section, as shown in Figure 13-76.

Figure 13-76 View arrays

6. After you have identified the location of array DDMs, you can position the mouse pointer
over the specific DDM to display more information, as shown in Figure 13-77.

Figure 13-77 DDM information



7. Change the search criteria to Extent Pool to discover more about each Extent Pool
location. Select as many Extent Pools as you need in the Available Resources section and
find the physical location of each one, as shown in Figure 13-78.

Figure 13-78 View Extent Pools

8. Another very useful function in the Hardware Explorer GUI section is the ability to identify
the physical location of each FCP or FICON port. Change the search criteria to I/O Ports
and select one or more ports in the Available Resources section. Use your mouse to move
the yellow box in the rack image to the rear DS8800 view (bottom pane), where the I/O
ports are located (see Figure 13-79).

Figure 13-79 View I/O ports

Click the highlighted port to discover its basic properties and status.



14

Chapter 14. Configuration with the DS Command-Line Interface
In this chapter, we explain how to configure storage on the IBM System Storage DS8000
storage subsystem by using the DS Command-Line Interface (DS CLI). We include the
following sections:
򐂰 DS Command-Line Interface overview
򐂰 Configuring the I/O ports
򐂰 Configuring the DS8000 storage for FB volumes
򐂰 Configuring DS8000 Storage for Count Key Data Volumes

For information about using the DS CLI for Copy Services configuration, encryption handling,
or LDAP usage, refer to the documents listed here.

For Copy Services configuration in the DS8000 using the DS CLI, see the following books:
򐂰 IBM System Storage DS: Command-Line Interface User's Guide, GC53-1127
򐂰 IBM System Storage DS8000: Copy Services for Open Systems, SG24-6788
򐂰 IBM System Storage DS8000: Copy Services for IBM System z, SG24-6787

For DS CLI commands related to disk encryption, see IBM System Storage DS8700 Disk
Encryption Implementation and Usage Guidelines, REDP-4500.

For DS CLI commands related to LDAP authentication, see IBM System Storage DS8000:
LDAP Authentication, REDP-4505.



14.1 DS Command-Line Interface overview
The command-line interface provides a full-function command set that allows you to check
your Storage Unit configuration and perform specific application functions when necessary.
For detailed information about DS CLI use and setup, refer to IBM System Storage DS:
Command-Line Interface User's Guide, GC53-1127.

The following list highlights a few of the functions that you can perform with the DS CLI:
򐂰 Create user IDs that can be used with the GUI and the DS CLI.
򐂰 Manage user ID passwords.
򐂰 Install activation keys for licensed features.
򐂰 Manage storage complexes and units.
򐂰 Configure and manage Storage Facility Images.
򐂰 Create and delete RAID arrays, ranks, and Extent Pools.
򐂰 Create and delete logical volumes.
򐂰 Manage host access to volumes.
򐂰 Check the current Copy Services configuration that is used by the Storage Unit.
򐂰 Create, modify, or delete Copy Services configuration settings.
򐂰 Integrate LDAP policy usage and configuration.
򐂰 Implement encryption functionality.

Note: The DS CLI version must correspond to the LMC level installed on your system. You
can have multiple versions of the DS CLI installed on your system, each in its own directory.

14.1.1 Supported operating systems for the DS CLI


The DS Command-Line Interface can be installed on many operating system platforms,
including AIX, HP-UX, Red Hat Linux, SUSE Linux, Novell NetWare, IBM i i5/OS, Oracle
Solaris, HP OpenVMS, VMware ESX, and Microsoft Windows.

Important: For the most recent information about currently supported operating systems,
refer to the IBM System Storage DS8000 Information Center website at:
http://publib.boulder.ibm.com/infocenter/ds8000ic/index.jsp

The DS CLI is supplied and installed via a CD that ships with the machine. The installation
does not require a reboot of the open systems host. The DS CLI requires Java 1.4.1 or later.
Java 1.4.2 is the preferred JRE on Windows, AIX, and Linux, and is supplied on the CD. Many
hosts might already have a suitable level of Java installed. The installation program checks for
this requirement during the installation process and does not install the DS CLI if you do not
have the correct version of Java.

The installation process can be performed through a shell, such as the bash or korn shell, or
the Windows command prompt, or through a GUI interface. If performed via a shell, it can be
performed silently using a profile file. The installation process also installs software that
allows the DS CLI to be completely uninstalled should it no longer be required.

14.1.2 User accounts


DS CLI communicates with the DS8000 system through the HMC console. Either the primary
or secondary HMC console may be used. DS CLI access is authenticated using HMC user
accounts. The same user IDs can be used for both DS CLI and DS GUI access. See 9.5, “HMC
user management” on page 191 for further detail about user accounts.



14.1.3 DS CLI profile
To access a DS8000 system with the DS CLI, you need to provide certain information with the
dscli command. At a minimum, the IP address or host name of the DS8000 HMC, a user
name, and a password are required. You can also provide information such as the output
format for list commands, the number of rows per page in the command-line output, and
whether a banner is included with the command-line output.

If you create one or more profiles to contain your preferred settings, you do not have to
specify this information each time you use DS CLI. When you launch DS CLI, all you need to
do is to specify a profile name with the dscli command. You can override the profile’s values
by specifying a different parameter value with the dscli command.

When you install the command-line interface software, a default profile is installed in the
profile directory with the software. The file name is dscli.profile, for example,
c:\Program Files\IBM\DSCLI\profile\dscli.profile for the Windows platform and
/opt/ibm/dscli/profile/dscli.profile for UNIX and Linux platforms.

You have several options for using profile files:


򐂰 You can modify the system default profile dscli.profile.
򐂰 You can make a personal default profile by making a copy of the system default profile as
<user_home>/dscli/profile/dscli.profile. The default home directory <user_home> is
designated as follows:
– Windows system: C:\Documents and Settings\<user_name>
– UNIX/Linux system: /home/<user_name>
򐂰 You can create specific profiles for different Storage Units and operations. Save the profile
in the user profile directory. For example:
– c:\Program Files\IBM\DSCLI\profile\operation_name1
– c:\Program Files\IBM\DSCLI\profile\operation_name2

Attention: The default profile file created when you install the DS CLI will potentially be
replaced every time you install a new version of the DS CLI. It is a good practice to open
the default profile and then save it as a new file. You can then create multiple profiles and
reference the relevant profile file using the -cfg parameter.

These profile files can be specified using the DS CLI command parameter -cfg
<profile_name>. If the -cfg file is not specified, the user’s default profile is used. If a user’s
profile does not exist, the system default profile is used.

Note: If there are two profiles with the same name, one in the system default directory and
one in your personal directory, your personal profile is used.

Profile change illustration


A simple way to edit the profile is to do the following:
1. From the Windows desktop, double-click the DS CLI icon.
2. In the command window that opens, enter the command cd profile.
3. In the profile directory, enter the command notepad dscli.profile, as shown in
Example 14-1.



Example 14-1 Command prompt operation
C:\Program Files\ibm\dscli>cd profile
C:\Program Files\IBM\dscli\profile>notepad dscli.profile

4. The notepad opens with the DS CLI profile in it. There are four lines you can consider
adding. Examples of these lines are shown in bold in Example 14-2.

Tip: The default newline delimiter is a UNIX delimiter, which may render text in notepad as
one long line. Use a text editor that correctly interprets UNIX line endings.

Example 14-2 DS CLI profile example


# DS CLI Profile
#
# Management Console/Node IP Address(es)
# hmc1 and hmc2 are equivalent to -hmc1 and -hmc2 command options.
#hmc1:127.0.0.1
#hmc2:127.0.0.1

# Default target Storage Image ID


# "devid" and "remotedevid" are equivalent to
# "-dev storage_image_ID" and "-remotedev storeage_image_ID" command options,
respectively.
#devid: IBM.2107-AZ12341
#remotedevid:IBM.2107-AZ12341

devid: IBM.2107-75ABCDE
hmc1: 10.0.0.250
username: admin
password: passw0rd

Adding the serial number by using the devid parameter, and the HMC IP address by using the
hmc1 parameter, is strongly suggested. Not only does this help you to avoid mistakes when
using multiple profiles, but you also do not need to specify these parameters for the dscli
commands that require them. Additionally, if you create a DS CLI profile for Copy Services
usage, then using the remotedevid parameter is strongly suggested for the same reasons. To
determine a storage system’s ID, use the lssi DS CLI command.

Although adding the user name and password parameters will simplify the DS CLI startup, it
is not suggested that you add them because a password is saved in clear text in the profile
file. It is better to create an encrypted password file with the managepwfile CLI command. A
password file generated using the managepwfile command is stored as
user_home_directory/dscli/profile/security/security.dat.
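
A sketch of creating such a password file is shown below; the user name and password are
examples only, and the exact parameters for your DS CLI level can be confirmed with the built-in
help:

dscli> help managepwfile
dscli> managepwfile -action add -name admin -pw passw0rd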

Important: Use care if adding multiple devid and HMC entries. Only one should be
uncommented (or more literally, unhashed) at any one time. If you have multiple hmc1 or
devid entries, the DS CLI uses the one closest to the bottom of the profile.

There are other customization parameters that affect dscli output; the most important are:
򐂰 banner - date/time with dscli version is printed for each command.
򐂰 header - column names are printed.
򐂰 paging - for interactive mode, it breaks output after a certain number of rows (24 by
default).



14.1.4 Command structure
This is a description of the components and structure of a command-line interface command.

A command-line interface command consists of one to four types of components, arranged in
the following order:
1. The command name: Specifies the task that the command-line interface is to perform.
2. Flags: Modify the command. They provide additional information that directs the
command-line interface to perform the command task in a specific way.
3. Flag parameters: Provide information that is required to implement the command
modification that is specified by a flag.
4. Command parameters: Provide basic information that is necessary to perform the
command task. When a command parameter is required, it is always the last component
of the command, and it is not preceded by a flag.
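
As an annotated illustration, consider the following command, which uses values from the
examples later in this chapter:

mkfbvol -extpool p0 -cap 10 1000

Here, mkfbvol is the command name, -extpool and -cap are flags, p0 and 10 are the
corresponding flag parameters, and 1000 is the command parameter (the ID of the volume to
create).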

14.1.5 Using the DS CLI application


You must log in to the DS CLI application to use any of the command modes. There are three
command modes for the DS CLI:
򐂰 Single-shot command mode
򐂰 Interactive command mode
򐂰 Script command mode

Single-shot command mode


Use the DS CLI single-shot command mode if you want to issue an occasional command but
do not want to keep a history of the commands that you have issued.

You must supply the login information and the command that you want to process at the same
time. Follow these steps to use the single-shot mode:
1. Enter:
dscli -hmc1 <hostname or ip address> -user <adm user> -passwd <pwd> <command>
or
dscli -cfg <dscli profile> <command>
2. Wait for the command to process and display the end results.

Example 14-3 shows the use of the single-shot command mode.

Example 14-3 Single-shot command mode


C:\Program Files\ibm\dscli>dscli -hmc1 10.10.10.1 -user admin -passwd pwd lsuser
Name Group State
=====================
admin admin locked
admin admin active
exit status of dscli = 0

Note: When typing the command, you can use the host name or the IP address of the
HMC. It is also important to understand that every time a command is executed in
single-shot mode, the user must be authenticated. The authentication process can take a
considerable amount of time.



Interactive command mode
Use the DS CLI interactive command mode when you have multiple transactions to process
that cannot be incorporated into a script. The interactive command mode provides a history
function that makes repeating or checking prior command usage easy to do.

Perform the following steps:


1. Log on to the DS CLI application at the directory where it is installed.
2. Provide the information that is requested by the information prompts. The information
prompts might not appear if you have provided this information in your profile file. The
command prompt switches to a dscli command prompt.
3. Begin using the DS CLI commands and parameters. You are not required to begin each
command with dscli because this prefix is provided by the dscli command prompt.
4. Use the quit or exit command to end interactive mode.

Tip: In interactive mode for long outputs, the message Press Enter To Continue...
appears. The number of rows can be specified in the profile file. Optionally, you can turn off
the paging feature in the profile file by using the paging:off parameter.

Example 14-4 shows the use of interactive command mode.

Example 14-4 Interactive command mode


# dscli -cfg ds8800.profile
dscli> lsarraysite
arsite DA Pair dkcap (10^9B) State Array
===========================================
S1 0 450.0 Assigned A0
S2 0 450.0 Assigned A1
S3 0 450.0 Assigned A2
S4 0 450.0 Assigned A3
S5 0 450.0 Assigned A4
S6 0 450.0 Assigned A5
S7 1 146.0 Assigned A6
S8 1 146.0 Assigned A7
S9 1 146.0 Assigned A8
S10 1 146.0 Assigned A9
S11 1 146.0 Assigned A10
S12 1 146.0 Assigned A11
S13 2 600.0 Assigned A12
S14 2 600.0 Assigned A13
S15 2 600.0 Assigned A14
S16 2 600.0 Assigned A15
S17 2 600.0 Assigned A16
S18 2 600.0 Assigned A17
S19 3 146.0 Assigned A18
S20 3 146.0 Assigned A19
S21 3 146.0 Assigned A20
S22 3 146.0 Assigned A21
Press Enter To Continue...

S23 3 146.0 Assigned A22


S24 3 146.0 Assigned A23
dscli> lssi
Name ID Storage Unit Model WWNN State ESSNet
==============================================================================
ATS_04 IBM.2107-75TV181 IBM.2107-75TV180 951 500507630AFFC29F Online Enabled



Note: When typing the command, you can use the host name or the IP address of the
HMC. In this case, only a single authentication needs to take place.

Script command mode


Use the DS CLI script command mode if you want to run a sequence of DS CLI commands from a
script file. The script that the DS CLI executes can contain only DS CLI commands.

In Example 14-5, we show the contents of a DS CLI script file. Note that it only contains DS
CLI commands, although comments can be placed in the file using a hash symbol (#). Empty
lines are also allowed. One advantage of using this method is that scripts written in this format
can be used by the DS CLI on any operating system into which you can install DS CLI.

For script command mode, you can turn off the banner and header for easier output parsing.
Also, you can specify an output format that might be easier to parse by your script.

Example 14-5 Example of a DS CLI script file


# Sample ds cli script file
# Comments can appear if hashed
lsarraysite
lsarray
lsrank

In Example 14-6, we start the DS CLI using the -script parameter and specifying a profile
and the name of the script that contains the commands from Example 14-5.

Example 14-6 Executing DS CLI with a script file


# dscli -cfg ds8800.profile -script ds8800.script
arsite DA Pair dkcap (10^9B) State Array
===========================================
S1 0 450.0 Assigned A0
S2 0 450.0 Assigned A1
S3 0 450.0 Assigned A2
S4 0 450.0 Assigned A3
CMUC00234I lsarray: No Array found.
CMUC00234I lsrank: No Rank found.

Note: The DS CLI script can contain only DS CLI commands. Using shell commands
results in process failure. You can add comments in the scripts prefixed by the hash symbol
(#); the hash symbol must be the first non-blank character on the line.

Only one single authentication process is needed to execute all the script commands.
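
As an illustration, the banner, header, and output format mentioned above can also be
controlled directly on the command line. The following invocation is a sketch only (the
profile and script names are the ones used above; verify the option names with dscli help):

dscli -cfg ds8800.profile -script ds8800.script -bnr off -hdr off -fmt delim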

14.1.6 Return codes


When the DS CLI exits, the exit status code is provided. This is effectively a return code. If DS
CLI commands are issued as separate commands (rather than using script mode), then a
return code will be presented for every command. If a DS CLI command fails (for example,
due to a syntax error or the use of an incorrect password), then a failure reason and a return
code will be presented. Standard techniques to collect and analyze return codes can be used.



The return codes used by the DS CLI are listed in Table 14-1.

Table 14-1 Return code table

Return code  Category              Description
0            Success               The command was successful.
2            Syntax error          There is a syntax error in the command.
3            Connection error      There was a connection problem to the server.
4            Server error          The DS CLI server had an error.
5            Authentication error  The password or user ID details are incorrect.
6            Application error     The DS CLI application had an error.
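
As mentioned above, standard techniques can be used to collect and analyze these return
codes. The following is a minimal sketch for a UNIX-like host (the profile and script names
are assumptions); on Windows, the equivalent check uses %errorlevel%:

# run the script and test the DS CLI exit status
dscli -cfg ds8800.profile -script ds8800.script
rc=$?
if [ $rc -ne 0 ]; then
    echo "DS CLI failed with return code $rc" >&2
    exit $rc
fi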

14.1.7 User assistance


The DS CLI is designed to include several forms of user assistance. The main form of user
assistance is through the help command. Examples of usage include:
򐂰 help lists all the available DS CLI commands.
򐂰 help -s lists all the DS CLI commands with brief descriptions of each one.
򐂰 help -l lists all the DS CLI commands with their syntax information.

To obtain information about a specific DS CLI command, enter the command name as a
parameter of the help command. Examples of usage include:
򐂰 help <command name> gives a detailed description of the specified command.
򐂰 help -s <command name> gives a brief description of the specified command.
򐂰 help -l <command name> gives syntax information about the specified command.

Example 14-7 shows the output of the help command.

Example 14-7 Displaying a list of all commands in DS CLI using the help command
# dscli -cfg ds8800.profile help
applydbcheck lsframe mkpe setdbcheck
applykey lshba mkpprc setdialhome
chauthpol lshostconnect mkpprcpath setenv
chckdvol lshosttype mkrank setflashrevertible
chextpool lshostvol mkreckey setioport
chfbvol lsioport mkremoteflash setnetworkport
chhostconnect lskey mksession setoutput
chkeymgr lskeygrp mksestg setplex
chlcu lskeymgr mkuser setremoteflashrevertible
chlss lslcu mkvolgrp setrmpw
chpass lslss offloadauditlog setsim
chrank lsnetworkport offloaddbcheck setsmtp
chsession lspe offloadss setsnmp
chsestg lsportprof pausegmir setvpn
chsi lspprc pausepprc showarray
chsp lspprcpath quit showarraysite
chsu lsproblem resumegmir showauthpol
chuser lsrank resumepprc showckdvol
chvolgrp lsremoteflash resyncflash showcontactinfo
clearvol lsserver resyncremoteflash showenv
closeproblem lssession reverseflash showextpool
commitflash lssestg revertflash showfbvol
commitremoteflash lssi revertremoteflash showgmir
cpauthpol lsss rmarray showgmircg



diagsi lsstgencl rmauthpol showgmiroos
dscli lssu rmckdvol showhostconnect
echo lsuser rmextpool showioport
exit lsvolgrp rmfbvol showkeygrp
failbackpprc lsvolinit rmflash showlcu
failoverpprc lsvpn rmgmir showlss
freezepprc managedbcheck rmhostconnect shownetworkport
help managehostconnect rmkeygrp showpass
helpmsg managepwfile rmkeymgr showplex
initckdvol managereckey rmlcu showrank
initfbvol mkaliasvol rmpprc showsestg
lsaddressgrp mkarray rmpprcpath showsi
lsarray mkauthpol rmrank showsp
lsarraysite mkckdvol rmreckey showsu
lsauthpol mkesconpprcpath rmremoteflash showuser
lsavailpprcport mkextpool rmsession showvolgrp
lsckdvol mkfbvol rmsestg testauthpol
lsda mkflash rmuser testcallhome
lsdbcheck mkgmir rmvolgrp unfreezeflash
lsddm mkhostconnect sendpe unfreezepprc
lsextpool mkkeygrp sendss ver
lsfbvol mkkeymgr setauthpol whoami
lsflash mklcu setcontactinfo

Man pages
A man page is available for every DS CLI command. Man pages are most commonly seen in
UNIX-based operating systems and give information about command capabilities. This
information can be displayed by issuing the relevant command followed by the -h, -help, or
-? flags.
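
For example, both of the following display information about the mkfbvol command; the first
returns a brief description and the second the full description with its syntax:

dscli> help -s mkfbvol
dscli> mkfbvol -h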

14.2 Configuring the I/O ports


Set the I/O ports to the desired topology. In Example 14-8, we list the I/O ports by using the
lsioport command. Note that I0000-I0003 are on one adapter card, while I0100-I0103 are on
another card.

Example 14-8 Listing the I/O ports


dscli> lsioport -dev IBM.2107-7503461
ID WWPN State Type topo portgrp
===============================================================
I0000 500507630300008F Online Fibre Channel-SW SCSI-FCP 0
I0001 500507630300408F Online Fibre Channel-SW SCSI-FCP 0
I0002 500507630300808F Online Fibre Channel-SW SCSI-FCP 0
I0003 500507630300C08F Online Fibre Channel-SW SCSI-FCP 0
I0100 500507630308008F Online Fibre Channel-LW FICON 0
I0101 500507630308408F Online Fibre Channel-LW SCSI-FCP 0
I0102 500507630308808F Online Fibre Channel-LW FICON 0
I0103 500507630308C08F Online Fibre Channel-LW FICON 0

There are three possible topologies for each I/O port:


SCSI-FCP Fibre Channel switched fabric (also called point-to-point)
FC-AL Fibre Channel arbitrated loop
FICON FICON (for System z hosts only)



In Example 14-9, we set two I/O ports to the FICON topology and then check the results.

Example 14-9 Changing topology using setioport


dscli> setioport -topology ficon I0001
CMUC00011I setioport: I/O Port I0001 successfully configured.
dscli> setioport -topology ficon I0101
CMUC00011I setioport: I/O Port I0101 successfully configured.
dscli> lsioport
ID WWPN State Type topo portgrp
===============================================================
I0000 500507630300008F Online Fibre Channel-SW SCSI-FCP 0
I0001 500507630300408F Online Fibre Channel-SW FICON 0
I0002 500507630300808F Online Fibre Channel-SW SCSI-FCP 0
I0003 500507630300C08F Online Fibre Channel-SW SCSI-FCP 0
I0100 500507630308008F Online Fibre Channel-LW FICON 0
I0101 500507630308408F Online Fibre Channel-LW FICON 0
I0102 500507630308808F Online Fibre Channel-LW FICON 0
I0103 500507630308C08F Online Fibre Channel-LW FICON 0

14.3 Monitoring the I/O ports


Monitoring the I/O ports is one of the most important tasks of the system administrator.
The I/O ports are the points where the HBAs, the SAN, and the DS8800 exchange information.
If one of these components has problems due to hardware or configuration issues, all of the
others are affected as well.

Example 14-10 on page 317 shows the output of the showioport -metrics command, which
returns many important metrics. It provides the performance counters of the port and the
FC link error counters. The FC link error counters are used to determine the health of the
overall communication.

There are groups of errors that point to specific problem areas:


򐂰 Any non-zero figure in the counters LinkFailErr, LossSyncErr, LossSigErr, and
PrimSeqErr indicates that the SAN probably has HBAs attached to it that are unstable.
These HBAs log in and log out to the SAN and create name server congestion and
performance degradation.
򐂰 If the InvTxWordErr counter increases by more than 100 per day, the port is receiving light
from a source that is not an SFP, for example, because the cable connected to the port is
not covered at the far end or the I/O port itself is not covered by a cap.
򐂰 The CRCErr counter shows the errors that arise between the last sending SFP in the SAN
and the receiving port of the DS8800. These errors do not appear in any other place in the
data center. You must replace the cable that is connected to the port or the SFP in the
SAN.
򐂰 The link reset counters LRSent and LRRec also suggest that there are hardware defects in
the SAN; these errors need to be investigated.
򐂰 The counters IllegalFrame, OutOrdData, OutOrdACK, DupFrame, InvRelOffset, SeqTimeout,
and BitErrRate point to congestions in the SAN and can only be influenced by
configuration changes in the SAN.



Example 14-10 Listing the I/O ports with showioport -metrics
dscli> showioport -dev IBM.2107-7503461 -metrics I0041
ID I0041
Date 09/30/2009 16:24:12 MST
<< cut here >>
LinkFailErr (FC) 0
LossSyncErr (FC) 0
LossSigErr (FC) 0
PrimSeqErr (FC) 0
InvTxWordErr (FC) 0
CRCErr (FC) 0
LRSent (FC) 0
LRRec (FC) 0
IllegalFrame (FC) 0
OutOrdData (FC) 0
OutOrdACK (FC) 0
DupFrame (FC) 0
InvRelOffset (FC) 0
SeqTimeout (FC) 0
BitErrRate (FC) 0
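
Because these counters are cumulative, it is useful to capture them at regular intervals and
compare successive samples. The following is a minimal sketch for a UNIX-like host; the
profile name, port ID, and log file location are assumptions, and dscli is assumed to be in
the PATH:

# collect the port metrics every five minutes for later comparison
while true; do
  date >> /var/log/ds8800_I0041_metrics.log
  dscli -cfg ds8800.profile showioport -metrics I0041 >> /var/log/ds8800_I0041_metrics.log
  sleep 300
done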

14.4 Configuring the DS8000 storage for FB volumes


This section goes through examples of a typical DS8000 storage configuration when
attaching to open systems hosts. We perform the DS8000 storage configuration by going
through the following steps:
1. Create arrays.
2. Create ranks.
3. Create Extent Pools.
4. Optionally, create repositories for track space efficient volumes.
5. Create volumes.
6. Create volume groups.
7. Create host connections.

14.4.1 Create arrays


In this step, we create the arrays. Before creating the arrays, it is a best practice to first list the
array sites. Use the lsarraysite command to list the array sites, as shown in Example 14-11.

Important: Remember that an array for a DS8000 can only contain one array site, and a
DS8000 array site contains eight disk drive modules (DDMs).

Example 14-11 Listing array sites


dscli> lsarraysite
arsite DA Pair dkcap (10^9B) State Array
=============================================
S1 0 146.0 Unassigned -
S2 0 146.0 Unassigned -
S3 0 146.0 Unassigned -
S4 0 146.0 Unassigned -



In Example 14-11, we can see that there are four array sites and that we can therefore create
four arrays.

We can now issue the mkarray command to create arrays, as shown in Example 14-12. You
will notice that in this case we have used one array site (in the first array, S1) to create a single
RAID 5 array. If we wished to create a RAID 10 array, we would have to change the -raidtype
parameter to 10, and if we wished to create a RAID 6 array, we would have to change the
-raidtype parameter to 6 (instead of 5).

Example 14-12 Creating arrays with mkarray


dscli> mkarray -raidtype 5 -arsite S1
CMUC00004I mkarray: Array A0 successfully created.
dscli> mkarray -raidtype 5 -arsite S2
CMUC00004I mkarray: Array A1 successfully created.

We can now see what arrays have been created by using the lsarray command, as shown in
Example 14-13.

Example 14-13 Listing the arrays with lsarray


dscli> lsarray
Array State Data RAIDtype arsite Rank DA Pair DDMcap (10^9B)
=====================================================================
A0 Unassigned Normal 5 (6+P+S) S1 - 0 146.0
A1 Unassigned Normal 5 (6+P+S) S2 - 0 146.0

We can see in this example the type of RAID array and the number of disks that are allocated
to the array (in this example 6+P+S, which means the usable space of the array is 6 times the
DDM size), as well as the capacity of the DDMs that are used and which array sites were
used to create the arrays.

14.4.2 Create ranks


Once we have created all the arrays that are required, we then create the ranks using the
mkrank command. The format of the command is mkrank -array Ax -stgtype xxx, where xxx
is either fixed block (FB) or count key data (CKD), depending on whether you are configuring
for open systems or System z hosts.

Once we have created all the ranks, we run the lsrank command. This command displays all
the ranks that have been created, to which server the rank is attached, the RAID type, and the
format of the rank, whether it is Fixed Block (FB) or Count Key Data (CKD).

Example 14-14 shows the mkrank commands and the result of a successful lsrank -l
command.

Example 14-14 Creating and listing ranks with mkrank and lsrank
dscli> mkrank -array A0 -stgtype fb
CMUC00007I mkrank: Rank R0 successfully created.
dscli> mkrank -array A1 -stgtype fb
CMUC00007I mkrank: Rank R1 successfully created.
dscli> lsrank -l
ID Group State datastate Array RAIDtype extpoolID extpoolnam stgtype exts usedexts
=======================================================================================
R0 - Unassigned Normal A0 5 - - fb 773 -
R1 - Unassigned Normal A1 5 - - fb 773 -



14.4.3 Create Extent Pools
The next step is to create Extent Pools. Here are some points to remember when creating the
Extent Pools:
򐂰 Each Extent Pool has an associated rank group that is specified by the -rankgrp
parameter, which defines the Extent Pools’ server affinity (either 0 or 1, for server0 or
server1).
򐂰 The Extent Pool type is either FB or CKD and is specified by the -stgtype parameter.
򐂰 The number of Extent Pools can range from one to as many as there are existing ranks.
However, to associate ranks with both servers, you need at least two Extent Pools.
򐂰 It is best practice for all ranks in an Extent Pool to have the same characteristics, that is,
the same DDM type, size, and RAID type.

For easier management, we create empty Extent Pools related to the type of storage that is in
the pool. For example, create an Extent Pool for high capacity disk, create another for high
performance, and, if needed, Extent Pools for the CKD environment.

When an Extent Pool is created, the system automatically assigns it an Extent Pool ID, which
is a decimal number starting from 0, preceded by the letter P. The ID that was assigned to an
Extent Pool is shown in the CMUC00000I message, which is displayed in response to a
successful mkextpool command. Extent pools associated with rank group 0 get an even ID
number. Extent pools associated with rank group 1 get an odd ID number. The Extent Pool ID
is used when referring to the Extent Pool in subsequent CLI commands. It is therefore good
practice to make note of the ID.

Example 14-15 shows one example of Extent Pools you could define on your machine. This
setup would require a system with at least six ranks.

Example 14-15 An Extent Pool layout plan


FB Extent Pool high capacity 300gb disks assigned to server 0 (FB_LOW_0)
FB Extent Pool high capacity 300gb disks assigned to server 1 (FB_LOW_1)
FB Extent Pool high performance 146gb disks assigned to server 0 (FB_High_0)
FB Extent Pool high performance 146gb disks assigned to server 1 (FB_High_1)
CKD Extent Pool High performance 146gb disks assigned to server 0 (CKD_High_0)
CKD Extent Pool High performance 146gb disks assigned to server 1 (CKD_High_1)

Note that the mkextpool command forces you to name the Extent Pools. In Example 14-16,
we first create empty Extent Pools using the mkextpool command. We then list the Extent
Pools to get their IDs. Then we attach a rank to an empty Extent Pool using the chrank
command. Finally, we list the Extent Pools again using lsextpool and note the change in the
capacity of the Extent Pool.

Example 14-16 Extent Pool creation using mkextpool, lsextpool, and chrank
dscli> mkextpool -rankgrp 0 -stgtype fb FB_high_0
CMUC00000I mkextpool: Extent Pool P0 successfully created.
dscli> mkextpool -rankgrp 1 -stgtype fb FB_high_1
CMUC00000I mkextpool: Extent Pool P1 successfully created.
dscli> lsextpool
Name ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols
===========================================================================================
FB_high_0 P0 fb 0 below 0 0 0 0 0
FB_high_1 P1 fb 1 below 0 0 0 0 0
dscli> chrank -extpool P0 R0
CMUC00008I chrank: Rank R0 successfully modified.



dscli> chrank -extpool P1 R1
CMUC00008I chrank: Rank R1 successfully modified.
dscli> lsextpool
Name ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols
===========================================================================================
FB_high_0 P0 fb 0 below 773 0 773 0 0
FB_high_1 P1 fb 1 below 773 0 773 0 0

After having assigned a rank to an Extent Pool, we should be able to see this change when
we display the ranks. In Example 14-17, we can see that rank R0 is assigned to extpool P0.

Example 14-17 Displaying the ranks after assigning a rank to an Extent Pool
dscli> lsrank -l
ID Group State datastate Array RAIDtype extpoolID extpoolnam stgtype exts usedexts
===================================================================================
R0 0 Normal Normal A0 5 P0 FB_high_0 fb 773 0
R1 1 Normal Normal A1 5 P1 FB_high_1 fb 773 0

Creating a repository for Track Space Efficient volumes


If the DS8000 has the IBM FlashCopy SE feature, you can create Track Space Efficient (TSE)
volumes that can be used as FlashCopy targets. Before you can create TSE volumes, you
must create a space efficient repository in the Extent Pool. The repository provides space to
store the data associated with TSE logical volumes. Only one repository is allowed per Extent
Pool. A repository has a physical capacity that is available for storage allocations by TSE
volumes and a virtual capacity that is the sum of LUN/volume sizes of all space efficient
volumes. The physical repository capacity is allocated when the repository is created. If there
are several ranks in the Extent Pool, the repository’s extents are striped across the ranks
(Storage Pool Striping).

Example 14-18 shows the creation of a repository. The unit type of the real capacity (-repcap)
and virtual capacity (-vircap) sizes can be specified with the -captype parameter. For FB
Extent Pools, the unit type can be either GB (default) or blocks.

Example 14-18 Creating a repository for Space Efficient volumes


dscli> mksestg -repcap 100 -vircap 200 -extpool p9
CMUC00342I mksestg: The space-efficient storage for the Extent Pool P9 has been
created successfully.
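
If block granularity is preferred, the -captype parameter mentioned above can be used. The
following sketch requests the same 100 GiB of real and 200 GiB of virtual capacity, expressed
in 512-byte blocks (the values are for illustration only):

dscli> mksestg -repcap 209715200 -vircap 419430400 -captype blocks -extpool p9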

You can obtain information about the repository with the showsestg command. Example 14-19
shows the output of the showsestg command. You might particularly be interested in how
much capacity is used within the repository by checking the repcapalloc value.

Example 14-19 Getting information about a Space Efficient repository


dscli> showsestg p9
extpool P9
stgtype fb
datastate Normal
configstate Normal
repcapstatus below
%repcapthreshold 0
repcap(GiB) 100.0
repcap(Mod1) -
repcap(blocks) 209715200
repcap(cyl) -



repcapalloc(GiB/Mod1) 0.0
%repcapalloc 0
vircap(GiB) 200.0
vircap(Mod1) -
vircap(blocks) 419430400
vircap(cyl) -
vircapalloc(GiB/Mod1) 0.0
%vircapalloc 0
overhead(GiB/Mod1) 3.0
reqrepcap(GiB/Mod1) 100.0
reqvircap(GiB/Mod1) 200.0

Note that some additional storage is allocated for the repository beyond the repcap size. In
Example 14-19 on page 320, the line that starts with overhead indicates that 3 GB has been
allocated in addition to the repcap size.

A repository can be deleted with the rmsestg command.

Note: In the current implementation, it is not possible to expand a Space Efficient
repository. The physical size or the virtual size of the repository cannot be changed.
Therefore, careful planning is required. If you have to expand a repository, you must delete
all TSE logical volumes and the repository itself, then recreate a new repository.

14.4.4 Creating FB volumes


We are now able to create volumes and volume groups. When we create them, we should try
to distribute them evenly across the two rank groups in the storage unit.

Creating standard volumes


The format of the command that we use to create a volume is:
mkfbvol -extpool pX -cap xx -name high_fb_0#h 1000-1003

In Example 14-20, we have created eight volumes, each with a capacity of 10 GB. The first
four volumes are assigned to rank group 0 and the second four are assigned to rank group 1.

Example 14-20 Creating fixed block volumes using mkfbvol


dscli> lsextpool
Name ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols
===========================================================================================
FB_high_0 P0 fb 0 below 773 0 773 0 0
FB_high_1 P1 fb 1 below 773 0 773 0 0
dscli> mkfbvol -extpool p0 -cap 10 -name high_fb_0_#h 1000-1003
CMUC00025I mkfbvol: FB volume 1000 successfully created.
CMUC00025I mkfbvol: FB volume 1001 successfully created.
CMUC00025I mkfbvol: FB volume 1002 successfully created.
CMUC00025I mkfbvol: FB volume 1003 successfully created.
dscli> mkfbvol -extpool p1 -cap 10 -name high_fb_1_#h 1100-1103
CMUC00025I mkfbvol: FB volume 1100 successfully created.
CMUC00025I mkfbvol: FB volume 1101 successfully created.
CMUC00025I mkfbvol: FB volume 1102 successfully created.
CMUC00025I mkfbvol: FB volume 1103 successfully created.

Looking closely at the mkfbvol command used in Example 14-20 on page 321, we see that
volumes 1000 - 1003 are in extpool P0. That Extent Pool is attached to rank group 0, which



means server 0. Now rank group 0 can only contain even numbered LSSs, so that means
volumes in that Extent Pool must belong to an even numbered LSS. The first two digits of the
volume serial number are the LSS number, so in this case, volumes 1000 - 1003 are in
LSS 10.

For volumes 1100 - 1103 in Example 14-20 on page 321, the first two digits of the volume
serial number are 11, which is an odd number, which signifies they belong to rank group 1.
Also note that the -cap parameter determines size, but because the -type parameter was not
used, the default size is a binary size. So these volumes are 10 GB binary, which equates to
10,737,418,240 bytes. If we used the parameter -type ess, then the volumes would be
decimally sized and would be a minimum of 10,000,000,000 bytes in size.
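
For illustration, the following sketch creates a decimally sized 10 GB volume (the volume ID
1004 is an assumption; it must be unused in your configuration):

dscli> mkfbvol -extpool p0 -cap 10 -type ess -name high_fb_0_#h 1004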

In Example 14-20 on page 321 we named the volumes using naming scheme high_fb_0_#h,
where #h means you are using the hexadecimal volume number as part of the volume name.
This can be seen in Example 14-21, where we list the volumes that we have created using the
lsfbvol command. We then list the Extent Pools to see how much space we have left after
the volume creation.

Example 14-21 Checking the machine after creating volumes by using lsextpool and lsfbvol
dscli> lsfbvol
Name ID accstate datastate configstate deviceMTM datatype extpool cap (2^30B)
=========================================================================================
high_fb_0_1000 1000 Online Normal Normal 2107-922 FB 512 P0 10.0
high_fb_0_1001 1001 Online Normal Normal 2107-922 FB 512 P0 10.0
high_fb_0_1002 1002 Online Normal Normal 2107-922 FB 512 P0 10.0
high_fb_0_1003 1003 Online Normal Normal 2107-922 FB 512 P0 10.0
high_fb_1_1100 1100 Online Normal Normal 2107-922 FB 512 P1 10.0
high_fb_1_1101 1101 Online Normal Normal 2107-922 FB 512 P1 10.0
high_fb_1_1102 1102 Online Normal Normal 2107-922 FB 512 P1 10.0
high_fb_1_1103 1103 Online Normal Normal 2107-922 FB 512 P1 10.0
dscli> lsextpool
Name ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols
===========================================================================================
FB_high_0 P0 fb 0 below 733 5 733 0 4
FB_high_1 P1 fb 1 below 733 5 733 0 4

Important: For the DS8000, the LSSs can be ID 00 to ID FE. The LSSs are in address
groups. Address group 0 is LSS 00 to 0F, address group 1 is LSS 10 to 1F, and so on. The
moment you create an FB volume in an address group, then that entire address group can
only be used for FB volumes. Be aware of this fact when planning your volume layout in a
mixed FB/CKD DS8000.

Storage Pool Striping


When creating a volume, you have a choice of how the volume is allocated in an Extent Pool
with several ranks. The extents of a volume can be kept together in one rank (as long as there
is enough free space on that rank). The next rank is used when the next volume is created.
This allocation method is called rotate volumes.

You can also specify that you want the extents of the volume you are creating to be evenly
distributed across all ranks within the Extent Pool. This allocation method is called rotate
extents.

The extent allocation method is specified with the -eam rotateexts or -eam rotatevols option
of the mkfbvol command (see Example 14-22).



Note: In DS8800 with Licensed Machine Code (LMC) 6.6.xxx, the default allocation policy
has changed to rotate extents.

Example 14-22 Creating a volume with Storage Pool Striping


dscli> mkfbvol -extpool p53 -cap 15 -name ITSO-XPSTR -eam rotateexts 1720
CMUC00025I mkfbvol: FB volume 1720 successfully created.

The showfbvol command with the -rank option (see Example 14-23) shows that the volume
we created is distributed across 12 ranks and how many extents on each rank were allocated
for this volume.

Example 14-23 Getting information about a striped volume


dscli> showfbvol -rank 1720
Name ITSO-XPSTR
ID 1720
accstate Online
datastate Normal
configstate Normal
deviceMTM 2107-900
datatype FB 512
addrgrp 1
extpool P53
exts 15
captype DS
cap (2^30B) 15.0
cap (10^9B) -
cap (blocks) 31457280
volgrp -
ranks 12
dbexts 0
sam Standard
repcapalloc -
eam rotateexts
reqcap (blocks) 31457280
==============Rank extents==============
rank extents
============
R24 2
R25 1
R28 1
R29 1
R32 1
R33 1
R34 1
R36 1
R37 1
R38 1
R40 2
R41 2

Track Space Efficient volumes


When your DS8000 has the IBM FlashCopy SE feature, you can create Track Space Efficient
(TSE) volumes to be used as FlashCopy target volumes. A repository must exist in the Extent
Pool where you plan to allocate TSE volumes (see “Creating a repository for Track Space
Efficient volumes” on page 320).



A Track Space Efficient volume is created by specifying the -sam tse parameter with the
mkfbvol command (Example 14-24).

Example 14-24 Creating a Space Efficient volume


dscli> mkfbvol -extpool p53 -cap 40 -name ITSO-1721-SE -sam tse 1721
CMUC00025I mkfbvol: FB volume 1721 successfully created.

When listing Space Efficient repositories with the lssestg command (see Example 14-25),
we can see that in Extent Pool P53 we have a virtual allocation of 40 extents (GB), but that
the allocated (used) capacity repcapalloc is still zero.

Example 14-25 Getting information about Space Efficient repositories


dscli> lssestg -l
extentpoolID stgtype datastate configstate repcapstatus %repcapthreshold repcap (2^30B) vircap repcapalloc vircapalloc
======================================================================================================================
P4 ckd Normal Normal below 0 64.0 1.0 0.0 0.0
P47 fb Normal Normal below 0 70.0 282.0 0.0 264.0
P53 fb Normal Normal below 0 100.0 200.0 0.0 40.0

This allocation comes from the volume just created. To see the allocated space in the
repository for just this volume, we can use the showfbvol command (see Example 14-26).

Example 14-26 Checking the repository usage for a volume


dscli> showfbvol 1721
Name ITSO-1721-SE
ID 1721
accstate Online
datastate Normal
configstate Normal
deviceMTM 2107-900
datatype FB 512
addrgrp 1
extpool P53
exts 40
captype DS
cap (2^30B) 40.0
cap (10^9B) -
cap (blocks) 83886080
volgrp -
ranks 0
dbexts 0
sam TSE
repcapalloc 0
eam -
reqcap (blocks) 83886080

Dynamic Volume Expansion


A volume can be expanded without having to remove the data within the volume. You can
specify a new capacity by using the chfbvol command (see Example 14-27).

Note: The new capacity must be larger than the previous one; you cannot shrink the
volume.



Example 14-27 Expanding a striped volume
dscli> chfbvol -cap 20 1720
CMUC00332W chfbvol: Some host operating systems do not support changing the volume
size. Are you sure that you want to resize the volume? [y/n]: y
CMUC00026I chfbvol: FB volume 1720 successfully modified.

Because the original volume had the rotateexts attribute, the additional extents are also
striped (see Example 14-28).

Example 14-28 Checking the status of an expanded volume


dscli> showfbvol -rank 1720
Name ITSO-XPSTR
ID 1720
accstate Online
datastate Normal
configstate Normal
deviceMTM 2107-900
datatype FB 512
addrgrp 1
extpool P53
exts 40
captype DS
cap (2^30B) 20.0
cap (10^9B) -
cap (blocks) 41943040
volgrp -
ranks 2
dbexts 0
sam Standard
repcapalloc -
eam rotateexts
reqcap (blocks) 41943040
==============Rank extents==============
rank extents
============
R24 20
R25 20

Important: Before you can expand a volume, you must delete all Copy Services
relationships for that volume.

Deleting volumes
FB volumes can be deleted by using the rmfbvol command.

Starting with Licensed Machine Code (LMC) level 6.5.1.xx, the command includes new
options to prevent the accidental deletion of volumes that are in use. An FB volume is
considered to be “in use” if it is participating in a Copy Services relationship or if the volume
has received any I/O operation in the previous five minutes.

Volume deletion is controlled by the -safe and -force parameters (they cannot be specified
at the same time) as follows:
򐂰 If neither of the parameters is specified, the system performs checks to see whether or not
the specified volumes are in use. Volumes that are not in use will be deleted and the ones
in use will not be deleted.



򐂰 If the -safe parameter is specified, and if any of the specified volumes are assigned to a
user-defined volume group, the command fails without deleting any volumes.
򐂰 The -force parameter deletes the specified volumes without checking to see whether or
not they are in use.

In Example 14-29, we create volumes 2100 and 2101. We then assign 2100 to a volume
group. We then try to delete both volumes with the -safe option, but the attempt fails without
deleting either of the volumes. We are able to delete volume 2101 with the -safe option
because it is not assigned to a volume group. Volume 2100 is not in use, so we can delete it
by not specifying either parameter.

Example 14-29 Deleting a FB volume


dscli> mkfbvol -extpool p1 -cap 12 -eam rotateexts 2100-2101
CMUC00025I mkfbvol: FB volume 2100 successfully created.
CMUC00025I mkfbvol: FB volume 2101 successfully created.
dscli> chvolgrp -action add -volume 2100 v0
CMUC00031I chvolgrp: Volume group V0 successfully modified.
dscli> rmfbvol -quiet -safe 2100-2101
CMUC00253E rmfbvol: Volume IBM.2107-75NA901/2100 is assigned to a user-defined volume
group. No volumes were deleted.
dscli> rmfbvol -quiet -safe 2101
CMUC00028I rmfbvol: FB volume 2101 successfully deleted.
dscli> rmfbvol 2100
CMUC00027W rmfbvol: Are you sure you want to delete FB volume 2100? [y/n]: y
CMUC00028I rmfbvol: FB volume 2100 successfully deleted.

14.4.5 Creating volume groups


Fixed block volumes are assigned to open systems hosts by using volume groups, which are not
to be confused with the volume groups used in AIX. A fixed block volume can be a
member of multiple volume groups. Volumes can be added or removed from volume groups
as required. Each volume group must be either SCSI MAP256 or SCSI MASK, depending on
the SCSI LUN address discovery method used by the operating system to which the volume
group will be attached.

Determining if an open systems host is SCSI MAP256 or SCSI MASK


First, we determine the type of SCSI host that we are working with. Then we use the
lshosttype command with the -type parameter set to scsimask and then to scsimap256.

In Example 14-30, we can see the results of each command.

Example 14-30 Listing host types with the lshostype command


dscli> lshosttype -type scsimask
HostType Profile AddrDiscovery LBS
==================================================
Hp HP - HP/UX reportLUN 512
SVC San Volume Controller reportLUN 512
SanFsAIX IBM pSeries - AIX/SanFS reportLUN 512
pSeries IBM pSeries - AIX reportLUN 512
zLinux IBM zSeries - zLinux reportLUN 512
dscli> lshosttype -type scsimap256
HostType Profile AddrDiscovery LBS
=====================================================
AMDLinuxRHEL AMD - Linux RHEL LUNPolling 512
AMDLinuxSuse AMD - Linux Suse LUNPolling 512
AppleOSX Apple - OSX LUNPolling 512
Fujitsu Fujitsu - Solaris LUNPolling 512



HpTru64 HP - Tru64 LUNPolling 512
HpVms HP - Open VMS LUNPolling 512
LinuxDT Intel - Linux Desktop LUNPolling 512
LinuxRF Intel - Linux Red Flag LUNPolling 512
LinuxRHEL Intel - Linux RHEL LUNPolling 512
LinuxSuse Intel - Linux Suse LUNPolling 512
Novell Novell LUNPolling 512
SGI SGI - IRIX LUNPolling 512
SanFsLinux - Linux/SanFS LUNPolling 512
Sun SUN - Solaris LUNPolling 512
VMWare VMWare LUNPolling 512
Win2000 Intel - Windows 2000 LUNPolling 512
Win2003 Intel - Windows 2003 LUNPolling 512
Win2008 Intel - Windows 2008 LUNPolling 512
iLinux IBM iSeries - iLinux LUNPolling 512
nSeries IBM N series Gateway LUNPolling 512
pLinux IBM pSeries - pLinux LUNPolling 512

Having determined the host type, we can now make a volume group. In Example 14-31, the
example host type we chose is AIX, and in Example 14-30, we can see the address discovery
method for AIX is scsimask.

Example 14-31 Creating a volume group with mkvolgrp and displaying it


dscli> mkvolgrp -type scsimask -volume 1000-1002,1100-1102 AIX_VG_01
CMUC00030I mkvolgrp: Volume group V11 successfully created.
dscli> lsvolgrp
Name ID Type
=======================================
ALL CKD V10 FICON/ESCON All
AIX_VG_01 V11 SCSI Mask
ALL Fixed Block-512 V20 SCSI All
ALL Fixed Block-520 V30 OS400 All
dscli> showvolgrp V11
Name AIX_VG_01
ID V11
Type SCSI Mask
Vols 1000 1001 1002 1100 1101 1102

In this example, we added volumes 1000 to 1002 and 1100 to 1102 to the new volume group.
We did this task to spread the workload evenly across the two rank groups. We then listed all
available volume groups using lsvolgrp. Finally, we listed the contents of volume group V11,
because this was the volume group we created.

We might also want to add or remove volumes to this volume group at a later time. To achieve
this goal, we use chvolgrp with the -action parameter. In Example 14-32, we add volume
1003 to volume group V11. We display the results, and then remove the volume.

Example 14-32 Changing a volume group with chvolgrp


dscli> chvolgrp -action add -volume 1003 V11
CMUC00031I chvolgrp: Volume group V11 successfully modified.
dscli> showvolgrp V11
Name AIX_VG_01
ID V11
Type SCSI Mask
Vols 1000 1001 1002 1003 1100 1101 1102
dscli> chvolgrp -action remove -volume 1003 V11



CMUC00031I chvolgrp: Volume group V11 successfully modified.
dscli> showvolgrp V11
Name AIX_VG_01
ID V11
Type SCSI Mask
Vols 1000 1001 1002 1100 1101 1102

Important: Not all operating systems can deal with the removal of a volume. Consult your
operating system documentation to determine the safest way to remove a volume from a
host.

All operations with volumes and volume groups described previously can also be used with
Space Efficient volumes as well.

14.4.6 Creating host connections


The final step in the logical configuration process is to create host connections for your
attached hosts. You will need to assign volume groups to those connections. Each host HBA
can only be defined once, and each host connection (hostconnect) can only have one volume
group assigned to it. Remember that a volume can be assigned to multiple volume groups.

In Example 14-33, we create a single host connection that represents one HBA in our
example AIX host. We use the -hosttype parameter with the host type that we identified in
Example 14-30 on page 326, and we allocate the connection to volume group V11. At this
point, provided that the SAN zoning is correct, the host should be able to see the logical unit
numbers (LUNs) in volume group V11.

Example 14-33 Creating host connections using mkhostconnect and lshostconnect


dscli> mkhostconnect -wwname 100000C912345678 -hosttype pSeries -volgrp V11 AIX_Server_01
CMUC00012I mkhostconnect: Host connection 0000 successfully created.
dscli> lshostconnect
Name ID WWPN HostType Profile portgrp volgrpID ESSIOport
=========================================================================================
AIX_Server_01 0000 100000C912345678 pSeries IBM pSeries - AIX 0 V11 all

You can also use simply -profile instead of -hosttype. However, this is not a best practice.
Using the -hosttype parameter actually invokes both parameters (-profile and -hosttype).
In contrast, simply using -profile leaves the -hosttype column unpopulated.

There is also the option in the mkhostconnect command to restrict access to only certain I/O
ports. This is done with the -ioport parameter. Restricting access in this way is usually
unnecessary. If you want to restrict access for certain hosts to certain I/O ports on the
DS8000, do this by way of zoning on your SAN switch.

Managing hosts with multiple HBAs


If you have a host with multiple HBAs, you have two considerations:
򐂰 For the GUI to consider multiple host connects to be used by the same server, the host
connects must have the same name. In Example 14-34 on page 329, host connects 0010
and 0011 appear in the GUI as a single server with two HBAs. However, host connects
000E and 000F appear as two separate hosts even though in reality they are used by the
same server. If you do not plan to use the GUI to manage host connections, then this is
not a major consideration. Using more verbose hostconnect naming might make
management easier.



򐂰 If you want to use a single command to change the assigned volume group of several
hostconnects at the same time, then you need to assign these hostconnects to a unique
port group and then use the managehostconnect command. This command changes the
assigned volume group for all hostconnects assigned to a particular port group.

When creating hosts, you can specify the -portgrp parameter. By using a unique port group
number for each attached server, you can easily detect servers with multiple HBAs.

In Example 14-34, we have six host connections. By using the port group number, we see
that there are three separate hosts, each with two HBAs. Port group 0 is used for all hosts that
do not have a port group number set.

Example 14-34 Using the portgrp number to separate attached hosts


dscli> lshostconnect
Name ID WWPN HostType Profile portgrp volgrpID
===========================================================================================
bench_tic17_fc0 0008 210000E08B1234B1 LinuxSuse Intel - Linux Suse 8 V1 all
bench_tic17_fc1 0009 210000E08B12A3A2 LinuxSuse Intel - Linux Suse 8 V1 all
p630_fcs0 000E 10000000C9318C7A pSeries IBM pSeries - AIX 9 V2 all
p630_fcs1 000F 10000000C9359D36 pSeries IBM pSeries - AIX 9 V2 all
p615_7 0010 10000000C93E007C pSeries IBM pSeries - AIX 10 V3 all
p615_7 0011 10000000C93E0059 pSeries IBM pSeries - AIX 10 V3 all

Changing host connections


If we want to change a host connection, we can use the chhostconnect command. This
command can be used to change nearly all parameters of the host connection except for the
worldwide port name (WWPN). If you need to change the WWPN, you need to create a whole
new host connection. To change the assigned volume group, use either chhostconnect to
change one hostconnect at a time, or use the managehostconnect command to
simultaneously reassign all the hostconnects in one port group.

14.4.7 Mapping open systems host disks to storage unit volumes


When you have assigned volumes to an open systems host, and you have then installed the
DS CLI on this host, you can run the DS CLI command lshostvol on this host. This command
maps assigned LUNs to open systems host volume names.

In this section, we give examples for several operating systems. In each example, we assign
several logical volumes to an open systems host. We install DS CLI on this host. We log on to
this host and start DS CLI. It does not matter which HMC we connect to with the DS CLI. We
then issue the lshostvol command.

Important: The lshostvol command communicates only with the operating system of the
host on which the DS CLI is installed. You cannot run this command on one host to see the
attached disks of another host.

AIX: Mapping disks when using Multipath I/O


In Example 14-35, we have an AIX server that uses Multipath I/O (MPIO). We have two
volumes assigned to this host, 1800 and 1801. Because MPIO is used, we do not see the
number of paths.

In fact, from this display, it is not possible to tell if MPIO is even installed. You need to run the
pcmpath query device command to confirm the path count.



Example 14-35 lshostvol on an AIX host using MPIO
dscli> lshostvol
Disk Name Volume Id Vpath Name
==========================================
hdisk3 IBM.2107-1300819/1800 ---
hdisk4 IBM.2107-1300819/1801 ---

Note: If you use Open HyperSwap on a host, the lshostvol command may fail to show
any devices.

AIX: Mapping disks when Subsystem Device Driver is used


In Example 14-36, we have an AIX server that uses Subsystem Device Driver (SDD). We
have two volumes assigned to this host, 1000 and 1100. Each volume has four paths.

Example 14-36 lshostvol on an AIX host using SDD


dscli> lshostvol
Disk Name Volume Id Vpath Name
============================================================
hdisk1,hdisk3,hdisk5,hdisk7 IBM.2107-1300247/1000 vpath0
hdisk2,hdisk4,hdisk6,hdisk8 IBM.2107-1300247/1100 vpath1

Hewlett-Packard UNIX (HP-UX): Mapping disks when not using SDD


In Example 14-37, we have an HP-UX host that does not have SDD. We have two volumes
assigned to this host, 1105 and 1106.

Example 14-37 lshostvol on an HP-UX host that does not use SDD
dscli> lshostvol
Disk Name Volume Id Vpath Name
==========================================
c38t0d5 IBM.2107-7503461/1105 ---
c38t0d6 IBM.2107-7503461/1106

HP-UX or Solaris: Mapping disks when using SDD


In Example 14-38, we have a Solaris host that has SDD installed. Two volumes are assigned
to the host, 4205 and 4206 using two paths. The Solaris command iostat -En can also
produce similar information. The output of lshostvol on an HP-UX host looks exactly the
same, with each vpath made up of disks with controller, target, and disk (c-t-d) numbers.
However, the addresses used in the example for the Solaris host would not work in an HP-UX
system.

Attention: Current releases of HP-UX only support addresses up to 3FFF.

Example 14-38 lshostvol on a Solaris host that has SDD


dscli> lshostvol
Disk Name Volume Id Vpath Name
==================================================
c2t1d0s0,c3t1d0s0 IBM.2107-7520781/4205 vpath2
c2t1d1s0,c3t1d1s0 IBM.2107-7520781/4206 vpath1



Solaris: Mapping disks when not using SDD
In Example 14-39, we have a Solaris host that does not have SDD installed. It instead uses
an alternative multipathing product. We have two volumes assigned to this host, 4200 and
4201. Each volume has two paths. The Solaris command iostat -En can also produce
similar information.

Example 14-39 lshostvol on a Solaris host that does not have SDD
dscli> lshostvol
Disk Name Volume Id Vpath Name
==========================================
c6t1d0 IBM-2107.7520781/4200 ---
c6t1d1 IBM-2107.7520781/4201 ---
c7t2d0 IBM-2107.7520781/4200 ---
c7t2d1 IBM-2107.7520781/4201 ---

Windows: Mapping disks when not using SDD or using SDDDSM


In Example 14-40, we run lshostvol on a Windows host that does not use SDD or uses
SDDDSM. The disks are listed by Windows Disk number. If you want to know which disk is
associated with which drive letter, you need to look at the Windows Disk manager.

Example 14-40 lshostvol on a Windows host that does not use SDD or uses SDDDSM
dscli> lshostvol
Disk Name Volume Id Vpath Name
==========================================
Disk2 IBM.2107-7520781/4702 ---
Disk3 IBM.2107-75ABTV1/4702 ---
Disk4 IBM.2107-7520781/1710 ---
Disk5 IBM.2107-75ABTV1/1004 ---
Disk6 IBM.2107-75ABTV1/1009 ---
Disk7 IBM.2107-75ABTV1/100A ---
Disk8 IBM.2107-7503461/4702 ---

Windows: Mapping disks when using SDD


In Example 14-41, we run lshostvol on a Windows host that uses SDD. The disks are listed
by Windows Disk number. If you want to know which disk is associated with which drive letter,
you need to look at the Windows Disk manager.

Example 14-41 lshostvol on a Windows host that uses SDD
dscli> lshostvol
Disk Name Volume Id Vpath Name
============================================
Disk2,Disk2 IBM.2107-7503461/4703 Disk2
Disk3,Disk3 IBM.2107-7520781/4703 Disk3
Disk4,Disk4 IBM.2107-75ABTV1/4703 Disk4

14.5 Configuring the DS8000 storage for Count Key Data volumes


To configure the DS8000 storage for count key data (CKD) volumes, you follow almost exactly
the same steps as for fixed block (FB) volumes.

Note that there is one additional step, which is to create Logical Control Units (LCUs), as
displayed in the following list.



1. Create arrays.
2. Create CKD ranks.
3. Create CKD Extent Pools.
4. Optionally, create repositories for Track Space Efficient volumes.
5. Create LCUs.
6. Create CKD volumes.

You do not have to create volume groups or host connects for CKD volumes. If there are I/O
ports in Fibre Channel connection (FICON) mode, access to CKD volumes by FICON hosts is
granted automatically.

14.5.1 Create arrays


Array creation for CKD is exactly the same as for fixed block (FB). See 14.4.1, “Create arrays”
on page 317.

14.5.2 Ranks and Extent Pool creation


When creating ranks and Extent Pools, you need to specify -stgtype ckd, as shown in
Example 14-42.

Example 14-42 Rank and Extent Pool creation for CKD


dscli> mkrank -array A0 -stgtype ckd
CMUC00007I mkrank: Rank R0 successfully created.
dscli> lsrank
ID Group State datastate Array RAIDtype extpoolID stgtype
==============================================================
R0 - Unassigned Normal A0 6 - ckd
dscli> mkextpool -rankgrp 0 -stgtype ckd CKD_High_0
CMUC00000I mkextpool: Extent Pool P0 successfully created.
dscli> chrank -extpool P2 R0
CMUC00008I chrank: Rank R0 successfully modified.
dscli> lsextpool
Name ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvol
===========================================================================================
CKD_High_0 2 ckd 0 below 252 0 287 0 0

Create a Space Efficient repository for CKD Extent Pools


If the DS8000 has the IBM FlashCopy SE feature, you can create Track Space Efficient (TSE)
volumes that can be used as FlashCopy targets. Before you can create TSE volumes, you
must create a Space Efficient repository in the Extent Pool. The repository provides space to
store the data associated with TSE logical volumes. Only one repository is allowed per Extent
Pool. A repository has a physical capacity that is available for storage allocations by TSE
volumes and a virtual capacity that is the sum of LUN/volume sizes of all Space Efficient
volumes. The physical repository capacity is allocated when the repository is created. If there
are several ranks in the Extent Pool, the repository’s extents are striped across the ranks
(Storage Pool Striping).

Space Efficient repository creation for CKD Extent Pools is identical to that of FB Extent
Pools, with the exception that the size of the repository’s real capacity and virtual capacity are
expressed either in cylinders or as multiples of 3390 model 1 disks (the default for CKD
Extent Pools), instead of in GB or blocks, which apply to FB Extent Pools only.

Example 14-43 shows the creation of a repository.



Example 14-43 Creating a Space Efficient repository for CKD volumes
dscli> mksestg -repcap 100 -vircap 200 -extpool p1
CMUC00342I mksestg: The space-efficient storage for the Extent Pool P1 has been
created successfully.

You can obtain information about the repository with the showsestg command. Example 14-44
shows the output of the showsestg command. You might particularly be interested in how
much capacity is used in the repository; to obtain this information, check the repcapalloc
value.

Example 14-44 Getting information about a Space Efficient CKD repository


dscli> showsestg p1
extpool P1
stgtype ckd
datastate Normal
configstate Normal
repcapstatus below
%repcapthreshold 0
repcap(GiB) 88.1
repcap(Mod1) 100.0
repcap(blocks) -
repcap(cyl) 111300
repcapalloc(GiB/Mod1) 0.0
%repcapalloc 0
vircap(GiB) 176.2
vircap(Mod1) 200.0
vircap(blocks) -
vircap(cyl) 222600
vircapalloc(GiB/Mod1) 0.0
%vircapalloc 0
overhead(GiB/Mod1) 4.0
reqrepcap(GiB/Mod1) 100.0
reqvircap(GiB/Mod1) 200.0

Note that some additional storage is allocated for the repository beyond the repcap size. In
Example 14-44, the line that starts with overhead indicates that 4 GB has been allocated in
addition to the repcap size.

A repository can be deleted by using the rmsestg command.

Important: In the current implementation, it is not possible to expand a repository. The
physical size or the virtual size of the repository cannot be changed. Therefore, careful
planning is required. If you have to expand a repository, you must delete all TSE volumes
and the repository itself and then create a new repository.

14.5.3 Logical control unit creation


When creating volumes for a CKD environment, you must create logical control units (LCUs)
before creating the volumes. In Example 14-45, you can see what happens if you try to create
a CKD volume without creating an LCU first.

Example 14-45 Trying to create CKD volumes without an LCU


dscli> mkckdvol -extpool p2 -cap 262668 -name ITSO_EAV1_#h C200
CMUN02282E mkckdvol: C200: Unable to create CKD logical volume: CKD volumes require a CKD
logical subsystem.



We must use the mklcu command first. The format of the command is:
mklcu -qty XX -id XX -ss XX

To display the LCUs that we have created, we use the lslcu command.

In Example 14-46, we create two LCUs using the mklcu command, and then list the created
LCUs using the lslcu command. Note that by default the LCUs that were created are 3990-6.

Example 14-46 Creating a logical control unit with mklcu


dscli> mklcu -qty 2 -id BC -ss BC00
CMUC00017I mklcu: LCU BC successfully created.
CMUC00017I mklcu: LCU BD successfully created.
dscli> lslcu
ID Group addrgrp confgvols subsys conbasetype
=============================================
BC 0 C 0 0xBC00 3990-6
BD 1 C 0 0xBC01 3990-6

Also note that because we created two LCUs (using the parameter -qty 2), the first LCU,
which is ID BC (an even number), belongs to rank group 0, and the second LCU, which is
ID BD (an odd number), belongs to rank group 1. By placing the LCUs into both rank groups,
we maximize performance by spreading the workload across both rank groups (servers) of
the DS8000.

Note: For the DS8000, the CKD LCUs can be ID 00 to ID FE. The LCUs fit into one of 16
address groups. Address group 0 is LCUs 00 to 0F, address group 1 is LCUs 10 to 1F, and
so on. If you create a CKD LCU in an address group, then that address group cannot be
used for FB volumes. Likewise, if there were, for example, FB volumes in LSS 40 to 4F
(address group 4), then that address group cannot be used for CKD. Be aware of this
limitation when planning the volume layout in a mixed FB/CKD DS8000.
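
As an aid to planning, both placement values can be derived directly from the hexadecimal
LCU ID: the high-order digit gives the address group, and the even or odd low-order bit gives
the rank group. The following Python sketch (for illustration only; it is not part of the DS CLI)
computes both values:

def lcu_placement(lcu_id: str):
    """Derive the address group and rank group from a two-digit hex LCU ID."""
    value = int(lcu_id, 16)        # for example, "BC" -> 0xBC
    address_group = value >> 4     # high-order digit: LCUs x0 to xF share one group
    rank_group = value & 1         # even IDs -> rank group 0, odd IDs -> rank group 1
    return address_group, rank_group

print(lcu_placement("BC"))   # (11, 0): address group 0xB, rank group 0
print(lcu_placement("BD"))   # (11, 1): address group 0xB, rank group 1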

14.5.4 Create CKD volumes


Having created an LCU, we can now create CKD volumes by using the mkckdvol command.
The format of the mkckdvol command is:
mkckdvol -extpool P2 -cap 262668 -datatype 3390-A -eam rotatevols -name
ITSO_EAV1_#h BC06

The major difference to note here is that the capacity is expressed either in cylinders or in
CKD extents (one extent equals 1,113 cylinders). To avoid wasting space, use volume
capacities that are a multiple of 1,113 cylinders. DS8000 Licensed Machine Code 5.4.xx.xx
and later also supports Extended Address Volumes (EAV). This support expands the maximum
size of a CKD volume to 262,668 cylinders and introduces a new device type, 3390 Model A.
This volume type can only be used by IBM z/OS systems running V1.10 or later.

Note: For 3390-A volumes, the size can be specified from 1 to 65,520 in increments of 1
and from 65,667 (next multiple of 1113) to 262,668 in increments of 1113.
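
As an illustration only (this is not a DS CLI function), the following Python sketch checks
whether a requested capacity is valid for a 3390 Model A volume according to the rule in the
note above; sizes above 65,520 cylinders must be whole multiples of 1,113 cylinders, which
keeps the allocated extents fully used:

CKD_EXTENT_CYLS = 1113   # one CKD extent equals 1,113 cylinders

def check_3390a_capacity(cylinders: int) -> bool:
    """Return True if the capacity is valid for a 3390 Model A volume."""
    if 1 <= cylinders <= 65520:
        return True                                  # any size in 1-cylinder steps
    if 65667 <= cylinders <= 262668:
        return cylinders % CKD_EXTENT_CYLS == 0      # must be a multiple of 1,113
    return False

print(check_3390a_capacity(262668))   # True: 236 extents, no wasted space
print(check_3390a_capacity(10017))    # True: 9 extents, no wasted space
print(check_3390a_capacity(70000))    # False: above 65,520 but not a multiple of 1,113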

In Example 14-47, we create a single 3390-A volume using 262,668 cylinders.

Example 14-47 Creating CKD volumes using mkckdvol


dscli> mkckdvol -extpool P2 -cap 262668 -datatype 3390-A -eam rotatevols -name ITSO_EAV1_#h BC06
CMUC00021I mkckdvol: CKD Volume BC06 successfully created.



dscli> lsckdvol
Name ID accstate datastate configstate deviceMTM voltype orgbvols extpool cap (cyl)
================================================================================================
ITSO_BC00 BC00 Online Normal Normal 3390-9 CKD Base - P2 10017
ITSO_BC01 BC01 Online Normal Normal 3390-9 CKD Base - P2 10017
ITSO_BC02 BC02 Online Normal Normal 3390-9 CKD Base - P2 10017
ITSO_BC03 BC03 Online Normal Normal 3390-9 CKD Base - P2 10017
ITSO_BC04 BC04 Online Normal Normal 3390-9 CKD Base - P2 10017
ITSO_BC05 BC05 Online Normal Normal 3390-9 CKD Base - P2 10017
ITSO_EAV1_BC06 BC06 Online Normal Normal 3390-A CKD Base - P2 262668
ITSO_BD00 BD00 Online Normal Normal 3390-9 CKD Base - P3 10017
ITSO_BD01 BD01 Online Normal Normal 3390-9 CKD Base - P3 10017
ITSO_BD02 BD02 Online Normal Normal 3390-9 CKD Base - P3 10017
ITSO_BD03 BD03 Online Normal Normal 3390-9 CKD Base - P3 10017
ITSO_BD04 BD04 Online Normal Normal 3390-9 CKD Base - P3 10017
ITSO_BD05 BD05 Online Normal Normal 3390-9 CKD Base - P3 10017

Remember, we can only create CKD volumes in LCUs that we have already created.

You also need to be aware that volumes in even numbered LCUs must be created from an
Extent Pool that belongs to rank group 0. Volumes in odd numbered LCUs must be created
from an Extent Pool in rank group 1.

Storage pool striping


When creating a volume, you have a choice about how the volume is allocated in an Extent
Pool with several ranks. The extents of a volume can be kept together in one rank (as long as
there is enough free space on that rank). The next rank is used when the next volume is
created. This allocation method is called rotate volumes.

You can also specify that you want the extents of the volume to be evenly distributed across
all ranks within the Extent Pool. This allocation method is called rotate extents.

The extent allocation method is specified with the -eam rotateexts or -eam rotatevols
option of the mkckdvol command (see Example 14-48).

Note: In DS8800 with Licensed Machine Code (LMC) 6.6.xxx, the default allocation policy
has changed to rotate extents.

Example 14-48 Creating a CKD volume with Extent Pool striping


dscli> mkckdvol -extpool p4 -cap 10017 -name ITSO-CKD-STRP -eam rotateexts 0080
CMUC00021I mkckdvol: CKD Volume 0080 successfully created.

The showckdvol command with the -rank option (see Example 14-49) shows that the volume
we created is distributed across two ranks, and it also displays how many extents on each
rank were allocated for this volume.

Example 14-49 Getting information about a striped CKD volume


dscli> showckdvol -rank 0080
Name ITSO-CKD-STRP
ID 0080
accstate Online
datastate Normal
configstate Normal
deviceMTM 3390-9
volser -



datatype 3390
voltype CKD Base
orgbvols -
addrgrp 0
extpool P4
exts 9
cap (cyl) 10017
cap (10^9B) 8.5
cap (2^30B) 7.9
ranks 2
sam Standard
repcapalloc -
eam rotateexts
reqcap (cyl) 10017
==============Rank extents==============
rank extents
============
R4 4
R30 5

Track Space Efficient volumes


When your DS8000 has the IBM FlashCopy SE feature, you can create Track Space Efficient
(TSE) volumes to be used as FlashCopy target volumes. A repository must exist in the Extent
Pool where you plan to allocate TSE volumes (see “Create a Space Efficient repository for
CKD Extent Pools” on page 332).

A Track Space Efficient volume is created by specifying the -sam tse parameter with the
mkckdvol command (see Example 14-50).

Example 14-50 Creating a Space Efficient CKD volume


dscli> mkckdvol -extpool p4 -cap 10017 -name ITSO-CKD-SE -sam tse 0081
CMUC00021I mkckdvol: CKD Volume 0081 successfully created.

When listing Space Efficient repositories with the lssestg command (see Example 14-51),
we can see that in Extent Pool P4 we have a virtual allocation of 7.9 GB, but that the allocated
(used) capacity repcapalloc is still zero.

Example 14-51 Obtaining information about Space Efficient CKD repositories


dscli> lssestg -l
extentpoolID stgtype datastate configstate repcapstatus %repcapthreshold repcap (2^30B) vircap repcapalloc vircapalloc
======================================================================================================================
P4 ckd Normal Normal below 0 100.0 200.0 0.0 7.9

This allocation comes from the volume just created. To see the allocated space in the
repository for just this volume, we can use the showckdvol command (see Example 14-52).

Example 14-52 Checking the repository usage for a CKD volume


dscli> showckdvol 0081
Name ITSO-CKD-SE
ID 0081
accstate Online
datastate Normal
configstate Normal
deviceMTM 3390-9
volser -



datatype 3390
voltype CKD Base
orgbvols -
addrgrp 0
extpool P4
exts 9
cap (cyl) 10017
cap (10^9B) 8.5
cap (2^30B) 7.9
ranks 0
sam TSE
repcapalloc 0
eam -
reqcap (cyl) 10017

Dynamic Volume Expansion


A volume can be expanded without having to remove the data within the volume. You can
specify a new capacity by using the chckdvol command (see Example 14-53). The new
capacity must be larger than the previous one; you cannot shrink the volume.

Example 14-53 Expanding a striped CKD volume


dscli> chckdvol -cap 30051 0080
CMUC00332W chckdvol: Some host operating systems do not support changing the
volume size. Are you sure that you want to resize the volume? [y/n]: y
CMUC00022I chckdvol: CKD Volume 0080 successfully modified.

Because the original volume had the rotateexts attribute, the additional extents are also
striped (see Example 14-54).

Example 14-54 Checking the status of an expanded CKD volume


dscli> showckdvol -rank 0080
Name ITSO-CKD-STRP
ID 0080
accstate Online
datastate Normal
configstate Normal
deviceMTM 3390-9
volser -
datatype 3390
voltype CKD Base
orgbvols -
addrgrp 0
extpool P4
exts 27
cap (cyl) 30051
cap (10^9B) 25.5
cap (2^30B) 23.8
ranks 2
sam Standard
repcapalloc -
eam rotateexts
reqcap (cyl) 30051
==============Rank extents==============



rank extents
============
R4 13
R30 14

Note: Before you can expand a volume, you must first delete all Copy Services
relationships for that volume. Also, you cannot specify both -cap and -datatype in the
same chckdvol command.

It is possible to expand a 3390 Model 9 volume to a 3390 Model A. You can do that just by
specifying a new capacity for an existing Model 9 volume. When you increase the size of a
3390-9 volume beyond 65,520 cylinders, its device type automatically changes to 3390-A.
However, keep in mind that a 3390 Model A can only be used in z/OS V1.10 and later (see
Example 14-55).

Example 14-55 Expanding a 3390 to a 3390-A


*** Command to show CKD volume definition before expansion:

dscli> showckdvol BC07


Name ITSO_EAV2_BC07
ID BC07
accstate Online
datastate Normal
configstate Normal
deviceMTM 3390-9
volser -
datatype 3390
voltype CKD Base
orgbvols -
addrgrp B
extpool P2
exts 9
cap (cyl) 10017
cap (10^9B) 8.5
cap (2^30B) 7.9
ranks 1
sam Standard
repcapalloc -
eam rotatevols
reqcap (cyl) 10017

*** Command to expand CKD volume from 3390-9 to 3390-A:

dscli> chckdvol -cap 262668 BC07


CMUC00332W chckdvol: Some host operating systems do not support changing the volume size.
Are you sure that you want to resize the volu
me? [y/n]: y
CMUC00022I chckdvol: CKD Volume BC07 successfully modified.

*** Command to show CKD volume definition after expansion:

dscli> showckdvol BC07


Name ITSO_EAV2_BC07
ID BC07
accstate Online
datastate Normal
configstate Normal
deviceMTM 3390-A



volser -
datatype 3390-A
voltype CKD Base
orgbvols -
addrgrp B
extpool P2
exts 236
cap (cyl) 262668
cap (10^9B) 223.3
cap (2^30B) 207.9
ranks 1
sam Standard
repcapalloc -
eam rotatevols
reqcap (cyl) 262668

You cannot reduce the size of a volume. If you try, an error message is displayed, as shown in
Example 14-56.

Example 14-56 Reducing a volume size


dscli> chckdvol -cap 10017 BC07
CMUC00332W chckdvol: Some host operating systems do not support changing the volume size.
Are you sure that you want to resize the volume? [y/n]: y
CMUN02541E chckdvol: BC07: The expand logical volume task was not initiated because the
logical volume capacity that you have requested is less than the current logical volume
capacity.

Deleting volumes
CKD volumes can be deleted by using the rmckdvol command. FB volumes can be deleted
by using the rmfbvol command.

Starting with Licensed Machine Code (LMC) level 6.5.1.xx, the command includes a
capability to prevent the accidental deletion of volumes that are in use. A CKD volume is
considered to be in use if it is participating in a Copy Services relationship, or if the IBM
System z path mask indicates that the volume is in a “grouped state” or online to any host
system. An FB volume is considered to be in use if it is participating in a Copy Services
relationship, or if the volume has received any I/O in the last five minutes.

If the -force parameter is not specified with the command, volumes that are in use are not
deleted. If multiple volumes are specified and some are in use and some are not, the ones not
in use will be deleted. If the -force parameter is specified on the command, the volumes will
be deleted without checking to see whether or not they are in use.

In Example 14-57, we try to delete two volumes, 0900 and 0901. Volume 0900 is online to a
host, while 0901 is not online to any host and not in a Copy Services relationship. The
rmckdvol 0900-0901 command deletes just volume 0901, which is offline. To delete volume
0900, we use the -force parameter.

Example 14-57 Deleting CKD volumes


dscli> lsckdvol 0900-0901
Name ID accstate datastate configstate deviceMTM voltype orgbvols extpool cap (cyl)
========================================================================================
ITSO_J 0900 Online Normal Normal 3390-9 CKD Base - P1 10017
ITSO_J 0901 Online Normal Normal 3390-9 CKD Base - P1 10017

dscli> rmckdvol -quiet 0900-0901



CMUN02948E rmckdvol: 0900: The Delete logical volume task cannot be initiated because the
Allow Host Pre-check Control Switch is set to true and the volume that you have specified
is online to a host.
CMUC00024I rmckdvol: CKD volume 0901 successfully deleted.

dscli> lsckdvol 0900-0901


Name ID accstate datastate configstate deviceMTM voltype orgbvols extpool cap (cyl)
========================================================================================
ITSO_J 0900 Online Normal Normal 3390-9 CKD Base - P1 10017

dscli> rmckdvol -force 0900


CMUC00023W rmckdvol: Are you sure you want to delete CKD volume 0900? [y/n]: y
CMUC00024I rmckdvol: CKD volume 0900 successfully deleted.

dscli> lsckdvol 0900-0901


CMUC00234I lsckdvol: No CKD Volume found.



Part 4. Maintenance and upgrades
The topics covered in this part include:
򐂰 Licensed machine code
򐂰 Monitoring with Simple Network Management Protocol
򐂰 Remote support
򐂰 Capacity upgrades and CoD




Chapter 15. Licensed machine code


In this chapter, we discuss considerations related to the planning and installation of new
licensed machine code (LMC) bundles on the IBM System Storage DS8800 series. We cover
the following topics in this chapter:
򐂰 How new microcode is released
򐂰 Bundle installation
򐂰 Concurrent and non-concurrent updates
򐂰 Code updates
򐂰 Host adapter firmware updates
򐂰 Loading the code bundle
򐂰 Post-installation activities
򐂰 Summary



15.1 How new microcode is released
The various components of the DS8800 system use firmware that can be updated as new
releases become available. These components include device adapters, host adapters,
power supplies, and Fibre Channel interface cards. In addition, the microcode and internal
operating system that run on the HMCs and each central processor complex (CEC) can be
updated. As IBM continues to develop the DS8800, new functional features will also be
released through new licensed machine code (LMC) levels.

When IBM releases new microcode for the DS8800, it is released in the form of a bundle. The
term bundle is used because a new code release can include updates for various DS8800
components. These updates are tested together and then the various code packages are
bundled together into one unified release. In general, when referring to what code level is
being used on a DS8800, the term bundle should be used. Components within the bundle will
each have their own revision levels.

For a DS8000 Cross-Reference table of Code Bundles, visit the following site:
http://www.ibm.com/
򐂰 Click Support & Downloads → Support by Product → Storage
򐂰 Select 1. Choose your products → Disk systems → Enterprise Storage Servers → DS8800
򐂰 Select 2. Choose your task → Downloads
򐂰 Select 3. See your results → View your page
򐂰 Click DS8800 Code Bundle Information

The Cross-Reference Table shows the levels of code for Release 6, which is installed on the
DS8800. It should be updated as new bundles are released. It is important to always match
your DS CLI version to the bundle installed on your DS8800.

For the DS8800, the naming convention of bundles is PR.MM.AA.E, where the letters refer to:
P Product (8 = DS8800)
R Release (6)
MM Maintenance Level (xx)
AA Service Pack (xx)
E EFIX level (0 is base, and 1.n is the interim fix build above base level.)
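
For illustration only (this is not an IBM-provided tool), the following Python sketch splits a
bundle identifier of the form PR.MM.AA.E into its fields; the sample value used here is
hypothetical and only shows the field layout:

def parse_bundle(bundle: str) -> dict:
    """Split a DS8800 code bundle name of the form PR.MM.AA.E into its parts."""
    pr, mm, aa, e = bundle.split(".", 3)   # maxsplit keeps an interim-fix level such as 1.n intact
    return {
        "product": pr[0],          # 8 = DS8800
        "release": pr[1:],         # 6 for Release 6
        "maintenance_level": mm,
        "service_pack": aa,
        "efix_level": e,           # 0 is base, 1.n is an interim fix above base
    }

print(parse_bundle("86.10.25.0"))   # hypothetical bundle name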

15.2 Bundle installation

Important: Licensed Machine Code is always provided and installed by IBM Service
Engineers. Licensed Machine Code is not a client-serviceable task.

It is likely that a new bundle will include updates for the following components:
򐂰 Linux OS for the HMC
򐂰 AIX OS for the CECs
򐂰 Microcode for HMC and CECs
򐂰 Microcode/Firmware for Host Adapters



It is less likely that a bundle includes updates for the following components:
򐂰 Firmware for Power subsystem (PPS, RPC, and BBU)
򐂰 Firmware for Storage DDMs
򐂰 Firmware for Fibre Channel interface cards
򐂰 Firmware for Device Adapters
򐂰 Firmware for Hypervisor on CECs

The installation process involves several stages:


1. Update the HMC code. The new code version is supplied on CD or downloaded using FTP
or SFTP (Secure File Transfer). This can potentially involve updates to the internal Linux
version of the HMC, updates to the HMC licensed machine code, and updates to the
firmware of the HMC hardware.
2. Load new DS8800 licensed machine code (LMC) onto the HMC and from there to the
internal storage of each CEC.
3. Occasionally, new Primary Power Supply (PPS) and Rack Power Control (RPC) firmware
is released. New firmware can be loaded into each RPC card and PPS directly from the
HMC. Each RPC and PPS is quiesced, updated, and resumed one at a time until all of
them have been updated. There are usually no service interruptions for power updates.
4. Occasionally, new firmware for the Hypervisor, service processor, system planar, and I/O
enclosure planars is released. This firmware can be loaded into each device directly from
the HMC. Activation of this firmware might require a shutdown and reboot of each CEC,
one at a time. This would cause each CEC to fail over its logical subsystems to the
alternate CEC. Certain updates do not require this step, or it might occur without
processor reboots. See 4.3, “CEC failover and failback” on page 63 for more information.
5. Perform updates to the CEC operating system (currently AIX V6.1), plus updates to the
internal LMC, performed one at a time. The updates cause each CEC to fail over its logical
subsystems to the alternate CEC. This process also updates the firmware running in each
device adapter owned by that CEC.
6. Perform updates to the host adapters. For DS8800 host adapters, the impact of these
updates on each adapter is less than 2.5 seconds and should not affect connectivity. If an
update were to take longer than this, the multipathing software on the host, or Control
Unit-Initiated Reconfiguration (CUIR), would direct I/O to a different host adapter. If a host is
attached with only a single path, connectivity would be lost. See 4.4.2, “Host connections”
on page 68 for more information about host attachments.
7. Very occasionally, new DDM firmware is released. New firmware can be loaded
concurrently to the drives.

While the installation process described above might seem complex, it does not require a
great deal of user intervention. The code installer normally starts the distribution and
activation process and then monitors its progress using the HMC.

Important: An upgrade of the DS8800 microcode might require that you upgrade the DS
CLI on workstations. Check with your IBM representative regarding the description and
contents of the release bundle.



15.3 Concurrent and non-concurrent updates
The DS8800 allows for concurrent microcode updates. This means that code updates can be
installed with all attached hosts up and running with no interruption to your business
applications. It is also possible to install microcode update bundles non-concurrently, with all
attached hosts shut down. However, this should not be necessary. This method is usually only
employed at DS8800 installation time.

15.4 Code updates


The microcode that runs on the HMC normally gets updated as part of a new code bundle.
The HMC can hold up to six different versions of code. Each CEC can hold three different
versions of code (the previous version, the active version, and the next version). Most
organizations should plan for two code updates per year.

Best practice: Many clients with multiple DS8800 systems follow the updating schedule
detailed here, wherein the HMC is updated 1 to 2 days before the rest of the bundle is
applied.

Prior to the update of the CEC operating system and microcode, a pre-verification test is run
to ensure that no conditions exist that need to be corrected. The HMC code update will install
the latest version of the pre-verification test. Then the newest test can be run and if problems
are detected, there are one to two days before the scheduled code installation window to
correct them. An example of this procedure is illustrated here:
Thursday   Copy or download the new code bundle to the HMCs.
           Update the HMC(s) to the new code bundle.
           Run the updated pre-verification test.
           Resolve any issues raised by the pre-verification test.
Saturday   Update the SFIs.

Note that the actual time required for the concurrent code load varies based on the bundle
that you are currently running and the bundle to which you are updating. Always consult with
your IBM service representative regarding proposed code load schedules.

Additionally, it is good practice to check at regular intervals that multipathing drivers and
SAN switch firmware are at current levels.

15.5 Host adapter firmware updates


One of the final steps in the concurrent code load process is updating the host adapters.
Normally, every code bundle contains new host adapter code. For DS8800 Fibre Channel
cards, regardless of whether they are used for open systems attachment or System z
(FICON) attachment, the update process is concurrent to the attached hosts. The Fibre
Channel cards use a technique known as adapter fast-load. This allows them to switch to the
new firmware in less than two seconds. This fast update means that single-path hosts, hosts
that boot from the SAN (fiber boot), and hosts that do not have multipathing software do not
need to be shut down during the update; they can keep operating during the host adapter
update because the update is so fast. This also means that no SDD path management should
be necessary.



Remote Mirror and Copy path considerations
For Remote Mirror and Copy paths that use Fibre Channel ports, there are no special
considerations. The ability to perform a fast-load means that no interruption occurs to the
Remote Mirror operations.

Control Unit-Initiated Reconfiguration


Control Unit-Initiated Reconfiguration (CUIR) prevents loss of access to volumes in System z
environments due to incorrect path handling. This function automates channel path
management in System z environments in support of selected DS8800 service actions.
Control Unit-Initiated Reconfiguration is available for the DS8800 when operated in the z/OS
and z/VM environments. The CUIR function automates channel path vary on and vary off
actions to minimize manual operator intervention during selected DS8800 service actions.

CUIR allows the DS8800 to request that all attached system images set all paths required for
a particular service action to the offline state. System images with the appropriate level of
software support respond to these requests by varying off the affected paths, and either
notifying the DS8800 subsystem that the paths are offline, or that it cannot take the paths
offline. CUIR reduces manual operator intervention and the possibility of human error during
maintenance actions, at the same time reducing the time required for the maintenance
window. This is particularly useful in environments where there are many systems attached to
a DS8800.

15.6 Loading the code bundle


The DS8800 code bundle installation is performed by the IBM Service Engineer. Contact your
IBM service representative to discuss and arrange the required services.

15.7 Post-installation activities


After a new code bundle has been installed, you might need to perform the following tasks:
1. Upgrade the DS CLI of external workstations. For the majority of new release code
bundles, there is a corresponding new release of DS CLI. Make sure you upgrade to the
new version of DS CLI to take advantage of any improvements IBM has made.
2. Verify the connectivity from each DS CLI workstation to the DS8800.
3. Verify the connectivity from the SSPC to the DS8800.
4. Verify the connectivity from any stand-alone TPC Element Manager to the DS8800.
5. Verify the connectivity from the DS8800 to all TKLM Key Servers in use.

15.8 Summary
IBM might release changes to the DS8800 series Licensed Machine Code. These changes
may include code fixes and feature updates relevant to the DS8800.

These updates and the information regarding them are detailed on the DS8000 Code
Cross-Reference website as previously mentioned.

It is important that the Code Bundle installations are planned and coordinated to ensure
connectivity is maintained to the DS8800 system; this includes the DS CLI and the SSPC.




Chapter 16. Monitoring with Simple Network Management Protocol
This chapter provides information about the Simple Network Management Protocol (SNMP)
notifications and messages for the IBM System Storage DS8000 series. This chapter covers
the following topics:
򐂰 Simple Network Management Protocol overview
򐂰 SNMP notifications



16.1 Simple Network Management Protocol overview
SNMP has become a standard for monitoring an IT environment. With SNMP, a system can
be monitored, and event management, based on SNMP traps, can be automated.

SNMP is an industry-standard set of functions for monitoring and managing TCP/IP-based
networks. SNMP includes a protocol, a database specification, and a set of data objects. A
set of data objects forms a Management Information Base (MIB).

SNMP provides a standard MIB that includes information such as IP addresses and the
number of active TCP connections. The actual MIB definitions are encoded into the agents
running on a system.

MIB-2 is the Internet standard MIB that defines over 100 TCP/IP specific objects, including
configuration and statistical information, such as:
򐂰 Information about interfaces
򐂰 Address translation
򐂰 IP, Internet-control message protocol (ICMP), TCP, and User Datagram Protocol (UDP)

SNMP can be extended through the use of the SNMP Multiplexing protocol (SMUX protocol)
to include enterprise-specific MIBs that contain information related to a specific environment
or application. A management agent (a SMUX peer daemon) retrieves and maintains
information about the objects defined in its MIB and passes this information on to a
specialized network monitor or network management station (NMS).

The SNMP protocol defines two terms, agent and manager, instead of the terms client and
server, which are used in many other TCP/IP protocols.

16.1.1 SNMP agent


An SNMP agent is a daemon process that provides access to the MIB objects on IP hosts on
which the agent is running. The agent can receive SNMP get or SNMP set requests from
SNMP managers and can send SNMP trap requests to SNMP managers.

Agents send traps to the SNMP manager to indicate that a particular condition exists on the
agent system, such as the occurrence of an error. In addition, the SNMP manager generates
traps when it detects status changes or other unusual conditions while polling network
objects.

16.1.2 SNMP manager


An SNMP manager can be implemented in two ways: as a simple command tool that collects
information from SNMP agents, or as a set of multiple daemon processes and database
applications. This type of complex SNMP manager provides monitoring functions using SNMP
and typically has a graphical user interface for operators. The SNMP manager gathers
information from SNMP agents and accepts trap requests sent by SNMP agents.

16.1.3 SNMP trap


A trap is a message sent from an SNMP agent to an SNMP manager without a specific
request from the SNMP manager.



SNMP defines six generic types of traps and allows definition of enterprise-specific traps. The
trap structure conveys the following information to the SNMP manager:
򐂰 Agent’s object that was affected
򐂰 IP address of the agent that sent the trap
򐂰 Event description (either a generic trap or enterprise-specific trap, including trap number)
򐂰 Time stamp
򐂰 Optional enterprise-specific trap identification
򐂰 List of variables describing the trap

16.1.4 SNMP communication


The SNMP manager sends SNMP get, get-next, or set requests to SNMP agents, which
listen on UDP port 161, and the agents send back a reply to the manager. The SNMP agent
can be implemented on any kind of IP host, such as UNIX workstations, routers, and network
appliances.

You can gather a variety of information about specific IP hosts by sending the SNMP get and
get-next requests, and can update the configuration of IP hosts by sending the SNMP set
request.

The SNMP agent can send SNMP trap requests to SNMP managers, which listen on UDP
port 162. The SNMP trap requests sent from SNMP agents can be used to send warning,
alert, or error notification messages to SNMP managers.

Note that you can configure an SNMP agent to send SNMP trap requests to multiple SNMP
managers. Figure 16-1 illustrates the characteristics of SNMP architecture and
communication.

Figure 16-1 SNMP architecture and communication



16.1.5 Generic SNMP security
The SNMP protocol uses the community name for authorization. Most SNMP
implementations use the default community name public for a read-only community and
private for a read-write community. In most cases, a community name is sent in a plain-text
format between the SNMP agent and the manager. Some SNMP implementations have
additional security features, such as the restriction of the accessible IP addresses.

Therefore, you should be careful about SNMP security. At the very least, do not allow
access to hosts that are running the SNMP agent from networks or IP hosts that do not
require access.

You might want to physically secure the network to which you send SNMP packets by using a
firewall, because community strings are included as plain text in SNMP packets.

16.1.6 Management Information Base

The objects, which you can get or set by sending SNMP get or set requests, are defined as a
set of databases called the Management Information Base (MIB). The structure of the MIB is
defined as an Internet standard in RFC 1155; the MIB forms a tree structure.

Most hardware and software vendors provide you with extended MIB objects to support their
own requirements. The SNMP standards allow this extension by using the private sub-tree,
called enterprise specific MIB. Because each vendor has a unique MIB sub-tree under the
private sub-tree, there is no conflict among vendors’ original MIB extensions.

16.1.7 SNMP trap request


An SNMP agent can send SNMP trap requests to SNMP managers to inform them about the
change of values or status on the IP host where the agent is running. There are seven
predefined types of SNMP trap requests, as shown in Table 16-1.

Table 16-1 SNMP trap request types


Trap type Value Description

coldStart 0 Restart after a crash.

warmStart 1 Planned restart.

linkDown 2 Communication link is down.

linkUp 3 Communication link is up.

authenticationFailure 4 Invalid SNMP community string was used.

egpNeighborLoss 5 EGP neighbor is down.

enterpriseSpecific 6 Vendor-specific event happened.

A trap message contains pairs of an OID and a value, as shown in Table 16-1, to indicate the
cause of the trap. You can use type 6, the enterpriseSpecific trap type, when you have to send
messages that do not fit the other predefined trap types, for example, DISK I/O error or
application down. You can also set an integer value field called Specific Trap on your trap
message.



16.1.8 DS8000 SNMP configuration
SNMP for the DS8000 is designed in such a way that the DS8000 only sends traps in case of
a notification. The traps can be sent to a defined IP address.

The DS8000 does not have an SNMP agent installed that can respond to SNMP polling. The
default Community Name is set to public.

The management server that is configured to receive the SNMP traps receives all the generic
trap 6 and specific trap 3 messages, which are sent in parallel with the Call Home to IBM.

Before configuring SNMP for the DS8000, you are required to get the destination address for
the SNMP trap and also the port information on which the Trap Daemon listens.

Tip: The standard port for SNMP traps is port 162.
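
Because the DS8000 only sends traps and does not answer SNMP polling, the receiving side
simply needs a trap daemon listening on the configured UDP port. The following minimal
Python sketch is not an IBM tool; it only confirms that trap datagrams arrive at the
management station by binding to UDP port 162 and printing the source of each packet. A
real SNMP manager would also decode the SNMPv1 PDU:

import socket

TRAP_PORT = 162   # standard SNMP trap port; binding to it usually requires administrator rights

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", TRAP_PORT))
print(f"Waiting for SNMP traps on UDP port {TRAP_PORT} ...")

while True:
    datagram, (source_ip, source_port) = sock.recvfrom(4096)
    # Only the raw SNMPv1 PDU is received here; a full manager would parse and display it.
    print(f"Received {len(datagram)} bytes from {source_ip}:{source_port}")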

16.2 SNMP notifications


The HMC of the DS8000 sends an SNMPv1 trap in two cases:
򐂰 A serviceable event was reported to IBM using Call Home.
򐂰 An event occurred in the Copy Services configuration or processing.

A serviceable event is posted as a generic trap 6 specific trap 3 message. The specific trap 3
is the only event that is sent for serviceable events. For reporting Copy Services events,
generic trap 6 and specific traps 100, 101, 102, 200, 202, and 210 through 221 are sent.

16.2.1 Serviceable event using specific trap 3


In Example 16-1, we see the contents of generic trap 6 specific trap 3. The trap holds the
information about the serial number of the DS8000, the event number that is associated with
the manageable events from the HMC, the reporting Storage Facility Image (SFI), the system
reference code (SRC), and the location code of the part that is logging the event.

The SNMP trap is sent in parallel with a Call Home for service to IBM.

Example 16-1 SNMP special trap 3 of an DS8000


Nov 14, 2005 5:10:54 PM CET
Manufacturer=IBM
ReportingMTMS=2107-922*7503460
ProbNm=345
LparName=null
FailingEnclosureMTMS=2107-922*7503460
SRC=10001510
EventText=2107 (DS 8000) Problem
Fru1Loc=U1300.001.1300885
Fru2Loc=U1300.001.1300885U1300.001.1300885-P1

For open events in the event log, a trap is sent every eight hours until the event is closed. Use
the following link to find explanations of all System Reference Codes (SRCs):
http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000sv/index.jsp



On this page, select Messages and codes → List of system reference codes and firmware
codes.

16.2.2 Copy Services event traps


For state changes in a remote Copy Services environment, a number of traps are implemented.
The traps 1xx are sent for a state change of a physical link connection. The 2xx traps are sent
for state changes in the logical Copy Services setup. For all of these events, no Call Home is
generated and IBM is not notified.

This chapter describes only the messages and the circumstances when traps are sent by the
DS8000. For detailed information about these functions and terms, refer to IBM System
Storage DS8000: Copy Services for IBM System z, SG24-6787 and IBM System Storage
DS8000: Copy Services for Open Systems, SG24-6788.

Physical connection events


Within the trap 1xx range, a state change of the physical links is reported. The trap is sent if
the physical remote copy link is interrupted. The Link trap is sent from the primary system.
The PLink and SLink columns are only used by the 2105 ESS disk unit.

If one or several links (but not all links) are interrupted, a trap 100, as shown in Example 16-2,
is posted and indicates that the redundancy is degraded. The RC column in the trap
represents the return code for the interruption of the link; return codes are listed in Table 16-2
on page 355.

Example 16-2 Trap 100: Remote Mirror and Copy links degraded
PPRC Links Degraded
UNIT: Mnf Type-Mod SerialNm LS
PRI: IBM 2107-922 75-20781 12
SEC: IBM 2107-9A2 75-ABTV1 24
Path: Type PP PLink SP SLink RC
1: FIBRE 0143 XXXXXX 0010 XXXXXX 15
2: FIBRE 0213 XXXXXX 0140 XXXXXX OK

If all links are interrupted, a trap 101, as shown in Example 16-3, is posted. This event
indicates that no communication between the primary and the secondary system is possible.

Example 16-3 Trap 101: Remote Mirror and Copy links are inoperable
PPRC Links Down
UNIT: Mnf Type-Mod SerialNm LS
PRI: IBM 2107-922 75-20781 10
SEC: IBM 2107-9A2 75-ABTV1 20
Path: Type PP PLink SP SLink RC
1: FIBRE 0143 XXXXXX 0010 XXXXXX 17
2: FIBRE 0213 XXXXXX 0140 XXXXXX 17

After the DS8000 can communicate again using any of the links, trap 102, as shown in
Example 16-4, is sent after one or more of the interrupted links are available again.

Example 16-4 Trap 102: Remote Mirror and Copy links are operational
PPRC Links Up
UNIT: Mnf Type-Mod SerialNm LS
PRI: IBM 2107-9A2 75-ABTV1 21



SEC: IBM 2107-000 75-20781 11
Path: Type PP PLink SP SLink RC
1: FIBRE 0010 XXXXXX 0143 XXXXXX OK
2: FIBRE 0140 XXXXXX 0213 XXXXXX OK

Table 16-2 lists the Remote Mirror and Copy return codes.

Table 16-2 Remote Mirror and Copy return codes


Return code   Description
02            Initialization failed. ESCON link reject threshold exceeded when
              attempting to send ELP or RID frames.
03            Timeout. No reason available.
04            There are no resources available in the primary storage unit for
              establishing logical paths because the maximum number of logical
              paths have already been established.
05            There are no resources available in the secondary storage unit for
              establishing logical paths because the maximum number of logical
              paths have already been established.
06            There is a secondary storage unit sequence number, or logical
              subsystem number, mismatch.
07            There is a secondary LSS subsystem identifier (SSID) mismatch, or
              failure of the I/O that collects the secondary information for
              validation.
08            The ESCON link is offline. This is caused by the lack of light
              detection coming from a host, peer, or switch.
09            The establish failed. It is retried until the command succeeds or a
              remove paths command is run for the path.
              Note: The attempt-to-establish state persists until the establish
              path operation succeeds or the remove remote mirror and copy paths
              command is run for the path.
0A            The primary storage unit port or link cannot be converted to channel
              mode if a logical path is already established on the port or link.
              The establish paths operation is not retried within the storage unit.
10            Configuration error. The source of the error is one of the following:
              򐂰 The specification of the SA ID does not match the installed ESCON
                adapter cards in the primary controller.
              򐂰 For ESCON paths, the secondary storage unit destination address is
                zero and an ESCON Director (switch) was found in the path.
              򐂰 For ESCON paths, the secondary storage unit destination address is
                not zero and an ESCON Director does not exist in the path. The
                path is a direct connection.
14            The Fibre Channel path link is down.
15            The maximum number of Fibre Channel path retry operations has been
              exceeded.
16            The Fibre Channel path secondary adapter is not Remote Mirror and
              Copy capable. This could be caused by one of the following conditions:
              򐂰 The secondary adapter is not configured properly or does not have
                the current firmware installed.
              򐂰 The secondary adapter is already a target of 32 different logical
                subsystems (LSSs).
17            The secondary adapter Fibre Channel path is not available.
18            The maximum number of Fibre Channel path primary login attempts has
              been exceeded.
19            The maximum number of Fibre Channel path secondary login attempts has
              been exceeded.
1A            The primary Fibre Channel adapter is not configured properly or does
              not have the correct firmware level installed.
1B            The Fibre Channel path was established but degraded due to a high
              failure rate.
1C            The Fibre Channel path was removed due to a high failure rate.

Remote Mirror and Copy events


If you have configured Consistency Groups and a volume within this Consistency Group is
suspended due to a write error to the secondary device, trap 200 (Example 16-5) is sent. One
trap is sent per LSS that is configured with the Consistency Group option. This trap can be
handled by automation software, such as TPC for Replication, to freeze this Consistency
Group. The SR column in the trap represents the suspension reason code, which explains the
cause of the error that suspended the remote mirror and copy group. Suspension reason
codes are listed in Table 16-3 on page 359.

Example 16-5 Trap 200: LSS Pair Consistency Group Remote Mirror and Copy pair error
LSS-Pair Consistency Group PPRC-Pair Error
UNIT: Mnf Type-Mod SerialNm LS LD SR
PRI: IBM 2107-922 75-03461 56 84 08
SEC: IBM 2107-9A2 75-ABTV1 54 84

Trap 202, as shown in Example 16-6, is sent if a Remote Copy Pair goes into a suspend state.
The trap contains the serial number (SerialNm) of the primary and secondary machine, the
logical subsystem or LSS (LS), and the logical device (LD). To avoid SNMP trap flooding, the
number of SNMP traps for the LSS is throttled. The complete suspended pair information is
represented in the summary. The last row of the trap represents the suspend state for all pairs
in the reporting LSS. The suspended pair information contains a hexadecimal string of a
length of 64 characters. By converting this hex string into binary, each bit represents a single
device. If the bit is 1, then the device is suspended; otherwise, the device is still in full duplex
mode.

Example 16-6 Trap 202: Primary Remote Mirror and Copy devices on the LSS were suspended
because of an error

Primary PPRC Devices on LSS Suspended Due to Error


UNIT: Mnf Type-Mod SerialNm LS LD SR



PRI: IBM 2107-922 75-20781 11 00 03
SEC: IBM 2107-9A2 75-ABTV1 21 00
Start: 2005/11/14 09:48:05 CST
PRI Dev Flags (1 bit/Dev, 1=Suspended):
C000000000000000000000000000000000000000000000000000000000000000
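
The suspended-device bitmap in trap 202 can be decoded mechanically. The following Python
sketch, shown only as an illustration, converts the 64-character hexadecimal string from the
trap into the list of device numbers within the reporting LSS that are flagged as suspended:

def suspended_devices(hex_mask: str):
    """Return the device numbers whose bit is set (1 = suspended) in the trap 202 mask."""
    bits = bin(int(hex_mask, 16))[2:].zfill(len(hex_mask) * 4)   # 64 hex digits -> 256 bits
    return [device for device, bit in enumerate(bits) if bit == "1"]

# Mask from Example 16-6: the two leading bits are set, so devices
# 0x00 and 0x01 of the reporting LSS are suspended.
mask = "C000000000000000000000000000000000000000000000000000000000000000"
print(suspended_devices(mask))   # [0, 1]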

Trap 210, as shown in Example 16-7, is sent when a Consistency Group in a Global Mirror
environment was successfully formed.

Example 16-7 Trap210: Global Mirror initial Consistency Group successfully formed
2005/11/14 15:30:55 CET
Asynchronous PPRC Initial Consistency Group Successfully Formed
UNIT: Mnf Type-Mod SerialNm
IBM 2107-922 75-20781
Session ID: 4002

Trap 211, as shown in Example 16-8, is sent if the Global Mirror setup gets into a severe error
state, where no attempts are made to form a Consistency Group.

Example 16-8 Trap 211: Global Mirror Session is in a fatal state


Asynchronous PPRC Session is in a Fatal State
UNIT: Mnf Type-Mod SerialNm
IBM 2107-922 75-20781
Session ID: 4002

Trap 212, shown in Example 16-9, is sent when a Consistency Group cannot be created in a
Global Mirror relationship. Some of the reasons might be:
򐂰 Volumes have been taken out of a copy session.
򐂰 The Remote Copy link bandwidth might not be sufficient.
򐂰 The FC link between the primary and secondary system is not available.

Example 16-9 Trap 212: Global Mirror Consistency Group failure - Retry will be attempted
Asynchronous PPRC Consistency Group Failure - Retry will be attempted
UNIT: Mnf Type-Mod SerialNm
IBM 2107-922 75-20781
Session ID: 4002

Trap 213, shown in Example 16-10, is sent when a Consistency Group in a Global Mirror
environment can be formed after a previous Consistency Group formation failure.

Example 16-10 Trap 213: Global Mirror Consistency Group successful recovery
Asynchronous PPRC Consistency Group Successful Recovery
UNIT: Mnf Type-Mod SerialNm
IBM 2107-9A2 75-ABTV1
Session ID: 4002



Trap 214, shown in Example 16-11, is sent if a Global Mirror Session is terminated using the
DS CLI command rmgmir or the corresponding GUI function.

Example 16-11 Trap 214: Global Mirror Master terminated


2005/11/14 15:30:14 CET
Asynchronous PPRC Master Terminated
UNIT: Mnf Type-Mod SerialNm
IBM 2107-922 75-20781
Session ID: 4002

Trap 215, shown in Example 16-12, is sent if, in the Global Mirror Environment, the master
detects a failure to complete the FlashCopy commit. The trap is sent after a number of commit
retries have failed.

Example 16-12 Trap 215: Global Mirror FlashCopy at Remote Site unsuccessful
Asynchronous PPRC FlashCopy at Remote Site Unsuccessful
A UNIT: Mnf Type-Mod SerialNm
IBM 2107-9A2 75-ABTV1
Session ID: 4002

Trap 216, shown in Example 16-13, is sent if a Global Mirror Master cannot terminate the
Global Copy relationship at one of its subordinates. This might occur if the master is
terminated with rmgmir but the master cannot terminate the copy relationship on the
subordinate. You might need to run rmgmir against the subordinate to prevent any
interference with other Global Mirror sessions.

Example 16-13 Trap 216: Global Mirror subordinate termination unsuccessful


Asynchronous PPRC Slave Termination Unsuccessful
UNIT: Mnf Type-Mod SerialNm
Master: IBM 2107-922 75-20781
Slave: IBM 2107-921 75-03641
Session ID: 4002

Trap 217, shown in Example 16-14, is sent if a Global Mirror environment was suspended by
the DS CLI command pausegmir or the corresponding GUI function.

Example 16-14 Trap 217: Global Mirror paused


Asynchronous PPRC Paused
UNIT: Mnf Type-Mod SerialNm
IBM 2107-9A2 75-ABTV1
Session ID: 4002

Trap 218, shown in Example 16-15, is sent if a Global Mirror has exceeded the allowed
threshold for failed consistency group formation attempts.

Example 16-15 Trap 218: Global Mirror number of consistency group failures exceed threshold
Global Mirror number of consistency group failures exceed threshold
UNIT: Mnf Type-Mod SerialNm
IBM 2107-9A2 75-ABTV1
Session ID: 4002



Trap 219, shown in Example 16-16, is sent if a Global Mirror has successfully formed a
consistency group after one or more formation attempts had previously failed.

Example 16-16 Trap 219: Global Mirror first successful consistency group after prior failures
Global Mirror first successful consistency group after prior failures
UNIT: Mnf Type-Mod SerialNm
IBM 2107-9A2 75-ABTV1
Session ID: 4002

Trap 220, shown in Example 16-17, is sent if a Global Mirror has exceeded the allowed
threshold of failed FlashCopy commit attempts.

Example 16-17 Trap 220: Global Mirror number of FlashCopy commit failures exceed threshold
Global Mirror number of FlashCopy commit failures exceed threshold
UNIT: Mnf Type-Mod SerialNm
IBM 2107-9A2 75-ABTV1
Session ID: 4002

Trap 221, shown in Example 16-18, is sent when the repository has reached the user-defined
warning watermark or when physical space is completely exhausted.

Example 16-18 Trap 221: Space Efficient repository or overprovisioned volume has reached a warning
watermark
Space Efficient Repository or Over-provisioned Volume has reached a warning
watermark
UNIT: Mnf Type-Mod SerialNm
IBM 2107-9A2 75-ABTV1
Session ID: 4002

Table 16-3 shows the Copy Services suspension reason codes.

Table 16-3 Copy Services suspension reason codes


Suspension reason code (SRC)   Description
03    The host system sent a command to the primary volume of a Remote Mirror
      and Copy volume pair to suspend copy operations. The host system might
      have specified either an immediate suspension or a suspension after the
      copy completed and the volume pair reached a full duplex state.
04    The host system sent a command to suspend the copy operations on the
      secondary volume. During the suspension, the primary volume of the volume
      pair can still accept updates but updates are not copied to the secondary
      volume. The out-of-sync tracks that are created between the volume pair
      are recorded in the change recording feature of the primary volume.
05    Copy operations between the Remote Mirror and Copy volume pair were
      suspended by a primary storage unit secondary device status command. This
      system resource code can only be returned by the secondary volume.
06    Copy operations between the Remote Mirror and Copy volume pair were
      suspended because of internal conditions in the storage unit. This system
      resource code can be returned by the control unit of either the primary
      volume or the secondary volume.
07    Copy operations between the Remote Mirror and Copy volume pair were
      suspended when the secondary storage unit notified the primary storage
      unit of a state change transition to simplex state. The specified volume
      pair between the storage units is no longer in a copy relationship.
08    Copy operations were suspended because the secondary volume became
      suspended as a result of internal conditions or errors. This system
      resource code can only be returned by the primary storage unit.
09    The Remote Mirror and Copy volume pair was suspended when the primary or
      secondary storage unit was rebooted or when the power was restored.
      The paths to the secondary storage unit might not be disabled if the
      primary storage unit was turned off. If the secondary storage unit was
      turned off, the paths between the storage units are restored
      automatically, if possible. After the paths have been restored, issue the
      mkpprc command to resynchronize the specified volume pairs. Depending on
      the state of the volume pairs, you might have to issue the rmpprc command
      to delete the volume pairs and reissue a mkpprc command to reestablish
      the volume pairs.
0A    The Remote Mirror and Copy pair was suspended because the host issued a
      command to freeze the Remote Mirror and Copy group. This system resource
      code can only be returned if a primary volume was queried.

16.3 SNMP configuration


The SNMP for the DS8000 is designed to send traps as notifications. The DS8000 does not
have an SNMP agent installed that can respond to SNMP polling. Also, the SNMP community
name for Copy Service-related traps is fixed and set to public.



SNMP preparation on the HMC
During the planning for the installation (see 9.3.4, “Monitoring with the HMC” on page 188),
the IP addresses of the management system are provided to the IBM service personnel. This
information must be applied by the IBM service personnel during the installation. Also, the
IBM service personnel can configure the HMC to either send a notification for every
serviceable event, or send a notification for only those events that Call Home to IBM.

The network management server that is configured on the HMC receives all the generic trap
6 specific trap 3 messages, which are sent in parallel with any events that Call Home to IBM.

SNMP preparation with the DS CLI


Perform the configuration for receiving the Copy Services-related traps using the DS CLI.
Example 16-19 shows how SNMP is enabled by using the chsp command.

Example 16-19 Configuring the SNMP using dscli


dscli> chsp -snmp on -snmpaddr 10.10.10.11,10.10.10.12
Date/Time: November 16, 2005 10:14:50 AM CET IBM DSCLI Version: 5.1.0.204
CMUC00040I chsp: Storage complex IbmStoragePlex_2 successfully modified.

dscli> showsp
Date/Time: November 16, 2005 10:15:04 AM CET IBM DSCLI Version: 5.1.0.204
Name IbmStoragePlex_2
desc ATS #1
acct -
SNMP Enabled
SNMPadd 10.10.10.11,10.10.10.12
emailnotify Disabled
emailaddr -
emailrelay Disabled
emailrelayaddr -
emailrelayhost -

SNMP preparation for the management software


For the DS8000, you can use the ibm2100.mib file, which is delivered on the DS CLI CD.
Alternatively, you can download the latest version of the DS CLI CD image from the following
address:
ftp://ftp.software.ibm.com/storage/ds8000/updates/DS8K_Customer_Download_Files/CLI




Chapter 17. Remote support


In this chapter, we discuss the outbound (Call Home and Support Data offload) and inbound
(code download and remote support) communications for the IBM System Storage DS8000.
This chapter covers the following topics:
򐂰 Introduction to remote support
򐂰 IBM policies for remote support
򐂰 Remote connection types
򐂰 DS8000 support tasks
򐂰 Scenarios
򐂰 Audit logging



17.1 Introduction to remote support
Remote support is a complex topic that requires close scrutiny and education for all parties
involved. IBM is committed to servicing the DS8000, whether it be warranty work, planned
code upgrades, or management of a component failure, in a secure and professional manner.
Dispatching service personnel to come to your site and perform maintenance on the system
is still a part of that commitment. But as much as possible, IBM wants to minimize downtime
and maximize efficiency by performing many support tasks remotely.

This plan of providing support remotely must be balanced with the client’s expectations for
security. Maintaining the highest levels of security in a data connection is a primary goal for
IBM. This goal can only be achieved by careful planning with a client and a thorough review of
all the options available.

17.1.1 Suggested reading


The following publications may be of assistance in understanding IBM’s remote support
offerings:
򐂰 IBM System Storage DS8000 Introduction and Planning Guide, GC35-0515, contains
additional information about physical planning. You can download it at the following
address:
http://www.ibm.com/systems/storage/disk/ds8000/index.html
򐂰 A Comprehensive Guide to Virtual Private Networks, Volume I: IBM Firewall, Server and
Client Solutions, SG24-5201, can be downloaded at the following address:
http://www.redbooks.ibm.com/abstracts/sg245201.html?Open
򐂰 The Security Planning website is available at the following address:
http://publib16.boulder.ibm.com/doc_link/en_US/a_doc_lib/aixbman/security/ipsec
_planning.htm
򐂰 VPNs Illustrated: Tunnels, VPNs, and IPSec, by Jon C. Snader
򐂰 VPN Implementation, S1002693, which can be downloaded at the following address:
http://www.ibm.com/support/docview.wss?&rs=1114&uid=ssg1S1002693

17.1.2 Organization of this chapter


A list of the relevant terminology for remote support is first presented. The remainder of this
chapter is organized as follows:
򐂰 Connections
We review the different types of connections that can be made from the HMC to the world
outside of the DS8000.
򐂰 Tasks
We review the various support tasks that need to be run on those connections.
򐂰 Scenarios
We illustrate a scenario about how each task is performed over the different types of
remote connections.



17.1.3 Terminology and definitions
Listed here are brief explanations of some of the terms to be used when discussing remote
support. See “Abbreviations and acronyms” on page 399 for a full list of terms and acronyms
used in this book. Having an understanding of these terms will contribute to your discussions
on remote support and security concerns. A generic definition will be presented here and
then more specific information about how IBM implements the idea is given further on in this
chapter.

IP network
There are many protocols running on Local Area Networks (LANs) around the world. Most
companies use the Transmission Control Protocol/Internet Protocol (TCP/IP) standard for their
connectivity between workstations and servers. IP is also the networking protocol of the
global Internet. Web browsing and e-mail are two of the most common applications that run
on top of an IP network. IP is the protocol used by the DS8000 HMC to communicate with
external systems, such as the SSPC or DS CLI workstations. There are two varieties of IP;
refer to 8.3.2, “System Storage Productivity Center and network access” on page 166 for a
discussion about the IPv4 and IPv6 networks.

SSH
Secure Shell is a protocol that establishes a secure communications channel between two
computer systems. The term SSH is also used to describe a secure ASCII terminal session
between two computers. SSH can be enabled on a system when regular Telnet and FTP are
disabled, making it possible to only communicate with the computer in a secure manner.

FTP
File Transfer Protocol is a method of moving binary and text files from one computer system
to another over an IP connection. It is not inherently secure as it has no provisions for
encryption and only simple user and password authentication. FTP is considered appropriate
for data that is already public, or if the entirety of the connection is within the physical
boundaries of a private network.

SFTP
SSH File Transfer Protocol is unrelated to FTP. It is another file transfer method that is
implemented inside a SSH connection. SFTP is generally considered to be secure enough for
mission critical data and for moving sensitive data across the global Internet. FTP ports
(usually ports 20/21) do not have to be open through a firewall for SFTP to work.

SSL
Secure Sockets Layer refers to methods of securing otherwise unsecure protocols such as
HTTP (websites), FTP (files), or SMTP (e-mail). Carrying HTTP over SSL is often referred to
as HTTPS. An SSL connection over the global Internet is considered reasonably secure.

VPN
A Virtual Private Network is a private “tunnel” through a public network. Most commonly, it
refers to using specialized software and hardware to create a secure connection over the
Internet. The two systems, although physically separate, behave as though they are on the
same private network. A VPN allows a remote worker or an entire remote office to remain part
of a company’s internal network. VPNs provide security by encrypting traffic, authenticating
sessions and users, and verifying data integrity.



Business-to-Business VPN
Business-to-business is a term for specialized VPN services for secure connections between
IBM and its clients. This offering is also known as Client Controlled VPN and Site-to-Site
VPN. This offering is in direct response to client concerns about being in control of VPN
sessions with their vendors. It includes the use of a hardware VPN appliance inside the
client’s network, presumably one that can interact with many vendors’ VPN clients.

IPSec
Internet Protocol Security is a suite of protocols used to provide a secure transaction between
two systems that use the TCP/IP network protocol. IPSec focuses on authentication and
encryption, two of the main ingredients of a secure connection. Most VPNs used on the
Internet use IPSec mechanisms to establish the connection.

Firewall
A firewall is a device that controls whether data is allowed to travel onto a network segment.
Firewalls are deployed at the boundaries of networks. They are managed by policies which
declare what traffic can pass based on the sender’s address, the destination address, and the
type of traffic. Firewalls are an essential part of network security and their configuration must
be taken into consideration when planning remote support activities.

Bandwidth
Bandwidth refers to the characteristics of a connection and how they relate to moving data.
Bandwidth is affected by the physical connection, the logical protocols used, physical
distance, and the type of data being moved. In general, higher bandwidth means faster
movement of larger data sets.

17.2 IBM policies for remote support


The following guidelines are at the core of IBM’s remote support strategies for the DS8000:
򐂰 When the DS8000 needs to transmit service data to IBM, no host data of any kind is
included. Only logs and process dumps are gathered for troubleshooting. The I/O from
host adapters and the contents of NVS cache memory are never transmitted.
򐂰 When a VPN session with the DS8000 is needed, the HMC will always initiate such
connections and only to predefined IBM servers/ports. There is never any active process
that is “listening” for incoming sessions on the HMC.
򐂰 IBM maintains multiple-level internal authorizations for any privileged access to the
DS8000 components. Only approved IBM service personnel can gain access to the tools
that provide the one-time security codes for HMC command-line access.
򐂰 While the HMC is based on a Linux operating system, IBM has disabled or removed all
unnecessary services, processes, and IDs. This includes standard Internet services such
as telnet, ftp, r commands, and rcp programs.



17.3 Remote connection types
The DS8000 HMC has a connection point for the client’s network by a standard Ethernet
(10/100/1000 Mb) cable. The HMC also has a connection point for a phone line by the modem
port. These two physical connections offer four possibilities for sending and receiving data
between the DS8000 and IBM. The connection types are:
򐂰 Asynchronous modem connection
򐂰 IP network connection
򐂰 IP network connection with VPN
򐂰 IP network connection with Business-to-Business VPN

In the most secure environments, both of these physical connections (Ethernet and modem)
remain unplugged. The DS8000 serves up storage for its connected hosts, but has no other
communication with the outside world. This means that all configuration tasks have to be
done while standing at the HMC (there is no usage of the SSPC or DS CLI). This level of
security, known as an air gap, also means that there is no way for the DS8000 to alert anyone
that it has encountered a problem and there is no way to correct such a problem other than to
be physically present at the system.

So rather than leaving the modem and Ethernet disconnected, clients will provide these
connections and then apply policies on when they are to be used and what type of data they
may carry. Those policies are enforced by the settings on the HMC and the configuration of
client network devices, such as routers and firewalls. The next four sections discuss the
capabilities of each type of connection.

17.3.1 Modem
A modem creates a low-speed asynchronous connection using a telephone line plugged into
the HMC modem port. This type of connection favors transferring small amounts of data. It is
relatively secure because the data is not traveling across the Internet. However, this type of
connection is not terribly useful due to bandwidth limitations. Average connection speed in the
US mainland is 28-36 Kbps, and can be less in other parts of the world.

DS8000 HMC modems can be configured to call IBM and send small status messages.
Authorized support personnel can call the HMC and get privileged access to the command
line of the operating system. Typical PEPackage transmission over a modem line could take
15 to 20 hours depending on the quality of the connection. Code downloads over a modem
line are not possible.

The client has control over whether or not the modem will answer an incoming call. These
options are changed from the WebUI on the HMC by selecting Service Management →
Manage Inbound Connectivity, as shown in Figure 17-1.



Figure 17-1 Service Management in WebUI

The HMC provides several settings to govern the usage of the modem port:
򐂰 Unattended Session
This check box allows the HMC to answer modem calls without operator intervention. If
this is not checked, then someone must go to the HMC and allow for the next expected
call. IBM Support must contact the client every time they need to dial in to the HMC.
򐂰 Duration: Continuous
This option indicates that the HMC can answer all calls at all times.
򐂰 Duration: Automatic
This option indicates that the HMC will answer all calls for n days following the creation of
any new Serviceable Event (problem).
򐂰 Duration: Temporary
This option sets a starting and ending date, during which the HMC will answer all calls.

These options are shown in Figure 17-2. See Figure 17-3 on page 374 for an illustration of a
modem connection.

Figure 17-2 Modem settings



17.3.2 IP network
Network connections are considered high speed in comparison to a modem. Enough data
can flow through a network connection to make it possible to run a graphical user interface
(GUI). Managing a DS8000 from an SSPC would not be possible over a modem line; it
requires the bandwidth of a network connection.

HMCs connected to a client IP network, and eventually to the Internet, can send status
updates and offloaded problem data to IBM using SSL sessions. They can also use FTP to
retrieve new code bundles from the IBM code repository. It typically takes less than an hour to
move the information.

Though favorable for speed and bandwidth, network connections introduce security concerns.
Care must be taken to:
򐂰 Verify the authenticity of data, that is, is it really from the sender it claims to be?
򐂰 Verify the integrity of data, that is, has it been altered during transmission?
򐂰 Verify the security of data, that is, can it be captured and decoded by unwanted systems?

The Secure Sockets Layer (SSL) protocol is one answer to these questions. It provides
transport layer security with authenticity, integrity, and confidentiality, for a secure connection
between the client network and IBM. Some of the features that are provided by SSL are:
򐂰 Client and server authentication to ensure that the appropriate machines are exchanging
data
򐂰 Data signing to prevent unauthorized modification of data while in transit
򐂰 Data encryption to prevent the exposure of sensitive information while data is in transit

See Figure 17-4 on page 375 for an illustration of a basic network connection.

17.3.3 IP network with traditional VPN


Adding a VPN “tunnel” to an IP network greatly increases the security of the connection
between the two endpoints. Data can be verified for authenticity and integrity. Data can be
encrypted so that even if it is captured enroute, it cannot be “replayed” or deciphered.

Having the safety of running within a VPN, IBM can use its service interface (WebUI) to:
򐂰 Check the status of components and services on the DS8000 in real time
򐂰 Queue up diagnostic data offloads
򐂰 Start, monitor, pause, and restart repair service actions

Performing the following steps results in the HMC creating a VPN tunnel back to the IBM
network, which service personnel can then use. There is no VPN service that sits idle, waiting
for a connection to be made by IBM. Only the HMC is allowed to initiate the VPN tunnel, and
it can only be made to predefined IBM addresses. The steps to create a VPN tunnel from the
DS8000 HMC to IBM are listed here:
1. IBM support calls the HMC using the modem. After the first level of authentications, the
HMC is asked to launch a VPN session.
2. The HMC hangs up the modem call and initiates a VPN connection back to a predefined
address or port within IBM Support.
3. IBM Support verifies that they can see and use the VPN connection from an IBM internal
IP address.
4. IBM Support launches the WebUI or other high-bandwidth tools to work on the DS8000.



See Figure 17-5 on page 376 for an illustration of a traditional VPN connection.

17.3.4 IP network with Business-to-Business VPN


The Business-to-Business VPN option does not add any new functionality; IBM Support can
perform all of the tasks as with the traditional HMC-based VPN. What a Business-to-Business
VPN does provide is a greater measure of control over the VPN sessions by the client.
Instead of a VPN tunnel being created between the HMC and the IBM network, a tunnel is
created from the client’s VPN appliance to the IBM network. This option has also been
referred to as client controlled VPN.

Clients who work with many vendors that have their own remote support systems often own
and manage a VPN appliance, a server that sits on the edge of their network and creates
tunnels with outside entities. This is true for many companies that have remote workers,
outside sales forces, or small branch offices. Because the device is already configured to
meet the client’s security requirements, they only need to add appropriate policies for IBM
support. Most commercially-available VPN servers are interoperable with the IPSec-based
VPN that IBM needs to establish. Using a Business-to-Business VPN layout leverages the
investment that a client has already made in establishing secure tunnels into their network.

The VPN tunnel that gets created is valid for IBM Remote Support use only and has to be
configured both on the IBM and client sides. This design provides several advantages for the
client:
򐂰 Allows the client to use Network Address Translation (NAT) so that the HMC is given a
non-routable IP address behind the company firewall.
򐂰 Allows the client to inspect the TCP/IP packets that are sent over this VPN.
򐂰 Allows the client to disable the VPN on their device for “lockdown” situations.

Note that the Business-to-Business VPN only provides the tunnel that service personnel can
use to actively work with the HMC from within IBM. To offload data or call home, the HMC still
needs to have one of the following:
򐂰 Modem access
򐂰 Non-VPN network access (SSL connection)
򐂰 Traditional VPN access

See Figure 17-6 on page 377 for an illustration of a Business-to-Business VPN connection.

17.4 DS8000 support tasks


DS8000 support tasks are tasks that require the HMC to contact the outside world. Some
tasks can be performed using either the modem or the network connection, and some can
only be done over a network. The combination of tasks and connection types is illustrated in
17.5, “Scenarios” on page 373. The support tasks that require the DS8000 to connect to
outside resources are:
򐂰 Call Home and heartbeat
򐂰 Data offload
򐂰 Code download
򐂰 Remote support



17.4.1 Call Home and heartbeat (outbound)
Here we discuss the Call Home and heartbeat capabilities.

Call Home
Call Home is the capability of the HMC to contact IBM Service to report a service event. This
is referred to as Call Home for service. The HMC provides machine reported product data
(MRPD) information to IBM by way of the Call Home facility. The MRPD information includes
installed hardware, configurations, and features. The Call Home also includes information
about the nature of a problem so that an active investigation can be launched. Call Home is a
one-way communication, with data moving from the DS8000 HMC to the IBM data store.

Heartbeat
The DS8000 also uses the Call Home facility to send proactive heartbeat information to IBM.
A heartbeat is a small message with some basic product information so that IBM knows the
unit is operational. By sending heartbeats, both IBM and the client ensure that the HMC is
always able to initiate a full Call Home to IBM in the case of an error. If the heartbeat
information does not reach IBM, a service call to the client will be made to investigate the
status of the DS8000. Heartbeats represent a one-way communication, with data moving
from the DS8000 HMC to the IBM data store.

The Call Home facility can be configured to:


򐂰 Use the HMC modem
򐂰 Use the Internet via an SSL connection
򐂰 Use the Internet via a VPN tunnel from the HMC to IBM

Call Home information and heartbeat information are stored in the IBM internal data store so
the support representatives have access to the records.

17.4.2 Data offload (outbound)


For many DS8000 problem events, such as a hardware component failure, a large amount of
diagnostic data is generated. This data may include text and binary log files, firmware dumps,
memory dumps, inventory lists, and timelines. These logs are grouped into collections by the
component that generated them or the software service that owns them. The entire bundle is
collected together in what is called a PEPackage. A DS8000 PEPackage can be quite large,
often exceeding 100 MB. In some cases, more than one may be needed to properly diagnose
a problem. The HMC is a focal point, gathering and storing all the data packages. So the
HMC must be accessible if a service action requires the information. The data packages must
be offloaded from the HMC and sent in to IBM for analysis. The offload can be done in several ways:
򐂰 Modem offload
򐂰 Standard FTP offload
򐂰 SSL offload

Modem offload
The HMC can be configured to support automatic data offload using the internal modem and
a regular phone line. Offloading a PEPackage over a modem connection is extremely slow, in
many cases taking 15 to 20 hours. It also ties up the modem for this time so that IBM support
cannot dial in to the HMC to perform command-line tasks. If this is the only connectivity option
available, be aware that the overall process of remote support will be delayed while data is in
transit.



Standard FTP offload
The HMC can be configured to support automatic data offload using File Transfer Protocol
(FTP) over a network connection. This traffic can be examined at the client’s firewall before
moving across the Internet. FTP offload allows IBM Service personnel to dial in to the HMC
using the modem line while support data is being transmitted to IBM over the network.

Note: FTP offload of data is supported as an outbound service only. There is no active
FTP server running on the HMC that can receive connection requests.

When a direct FTP session across the Internet is not available or desirable, a client can
configure the FTP offload to use a client-provided FTP proxy server. The client then becomes
responsible for configuring the proxy to forward the data to IBM.

The client is required to manage its firewalls so that FTP traffic from the HMC (or from an FTP
proxy) can pass onto the Internet.

SSL offload
For environments that do not permit FTP traffic out to the Internet, the DS8000 also supports
offload of data using SSL security. In this configuration, the HMC uses the client-provided
network connection to connect to the IBM data store, the same as in a standard FTP offload.
But with SSL, all the data is encrypted so that it is rendered unusable if intercepted.

Client firewall settings between the HMC and the Internet for SSL setup require four IP
addresses open on port 443 based on geography as detailed here:
򐂰 North and South America
129.42.160.48 IBM Authentication Primary
207.25.252.200 IBM Authentication Secondary
129.42.160.49 IBM Data Primary
207.25.252.204 IBM Data Secondary
򐂰 All other regions
129.42.160.48 IBM Authentication Primary
207.25.252.200 IBM Authentication Secondary
129.42.160.50 IBM Data Primary
207.25.252.205 IBM Data Secondary

17.4.3 Code download (inbound)


DS8800 microcode updates are published as bundles that can be downloaded from IBM. As
explained in 15.6, “Loading the code bundle” on page 347, there are three possibilities for
acquiring code on the HMC:
򐂰 Load the new code bundle using CDs.
򐂰 Download the new code bundle directly from IBM using FTP.
򐂰 Download the new code bundle directly from IBM using SFTP.

Loading code bundles from CDs is the only option for DS8000 installations that have no
outside connectivity at all. If the HMC is connected to the client network then IBM support will
download the bundles from IBM using either FTP or SFTP.

FTP
If allowed, the support representative will open an FTP session from the HMC to the IBM
code repository and download the code bundle(s) to the HMC. The client firewall will need to
be configured to allow the FTP traffic to pass.



SFTP
If FTP is not allowed, an SFTP session can be used instead. SFTP is a more secure file
transfer protocol running within an SSH session, as defined in 17.1.3, “Terminology and
definitions” on page 365. If this option is used, the client firewall will need to be configured to
allow the SSH traffic to pass.

After the code bundle is acquired from IBM, the FTP or SFTP session will be closed and the
code load can take place without needing to communicate outside of the DS8000.

17.4.4 Remote support (inbound and two-way)


Remote support describes the most interactive level of assistance from IBM. After a problem
comes to the attention of the IBM Support Center and it is determined that the issue is more
complex than a straightforward parts replacement, the problem will likely be escalated to
higher levels of responsibility within IBM Support. This could happen at the same time that a
support representative is being dispatched to the client site.

IBM may need to trigger a data offload, perhaps more than one, and at the same time be able
to interact with the DS8000 to dig deeper into the problem and develop an action plan to
restore the system to normal operation. This type of interaction with the HMC is what requires
the most bandwidth.

If the only available connectivity is by modem, then IBM Support will have to wait until any
data offload is complete and then attempt the diagnostics and repair from a command-line
environment on the HMC. This process is slower and more limited in scope than if a network
connection can be used.

If a VPN is available, either from the HMC directly to IBM or by using VPN devices
(Business-to-Business VPN option), then enough bandwidth is available for data offload and
interactive troubleshooting to be done at the same time. IBM Support will be able to use
graphical tools (WebUI and others) to diagnose and repair the problem.

17.5 Scenarios
Now that the four connection options have been reviewed (see 17.3, “Remote connection
types” on page 367) and the tasks have been reviewed (see 17.4, “DS8000 support tasks” on
page 370), we can examine how each task is performed given the type of access available to
the DS8000.

17.5.1 No connections
If neither the modem nor the Ethernet connection is physically connected and configured, then the
tasks are performed as follows:
򐂰 Call Home and heartbeat: The HMC will not send heartbeats to IBM. The HMC will not call
home if a problem is detected. IBM Support will need to be notified at the time of
installation to add an exception for this DS8000 in the heartbeats database, indicating that
it is not expected to contact IBM.
򐂰 Data offload: If absolutely required and allowed by the client, diagnostic data can be
burned onto a DVD, transported to an IBM facility, and uploaded to the IBM data store.
򐂰 Code download: Code must be loaded onto the HMC using CDs carried in by the Service
Representative.



򐂰 Remote support: IBM cannot provide any remote support for this DS8000. All diagnostic
and repair tasks must take place with an operator physically located at the console.

17.5.2 Modem only


If the modem is the only connectivity option, then the tasks are performed as follows:
򐂰 Call Home and heartbeat: The HMC will use the modem to call IBM and send the Call
Home data and the heartbeat data. These calls are of short duration.
򐂰 Data offload: Once data offload is triggered, the HMC will use the modem to call IBM and
send the data package. Depending on the package size and line quality, this call could
take up to 20 hours to complete.
򐂰 Code download: Code must be loaded onto the HMC using CDs carried in by the Service
Representative. There is no method of download if only a modem connection is available.
򐂰 Remote support: If the modem line is available (not being used to offload data or send Call
Home data), IBM Support can dial in to the HMC and execute commands in a
command-line environment. IBM Support cannot utilize a GUI or any high-bandwidth
tools.

See Figure 17-3 for an illustration of a modem-only connection.

Figure 17-3 Remote support with modem only



17.5.3 Modem and network with no VPN
If the modem and network access, without VPN, are provided, then the tasks are performed
as follows:
򐂰 Call Home and heartbeat: The HMC will use the network connection to send Call Home
data and heartbeat data to IBM across the Internet.
򐂰 Data offload: The HMC will use the network connection to send offloaded data to IBM
across the Internet. Standard FTP or SSL sockets may be used.
򐂰 Code download: Code can be downloaded from IBM using the network connection. The
download can be done using FTP or SFTP.
򐂰 Remote support: Even though there is a network connection, it is not configured to allow
VPN traffic, so remote support must be done using the modem. If the modem line is not
busy, IBM Support can dial in to the HMC and execute commands in a command-line
environment. IBM Support cannot utilize a GUI or any high-bandwidth tools.

See Figure 17-4 for an illustration of a modem and network connection without using VPN
tunnels.

Figure 17-4 Remote support with modem and network (no VPN)

17.5.4 Modem and traditional VPN


If the modem and a VPN-enabled network connection is provided, then the tasks are
performed as follows:
򐂰 Call Home and heartbeat: The HMC will use the network connection to send Call Home
data and heartbeat data to IBM across the Internet, outside of a VPN tunnel.
򐂰 Data offload: The HMC will use the network connection to send offloaded data to IBM
across the Internet, outside of a VPN tunnel. Standard FTP or SSL sockets may be used.



򐂰 Code download: Code can be downloaded from IBM using the network connection. The
download can be done using FTP or SFTP outside of a VPN tunnel.
򐂰 Remote support: Upon request, the HMC establishes a VPN tunnel across the Internet to
IBM. IBM Support can use a GUI and high-bandwidth tools to interact with the HMC at the
same time that data is offloading.

See Figure 17-5 for an illustration of a modem and network connection plus traditional VPN.

Figure 17-5 Remote support with modem and traditional VPN

17.5.5 Modem and Business-to-Business VPN


If a modem plus a network connection plus a Business-to-Business VPN appliance are
installed, then the tasks are performed as follows:
򐂰 Call Home and heartbeat: The HMC will use the network connection to send Call Home
data and heartbeat data to IBM across the Internet, outside of a VPN tunnel.
򐂰 Data offload: The HMC will use the network connection to send offloaded data to IBM
across the Internet, outside of a VPN tunnel. Standard FTP or SSL sockets may be used.
򐂰 Code download: Code can be downloaded from IBM using the network connection. The
download can be done using FTP or SFTP outside of a VPN tunnel.
򐂰 Remote support: The VPN tunnel is established between the client’s VPN appliance and
the IBM VPN appliance. IBM Support can use a GUI and high-bandwidth tools to interact
with the HMC at the same time as data offload. The HMC does not have to be involved in
establishing the VPN session.



See Figure 17-6 for an illustration of a modem and network connection plus
Business-to-Business VPN deployment.

Figure 17-6 Remote support with modem and Business-to-Business VPN

17.6 Audit logging


The DS8000 offers an audit logging security function designed to track and log changes
made by administrators using either Storage Manager DS GUI or DS CLI. This function also
documents remote support access activity to the DS8000. The audit logs can be downloaded using the DS CLI or the Storage Manager GUI.

Example 17-1 illustrates the DS CLI command offloadauditlog, which provides clients with
the ability to offload the audit logs to the DS CLI workstation in a directory of their choice.

Example 17-1 DS CLI command to download audit logs


dscli> offloadauditlog -logaddr smc1 c:\auditlogs\7520781_2009oct11.txt
Date/Time: October 11, 2009 7:02:25 PM MST IBM DSCLI Version: 5.4.30.253
CMUC00243I offloadauditlog: Audit log was successfully offloaded from smc1 to c:
\auditlogs\7520781_2009oct11.txt.



The downloaded audit log is a text file that provides information about when a remote access
session started and ended and what remote authority level was applied. A portion of the
downloaded file is shown in Example 17-2.

Example 17-2 Audit log entries related to a remote support event via modem
U,2009/10/05
18:20:49:000,,1,IBM.2107-7520780,N,8000,Phone_started,Phone_connection_started
U,2009/10/05 18:21:13:000,,1,IBM.2107-7520780,N,8036,Authority_to_root,Challenge
Key = 'ZyM1NGMs'; Authority_upgrade_to_root,,,
U,2009/10/05 18:26:02:000,,1,IBM.2107-7520780,N,8002,Phone_ended,Phone_connection_
ended

The Challenge Key shown is not a password on the HMC. It is a token shown to the IBM
support representative who is dialing in to the DS8000. The representative must use the
Challenge Key in an IBM internal tool to generate a Response Key that is given to the HMC.
The Response Key acts as a one-time authorization to the features of the HMC. The
Challenge and Response Keys change every time a remote connection is made.

The Challenge-Response process must be repeated if the representative needs to escalate privileges to access the HMC command-line environment. There is no direct user login and no root login through the modem on a DS8000 HMC.

For a detailed description about how auditing is used to record “who did what and when” in
the audited system, as well as a guide to log management, visit the following address:

http://csrc.nist.gov/publications/nistpubs/800-92/SP800-92.pdf




Chapter 18. Capacity upgrades and CoD


This chapter discusses aspects of implementing capacity upgrades and Capacity on Demand
(CoD) with the IBM System Storage DS8800. This chapter covers the following topics:
򐂰 Installing capacity upgrades
򐂰 Using Capacity on Demand (CoD)



18.1 Installing capacity upgrades
Storage capacity can be ordered and added to the DS8800 through disk drive sets. A disk
drive set includes 16 disk drive modules (DDM) of the same capacity and spindle speed
(RPM). DS8800 disk drive modules are available in the following varieties:
򐂰 Small Form Factor (SFF) Serial Attached SCSI (SAS) DDMs without encryption
– 146 GB, 15 K RPM
– 450 GB, 10 K RPM
– 600 GB, 10 K RPM
򐂰 Small Form Factor (SFF) Serial Attached SCSI (SAS) DDMs with encryption
– 450 GB, 10 K RPM
– 600 GB, 10 K RPM
򐂰 Small Form Factor (SFF) Solid State DDMs (SSD)
– 300 GB

Note: Full Disk Encryption (FDE) drives can only be added to a DS8800 that was initially
ordered with FDE drives installed. See IBM System Storage DS8700 Disk Encryption
Implementation and Usage Guidelines, REDP-4500, for more information about full disk
encryption restrictions.

The disk drives are installed in Storage Enclosures (SEs). A storage enclosure interconnects
the DDMs to the controller cards that connect to the device adapters. Each storage enclosure
contains a redundant pair of controller cards. Each of the controller cards also has redundant
trunking. Figure 18-1 illustrates a Storage Enclosure.

Figure 18-1 DS8800 Storage Enclosure



Storage enclosures are always installed in pairs, with one enclosure in the upper part of the
unit and one enclosure in the lower part. A storage enclosure pair can be populated with one,
two, or three disk drive sets (16, 32, or 48 DDMs). All DDMs in a disk enclosure pair must be
of the same type (capacity and speed). Most commonly, each storage enclosure is shipped
full with 24 DDMs, meaning that each pair has 48 DDMs. If a disk enclosure pair is populated
with only 16 or 32 DDMs, disk drive filler modules called baffles are installed in the vacant
DDM slots. This is to maintain the correct cooling airflow throughout the enclosure.

Each storage enclosure attaches to two device adapters (DAs). The DAs are the RAID
adapter cards that connect the CECs to the DDMs. The DS8800 DA cards are always
installed as a redundant pair, so they are referred to as DA pairs.

Physical installation and testing of the device adapters, storage enclosure pairs, and DDMs
are performed by your IBM service representative. After the additional capacity is added
successfully, the new storage appears as additional unconfigured array sites.

You might need to obtain new license keys and apply them to the storage image before you
start configuring the new capacity; see Chapter 10, “IBM System Storage DS8800 features
and license keys” on page 203 for more information. You cannot create ranks using the new
capacity if this causes your machine to exceed its license key limits. Be aware that applying
increased feature activation codes is a concurrent action, but a license reduction or
deactivation is often a disruptive action.
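
A quick way to check the current limits before configuring the new capacity is the DS CLI lskey command. The following is a minimal sketch; the storage image ID and the reported values are illustrative only:

dscli> lskey IBM.2107-75ABCD1
Activation Key              Authorization Level (TB) Scope
====================================================================
Operating environment (OEL) 80.3                     All

If the total installed capacity after the upgrade exceeds the authorization level shown, order and apply the increased license keys (applykey command) before creating ranks on the new array sites.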

Note: Special restrictions in terms of placement and intermixing apply when adding Solid
State Drives. Refer to DS8000: Introducing Solid State Drives, REDP-4522.

18.1.1 Installation order of upgrades


Individual machine configurations vary, so it is not possible to give an exact pattern for the
order in which every storage upgrade will be installed. This is because it is possible to order a
machine with multiple underpopulated storage enclosures (SEs) across the device adapter
(DA) pairs. This is done to allow future upgrades to be performed with the fewest physical
changes. Note, however, that all storage upgrades are concurrent, in that adding capacity to a
DS8800 does not require any downtime.

As a general rule, when adding capacity to a DS8800, storage hardware is populated in the
following order:
1. DDMs are added to underpopulated enclosures. Whenever you add 16 DDMs to a
machine, eight DDMs are installed into the upper storage enclosure and eight into the
lower storage enclosure. If you add a complete 48 pack, then 24 are installed in the upper
storage enclosure and 24 are installed in the lower storage enclosure.
2. After the first storage enclosure pair on a DA pair is fully populated with DDMs (48 DDMs
total), the next two storage enclosures to be populated will be connected to a new DA pair.
The DA cards are installed into the I/O enclosures that are located at the bottom of the
racks. They are not located in the storage enclosures.
3. After each DA pair has two fully populated storage enclosure pairs (96 DDMs total),
another storage enclosure pair is added to an existing storage enclosure pair.



18.1.2 Checking how much total capacity is installed
There are four DS CLI commands you can use to check how many DAs, SEs, and DDMs are
installed in your DS8800. They are:
򐂰 lsda
򐂰 lsstgencl
򐂰 lsddm
򐂰 lsarraysite

When the -l parameter is added to these commands, additional information is shown. In the
next section, we show examples of using these commands.

For these examples, the target DS8800 has 2 device adapter pairs (total 4 DAs) and 4
fully-populated storage enclosure pairs (total 8 SEs). This means there are 128 DDMs and 16
array sites because each array site consists of 8 DDMs. In the examples, 10 of the array sites
are in use, and 6 are Unassigned meaning that no array is created on that array site. The
example system also uses full disk encryption-capable DDMs.

In Example 18-1, a listing of the device adapter cards is shown.

Example 18-1 List the device adapters


dscli> lsda -l IBM.2107-1301511
Date/Time: September 21, 2010 3:21:21 PM CEST IBM DSCLI Version: 6.6.x.xxx DS: IBM.2107-1301511
ID State loc FC Server DA pair interfs
========================================================================================================
IBM.1400-1B3-05065/R0-P1-C3 Online U1400.1B3.RJ05065-P1-C3 - 0 2 0x0230,0x0231,0x0232,0x0233
IBM.1400-1B3-05065/R0-P1-C6 Online U1400.1B3.RJ05065-P1-C6 - 0 3 0x0260,0x0261,0x0262,0x0263
IBM.1400-1B4-05066/R0-P1-C3 Online U1400.1B4.RJ05066-P1-C3 - 1 3 0x0330,0x0331,0x0332,0x0333
IBM.1400-1B4-05066/R0-P1-C6 Online U1400.1B4.RJ05066-P1-C6 - 1 2 0x0360,0x0361,0x0362,0x0363

In Example 18-2, a listing of the storage enclosures is shown.

Example 18-2 List the storage enclosures


dscli> lsstgencl IBM.2107-1301511
Date/Time: September 21, 2010 3:25:09 PM CEST IBM DSCLI Version: 6.6.x.xxx DS: IBM.2107-1301511
ID Interfaces interadd stordev cap (GB) RPM
=====================================================================================
IBM.2107-D02-00086/R3-S15 0x0060,0x0130,0x0061,0x0131 0x1 24 146.0 15000
IBM.2107-D02-00255/R2-S07 0x0460,0x0530,0x0461,0x0531 0x0 24 146.0 15000
IBM.2107-D02-00271/R3-S13 0x0460,0x0530,0x0461,0x0531 0x1 24 146.0 15000
IBM.2107-D02-00327/R2-S05 0x0630,0x0760,0x0631,0x0761 0x1 24 146.0 15000
IBM.2107-D02-00363/R2-S06 0x0632,0x0762,0x0633,0x0763 0x1 24 146.0 15000

In Example 18-3, a listing of the storage drives is shown. Because there are 128 DDMs in the
example machine, only a partial list is shown here.

Example 18-3 List the DDMs (abbreviated)


dscli> lsddm IBM.2107-75NR571
Date/Time: September 21, 2010 3:27:58 PM CEST IBM DSCLI Version: 6.6.x.xxx DS: IBM.2107-75NR571
ID DA Pair dkcap (10^9B) dkuse arsite State
===============================================================================
IBM.2107-D02-00769/R1-P1-D1 2 450.0 array member S5 Normal
IBM.2107-D02-00769/R1-P1-D2 2 450.0 array member S5 Normal
IBM.2107-D02-00769/R1-P1-D3 2 450.0 spare required S1 Normal
IBM.2107-D02-00769/R1-P1-D4 2 450.0 array member S2 Normal
IBM.2107-D02-00769/R1-P1-D5 2 450.0 array member S2 Normal

IBM.2107-D02-00769/R1-P1-D6 2 450.0 array member S1 Normal
IBM.2107-D02-00769/R1-P1-D7 2 450.0 array member S6 Normal
IBM.2107-D02-00769/R1-P1-D8 2 450.0 array member S6 Normal
IBM.2107-D02-00769/R1-P1-D9 2 450.0 array member S4 Normal
IBM.2107-D02-00769/R1-P1-D10 2 450.0 array member S3 Normal
IBM.2107-D02-00769/R1-P1-D11 2 450.0 array member S1 Normal
IBM.2107-D02-00769/R1-P1-D12 2 450.0 array member S3 Normal

In Example 18-4, a listing of the array sites is shown.

Example 18-4 List the array sites


dscli> lsarraysite -dev IBM.2107-75NR571
Date/Time: September 21, 2010 3:31:08 PM CEST IBM DSCLI Version: 6.6.x.xxx DS: IBM.2107-75NR571
arsite DA Pair dkcap (10^9B) State Array
===========================================
S1 2 450.0 Assigned A0
S2 2 450.0 Assigned A1
S3 2 450.0 Assigned A2
S4 2 450.0 Assigned A3
S5 2 450.0 Assigned A4
S6 2 450.0 Assigned A5
S7 0 600.0 Assigned A6
S8 0 600.0 Assigned A7
S9 0 600.0 Assigned A8
S10 0 600.0 Assigned A9
S11 0 600.0 Assigned A10
S12 0 600.0 Assigned A11

18.2 Using Capacity on Demand (CoD)


IBM offers Capacity on Demand (CoD) solutions that are designed to meet the changing
storage needs of rapidly growing e-businesses. This section discusses CoD on the DS8800.

There are various rules about CoD and these are explained in IBM System Storage DS8000
Introduction and Planning Guide, GC35-0515. This section explains aspects of implementing
a DS8800 that has CoD disk packs.

18.2.1 What is Capacity on Demand


The Standby CoD offering is designed to provide you with the ability to tap into additional
storage and is particularly attractive if you have rapid or unpredictable growth, or if you simply
want extra storage to be there when you need it.

In many database environments, it is not unusual to have very rapid growth in the amount of
disk space required for your business. This can create a problem if there is an unexpected
and urgent need for disk space and no time to create a purchase order or wait for the disk to
be delivered.

With this offering, up to six Standby CoD disk drive sets (96 disk drives) can be
factory-installed or field-installed into your system. To activate, you logically configure the disk
drives for use. This is a nondisruptive activity that does not require intervention from IBM.
Upon activation of any portion of a Standby CoD disk drive set, you must place an order with
IBM to initiate billing for the activated set. At that time, you can also order replacement CoD
disk drive sets.



This offering allows you to purchase licensed functions based upon your machine’s physical
capacity, excluding unconfigured Standby CoD capacity. This can help improve your cost of
ownership, because your extent of IBM authorization for licensed functions can grow at the
same time you need your disk capacity to grow.

Contact your IBM representative to obtain additional information regarding Standby CoD
offering terms and conditions.

18.2.2 Determining if a DS8800 has CoD


A common question is how to determine if a DS8800 has CoD disks installed. There are two
important indicators that you need to check for:
򐂰 Is the CoD indicator present in the Disk Storage Feature Activation (DSFA) website?
򐂰 What is the Operating Environment License (OEL) limit displayed by the lskey DS CLI
command?

Verifying CoD on the DSFA website


The Disk Storage Feature Activation (DSFA) website provides feature activation codes and
license keys to technically activate functions acquired for your IBM storage products. To check
for the CoD indicator on the DSFA website, you need to perform the following tasks:
1. Get the machine signature using DS CLI. Connect with the DS CLI and execute showsi
-fullid as shown in Example 18-5 on page 384. The signature is a unique value that can
only be accessed from the machine. You will also need to record the Machine Type
displayed as well as the Machine Serial Number (ending with 0).

Example 18-5 Displaying the machine signature


dscli> showsi -fullid IBM.2107-75NR571
Date/Time: September 21, 2010 3:35:13 PM CEST IBM DSCLI Version: 6.6.x.xxx DS:
IBM.2107-75NR571
Name imaginary1
desc -
ID IBM.2107-75NR571
Storage Unit IBM.2107-75NR570
Model 951
WWNN 5005070009FFC5D5
Signature b828-2f64-eb24-4f17 <============ Machine Signature
State Online
ESSNet Enabled
Volume Group IBM.2107-75NR571/V0
os400Serial 5D5
NVS Memory 2.0 GB
Cache Memory 50.6 GB
Processor Memory 61.4 GB
MTS IBM.2421-75NR570 <=======Machine Type (2421) and S/N (75NR570)
numegsupported 1

2. Now log on to the DSFA website at:


http://www.ibm.com/storage/dsfa



Select IBM System Storage DS8000 Series from the DSFA start page. The next screen
requires you to choose the Machine Type and then enter the serial number and signature, as
shown in Figure 18-2 on page 385.

Figure 18-2 DSFA machine specifics

On the View Authorization Details screen, the feature code 0901 Standby CoD indicator is
shown for DS8800 installations with Capacity on Demand. This is illustrated in Figure 18-3 on
page 386. If instead you see 0900 Non-Standby CoD, it means that the CoD feature has not
been ordered for your machine.



Figure 18-3 Verifying CoD using DSFA

Verifying CoD on the DS8800


Normally, new features or feature limits are activated using the DS CLI applykey command.
However, CoD does not have a discrete key. Instead, the CoD feature is installed as part of
the Operating Environment License (OEL) key. The interesting thing is that an OEL key that
activates CoD will change the feature limit from the limit that you have paid for, to the largest
possible number.

In Example 18-6, you can see how the OEL key is changed. The machine in this example is
licensed for 80 TB of OEL, but actually has 82 TB of disk installed because it has 2 TB of CoD
disks. However, if you attempt to create ranks using the final 2 TB of storage, the command
will fail because it exceeds the OEL limit. After a new OEL key with CoD is installed, the OEL
limit will increase to an enormous number (9.9 million TB). This means that rank creation will
succeed for the last 2 TB of storage.



Example 18-6 Applying an OEL key that contains CoD
dscli> lskey IBM.2107-75ABCD1
Date/Time: October 21, 2009 2:47:26 PM MST IBM DSCLI Version: 6.5.0.xxx DS: IBM.2107-75ABCD1
Activation Key Authorization Level (TB) Scope
====================================================================
Operating environment (OEL) 80.3 All

dscli> applykey -key 1234-5678-9ABC-DEF0-1234-5678-9ABC-DEF0 IBM.2107-75ABCD1


Date/Time: October 21, 2009 2:47:26 PM MST IBM DSCLI Version: 6.5.0.xxx DS: IBM.2107-75ABCD1
CMUC00199I applykey: Licensed Machine Code successfully applied to storage image IBM.2107-75ABCD1

dscli> lskey IBM.2107-75ABCD1


Date/Time: October 21, 2009 2:47:26 PM MST IBM DSCLI Version: 6.5.0.sss DS: IBM.2107-75ABCD1
Activation Key Authorization Level (TB) Scope
====================================================================
Operating environment (OEL) 9999999 All

18.2.3 Using the CoD storage


In this section, we review the tasks required to start using CoD storage.

CoD array sites


If CoD storage is installed, it will be a maximum of 96 CoD disk drives. Because 16 drives
make up a drive set, a better use of terminology is to say a machine can have up to 6 drive
sets of CoD disk. Because 8 drives are used to create an array site, this means that a
maximum of 12 array sites of CoD can potentially exist in a machine. If a machine has, for
example, 384 disk drives installed, of which 96 disk drives are CoD, then there are a total of
48 array sites, of which 12 are CoD. From the machine itself, there is no way to tell how many
of the array sites in a machine are CoD array sites as opposed to array sites you can start
using right away. During the machine order process, this must be clearly understood and
documented.

Which array sites are the CoD array sites


Given a sample DS8800 with 48 array sites, of which 8 represent CoD disks, the client should
configure only 40 of the 48 array sites. This assumes that all the disk drives are the same
size. It is possible to order CoD drive sets of different sizes. In this case, you would need to
understand how many of each size have been ordered and ensure that the correct number of
array sites of each size are left unused until they are needed for growth.

How to start using the CoD array sites


Use the standard DS CLI (or DS GUI) commands to configure storage starting with mkarray,
then mkrank, and so on. After the ranks are members of an Extent Pool, then volumes can be
created. See Chapter 13, “Configuration using the DS Storage Manager GUI” on page 251,
and Chapter 14, “Configuration with the DS Command-Line Interface” on page 307 for more
information about this topic.
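
As an illustration of that sequence, the following minimal DS CLI sketch configures one fixed block array site through to a volume. All IDs and values are assumptions for this example only (array site S41, the resulting array A40 and rank R40, an existing extent pool P0, and a 100 GB volume with ID 1100); the IDs on your system will differ, and the extent pool can be created first with mkextpool if it does not already exist:

dscli> mkarray -dev IBM.2107-75ABCD1 -raidtype 5 -arsite S41
dscli> mkrank -dev IBM.2107-75ABCD1 -array A40 -stgtype fb
dscli> chrank -dev IBM.2107-75ABCD1 -extpool P0 R40
dscli> mkfbvol -dev IBM.2107-75ABCD1 -extpool P0 -cap 100 -name cod_vol 1100

For CKD storage, the rank is created with -stgtype ckd and volumes are created with mkckdvol after an LCU has been defined with mklcu.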

What if you accidentally configure a CoD array site


Given the sample DS8800 with 48 array sites, of which 8 represent CoD disks, if you
accidentally configure 41 array sites but did not intend to start using the CoD disks yet, then
use the rmarray command immediately to return that array site to an unassigned state. If
volumes have been created and those volumes are in use, then you have started to use the
CoD arrays and should contact IBM to inform IBM that the CoD storage is now in use.
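
As a minimal sketch of that immediate correction, assuming the accidentally created array received the ID A41 and no rank or volumes were created on it, a single command returns the underlying array site to the Unassigned state (the array ID on your system will differ):

dscli> rmarray -dev IBM.2107-75ABCD1 A41

If a rank was already created on the array, remove the rank first with rmrank before removing the array.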



What you do after the CoD array sites are in use
After you have started to use the CoD array sites (and remember that IBM requires that a
Standby CoD disk drive set must be activated within a twelve-month period from the date of
installation; all such activation is permanent), then contact IBM so that the CoD indicator can
be removed from the machine. You must place an order with IBM to initiate billing for the
activated set.

At that time, you can also order replacement Standby CoD disk drive sets. If new CoD disks
are ordered and installed, then a new OEL key will also be issued and should be applied
immediately. If no more CoD disks are desired, or the DS8800 has reached maximum
capacity, then an OEL key will be issued to reflect that CoD is no longer enabled on the
machine.




Appendix A. Tools and service offerings


This appendix provides information about the tools that are available to help you when
planning, managing, migrating, and analyzing activities with your DS8800. In this appendix,
we also reference the sites where you can find information about the service offerings that are
available from IBM to help you in several of the activities related to the DS8800
implementation.



Capacity Magic
Because of the additional flexibility and configuration options storage subsystems provide, it
becomes a challenge to calculate the raw and net storage capacity of disk subsystems such
as the DS8800. You have to invest considerable time, and you need an in-depth technical
understanding of how spare and parity disks are assigned. You also need to consider the
simultaneous use of disks with different capacities and configurations that deploy RAID 5,
RAID 6, and RAID 10.

Capacity Magic can do the physical (raw) to effective (net) capacity calculations automatically,
taking into consideration all applicable rules and the provided hardware configuration
(number and type of disk drive sets).

Capacity Magic is designed as an easy-to-use tool with a single, main interface. It offers a
graphical interface that allows you to enter the disk drive configuration of a DS8800 and other
IBM subsystems, the number and type of disk drive sets, and the RAID type. With this input,
Capacity Magic calculates the raw and net storage capacities. The tool also has functionality
that lets you display the number of extents that are produced per rank, as shown in
Figure A-1.

Figure A-1 Configuration window

Figure A-1 shows the configuration window that Capacity Magic provides for you to specify
the desired number and type of disk drive sets.



Figure A-2 shows the resulting output report that Capacity Magic produces. This report is also helpful in planning and preparing the configuration of the storage in the DS8800, because it displays extent count information.

Figure A-2 Capacity Magic output report

Note: Capacity Magic is a tool used by IBM and IBM Business Partners to model disk
storage subsystem effective capacity as a function of physical disk capacity to be installed.
Contact your IBM Representative or IBM Business Partner to discuss a Capacity Magic
study.

Disk Magic
Disk Magic is a Windows-based disk subsystem performance modeling tool. It supports disk
subsystems from multiple vendors, but it offers the most detailed support for IBM subsystems.
Currently, Disk Magic supports modeling of advanced-function disk subsystems, such as the
DS8000 series, DS6000, ESS, DS4000, DS5000, N-Series and the SAN Volume Controller.

A critical design objective for Disk Magic is to minimize the amount of input that you must
enter, while offering a rich and meaningful modeling capability. The following list provides
several examples of what Disk Magic can model, but it is by no means complete:
򐂰 Move the current I/O load to a different disk subsystem model.
򐂰 Merge the current I/O load of multiple disk subsystems into a single DS8800.
򐂰 Insert a SAN Volume Controller in an existing disk configuration.
򐂰 Increase the current I/O load.
򐂰 Implement a storage consolidation.
򐂰 Increase the disk subsystem cache size.

򐂰 Change to larger capacity disk drives.
򐂰 Change to higher disk rotational speed.
򐂰 Upgrade from ESCON to FICON host adapters.
򐂰 Upgrade from SCSI to Fibre Channel host adapters.
򐂰 Increase the number of host adapters.
򐂰 Use fewer or more Logical Unit Numbers (LUNs).
򐂰 Activate Metro Mirror.
򐂰 Activate z/OS Global Mirror.
򐂰 Activate Global Mirror.

With the availability of SSDs, Disk Magic supports modeling configurations that include SSD ranks. In a z/OS environment, Disk Magic can provide an estimation of which volumes are good SSD candidates and migrate those volumes to SSD in the model. In an Open Systems environment, Disk Magic can model SSDs on a per-server basis.

Note: Disk Magic is a tool used by IBM and IBM Business Partners to model disk storage
subsystem performance. Contact your IBM Representative or IBM Business Partner to
discuss a Disk Magic study.

HyperPAV analysis
Traditional aliases allow you to simultaneously process multiple I/O operations to the same
logical volume. The question is, how many aliases do you need to assign to the LCUs in your
DS8000?

It is difficult to predict the ratio of aliases to base addresses required to minimize IOSQ time. If
the ratio is too high, this limits the number of physical volumes that can be addressed, due to
the 64 K addressing limit. If the ratio is too small, then you may see high IOSQ times, which
will impact the business service commitments.

HyperPAV can help improve performance by reducing the IOSQ Time and also help in
reducing the number of aliases required in an LCU, which would free up more addresses to
be used as base-addresses.

To estimate how many aliases are needed, a HyperPAV analysis can be performed using
SMF records 70 through 78. The analysis results provide guidance about how many aliases
are required. This analysis can be performed against IBM and non-IBM disk subsystems.

Note: Contact your IBM Representative or IBM Business Partner to discuss a HyperPAV
study.

FLASHDA
FLASHDA is a tool written in SAS that helps you decide which datasets or volumes are the best candidates to migrate from HDD to SSD.

The prerequisites to use this tool are APARs OA25688 and OA25559, which report DISC
Time separately by Read I/O and Write I/O. The tool uses SMF 42 subtype 6 and SMF 74
subtype 5 records and provides a list by dataset, showing the amount of accumulated DISC
Time for the Read I/O operations during the time period selected.

If complete SMF records 70 through 78 are also provided, the report can be tailored to show
the report by dataset by each disk subsystem. It can also show the report by volume by disk



subsystem. If you are running z/OS v1R10, you can also include the number of cylinders
used by volume.

Figure 18-4 lists the output of the FLASHDA tool. It shows the Dataset name with the Address
and Volser where the dataset resides and the Total DISC Time in milliseconds for all Read
I/Os. This list is sorted in descending order to show which datasets would benefit the most
when moved to an SSD rank.

Address  Volser  Dataset name                              Total Rd DISC
C1D2     I10YY5  IMS10.DXX.WWXPRT11                            5,184,281
2198     XAGYAA  DB2PAG.DSNDBD.XX10X97I.CGPL.I0001.YY01        3,530,406
783E     XA2Y58  DB2PA2.DSNDBD.XX40X97E.RESB.I0001.YY02        2,978,921
430A     Y14Y3S  DB214.DSNDBD.ZZXDSENT.WWXSC.I0001.YY01        2,521,349
21B2     XAGYC6  DB2PAG.DSNDBD.XX40XTKL.MSEG.J0001.YY06        2,446,672
7A10     XA2Y76  DB2PA2.DSNDBD.XX40X97E.RESB.I0001.YY03        2,123,498
7808     XA2Y04  DB2PA2.DSNDBD.XX40X97E.RESB.I0001.YY01        1,971,660
2A13     X39Y60  DB2X39.DSNDBD.YY30X956.MONX.J0001.YY01        1,440,200
2B60     X39Y12  DB2X39.DSNDBD.YY40X975.EQUI.J0001.YY02        1,384,468
C1D2     I10YY5  IMS10.DJX.WWJIFPC1                            1,284,444
783B     XA2Y55  DB2PA2.DSNDBD.XX40X97E.RESB.I0001.YY04        1,185,571
2B5A     X39Y06  DB2X39.DSNDBD.YY40X975.EQUI.J0001.YY01        1,016,916
Figure 18-4 FLASHDA output

The next report, shown in Figure 18-5, is based on the preceding FLASHDA output and on
information extracted from the SMF records. This report shows the ranking of the Total Read
DISC Time in milliseconds by volume. It also shows the number of cylinders defined for that
volume and the serial number of the disk subsystem (DSS) where that volume resides.

Address  Volser  Total Rd DISC  # cyls  DSS
C1D2     I10YY5      6,592,229   32760  IBM-KLZ01
2198     XAGYAA      3,608,052   65520  IBM-MN721
783E     XA2Y58      3,032,377   65520  IBM-OP661
430A     Y14Y3S      2,654,083   10017  IBM-KLZ01
21B2     XAGYC6      2,648,126   65520  IBM-MN721
7A10     XA2Y76      2,389,512   65520  IBM-OP661
7808     XA2Y04      2,102,741   65520  IBM-OP661
22AA     XAGY84      1,458,696   65520  IBM-MN721
2193     XAGYA5      1,455,057   65520  IBM-MN721
2A13     X39Y60      1,444,708   65520  ABC-04749
21B5     XAGYC9      1,429,231   65520  IBM-MN721
2B60     X39Y12      1,387,409   65520  ABC-04749
Figure 18-5 Total Read DISC Time report by Volume

Using the report by dataset, you can select the datasets that are used by your critical
applications and migrate them to the SSD ranks.

If you use the report by volume, you can decide how many volumes you want to migrate to
SSD, and calculate how many SSD ranks are needed to accommodate the volumes that you
selected. A Disk Magic study can be performed to see how much performance improvement
can be achieved by migrating those volumes to SSD.
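As a rough first check before such a study, the cylinder counts from the by-volume report can
be converted into capacity to estimate how many SSD ranks the selected volumes would
occupy. In the sketch below, the usable capacity per SSD rank is a placeholder; the real figure
depends on the drive size and RAID type in your configuration.

import math

# Volumes selected from the by-volume report: (address, cylinders), taken from Figure 18-5.
selected = [("C1D2", 32760), ("2198", 65520), ("783E", 65520), ("430A", 10017)]

BYTES_PER_CYL = 15 * 56664        # 3390 geometry: 15 tracks per cylinder, 56,664 bytes per track
USABLE_GB_PER_SSD_RANK = 1800.0   # placeholder: usable capacity of one SSD rank in your configuration

total_gb = sum(cyls * BYTES_PER_CYL for _, cyls in selected) / 1e9
ranks_needed = math.ceil(total_gb / USABLE_GB_PER_SSD_RANK)
print(f"Selected capacity: {total_gb:.1f} GB -> approximately {ranks_needed} SSD rank(s)")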

Note: Contact your IBM Representative or IBM Business Partner to discuss a FLASHDA
study.



IBM i SSD Analyzer Tool
The SSD Analyzer Tool is designed to help you determine whether SSDs could improve
performance on your IBM i systems. The tool works with the performance data that is
collected on your system and supports releases V5R4 and V6R1.

Figure 18-6 shows the detailed analysis report by job name, sorted in descending order by
Disk Read Wait Total Seconds. This list can be used to identify the data used by the jobs that
would benefit the most from migration to SSD media.

SSD Data Analysis - Jobs Sorted by Disk Read Time


Performance member I289130103 in library SB01
Time period from 2009-10-16-13.01.05.000000 to 2009-10-16-14.00.00.000000
                            CPU        Disk Read    Disk Read     Disk
                            Total      Wait Total   Wait Average  Read Wait
Job Name                    Seconds    Seconds      Seconds       /CPU
-------------------------   --------   ----------   -----------   ---------
FSP084CU/SISADMIN/460669 30.426 3,468.207 .006636 114
FSP200CU/SISADMIN/516129 33.850 3,461.419 .006237 102
FSP173CU/SISADMIN/387280 48.067 3,427.064 .006548 71
FSP174CU/SISADMIN/499676 71.951 3,395.609 .007191 47
FSP110CU/SISADMIN/487761 33.360 3,295.738 .006799 99
FSP006CU/SISADMIN/516028 78.774 2,962.103 .007409 38
FSP002CU/SISADMIN/516000 79.025 2,961.518 .007441 37
FSP004CU/SISADMIN/516010 78.640 2,957.033 .007412 38

Figure 18-6 IBM i SSD Analyzer Tool - DETAIL report

Note: Contact your IBM Representative or IBM Business Partner to discuss an IBM i SSD
analysis.

IBM Tivoli Storage Productivity Center


IBM Tivoli Storage Productivity Center (previously known as IBM TotalStorage Productivity
Center) is a standard software package for managing complex storage environments. One
subcomponent of this package is IBM Tivoli Storage Productivity Center for Disk (TPC for Disk),
which is designed to help reduce the complexity of managing SAN storage devices by allowing
administrators to configure, manage, and monitor the performance of storage from a single console.

TPC for Disk is designed to:


򐂰 Configure multiple storage devices from a single console
򐂰 Monitor and track the performance of SAN-attached Storage Management Interface
Specification (SMI-S) compliant storage devices
򐂰 Enable proactive performance management by setting performance thresholds based on
performance metrics and the generation of alerts

IBM Tivoli Storage Productivity Center for Disk centralizes the management of networked
storage devices that implement the SNIA SMI-S specification, which includes the IBM System
Storage DS family, XIV®, N series, and SAN Volume Controller (SVC). It is designed to help
reduce storage management complexity and costs while improving data availability,
centralizing management of storage devices through open standards (SMI-S), enhancing
storage administrator productivity, increasing storage resource utilization, and offering
proactive management of storage devices. IBM Tivoli Storage Productivity Center for Disk also
offers the ability to discover storage devices using Service Location Protocol (SLP) and
provides the ability to configure devices, in addition to gathering event and error logs and
launching device-specific applications or elements.
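To give a feel for the kind of SMI-S interaction that underlies this management model, the
following sketch enumerates storage volumes from a CIM agent using the open source pywbem
package. It is not how TPC itself is implemented; the host name, credentials, and namespace
are placeholders, and certificate verification is disabled for simplicity.

import pywbem

# Placeholder endpoint and credentials for a CIM agent that fronts the storage device.
conn = pywbem.WBEMConnection(
    "https://cim-agent.example.com:5989",
    ("storageadmin", "password"),
    default_namespace="root/ibm",   # assumption: namespace exposed by the CIM agent
    no_verification=True,           # lab use only; configure certificates in production
)

# Enumerate the standard SMI-S storage volume class and print basic capacity details.
for vol in conn.EnumerateInstances("CIM_StorageVolume"):
    size_gb = (vol["BlockSize"] * vol["NumberOfBlocks"]) / 1e9
    print(f'{vol["DeviceID"]}  {vol["ElementName"]}  {size_gb:.1f} GB')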

For more information, see Managing Disk Subsystems using IBM TotalStorage Productivity
Center, SG24-7097. Also, refer to the following address:
http://www.ibm.com/servers/storage/software/center/index.html

IBM Certified Secure Data Overwrite


STG Lab Services offers the IBM Certified Secure Data Overwrite service for the DS8000 series
and for ESS models 800 and 750. This offering addresses the following issues:
򐂰 Deleted data is not necessarily gone forever. Usually, deleted means that the pointers to the
data are invalidated and the space can be reused. Until the space is actually reused, the data
remains on the media and can be read with the right tools.
򐂰 Regulations and business prudence require that the data actually be removed when the
media is no longer in use.

The service executes a multipass overwrite of the data disks in the storage system (a
conceptual sketch of the pass sequence follows this list):
򐂰 It operates on the entire storage system.
򐂰 It performs a three-pass overwrite, which is compliant with the DoD 5220.22-M procedure
for purging disks:
– Writes all sectors with zeros.
– Writes all sectors with ones.
– Writes all sectors with a pseudo-random pattern.
– Each pass reads back a random sample of sectors to verify that the writes were done.
򐂰 A fourth pass of zeros is run with InitSurf.
򐂰 IBM also purges client data from the server and HMC disks.
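The following Python sketch is purely illustrative of this pass sequence; it is not the certified
IBM tooling, and it operates on an ordinary file standing in for a disk device. The sector size,
sample size, and seeding scheme are arbitrary choices made for the example.

import os
import random

SECTOR = 512  # sector size used by this illustration

def pattern(pass_no, sector_idx, seed="csdo-demo"):
    """Pattern for a given pass: 1 = zeros, 2 = ones, 3 = reproducible pseudo-random."""
    if pass_no == 1:
        return b"\x00" * SECTOR
    if pass_no == 2:
        return b"\xff" * SECTOR
    rng = random.Random(f"{seed}-{sector_idx}")  # seeded per sector so it can be regenerated
    return bytes(rng.getrandbits(8) for _ in range(SECTOR))

def overwrite(path, sample_size=64):
    """Three-pass overwrite of an ordinary file, verifying a random sample after each pass."""
    sectors = os.path.getsize(path) // SECTOR
    for pass_no in (1, 2, 3):
        with open(path, "r+b") as dev:
            for idx in range(sectors):
                dev.write(pattern(pass_no, idx))
            dev.flush()
            os.fsync(dev.fileno())
            # Read back a random sample of sectors to verify that the pass was written.
            for idx in random.sample(range(sectors), min(sample_size, sectors)):
                dev.seek(idx * SECTOR)
                if dev.read(SECTOR) != pattern(pass_no, idx):
                    raise RuntimeError(f"Verification failed: pass {pass_no}, sector {idx}")
    print(f"{path}: 3 passes completed and verified")

The seeded per-sector generator lets the pseudo-random pass be verified without keeping a copy
of what was written; the real service additionally records each drive's defect list (G-list) before
and after the overwrite, as shown in the completion report.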

Certificate of completion
After the overwrite process has been completed, IBM delivers a complete report containing:
򐂰 A certificate of completion listing:
– The serial numbers of the systems overwritten.
– The dates and location where the service was performed.
– The overwrite level.
– The names of the engineers delivering the service and compiling the report.
򐂰 A description of the service and the report
򐂰 On a per data drive serial number basis:
– The G-list prior to overwrite.
– The pattern run against the drive.
– The success or failure of each pass.
– The G-list after the overwrite.
– Whether the overwrite was successful or not for each drive.

Figure A-3 shows a sample report by drive.

DS8000 PROD1 (Serial # 12345)


Disk       DDM Serial#    Barcode Serial#  Overwrite   Sector Defects  Sector Defects  Sector Defects  Sector Defects  Sector Defects
Drive      (Electronic)   (Visible)        Status      at Start        After 1st Pass  After 2nd Pass  After 3rd Pass  After 4th Pass
pdisk0 3HY6LFKZ 350A8459 Successful 0 0 0 0 0
pdisk1 3HY6N92M 350B055D Successful 0 0 0 0 0
pdisk2 3HY6FV79 350B0756 Successful 0 0 0 0 0
pdisk3 3HY6N1NA 350B075C Successful 0 0 0 0 0
pdisk4 3HY6MAQ0 350B0D07 Successful 0 0 0 0 0
pdisk5 3HY6P1E5 350B0D3D Successful 0 0 0 0 0
pdisk6 3HY6P2PG 350B0D79 Successful 0 0 0 0 0
pdisk7 3HY6P2QB 350B0D85 Successful 0 0 0 0 0
pdisk8 3HY6P2PR 350B0D86 Successful 0 0 0 0 0
pdisk9 3HY6P16J 350B0D86 Successful 0 0 0 0 0
pdisk10 3HY6P2BX 350B0DA9 Successful 0 0 0 0 0
pdisk11 3HY6P3LF 350B0DCF Successful 0 0 0 0 0
pdisk12 3HY6P3L2 350B0DDC Successful 21 21 21 21 21
pdisk13 3HY6M5ZX 350B0DDD Successful 0 0 0 0 0
pdisk14 3HY6P3K6 350B0DDE Successful 0 0 0 0 0
pdisk15 3HY6P3JW 350B0DE2 Successful 0 0 0 0 0
pdisk16 3HY6NNZF 350B0DE4 Successful 0 0 0 0 0
pdisk18 3HY6NPQM 350B0E0C Successful 0 0 0 0 0
pdisk19 3HY6NYZ0 350B0E0D Failed 0 0 – – –
pdisk20 3HY6P484 350B0E1F Successful 0 0 0 0 0
pdisk21 3HY6P4NZ 350B0E72 Successful 0 0 0 0 0
pdisk22 3HY6P5JB 350B0E73 Successful 0 0 0 0 0
pdisk23 3HY6SVDT 350BF7DC Successful 0 0 0 0 0
pdisk24 3HY6XJCV 350C916F Successful 0 0 0 0 0
pdisk25 3HY6XLQ6 350C91E4 Successful 0 0 0 0 0

Figure A-3 Sample report by drive

Drives erased
The erase service overwrites every disk in the storage system. Figure A-4 lists the drives that
are covered by the Secure Data Overwrite service and their contents.

Figure A-4 Drives erased and their contents

IBM Global Technology Services: service offerings
IBM can assist you in deploying IBM System Storage DS8800 subsystems, IBM Tivoli
Productivity Center, and SAN Volume Controller solutions. IBM Global Technology Services
has the right knowledge and expertise to reduce your system and data migration workload, as
well as the time, money, and resources needed to achieve a system-managed environment.

For more information about available services, contact your IBM representative or IBM
Business Partner, or visit the following addresses:
http://www.ibm.com/services/
http://www.ibm.com/servers/storage/services/disk.html

For details about available IBM Business Continuity and Recovery Services, contact your IBM
Representative or visit the following address:
http://www.ibm.com/services/continuity

For details about educational offerings related to specific products, visit the following address:
http://www.ibm.com/services/learning/index.html

Select your country, and then select the product as the category.

IBM STG Lab Services: Service offerings


In addition to IBM Global Technology Services, the STG Lab Services and Training teams
can assist you with one-off, client-tailored solutions and services that help you in your daily
work with IBM hardware and software components. For more information about this topic, go
to the following address:

http://www.ibm.com/systems/services/labservices/

Abbreviations and acronyms
AAL       Arrays Across Loops
AC        Alternating Current
AL-PA     Arbitrated Loop Physical Addressing
AMP       Adaptive Multistream Prefetching
API       Application Programming Interface
ASCII     American Standard Code for Information Interchange
ASIC      Application Specific Integrated Circuit
B2B       Business-to-Business VPN
BBU       Battery Backup Unit
CEC       Central Electronics Complex
CG        Consistency Group
CHPID     Channel Path ID
CIM       Common Information Model
CKD       Count Key Data
CoD       Capacity on Demand
CPU       Central Processing Unit
CSDO      Certified Secure Data Overwrite
CUIR      Control Unit Initiated Reconfiguration
DA        Device Adapter
DASD      Direct Access Storage Device
DC        Direct Current
DDM       Disk Drive Module
DFS       Distributed File System
DFW       DASD Fast Write
DHCP      Dynamic Host Configuration Protocol
DMA       Direct Memory Access
DMZ       De-Militarized Zone
DNS       Domain Name System
DPR       Dynamic Path Reconnect
DPS       Dynamic Path Selection
DSCIMCLI  Data Storage Common Information Model Command-Line Interface
DSCLI     Data Storage Command-Line Interface
DSFA      Data Storage Feature Activation
DVE       Dynamic Volume Expansion
EAV       Extended Address Volume
EB        Exabyte
ECC       Error Checking and Correction
EDF       Extended Distance FICON
EEH       Enhanced Error Handling
EPO       Emergency Power Off
EPOW      Emergency Power Off Warning
ESCON     Enterprise Systems Connection
ESS       Enterprise Storage Server
ESSNI     Enterprise Storage Server Network Interface
FATA      Fibre Channel Attached Technology Adapter
FB        Fixed Block
FC        FlashCopy
FCAL      Fibre Channel Arbitrated Loop
FCIC      Fibre Channel Interface Card
FCP       Fibre Channel Protocol
FCSE      FlashCopy Space Efficient
FDE       Full Disk Encryption
FFDC      First Failure Data Capture
FICON     Fiber Connection
FIR       Fault Isolation Register
FRR       Failure Recovery Routines
FTP       File Transfer Protocol
GB        Gigabyte
GC        Global Copy
GM        Global Mirror
GSA       Global Storage Architecture
GUI       Graphical User Interface
HA        Host Adapter
HACMP™    High Availability Cluster Multi-Processing
HBA       Host Bus Adapter
HCD       Hardware Configuration Definition
HMC       Hardware Management Console
HSM       Hardware Security Module
HTTP      Hypertext Transfer Protocol
HTTPS     Hypertext Transfer Protocol over SSL
IBM       International Business Machines Corporation
IKE       Internet Key Exchange
IKS       Isolated Key Server
IOCDS     Input/Output Configuration Data Set
IOPS      Input Output Operations per Second
IOSQ      Input/Output Supervisor Queue
IPL       Initial Program Load
IPSec     Internet Protocol Security
IPv4      Internet Protocol version 4
IPv6      Internet Protocol version 6
ITSO      International Technical Support Organization
IWC       Intelligent Write Caching
JBOD      Just a Bunch of Disks
JFS       Journaling File System
KB        Kilobyte
Kb        Kilobit
Kbps      Kilobits per second
KVM       Keyboard-Video-Mouse
L2TP      Layer 2 Tunneling Protocol
LBA       Logical Block Addressing
LCU       Logical Control Unit
LDAP      Lightweight Directory Access Protocol
LED       Light Emitting Diode
LFU       Least Frequently Used
LIC       Licensed Internal Code
LIP       Loop Initialization Protocol
LMC       Licensed Machine Code
LPAR      Logical Partition
LRU       Least Recently Used
LSS       Logical Subsystem
LUN       Logical Unit Number
LVM       Logical Volume Manager
MB        Megabyte
Mb        Megabit
Mbps      Megabits per second
MFU       Most Frequently Used
MGM       Metro Global Mirror
MIB       Management Information Base
MM        Metro Mirror
MPIO      Multipath Input/Output
MRPD      Machine Reported Product Data
MRU       Most Recently Used
NAT       Network Address Translation
NFS       Network File System
NIMOL     Network Installation Management on Linux
NTP       Network Time Protocol
NVRAM     Non-Volatile Random Access Memory
NVS       Non-Volatile Storage
OEL       Operating Environment License
OLTP      Online Transaction Processing
PATA      Parallel Attached Technology Adapter
PAV       Parallel Access Volumes
PB        Petabyte
PCI-X     Peripheral Component Interconnect Extended
PCIe      Peripheral Component Interconnect Express
PCM       Path Control Module
PFA       Predictive Failure Analysis
PHYP      POWER® Systems Hypervisor
PLD       Power Line Disturbance
PM        Preserve Mirror
PMB       Physical Memory Block
PPRC      Peer-to-Peer Remote Copy
PPS       Primary Power Supply
PSTN      Public Switched Telephone Network
PTC       Point-in-Time Copy
RAM       Random Access Memory
RAS       Reliability, Availability, Serviceability
RIO       Remote Input/Output
RPC       Rack Power Control
RPM       Revolutions per Minute
RPO       Recovery Point Objective
SAN       Storage Area Network
SARC      Sequential Adaptive Replacement Cache
SATA      Serial Attached Technology Adapter
SCSI      Small Computer System Interface
SDD       Subsystem Device Driver
SDM       System Data Mover
SE        Storage Enclosure
SFI       Storage Facility Image
SFTP      SSH File Transfer Protocol
SMIS      Storage Management Initiative Specification
SMP       Symmetric Multiprocessor
SMS       Storage Management Subsystem
SMT       Simultaneous Multithreading
SMTP      Simple Mail Transfer Protocol
SNIA      Storage Networking Industry Association
SNMP      Simple Network Management Protocol
SOI       Silicon on Insulator
SP        Service Processor
SPCN      System Power Control Network
SPE       Small Programming Enhancement
SRM       Storage Resource Management
SSD       Solid State Drive
SSH       Secure Shell
SSIC      System Storage Interoperation Center
SSID      Subsystem Identifier
SSL       Secure Sockets Layer
SSPC      System Storage Productivity Center
SVC       SAN Volume Controller
TB        Terabyte
TCE       Translation Control Entry
TCO       Total Cost of Ownership
TCP/IP    Transmission Control Protocol/Internet Protocol
TKLM      Tivoli Key Lifecycle Manager
TPC       Tivoli Storage Productivity Center
TPC-BE    Tivoli Storage Productivity Center Basic Edition
TPC-R     Tivoli Storage Productivity Center for Replication
TPC-SE    Tivoli Storage Productivity Center Standard Edition
UCB       Unit Control Block
UDID      Unit Device Identifier
UPS       Uninterruptible Power Supply
VPN       Virtual Private Network
VTOC      Volume Table of Contents
WLM       Workload Manager
WUI       Web User Interface
WWPN      Worldwide Port Name
XRC       Extended Remote Copy
YB        Yottabyte
ZB        Zettabyte
zHPF      High Performance FICON for z
zIIP      z9 Integrated Information Processor

Related publications

The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this book.

IBM Redbooks publications


For information about ordering these publications, see “How to get IBM Redbooks
publications” on page 404. Note that some of the documents referenced here may be
available in softcopy only.
򐂰 IBM System Storage DS8000 Series: IBM FlashCopy SE, REDP-4368
򐂰 Multiple Subchannel Sets: An Implementation View, REDP-4387
򐂰 IBM System Storage DS8700 Disk Encryption Implementation and Usage Guidelines,
REDP-4500
򐂰 IBM System Storage DS8000: LDAP Authentication, REDP-4505
򐂰 DS8000: Introducing Solid State Drives, REDP-4522
򐂰 DS8000 Thin Provisioning, REDP-4554
򐂰 A Comprehensive Guide to Virtual Private Networks, Volume I: IBM Firewall, Server and
Client Solutions, SG24-5201
򐂰 IBM System Storage DS8000: Copy Services for IBM System z, SG24-6787
򐂰 IBM System Storage DS8000: Copy Services for Open Systems, SG24-6788
򐂰 Managing Disk Subsystems using IBM TotalStorage Productivity Center, SG24-7097
򐂰 DS8000 Performance Monitoring and Tuning, SG24-7146
򐂰 Migrating to IBM System Storage DS8000, SG24-7432
򐂰 IBM System Storage Productivity Center Deployment Guide, SG24-7560-00
򐂰 IBM Tivoli Storage Productivity Center V4.2 Release Guide, SG24-7725-00
򐂰 IBM System Storage DS8000: Host Attachment and Interoperability, SG24-8887

Other publications
These publications are also relevant as further information sources. Note that some of the
documents referenced here may be available in softcopy only.
򐂰 IBM System Storage DS8000 Introduction and Planning Guide, GC27-2297
򐂰 DS8000 Introduction and Planning Guide, GC35-0515
򐂰 IBM System Storage DS: Command-Line Interface User's Guide, GC53-1127
򐂰 System Storage Productivity Center Software Installation and User’s Guide, SC23-8823
򐂰 IBM System Storage Productivity Center Introduction and Planning Guide, SC23-8824
򐂰 IBM System Storage DS8000 User’s Guide, SC26-7915
򐂰 IBM System Storage DS8000: Command-Line Interface User’s Guide, SC26-7916
򐂰 IBM System Storage DS8000 Host Systems Attachment Guide, SC26-7917

򐂰 System Storage Productivity Center User’s Guide, SC27-2336
򐂰 IBM System Storage Multipath Subsystem Device Driver User’s Guide, SC30-4131
򐂰 “Outperforming LRU with an adaptive replacement cache algorithm,” by N. Megiddo and D.
S. Modha, in IEEE Computer, volume 37, number 4, pages 58–65, 2004
򐂰 “SARC: Sequential Prefetching in Adaptive Replacement Cache” by Binny Gill, et al,
Proceedings of the USENIX 2005 Annual Technical Conference, pages 293–308
򐂰 “AMP: Adaptive Multi-stream Prefetching in a Shared Cache” by Binny Gill, et al, in
USENIX File and Storage Technologies (FAST), February 13 - 16, 2007, San Jose, CA
򐂰 VPNs Illustrated: Tunnels, VPNs, and IPSec, by Jon C. Snader, Addison-Wesley
Professional (November 5, 2005), ISBN-10: 032124544X

Online resources
These websites and URLs are also relevant as further information sources:
򐂰 IBM Disk Storage Feature Activation (DSFA) website
http://www.ibm.com/storage/dsfa
򐂰 Documentation for the DS8000
http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000ic/index.jsp
򐂰 System Storage Interoperation Center (SSIC)
http://www.ibm.com/systems/support/storage/config/ssic
򐂰 Security Planning website
http://publib16.boulder.ibm.com/doc_link/en_US/a_doc_lib/aixbman/security/ipsec
_planning.htm
򐂰 VPN Implementation, S1002693:
http://www.ibm.com/support/docview.wss?&rs=1114&uid=ssg1S1002693

How to get IBM Redbooks publications


You can search for, view, or download IBM Redbooks publications, Redpapers, Hints and
Tips, draft publications and Additional materials, as well as order hardcopy IBM Redbooks
publications or CD-ROMs, at this website:
ibm.com/redbooks

Help from IBM


IBM Support and downloads
ibm.com/support

IBM Global Services


ibm.com/services



Index
Call Home 15, 72, 188, 226, 353
Numerics Capacity Magic 390
2805 54 Capacity on Demand 27, 379
2805-MC4 230 capacity upgrade 380
2805-MC5 166 CEC 10, 23, 38, 56–57
3390 Model A 338 Central Electronic Complex (CEC) 10, 23, 55, 57
951 4 chfbvol 324
95E 4 Chipkill 60
chpass 197
A chuser 197
AAL 77 CIM 180
benefits 49 agent 167, 184
Accelerated Graphics Port (AGP) 36 CIMOM 235
activate licenses CKD volumes 92
applying activation codes using the DS CLI 216 allocation and deletion 96
functions 204 CLOCK 133
Adaptive Multi-stream Prefetching (AMP) 17, 38, 132 code update 72
Adaptive Multistream Prefetching (AMP) 7 commands
address groups 100 structure 311
Advanced Function Licenses 186 community name 352
activation 186 components 29
affinity 90 concurrent copy session timeout 294
air cooling 5 configuration
alias devices 297 flow 226
allocation 96 configuration state 98
allocation unit 95 configuring 98
AMP 17, 38, 132 configuring the DS8000 317, 331
applying activation codes using the DS CLI 216 configuring using DS CLI
architecture 35 configuring the DS8000 317, 331
array sites 87 consistency group
arrays 47, 87 timeout 295
arrays across loops (AAL) 77 Consistency Group FlashCopy 112
authorization 204 Control Unit Initiated Reconfiguration see CUIR
cooling 5
disk enclosure 51
B cooperative caching 128
baffle 381 Copy Services
base frame 4, 30, 56 event traps 354
battery backup assemblies 51 CSCAN 133
battery backup unit (BBU) 30–31 CUIR 71
battery pack 23
bay 68
BBU 30–31 D
business class 5 DA 44
Business Class Cabling 22, 26, 56 Fibre Channel 123
business class configuration 5 DA pair 174
business continuity 7, 12 daemon 350
Business-to-Business 184 data migration
Disk Magic 391
data set FlashCopy 112–113
C DB2 132
cables 164 DDM 49, 73, 123
cache 23, 130 hot pluggable 78
pollution 132 DDR2 39
caching algorithm 130 deconfiguring 98



default profile 309 simplified LUN masking 15
demand paging 130 SNMP configuration 353
demand-paged data 42 user management using DS CLI 192
destage 35, 95 DS8000
device adapter 10 activate license functions 204
device adapter (DA) 25, 37, 44, 174 activation of Advanced Function Licenses 186
DFSA 206 applying activation codes using the DS CLI 216
diagela 59 arrays 47
Diagnostic Error Log Analysis (diagela) 59 base frame 30
disk drive set 11 battery backup assemblies 51
disk drives business continuity 7
capacity 172 Capacity Magic 390
disk enclosure 30, 44 components 29
power and cooling 51 configuration
Disk Magic 391 flow 226
Disk Storage Feature Activation (DSFA) 206 configuring 317, 331
disk subsystem 44 considerations prior to installation 158
disk virtualization 86 data placement 134
DoD 5220.20-M 395 DDM 49
DPR 70 Disk Magic 391
DS API 108 disk subsystem 44
DS CIM Command-Line Interface (DSCIMCLI) 184 DS CLI console
DS CLI 108, 167, 182 DS CLI highlights 308
applying activation codes 216 DS HMC
command structure 311 planning 177
configuring second HMC 200 ESCON 164
console expansion frame 31
default profile 309 external DS HMC 199
help 314 Fibre Channel disk drives 10
highlights 308 Fibre Channel/FICON 164
interactive mode 311–312 FICON 19
operating systems 308 floor type and loading 160
return codes 313 frames 30
script command mode 313 host adapter 10
script mode 311 host interface and cables 164
single-shot mode 311 host prerequisites 187
user accounts 308 I/O priority queuing 18
user assistance 314 IBM Redbooks publications 403
user management 192 input voltage 163
DS Command-Line Interface see DS CLI maintenance windows 187
DS GUI 252 microcode 187
DS HMC modular expansion 30
external 199 Multiple Allegiance 18
planning 177 network connectivity planning 165
DS HMC planning online resources 404
activation of Advanced Function Licenses 186 PAV 18
host prerequisites 187 performance 16, 121
latest DS8000 microcode 187 planning for growth 175
maintenance windows 187 power and cooling 50
time synchronization 188 power connectors 163
DS Open API 184 power consumption 163
DS SM 108 power control features 164
user management 195 Power Line Disturbance (PLD) 164
DS Storage Manager GUI 34 power requirements 163
DS® family 4 POWER5 41, 126
DS6000 PPS 50
business continuity 12 processor complex 40
Capacity Magic 390 project planning 229, 251
dynamic LUN/volume creation and deletion 15 RAS 55, 57
large LUN and CKD volume support 15 Remote Mirror and Copy connectivity 171



remote power control 169 EPO switch 35, 81
remote support 168 Error Checking and Correcting (ECC) 60
room space and service clearance 162 ESCON 69, 164
RPC 50 ESS 800
SAN connection 169 Capacity Magic 390
scalability 15 ESSNI 179
SDD Ethernet
server-based 39 switches 52
service 14 Ethernet adapter 29
service processor 80 expansion frame 22, 24, 31, 56
setup 14 expansion unit 4
SMP 39 Extended Address Volumes (EAV) 7, 16, 92
spare 47 Extended Distance FICON 151–152
sparing considerations 172 Extended Remote Copy (XRC) 14, 117
SPCN 80 extent pools 90, 94
stripe size 139 extent rotation 96
supported environment 12 extent type 89–90
System z performance 18 external DS HMC 199
technical environment 179
time synchronization 188
z/OS Metro/Global Mirror 14 F
DS8000 series 4 failback 66
DS8700 failover 65
disk drive set 11 FATA disk drives
expansion enclosure 47 capacity 172
I/O enclosure 43 differences with FC 130
processor memory 42 performance 130
RIO-G 42 FC-AL
storage capacity 11 switched 10
switched FC-AL 46 FCP 23
DS8800 FDE 7, 11, 26, 82
architecture 35 Fibre Channel
disk enclosure 44 distances 50
EPO 34 host adapters 49
rack operator window 34 Fibre Channel/FICON 164
service processor 42 FICON 11, 19, 23, 35, 69
SPCN host adapters 49
dscimcli 167 File Transfer Protocol (FTP) 372
DSFA 206, 209 fixed block LUNs 91
Dynamic alias 297 FlashCopy 12–13, 108
dynamic LUN/volume creation and deletion 15 benefits 111
Dynamic Path Reconnect (DPR) 70 Consistency Group 112
Dynamic Volume Expansion 6 data set 112
Dynamic Volume Expansion (DVE) 98, 105, 324, 337 establish on existing RMC primary 112
inband commands 13, 112
incremental 13, 112
E Multiple Relationship 13, 112
eam rotateexts 322 no background copy 111
eam rotatevols 322 options 111
Early Power-Off Warning (EPOW) 42, 62 persistent 112
Earthquake Resistance Kit 84 Refresh Target Volume 112
EAV 92, 147 FlashCopy SE 7, 13, 109
ECC 60, 74 floating spare 78
Element Manager 180, 254 floor type and loading 160
emergency power off (EPO) 81 footprint 4
energy consumption 4 frames 30
Enterprise Choice 23 base 30
warranty 204 expansion 31
Enterprise Storage Server Network Interface server Full Disk Encryption 6
(ESSNI) 179 Full Disk Encryption (FDE) 7, 10–11, 26, 82, 174, 206,
EPO 34, 81 380

functions inband commands 13, 112
activate license 204 increase capacity 301
Incremental FlashCopy 13, 112
index scan 132
G indicator 204
Global Copy 12, 14, 108, 114, 119 initckdvol 95
Global Mirror 12, 14, 108, 115, 119 initfbvol 95
how it works 116 input voltage 163
installation
H DS8000 checklist 158
HA 10, 49 Intelligent Write Caching (IWC) 7, 38, 42, 133
hard drive rebuild 61 interactive mode
Hardware Management Console (HMC) 11, 166, 169 DS CLI 311–312
help 315 internal fabric 10
DS CLI 314 IOCDS 70
High Performance FICON for z (zHPF) 153 IOPS 129
hit ratio 42 IOSQ Time 148
HMC 14, 34, 52, 57, 71, 166 IPv6 7, 52
HMC planning isolated key server (IKS) 11, 54
technical environment 179 IWC 7, 38, 42, 133
host
interface 164 L
prerequisite microcode 187 lane 36
host adapter 10 large LUN and CKD volume support 15
host adapter (HA) 6 lateral change 213
host adapter see HA LCU 293
host adapters 23 LCU type 294
Fibre Channel 49 LDAP authentication 7, 307
FICON 49 LDAP based authentication 191
four port 125 Least Recently Used (LRU) 131
host attachment 101 licensed function
hosttype 328 authorization 204
HWN021724W 243 indicator 204
HyperPAV 18, 147–148, 297 Licensed Machine Code (LMC) 6
HyperPAV license 205 logical configuration 158
HyperSwap 7 logical control unit (LCU) 63, 70, 99
Hypervisor 58 logical size 94
Hypervisor (PHYP) 58 logical subsystem see LSS
logical volumes 91
I long busy state 294
I/O enclosure 43, 51 long busy wait 128
I/O latency 42 longwave 50
I/O priority queuing 18, 151 LSS 63, 99
I/O tower 28 lsuser 193
i5/OS 17, 93 LUNs
IBM Certified Secure Data Overwrite 16 allocation and deletion 96
IBM FlashCopy SE 13, 94, 320, 332 fixed block 91
IBM Redbooks publications 403 masking 15
IBM System Storage Interoperability Center (SSIC) 187 System i 93
IBM System Storage N series 137 LVM
IBM Tivoli Storage Productivity Center Basic Edition (TPC striping 137
BE) 53
IBM TotalStorage Multipath Subsystem Device Driver see M
SDD machine reported product data (MPRD) 371
IBM TotalStorage Productivity Center 11 machine type 23
IBM TotalStorage Productivity Center for Data 231 maintenance windows 187
IBM TotalStorage Productivity Center for Disk 231 man page 315
IKS 11, 54, 83 Management Information Base (MIB) 350
impact 95 managepwfile 194, 310



memory DIMM 39 FATA disk drives 130
metadata 71 open systems 140
Metro Mirror 12, 14, 108, 114, 119 determining the number of paths to a LUN 140
Metro/Global Mirror 12 where to attach the host 141
MIB 350, 352 workload characteristics 134
microcode z/OS 142
update 72 connect to System z hosts 142
mkckdvol 335 disk array sizing 129
mkfbvol 321 Performance Accelerator feature 28
mkrank 222 Persistent FlashCopy 112
mkuser 197 physical partition (PP) 139
Model 941 22 physical paths 247
Model 951 4, 22–23 physical planning 157
Model 95E 22 delivery and staging area 160
modified data 35 floor type and loading 160
modular expansion 30 host interface and cables 164
Most Recently Used (MRU) 131 input voltage 163
MRPD 371 network connectivity planning 165
Multiple Allegiance 18, 150 planning for growth 175
Multiple Reader 14, 18 power connectors 163
Multiple Relationship FlashCopy 13, 112 power consumption 163
Multiple Subchannel Sets (MSS) 148 power control features 164
Power Line Disturbance (PLD) 164
power requirements 163
N Remote Mirror and Copy connectivity 171
native device interface 235 remote power control 169
network connectivity planning 165 remote support 168
Network Installation Management on Linux (NIMoL) 61 room space and service clearance 162
Network Time Protocol (NTP) 188 sparing considerations 172
NMS 350 storage area network connection 169
non-disruptive upgrade 5 physical size 94
non-volatile storage (NVS) 37, 64 planning
NTP 188 DS Hardware Management Console 155
Nucleus Initialization Program (NIP) 145 logical 155
NVS 37, 42, 58, 64 physical 155
project 155
O planning for growth 175
OEL 204 power 51
offloadauditlog 377 BBU 79
online resources 404 disk enclosure 51
open systems I/O enclosure 51
performance 140 PPS 79
sizing 140 processor enclosure 51
Operating Environment License (OEL) 159, 204 RPC 80
Out of Band Fabric agent 247 power and cooling 50
over provisioning 94 BBU 79
PPS 79
RPC 80
P power connectors 163
Parallel Access Volumes see PAV power consumption 163
PAV 18, 99 power control card 31
PCI Express 10, 31, 36, 38 power control features 164
adapter 164 Power Distribution Unit (PDU) 79
slot 43 power distribution units (PDU) 23, 79
PCI express 6 Power Distribution Units (PDUs) 79
PCI-X 36 Power Line Disturbance (PLD) 164
PDU 23 power loss 67
Peer-to-Peer Remote Copy (PPRC) 114 power requirements 163
performance Power Sequence Controller (PSC) 169
data placement 134 power subsystem 79

POWER5 41, 126 repository 94, 320, 332
POWER6+ 5, 10 repository size 94
PPRC-XD 114 return codes
PPS 30, 79 DS CLI 313
Predictive Failure Analysis (PFA) 74 RIO-G 31, 38, 42
prefetch wastage 132 interconnect 62
prefetched data 42 RMC 12–13, 108, 113, 171
prefetching 130 Global Copy 114
primary frame 56 Global Mirror 115
primary power supplies (PPS) 72 Metro Mirror 114
primary power supply see PPS rmsestg 321
priority queuing 151 rmuser 197
processor complex 10, 40, 57 role 242
processor enclosure room space 162
power 51 rotate extents 7, 96, 296
project plan rotate volumes 296, 322, 335
considerations prior to installation 158 rotated volume 96
physical planning 157 rotateexts 325, 337
roles 159 RPC 30, 50
project planning 229, 251 RPO 119
information required 159 RPQ 174
PTC 111

S
R SAN 69
rack operator window 34 SAN Volume Controller (SVC) 137
rack power control cards see RPC SARC 16–17, 42, 130, 132
RAID 10 SAS 8, 26
AAL 77 SAS disk drive 4
drive failure 77 SAS drive 5
implementation 77 SAS-2 drive 10
RAID 5 scalability 15
drive failure 74 DS8000
RAID 6 7, 75, 97 scalability 127
implementation 76 script command mode
raidtype 318 DS CLI 313
RANDOM 17 script mode
ranks 88, 94 DS CLI 311
RAS 55 scrubbing 74
CUIR 71 SDD 17, 140
fault avoidance 59 Secure Data Overwrite 395
First Failure Data Capture 59 Security Administrator 191
naming 56 self-healing 60
rebuild time 76 SEQ 17
reconfiguring 98 SEQ list 132
Recovery Key 83 Sequential prefetching in Adaptive Replacement Cache
Recovery Point Objective see RPO see SARC
Redbooks Web site 404 sequential read throughput 5
Contact us xvi sequential write throughput 5
reduction 213 Serial Attached SCSI 26
related publications 403 Serial Attached SCSI (SAS) 86
help from IBM 404 server affinity 90
how to get IBM Redbooks publications 404 server-based SMP 39
online resources 404 service clearance 162
reliability, availability, serviceability see RAS service processor 42, 80
Remote Mirror and Copy function see RMC session timeout 295
Remote Mirror and Copy see RMC SFI 57
remote power control 169 S-HMC 11
remote support 72, 168 shortwave 50
reorg 137 showckdvol 335
repcapalloc 320 showfbvol 336



showpass 194 storage unit 56
showsestg 320 stripe 94
showuser 193 size 139
silicon on insulator (SOI) 10 striped volume 97
simplified LUN masking 15 switched FC-AL 10
simultaneous multi-threading (SMT) 39 advantages 45
single-shot mode DS8700 implementation 46
DS CLI 311 symmetric multiprocessor (SMP) 38
sizing System Data Mover (SDM) 118
open systems 140 System i
z/OS 142 LUNs 93
SMT 39 system power control network see SPCN
SMUX 350 System Storage Productivity Center (SSPC) 7, 11, 57,
SNMP 188, 350 166
agent 350–351 System z
configuration 353, 360 performance 18
Copy Services event traps 354
manage 350
notifications 353 T
preparation for the management software 361 temporal ordering 133
preparation on the DS HMC 361 Thin Provisioning 94
preparation with DS CLI 361 Three Site BC 232
trap 350, 352 time synchronization 188
trap 101 354 Tivoli Key Lifecycle Manager (TKLM) 11, 54
trap 202 356 Tivoli Productivity Center 394
trap 210 357 Tivoli Storage Productivity Center for Replication (TPC-R)
trap 211 357 108
trap 212 357 TKLM 11, 54
trap 213 357 tools
trap 214 358 Capacity Magic 390
trap 215 358 topology 11
trap 216 358 TotalStorage Productivity Center (TPC) 180
trap 217 358 TotalStorage Productivity Center for Fabric 231
trap request 350 TotalStorage Productivity Center for Replication (TPC-R)
SOI 10 8, 71, 232
Solid State Drive (SSD) 7, 10, 129, 174 TotalStorage Productivity Standard Edition (TPC-SE)
Space Efficient 317 231
Space Efficient volume 105 TPC
spares 47, 78 native device interface 243
floating 78 TPC for Disk 394
sparing 78, 172 TPC-R 108
sparing considerations 172 track 95, 109
spatial ordering 133 Track Space Efficient (TSE) 94, 296, 320
SPCN 42, 62, 80 Track Space Efficient Volumes 94
SSD 7 translation control entry (TCE) 58
SSID 294 trap 350, 352
SSL connection 15
SSPC 11, 57, 186, 230 U
install 233 user accounts
SSPC user management 240 DS CLI 308
Standard Cabling 22 user assistance
Standby Capacity on Demand see Standby CoD DS CLI 314
Standby CoD 12, 27, 175 user management
storage area network connection 169 using DS CLI 192
storage capacity 11 using DS SM 195
storage complex 56 user management using DS SM 195
storage facility image 57 user role 242
Storage Hardware Management Console (HMC) 57
Storage Hardware Management Console see HMC
Storage Pool Striping 7, 90, 96, 135–137, 296, 322

V
value based licensing 204
Virtual Private Network (VPN) 168
virtual space 94
virtualization
abstraction layers for disk 86
address groups 100
array sites 87
arrays 87
benefits 104
concepts 85
definition 86
extent pools 90
hierarchy 103
host attachment 101
logical volumes 91
ranks 88
volume group 102
volume groups 102, 284
volume manager 97
volumes
CKD 92
VPN 15

W
warranty 204
Web UI 180, 184
web-based user interface (Web UI) 180
window
rack operator 34
WLM 145
Workload Manager 145

X
XRC 14, 117
XRC session 295

Z
z/OS Global Mirror 12, 14, 18, 23, 108, 117, 120
z/OS Global Mirror session timeout 295
z/OS Metro/Global Mirror 14, 116, 118
z/OS Workload Manager 145
zHPF 6, 18, 153
extended distance 154
multitrack 154
zIIP 118



Back cover

IBM System Storage DS8800
Architecture and Implementation

High Density Storage Enclosure
8 Gbps Host Adapters
4-Port Device Adapter

This IBM® Redbooks® publication describes the concepts, architecture, and
implementation of the IBM System Storage® DS8800 storage subsystem. The book
provides reference information to assist readers who need to plan for, install,
and configure the DS8800.

The IBM System Storage DS8800 is the most advanced model in the IBM DS8000
lineup. It introduces IBM POWER6+-based controllers, with dual two-way or dual
four-way processor complex implementations. It also features enhanced 8 Gbps
device adapters and host adapters.

The DS8800 is equipped with high-density storage enclosures populated with 24
small form factor SAS-2 drives. Solid State Drives are also available, as well as
support for the Full Disk Encryption (FDE) feature.

Its switched Fibre Channel architecture, dual processor complex implementation,
high availability design, and incorporated advanced Point-in-Time Copy and Remote
Mirror and Copy functions make the DS8800 system suitable for mission-critical
business functions.

Host attachment and interoperability topics for the DS8000 series, including the
DS8800, are now covered in the IBM Redbooks publication IBM System Storage
DS8000: Host Attachment and Interoperability, SG24-8887.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support
Organization. Experts from IBM, Customers and Partners from around the world
create timely technical information based on realistic scenarios. Specific
recommendations are provided to help you implement IT solutions more effectively
in your environment.

For more information:
ibm.com/redbooks

SG24-8886-00              ISBN 0738435066