IBM System Storage
Bertrand Dufrasne
Doug Acuff
Pat Atkinson
Urban Biel
Hans Paul Drumm
Jana Jamsek
Peter Kimmel
Gero Schmidt
Alexander Warmuth
ibm.com/redbooks
International Technical Support Organization
January 2011
SG24-8886-00
Note: Before using this information and the product it supports, read the information in “Notices” on
page xi.
This edition applies to the IBM System Storage DS8800 with DS8000 Licensed Machine Code (LMC) level
6.6.xxx.xx (bundle version 86.0.xxx.xx).
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
The team who wrote this book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvi
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvi
7.5.3 Data placement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
7.6 Performance and sizing considerations for open systems . . . . . . . . . . . . . . . . . . . . . 140
7.6.1 Determining the number of paths to a LUN. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
7.6.2 Dynamic I/O load-balancing: Subsystem Device Driver . . . . . . . . . . . . . . . . . . . 140
7.6.3 Automatic port queues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
7.6.4 Determining where to attach the host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
7.7 Performance and sizing considerations for System z . . . . . . . . . . . . . . . . . . . . . . . . . 142
7.7.1 Host connections to System z servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
7.7.2 Parallel Access Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
7.7.3 z/OS Workload Manager: Dynamic PAV tuning . . . . . . . . . . . . . . . . . . . . . . . . . 145
7.7.4 HyperPAV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
7.7.5 PAV in z/VM environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
7.7.6 Multiple Allegiance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
7.7.7 I/O priority queuing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
7.7.8 Performance considerations on Extended Distance FICON . . . . . . . . . . . . . . . . 151
7.7.9 High Performance FICON for z . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
7.7.10 Extended distance High Performance FICON . . . . . . . . . . . . . . . . . . . . . . . . . 154
Chapter 10. IBM System Storage DS8800 features and license keys . . . . . . . . . . . . 203
10.1 IBM System Storage DS8800 licensed functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
10.2 Activation of licensed functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
10.2.1 Obtaining DS8800 machine information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
10.2.2 Obtaining activation codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
10.2.3 Applying activation codes using the GUI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
10.2.4 Applying activation codes using the DS CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
10.3 Licensed scope considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
10.3.1 Why you get a choice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
10.3.2 Using a feature for which you are not licensed . . . . . . . . . . . . . . . . . . . . . . . . . 218
10.3.3 Changing the scope to All . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
10.3.4 Changing the scope from All to FB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
10.3.5 Applying an insufficient license feature key . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
10.3.6 Calculating how much capacity is used for CKD or FB. . . . . . . . . . . . . . . . . . . 221
12.3.5 Display host volumes through SVC to the assigned DS8000 volume. . . . . . . . 249
Capacity Magic. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
Disk Magic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
HyperPAV analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
FLASHDA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
IBM i SSD Analyzer Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
IBM Tivoli Storage Productivity Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
IBM Certified Secure Data Overwrite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
IBM Global Technology Services: service offerings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
IBM STG Lab Services: Service offerings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX 5L™, AIX®, DB2®, DS4000®, DS6000™, DS8000®, Enterprise Storage Server®, ESCON®, FICON®,
FlashCopy®, GPFS™, HACMP™, HyperSwap®, i5/OS®, IBM®, Power Architecture®, Power Systems™,
POWER5™, POWER5+™, POWER6+™, POWER6®, PowerHA™, PowerPC®, POWER®, Redbooks®, Redpapers™,
Redbooks (logo)®, RMF™, S/390®, System i®, System p®, System Storage DS®, System Storage®,
System x®, System z10®, System z®, Tivoli®, TotalStorage®, WebSphere®, XIV®, z/OS®, z/VM®,
z10™, z9®
AMD, the AMD Arrow logo, and combinations thereof, are trademarks of Advanced Micro Devices, Inc.
Disk Magic, and the IntelliMagic logo are trademarks of IntelliMagic BV in the United States, other countries,
or both.
Novell, SUSE, the Novell logo, and the N logo are registered trademarks of Novell, Inc. in the United States
and other countries.
Oracle, JD Edwards, PeopleSoft, Siebel, and TopLink are registered trademarks of Oracle Corporation and/or
its affiliates.
Java, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other
countries, or both.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Intel Xeon, Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks
of Intel Corporation or its subsidiaries in the United States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
This IBM® Redbooks® publication describes the concepts, architecture, and implementation
of the IBM System Storage® DS8800 storage subsystem. The book provides reference
information to assist readers who need to plan for, install, and configure the DS8800.
The IBM System Storage DS8800 is the most advanced model in the IBM DS8000® lineup. It
introduces IBM POWER6+™-based controllers, with dual two-way or dual four-way processor
complex implementations. It also features enhanced 8 Gbps device adapters and host
adapters.
The DS8800 is equipped with high-density storage enclosures populated with 24 small form
factor SAS-2 drives. Solid State Drives are also available, as well as support for the Full Disk
Encryption (FDE) feature.
Its switched Fibre Channel architecture, dual processor complex implementation, high
availability design, and incorporated advanced Point-in-Time Copy and Remote Mirror and
Copy functions make the DS8800 system suitable for mission-critical business functions.
Host attachment and interoperability topics for the DS8000 series including the DS8800 are
now covered in the IBM Redbooks publication, IBM System Storage DS8000: Host
Attachment and Interoperability, SG24-8887.
To read about DS8000 Copy Services functions see the Redbooks IBM System Storage
DS8000: Copy Services for Open Environments, SG24-6788, and DS8000 Copy Services for
IBM System z, SG24-6787.
For information related to specific features, see IBM System Storage DS8700: Disk
Encryption Implementation and Usage Guidelines, REDP-4500, IBM System Storage
DS8000: LDAP Authentication, REDP-4505, and DS8000: Introducing Solid State Drives,
REDP-4522.
Bertrand Dufrasne is an IBM Certified Consulting IT Specialist and Project Leader for IBM
System Storage disk products at the International Technical Support Organization, San Jose
Center. He has worked at IBM in various IT areas. Bertrand has written many IBM Redbooks
publications and has also developed and taught technical workshops. Before joining the
ITSO, he worked for IBM Global Services as an Application Architect in the retail, banking,
telecommunication, and health care industries. He holds a Masters degree in Electrical
Engineering from the Polytechnic Faculty of Mons, Belgium.
Doug Acuff is an Advisory Software Engineer for the DS8000 System Level Serviceability
team in Tucson, Arizona. He has been with IBM for ten years as a member of both test and
development teams for IBM Systems Storage products including ESS, DS8000 and
DS6000™ models. His responsibilities include testing DS8000 hardware and firmware,
having led multiple hardware test teams in both Functional Verification and System Level
Test. Doug holds a Masters degree in Information Systems from the University of New
Mexico.
Urban Biel is an IT Specialist in IBM GTS Slovakia. His areas of expertise include System
p®, AIX®, Linux®, PowerHA™, DS6000/DS8000/SVC, Softek, and GPFS™. He has been
involved in various projects that typically include HA/DR solutions implementation using
DS8000 copy services with AIX/PowerHA. He also executed several storage and server
migrations. Urban holds a second degree in Electrical Engineering and Informatics from the
Technical University of Kosice.
Hans Paul Drumm is an IT Specialist in IBM Germany. He has 25 years of experience in the
IT industry, and has worked at IBM for nine years. Hans holds a degree in Computer Science
from the University of Kaiserslautern. His areas of expertise include Solaris, HP-UX and
z/OS®, with a special focus on disk subsystem attachment.
Jana Jamsek is an IT Specialist with IBM Slovenia. She works in Storage Advanced
Technical Support for Europe as a specialist for IBM Storage Systems and the IBM i (i5/OS®)
operating system. Jana has eight years of experience working with the IBM System i®
platform and its predecessor models, as well as eight years of experience working with
storage. She holds a Masters degree in Computer Science and a degree in Mathematics from
the University of Ljubljana, Slovenia.
Peter Kimmel is an IT Specialist and ATS team lead of the Enterprise Disk Solutions team at
the European Storage Competence Center (ESCC) in Mainz, Germany. He joined IBM
Storage in 1999 and since then has worked with all Enterprise Storage Server® (ESS) and
System Storage DS8000 generations, with a focus on architecture and performance. He has
been involved in the Early Shipment Programs (ESPs) of these early installs, and has
co-authored several DS8000 IBM Redbooks publications. Peter holds a Diploma (MSc)
degree in Physics from the University of Kaiserslautern.
Gero Schmidt is an IT Specialist in the IBM ATS technical sales support organization in
Germany. He joined IBM in 2001, working at the European Storage Competence Center
(ESCC) in Mainz and providing technical support for a broad range of IBM storage products
(SSA, ESS, DS4000, DS6000, and DS8000) in Open Systems environments with a primary
focus on storage subsystem performance. He participated in the product rollout and beta test
program for the DS6000/DS8000 series. Gero holds a degree in Physics (Dipl.-Phys.) from
the Technical University of Braunschweig, Germany.
Many thanks to the following people at the IBM Systems Lab Europe in Mainz, Germany, who
helped with equipment provisioning and preparation:
Uwe Heinrich Müller, Uwe Schweikhard, Günter Schmitt, Jörg Zahn, Mike Schneider, Hartmut
Bohnacker, Stephan Weyrich, Uwe Höpfne, Werner Wendler
Special thanks to the Enterprise Disk team manager Bernd Müller and the ESCC director
Klaus-Jürgen Rünger for their continuous interest and support regarding the ITSO Redbooks
projects.
John Bynum
Worldwide Technical Support Management
James Davison, Dale Anderson, Brian Cagno, Stephen Blinick, Brian Rinaldi, John Elliott,
Kavitah Shah, Rick Ripberger, Denise Luzar, Stephen Manthorpe, Jeff Steffan
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
[email protected]
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
More detailed information about those functions and features is provided in subsequent
chapters.
The IBM System Storage DS8000 series encompasses the flagship disk enterprise storage
products in the IBM System Storage portfolio. The DS8800, which is the IBM fourth
generation high-end disk system, represents the latest in this series introducing new small
form factor 2.5-inch SAS disk drive technology, POWER6+ processors, and new 8 Gbps disk
adapter (DA) and host adapter (HA) cards.
The IBM System Storage DS8800, shown in Figure 1-1, is designed to support the most
demanding business applications with its exceptional all-around performance and data
throughput. Combined with the world-class business resiliency and encryption features of the
DS8800, this provides a unique combination of high availability, performance, and security. Its
tremendous scalability, broad server support, and virtualization capabilities can help simplify
the storage environment by consolidating multiple storage systems onto a single DS8800.
Introducing new high density storage enclosures, the DS8800 model offers a considerable
reduction in footprint and energy consumption, thus making it the most space- and
energy-efficient model in the DS8000 series.
Figure 1-1 IBM System Storage DS8800, the IBM fourth generation high-end disk system
The IBM System Storage DS8800 adds Models 951 (base frame) and 95E (expansion unit) to
the 242x machine type family, delivering cutting edge technology and improved space and
energy efficiency.
The IBM System Storage DS8800 supports DS8000 Licensed Machine Code (LMC)
6.6.xxx.xx (bundle version 86.0.xxx.xx), or later.
The DS8800 inherits most of the features of its predecessors in the DS8000 series including:
Storage virtualization offered by the DS8000 series allows organizations to allocate
system resources more effectively and better control application quality of service. The
DS8000 series improves the cost structure of operations and lowers energy consumption
through a tiered storage environment.
The Dynamic Volume Expansion simplifies management by enabling easier, online
volume expansion to support application data growth, and to support data center
migrations to larger volumes to ease addressing constraints.
The DS8800 is available with different models and configurations, which are discussed in
detail in Chapter 2, “IBM System Storage DS8800 models” on page 21.
Figure 1-2 and Figure 1-3 show the front and rear view of a DS8800 base frame (model 951)
with two expansion frames (model 95E), which is the current maximum DS8800 system
configuration.
Figure 1-3 DS8800 base frame with two expansion frames (rear view, 2-way, no PLD option)
Compared to the POWER5+ processor in the DS8300 models, the POWER6+ processor can
enable over a 50% performance improvement in I/O operations per second in transaction
processing workload environments. Additionally, sequential workloads can receive as much
as 200% bandwidth improvement. The DS8800 offers either a dual 2-way processor complex
or a dual 4-way processor complex. A processor complex is also referred to as a storage
server or central processor complex (CPC).
Device adapters
The DS8800 offers 8 Gbps device adapters. These adapters provide improved IOPS
performance, throughput, and scalability. They are optimized for SSD technology and
architected for long-term support for scalability growth. These capabilities complement the
POWER6+ server family to provide significant performance enhancements allowing up to
400% improvement in single adapter throughput performance.
Host adapters
The DS8800 series offers host connectivity with four-port or eight-port 8 Gbps Fibre
Channel/FICON host adapters. The 8 Gbps Fibre Channel/FICON Host Adapters are offered
in longwave and shortwave, and auto-negotiate to 8 Gbps, 4 Gbps, or 2 Gbps link speeds.
Each port on the adapter can be individually configured to operate with the Fibre Channel
protocol (FCP) or FICON.
Utilizing IBM Tivoli Storage Productivity Center (TPC) Basic Edition software, the System
Storage Productivity Center (SSPC) extends the capabilities available through the IBM DS
Storage Manager. SSPC offers the unique
capability to manage a variety of storage devices connected across the storage area network
(SAN). The rich, user-friendly graphical user interface provides a comprehensive view of the
storage topology, from which the administrator can explore the health of the environment at
an aggregate or in-depth view. Moreover, the TPC Basic Edition, which is pre-installed on the
SSPC, can optionally be upgraded to TPC Standard Edition. The Standard Edition adds
monitoring and reporting capabilities that can be used for more in-depth performance
reporting, asset and capacity reporting, and automation for the DS8000, and to manage other
resources, such as other storage devices, server file systems, tape drives, tape libraries, and
SAN environments.
For DS8800 storage systems shipped with Full Disk Encryption (FDE) drives, two TKLM key
servers are required. An isolated key server (IKS) with dedicated hardware and
non-encrypted storage resources is required and can be ordered from IBM.
There is space for a maximum of 240 disk drive modules (DDMs) in the base frame (951),
336 DDMs in the first expansion frame (95E) and another 480 DDMs in the second expansion
frame (95E). With a maximum of 1056 DDMs, the DS8800 model 951 with the dual 4-way
feature, using 600 GB drives, currently provides up to 633 TB of physical storage capacity
with two expansion frames (95E) in a considerably smaller footprint and up to 40% less power
consumption than previous generations of DS8000.
The DS8800 storage capacity can be configured as RAID 5, RAID 6, RAID 10, or as a
combination (some restrictions apply for Full Disk Encryption (FDE) and Solid State Drives).
The DS8800 supports over 90 platforms. For the most current list of supported platforms, see
the DS8000 System Storage Interoperation Center at:
http://www.ibm.com/systems/support/storage/config/ssic/index.jsp
This rich support of heterogeneous environments and attachments, along with the flexibility to
easily partition the DS8800 storage capacity among the attached environments, can help
support storage consolidation requirements and dynamic, changing environments.
For data protection and availability needs, the DS8800 provides Metro Mirror, Global Mirror,
Global Copy, Metro/Global Mirror, and z/OS Global Mirror, which are Remote Mirror and Copy
functions. These functions are also available and are fully interoperable with previous models
of the DS8000 family and even the ESS 800 and 750 models. These functions provide
storage mirroring and copying over large distances for disaster recovery or availability
purposes.
We briefly discuss Copy Services in Chapter 6, “IBM System Storage DS8800 Copy Services
overview” on page 107. For detailed information on Copy Services, see the Redbooks IBM
System Storage DS8000: Copy Services for Open Systems, SG24-6788, and IBM System
Storage DS8000: Copy Services for IBM System z, SG24-6787.
Incremental FlashCopy
Incremental FlashCopy provides the capability to refresh a LUN or volume involved in a
FlashCopy relationship. When a subsequent FlashCopy is initiated, only the data required to
make the target current with the source's newly established point-in-time is copied.
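As an illustration of the change-recording idea behind Incremental FlashCopy (not the actual
DS8000 microcode, and with hypothetical names), the following minimal Python sketch tracks
which tracks were modified after the initial copy so that a refresh copies only those tracks:

    # Conceptual sketch of an incremental point-in-time copy using a
    # change-recording set. Illustration only, not the DS8000 implementation.
    class IncrementalCopy:
        def __init__(self, source):
            self.source = source                  # list of track contents
            self.target = list(source)            # initial full copy
            self.changed = set()                  # tracks written since last flash

        def host_write(self, track, data):
            self.source[track] = data
            self.changed.add(track)               # record the change

        def refresh(self):
            # Copy only the tracks that changed since the last point-in-time copy.
            for track in self.changed:
                self.target[track] = self.source[track]
            copied = len(self.changed)
            self.changed.clear()
            return copied

    vol = IncrementalCopy(["t0", "t1", "t2", "t3"])
    vol.host_write(2, "t2-new")
    print(vol.refresh())   # 1 -> only the changed track is re-copied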
Consistency Groups
FlashCopy Consistency Groups can be used to maintain a consistent point-in-time copy
across multiple LUNs or volumes, or even multiple DS8000, ESS 800, and ESS 750 systems.
IBM FlashCopy SE
The IBM FlashCopy SE feature provides a “space efficient” copy capability that can greatly
reduce the storage capacity needed for point-in-time copies. Only the capacity needed to
save pre-change images of the source data is allocated in a copy repository. This enables
more space efficient utilization than is possible with the standard FlashCopy function.
Furthermore, less capacity can mean fewer disk drives and lower power and cooling
requirements, which can help reduce costs and complexity. FlashCopy SE can be especially
useful in the creation of temporary copies for tape backup, online application checkpoints, or
copies for disaster recovery testing. For more information about FlashCopy SE, see IBM
System Storage DS8000 Series: IBM FlashCopy SE, REDP-4368.
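Conceptually, FlashCopy SE behaves like a copy-on-write snapshot: the first update to a
source track saves the pre-change image into the repository, and unchanged tracks consume
no repository space. The following minimal Python sketch illustrates the idea only; it is not
the DS8000 implementation, and all names are hypothetical.

    # Conceptual copy-on-write sketch of a space efficient point-in-time copy.
    class SpaceEfficientCopy:
        def __init__(self, source):
            self.source = source          # live volume (list of track contents)
            self.repository = {}          # pre-change images of updated tracks

        def host_write(self, track, data):
            # Save the old image into the repository only on the first update.
            if track not in self.repository:
                self.repository[track] = self.source[track]
            self.source[track] = data

        def read_copy(self, track):
            # The point-in-time view: repository image if present, else source.
            return self.repository.get(track, self.source[track])

    vol = SpaceEfficientCopy(["t0", "t1", "t2"])
    vol.host_write(1, "t1-new")
    print(vol.read_copy(1), len(vol.repository))   # "t1" 1 -> only changed tracks use space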
Metro Mirror
Metro Mirror, previously called Peer-to-Peer Remote Copy (PPRC), provides a synchronous
mirror copy of LUNs or volumes at a remote site within 300 km. Metro Mirror Consistency
Groups, when used with a supporting application, can be used to maintain data and
transaction consistency across multiple LUNs or volumes, or even multiple DS8000,
ESS 800, and ESS 750 systems.
Global Copy
Global Copy, previously called Extended Distance Peer-to-Peer Remote Copy (PPRC-XD), is
a non-synchronous long distance copy option for data migration and backup.
Global Mirror
Global Mirror provides an asynchronous mirror copy of LUNs or volumes over virtually
unlimited distances. The distance is typically limited only by the capabilities of the network
and channel extension technology being used. A Global Mirror Consistency Group is used to
maintain data consistency across multiple LUNs or volumes, or even multiple DS8000,
ESS 800, and ESS 750 systems.
Metro/Global Mirror
Metro/Global Mirror is a three-site data replication solution for both Open Systems and the
System z environments. The local site (Site A) to intermediate site (Site B) leg provides high
availability replication using synchronous Metro Mirror, and the intermediate site (Site B) to
remote site (Site C) leg provides long distance disaster recovery replication using
asynchronous Global Mirror.
z/OS Global Mirror also offers Incremental Resync, which can significantly reduce the time
needed to restore a DR environment after a HyperSwap in a three-site z/OS Metro/Global
Mirror configuration, because it is possible to change the copy target destination of a copy relation
without requiring a full copy of the data.
For maintenance and service operations, the Storage Hardware Management Console
(HMC) is the focal point. The management console is a dedicated workstation that is
physically located (installed) inside the DS8800 storage system and that can automatically
monitor the state of the system, notifying IBM when service is required.
The HMC is also the interface for remote services (Call Home and Call Back), which can be
configured to meet client requirements. It is possible to allow one or more of the following:
Call on error (machine-detected)
Connection for a few days (client-initiated)
Remote error investigation (service-initiated)
The remote connection between the management console and the IBM Service organization
is done using a virtual private network (VPN) point-to-point connection over the internet or
modem. A new secure SSL connection protocol option is now available for call home support
and additional audit logging.
The DS8800 storage system can be ordered with an outstanding four-year warranty, an
industry first, on both hardware and software.
Large LUN and large count key data (CKD) volume support
You can configure LUNs and volumes to span arrays, allowing for larger LUN sizes up to
2 TB. The maximum CKD volume size is 262,668 cylinders (about 223 GB), greatly reducing
the number of volumes to be managed and creating a new volume type on z/OS called 3390
Model A. This capability is referred to as Extended Address Volumes and requires z/OS 1.10
or later.
An STG Lab Services Offering for all the DS8000 series and the ESS models 800 and 750
includes the following services:
Multi-pass overwrite of the data disks in the storage system
Purging of client data from the server and HMC disks
Note: The secure overwrite functionality is offered as a service exclusively and is not
intended for use by clients, IBM Business Partners, or IBM field support personnel.
To discover more about the IBM Certified Secure Data Overwrite service offerings, contact
your IBM sales representative or IBM Business Partner.
With all these components, the DS8800 is positioned at the top of the high performance
category.
SARC, AMP, and IWC play complementary roles. While SARC is carefully dividing the cache
between the RANDOM and the SEQ lists to maximize the overall hit ratio, AMP is managing
the contents of the SEQ list to maximize the throughput obtained for the sequential
workloads. IWC manages the write cache and decides in what order and at what rate to destage data to disk.
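To make the division of labor among these algorithms more concrete, the following highly
simplified Python sketch shows how a read cache might adaptively split its space between a
RANDOM list and a SEQ list, rewarding whichever list is producing hits. It only illustrates the
idea; it is not the SARC, AMP, or IWC implementation, and all names are hypothetical.

    # Highly simplified sketch of adaptively dividing a read cache between a
    # RANDOM list (demand-paged data) and a SEQ list (prefetched data).
    # Illustration of the idea only, not the actual SARC algorithm.
    from collections import OrderedDict

    class AdaptiveReadCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.seq_target = capacity // 2        # desired size of the SEQ list
            self.random = OrderedDict()            # LRU list for demand-paged data
            self.seq = OrderedDict()               # LRU list for prefetched data

        def _evict(self):
            # Evict from whichever list exceeds its current target share.
            while len(self.random) + len(self.seq) > self.capacity:
                if len(self.seq) > self.seq_target and self.seq:
                    self.seq.popitem(last=False)
                elif self.random:
                    self.random.popitem(last=False)
                else:
                    self.seq.popitem(last=False)

        def access(self, page, sequential):
            lru_list = self.seq if sequential else self.random
            if page in lru_list:
                lru_list.move_to_end(page)         # hit: refresh recency
                # Adapt: a hit rewards the list it occurred in with more space.
                delta = 1 if sequential else -1
                self.seq_target = min(self.capacity - 1,
                                      max(1, self.seq_target + delta))
                return True
            lru_list[page] = True                  # miss: stage the page
            self._evict()
            return False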
Solid State Drives are high-IOPS class enterprise storage devices targeted at Tier 0,
I/O-intensive workload applications that can use a high level of fast-access storage. Solid
State Drives offer a number of potential benefits over Hard Disk Drives, including better IOPS
performance, lower power consumption, less heat generation, and lower acoustical noise.
The DS8800 can take even better advantage of Solid State Drives due to its faster 8 Gbps
Fibre Channel protocol device adapters (DAs) compared to previous models of the DS8000
family.
SDD is provided with the DS8000 series at no additional charge. Fibre Channel (SCSI-FCP)
attachment configurations are supported in the AIX, HP-UX, Linux, Windows®, Novell
NetWare, and Oracle Solaris environments.
Note: Support for multipath is included in an IBM i server as part of Licensed Internal Code
and the IBM i operating system (including i5/OS).
For more information about SDD, see IBM System Storage DS8000: Host Attachment and
Interoperability, SG24-8887.
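As a conceptual sketch of the dynamic load-balancing idea behind a multipath driver
(illustration only; SDD's real policies and interfaces differ, and all names are hypothetical),
each new I/O can simply be sent down the path with the fewest outstanding requests:

    # Least-busy path selection: a minimal sketch of dynamic I/O load balancing.
    class MultipathDevice:
        def __init__(self, paths):
            self.outstanding = {p: 0 for p in paths}   # in-flight I/Os per path

        def select_path(self):
            # Least-busy path wins; ties resolved by path name for determinism.
            return min(self.outstanding, key=lambda p: (self.outstanding[p], p))

        def start_io(self):
            path = self.select_path()
            self.outstanding[path] += 1
            return path

        def complete_io(self, path):
            self.outstanding[path] -= 1

    dev = MultipathDevice(["fscsi0", "fscsi1", "fscsi2", "fscsi3"])
    print(dev.start_io())   # fscsi0 here; later I/Os spread across the least-loaded paths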
Chapter 7, “Performance” on page 121, gives you more information about the performance
aspects of the DS8000 family.
Chapter 7, “Performance” on page 121, gives you more information about these performance
enhancements.
See IBM System Storage DS8000: Copy Services for IBM System z, SG24-6787, for a
detailed discussion of z/OS Global Mirror and related enhancements.
Note: Model 951 supports nondisruptive upgrades from dual 2-way to dual 4-way.
Former 92E and 94E expansion frames cannot be reused in the DS8800.
A model 951 supports nondisruptive upgrades from a 48 DDM installation to a fully
configured unit with two expansion racks.
Only one expansion frame can be added concurrently to the business class
configuration.
Addition of a second expansion frame to a business class configuration requires
recabling as standard class. Note that this is disruptive, and is available by RPQ only.
Table 2-1 provides a comparison of the DS8800 model 951 and its available combination of
resources.
Table 2-1 DS8800 series model comparison 951 and additional resources
Table 2-1 columns: Base model, Cabling type, Expansion model, Processor type, Max DDMs,
Max processor memory, Max host adapters.
Depending on the DDM sizes (which can be different within a 951 or 95E) and the number of
DDMs, the total capacity is calculated accordingly.
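As a rough illustration of that calculation (ignoring spares, RAID overhead, and formatting,
so the results are approximate), the physical capacity is simply the number of DDMs
multiplied by the drive size:

    # Rough physical-capacity estimate: number of DDMs x drive size (decimal TB).
    # Spares, RAID parity, and formatting overhead are ignored here.
    def physical_capacity_tb(num_ddms, drive_gb):
        return num_ddms * drive_gb / 1000.0

    print(physical_capacity_tb(240, 600))    # base frame, 600 GB drives -> 144.0 TB
    print(physical_capacity_tb(1056, 600))   # full three-frame system   -> 633.6 TB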
Each Fibre Channel/FICON host adapter has four or eight Fibre Channel ports, providing up
to 128 Fibre Channel ports for a maximum configuration.
Figure 2-1 DS8800 base frame with covers removed: front and rear
Figure 2-2 shows the maximum configuration for a DS8800 Model 951 base frame with one
95E expansion frame.
Figure 2-2 DS8800 configuration: 951 base unit with one 95E expansion frame: front
There are no additional I/O enclosures installed for the second expansion frame. The result of
installing all possible 1056 DDMs is that they will be distributed nearly evenly over all the
device adapter (DA) pairs.
Figure 2-3 DS8800 models 951/95E maximum configuration with 1056 disk drives: front
The DS8800 business cabling class option of the Model 951 is available as a dual 2-way
processor complex with installation enclosures for up to 240 DDMs and 4 FC adapter cards.
Figure 2-4 shows the maximum configuration of 2-way standard versus 2-way business class
cabling.
Figure 2-4 DS8800 2-way processor standard cabling versus business cabling: front
The DS8800 offers a selection of disk drives, including Serial Attached SCSI second
generation drives (SAS) that feature a 6 Gbps interface. These drives, using a 2.5-inch form
factor, provide increased density and thus increased performance per frame. The SAS drives
are available in 146 GB (15K RPM) as well as 450 GB (10K RPM) and 600 GB (10K RPM)
capacities.
Besides SAS hard disk drives (HDDs), it is also possible to install 300 GB Solid State Drives
(SSDs) in the DS8800. There are restrictions regarding how many SSD drives are supported
and how configurations are intermixed. SSDs are only supported in RAID 5 arrays. Solid
State Drives can be ordered in drive groups of sixteen. The suggested configuration of SSDs
is sixteen drives per DA pair. The maximum configuration of SSDs is 48 per DA pair.
SSDs may be intermixed on a DA pair with spinning drives, but intermix between SSDs and
spinning drives is not supported within storage enclosure pairs.
The DS8800 can be ordered with Full Disk Encryption (FDE) drives, with a choice of 450 GB
(10K RPM) and 600 GB (10K RPM) SAS drives. You cannot intermix FDE drives with other
drives in a DS8800 system. For additional information about FDE drives, see IBM Encrypted
Storage Overview and Customer Requirements at the following site:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101479
The DS8800 model 951 can have up to 144 DDMs and 4 FC adapter cards in the 2-way
standard configuration.
A summary of the capacity characteristics is listed in Table 2-2. The minimum capacity is
achieved by installing one eight-drive group of 300 GB SSD drives.
Table 2-2 Capacity comparison of device adapters, DDMs, and storage capacity
Configuration                                     DA pairs    Physical capacity
2-way base frame, one I/O enclosure pair          1 or 2      2.3 to 86 TB
2-way base frame, business class cabling          1 or 2      2.3 to 144 TB
4-way base frame, two I/O enclosure pairs         1 to 4      2.3 to 144 TB
4-way, one expansion frame                        1 to 8      0.6 to 346 TB
4-way, two expansion frames                       1 to 8      0.6 to 633 TB
A significant benefit of the DS8800 series is the ability to add DDMs without disruption for
maintenance. IBM offers capacity on demand solutions that are designed to meet the
changing storage needs of rapidly growing e-business. The Standby Capacity on Demand
(CoD) offering is designed to provide you with the ability to tap into additional storage and is
particularly attractive if you have rapid or unpredictable storage growth.
Up to six standby CoD disk drive sets (96 disk drives) can be concurrently field-installed into
your system. To activate, you simply logically configure the disk drives for use, which is a
nondisruptive activity that does not require intervention from IBM.
Upon activation of any portion of a standby CoD disk drive set, you must place an order with
IBM to initiate billing for the activated set. At that time, you can also order replacement
standby CoD disk drive sets. For more information about the standby CoD offering, refer to
the DS8800 series announcement letter, which can be found at the following address:
http://www.ibm.com/common/ssi/index.wss
When ordering 432 disk drives, you get eight DA pairs, which is the maximum number of DA
pairs. Adding more drives will not add DA pairs.
Figure 3-1 shows some of the frame variations that are possible with the DS8800. The left
frame is a base frame that contains the processors. In this example, it has two 4-way IBM
System p POWER6+ servers, because only 4-way systems can have expansion frames. The
center frame is an expansion frame that contains additional I/O enclosures but no additional
processors. The right frame is an expansion frame that contains simply disks and no
processors, I/O enclosures, or batteries. Each frame contains a frame power area with power
supplies and other power-related hardware. A DS8800 can consist of up to three frames.
The base frame can contain up to ten disk enclosures, each of which can contain up to 24
disk drives. In a maximum configuration, the base frame can hold 240 disk drives. Disk drives
are either hard disk drives (HDD) with real spinning disks or Solid State Drives (SSD), which
have no moving parts and enable a significant increase in random transactional processing.
A disk enclosure contains either HDDs or SSDs. With SSDs in a disk enclosure, it contains
either 16 drives or is partially populated with eight SSD drives. A disk enclosure populated
with HDDs always contains 16 or 24 drives.
The base frame can be configured using either standard or business class cabling. Standard
cabling is optimized for performance and allows for highly scalable configurations with large
long term growth. The business class option allows a system to be configured with more
drives per device adapter, thereby reducing configuration cost and increasing adapter
utilization. This configuration option is intended for configurations where capacity and high
resource utilization are most important. Scalability is limited in the business class
option.
Standard cabling supports either two-way processors with one I/O enclosure pair or four-way
processors with two I/O enclosure pairs. Standard cabling with one I/O enclosure pair
supports up to two DA pairs and six storage enclosures (144 DDMs). Standard cabling with
two I/O enclosure pairs supports up to four DA pairs and ten storage enclosures (240 DDMs).
Business class cabling utilizes two-way processors and one I/O enclosure pair. Business
class cabling supports two DA pairs and up to ten storage enclosures (240 DDMs).
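The cabling limits described above can be summarized, for illustration only, in a small lookup
table (all names in this sketch are hypothetical):

    # Base-frame cabling options and their limits as described above,
    # captured in a small lookup table for illustration.
    BASE_FRAME_CABLING = {
        "standard, one I/O enclosure pair":       {"da_pairs": 2, "max_ddms": 144},
        "standard, two I/O enclosure pairs":      {"da_pairs": 4, "max_ddms": 240},
        "business class, one I/O enclosure pair": {"da_pairs": 2, "max_ddms": 240},
    }

    def fits(option, requested_ddms):
        return requested_ddms <= BASE_FRAME_CABLING[option]["max_ddms"]

    print(fits("standard, one I/O enclosure pair", 192))   # False: needs two I/O pairs
                                                           # or business class cabling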
Notes:
Business class cabling is available as an initial order only. A business class cabling
configuration can only be ordered as a base frame with no expansion frames.
DS8800s do not support model conversion, that is, business class and standard class
cabling conversions are not supported. Re-cabling is available as an RPQ only and is
disruptive.
Inside the disk enclosures are cooling fans located in the storage enclosure power supply
units. These fans blow exhaust to the rear of the frame.
Between the disk enclosures and the processor complexes are two Ethernet switches and a
Storage Hardware Management Console (HMC).
The base frame contains two processor complexes (CPCs). These System p POWER6+
servers contain the processor and memory that drive all functions within the DS8800.
Finally, the base frame contains two or four I/O enclosures. These I/O enclosures provide
connectivity between the adapters and the processors. Each I/O enclosure can contain up to
two device adapters and up to two host adapters.
The interprocessor complex communication still utilizes the RIO-G loop as in previous models
of the DS8000 family. However, this RIO-G loop no longer has to handle data traffic, which
greatly improves performance.
The first expansion frame can hold up to 14 storage enclosures, which contain the disk drives.
They are described as 24-packs, because each enclosure can hold 24 small form factor (SFF)
disks. In a maximum configuration, the first expansion frame can hold 336 disk drives.
Disk drives are either hard disk drives (HDDs) with real spinning disks or Solid State Drives
(SSDs), which have no moving parts and enable a significant increase in random
transactional processing. A disk enclosure contains either HDDs or SSDs but not both.
With SSDs in a disk drive enclosure pair, it is either 16 drives or a half-populated disk
enclosure with eight SSD drives. A disk enclosure populated with HDDs always contains 16
or 24 drives. Note that only up to 48 SSDs can be configured per Device Adapter (DA) pair,
and it is not advisable to configure more than 16 SSDs per DA pair. Example 3-1 shows the
configuration required to get to the maximum SSD configuration (384 DDMs).
The next 16 SSDs would go into DA pair 2 (first frame, bottom 2 enclosures), then
the next 16 SSDs into DA pair 0 (first frame, next 2 enclosures), and so on, until
all 8 DA pairs had 32 SSDs.
Then the next 16 SSDs would go into DA pair 2 again (first frame, bottom 2
enclosures, filling the enclosures), then the next 16 SSDs into DA pair 0 (first
frame, next 2 enclosures, filling the enclosures), and so on, until all 8 DA pairs
had 48 SSDs.
The second expansion frame would be all HDDs, for a system total of 384 SSDs and
672 HDDs.
The second expansion frame can hold up to 20 storage enclosures. In the maximum
configuration, the second expansion frame can hold 480 disk drives. The maximum
configuration with the base frame and two expansion frames is 1056 drives.
An expansion frame contains I/O enclosures and adapters if it is the first expansion frame that
is attached to a DS8800 4-way system. Note that you cannot add any expansion frame to a
DS8800 2-way system.
The second expansion frame cannot have I/O enclosures and adapters. If the expansion
frame contains I/O enclosures, the enclosures provide connectivity between the adapters and
the processors. The adapters contained in the I/O enclosures can be either device adapters
or host adapters, or both. The expansion frame model is called 95E. You cannot use
expansion frames from previous DS8000 models as expansion frames for a DS8800 storage
system.
Note: The business class cabling configuration is limited to one expansion frame and a
maximum of 576 DDMs. This restriction can be removed by converting to standard cabling,
which is disruptive. This conversion is only available via RPQ.
Each panel has two line cord indicators, one for each line cord. For normal operation, both of
these indicators are illuminated if each line cord is supplying correct power to the frame.
There is also a fault indicator on the bottom. If this indicator is illuminated, use the DS Storage
Manager GUI or the HMC Manage Serviceable Events menu to determine why this indicator
is illuminated.
There is also an EPO switch near the top of the primary power supplies (PPS). This switch is
only for emergencies. Tripping the EPO switch will bypass all power sequencing control and
result in immediate removal of system power. Do not trip this switch unless the DS8000 is
creating a safety hazard or is placing human life at risk. Data in non-volatile storage (NVS) will
not be destaged and will be lost. Figure 3-4 shows the EPO switch.
There is no power on/off switch on the operator window because power sequencing is
managed through the HMC. This ensures that all data in nonvolatile storage, known as
modified data, is destaged properly to disk prior to power down. It is not possible to shut down
or power off the DS8800 from the operator window, except in an emergency by using the
EPO switch.
In effect, the DS8800 consists of two processor complexes. Each processor complex has
access to multiple host adapters to connect to Fibre Channel or FICON hosts. A DS8800 can
have up to 16 host adapters with 4 or 8 I/O ports on each adapter.
Fibre Channel adapters are also used to connect to internal fabrics, which are Fibre Channel
switches to which the disk drives are connected.
Also see Chapter 4, “RAS on IBM System Storage DS8800” on page 55 and 7.1.4,
“POWER6+: heart of the DS8800 dual-cluster design” on page 126 for additional information
about the POWER6 processor.
PCI Express was designed to replace the general-purpose PCI expansion bus, the high-end
PCI-X bus, and the Accelerated Graphics Port (AGP) graphics card interface.
PCI Express is a serial I/O interconnect. Transfers are bidirectional, which means data can
flow to and from a device simultaneously. The PCI Express infrastructure involves a switch so
that more than one device can transfer data at the same time.
Unlike previous PC expansion interfaces, rather than being a bus, it is structured around
point-to-point full duplex serial links called lanes. Lanes can be grouped by 1x, 4x, 8x, 16x, or
32x, and each lane is high speed, using an 8b/10b encoding that results in 2.5 Gbps = 250
MBps per lane in a generation 1 implementation. Bytes are distributed across the lanes to
provide a high throughput (see Figure 3-5).
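The per-lane numbers follow directly from the 8b/10b encoding: 10 raw bits carry 8 data bits,
so a 2.5 Gbps Gen 1 lane delivers 250 MBps of data in each direction, and a Gen 2 lane at
5 Gbps delivers twice that. A short illustrative calculation:

    # Per-lane PCI Express data throughput with 8b/10b encoding.
    def lane_mbps(raw_gbps, encoding_efficiency=8 / 10):
        data_gbps = raw_gbps * encoding_efficiency
        return data_gbps * 1000 / 8          # bits -> bytes (decimal MB)

    print(lane_mbps(2.5))        # Gen 1: 250.0 MBps per lane per direction
    print(lane_mbps(5.0) * 4)    # x4 Gen 2 link: about 2000 MBps per direction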
As shown in Figure 3-6, a bridge is used to translate the x8 Gen 1 lanes from the processor to
the x4 Gen 2 lanes used by the I/O enclosures.
You can learn more about PCI Express at the following site:
http://www.redbooks.ibm.com/Redbooks.nsf/RedbookAbstracts/tips0456.html?Open
The attached hosts interact with software running on the complexes to access data on logical
volumes. The servers manage all read and write requests to the logical volumes on the disk
arrays. During write requests, the servers use fast-write, in which the data is written to volatile
memory on one complex and persistent memory on the other complex. The server then
reports the write as complete before it has been written to disk. This provides much faster
write performance. Persistent memory is also called nonvolatile storage (NVS). Additional
information about this topic is available in 3.5, “Host adapters” on page 49.
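The fast-write behavior described above can be sketched as follows: the write is held in the
owning complex's volatile cache and mirrored into the partner's NVS before the host is
acknowledged, and destage to disk happens later. This is a conceptual illustration only, not
the DS8000 microcode, and all names are hypothetical.

    # Conceptual sketch of fast-write with NVS mirroring between complexes.
    class ProcessorComplex:
        def __init__(self, name):
            self.name = name
            self.cache = {}      # volatile read/write cache
            self.nvs = {}        # persistent (battery-protected) memory

    def fast_write(owner, partner, volume, track, data):
        owner.cache[(volume, track)] = data      # local volatile copy
        partner.nvs[(volume, track)] = data      # protected copy on the partner
        return "write complete"                  # host is acknowledged here

    def destage(owner, partner, disk, volume, track):
        disk[(volume, track)] = owner.cache[(volume, track)]
        partner.nvs.pop((volume, track), None)   # NVS copy no longer needed

    cpc0, cpc1, disk = ProcessorComplex("CPC0"), ProcessorComplex("CPC1"), {}
    print(fast_write(cpc0, cpc1, "vol1", 42, b"data"))
    destage(cpc0, cpc1, disk, "vol1", 42)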
When a host performs a read operation, the processor complexes, also called CPCs, fetch
the data from the disk arrays using the high performance switched disk architecture. The data
is then cached in volatile memory in case it is required again. The servers attempt to
anticipate future reads by an algorithm known as Sequential prefetching in Adaptive
Replacement Cache (SARC). Data is held in cache as long as possible using this smart
caching algorithm. If a cache hit occurs where requested data is already in cache, then the
host does not have to wait for it to be fetched from the disks. The cache management has
been enhanced by breakthrough caching technologies from IBM Research, such as the
Adaptive Multi-stream Prefetching (AMP) and Intelligent Write Caching (IWC) (see 7.4,
“DS8000 superior caching algorithms” on page 130).
Both the device and host adapters operate on high bandwidth fault-tolerant point-to-point
4-lane Generation 2 PCI Express interconnections. The device adapters feature an 8 Gb
Fibre Channel interconnect speed with a 6 Gb SAS connection to the disk drives for each
connection and direction. On a DS8800, as on a DS8700, the data traffic is isolated from the
processor complex communication that utilizes the RIO-G loop.
Figure 3-7 on page 38 shows how the DS8800 hardware is shared between the servers. On
the left side is one processor complex (CPC). The CPC uses the N-way symmetric
multiprocessor (SMP) of the complex to perform its operations. It records its write data in its
own volatile memory and in the persistent memory (NVS) of the other processor complex.
Additionally, the System p POWER6+ processors used in the DS8800 support the execution
of two independent threads concurrently. This capability is referred to as simultaneous
multi-threading (SMT). The two threads running on the single processor share a common L1
cache. The SMP/SMT design minimizes the likelihood of idle or overworked processors, while
a distributed processor design is more susceptible to an unbalanced relationship of tasks to
processors.
The design decision to use SMP memory as an I/O cache is a key element of the IBM storage
architecture. Although a separate I/O cache could provide fast access, it cannot match the
access speed of the SMP main memory.
All memory installed on any processor complex is accessible to all processors in that
complex. The addresses assigned to the memory are common across all processors in the
same complex. On the other hand, using the main memory of the SMP as the cache leads to a
partitioned cache. Each processor has access to the processor complex’s main memory, but
not to that of the other complex. You should keep this in mind with respect to load balancing
between processor complexes.
Figure 3-9 shows the DS8800 with the 4-way feature. In this case, four I/O enclosures are
required.
Figure 3-10 DS8800 with expansion frame and eight I/O enclosures
The DS8800 features IBM POWER6+ server technology. Compared to the POWER5+ based
processor models in DS8100 and DS8300, the POWER6 processor can achieve up to a 50%
performance improvement in I/O operations per second in transaction processing workload
environments and up to 150% throughput improvement for sequential workloads.
For details about the server hardware used in the DS8800, see IBM System p 570 Technical
Overview and Introduction, REDP-4405, found at:
http://www.redbooks.ibm.com/redpieces/pdfs/redp4405.pdf
The effectiveness of a read cache depends upon the hit ratio, which is the fraction of requests
that are served from the cache without necessitating a read from the disk (read miss).
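A short illustrative calculation shows why the hit ratio matters for average read service time;
the latency figures used here are assumptions for illustration only, not DS8800 measurements.

    # Average read service time for a given hit ratio (illustrative latencies).
    def avg_read_time_ms(hit_ratio, cache_ms=0.3, disk_ms=6.0):
        return hit_ratio * cache_ms + (1 - hit_ratio) * disk_ms

    for h in (0.5, 0.8, 0.95):
        print(h, round(avg_read_time_ms(h), 2))   # higher hit ratio -> lower latency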
To help achieve dramatically greater throughput and faster response times, the DS8800 uses
Sequential-prefetching in Adaptive Replacement Cache (SARC). SARC is an efficient
adaptive algorithm for managing read caches with both:
Demand-paged data: It finds recently used data in the cache.
Prefetched data: It copies data speculatively into the cache before it is even requested.
The decision of when and what to prefetch is made in accordance with the Adaptive
Multi-stream Prefetching (AMP), a cache management algorithm.
The Intelligent Write Caching (IWC) manages the write cache and decides in what order and
at what rate to destage.
For details about cache management, see 7.4, “DS8000 superior caching algorithms” on
page 130.
The SP performs predictive failure analysis based on any recoverable processor errors. The
SP can monitor the operation of the firmware during the boot process, and it can monitor the
operating system for loss of control. This enables the service processor to take appropriate
action.
The SPCN monitors environmentals such as power, fans, and temperature. Environmental
critical and noncritical conditions can generate Early Power-Off Warning (EPOW) events.
Critical events trigger appropriate signals from the hardware to the affected components to
prevent any data loss without operating system or firmware involvement. Non-critical
environmental events are also logged and reported.
3.3.2 RIO-G
In a DS8800, the RIO-G ports are used for inter-processor communication only. RIO stands
for remote I/O. The RIO-G has evolved from earlier versions of the RIO interconnect.
Each RIO-G port can operate at 1 GHz in bidirectional mode and is capable of passing data in
each direction on each cycle of the port. It is designed as a high performance, self-healing
interconnect.
In a 4-way configuration, the 2 GX+ buses connect to different PCI-X adapters in the I/O
enclosures. One GX+ bus is connected to the lower I/O enclosures in each frame. The other
GX+ bus is connected to the upper I/O enclosures in each frame. This is to optimize
performance in a single-frame 4-way configuration. The connections are displayed in
Figure 3-12.
A 2-way configuration would only use the lower pairs of I/O enclosures. A 4-way configuration
is always required for an expansion frame. Slots 3 and 6 are used for the device adapters.
Slots 1 and 4 are available to install up to two host adapters per I/O enclosure. There can be
a total of 8 host adapters in a 4-I/O enclosure configuration and 16 host adapters in an 8-I/O
enclosure configuration using an expansion frame.
We describe the disk subsystem components in the remainder of this section. Also see 4.6,
“RAS on the disk subsystem” on page 72 for additional information.
Each DS8800 device adapter (DA) card offers four FC-AL ports. These ports are used to
connect the processor complexes through the I/O enclosures to the disk enclosures. The
adapter is responsible for managing, monitoring, and rebuilding the RAID arrays. The adapter
provides remarkable performance thanks to a high function/high performance ASIC. To
ensure maximum data integrity, it supports metadata creation and checking.
The DAs are installed in pairs for redundancy in connecting to each disk enclosure. This is
why we refer to them as pairs.
Note: If a DDM is not present, its slot must be occupied by a dummy carrier. Without a
drive or a dummy, cooling air does not circulate properly.
The DS8800 also supports Solid State Drives (SSDs). SSDs also come in disk enclosures
either partially populated with 8 disks, 16 disks, or fully populated with 24 disks. They have
the same form factor as the traditional disks. SSDs and other disks cannot be intermixed
within the same enclosure pair.
Each DDM is an industry standard Serial Attached SCSI (SAS) disk. The DDMs are 2.5-inch
small form factor disks. This size allows 24 disk drives to be installed in each storage
enclosure. Each disk plugs into the disk enclosure backplane. The backplane is the electronic
and physical backbone of the disk enclosure.
The enclosure has a redundant pair of interface control cards (IC) that provides the
interconnect logic for the disk access and a SES processor for enclosure services. The ASIC
is an 8 Gbps FC-AL switch with Fibre Channel (FC) to SAS conversion logic on each disk
port. The FC to SAS conversion function provides speed aggregation on the FC connection.
The DS8000 architecture employs dual redundant switched FC-AL access to each of the disk
enclosures. The key benefits of doing this are:
Two independent networks to access the disk enclosures.
Four access paths to each DDM.
Each device adapter port operates independently.
Double the bandwidth over traditional FC-AL loop implementations.
When a connection is made between the device adapter and a disk, the storage enclosure
uses backbone cabling at 8 Gbps, which is translated from Fibre Channel to SAS to the disk
drives. This means that a mini-loop is created between the device adapter port and the disk;
see Figure 3-15.
Expansion
Storage enclosures are added in pairs and disks are added in groups of 16. It takes three
orders of 16 DDMs to fully populate a disk enclosure pair (top and bottom).
For example, if a DS8800 had six disk enclosures total and all the enclosures were fully
populated with disks, there would be 144 DDMs in three enclosure pairs. If an additional order
of 16 DDMs were purchased, then two new disk enclosures would be added. The switched
networks do not need to be broken to add these enclosures. They are simply added to the
end of the loop; eight DDMs will go in the upper enclosure and the remaining eight DDMs will
go in the lower enclosure.
If an additional 16 DDMs are subsequently ordered, they will be added to the same (upper
and lower) enclosure pair. If a third set of 16 DDMs are ordered, they would be used to fill up
that pair of disk enclosures. These additional DDMs added have to be of the same type as the
DDMs already residing in the two enclosures.
The intention is to only have four spares per DA pair, but this number can increase depending
on DDM intermix. Four DDMs of the largest capacity and at least two DDMs of the fastest
RPM are needed. If all DDMs are the same size and RPM, four spares are sufficient.
Figure 3-16 shows the DA pair layout. One DA pair creates two switched loops. The
upper enclosures populate one loop, and the lower enclosures populate the other loop. Each
enclosure places two switches onto each loop. Each enclosure can hold up to 24 DDMs.
DDMs are purchased in groups of 16. Half of the new DDMs go into the upper enclosure and
half of the new DDMs go into the lower enclosure.
Having established the physical layout, we now change the diagram to reflect the layout of the
array sites, as shown in Figure 3-17. Array site 1, in green (the darker disks), uses the four left
DDMs in each enclosure. Array site 2, in yellow (the lighter disks), uses the four right DDMs in
each enclosure. When an array is created on each array site, half of the array is placed on
each loop. A fully populated enclosure pair would have six array sites.
The DS8800 supports 146 GB (15 K rpm), 450 GB (10 K rpm), and 600 GB (10 K rpm) SAS disk
drive sets. The DS8800 also supports 450 GB (10 K rpm) and 600 GB (10 K rpm) encrypting
SAS drive sets. The DS8800 supports Solid State Drives (SSD) with a capacity of 300 GB. For
more information about Solid State Drives, see DS8000: Introducing Solid State Drives,
REDP-4522.
For information about encrypted drives and inherent restrictions, see IBM System Storage
DS8700 Disk Encryption Implementation and Usage Guidelines, REDP-4500.
Each DS8800 Fibre Channel card offers four or eight 8 Gbps Fibre Channel ports. The cable
connector required to attach to this card is an LC type. Each 8 Gbps port independently
auto-negotiates to either 2, 4, or 8 Gbps link speed. Each of the ports on one DS8800 host
adapter can also independently be either Fibre Channel protocol (FCP) or FICON. The type
of the port can be changed through the DS Storage Manager GUI or by using DSCLI
commands. A port cannot be both FICON and FCP simultaneously, but it can be changed as
required.
The card itself is PCIe Gen 2. The card is driven by a new high-function, high-performance
ASIC. To ensure maximum data integrity, it supports metadata creation and checking. Each
Fibre Channel port supports a maximum of 509 host login IDs and 1280 paths. This allows for
the creation of very large storage area networks (SANs).
Consult these documents regularly, because they contain the most current information about
server attachment support.
In the second expansion frame, each PPS supplies power to six PDUs supplying power to
storage enclosures. Each PDU supplies power to 4 or 5 storage enclosure pairs (gigapacks).
Each storage enclosure has two power supply units (PSU). The storage enclosure PSU pairs
are connected to two separate PDUs for redundancy. There can also be an optional booster
module that will allow the PPSs to temporarily run the disk enclosures off of a battery, if the
extended power line disturbance feature has been purchased (see Chapter 4, “RAS on IBM
System Storage DS8800” on page 55 for a complete explanation of why this feature might be
necessary for your installation).
Each PPS has internal fans to supply cooling for that power supply.
The FC-AL DDMs are not protected from power loss unless the extended power line
disturbance feature has been purchased.
Changes performed by the storage administrator to a DS8800 configuration, using the GUI or
DSCLI, are passed to the storage system through the HMC.
Note: The DS8800 HMC supports IPv6, the next generation of the Internet Protocol. The
HMC continues to support the IPv4 standard and mixed IPv4 and IPv6 environments.
Ethernet switches
In addition to the Fibre Channel switches installed in each disk enclosure, the DS8000 base
frame contains two 8-port Ethernet switches. Two switches are supplied to allow the creation
of a fully redundant management network. Each processor complex has multiple connections
to each switch to allow each server to access each switch. This switch cannot be used for any
equipment not associated with the DS8800. The switches get power from the internal power
bus and thus do not require separate power outlets. The switches are shown in Figure 3-20.
Note: An SSPC is required to remotely access the DS8800 Storage Manager GUI.
Without installing additional software, clients have the option to upgrade their licenses of:
TPC for Disk (to add performance monitoring capabilities)
TPC for Fabric (to add performance monitoring capabilities)
TPC for Data (to add storage management for open system hosts)
TPC for Replication (to manage Copy Services sessions and support open systems and
z/OS-attached volumes)
TPC Standard Edition (TPC SE) (to add all of these features)
Important: Any DS8800 shipped requires a minimum of one SSPC per data center to
enable the launch of the DS8000 Storage Manager other than from the HMC.
For DS8800 storage subsystems shipped with Full Disk Encryption (FDE) drives, two TKLM
key servers are required. An isolated key server (IKS) with dedicated hardware and
non-encrypted storage resources is required.
The isolated TKLM key server can be ordered from IBM. It is the same hardware as used for
the SSPC. The following software is used on the isolated key server:
Linux operating system
Tivoli Key Lifecycle Manager V, which includes DB2® V9.1 FB4
See 4.8, “RAS and Full Disk Encryption” on page 82 for more information.
For more information, see IBM System Storage DS8700 Disk Encryption Implementation and
Usage Guidelines, REDP-4500.
Storage unit
The term storage unit describes a single DS8800 (base frame plus optional expansion
frames). If your organization has one DS8800, then you have a single storage complex that
contains a single storage unit.
Note: A DS8800 with business class cabling can only add a single expansion frame.
Expansion frame
Expansion frames can be added one at a time to increase the overall capacity of the storage
unit. All expansion frames contain the power and cooling components needed to run the
frame. The first expansion frame contains storage disks and I/O enclosures for the Fibre
Channel loops. The second expansion frame contains storage disks only. Because the Fibre
Channel loops are switched, the addition of an expansion frame is a concurrent operation for
the DS8800.
Each Gigapack has up to 24 disk drive modules (DDM).
The first expansion frame can have a maximum of 336 disk drive modules in 14
Gigapacks.
The second expansion frame can have a maximum of 480 disk drive modules in 20
Gigapacks.
Storage complex
The term storage complex describes a group of DS8000s (that is, DS8300s, DS8700s, or
DS8800s) managed by a single management console. A storage complex can, and usually
does, consist of simply a single DS8800 storage unit (primary frame plus optional expansion
frames).
The DS8800 contains two CECs as a redundant pair so that if either fails, the remaining CEC
can continue to run the storage unit. Each CEC can have up to 192 GB of memory and 1 or
2 POWER6+ processor cards. In other models of the DS8000 family, a CEC was also referred
to as a processor complex or a storage server. The CECs are identified as CEC0 and CEC1.
Some chapters and illustrations in this publication refer to Server 0 and Server 1; these are
the same as CEC0 and CEC1 for the DS8800.
Storage HMC
The Storage Hardware Management Console (HMC) is the master console for the DS8800
unit. With connections to the CECs, the client network, the SSPC, and other management
systems, the HMC becomes the focal point for most operations on the DS8800. All storage
configuration and service actions are run through the HMC. Although many other IBM
products also use an HMC, the Storage HMC is unique to the DS8000 family. Throughout this
chapter, it will be referred to as the HMC, but keep in mind we are referring to the Storage
HMC that is cabled to the internal network of the DS8800.
Important: The DS8800 does not divide the CECs into logical partitions. There is only one
SFI, which owns 100 % of the physical resources. Information regarding multi-SFI or
LPARs does not apply to the IBM System Storage DS8800.
The AIX operating system uses PHYP services to manage the translation control entry (TCE)
tables. The operating system communicates the desired I/O bus address to logical mapping,
and the Hypervisor translates that into the I/O bus address to physical mapping within the
specific TCE table. The Hypervisor needs a dedicated memory region for the TCE tables to
translate the I/O address to the partition memory address, and then the Hypervisor can
perform direct memory access (DMA) transfers to the PCI adapters.
The POWER6 processor implements the 64-bit IBM Power Architecture® technology and
capitalizes on all the enhancements brought by the POWER5™ processor. Each POWER6
chip incorporates two dual-threaded Simultaneous Multithreading processor cores, a private
4 MB level 2 cache (L2) for each processor, a 36 MB L3 cache controller shared by the two
processors, integrated memory controller, and data interconnect switch. It is designed to
provide an extensive set of RAS features that include improved fault isolation, recovery from
errors without stopping the processor complex, avoidance of recurring failures, and predictive
failure analysis.
The fault isolation registers (FIRs) are important because they enable an error to be uniquely
identified, thus enabling the appropriate action to be taken. Appropriate actions might include
such things as a bus retry, error checking and correction (ECC), or system firmware recovery
routines. Recovery routines could include dynamic deallocation of potentially failing components.
Errors are logged into the system nonvolatile random access memory (NVRAM) and the SP
event history log, along with a notification of the event to AIX for capture in the operating
system error log. Diagnostic Error Log Analysis (diagela) routines analyze the error log
entries and invoke a suitable action, such as issuing a warning message. If the error can be
recovered, or after suitable maintenance, the service processor resets the FIRs so that they
can accurately record any future errors.
Self-healing
For a system to be self-healing, it must be able to recover from a failing component by first
detecting and isolating the failed component. It should then be able to take it offline, fix or
isolate it, and then reintroduce the fixed or replaced component into service without any
application disruption. Examples include:
Bit steering to redundant memory in the event of a failed memory module to keep the
server operational
Bit scattering, thus allowing for error correction and continued operation in the presence of
a complete chip failure (Chipkill recovery)
Single-bit error correction using Error Checking and Correcting (ECC) without reaching
error thresholds for main, L2, and L3 cache memory
L3 cache line deletes extended from 2 to 10 for additional self-healing
ECC extended to inter-chip connections on fabric and processor bus
Memory scrubbing to help prevent soft-error memory faults
Dynamic processor deallocation
The memory DIMMs also utilize memory scrubbing and thresholding to determine when
memory modules within each bank of memory should be used to replace ones that have
exceeded their threshold of error count (dynamic bit-steering). Memory scrubbing is the
process of reading the contents of the memory during idle time and checking and correcting
any single-bit errors that have accumulated by passing the data through the ECC logic. This
function is a hardware function on the memory controller chip and does not influence normal
system memory performance.
Fault masking
If corrections and retries succeed and do not exceed threshold limits, the system remains
operational with full resources and no client or IBM service representative intervention is
required.
Mutual surveillance
The SP can monitor the operation of the firmware during the boot process, and it can monitor
the operating system for loss of control. This enables the service processor to take
appropriate action when it detects that the firmware or the operating system has lost control.
Mutual surveillance also enables the operating system to monitor for service processor
activity and can request a service processor repair action if necessary.
With AIX V6.1, the kernel has been enhanced with the ability to recover from unexpected
errors. Kernel components and extensions can provide failure recovery routines to gather
serviceability data, diagnose, repair, and recover from errors. In previous AIX versions, kernel
errors always resulted in an unexpected system halt.
See IBM AIX Version 6.1 Differences Guide, SG24-7559, for more information about how
AIX V6.1 adds to the RAS features of AIX 5L™ V5.3.
You can also reference the IBM website for a more thorough review of the features of the IBM
AIX operating system at:
http://www.ibm.com/systems/power/software/aix/index.html
For a rebuild on previous DS8000 models, the IBM service representative would have to load
multiple CDs/DVDs directly onto the CEC being serviced. For the DS8800, there are no
optical drives on the CECs; only the HMC has a DVD drive. For a CEC dual hard drive rebuild,
the service representative acquires the needed code bundles on the HMC, which then runs
as a Network Installation Management on Linux (NIMoL) server. The HMC provides the
operating system and microcode to the CEC over the DS8800 internal network, which is
much faster than reading and verifying from an optical disc.
All of the tasks and status updates for a CEC dual hard drive rebuild are done from the HMC,
which is also aware of the overall service action that necessitated the rebuild. If the rebuild
fails, the HMC manages the errors, including error data, and allows the service representative
to address the problem and restart the rebuild. When the rebuild completes, the server is
automatically brought up for the first time (IML). Once the IML is successful, the service
representative can resume operations on the CEC.
Overall, the rebuild process on a DS8800 is more robust and straightforward, thereby
reducing the time needed to perform this critical service action.
For the DS8700 and DS8800, the I/O Enclosures are wired point-to-point with each CEC
using a PCI Express architecture. This means that only the CEC-to-CEC (XC)
communications are now carried on the RIO-G and the RIO loop configuration is greatly
simplified. Figure 4-1 shows the new fabric design of the DS8800.
Temperature monitoring is also performed. If the ambient temperature goes above a preset
operating range, then the rotation speed of the cooling fans can be increased. Temperature
monitoring also warns the internal microcode of potential environment-related problems. An
orderly system shutdown will occur when the operating temperature exceeds a critical level.
Voltage monitoring provides warning and an orderly system shutdown when the voltage is out
of operational specification.
Following a hardware error that has been flagged by the service processor, the subsequent
reboot of the server invokes extended diagnostics. If a processor or cache has been marked
for deconfiguration by persistent processor deallocation, the boot process will attempt to
proceed to completion with the faulty device automatically deconfigured. Failing I/O adapters
will be deconfigured or bypassed during the boot process.
Creating logical volumes on the DS8800 works through the following constructs:
Storage DDMs are installed into predefined array sites.
These array sites are used to form arrays, structured as RAID 5, RAID 6, or RAID 10
(restrictions apply for Solid State Drives).
These RAID arrays then become members of a rank.
Each rank then becomes a member of an Extent Pool. Each Extent Pool has an affinity to
either server 0 or server 1. Each Extent Pool is either open systems fixed block (FB) or
System z count key data (CKD).
Within each Extent Pool, we create logical volumes. For open systems, these are called
LUNs. For System z, these are called volumes. LUN stands for logical unit number, which
is used for SCSI addressing. Each logical volume belongs to a logical subsystem (LSS).
For open systems, the LSS membership is really only significant for Copy Services. But for
System z, the LSS is the logical control unit (LCU), which equates to a 3990 (a System z disk
controller which the DS8800 emulates). It is important to remember that LSSs that have an
even identifying number have an affinity with CEC 0, and LSSs that have an odd identifying
number have an affinity with CEC 1. When a host operating system issues a write to a logical
volume, the DS8800 host adapter directs that write to the CEC that owns the LSS of which
that logical volume is a member.
Note: For the previous generations of DS8000, the maximum available NVS was 4 GB per
server. For the DS8700 and DS8800, that maximum has been increased to 6 GB per
server.
When a write is issued to a volume and the CECs are both operational, this write data gets
directed to the CEC that owns this volume. The data flow begins with the write data being
placed into the cache memory of the owning CEC. The write data is also placed into the NVS
of the other CEC. The NVS copy of the write data is accessed only if a write failure should
occur and the cache memory is empty or possibly invalid; otherwise, it will be discarded after
the destaging is complete, as shown in Figure 4-2.
Figure 4-2 Write data when CECs are dual operational (cache memory for even-numbered LSSs resides in CEC 0 and cache memory for odd-numbered LSSs resides in CEC 1)
Figure 4-2 shows how the cache memory of CEC 0 is used for all logical volumes that are
members of the even LSSs. Likewise, the cache memory of CEC 1 supports all logical
volumes that are members of odd LSSs. For every write that gets placed into cache, a second
copy gets placed into the NVS memory located in the alternate CEC. Thus, the normal flow of
data for a write when both CECs are operational is as follows:
1. Data is written to cache memory in the owning CEC.
2. Data is written to NVS memory of the alternate CEC.
3. The write operation is reported to the attached host as completed.
4. The write data is destaged from the cache memory to a disk array.
5. The write data is discarded from the NVS memory of the alternate CEC.
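The following Python sketch is a conceptual model of this write flow; it is not DS8800 microcode, and the class, function, and variable names are illustrative assumptions. It walks through steps 1 to 5 for a volume in an even LSS, which has an affinity with CEC 0.

# Conceptual model of the dual-CEC write flow described above.
class CEC:
    def __init__(self, name):
        self.name = name
        self.cache = {}   # volatile cache on the owning CEC
        self.nvs = {}     # non-volatile copy held for the *other* CEC

def owning_cec(lss_id, cec0, cec1):
    # Even LSSs have an affinity with CEC 0, odd LSSs with CEC 1.
    return cec0 if lss_id % 2 == 0 else cec1

def host_write(lss_id, track, data, cec0, cec1):
    owner = owning_cec(lss_id, cec0, cec1)
    alternate = cec1 if owner is cec0 else cec0
    owner.cache[(lss_id, track)] = data     # 1. write into the owning cache
    alternate.nvs[(lss_id, track)] = data   # 2. copy into the alternate NVS
    return "complete"                       # 3. completion reported to the host

def destage(lss_id, track, disk_array, cec0, cec1):
    owner = owning_cec(lss_id, cec0, cec1)
    alternate = cec1 if owner is cec0 else cec0
    disk_array[(lss_id, track)] = owner.cache[(lss_id, track)]  # 4. destage to disk
    alternate.nvs.pop((lss_id, track), None)                    # 5. discard NVS copy

cec0, cec1, disk_array = CEC("CEC0"), CEC("CEC1"), {}
print(host_write(0x10, 5, b"payload", cec0, cec1))
destage(0x10, 5, disk_array, cec0, cec1)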
4.3.2 Failover
In the example shown in Figure 4-3, CEC 0 has failed. CEC 1 needs to take over all of the
CEC 0 functions. Because the RAID arrays are on Fibre Channel Loops that reach both
CECs, they can still be accessed via the Device Adapters owned by CEC 1. See 4.6.1, “RAID
configurations” on page 72 for more information about the Fibre Channel Loops.
Figure 4-3 CEC 0 failover to CEC 1
At the moment of failure, CEC 1 has a backup copy of the CEC 0 write data in its own NVS.
From a data integrity perspective, the concern is for the backup copy of the CEC 1 write data,
which was in the NVS of CEC 0 when it failed. Because the DS8800 now has only one copy
of that data (active in the cache memory of CEC 1), it will perform the following steps:
1. CEC 1 destages the contents of its NVS (the CEC 0 write data) to the disk subsystem.
However, before the actual destage and at the beginning of the failover:
a. The working CEC starts by preserving the data in cache that was backed by the failed
CEC NVS. If a reboot of the single working CEC occurs before the cache data has
been destaged, the write data remains available for subsequent destaging.
b. In addition, the existing data in cache (for which there is still only a single volatile copy)
is added to the NVS so that it remains available if the attempt to destage fails or a
server reboot occurs. This functionality is limited so that it cannot consume more than
85% of NVS space.
2. The NVS and cache of CEC 1 are divided in two, half for the odd LSSs and half for the
even LSSs.
3. CEC 1 now begins processing the I/O for all the LSSs.
The DS8800 can continue to operate in this state indefinitely. There has not been any loss of
functionality, but there has been a loss of redundancy. Any critical failure in the working CEC
would render the DS8800 unable to serve I/O for the arrays, so IBM support should begin
work right away to determine the scope of the failure and to build an action plan to restore the
failed CEC to an operational state.
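A similarly simplified Python sketch of the failover steps follows. Plain dictionaries stand in for the surviving CEC's NVS and cache and for the disk arrays; the structure and names are assumptions made purely for illustration.

# Conceptual sketch of the failover steps described above (illustrative only).
def failover(survivor_nvs, survivor_cache, disk):
    """Model the surviving CEC taking over after its partner fails."""
    # Before destaging, protect the survivor's cache data whose NVS backup
    # was lost with the failed CEC by adding a copy to the survivor's NVS.
    protected = {("protected",) + key: data for key, data in survivor_cache.items()}
    survivor_nvs.update(protected)
    # 1. Destage the failed CEC's write data (the original NVS contents).
    for key in [k for k in survivor_nvs if k not in protected]:
        disk[key] = survivor_nvs.pop(key)
    # 2./3. The survivor splits its cache and NVS between even and odd LSSs
    #       and begins processing I/O for all LSSs (not modeled further here).
    return "running on one CEC: full function, no redundancy"

nvs = {("LSS 0x10", 5): b"partner write data"}     # backup of the failed CEC's writes
cache = {("LSS 0x11", 9): b"survivor write data"}  # only volatile copy remaining
disk = {}
print(failover(nvs, cache, disk), disk)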
4.3.3 Failback
The failback process always begins automatically as soon as the DS8800 microcode
determines that the failed CEC has been resumed to an operational state. If the failure was
relatively minor and recoverable by the operating system or DS8800 microcode, then the
resume action will be initiated by the software. If there was a service action with hardware
components replaced, then the IBM service representative or remote support will resume the
failed CEC.
For this example where CEC 0 has failed, we should now assume that CEC 0 has been
repaired and has been resumed. The failback begins with CEC 1 starting to use the NVS in
CEC 0 again, and the ownership of the even LSSs being transferred back to CEC 0. Normal
I/O processing with both CECs operational then resumes. Just like the failover process, the
failback process is invisible to the attached hosts.
In general, recovery actions (failover/failback) on the DS8800 do not impact I/O operation
latency by more than 15 seconds. With certain limitations on configurations and advanced
functions, this impact to latency is often limited to just 8 seconds or less. If you have real-time
response requirements in this area, contact IBM to determine the latest information about
how to manage your storage to meet your requirements.
Important: Unless the power line disturbance feature (PLD) has been purchased, the
BBUs are not used to keep the storage disks in operation. They keep the CECs in
operation long enough to dump NVS contents to internal hard disks.
If both power supplies in the primary frame should stop receiving input power, the CECs
would be informed that they are running on batteries and immediately begin a shutdown
procedure. It is during this shutdown that the entire contents of NVS memory are written to
the CEC hard drives so that the data will be available for destaging after the CECs are
operational again. If power is lost to a single primary power supply (PPS), the ability of the
other power supply to keep all batteries charged is not impacted, so the CECs would remain
online.
The following sections show the steps followed in the event of complete power interruption.
Power loss
When an on-battery condition shutdown begins, the following events occur:
1. All host adapter I/O is blocked.
2. Each CEC begins copying its NVS data to internal disk (not the storage DDMs). For each
CEC, two copies are made of the NVS data.
3. When the copy process is complete, each CEC shuts down.
4. When shutdown in each CEC is complete (or a timer expires), the DS8800 is powered
down.
Power restored
When power is restored to the DS8800, the following events occur:
1. The CECs power on and perform power on self tests and PHYP functions.
2. Each CEC then begins boot up (IML).
3. At a certain stage in the boot process, the CEC detects NVS data on its internal SCSI
disks and begins to destage it to the storage DDMs.
4. When the battery units reach a certain level of charge, the CECs come online and begin to
process host I/O.
Battery charging
In many cases, sufficient charging will occur during the power on self test, operating system
boot, and microcode boot. However, if a complete discharge of the batteries has occurred,
which can happen if multiple power outages occur in a short period of time, then recharging
might take up to two hours.
Note: The CECs will not come online (process host I/O) until the batteries are sufficiently
charged to handle at least one outage.
See 3.2.2, “Peripheral Component Interconnect Express (PCI Express)” on page 36, for more
information about this topic.
You can also discover more about PCI Express at the following site:
http://www.redbooks.ibm.com/Redbooks.nsf/RedbookAbstracts/tips0456.html?Open
The DS8800 I/O enclosures use hot-swap adapters with PCI Express connectors. These
adapters are in blind-swap hot plug cassettes, which allow them to be replaced concurrently.
Each slot can be independently powered off for concurrent replacement of a failed adapter,
installation of a new adapter, or removal of an old one.
In addition, each I/O enclosure has N+1 power and cooling in the form of two power supplies
with integrated fans. The power supplies can be concurrently replaced and a single power
supply is capable of supplying DC power to the whole I/O enclosure.
(Figure: each host HBA connects through host ports on host adapters in two separate I/O enclosures; the I/O enclosures attach through PCI Express x4 connections to CEC 0, owning all even LSS logical volumes, and to CEC 1, owning all odd LSS logical volumes)
Important: Best practice is that hosts accessing the DS8800 have at least two
connections to separate host ports in separate host adapters on separate I/O enclosures.
SAN/FICON switches
Because a large number of hosts can be connected to the DS8800, each using multiple
paths, the number of host adapter ports that are available in the DS8800 might not be
sufficient to accommodate all the connections. The solution to this problem is the use of SAN
switches or directors to switch logical connections from multiple hosts. In a System z
environment, you will need to select a SAN switch or director that also supports FICON.
A logic or power failure in a switch or director can interrupt communication between hosts and
the DS8800. Provide more than one switch or director to ensure continued availability. Ports
from two different host adapters in two different I/O enclosures should be configured to go
through each of two directors. The complete failure of either director leaves half the paths still
operating.
SDD provides availability through automatic I/O path failover. If a failure occurs in the data
path between the host and the DS8800, SDD automatically switches the I/O to another path.
SDD will also automatically set the failed path back online after a repair is made. SDD also
improves performance by sharing I/O operations to a common disk over multiple active paths
to distribute and balance the I/O workload.
SDD is not available for every supported operating system. See IBM System Storage DS8000
Host Systems Attachment Guide, SC26-7917, and also the interoperability website for
guidance about which multipathing software might be required. Refer to the IBM System
Storage Interoperability Center (SSIC), found at:
http://www.ibm.com/systems/support/storage/config/ssic/index.jsp
For more information about the SDD, see the Redbooks publication IBM System Storage
DS8000: Host attachment and Interoperability, SG24-8887.
System z
In the System z environment, normal practice is to provide multiple paths from each host to a
disk subsystem. Typically, four paths are installed. The channels in each host that can access
each logical control unit (LCU) in the DS8800 are defined in the hardware configuration
definition (HCD) or I/O configuration data set (IOCDS) for that host. Dynamic Path Selection
(DPS) allows the channel subsystem to select any available (non-busy) path to initiate an
operation to the disk subsystem. Dynamic Path Reconnect (DPR) allows the DS8800 to
select any available path to a host to reconnect and resume a disconnected operation, for
example, to transfer data after disconnection due to a cache miss.
These functions are part of the System z architecture and are managed by the channel
subsystem on the host and the DS8800.
A physical FICON path is established when the DS8800 port sees light on the fiber (for
example, a cable is plugged in to a DS8800 host adapter, a processor or the DS8800 is
powered on, or a path is configured online by z/OS). At this time, logical paths are established
through the port between the host and some or all of the LCUs in the DS8800, controlled by
the HCD definition for that host. This happens for each physical path between a System z
CPU and the DS8800. There may be multiple system images in a CPU. Logical paths are
established for each system image. The DS8800 then knows which paths can be used to
communicate between each LCU and each host.
CUIR is available for the DS8800 when operated in the z/OS and z/VM® environments. CUIR
provides automatic channel path vary on and vary off actions to minimize manual operator
intervention during selected DS8800 service actions.
CUIR also allows the DS8800 to request that all attached system images set all paths
required for a particular service action to the offline state. System images with the appropriate
level of software support respond to such requests by varying off the affected paths, and
either notifying the DS8800 subsystem that the paths are offline, or that it cannot take the
paths offline. CUIR reduces manual operator intervention and the possibility of human error
during maintenance actions, while at the same time reducing the time required for the
maintenance. This is particularly useful in environments where there are many z/OS or z/VM
systems attached to a DS8800.
Important: The HMC described here is the Storage HMC, not to be confused with the
SSPC console, which is also required with any new DS8800. SSPC is described in 3.8,
“System Storage Productivity Center” on page 53.
If the HMC is not operational, then it is not possible to perform maintenance, power the
DS8800 up or down, perform modifications to the logical configuration, or perform Copy
Services tasks, such as the establishment of FlashCopies using the DSCLI or DS GUI. Best
practice is to order two management consoles to act as a redundant pair. Alternatively, if
Tivoli Storage Productivity Center for Replication (TPC-R) is used, Copy Services tasks can
be managed by that tool if the HMC is unavailable.
Note: The preceding alternative is only available if you have purchased and configured the
TPC-R management solution.
For a detailed discussion about microcode updates, refer to Chapter 15, “Licensed machine
code” on page 343.
IBM Service personnel located outside of the client facility log in to the HMC to provide
remote service and support. Remote support and the Call Home option are described in
detail in Chapter 17, “Remote support” on page 363.
For information regarding the effective capacity of these configurations, refer to Table 8-9 on
page 173.
Each disk has two separate connections to the backplane. This allows it to be simultaneously
attached to both switches. If either disk enclosure controller card is removed from the
enclosure, the switch that is included in that card is also removed. However, the switch in the
remaining controller card retains the ability to communicate with all the disks and both device
adapters (DAs) in a pair. Equally, each DA has a path to each switch, so it also can tolerate
the loss of a single path. If both paths from one DA fail, then it cannot access the switches;
however, the partner DA retains connection.
See 3.4, “Disk subsystem” on page 44 for more information about the disk subsystem of the
DS8800.
Performance of the RAID 5 array returns to normal when the data reconstruction onto the
spare device completes. The time taken for sparing can vary, depending on the size of the
failed DDM and the workload on the array, the switched network, and the DA. The use of
arrays across loops (AAL) both speeds up rebuild time and decreases the impact of a rebuild.
RAID 6 allows for additional fault tolerance by using a second independent distributed parity
scheme (dual parity). Data is striped on a block level across a set of drives, similar to RAID 5
configurations, and a second set of parity is calculated and written across all the drives, as
shown in Figure 4-7.
(Figure 4-7: data blocks 0 through 19 striped across five drives, with two distributed parity sets, P and Q, written across all the drives)
During the rebuild of the data on the new drive, the device adapter can still handle I/O
requests of the connected hosts to the affected array. Some performance degradation could
occur during the reconstruction because some device adapters and switched network
resources are used to do the rebuild. Due to the switch-based architecture of the DS8800,
this effect will be minimal. Additionally, any read requests for data on the failed drive require
data to be read from the other drives in the array, and then the DA performs an operation to
reconstruct the data. Any subsequent failure during the reconstruction within the same array
(a second drive failure, a second coincident medium error, or a drive failure plus a medium
error) can be recovered without loss of data.
Performance of the RAID 6 array returns to normal when the data reconstruction on the spare
device has completed. The rebuild time will vary, depending on the size of the failed DDM and
the workload on the array and the DA. The completion time is comparable to a RAID 5 rebuild,
but slower than rebuilding a RAID 10 array in the case of a single drive failure.
While this data reconstruction is going on, the DA can still service read and write requests to
the array from the hosts. There might be some degradation in performance while the sparing
operation is in progress, because some DA and switched network resources are used to do
the reconstruction. Due to the switch-based architecture of the DS8800, this effect will be
minimal. Read requests for data on the failed drive should not be affected because they can
all be directed to the good RAID 1 array.
Write operations will not be affected. Performance of the RAID 10 array returns to normal
when the data reconstruction onto the spare device completes. The time taken for sparing
can vary, depending on the size of the failed DDM and the workload on the array and the DA.
Compared to RAID 5, RAID 10 sparing completes a little faster. This is because
rebuilding a RAID 5 6+P configuration requires six reads plus one parity operation for each
write, while a RAID 10 3+3 configuration requires only one read and one write (essentially a
direct copy).
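The following Python sketch turns this comparison into numbers. The per-stripe costs (six reads plus a parity operation for RAID 5 6+P, a single read for RAID 10 3+3) come from the text above; the stripe count and function name are arbitrary assumptions for illustration.

# Back-of-the-envelope estimate of rebuild I/O operations per array type.
def rebuild_ops(raid_type, stripes):
    if raid_type == "RAID 5 (6+P)":
        # Each stripe unit of the failed drive needs reads from the six
        # surviving drives plus a parity (XOR) computation, then one write.
        return {"reads": 6 * stripes, "parity_ops": stripes, "writes": stripes}
    if raid_type == "RAID 10 (3+3)":
        # Rebuild is essentially a direct copy from the mirrored drive.
        return {"reads": stripes, "parity_ops": 0, "writes": stripes}
    raise ValueError(raid_type)

for rt in ("RAID 5 (6+P)", "RAID 10 (3+3)"):
    print(rt, rebuild_ops(rt, stripes=1_000_000))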
Floating spares
The DS8800 implements a smart floating technique for spare DDMs. A floating spare is
defined as follows: When a DDM fails and the data it contained is rebuilt onto a spare, then
when the disk is replaced, the replacement disk becomes the spare. The data is not migrated
to another DDM, such as the DDM in the original position the failed DDM occupied.
The DS8800 microcode takes this idea one step further. It might choose to allow the hot
spare to remain where it has been moved, but it can instead choose to migrate the spare to a
more optimum position. This will be done to better balance the spares across the DA pairs,
the loops, and the enclosures. It might be preferable that a DDM that is currently in use as an
array member is converted to a spare. In this case, the data on that DDM will be migrated in
the background onto an existing spare. This process does not fail the disk that is being
migrated, though it does reduce the number of available spares in the DS8800 until the
migration process is complete.
The DS8800 uses this smart floating technique so that the larger or higher RPM DDMs are
allocated as spares, which guarantees that a spare can provide at least the same capacity
and performance as the replaced drive. If we were to rebuild the contents of a 450 GB DDM
onto a 600 GB DDM, then approximately one-fourth of the 600 GB DDM will be wasted,
because that space is not needed. When the failed 450 GB DDM is replaced with a new
450 GB DDM, the DS8800 microcode will most likely migrate the data back onto the recently
replaced 450 GB DDM. When this process completes, the 450 GB DDM will rejoin the array
and the 600 GB DDM will become the spare again.
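As a quick check of the capacity figure above, the following Python lines compute the fraction of a 600 GB spare that goes unused while it holds the contents of a 450 GB DDM.

failed_gb, spare_gb = 450, 600
unused_fraction = (spare_gb - failed_gb) / spare_gb
print(f"{unused_fraction:.0%} of the {spare_gb} GB spare is unused")  # prints 25%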
Another example would be a failed 146 GB 15 K RPM DDM that is spared onto a 600 GB 10 K
RPM DDM. The data has now moved to a slower DDM and is wasting a lot of space. This means
the array will have a mix of RPMs, which is not desirable. When the failed disk is replaced, the
replacement will be the same type as the failed 15 K RPM disk. Again, a smart migration of
the data will be performed after suitable spares have become available.
Overconfiguration of spares
The DDM sparing policies support the overconfiguration of spares. This possibility might be of
interest to some installations, because it allows the repair of some DDM failures to be
deferred until a later repair action is required.
4.7.1 Components
Here we discuss the power subsystem components.
Each primary power supply produces 208 V, which is supplied to each I/O enclosure and each
processor complex. Each supply also places this voltage onto Power Distribution Units (PDUs),
which in turn supply the disk enclosures.
With the introduction of Gigapack enclosures on DS8800, some changes have been made to
the PPS to support Gigapack power requirements. The 5 V/12 V DDM power module in the
PPS is replaced with a 208 V module. The current 208 V modules are required to support the
Gigapacks. The 208 V input for the Gigapacks is from a Power Distribution Unit (PDU). The
Gigapack enclosure takes 208 V input and provides 5 V/12 V for the DDMs. The PDUs also
distribute power to the CECs and I/O enclosures.
If either PPS fails, the other can continue to supply all required voltage to all power buses in
that frame. The PPSs can be replaced concurrently.
Important: If you install the DS8800 so that both primary power supplies are attached to
the same circuit breaker or the same switchboard, then the DS8800 will not be
well-protected from external power failures. This is a common cause of unplanned
outages.
With the addition of this hardware, the DS8800 will be able to run for up to 50 seconds on
battery power before the CECs begin to copy NVS to internal disk and then shut down. This
would allow for a 50-second interruption to line power with no outage to the DS8800.
Apart from these two contingencies (which are highly unlikely), the EPO switch should never
be used. The reason for this is that when the EPO switch is used, the battery protection for
the NVS storage area is bypassed. Normally, if line power is lost, the DS8800 can use its
internal batteries to destage the write data from NVS memory to persistent storage so that
the data is preserved until power is restored. However, the EPO switch does not allow this
destage process to happen and all NVS cache data is immediately lost. This will most likely
result in data loss.
If the DS8800 needs to be powered off for building maintenance or to relocate it, always use
the HMC to shut it down properly.
The DS8800 provides two important reliability, availability, and serviceability enhancements to
Full Disk Encryption storage: deadlock recovery and support for dual-platform key servers.
For current considerations and best practices regarding DS8000 encryption, see IBM
Encrypted Storage Overview and Customer Requirements, found at:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101479
This link also includes the IBM Notice for Storage Encryption, which must be read by all
clients acquiring an IBM storage device that includes encryption technology.
Thus it becomes possible, due to a planning error or even the use of automatically-managed
storage provisioning, for the System z TKLM server storage to end up residing on the DS8000
that is a client for encryption keys. After a power interruption event, the DS8000 becomes
inoperable because it must retrieve the Data Key (DK) from the TKLM database on the
System z server. The TKLM database becomes inoperable because the System z server has
its OS or application data on the DS8000. This represents a deadlock situation. Figure 4-9
depicts this scenario.
The DS8800 mitigates this problem by implementing a Recovery Key (RK). The Recovery Key
allows the DS8800 to decrypt the Group Key (GK) that it needs to come up to full operation. A
new client role is defined in this process: the security administrator.
See IBM System Storage DS8700 Disk Encryption Implementation and Usage Guidelines,
REDP-4500, for a more complete review of the deadlock recovery process and further
information about working with a Recovery Key.
Note: Use the storage HMC to enter a Recovery Key. The Security Administrator and the
Storage Administrator might need to be physically present at the DS8800 to perform the
recovery.
To meet this request, the DS8800 allows propagation of keys across two different key server
platforms. The current IKS is still supported to address the standing requirement for an
isolated key server. Adding a z/OS Tivoli Key Lifecycle Manager (TKLM) Secure Key Mode
server, which is common in Tape Storage environments, is concurrently supported by the
DS8800.
After the key servers are set up, they will each have two public keys. They are each capable of
generating and wrapping two symmetric keys for the DS8800. The DS8800 stores both
wrapped symmetric keys in the key repository. Now either key server is capable of
unwrapping these keys upon a DS8800 retrieval exchange.
See IBM System Storage DS8700 Disk Encryption Implementation and Usage Guidelines,
REDP-4500, for more information regarding the dual-platform TKLM solution. Visit the
following site for further information regarding planning and deployment of TKLM servers:
http://www.ibm.com/developerworks/wikis/display/tivolidoccentral/Tivoli+Key+Lifecy
cle+Manager
Note: Connections to the client’s network are made at the Storage HMC. No client network
connection should ever be made to the DS8800 internal Ethernet switches.
Remote support is a critical topic for clients investing in the DS8800. Refer to Chapter 17,
“Remote support” on page 363 for a more thorough discussion of remote support operations.
Refer to Chapter 9, “DS8800 HMC planning and setup” on page 177 for more information
about planning the connections needed for HMC installation.
A storage unit rack with this optional seismic kit includes cross-braces on the front and rear of
the rack which prevent the rack from twisting. Hardware at the bottom of the rack secures it to
the floor. Depending on the flooring in your environment, specifically non-raised floors,
installation of required floor mounting hardware might be disruptive.
This kit must be special ordered for the DS8800; contact your sales representative for further
information.
For this chapter, virtualization is defined as the abstraction process from the physical
disk drives to a logical volume that is presented to hosts and servers in such a way that they
see it as though it were a physical disk.
The DS8800 uses a switched point-to-point topology utilizing Serial Attached SCSI (SAS) disk
drives that are mounted in disk enclosures. The disk drives can be accessed by a pair of
device adapters. Each device adapter has four paths to the disk drives. One device interface
from each device adapter is connected to a set of FC-AL devices so that either device adapter
has access to any disk drive through two independent switched fabrics (the device adapters
and switches are redundant).
(Figure: Server 0 and Server 1 connect through PCIe to the I/O enclosures; each host adapter and device adapter pair attaches through redundant switches to a storage enclosure pair of 24-drive enclosures)
Because of the switching design, each drive is in close reach of the device adapter; some
drives simply require a few more hops through the Fibre Channel switch. So, it is not really a
loop, but a switched FC-AL loop with the FC-AL addressing schema, that is, Arbitrated Loop
Physical Addressing (AL-PA).
(Figure 5-2: an array site of eight DDMs, four taken from each of the two switched loops)
The DDMs in the array site are of the same DDM type, which means the same capacity and
the same speed (rpm).
As you can see from Figure 5-2, array sites span loops. Four DDMs are taken from loop 1 and
another four DDMs from loop 2. Array sites are the building blocks used to define arrays.
5.2.2 Arrays
An array is created from one array site. Forming an array means defining it as a specific
RAID type. The supported RAID types are RAID 5, RAID 6, and RAID 10 (see “RAID 5
implementation in DS8800” on page 74, “RAID 6 implementation in the DS8800” on page 76,
and “RAID 10 implementation in DS8800” on page 77). For each array site, you can select a
RAID type (remember that Solid State Drives can only be configured as RAID 5). The process
of selecting the RAID type for an array is also called defining an array.
Note: In a DS8000 series implementation, one array is defined using one array site.
Figure 5-3 shows the creation of a RAID 5 array with one spare, also called a 6+P+S array (it
has a capacity of 6 DDMs for data, capacity of one DDM for parity, and a spare drive).
According to the RAID 5 rules, parity is distributed across all seven drives in this example.
On the right side in Figure 5-3, the terms D1, D2, D3, and so on stand for the set of data
contained on one disk within a stripe on the array. If, for example, 1 GB of data is written, it is
distributed across all the disks of the array.
(Figure 5-3: creation of a RAID 5 array from one array site; data stripes D1, D2, D3, and so on, together with the distributed parity P, are spread across the drives, with one drive kept as a spare)
So, an array is formed using one array site, and although the array could be accessed by
each adapter of the device adapter pair, it is managed by one device adapter. You define
which server is managing this array later on in the configuration path.
5.2.3 Ranks
In the DS8000 virtualization hierarchy, there is another logical construct called a rank. When
a new rank is defined, its name is chosen by the DS Storage Manager, for example, R1, R2,
R3, and so on. You then have to add an array to the rank.
Note: In the DS8000 implementation, a rank is built using just one array.
The available space on each rank will be divided into extents. The extents are the building
blocks of the logical volumes. An extent is striped across all disks of an array as shown in
Figure 5-4 and indicated by the small squares in Figure 5-5 on page 91.
A FB rank has an extent size of 1 GB (more precisely, GiB, gibibyte, or binary gigabyte, being
equal to 2^30 bytes).
IBM System z users or administrators typically do not deal with gigabytes or gibibytes, and
instead they think of storage in terms of the original 3390 volume sizes. A 3390 Model 3 is
three times the size of a Model 1 and a Model 1 has 1113 cylinders, which is about 0.94 GB.
The extent size of a CKD rank is one 3390 Model 1 or 1113 cylinders.
Figure 5-4 shows an example of an array that is formatted for FB data with 1 GB extents (the
squares in the rank just indicate that the extent is composed of several blocks from different
DDMs).
(Figure 5-4: a RAID array with data, parity, and spare disks is formed into an FB rank composed of 1 GB extents, each extent being made up of blocks from the different DDMs of the array)
It is still possible to define a CKD volume with a capacity that is an integral multiple of one
cylinder or a fixed block LUN with a capacity that is an integral multiple of 128 logical blocks
(64 KB). However, if the defined capacity is not an integral multiple of the capacity of one
extent, the unused capacity in the last extent is wasted. For example, you could define a one
cylinder CKD volume, but 1113 cylinders (1 extent) will be allocated and 1112 cylinders would
be wasted.
Encryption group
A DS8000 series can be ordered with encryption capable disk drives. If you plan to use
encryption, before creating a rank, you must define an encryption group (for more
information, see IBM System Storage DS8700 Disk Encryption Implementation and Usage
Guidelines, REDP-4500).
Important: Do not mix ranks with different RAID types or disk rpm in an Extent Pool. Do
not mix SSD and HDD ranks in the same Extent Pool.
There is no predefined affinity of ranks or arrays to a storage server. The affinity of the rank
(and its associated array) to a given server is determined at the point it is assigned to an
Extent Pool.
One or more ranks with the same extent type (FB or CKD) can be assigned to an Extent Pool.
One rank can be assigned to only one Extent Pool. There can be as many Extent Pools as
there are ranks.
There are considerations regarding how many ranks should be added in an Extent Pool.
Storage Pool Striping allows you to create logical volumes striped across multiple ranks. This
will typically enhance performance. To benefit from Storage Pool Striping (see “Storage Pool
Striping: Extent rotation” on page 96), more than one rank in an Extent Pool is required.
Storage Pool Striping can enhance performance significantly, but when you lose one rank (in
the unlikely event that a whole RAID array failed due to a scenario with multiple failures at the
same time), not only is the data of this rank lost, but also all data in this Extent Pool because
data is striped across all ranks. To avoid data loss, mirror your data to a remote DS8000.
The DS Storage Manager GUI prompts you to use the same RAID types in an Extent Pool. As
such, when an Extent Pool is defined, it must be assigned with the following attributes:
Server affinity
Extent type
RAID type
Drive Class
Encryption group
Just like ranks, Extent Pools also belong to an encryption group. When defining an Extent
Pool, you have to specify an encryption group. Encryption group 0 means no encryption.
Encryption group 1 means encryption. Currently, the DS8000 series supports only one
encryption group and encryption is on for all Extent Pools or off for all Extent Pools.
The minimum number of Extent Pools is two, with one assigned to server 0 and the other to
server 1 so that both servers are active. In an environment where FB and CKD are to go onto
the DS8000 series storage system, four Extent Pools would provide one FB pool for each
server, and one CKD pool for each server, to balance the capacity between the two servers.
Figure 5-5 is an example of a mixed environment with CKD and FB Extent Pools. Additional
Extent Pools might also be desirable to segregate ranks with different DDM types. Extent
Pools are expanded by adding more ranks to the pool. Ranks are organized in two rank
groups; rank group 0 is controlled by server 0 and rank group 1 is controlled by server 1.
(Figure 5-5: CKD and FB Extent Pools, composed of 1 GB extents, assigned to server 0 and server 1)
On a DS8000, up to 65280 (we use the abbreviation 64 K in this discussion, even though it is
actually 65536 - 256, which is not quite 64 K in binary) volumes can be created (either 64 K
CKD, or 64 K FB volumes, or a mixture of both types with a maximum of 64 K volumes in
total).
LUNs can be allocated in binary GiB (2^30 bytes), decimal GB (10^9 bytes), or 512 or 520 byte
blocks. However, the physical capacity that is allocated for a LUN is always a multiple of
1 GiB, so it is a good idea to have LUN sizes that are a multiple of a gibibyte. If you define a
LUN with a size that is not a multiple of 1 GiB, for example, 25.5 GiB, the LUN size is
25.5 GiB, but 26 GiB are physically allocated, and 0.5 GiB of the physical storage remains
unusable.
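The following Python sketch illustrates this rounding rule; the helper name is our own, and the calculation simply applies the 1 GiB extent granularity described above.

import math

def allocated_extents(requested_gib):
    """Round a requested LUN size up to whole 1 GiB extents."""
    extents = math.ceil(requested_gib)      # whole 1 GiB extents allocated
    wasted_gib = extents - requested_gib    # unusable tail of the last extent
    return extents, wasted_gib

extents, wasted = allocated_extents(25.5)
print(f"25.5 GiB LUN -> {extents} extents allocated, {wasted} GiB unusable")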
The DS8000 and z/OS limit CKD extended address volumes (EAV) sizes. Now you can define
CKD volumes with up to 262,668 cylinders, which is about 223 GB. This new volume capacity
is called Extended Address Volume (EAV) and is supported by the 3390 Model A.
Important: EAV volumes can only be used by IBM z/OS 1.10 or later versions.
If the number of cylinders specified is not an exact multiple of 1113 cylinders, then some
space in the last allocated extent is wasted. For example, if you define 1114 or 3340
cylinders, 1112 cylinders are wasted. For maximum storage efficiency, you should consider
allocating volumes that are exact multiples of 1113 cylinders. In fact, multiples of 3339
cylinders should be considered for future compatibility.
If you want to use the maximum number of cylinders for a volume on a DS8800 (that is,
262,668 cylinders), you are not wasting cylinders, because it is an exact multiple of 1113
(262,668 divided by 1113 is exactly 236). For even better future compatibility, you should use
a size of 260,442 cylinders, which is an exact multiple (78) of 3339, a model 3 size. On
DS8000s running older Licensed Machine Codes, the maximum number of cylinders was
65,520, which is not a multiple of 1113. You can use 65,520 cylinders and waste 147 cylinders
for each volume (the difference to the next multiple of 1113), or you might be better off with a
volume size of 64,554 cylinders, which is a multiple of 1113 (factor of 58), or even better, with
63,441 cylinders, which is a multiple of 3339, a model 3 size. See Figure 5-6 on page 92.
(Figure 5-6: a volume defined with 1000 cylinders still consumes a full 1113-cylinder extent on the rank, leaving 113 cylinders unused)
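The cylinder arithmetic in this section can be verified with a short Python sketch. The constants follow the 3390 geometry given above; the helper name is illustrative.

import math

EXTENT_CYLS = 1113              # one CKD extent = one 3390 Model 1
MODEL3_CYLS = 3 * EXTENT_CYLS   # 3339 cylinders, a 3390 Model 3

def wasted_cylinders(volume_cyls):
    """Cylinders lost in the last extent for a given volume size."""
    extents = math.ceil(volume_cyls / EXTENT_CYLS)
    return extents * EXTENT_CYLS - volume_cyls

for cyls in (1, 1114, 3339, 65_520, 64_554, 63_441, 262_668, 260_442):
    print(f"{cyls:>7} cylinders -> {wasted_cylinders(cyls):>4} cylinders wasted")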
A CKD volume cannot span multiple Extent Pools, but a volume can have extents from
different ranks in the same Extent Pool, or you can stripe a volume across the ranks (see
“Storage Pool Striping: Extent rotation” on page 96).
(Figure: allocating a 3 GB LUN consumes three 1 GB extents from the ranks of the Extent Pool; a 2.9 GB LUN also consumes three extents, leaving 100 MB of the last extent unused)
IBM i LUNs
IBM i LUNs are also composed of fixed block 1 GiB extents. There are, however, some
special aspects with System i LUNs. LUNs created on a DS8000 are always RAID-protected.
LUNs are based on RAID 5, RAID 6, or RAID 10 arrays. However, you might want to deceive
i5/OS and tell it that the LUN is not RAID-protected. This causes the i5/OS to do its own
mirroring. System i LUNs can have the attribute unprotected, in which case, the DS8000 will
report that the LUN is not RAID-protected.
The i5/OS only supports certain fixed volume sizes, for example, model sizes of 8.5 GB,
17.5 GB, and 35.1 GB. These sizes are not multiples of 1 GB, and hence, depending on the
model chosen, some space is wasted. IBM i LUNs expose a 520-byte block to the host. The
operating system uses 8 of these bytes, so the usable space is still 512 bytes, like other SCSI
LUNs. The capacities quoted for the IBM i LUNs are in terms of the 512-byte block capacity
and are expressed in decimal GB (10^9 bytes). These capacities should be converted to GiB
(2^30 bytes) when considering effective utilization of extents that are 1 GiB. For more information about
this topic, see IBM Redbooks publication IBM System Storage DS8000: Host Attachment and
Interoperability, SG24-8887.
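The conversion described above can be illustrated with a few lines of Python; the model sizes are those quoted in the text, and the calculation simply applies the decimal GB to binary GiB conversion and the 1 GiB extent granularity.

import math

GB, GIB = 10**9, 2**30

for model_gb in (8.5, 17.5, 35.1):
    gib = model_gb * GB / GIB               # quoted decimal GB as binary GiB
    extents = math.ceil(gib)                # whole 1 GiB extents consumed
    print(f"{model_gb} GB model -> {gib:.2f} GiB -> {extents} extents "
          f"({extents - gib:.2f} GiB of the last extent unused)")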
When a standard FB LUN or CKD volume is created on the physical drive, it will occupy as
many extents as necessary for the defined capacity.
Note: The DS8800 at Licensed Machine Code 6.6.0.xx does not support Extent Space
Efficient Volumes.
A Space Efficient volume does not occupy physical capacity when it is created. Space gets
allocated when data is actually written to the volume. The amount of space that gets
physically allocated is a function of the amount of data written to or changes performed on the
volume. The sum of the capacities of all defined Space Efficient volumes can be larger than the
physical capacity available. This function is also called over-provisioning or thin provisioning.
Space Efficient volumes for the DS8800 can be created when it has the IBM FlashCopy SE
feature enabled (licensing is required).
The general idea behind Space Efficient volumes is to allocate physical storage only when it
is actually needed, which suits capacity that is only potentially or temporarily required.
The repository is an object within an Extent Pool. In some sense it is similar to a volume
within the Extent Pool. The repository has a physical size and a logical size. The physical size
of the repository is the amount of space that is allocated in the Extent Pool. It is the physical
space that is available for all Space Efficient volumes in total in this Extent Pool. The
repository is striped across all ranks within the Extent Pool. There can only be one repository
per Extent Pool.
Important: The size of the repository and the virtual space it utilizes are part of the Extent
Pool definition. Each Extent Pool may have a TSE volume repository, but this physical
space cannot be shared between Extent Pools.
The logical size of the repository is limited by the available virtual capacity for Space Efficient
volumes. As an example, there could be a repository of 100 GB reserved physical storage
and you defined a virtual capacity of 200 GB. In this case, you could define 10 TSE-LUNs
with 20 GB each. So the logical capacity can be larger than the physical capacity. Of course,
you cannot fill all the volumes with data because the total physical capacity is limited by the
repository size, that is, to 100 GB in this example.
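The following Python sketch models this repository accounting: virtual capacity bounds what can be defined, and physical capacity bounds what can actually be written. The class and attribute names are assumptions for illustration and do not correspond to any DS8000 interface.

# Simplified thin-provisioning accounting for a TSE repository.
class Repository:
    def __init__(self, physical_gb, virtual_gb):
        self.physical_gb = physical_gb    # space reserved in the Extent Pool
        self.virtual_gb = virtual_gb      # logical space for TSE volumes
        self.defined_gb = 0               # sum of defined TSE volume sizes
        self.used_gb = 0                  # physically allocated so far

    def define_volume(self, size_gb):
        if self.defined_gb + size_gb > self.virtual_gb:
            raise ValueError("exceeds virtual capacity")
        self.defined_gb += size_gb

    def write(self, new_data_gb):
        if self.used_gb + new_data_gb > self.physical_gb:
            raise RuntimeError("repository full: no physical space left")
        self.used_gb += new_data_gb       # space is allocated only when written

repo = Repository(physical_gb=100, virtual_gb=200)
for _ in range(10):                       # ten 20 GB TSE volumes, as in the text
    repo.define_volume(20)
repo.write(60)
print(repo.defined_gb, "GB defined,", repo.used_gb, "GB physically used")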
Note: In the current implementation of Track Space Efficient volumes, it is not possible to
expand the physical size of the repository. Therefore, careful planning for the size of the
repository is required before it is used. If a repository needs to be expanded, all Track
Space Efficient volumes within this Extent Pool must be deleted, and then the repository
must be deleted and recreated with the required size.
Because space is allocated in extents or tracks, the system needs to maintain tables
indicating their mapping to the logical volumes, so there is some performance impact involved
with Space Efficient volumes. The smaller the allocation unit, the larger the tables and the impact.
Summary: Virtual space is created as part of the Extent Pool definition. This virtual space
is mapped onto TSE volumes in the repository (physical space) as needed. No actual
storage is allocated until write activity occurs to the TSE volumes.
(Figure: an Extent Pool with its ranks, a normal volume, and the repository for Space Efficient volumes striped across the ranks.)
The lifetime of data on Track Space Efficient volumes is expected to be short because they
are used as FlashCopy targets only. Physical storage gets allocated when data is written to
Track Space Efficient volumes, so a mechanism is needed to free up physical space in the
repository when the data is no longer needed.
The FlashCopy commands have options to release the space of Track Space Efficient
volumes when the FlashCopy relationship is established or removed.
The CLI commands initfbvol and initckdvol can also release the space for Space
Efficient volumes.
This construction method of using fixed extents to form a logical volume in the DS8000 series
allows flexibility in the management of the logical volumes. We can delete LUNs/CKD
volumes, resize LUNs/volumes, and reuse the extents of those LUNs to create other
LUNs/volumes, maybe of different sizes. One logical volume can be removed without
affecting the other logical volumes defined on the same Extent Pool.
Because the extents are cleaned after you have deleted a LUN or CKD volume, it can take
some time until these extents are available for reallocation. The reformatting of the extents is
a background process.
There are two extent allocation algorithms for the DS8000: Rotate volumes and Storage Pool
Striping (Rotate extents).
Note: The default for extent allocation method is Storage Pool Striping (Rotate extents) for
Licensed Machine Code 6.6.0.xx. In prior releases of Licensed Machine Code the default
allocation method was Rotate volumes.
For Storage Pool Striping (Rotate extents), the DS8000 maintains a sequence of ranks. The
first rank in the list is randomly picked at each power on of the storage subsystem. The
DS8000 keeps track of the rank in which the last allocation started. The allocation of the first
extent for the next volume starts from the next rank in that sequence. The next extent for that
volume is taken from the next rank in sequence, and so on. Thus, the system rotates the
extents across the ranks.
If more than one volume is created in one operation, the allocation for each volume starts in
another rank. When allocating several volumes, we rotate through the ranks.
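The following Python fragment is a simplified sketch of this rotation (it is not the DS8000 microcode; the rank names and the pool object are purely illustrative):

class ExtentPoolSketch:
    def __init__(self, ranks):
        self.ranks = list(ranks)   # ordered sequence of ranks in the pool
        self.next_rank = 0         # rank where the next allocation starts

    def allocate_volume(self, extents):
        # Return the ranks that receive the volume's extents, one extent at a time.
        placement = []
        for _ in range(extents):
            placement.append(self.ranks[self.next_rank])
            self.next_rank = (self.next_rank + 1) % len(self.ranks)
        return placement

pool = ExtentPoolSketch(["R1", "R2", "R3", "R4"])
print(pool.allocate_volume(6))   # ['R1', 'R2', 'R3', 'R4', 'R1', 'R2']
print(pool.allocate_volume(3))   # the next volume starts on the next rank: ['R3', 'R4', 'R1']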
You might want to consider the Rotate volumes allocation method when you prefer to manage
performance manually. With this method, the workload of one volume goes to one rank. This
makes the identification of performance bottlenecks easier; however, by putting all of the
volume's data onto just one rank, you might introduce a bottleneck, depending on your actual
workload.
Tip: Rotate extents and rotate volume EAMs provide distribution of volumes over ranks.
Rotate extents does this at a granular (1 GB extent) level, which is the preferred method to
minimize hot spots and improve overall performance.
(Figure: extent allocation with Storage Pool Striping — a newly created striped volume starts on the next rank in the sequence after the previous allocation.)
When you create striped volumes and non-striped volumes in an Extent Pool, a rank could be
filled before the others. A full rank is skipped when you create new striped volumes.
Tip: If you have to add capacity to an Extent Pool because it is nearly full, it is better to add
several ranks at the same time, not just one. This allows new volumes to be striped across
the newly added ranks.
By using striped volumes, you distribute the I/O load of a LUN/CKD volume to more than just
one set of eight disk drives. The ability to distribute a workload to many physical drives can
greatly enhance performance for a logical volume. In particular, operating systems that do not
have a volume manager that can do striping will benefit most from this allocation method.
However, if you have Extent Pools with many ranks and all volumes are striped across the
ranks and you lose just one rank, for example, because there are two disk drives in the same
rank that fail at the same time and it is not a RAID 6 rank, you will lose much of your data.
On the other hand, if you do, for example, Physical Partition striping in AIX already, double
striping probably will not improve performance any further. The same can be expected when
the DS8000 LUNs are used by an SVC striping data across LUNs.
If you decide to use Storage Pool Striping it is probably better to use this allocation method for
all volumes in the Extent Pool to keep the ranks equally filled and utilized.
Tip: When configuring a new DS8000, do not mix volumes using the storage pool striping
method and volumes using the rotate volumes method in the same Extent Pool.
When a logical volume creation request is received, a logical volume object is created and the
logical volume's configuration state attribute is placed in the configuring configuration state.
Once the logical volume is created and available for host access, it is placed in the normal
configuration state. If a volume deletion request is received, the logical volume is placed in the
deconfiguring configuration state until all capacity associated with the logical volume is
deallocated and the logical volume object is deleted.
The reconfiguring configuration state is associated with a volume expansion request (refer to
“Dynamic Volume Expansion” for more information). As shown, the configuration state
serializes user requests with the exception that a volume deletion request can be initiated
from any configuration state.
(Figure: logical volume configuration states — a Create Volume request places the volume in the Configuring state, it then goes Online in the Normal state, an Expand Volume request moves it to Reconfiguring until the expansion completes, and a Delete Volume request moves it to Deconfiguring until the volume is deleted.)
A logical volume has the attribute of being striped across the ranks or not. If the volume was
created as striped across the ranks of the Extent Pool, then the extents that are used to
increase the size of the volume are striped. If a volume was created without striping, the
system tries to allocate the additional extents within the same rank that the volume was
created from originally.
Because most operating systems have no means of moving data from the end of the physical
disk off to some unused space at the beginning of the disk, and because of the risk of data
corruption, the size of a volume cannot be decreased.
Consideration: Before you can expand a volume, you have to delete any Copy Services
relationship involving that volume.
On the DS8000 series, there is no fixed binding between any rank and any logical subsystem.
The capacity of one or more ranks can be aggregated into an Extent Pool and logical volumes
configured in that Extent Pool are not bound to any specific rank. Different logical volumes on
the same logical subsystem can be configured in different Extent Pools. As such, the
available capacity of the storage facility can be flexibly allocated across the set of defined
logical subsystems and logical volumes. You can now define up to 255 LSSs for the DS8000
series.
For each LUN or CKD volume, you can now choose an LSS. You can have up to 256 volumes
in one LSS. There is, however, one restriction. We already have seen that volumes are
formed from a number of extents from an Extent Pool. Extent Pools, however, belong to one
server (CEC), server 0 or server 1, respectively. LSSs also have an affinity to the servers. All
even-numbered LSSs (X’00’, X’02’, X’04’, up to X’FE’) belong to server 0 and all
odd-numbered LSSs (X’01’, X’03’, X’05’, up to X’FD’) belong to server 1. LSS X’FF’ is
reserved.
System z users are familiar with a logical control unit (LCU). System z operating systems
configure LCUs to create device addresses. There is a one to one relationship between an
LCU and a CKD LSS (LSS X'ab' maps to LCU X'ab'). Logical volumes have a logical volume
number X'abcd' where X'ab' identifies the LSS and X'cd' is one of the 256 logical volumes on
the LSS. This logical volume number is assigned to a logical volume when a logical volume is
created and determines the LSS that it is associated with. The 256 possible logical volumes
associated with an LSS are mapped to the 256 possible device addresses on an LCU (logical
volume X'abcd' maps to device address X'cd' on LCU X'ab'). When creating CKD logical
volumes and assigning their logical volume numbers, consider whether Parallel Access
Volumes (PAVs) are required on the LCU and reserve some of the addresses on the LCU for
alias addresses.
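A small Python sketch (illustrative only, simply restating the numbering rules described above) shows how a volume number X'abcd' decomposes into its LCU/LSS, device address, address group, and server affinity:

def decode_volume_number(volume_id_hex):
    lss = int(volume_id_hex[0:2], 16)                  # X'ab' identifies the LSS / LCU
    return {
        "lss_lcu": volume_id_hex[0:2].upper(),         # LCU X'ab'
        "device_address": volume_id_hex[2:4].upper(),  # device X'cd' on that LCU
        "address_group": volume_id_hex[0].upper(),     # address group X'a'
        "server_affinity": 0 if lss % 2 == 0 else 1,   # even LSS -> server 0, odd -> server 1
    }

print(decode_volume_number("1d00"))
# {'lss_lcu': '1D', 'device_address': '00', 'address_group': '1', 'server_affinity': 1}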
For open systems, LSSs do not play an important role except in determining which server
manages the LUN (and in which Extent Pool it must be allocated) and in certain aspects
related to Metro Mirror, Global Mirror, or any of the other remote copy implementations.
Some management actions in Metro Mirror, Global Mirror, or Global Copy operate at the LSS
level. For example, the freezing of pairs to preserve data consistency across all pairs, in case
you have a problem with one of the pairs, is done at the LSS level. The option to put all or
most of the volumes of a certain application in just one LSS therefore makes the management
of remote copy operations easier; see Figure 5-11.
(Figure 5-11: grouping volumes by application — the DB2 volumes are placed in LSS X'17' and the DB2-test volumes in LSS X'18', with each LSS drawing its volumes from multiple array sites.)
Fixed block LSSs are created automatically when the first fixed block logical volume on the
LSS is created, and deleted automatically when the last fixed block logical volume on the LSS
is deleted. CKD LSSs require user parameters to be specified and must be created before the
first CKD logical volume can be created on the LSS; they must be deleted manually after the
last CKD logical volume on the LSS is deleted.
Address groups
Address groups are created automatically when the first LSS associated with the address
group is created, and deleted automatically when the last LSS in the address group is
deleted.
All devices in an LSS must be either CKD or FB. This restriction goes even further. LSSs are
grouped into address groups of 16 LSSs. LSSs are numbered X'ab', where a is the address
group and b denotes an LSS within the address group. So, for example, X'10' to X'1F' are
LSSs in address group 1.
All LSSs within one address group have to be of the same type, CKD or FB. The first LSS
defined in an address group sets the type of that address group.
Important: System z users who still want to use ESCON to attach hosts to the DS8000
series should be aware that ESCON supports only the 16 LSSs of address group 0 (LSS
X'00' to X'0F'). Therefore, this address group should be reserved for ESCON-attached
CKD devices in this case and not used as FB LSSs. The DS8800 does not support
ESCON channels. ESCON devices can only be attached by using FICON/ESCON
converters.
(Figure: fixed block volumes such as X'1D00' and X'1E01' in their LSSs, allocated from Extent Pools FB-1 and FB-2, which have an affinity to server 0 or server 1.)
The LUN identifications X'gabb' are composed of the address group X'g', the LSS number
within the address group X'a', and the position of the LUN within the LSS X'bb'. For
example, FB LUN X'2101' denotes the second (X'01') LUN in LSS X'21' of address group 2.
Host attachment
Host bus adapters (HBAs) are identified to the DS8000 in a host attachment construct that
specifies the HBAs’ World Wide Port Names (WWPNs). A set of host ports can be associated
through a port group attribute that allows a set of HBAs to be managed collectively. This port
group is referred to as a host attachment within the GUI.
Each host attachment can be associated with a volume group to define which LUNs that HBA
is allowed to access. Multiple host attachments can share the same volume group. The host
attachment can also specify a port mask that controls which DS8800 I/O ports the HBA is
allowed to log in to. Whichever ports the HBA logs in on, it sees the same volume group that
is defined on the host attachment associated with this HBA.
When used in conjunction with open systems hosts, a host attachment object that identifies
the HBA is linked to a specific volume group. You must define the volume group by indicating
which fixed block logical volumes are to be placed in the volume group. Logical volumes can
be added to or removed from any volume group dynamically.
There are two types of volume groups used with open systems hosts and the type determines
how the logical volume number is converted to a host addressable LUN_ID on the Fibre
Channel SCSI interface. A map volume group type is used in conjunction with FC SCSI host
types that poll for LUNs by walking the address range on the SCSI interface. This type of
volume group can map any FB logical volume numbers to 256 LUN_IDs that have zeroes in
the last six Bytes and the first two Bytes in the range of X'0000' to X'00FF'.
A mask volume group type is used in conjunction with FC SCSI host types that use the Report
LUNs command to determine the LUN_IDs that are accessible. This type of volume group
can allow any and all FB logical volume numbers to be accessed by the host where the mask
is a bitmap that specifies which LUNs are accessible. For this volume group type, the logical
volume number X'abcd' is mapped to LUN_ID X'40ab40cd00000000'. The volume group type
also controls whether 512 Byte block LUNs or 520 Byte block LUNs can be configured in the
volume group.
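As a sketch of this mapping only (not an official conversion utility), the following Python function derives the LUN_ID reported for a mask-type volume group from the logical volume number X'abcd':

def mask_lun_id(volume_id_hex):
    # X'abcd' -> LUN_ID X'40ab40cd00000000'
    ab, cd = volume_id_hex[0:2], volume_id_hex[2:4]
    return f"40{ab}40{cd}00000000".upper()

print(mask_lun_id("abcd"))   # 40AB40CD00000000
print(mask_lun_id("2101"))   # 4021400100000000 (LUN X'01' in LSS X'21')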
When associating a host attachment with a volume group, the host attachment contains
attributes that define the logical block size and the Address Discovery Method (LUN Polling or
Report LUNs) that are used by the host HBA. These attributes must be consistent with the
volume group type of the volume group that is assigned to the host attachment so that HBAs
that share a volume group have a consistent interpretation of the volume group definition and
have access to a consistent set of logical volume types. The GUI typically sets these values
appropriately for the HBA based on your specification of a host type. You must consider what
volume group type to create when setting up a volume group for a particular HBA.
FB logical volumes can be defined in one or more volume groups. This allows a LUN to be
shared by host HBAs configured to different volume groups. An FB logical volume is
automatically removed from all volume groups when it is deleted.
Figure 5-13 shows the relationships between host attachments and volume groups. Host
AIXprod1 has two HBAs, which are grouped together in one host attachment and both are
granted access to volume group DB2-1. Most of the volumes in volume group DB2-1 are also
in volume group DB2-2, accessed by server AIXprod2. In our example, there is, however, one
volume in each group that is not shared. The server in the lower left part has four HBAs and
they are divided into two distinct host attachments. One can access some volumes shared
with AIXprod1 and AIXprod2. The other HBAs have access to a volume group called “docs”.
(Figure 5-13: host attachments and volume groups — for example, host attachment Prog with WWPN-7 and WWPN-8 is assigned to volume group docs.)
This virtualization concept provides much more flexibility than in previous products. Logical
volumes can dynamically be created, deleted, and resized. They can be grouped logically to
simplify storage management. Large LUNs and CKD volumes reduce the total number of
volumes, which contributes to the reduction of management effort.
(Figure: summary of the virtualization hierarchy — data, parity, and spare drives of the RAID arrays are carved into 1 GB FB extents that form volumes owned by server 0 or server 1, grouped into LSSs such as LSS X'27' and into address groups, for example X'2x' for FB and X'3x' for CKD, each address group providing 4096 addresses.)
These functions make the DS8800 series a key component for disaster recovery solutions,
data migration activities, and for data duplication and backup solutions.
The information provided in this chapter is only an overview. It is covered to a greater extent
and in more detail in the following IBM Redbooks and IBM Redpapers™ publications:
IBM System Storage DS8000: Copy Services for Open Systems, SG24-6788
IBM System Storage DS8000: Copy Services for IBM System z, SG24-6787
IBM System Storage DS8000 Series: IBM FlashCopy SE, REDP-4368
The Copy Services functions run on the DS8800 storage unit and support open systems and
System z environments. They are also supported on other DS8000 family models.
Many design characteristics of the DS8800 and its data copy and mirror capabilities and
features contribute to the protection of your data, 24 hours a day and seven days a week.
FlashCopy is an optional licensed feature of the DS8800. Two variations of FlashCopy are
available:
Standard FlashCopy, also referred to as the Point-in-Time Copy (PTC) licensed function
FlashCopy SE licensed function
To use FlashCopy, you must have the corresponding licensed function indicator feature in the
DS8800, and you must acquire the corresponding DS8800 function authorization with the
adequate feature number license in terms of physical capacity. For details about feature and
function requirements, see 10.1, “IBM System Storage DS8800 licensed functions” on
page 204.
In this section, we discuss the FlashCopy and FlashCopy SE basic characteristics and
options.
Note: In this chapter, track means a piece of data in the DS8800; the DS8800 uses the
concept of logical tracks to manage Copy Services functions.
Figure 6-1 and the subsequent section explain the basic concepts of a standard FlashCopy.
If you access the source or the target volumes while the FlashCopy relation exists, I/O
requests are handled as follows:
Read from the source volume
When a read request goes to the source, data is directly read from there.
Read from the target volume
When a read request goes to the target volume, FlashCopy checks the bitmap and:
– If the requested data has already been copied to the target, it is read from the target volume.
– If the requested data has not yet been copied, it is read from the source volume.
(Figure 6-1: standard FlashCopy — when the FlashCopy command is issued, a point-in-time relationship is established between the source and target volumes.)
The background copy can slightly impact application performance because the physical copy
needs some storage resources. The impact is minimal because host I/O always has higher
priority than the background copy.
FlashCopy SE is automatically invoked with the NOCOPY option, because the target space is
not allocated and the available physical space is smaller than the size of the volume. A full
background copy would contradict the concept of space efficiency.
IBM FlashCopy SE is designed for temporary copies. FlashCopy SE is optimized for use
cases where only about 5% of the source volume data is updated during the life of the
relationship. If more than 20% of the source data is expected to change, standard FlashCopy
would likely be the better choice.
In all scenarios, the write activity to both source and target is the crucial factor that decides
whether FlashCopy SE can be used.
In many cases only a small percentage of the entire data is changed in a day. In this situation,
you can use this function for daily backups and save the time for the physical copy of
FlashCopy.
Incremental FlashCopy requires the background copy and the Persistent FlashCopy option to
be enabled.
Persistent FlashCopy
Persistent FlashCopy allows the FlashCopy relationship to remain even after the copy
operation completes. You must explicitly delete the relationship to terminate it.
Consistency Group FlashCopy ensures that the order of dependent writes is always
maintained and thus creates host-consistent copies, not application-consistent copies. The
copies have power-fail or crash level consistency. To recover an application from Consistency
Group FlashCopy target volumes, you need to perform the same kind of recovery as after a
system crash.
Note: You cannot FlashCopy from a source to a target if the target is also a Global Mirror
primary volume.
Metro Mirror and Global Copy are explained in 6.3.1, “Metro Mirror” on page 114 and in 6.3.2,
“Global Copy” on page 114.
Note: This function is available by using the DS CLI, TSO, and ICKDSF commands, but
not by using the DS Storage Manager GUI.
Incremental FlashCopy
Because Incremental FlashCopy implies an initial full volume copy and a full volume copy is
not possible in an IBM FlashCopy SE relationship, Incremental FlashCopy is not possible with
IBM FlashCopy SE.
The Remote Mirror and Copy functions are optional licensed functions of the DS8800 that
include:
Metro Mirror
Global Copy
Global Mirror
Metro/Global Mirror
In the following sections, we discuss these Remote Mirror and Copy functions.
For a more detailed and extensive discussion about these topics, refer to the IBM Redbooks
publications listed on page 107.
Licensing requirements
To use any of these Remote Mirror and Copy optional licensed functions, you must have the
corresponding licensed function indicator feature in the DS8800, and you must acquire the
corresponding DS8800 function authorization with the adequate feature number license in
terms of physical capacity.
Also, consider that some of the remote mirror solutions, such as Global Mirror, Metro/Global
Mirror, or z/OS Metro/Global Mirror, integrate more than one licensed function. In this case,
you need to have all of the required licensed functions.
Figure 6-2 Metro Mirror basic operation (1: server write to the primary (local) volume; 2: write to the secondary (remote) volume; 3: write hit reported on the secondary; 4: write acknowledgment to the server)
You have to take extra steps to make Global Copy target data usable at specific points in time.
These steps depend on the purpose of the copy.
(Figure: Global Mirror basic operation — the server write (1) is acknowledged immediately (2), data is sent from the A to the B volumes non-synchronously, and FlashCopy to the C volumes is taken automatically in a cycle controlled by the active session.)
The A volumes at the local site are the production volumes and are used as Global Copy
primaries. The data from the A volumes is replicated to the B volumes using Global Copy. At a
certain point in time, a Consistency Group is created from all the A volumes, even if they are
located in different storage units. This has very little application impact, because the creation
of the Consistency Group is quick (on the order of a few milliseconds).
After the Consistency Group is created, the application writes can continue updating the A
volumes. The missing increment of the consistent data is sent to the B volumes using the
existing Global Copy relations. After all data has reached the B volumes, Global Copy is
halted for a brief period while Global Mirror creates a FlashCopy from the B to the C volumes.
These now contain a consistent set of data at the secondary site.
The data at the remote site is current within 3 to 5 seconds, but this recovery point depends
on the workload and bandwidth available to the remote site.
With its efficient and autonomic implementation, Global Mirror is a solution for disaster
recovery implementations where a consistent copy of the data needs to be available at all
times at a remote location that can be separated by a long distance from the production site.
(Figure: Metro/Global Mirror — synchronous Metro Mirror over a short distance from the local site (site A) to the intermediate site (site B), combined with Global Mirror (asynchronous Global Copy plus incremental NOCOPY FlashCopy) over a long distance to the remote site (site C); normal application I/Os run against the local site and failover application I/Os against the intermediate site.)
Both Metro Mirror and Global Mirror are well established replication solutions. Metro/Global
Mirror combines Metro Mirror and Global Mirror to incorporate the best features of the two
solutions:
Metro Mirror
– Synchronous operation supports zero data loss.
– The opportunity to locate the intermediate site disk systems close to the local site
allows use of intermediate site disk systems in a high availability configuration.
Note: Metro Mirror can be used for distances of up to 300 km, but when used in a
Metro/Global Mirror implementation, a shorter distance might be more appropriate in
support of the high availability functionality.
Global Mirror
– Asynchronous operation supports long distance replication for disaster recovery.
– The Global Mirror methodology has no impact to applications at the local site.
– This solution provides a recoverable, restartable, and consistent image at the remote
site with an RPO, typically in the 3 to 5 second range.
Figure 6-5 illustrates the basic operational characteristics of z/OS Global Mirror.
(Figure 6-5: z/OS Global Mirror — the server write (1) to the primary site is acknowledged immediately (2), while the System Data Mover (SDM) at the secondary site reads the updates asynchronously and manages data consistency.)
Given the appropriate hardware and software, a range of z/OS Global Mirror (zGM) workload
can be offloaded to zIIP processors. The z/OS software must be at V1.8 or above with APAR
OA23174, specifying the zGM PARMLIB parameter zIIPEnable(YES).
Figure 6-6 illustrates the basic operational characteristics of a z/OS Metro/Global Mirror
implementation.
(Figure 6-6: z/OS Metro/Global Mirror — Metro Mirror at metropolitan distance between the Metro Mirror secondary and the Metro Mirror / z/OS Global Mirror primary, combined with z/OS Global Mirror at unlimited distance to the z/OS Global Mirror secondary, with FlashCopy when required.)
Metro Mirror
Metro Mirror is a function for synchronous data copy at a limited distance. The following
considerations apply:
There is no data loss, and it allows for rapid recovery for distances up to 300 km.
There will be a slight performance impact for write operations.
Global Copy
Global Copy is a function for non-synchronous data copy at long distances, which is only
limited by the network implementation. The following considerations apply:
It can copy your data at nearly an unlimited distance, making it suitable for data migration
and daily backup to a remote distant site.
The copy is normally fuzzy but can be made consistent through a synchronization
procedure.
Global Mirror
Global Mirror is an asynchronous copy technique; you can create a consistent copy in the
secondary site with an adaptable Recovery Point Objective (RPO). RPO specifies how much
data you can afford to recreate if the system needs to be recovered. The following
considerations apply:
Global Mirror can copy to nearly an unlimited distance.
It is scalable across multiple storage units.
Chapter 7. Performance
This chapter discusses the performance characteristics of the IBM System Storage DS8800
regarding physical and logical configuration. The considerations presented in this chapter can
help you plan the physical and logical setup.
For a detailed discussion about performance, see DS8000 Performance Monitoring and
Tuning, SG24-7146.
The DS8800 offers either a dual 2-way processor complex or a dual 4-way processor
complex. The DS8800 overcomes many of the architectural limits of the predecessor disk
subsystems.
In this section, we go through the different architectural layers of the DS8000 and discuss the
performance characteristics that differentiate the DS8000 from other disk subsystems.
These switches use the FC-AL protocol and attach to the SAS drives (bridging to SAS
protocol) through a point-to-point connection. The arbitration message of a drive is captured
in the switch, processed, and propagated back to the drive, without routing it through all the
other drives in the loop.
Performance is enhanced because both device adapters (DAs) connect to the switched Fibre
Channel subsystem back-end, as shown in Figure 7-1. Note that each DA port can
concurrently send and receive data.
These two switched point-to-point connections to each drive, which also connect both DAs to
each switch, mean the following:
There is no arbitration competition and interference between one drive and all the other
drives, because there is no hardware in common for all the drives in the FC-AL loop. This
leads to increased bandwidth, which uses the full 8 Gbps FC speed up to the point in the
back-end where the FC-to-SAS conversion is made, and the full SAS 2.0 speed for each
individual drive.
This architecture doubles the bandwidth over conventional FC-AL implementations due to
two simultaneous operations from each DA to allow for two concurrent read operations
and two concurrent write operations at the same time.
In addition to superior performance, note the improved reliability, availability, and
serviceability (RAS) that this setup has over conventional FC-AL. The failure of a drive is
detected and reported by the switch. The switch ports distinguish between intermittent
failures and permanent failures. The ports understand intermittent failures, which are
recoverable, and collect data for predictive failure statistics. If one of the switches fails, a
disk enclosure service processor detects the failing switch and reports the failure using the
other loop. All drives can still connect through the remaining switch.
This discussion outlines the physical structure. A virtualization approach built on top of the
high performance architectural design contributes even further to enhanced performance, as
discussed in Chapter 5, “Virtualization concepts” on page 85.
The RAID device adapter is built on PowerPC technology with four Fibre Channel ports and
high function, high performance ASICs. It is PCIe based and runs at 8 Gbps.
Note that each DA performs the RAID logic and frees up the processors from this task. The
actual throughput and performance of a DA is not only determined by the port speed and
hardware used, but also by the firmware efficiency.
(Figure: the device adapter (DA) — a PowerPC-based adapter with memory and Fibre Channel protocol processors, attached to the storage server processor complex.)
Figure 7-3 shows the detailed cabling between the Device Adapters and the 24-drive
Gigapacks. The ASICs seen there provide the FC-to-SAS bridging function from the external
SFP connectors to each of the ports on the SAS disk drives. The processor is the controlling
element in the system.
(Figure 7-3: device adapter to Gigapack cabling — 8 Gbps Fibre Channel optical connections through SFPs to the two interface cards of each enclosure; each interface card contains an ASIC that bridges to 6 Gbps SAS for the 24 SAS drives, its own processor with flash and SRAM, debug ports, and redundant AC/DC power supplies.)
Already for the DS8700, the device adapters had been upgraded with a twice-as-fast
processor on the adapter card compared to DS8100 and DS8300, providing much higher
throughput on the device adapter. For the DS8800, additional enhancements to the DA bring
a major performance improvement compared to DS8700: For DA limited workloads, the
maximum IOps throughput (small blocks) per DA has been increased by 40% to 80%, and DA
sequential throughput in MB/s (large blocks) has increased by approximately 85% to 210%
from DS8700 to DS8800. For instance, a single DA under ideal workload conditions can
process a sequential large-block read throughput of up to 1600 MB/s. These improvements
are of value in particular when using Solid-State Drives (SSDs), but also give the DS8800
system very high sustained sequential throughput, for instance in High-Performance
Computing configurations.
Each port provides industry-leading throughput and I/O rates for FICON and FCP.
(Figure: the host adapter (HA) with its ports connecting to the host servers.)
With FC adapters that are configured for FICON, the DS8000 series provides the following
configuration capabilities:
Either fabric or point-to-point topologies
A maximum of 128 host adapter ports, depending on the DS8800 processor feature
A maximum of 509 logins per Fibre Channel port
A maximum of 8192 logins per storage unit
A maximum of 1280 logical paths on each Fibre Channel port
Access to all control-unit images over each FICON port
A maximum of 512 logical paths per control unit image
FICON host channels limit the number of devices per channel to 16,384. To fully access
65,280 devices on a storage unit, it is necessary to connect a minimum of four FICON host
channels to the storage unit. This way, by using a switched configuration, you can expose 64
control-unit images (16,384 devices) to each host channel.
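The arithmetic behind these numbers can be checked quickly (a sketch using only the figures quoted above):

import math

devices_per_cu_image = 256                 # devices per control-unit image
cu_images_per_channel = 64                 # control-unit images exposed per host channel
devices_per_channel = devices_per_cu_image * cu_images_per_channel
print(devices_per_channel)                 # 16384 devices addressable per FICON channel

total_devices = 65280                      # 255 LSSs x 256 devices on the storage unit
print(math.ceil(total_devices / devices_per_channel))   # minimum of 4 FICON host channels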
The front-end with the 8 Gbps ports scales up to 128 ports for a DS8800, using the 8-port
HBAs. This results in a theoretical aggregated host I/O bandwidth of 128 times 8 Gbps.
The 8 Gbps adapter ports can negotiate to 8, 4, or 2 Gbps (1 Gbps not possible). For
attachments to 1-Gbps hosts, use a switch in between.
While the DS8100 and DS8300 used the RIO-G connection between the clusters as a
high-bandwidth interconnection to the device adapters, the DS8800 and DS8700 use
dedicated PCI Express connections to the I/O enclosures and the device adapters. This
increases the bandwidth to the storage subsystem back-end by a factor of up to 16 times to a
theoretical bandwidth of 64 GB/s.
Figure 7-5 PCI Express connections to I/O enclosures
All I/O enclosures are equally served from either processor complex.
Figure 7-6 DS8800 scale performance linearly: view without disk subsystems
Although Figure 7-6 does not display the back-end part, it can be derived from the number of
I/O enclosures that the disk subsystem also doubles, as does everything else, when switching
from a DS8800 2-way system with four I/O enclosures to a DS8800 4-way system with eight
I/O enclosures. Doubling the number of processors and I/O enclosures also accounts for
doubling the potential throughput.
Again, note that a virtualization layer on top of this physical layout contributes to additional
performance potential.
7.2.1 End-to-end I/O priority: synergy with AIX and DB2 on System p
End-to-end I/O priority is a new addition, requested by IBM, to the SCSI T10 standard. This
feature allows trusted applications to override the priority given to each I/O by the operating
system. This is only applicable to raw volumes (no file system) and with the 64-bit kernel.
Currently, AIX supports this feature in conjunction with DB2. The priority is delivered to the
storage subsystem in the FCP Transport Header.
The priority of an AIX process can be 0 (no assigned priority) or any integer value from 1
(highest priority) to 15 (lowest priority). All I/O requests associated with a given process
inherit its priority value, but with end-to-end I/O priority, DB2 can change this value for critical
data transfers. At the DS8000, the host adapter gives preferential treatment to higher
priority I/O, improving performance for specific requests deemed important by the application,
such as requests that might be prerequisites for others, for example, DB2 logs.
With the implementation of cooperative caching, the AIX operating system allows trusted
applications, such as DB2, to provide cache hints to the DS8000. This improves the
performance of the subsystem by keeping more of the repeatedly accessed data within the
cache. Cooperative caching is supported in System p AIX with the Multipath I/O (MPIO) Path
Control Module (PCM) that is provided with the Subsystem Device Driver (SDD). It is only
applicable to raw volumes (no file system) and with the 64-bit kernel.
7.2.3 Long busy wait host tolerance: Synergy with AIX on System p
Another new addition to the SCSI T10 standard is SCSI long busy wait, which provides a way
for the target system to specify that it is busy and how long the initiator should wait before
retrying an I/O.
This information, provided in the Fibre Channel Protocol (FCP) status response, prevents the
initiator from retrying too soon. This in turn reduces unnecessary requests and potential I/O
failures due to exceeding a set threshold for the number of retries. IBM System p AIX
supports SCSI long busy wait with MPIO, and it is also supported by the DS8000.
You can approach this task from the disk side and look at some basic disk figures. Current
SAS 15K RPM disks, for example, provide an average seek time of approximately 3.1 ms and
an average latency of 2 ms. For transferring only a small block, the transfer time can be
neglected. This is an average 5.1 ms per random disk I/O operation or 196 IOPS. A combined
number of eight disks (as is the case for a DS8000 array) will thus potentially sustain 1568
IOPS when spinning at 15 K RPM. Reduce the number by 12.5% when you assume a spare
drive in the eight pack.
Back on the host side, consider an example with 1000 IOPS from the host, a read-to-write
ratio of 3 to 1, and 50% read cache hits. This leads to the following IOPS numbers:
750 read IOPS.
375 read I/Os must be read from disk (based on the 50% read cache hit ratio).
250 writes with RAID 5 results in 1,000 disk operations due to the RAID 5 write penalty
(read old data and parity, write new data and parity).
This totals to 1375 disk I/Os.
With 15K RPM DDMs doing 1000 random IOPS from the server, we actually do 1375 I/O
operations on disk compared to a maximum of 1440 operations for 7+P configurations or
1260 operations for 6+P+S configurations. Thus, 1000 random I/Os from a server with a
standard read-to-write ratio and a standard cache hit ratio saturate the disk drives. We made
the assumption that server I/O is purely random. When there are sequential I/Os,
track-to-track seek times are much lower and higher I/O rates are possible. We also assumed
that reads have a hit ratio of only 50%. With higher hit ratios, higher workloads are possible.
This shows the importance of intelligent caching algorithms as used in the DS8000.
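The following Python fragment reproduces this sizing arithmetic as a sketch, using only the example figures from the text (15K RPM drives, 1000 host IOPS, a 3 to 1 read-to-write ratio, a 50% read cache hit ratio, and a RAID 5 write penalty of four disk operations per write):

seek_ms, latency_ms = 3.1, 2.0
per_drive_iops = round(1000 / (seek_ms + latency_ms))   # ~196 random IOPS per 15K RPM drive
print(per_drive_iops, 8 * per_drive_iops)               # 196 per drive, 1568 for eight drives

host_iops = 1000
reads, writes = host_iops * 3 // 4, host_iops * 1 // 4  # 750 reads, 250 writes
read_misses = reads // 2                                # 50% read cache hits: 375 reads go to disk
raid5_write_ops = writes * 4                            # read old data and parity, write new data and parity
print(read_misses + raid5_write_ops)                    # 1375 disk I/Os for 1000 host I/Os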
Important: When sizing a storage subsystem, you should consider the capacity and the
number of disk drives needed to satisfy the performance requirements.
For a single disk drive, various disk vendors provide the disk specifications on their websites.
Because the access times are the same for disks of the same RPM speed but the capacities
differ, the I/O density differs as well. 146 GB 15K RPM disk drives can be used for access
densities up to, and slightly over, 1 I/O per second per GB. For 450 GB drives, it is
approximately 0.5 I/O per second per GB. While this discussion is theoretical in approach, it
provides a first estimate.
After the speed of the disk has been decided, the capacity can be calculated based on your
storage capacity needs and the effective capacity of the RAID configuration you will use.
Refer to Table 8-9 on page 173 for information about calculating these needs.
Important: SATA drives are not the appropriate option for every storage requirement. For
many enterprise applications, and certainly mission-critical and production applications,
SAS (or Fibre Channel) disks remain the best choice.
SATA disk drives are a cost-efficient storage option for lower intensity storage workloads and
are available with the DS8700.
The DS8000 series cache is organized in 4 KB pages called cache pages or slots. This unit of
allocation (which is smaller than the values used in other storage systems) ensures that small
I/Os do not waste cache memory.
The decision to copy some amount of data into the DS8000 cache can be triggered from two
policies: demand paging and prefetching.
Demand paging means that eight disk blocks (a 4K cache page) are brought in only on a
cache miss. Demand paging is always active for all volumes and ensures that I/O patterns
with some locality discover at least some recently used data in the cache.
For prefetching, the cache management uses tracks. A track is a set of 128 disk blocks (16
cache pages). To detect a sequential access pattern, counters are maintained with every
track to record whether a track has been accessed together with its predecessor. Sequential
prefetching becomes active only when these counters suggest a sequential access pattern. In
this manner, the DS8000 monitors application read-I/O patterns and dynamically determines
whether it is optimal to stage into cache:
Just the page requested
That page requested plus the remaining data on the disk track
An entire disk track (or a set of disk tracks), which has not yet been requested
The decision of when and what to prefetch is made in accordance with the Adaptive
Multi-stream Prefetching (AMP) algorithm, which dynamically adapts the amount and timing
of prefetches optimally on a per-application basis (rather than a system-wide basis). AMP is
described further in 7.4.2, “Adaptive Multi-stream Prefetching” on page 132.
To decide which pages are evicted when the cache is full, sequential and random
(non-sequential) data is separated into different lists. Figure 7-7 illustrates the SARC
algorithm for random and sequential data.
(Figure 7-7: the SARC algorithm — separate RANDOM and SEQ lists, each managed from MRU to LRU, with a dynamically adapted desired size for the SEQ list.)
A page that has been brought into the cache by simple demand paging is added to the head
(Most Recently Used, MRU) of the RANDOM list. Without further I/O access, it moves down to
the bottom (Least Recently Used, LRU). A page that has been brought into the cache by a
sequential access or by sequential prefetching is added to the MRU head of the SEQ list and
then moves down that list. Additional rules control the migration of pages between the lists so
as to not keep the same pages in memory twice.
Additionally, the algorithm dynamically modifies the sizes of the two lists and the rate at which
the sizes are adapted. In a steady state, pages are evicted from the cache at the rate of
cache misses. A larger (respectively, a smaller) rate of misses effects a faster (respectively, a
slower) rate of adaptation.
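A highly simplified Python sketch of this two-list idea follows (illustrative only; the real SARC adapts the SEQ target size continuously, which is not modeled here):

from collections import OrderedDict

class SarcSketch:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lists = {"RANDOM": OrderedDict(), "SEQ": OrderedDict()}
        self.seq_target = capacity // 2            # adapted dynamically in the real algorithm

    def _evict(self):
        # Evict from SEQ if it exceeds its target size, otherwise from RANDOM.
        victim = "SEQ" if len(self.lists["SEQ"]) > self.seq_target else "RANDOM"
        self.lists[victim].popitem(last=False)     # drop the LRU page of that list

    def access(self, page, sequential=False):
        for lst in self.lists.values():            # a page lives in at most one list
            lst.pop(page, None)
        self.lists["SEQ" if sequential else "RANDOM"][page] = True   # insert at the MRU end
        if sum(len(l) for l in self.lists.values()) > self.capacity:
            self._evict()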
Other implementation details take into account the relationship of read and write (NVS)
cache, efficient destaging, and the cooperation with Copy Services. In this manner, the
DS8000 cache management goes far beyond the usual variants of the Least Recently
Used/Least Frequently Used (LRU/LFU) approaches.
In the DS8800 and DS8700, Adaptive Multi-stream Prefetching (AMP), which was developed by
IBM Research, manages the SEQ list. AMP is an autonomic, workload-responsive, self-optimizing
prefetching technology that adapts both the amount of prefetch and the timing of prefetch on
a per-application basis to maximize the performance of the system. The AMP algorithm
solves two problems that plague most other prefetching algorithms:
Prefetch wastage occurs when prefetched data is evicted from the cache before it can be
used.
Cache pollution occurs when less useful data is prefetched instead of more useful data.
By wisely choosing the prefetching parameters, AMP provides optimal sequential read
performance and maximizes the aggregate sequential read throughput of the system. The
amount prefetched for each stream is dynamically adapted according to the application's
needs and the space available in the SEQ list. The timing of the prefetches is also
continuously adapted for each stream to avoid misses and at the same time avoid any cache
pollution.
SARC and AMP play complementary roles. While SARC is carefully dividing the cache
between the RANDOM and the SEQ lists so as to maximize the overall hit ratio, AMP is
managing the contents of the SEQ list to maximize the throughput obtained for the sequential
workloads. While SARC impacts cases that involve both random and sequential workloads,
AMP helps any workload that has a sequential read component, including pure sequential
read workloads.
AMP dramatically improves performance for common sequential and batch processing
workloads. It also provides excellent performance synergy with DB2 by preventing table
scans from being I/O bound and improves performance of index scans and DB2 utilities like
Copy and Recover. Furthermore, AMP reduces the potential for array hot spots, which result
from extreme sequential workload demands.
The CLOCK algorithm exploits temporal ordering. It keeps a circular list of pages in memory,
with the “hand” pointing to the oldest page in the list. When a page needs to be inserted in the
cache, the R (recency) bit at the “hand's” location is inspected. If R is zero, the new page is
put in place of the page the “hand” points to, and its R bit is set to 1; otherwise, the R bit is
cleared (set to zero), the clock hand moves one step clockwise, and the process is repeated
until a page is replaced.
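A compact Python sketch of this policy (illustrative only):

def clock_insert(pages, recency, hand, new_page):
    # pages: circular list of cached pages; recency: parallel list of R bits.
    while True:
        if recency[hand] == 0:
            pages[hand] = new_page                 # replace the page the hand points to
            recency[hand] = 1
            return (hand + 1) % len(pages)         # hand moves on after the replacement
        recency[hand] = 0                          # clear R: the page gets a second chance
        hand = (hand + 1) % len(pages)

pages, recency = ["a", "b", "c", "d"], [1, 0, 1, 1]
hand = clock_insert(pages, recency, 0, "e")
print(pages, recency, hand)                        # ['a', 'e', 'c', 'd'] [0, 1, 1, 1] 2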
The CSCAN algorithm exploits spatial ordering. The CSCAN algorithm is the circular variation
of the SCAN algorithm. The SCAN algorithm tries to minimize the disk head movement when
servicing read and write requests. It maintains a sorted list of pending requests along with the
position on the drive of each request. Requests are processed in the current direction of the
disk head until it reaches the edge of the disk. At that point, the direction changes. In the
CSCAN algorithm, the requests are always served in the same direction. Once the head has
arrived at the outer edge of the disk, it returns to the beginning of the disk and services the
new requests in this one direction only. This results in more uniform performance for all head
positions.
The basic idea of IWC is to maintain a sorted list of write groups, as in the CSCAN algorithm.
The smallest and the highest write groups are joined, forming a circular queue. The additional
new idea is to maintain a recency bit for each write group, as in the CLOCK algorithm. A write
group is always inserted in its correct sorted position, and its recency bit is initially set to zero.
When a write hit occurs, the recency bit is set to one. A destage pointer scans the circular list
looking for destage victims. The algorithm only allows destaging of write groups whose
recency bit is zero. Write groups with a recency bit of one are skipped and their recency bit is
reset to zero, which gives an “extra life” to those write groups that have been hit since the last
time the destage pointer visited them. Figure 7-8 gives an idea of how this mechanism works.
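The destage selection can be sketched in a few lines of Python (a simplification, not the DS8000 implementation):

def iwc_pick_victim(write_groups, recency, pointer):
    # write_groups: circularly sorted list of write groups; recency: parallel list of bits.
    while True:
        if recency[pointer] == 0:
            return write_groups[pointer], (pointer + 1) % len(write_groups)
        recency[pointer] = 0                       # skipped once: the extra life is used up
        pointer = (pointer + 1) % len(write_groups)

groups  = [10, 20, 30, 40]                         # sorted by position on the rank
recency = [0, 1, 0, 0]
victim, pointer = iwc_pick_victim(groups, recency, 1)
print(victim, pointer, recency)                    # 30 3 [0, 0, 0, 0]: group 20 was skipped and aged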
In the DS8000 implementation, an IWC list is maintained for each rank. The dynamically
adapted size of each IWC list is based on workload intensity on each rank. The rate of
destage is proportional to the portion of NVS occupied by an IWC list (the NVS is shared
across all ranks in a cluster). Furthermore, destages are smoothed out so that write bursts
are not translated into destage bursts.
In summary, IWC has better or comparable peak throughput to the best of CSCAN and
CLOCK across a wide gamut of write cache sizes and workload configurations. In addition,
even at lower throughputs, IWC has lower average response times than CSCAN and CLOCK.
Important: Balance your ranks and Extent Pools between the two DS8000 servers. Half of
the ranks should be managed by each server (see Figure 7-9).
Figure 7-9 Ranks in a multirank Extent Pool configuration balanced across DS8000 servers
All disks in the storage disk subsystem should have roughly equivalent utilization. Any disk
that is used more than the other disks will become a bottleneck to performance. A practical
method is to use predefined Storage Pool Striping. Alternatively, make extensive use of
volume-level striping across disk drives.
For optimal performance, your data should be spread across as many hardware resources as
possible. RAID 5, RAID 6, or RAID 10 already spreads the data across the drives of an array,
but this is not always enough. There are two approaches to spreading your data across even
more disk drives:
Storage Pool Striping
Striping at the host level
The easiest way to stripe is to use Extent Pools with more than one rank and use Storage
Pool Striping when allocating a new volume (see Figure 7-10). This striping method is
independent of the operating system.
(Figure 7-10: Storage Pool Striping — an 8 GB LUN allocated as 1 GB extents striped across the four ranks of a multirank Extent Pool.)
In 7.3, “Performance considerations for disk drives” on page 129, we discuss how many
random I/Os can be performed for a standard workload on a rank. If a volume resides on just
one rank, this rank’s I/O capability also applies to the volume. However, if this volume is
striped across several ranks, the I/O rate to this volume can be much higher.
The total number of I/Os that can be performed on a given set of ranks does not change with
Storage Pool Striping.
On the other hand, if you stripe all your data across all ranks and you lose just one rank, for
example, because you lose two drives at the same time in a RAID 5 array, all of your data in
that Extent Pool is lost.
Tip: Use Storage Pool Striping and Extent Pools with a minimum of four to eight ranks of
the same characteristics (RAID type and disk RPM) to avoid hot spots on the disk drives.
Figure 7-11 shows a good configuration. The ranks are attached to DS8000 server 0 and
server 1 in a half and half configuration, ranks on different device adapters are used in a
multi-rank Extent Pool, and there are separate Extent Pools for 6+P+S and 7+P ranks.
There is no reorg function for Storage Pool Striping. If you have to expand an Extent Pool, the
extents are not rearranged.
Tip: If you have to expand a nearly full Extent Pool, it is better to add several ranks at the
same time instead of just one rank, to benefit from striping across the newly added ranks.
Other examples for applications that stripe data across the volumes include the SAN Volume
Controller (SVC) and IBM System Storage N series Gateways.
Do not expect that double striping (at the storage subsystem level and at the host level) will
enhance performance any further.
LVM striping is a technique for spreading the data in a logical volume across several disk
drives in such a way that the I/O capacity of the disk drives can be used in parallel to access
data on the logical volume. The primary objective of striping is high performance reading and
writing of large sequential files, but there are also benefits for random access.
(Figure 7-12: optimal distribution of eight logical volumes — volumes in LSS 00 and LSS 01 spread across server 0 and server 1, DA pairs 1 and 2, and Extent Pools FB-0b, FB-1b, FB-0d, and FB-1d.)
Figure 7-12 shows an optimal distribution of eight logical volumes within a DS8000. Of
course, you could have more Extent Pools and ranks, but when you want to distribute your
data for optimal performance, you should make sure that you spread it across the two
servers, across different device adapter pairs, and across several ranks.
To be able to create very large logical volumes or to be able to use Extent Pool striping, you
must consider having Extent Pools with more than one rank.
If you use multirank Extent Pools and you do not use Storage Pool Striping, you have to be
careful where to put your data, or you can easily unbalance your system (see the right side of
Figure 7-13).
(Figure 7-13: balanced data placement using single-rank Extent Pools with host-level LVM striping over the LUNs, compared with a multirank Extent Pool without Storage Pool Striping, where 2 GB LUNs end up concentrated on individual ranks.)
Combining Extent Pools made up of one rank and then LVM striping over LUNs created on
each Extent Pool will offer a balanced method to evenly spread data across the DS8000
without using Extent Pool striping, as shown on the left side of Figure 7-13.
The stripe size has to be large enough to keep sequential data relatively close together, but
not so large that all of the data ends up located on a single array.
We recommend that you define stripe sizes using your host’s logical volume manager in the
range of 4 MB to 64 MB. You should choose a stripe size close to 4 MB if you have a large
number of applications sharing the arrays and a larger size when you have few servers or
applications sharing the arrays.
The dynamic I/O load-balancing option (default) of SDD is recommended to ensure better
performance because:
SDD automatically adjusts data routing for optimum performance. Multipath load
balancing of data flow prevents a single path from becoming overloaded, causing
input/output congestion that occurs when many I/O operations are directed to common
devices along the same input/output path.
The path to use for an I/O operation is chosen by estimating the load on each adapter to
which each path is attached. The load is a function of the number of I/O operations
currently in process. If multiple paths have the same load, a path is chosen at random from
those paths.
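A hedged sketch of this path selection policy follows (illustrative only; the path names are made up and this is not SDD code):

import random

def pick_path(paths_in_flight):
    # paths_in_flight: path name -> number of I/O operations currently in process on its adapter.
    least = min(paths_in_flight.values())
    candidates = [p for p, load in paths_in_flight.items() if load == least]
    return random.choice(candidates)               # ties are broken at random

print(pick_path({"path0": 4, "path1": 2, "path2": 2}))   # path1 or path2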
For more information about the SDD, see the IBM Redbooks publication DS8000: Host
Attachment and Interoperability, SG24-8887.
When the number of commands sent to the DS8000 port has exceeded the maximum
number of commands that the port can queue, the port has to discard these additional
commands.
This operation is a normal error recovery operation in the Fibre Channel protocol to prevent
further damage. The normal recovery is a 30-second timeout for the server; after that time,
the command is resent.
Automatic Port Queues is a mechanism the DS8800 uses to self-adjust the queue based on
the workload. This allows higher port queue oversubscription while maintaining a fair share
for the servers and the accessed LUNs.
The port whose queue is filling up goes into SCSI Queue Fill mode, where it accepts no
additional commands, in order to slow down the I/Os.
By avoiding error recovery and the 30 second blocking SCSI Queue Full recovery interval, the
overall performance is better with Automatic Port Queues.
The DS8000 host adapters have no server affinity, but the device adapters and the rank have
server affinity. Figure 7-14 shows a host that is connected through two FC adapters to two
DS8000 host adapters located in different I/O enclosures.
(Figure 7-14: a host connected through two FC adapters to DS8000 host adapters in different I/O enclosures, reading LUN1; the device adapters (DAs) have an affinity to either server 0 or server 1.)
The host has access to LUN1, which is created in the Extent Pool 1 controlled by the DS8000
server 0. The host system sends read commands to the storage server.
When a read command is executed, one or more logical blocks are transferred from the
selected logical drive through a host adapter over an I/O interface to the host. In this case, the
read request is handled by server 0, because LUN1 resides in an Extent Pool managed by
server 0.
Each I/O enclosure can hold up to four HAs. The example in Figure 7-15 shows only eight
FICON channels connected to the first two I/O enclosures. Not shown is a second FICON
director, which connects in the same fashion to the remaining two I/O enclosures to provide a
total of 16 FICON channels in this particular example. The DS8800 disk storage system
provides up to 128 FICON channel ports. Again, note the efficient FICON implementation in
the DS8000 FICON ports.
(Figure 7-15: a z/OS Parallel Sysplex (z/OS 1.7+) attached through FICON directors to host adapters (HAs) in the I/O enclosures, which connect through PCIe to the server 0 and server 1 processor complexes.)
The ability to do multiple I/O requests to the same volume nearly eliminates I/O supervisor
queue delay (IOSQ) time, one of the major components in z/OS response time. Traditionally,
access to highly active volumes has involved manual tuning, splitting data across multiple
volumes, and more. With PAV and the Workload Manager (WLM), you can almost forget
about manual performance tuning. WLM manages PAVs across all the members of a Sysplex
too. In this way, the DS8000 in conjunction with z/OS has the ability to meet the performance
requirements on its own.
Figure 7-16 illustrates the traditional z/OS behavior without PAV, where subsequent
simultaneous I/Os to volume 100 are queued while volume 100 is still busy with a preceding
I/O.
(Figure 7-16: traditional z/OS behavior without PAV — each System z image can send only one I/O at a time to volume 100.)
Not only were the z/OS systems limited to processing only one I/O at a time, but also the
storage subsystems accepted only one I/O at a time from different system images to a shared
volume, for the same reasons previously mentioned; see Figure 7-16 on page 143.
(Figure: z/OS behavior with PAV — a single z/OS image issues multiple parallel I/Os to the same physical volume.)
This feature that allows parallel I/Os to a volume from one host is called Parallel Access
Volume (PAV).
Alias addresses have to be defined to the DS8000 and to the I/O definition file (IODF). This
association is predefined, and you can add new aliases nondisruptively. Still, the association
between base volumes and aliases is not fixed: with dynamic PAV, an alias can be reassigned
to another base volume when needed.
For guidelines about PAV definition and support, see DS8000: Host Attachment and
Interoperability, SG24-8887.
PAV is an optional licensed function on the DS8000 series. PAV also requires the purchase of
the FICON Attachment feature.
z/OS recognizes the aliases that are initially assigned to a base during the Nucleus
Initialization Program (NIP) phase. If dynamic PAVs are enabled, the WLM can reassign an
alias to another base by instructing the IOS to do so when necessary; see Figure 7-18.
(Figure 7-18: the WLM instructs the IOS to assign an alias to base 100.)
z/OS Workload Manager in Goal mode tracks system workloads and checks whether
workloads are meeting their goals as established by the installation.
WLM also keeps track of the devices utilized by the different workloads, accumulates this
information over time, and broadcasts it to the other systems in the same sysplex. If WLM
determines that any workload is not meeting its goal due to IOS queue (IOSQ) time, WLM will
attempt to find an alias device that can be reallocated to help this workload achieve its goal;
see Figure 7-19.
7.7.4 HyperPAV
Dynamic PAV requires the WLM to monitor the workload and goals. It takes some time until
the WLM detects an I/O bottleneck. Then the WLM must coordinate the reassignment of alias
addresses within the sysplex and the DS8000. All of this takes time, and if the workload is
fluctuating or has a burst character, the job that caused the overload of one volume could
have ended before the WLM had reacted. In these cases, the IOSQ time was not eliminated
completely.
(Figure: HyperPAV in a z/OS sysplex. Applications in each z/OS image do I/O to base volumes through base UCBs 0801 and 0802 and alias UCBs 08F0 to 08F3; in logical subsystem (LSS) 0800, the base volumes (UA=01 and UA=02) share the aliases (UA=F0 to F3), which are kept in a pool.)
This capability also allows different HyperPAV hosts to use one alias to access different
bases, which reduces the number of alias addresses required to support a set of bases in an
IBM System z environment, with no latency in assigning an alias to a base. This functionality
is also designed to enable applications to achieve better performance than is possible with
the original PAV feature alone, while also using the same or fewer operating system
resources.
Benefits of HyperPAV
HyperPAV has been designed to:
Provide an even more efficient Parallel Access Volumes (PAV) function
Help clients who implement larger volumes to scale I/O rates without the need for
additional PAV alias definitions
Exploit the FICON architecture to reduce impact, improve addressing efficiencies, and
provide storage capacity and performance improvements:
– More dynamic assignment of PAV aliases improves efficiency
– Number of PAV aliases needed might be reduced, taking fewer from the 64 K device
limitation and leaving more storage for capacity use
Enable a more dynamic response to changing workloads
Simplify the management of aliases
Make it easier for users to decide to migrate to larger volume sizes
Our rule of thumb is that the number of aliases required can be approximated by the peak of the following calculation: the I/O rate multiplied by the average response time. For example, if the peak of this calculation occurs when the I/O rate is 2000 I/Os per second and the average response time is 4 ms (0.004 sec), then the result of the calculation is:
2000 I/Os per second x 0.004 sec = 8
This means that the average number of I/O operations executing at one time for that LCU
during the peak period is eight. Therefore, eight aliases should be able to handle the peak I/O
rate for that LCU. However, because this calculation is based on the average during the
RMF™ period, you should multiply the result by two, to accommodate higher peaks within
that RMF interval. So in this case, the recommended number of aliases would be:
2 x 8 = 16
Note: For more details about MSS, see Multiple Subchannel Sets: An Implementation
View, REDP-4387, found at:
http://www.redbooks.ibm.com/abstracts/redp4387.html?Open
HyperPAV would help minimize the Input/Output Supervisor Queue (IOSQ) Time. If you still
see IOSQ Time, then there are two possible reasons:
More aliases are required to handle the I/O load than the number of aliases defined in the LCU.
A Device Reserve is issued against the volume. A Device Reserve makes the volume unavailable to the next I/O, causing the next I/O to be queued. This delay is recorded as IOSQ Time.
Figure 7-21 z/VM support of PAV volumes dedicated to a single guest virtual machine
In this way, PAV provides z/VM environments with the benefit of greater I/O performance (throughput) by reducing I/O queuing.
With the small programming enhancement (SPE) introduced with z/VM 5.2.0 and APAR
VM63952, additional enhancements are available when using PAV with z/VM. For more
information, see 10.4, “z/VM considerations” in DS8000: Host Attachment and
Interoperability, SG24-8887.
The DS8000 series accepts multiple I/O requests from different hosts to the same device
address, increasing parallelism and reducing channel impact. In older storage disk
subsystems, a device had an implicit allegiance, that is, a relationship created in the control
unit between the device and a channel path group when an I/O operation is accepted by the
device. The allegiance causes the control unit to guarantee access (no busy status
With Multiple Allegiance, the requests are accepted by the DS8000 and all requests are
processed in parallel, unless there is a conflict when writing to the same data portion of the
CKD logical volume, as shown in Figure 7-23.
Figure 7-23 Parallel I/O capability with Multiple Allegiance
Nevertheless, good application software access patterns can improve global parallelism by
avoiding reserves, limiting the extent scope to a minimum, and setting an appropriate file
mask, for example, if no write is intended.
In systems without Multiple Allegiance, all except the first I/O request to a shared volume are
rejected, and the I/Os are queued in the System z channel subsystem, showing up in Device
Busy Delay and PEND time in the RMF DASD Activity reports. Multiple Allegiance will allow
multiple I/Os to a single volume to be serviced concurrently. However, a device busy condition
can still happen. This will occur when an active I/O is writing a certain data portion on the
volume and another I/O request comes in and tries to either read or write to that same data.
To ensure data integrity, those subsequent I/Os will get a busy condition until that previous I/O
is finished with the write operation.
Priority queuing
I/Os from different z/OS system images can be queued in a priority order. The z/OS Workload Manager makes use of this priority to favor I/Os from one system over the others. You can activate I/O priority queuing in the WLM Service Definition settings. WLM has to run in Goal mode.
When a channel program with a higher priority comes in and is put in front of the queue of
channel programs with lower priority, the priority of the low-priority programs will be
increased; see Figure 7-24. This prevents high-priority channel programs from dominating
lower priority ones and gives each system a fair share.
(Figure 7-24: WLM on System A and System B assigns a priority, for example X'21', to I/Os that have to be queued in the I/O queue.)
Figure 7-25 shows Extended Distance FICON (EDF) performance comparisons for a
sequential write workload. The workload consists of 64 jobs performing 4 KB sequential
writes to 64 data sets with 1113 cylinders each, which all reside on one large disk volume.
There is one SDM configured with a single, non-enhanced reader to handle the updates.
When turning the XRC Emulation off (Brocade emulation in the diagram), the performance
drops significantly, especially at longer distances. However, after the Extended Distance
FICON (Persistent IU Pacing) function is installed, the performance returns to where it was
with XRC Emulation on.
Figure 7-25 Extended Distance FICON with small data blocks sequential writes on one SDM reader
Figure 7-26 shows EDF performance, this time used in conjunction with Multiple Reader
support. There is one SDM configured with four enhanced readers.
Figure 7-26 Extended Distance FICON with small data blocks sequential writes on four SDM readers
High Performance FICON for z (zHPF) is an enhanced FICON protocol and system I/O
architecture that results in improvements for small block transfers (a track or less) to disk
using the device independent random access method. Instead of Channel Command Words (CCWs), Transport Control Words (TCWs) can be used. I/O that is using the Media Manager,
like DB2, PDSE, VSAM, zFS, VTOC Index (CVAF), Catalog BCS/VVDS, or Extended Format
SAM, will benefit from zHPF.
In situations where zHPF is the exclusive access method in use, it can improve FICON I/O throughput on a single DS8000 port by 100%. Realistic workloads with a mix of data set transfer sizes can see a 30% to 70% increase in FICON I/Os utilizing zHPF, resulting in up to a 10% to 30% channel utilization savings.
Although clients can see I/Os complete faster as the result of implementing zHPF, the real
benefit is expected to be obtained by using fewer channels to support existing disk volumes,
or increasing the number of disk volumes supported by existing channels.
Only the System z196 (zEnterprise) or z10 processors support zHPF, and only on the FICON Express8, FICON Express4, or FICON Express2 adapters. The FICON Express adapters
are not supported. The required software is z/OS V1.7 with IBM Lifecycle Extension for z/OS
V1.7 (5637-A01), z/OS V1.8, z/OS V1.9, or z/OS V1.10 with PTFs, or z/OS 1.11 and higher.
For z/OS, after the PTFs are installed in the LPAR, you must then set ZHPF=YES in
IECIOSxx in SYS1.PARMLIB or issue the SETIOS ZHPF=YES command. ZHPF=NO is the
default setting.
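As a minimal sketch only (the IECIOSxx suffix 00 and the display command are assumptions, not taken from this book), the setting and its verification might look like this:
In SYS1.PARMLIB(IECIOS00): ZHPF=YES
From the z/OS console: SETIOS ZHPF=YES (to activate zHPF dynamically) and D IOS,ZHPF (to display the current setting)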
IBM suggests that clients use the ZHPF=YES setting after the required configuration changes
and prerequisites are met. For more information about zHPF in general, refer to:
http://www.ibm.com/systems/z/resources/faq/index.html
Figure 7-28 shows that on the base code, without this feature, going from 0 km to 20 km will
increase the service time by 0.4 ms. With the extended distance High Performance FICON,
the service time increase will be reduced to 0.2 ms.
(Figure 7-28: response time in milliseconds, from 0.00 to 1.80 ms, plotted against I/O rate from 0 to 25,000 for four configurations: Base at 0 km, Base at 20 km, Extended Distance capability at 0 km, and Extended Distance capability at 20 km.)
Review IBM System Storage DS8000 Introduction and Planning Guide, GC27-2297, for
additional information and details that you will need during the configuration and installation
process.
In this chapter, you will find information that will assist you with the planning and installation
activities. Additional information can be found in IBM System Storage DS8000 Introduction
and Planning Guide, GC27-2297.
A Storage Administrator should also coordinate requirements from the user applications and
systems to build a storage plan for the installation. This will be needed to configure the
storage after the initial hardware installation is complete.
The following people should be briefed and engaged in the planning process for the physical
installation:
Systems and Storage Administrators
Installation Planning Engineer
Building Engineer for floor loading and air conditioning and Location Electrical Engineer
Security Engineers for Business-to-Business VPN, LDAP, TKLM, and encryption
Administrator and Operator for monitoring and handling considerations
IBM or Business Partner Installation Engineer
Note that IBM System Storage DS8000 Introduction and Planning Guide, GC27-2297,
contains additional information about physical planning. You can download it from the
following address:
http://www.ibm.com/systems/storage/disk/ds8000/index.html
Table 8-1 lists the final packaged dimensions and maximum packaged weight of the DS8800
storage unit shipgroup.
Model 941 (4-way), pallet or crate: Height 207.5 cm (81.7 in.), Width 101.5 cm (40 in.), Depth 137.5 cm (54.2 in.); maximum weight 1378 kg (3036 lb)
Model 95E expansion unit, pallet or crate: Height 207.5 cm (81.7 in.), Width 101.5 cm (40 in.), Depth 137.5 cm (54.2 in.); maximum weight 1277 kg (2810 lb)
System Storage Productivity Center (SSPC), PSU (if ordered): Height 68.0 cm (26.8 in.), Width 65.0 cm (25.6 in.), Depth 105.0 cm (41.3 in.); maximum weight 47 kg (104 lb)
System Storage Productivity Center (SSPC), PSU, External HMC (if ordered): Height 68.0 cm (26.8 in.), Width 65.0 cm (25.6 in.), Depth 105.0 cm (41.3 in.); maximum weight 62 kg (137 lb)
External HMC container (if ordered as MES): Height 40.0 cm (17.7 in.), Width 65.0 cm (25.6 in.), Depth 105.0 cm (41.3 in.); maximum weight 32 kg (71 lb)
Attention: A fully configured model in the packaging can weigh over 1416 kg (3120 lb). Use of fewer than three persons to move it can result in injury.
Important: You need to check with the building engineer or other appropriate personnel
that the floor loading was properly considered.
Raised floors can better accommodate cabling layout. The power and interface cables enter
the storage unit through the rear side.
Figure 8-1 shows the location of the cable cutouts. You may use the following measurements
when you cut the floor tile:
Width: 45.7 cm (18.0 in.)
Depth: 16 cm (6.3 in.)
The storage unit location area should also cover the service clearance needed by IBM service
representatives when accessing the front and rear of the storage unit. You can use the
following minimum service clearances; the dimensions are also shown in Figure 8-2:
1. For the front of the unit, allow a minimum of 121.9 cm (48 in.) for the service clearance.
2. For the rear of the unit, allow a minimum of 76.2 cm (30 in.) for the service clearance.
3. For the sides of the unit, allow a minimum of 5.1 cm (2 in.) for the service clearance.
Power connectors
Each DS8800 base and expansion unit has redundant power supply systems. The two line
cords to each frame should be supplied by separate AC power distribution systems. Use a
60 A rating for the low voltage feature and a 25 A rating for the high voltage feature.
For more details regarding power connectors and line cords, see IBM System Storage
DS8000 Introduction and Planning Guide, GC27-2297.
Input voltage
The DS8800 supports a three-phase input voltage source. Table 8-4 lists the power
specifications for each feature code.
Nominal input voltage (3-phase): 200, 208, 220, or 240 RMS Vac for the low voltage feature; 380, 400, 415, or 480 RMS Vac for the high voltage feature
The values represent data that was obtained from typical systems configured as follows:
Base models that contain 15 disk drive sets (240 disk drives) and Fibre Channel adapters
Expansion models that contain 21 disk drive sets (336 disk drives) and Fibre Channel adapters
Air circulation for the DS8800 is provided by the various fans installed throughout the frame.
All of the fans on the DS8800 direct air flow from the front of the frame to the rear of the
frame. No air exhausts to the top of the machine. Using a directional air flow in this manner
allows for “cool aisles” and “hot aisles” to the front and rear of the machine.
Important: Make sure that air circulation for the DS8800 base unit and expansion units is
maintained free from obstruction to keep the unit operating in the specified temperature
range.
For more details regarding power control features, see IBM System Storage DS8000
Introduction and Planning Guide, GC27-2297.
The DS8800 supports one type of fiber adapter, the 8 Gb Fibre Channel/FICON PCI Express
adapter, which is offered in shortwave and longwave versions.
Fibre Channel/FICON
The DS8800 Fibre Channel/FICON adapter has four or eight ports per card. Each port
supports FCP or FICON, but not simultaneously. FCP is supported on point-to-point, fabric,
and arbitrated loop topologies. FICON is supported on point-to-point and fabric topologies.
Fabric components from various vendors, including IBM, CNT, McDATA, Brocade, and Cisco,
are supported by both environments.
The Fibre Channel/FICON shortwave Host Adapter, feature 3153, when used with 50 micron
multi-mode fibre cable, supports point-to-point distances of up to 300 meters. The Fibre
Channel/FICON longwave Host Adapter, when used with 9 micron single-mode fibre cable,
extends the point-to-point distance to 10 km for feature 3245 (4 Gb 10 km LW Host Adapter).
Feature 3243 (4 Gb LW Host Adapter) supports point-to-point distances up to 4 km.
Additional distance can be achieved with the use of appropriate SAN fabric components.
A 31-meter fiber optic cable or a 2-meter jumper cable can be ordered for each Fibre Channel
adapter port.
Note: The Remote Mirror and Copy functions use FCP as the communication link between
the IBM System Storage DS8000 series, DS6000s, and ESS Models 800 and 750.
For more details about IBM-supported attachments, see IBM System Storage DS8000 Host
Systems Attachment Guide, SC26-7917.
For the most up-to-date details about host types, models, adapters, and operating systems
supported by the DS8800 unit, refer to the DS8800 System Storage Interoperability Center at
the following address:
http://www.ibm.com/systems/support/storage/ssic/interoperability.wss
Check your local environment for the following DS8800 unit connections:
Hardware Management Console and network access
System Storage Productivity Center and network access
DSCLI console
DSCIMCLI console
Remote support connection
Remote power control
Storage area network connection
TKLM connection
LDAP connection
For more details about physical network connectivity, see IBM System Storage DS8000
User´s Guide, SC26-7915, and IBM System Storage DS8000 Introduction and Planning
Guide, GC27-2297.
A second, redundant external HMC is orderable and highly recommended for environments that use TKLM encryption management and Advanced Copy Services functions. The second HMC is external to the DS8800 rack(s) and consists of a mobile workstation similar to the primary HMC.
Tip: To ensure that the IBM service representative can quickly and easily access an
external HMC, place the external HMC rack within 15.2 m (50 ft) of the storage units that
are connected to it.
The management console can be connected to your network for remote management of your
system by using the DS Storage Manager web-based graphical user interface (GUI), the DS
Command-Line Interface (CLI), or using storage management software through the DS Open
API. In order to use the CLI to manage your storage unit, you need to connect the
management console to your LAN because the CLI interface is not available on the HMC. The
DS8800 can be managed from the HMC, or remotely using SSPC. Connecting the System
Storage Productivity Center (SSPC) to your LAN allows you to access the DS Storage
Manager GUI from any location that has network access.
To connect the management consoles (internal, and external if present) to your network, you
need to provide the following settings to your IBM service representative so that he can
configure the management consoles for attachment to your LAN:
Management console network IDs, host names, and domain name
Domain Name Server (DNS) settings (if you plan to use DNS to resolve network names)
Routing information
For additional information regarding the HMC planning, see Chapter 9, “DS8800 HMC
planning and setup” on page 177.
SSPC hardware
The SSPC (IBM model 2805-MC5) server contains the following hardware components:
x86 server 1U rack installed
Intel Quadcore Xeon processor running at 2.53GHz
8 GB of RAM
Two hard disk drives
Dual port Gigabit Ethernet
SSPC software
The IBM System Storage Productivity Center includes the following preinstalled (separately
purchased) software, running under a licensed Microsoft® Windows Server 2008 Enterprise
Edition R2 64-bit (included):
IBM Tivoli Storage Productivity Center V4.2.1 licensed as TPC Basic Edition (includes the
Tivoli Integrated Portal). A TPC upgrade requires that you purchase and add additional
TPC licenses.
DS CIM Agent Command-Line Interface (DSCIMCLI) 5.5
IBM Tivoli Storage Productivity Center for Replication (TPC-R) V4.2.1. To run TPC-R on
SSPC, you must purchase and add TPC-R licenses.
IBM DB2 Enterprise Server Edition 9.7 64-bit Enterprise.
Clients have the option to purchase and install the individual software components to create
their own SSPC server.
For details, see Chapter 12, “System Storage Productivity Center” on page 229, and IBM
System Storage Productivity Center Deployment Guide, SG24-7560.
Network connectivity
To connect the System Storage Productivity Center (SSPC) to your network, you need to
provide the following settings to your IBM service representative:
SSPC network IDs, host names, and domain name
Domain Name Server (DNS) settings (if you plan to use DNS to resolve network names)
Routing information
Several network ports need to be opened between the SSPC console and the DS8800 and the LDAP server if the SSPC is installed behind a firewall.
For details about the hardware and software requirements for the DSCLI, see IBM System
Storage DS8000: Command-Line Interface User´s Guide, SC26-7916.
For details about the configuration of the DSCIMCLI, see IBM DS Open Application
Programming Interface Reference, GC35-0516.
You can take advantage of the DS8800 remote support feature for outbound calls (Call Home
function) or inbound calls (remote service access by an IBM technical support
representative). You need to provide an analog telephone line for the HMC modem.
Note the following guidelines to assist in the preparation for attaching the DS8800 to the
client’s LAN:
1. Assign a TCP/IP address and host name to the HMC in the DS8800.
2. If email notification of service alert is allowed, enable the support on the mail server for the
TCP/IP addresses assigned to the DS8800.
3. Use the information that was entered on the installation worksheets during your planning.
It is best practice to use a service connection through the high-speed VPN network utilizing a secure Internet connection. You need to provide the network parameters for your HMC.
Your IBM System Support Representative (SSR) will need the configuration worksheet during
the configuration of your HMC. A worksheet is available in IBM System Storage DS8000
Introduction and Planning Guide, GC27-2297.
See Chapter 17, “Remote support” on page 363 for further discussion about remote support
connection.
In a System z environment, the host must have the Power Sequence Controller (PSC) feature
installed to have the ability to turn on and off specific control units, such as the DS8800. The
control unit is controlled by the host through the power control cable. The power control cable
comes with a standard length of 31 meters, so be sure to consider the physical distance
between the host and DS8800.
A SAN allows your single Fibre Channel host port to have physical access to multiple Fibre
Channel ports on the storage unit. You might need to establish zones to limit the access (and
provide access security) of host ports to your storage ports. Take note that shared access to a
storage unit Fibre Channel port might come from hosts that support a combination of bus
adapter types and operating systems.
The isolated TKLM server consists of the following hardware and software:
IBM System x3650 with L5420 processor
– Quad-core Intel Xeon® processor L5420 (2.5 GHz, 12 MB L2, 1.0 GHz FSB, 50 W)
– 6 GB memory
Note: No other hardware or software is allowed on this server. An isolated server must
only use internal disk for all files necessary to boot and have the TKLM key server
become operational.
Processor speed (minimum values): for Linux and Windows systems, a 2.66 GHz single processor; for AIX and Sun Solaris systems, 1.5 GHz (2-way)
Processor speed (suggested values): for Linux and Windows systems, 3.0 GHz dual processors; for AIX and Sun Solaris systems, 1.5 GHz (4-way)
AIX Version 5.3 (64-bit) and Version 6.1 are supported. For Version 5.3, use Technology Level 5300-04 and Service Pack 5300-04-02.
On Linux platforms, Tivoli Key Lifecycle Manager requires the following package:
compat-libstdc++-33-3.2.3-61 or higher
For more information regarding the required TKLM server and other requirements and
guidelines, see IBM System Storage DS8700 Disk Encryption Implementation and Usage
Guidelines, REDP-4500.
There are two network ports that need to be opened on a firewall to allow DS8800 connection
and have an administration management interface to the TKLM server. These ports are
defined by the TKLM administrator.
Typically, there is one LDAP server installed in the client environment to provide directory services. For details, see IBM System Storage DS8000: LDAP Authentication, REDP-4505.
If the LDAP server is isolated from the SSPC by a firewall, the LDAP port needs to be opened
in that firewall. There might also be a firewall between the SSPC and the DS8800 that needs
to be opened to allow LDAP traffic between them.
Make sure that you have a sufficient number of FCP paths assigned for your remote mirroring
between your source and target sites to address performance and redundancy issues. When
you plan to use both Metro Mirror and Global Copy modes between a pair of storage units, we
recommend that you use separate logical and physical paths for the Metro Mirror and another
set of logical and physical paths for the Global Copy.
Plan the distance between the primary and secondary storage units so that you can acquire fiber optic cables of the necessary length, and determine whether your Copy Services solution requires separate hardware, such as channel extenders or dense wavelength division multiplexing (DWDM).
The DDM sparing policies support the overconfiguration of spares. This possibility might be
useful for some installations, because it allows the repair of some DDM failures to be deferred
until a later repair action is required. See IBM System Storage DS8000 Introduction and
Planning Guide, GC27-2297 and 4.6.8, “Spare creation” on page 77 for more details about
the DS8800 sparing concepts.
Table 8-9 Disk drive set capacity for open systems and System z environments
Notes:
1. Effective capacities are in decimal gigabytes (GB). One GB is 1,000,000,000 bytes.
2. Although disk drive sets contain 16 drives, arrays use only eight drives. The effective capacity assumes that you have two arrays for each disk drive set.
An updated version of Capacity Magic (see “Capacity Magic” on page 580) will aid you in
determining the raw and net storage capacities and the numbers regarding the required
extents for each available type of RAID.
For more information about encrypted drives and inherent restrictions, see IBM System
Storage DS8700 Disk Encryption Implementation and Usage Guidelines, REDP-4500.
Planning for future growth normally suggests an increase in physical requirements, in your
installation area (including floor loading), electrical power, and environmental cooling.
A key feature that you can order for your dynamic storage requirement is the Standby
Capacity on Demand (CoD). This offering is designed to provide you with the ability to tap into
additional storage and is particularly attractive if you have rapid or unpredictable growth, or if
you simply want the knowledge that the extra storage will be there when you need it. Standby
CoD allows you to access the extra storage capacity when you need it through a
nondisruptive activity. For more information about Capacity on Demand, see 18.2, “Using
Capacity on Demand (CoD)” on page 383.
The HMC is the point where the DS8800 is connected to the client network. It provides the
services that the client needs to configure and manage the storage, and it also provides the
interface where service personnel will perform diagnostics and repair actions. The HMC is the
contact point for remote support, both modem and VPN.
SW1, the left switch, serves the black network; SW2, the right switch, serves the gray network.
The HMC’s public Ethernet port, shown as eth2 in Figure 9-1 on page 178, is where the client
connects to its network. The HMC’s private Ethernet ports, eth0 and eth3, are configured into
port 1 of each Ethernet switch to form the private DS8800 network. To interconnect two
DS8800 primary frames, FC1190 provides a pair of 31 m Ethernet cables to connect each
switch in the second base frame into port 2 of switches in the first frame. Depending on the
machine configuration, one or more ports might be unused on each switch.
Important: The internal Ethernet switches pictured in Figure 9-2 are for the DS8800
private network only. No client network connection should ever be made to the internal
switches. Client networks are connected to the HMCs directly.
The GUI and the CLI are comprehensive, easy-to-use interfaces for a storage administrator to
perform DS8800 management tasks to provision the storage arrays, manage application
users, and change some HMC options. The two can be used interchangeably, depending on
the particular task.
The DS Open API provides an interface for external storage management programs, such as
Tivoli Productivity Center (TPC), to communicate with the DS8800. It channels traffic through
the IBM System Storage Common Information Model (CIM) agent, a middleware application
that provides a CIM-compliant interface.
Older DS8000 family products used a service interface called WebSM. The DS8800 uses a
newer, faster interface called WebUI that can be used remotely over a VPN by support
personnel to check the health status or to perform service tasks.
To access the DS Storage Manager GUI through the SSPC, open a new browser window or
tab and type the following address:
http://<SSPC ipaddress>:9550/ITSRM/app/welcome.html
A more thorough description of setting up and logging into SSPC can be found in 12.2.1,
“Configuring SSPC for DS8800 remote GUI access” on page 233.
(The Fluxbox menu on the HMC console provides Terminals and Net submenus; the Net submenu contains the HMC Browser entry.)
3. The web browser starts with no address bar and a web page titled WELCOME TO THE
DS8000 MANAGEMENT CONSOLE appears, as shown in Figure 9-4.
See Chapter 14, “Configuration with the DS Command-Line Interface” on page 307 for more
information about using DS CLI, as only a few commands are covered in this section. See
IBM System Storage DS8000: Command-Line Interface User´s Guide, SC26-7916, for a
complete list of DS CLI commands.
Note: The DS CLI cannot be used locally at the DS8800 Hardware Management Console.
After the DS CLI has been installed on a workstation, you can use it by typing dscli in a
command prompt window. The DS CLI provides three command modes:
Interactive command mode
Script command mode
Single-shot command mode
Interactive mode
To enter the interactive mode of the DS CLI, type dscli in a command prompt window and
follow the prompts to log in, as shown in Example 9-1. After you are logged on, you can enter
DS CLI commands one at a time.
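As an illustration of the interactive flow, the following sketch uses placeholder values for the HMC address and user ID; the prompts paraphrase what the DS CLI asks for and are not captured output:
C:\Program Files\IBM\dscli> dscli
Enter the primary management console IP address: 10.10.10.1
Enter the secondary management console IP address:
Enter your username: admin
Enter your password:
dscli> lssi
dscli> quit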
To prepare a custom DS CLI profile, the file dscli.profile can be copied and then modified,
as shown in Example 9-2 on page 183. On a Windows workstation, save the file in the
directory C:\Program Files\IBM\dscli with the name lab8700.profile. The -cfg flag is used
at the dscli prompt to call this profile. Example 9-3 shows how to connect DS CLI to the
DS8800 HMC using this custom profile.
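The following sketch shows the kind of entries that are typically customized in the copied profile and how to start the DS CLI with it; the HMC address and storage image ID are placeholder values:
# Excerpt from lab8700.profile (illustrative values)
hmc1: 10.10.10.1
devid: IBM.2107-75ABCD1
username: admin
C:\Program Files\IBM\dscli> dscli -cfg lab8700.profile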
Script mode
If you already know exactly what commands you want to issue on the DS8800, multiple DS
CLI commands can be integrated into a script that can be executed by launching dscli with
the -script parameter. To call a script with DS CLI commands, use the following syntax in a
command prompt window of a Windows workstation:
dscli -script <script_filename> -hmc1 <ip-address> -user <userid> -passwd
<password>
In Example 9-4, the script file lssi.cli contains just one CLI command, that is, the lssi
command.
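Using the same placeholder HMC address and credentials as in the earlier sketches, the invocation might look like the following, with lssi.cli containing only the single lssi command:
C:\Program Files\IBM\dscli> type lssi.cli
lssi
C:\Program Files\IBM\dscli> dscli -script lssi.cli -hmc1 10.10.10.1 -user admin -passwd itso2011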
The Common Information Model (CIM) Agent for the DS8800 is Storage Management
Initiative Specification (SMI-S) 1.1-compliant. This agent is used by storage management
applications, such as Tivoli Productivity Center (TPC), Tivoli Storage Manager, and
VSS/VDS. Also, to comply with more open standards, the agent can be accessed by software
from third-party vendors, including VERITAS/Symantec, HP/AppIQ, EMC, and many other
applications at the SNIA Interoperability Lab. For more information, visit the following address:
http://www.snia.org/forums/smi/tech_programs/lab_program/
For the DS8800, the CIM agent is preloaded with the HMC code and is started when the HMC
boots. An active CIM agent only allows access to the DS8800s managed by the HMC on
which it is running. Configuration of the CIM agent must be performed by an IBM Service
representative using the DS CIM Command Line Interface (DSCIMCLI).
(The HMC WebUI welcome panel provides the HMC Management, Service Management, Status Overview, and Extra Information areas, along with Help and Logoff.)
Because the HMC WebUI is mainly a services interface, it will not be covered here. Further
information can be obtained through the Help menu.
Note: Applying increased feature activation codes is a concurrent action, but a license
reduction or deactivation is a disruptive action.
Maintenance windows
Even though the microcode update of the DS8800 is a nondisruptive action, any
prerequisites identified for the hosts (for example, patches, new maintenance levels, or
new drivers) could make it necessary to schedule a maintenance window. The host
environments can then be upgraded to the level needed in parallel to the microcode
update of the DS8800 taking place.
For more information about microcode upgrades, see Chapter 15, “Licensed machine code”
on page 343.
With the DS8800, the HMC has the ability to utilize the Network Time Protocol (NTP) service.
Customers can specify NTP servers on their internal network to provide the time to the HMC.
It is the client's responsibility to ensure that the NTP servers are working, stable, and accurate.
An IBM service representative will enable the HMC to use NTP servers, ideally at the time of
the initial DS8800 installation.
Note: Because of the many components and operating systems within the DS8800, time
and date setting is a maintenance activity that can only be done by the IBM service
representative.
SIM notification is only applicable for System z servers. It allows you to receive a notification
on the system console in case of a serviceable event. SNMP and email are the only
notification options for the DS8800.
Call Home is the capability of the HMC to contact IBM support to report a serviceable event.
Remote Services is the capability of IBM service representatives to connect to the HMC to
perform service tasks remotely. If allowed to do so by the setup of the client’s environment, an
IBM service support representative could connect to the HMC to perform detailed problem
analysis. The IBM service support representative can view error logs and problem logs, and
initiate trace or dump retrievals.
In the remainder of this section, we illustrate the steps required to configure the DS8800 HMC
eth2 port for IPv6:
1. Launch and log in to WebUI; refer to 9.2.4, “Web-based user interface” on page 184 for the
procedure.
2. In the HMC welcome window, select HMC Management, as shown in Figure 9-6.
The password of the admin user ID will need to be changed before it can be used. The GUI
will force you to change the password when you first log in. The DS CLI will allow you to log in
but will not allow you to issue any other commands until you have changed the password. As
an example, to change the admin user’s password to passw0rd, use the following DS CLI
command:
chuser -pw passw0rd admin
After you have issued that command, you can then issue other commands.
Note: The DS8800 supports the capability to use a Single Point of Authentication function
for the GUI and CLI through a centralized LDAP server. This capability requires a TPC
Version 4.1 server. For detailed information about LDAP based authentication, refer to IBM
System Storage DS8000: LDAP Authentication, REDP-4505.
User roles
During the planning phase of the project, a worksheet or a script file was established with a
list of all people who need access to the DS GUI or DS CLI. Note that a user can be assigned
to more than one group. At least one person should be assigned to each of the following roles
(user_id):
The Administrator (admin) has access to all HMC service methods and all storage image
resources, except for encryption functionality. This user authorizes the actions of the
Security Administrator during the encryption deadlock prevention and resolution process.
The Security Administrator (secadmin) has access to all encryption functions. secadmin
requires an Administrator user to confirm the actions taken during the encryption deadlock
prevention and resolution process.
The Physical operator (op_storage) has access to physical configuration service methods
and resources, such as managing storage complex, storage image, Rank, array, and
Extent Pool objects.
The Logical operator (op_volume) has access to all service methods and resources that
relate to logical volumes, hosts, host ports, logical subsystems, and Volume Groups,
excluding security methods.
The Monitor group has access to all read-only, nonsecurity HMC service methods, such
as list and show commands.
The Service group has access to all HMC service methods and resources, such as
performing code loads and retrieving problem logs, plus the privileges of the Monitor
group, excluding security methods.
The Copy Services operator has access to all Copy Services methods and resources, plus
the privileges of the Monitor group, excluding security methods.
No access prevents access to any service method or storage image resources. This group
is used by an administrator to temporarily deactivate a user ID. By default, this user group
is assigned to any user account in the security repository that is not associated with any
other user group.
Best practice: Do not set the values of chpass to 0, as this indicates that passwords never
expire and unlimited login attempts are allowed.
If access is denied for the administrator due to the number of invalid login attempts, a
procedure can be obtained from your IBM representative to reset the administrator’s
password. The password for each user account is forced to adhere to the following rules:
The length of the password must be between 6 and 16 characters.
It must begin and end with a letter.
It must have at least five letters.
It must contain at least one number.
It cannot be identical to the user ID.
It cannot be a previous password.
Note: User names and passwords are case sensitive. If you create a user name called
Anthony, you cannot log in using the user name anthony.
chuser
This command changes the password or group (or both) of an existing user ID. It is also
used to unlock a user ID that has been locked by exceeding the allowable login retry
count. The administrator could also use this command to lock a user ID. In Example 9-8,
we unlock the user, change the password, and change the group membership for a user
called JensW. He must use the chpass command when he logs in the next time.
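A sketch of such an invocation (the password and group shown are illustrative, and the flags assume the standard chuser parameters):
dscli> chuser -unlock -pw temp4now -group op_storage JensW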
lsuser
With this command, a list of all user IDs can be generated. In Example 9-9, we can see
three users and the admin account.
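A sketch of the kind of listing this produces; the user names and columns here are illustrative only, not captured output:
dscli> lsuser
Name    Group
=====================
admin   admin
JensW   op_storage
Robert  monitor
SecAdm  secadmin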
showuser
The account details of a user ID can be displayed with this command. In Example 9-10,
we list the details of the user Robert.
chpass
This command lets you change two password policies: password expiration (days) and
failed logins allowed. In Example 9-12, we change the expiration to 365 days and five
failed login attempts.
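Assuming the -expire and -fail parameters, such a change might look like this sketch:
dscli> chpass -expire 365 -fail 5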
showpass
This command lists the properties for passwords (Password Expiration days and Failed
Logins Allowed). In Example 9-13, we can see that passwords have been set to expire in
90 days and that four login attempts are allowed before a user ID is locked.
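A sketch of the command; the output labels shown are illustrative, not captured output:
dscli> showpass
Password Expiration (days): 90
Failed Logins Allowed: 4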
The next window shows you the users defined on the HMC. You can choose to add a new
user (select Select action → Add user) or modify the properties of an existing user. The
administrator can perform several tasks from this window:
Add User (The DS CLI equivalent is mkuser.)
Modify User (The DS CLI equivalent is chuser.)
Lock or Unlock User: Choice will toggle (The DS CLI equivalent is chuser.)
Delete User (The DS CLI equivalent is rmuser.)
Password Settings (The DS CLI equivalent is chpass.)
Note: If a user who is not in the Administrator group logs on to the DS GUI and goes to the
User Administration window, the user will only be able to see their own user ID in the list.
The only action they will be able to perform is to change their password.
Take special note of the new role of the Security Administrator (secadmin). This role was
created to separate the duties of managing the storage from managing the encryption for
DS8800 units that are shipped with Full Disk Encryption storage drives.
If you are logged in to the GUI as a Storage Administrator, you cannot create, modify, or
delete users of the Security Administrator role. Notice how the Security Administrator option
is disabled in the Add/Modify User window in Figure 9-13. Similarly, Security Administrators
cannot create, modify, or delete Storage Administrators. This is a new feature of the
microcode for the DS8800.
The DS8800 is capable of performing all storage duties while the HMC is down, but
configuration, error reporting, and maintenance capabilities become severely restricted. Any
organization with extremely high availability requirements should consider deploying an
external HMC.
Note: To help preserve Data Storage functionality, the internal and external HMCs are not
available to be used as general purpose computing resources.
Any changes made using one HMC are instantly reflected in the other HMC. There is no
caching of host data done within the HMC, so there are no cache coherency issues.
After you make these changes and save the profile, the DS CLI will be able to automatically
communicate through HMC2 in the event that HMC1 becomes unreachable. This change will
allow you to perform both configuration and Copy Services commands with full redundancy.
The following licensed functions are listed with their IBM 242x indicator feature numbers and their IBM 239x function authorization model and feature numbers:
Operating Environment License: 0700 and 70xx; 239x Model LFA, 703x/706x
High Performance FICON: 0709 and 7092; 239x Model LFA, 7092
Space Efficient FlashCopy: 0730 and 73xx; 239x Model LFA, 735x-736x
z/OS Global Mirror: 0760 and 76xx; 239x Model LFA, 765x-766x
z/OS Metro/Global Mirror Incremental Resync: 0763 and 76xx; 239x Model LFA, 768x-769x
Parallel Access Volumes: 0780 and 78xx; 239x Model LFA, 782x-783x
The DS8800 provides Enterprise Choice warranty options associated with a specific
machine type. The x in 242x designates the machine type according to its warranty period,
where x can be either 1, 2, 3, or 4. For example, a 2424-951 machine type designates a
DS8800 storage system with a four-year warranty period.
The x in 239x can either be 6, 7, 8, or 9, according to the associated 242x base unit
model. 2396 function authorizations apply to 2421 base units, 2397 to 2422, and so on.
For example, a 2399-LFA machine type designates a DS8800 Licensed Function
Authorization for a 2424 machine with a four-year warranty period.
The 242x licensed function indicator feature numbers enable the technical activation of the
function, subject to the client applying a feature activation code made available by IBM.
The 239x licensed function authorization feature numbers establish the extent of
authorization for that function on the 242x machine for which it was acquired.
With the DS8800 storage system, IBM offers value-based licensing for the Operating
Environment License. It is priced based on the disk drive performance, capacity, speed, and
These features are required in addition to the per TB OEL features (#703x-704x). For each
disk drive set, the corresponding number of value units must be configured, as shown in
Table 10-3.
Table 10-3 Value unit requirements based on drive size, type, and speed (columns: drive set feature number, drive size, drive type, drive speed, encryption drive, and value units required)
The HyperPAV license is a flat-fee, add-on license that requires the Parallel Access Volumes
(PAV) license to be installed.
The license for Space-Efficient FlashCopy does not require the ordinary FlashCopy (PTC)
license. As with the ordinary FlashCopy, the FlashCopy SE is licensed in tiers by gross
amount of TB installed. FlashCopy (PTC) and FlashCopy SE can be complementary licenses.
FlashCopy SE will serve to do FlashCopies with Track Space-Efficient (TSE) target volumes.
When also doing FlashCopies to standard target volumes, use the PTC license in addition.
Metro Mirror (MM license) and Global Mirror (GM) can be complementary features as well.
Note: For a detailed explanation of the features involved and the considerations you must
have when ordering DS8800 licensed functions, refer to these announcement letters:
IBM System Storage DS8800 series (IBM 242x)
IBM System Storage DS8800 series (M/T 239x) high performance flagship - Function
Authorizations.
Important: There is a special procedure to obtain the license key for the Full Disk
Encryption feature. It cannot be obtained from the DSFA website. Refer to IBM System
Storage DS8700 Disk Encryption, REDP-4500, for more information.
You can activate all license keys at the same time (for example, on initial activation of the
storage unit) or they can be activated individually (for example, additional ordered keys).
Before connecting to the IBM DSFA website to obtain your feature activation codes, ensure
that you have the following items:
The IBM License Function Authorization documents. If you are activating codes for a new
storage unit, these documents are included in the shipment of the storage unit. If you are
activating codes for an existing storage unit, IBM will send the documents to you in an
envelope.
A USB memory device can be used for downloading your activation codes if you cannot
access the DS Storage Manager from the system that you are using to access the DSFA
website. Instead of downloading the activation codes in softcopy format, you can also print
the activation codes and manually enter them using the DS Storage Manager GUI.
However, this is slow and error prone, because the activation keys are 32-character long
strings.
2. Select Storage Complexes to open the Storage Complexes Summary window, as shown
in Figure 10-2. From here, you can obtain the serial number of your DS8800 storage
image.
3. In the Storage Complexes Summary window, select the storage image by checking the
box to the left of it, and select Properties from the drop-down Select action menu in the
Storage Unit section (Figure 10-3).
4. The Storage Unit Properties window opens. Click the Advanced tab to display more
detailed information about the DS8800 storage image, as shown in Figure 10-4.
Machine signature
2. Click IBM System Storage DS8000 series. This brings you to the Select DS8000 series
machine window (Figure 10-6). Select the appropriate 242x Machine Type.
3. Enter the machine information collected in Table 10-4 on page 209 and click Submit. The
View machine summary window opens (Figure 10-7).
5. When you have entered the values, click Submit. The View activation codes window
opens, showing the license activation codes for the storage images (Figure 10-9). Print the
activation codes or click Download to save the activation codes in a file that you can later
import in the DS8800. The file contains the activation codes for two storage images (which
are currently not offered for DS8800).
Note: In most situations, the DSFA application can locate your 239x licensed function
authorization record when you enter the DS8800 (242x) serial number and signature.
However, if the 239x licensed function authorization record is not attached to the 242x
record, you must assign it to the 242x record using the Assign function authorization link
on the DSFA application. In this case, you need the 239x serial number (which you can find
on the License Function Authorization document).
The following activation activities are disruptive and require a machine IML or reboot of the
affected image:
Removal of a DS8800 licensed function to deactivate the function.
A lateral change or reduction in the license scope. A lateral change is defined as
changing the license scope from fixed block (FB) to count key data (CKD) or from CKD
to FB. A reduction is defined as changing the license scope from all physical capacity
(ALL) to only FB or only CKD capacity.
Attention: Before you begin this task, you must resolve any current DS8800 problems.
Contact IBM support for assistance in resolving these problems.
The easiest way to apply the feature activation codes is to download the activation codes from
the IBM Disk Storage Feature Activation (DSFA) website to your local computer and import
the file into the DS Storage Manager. If you can access the DS Storage Manager from the
same computer that you use to access the DSFA website, you can copy the activation codes
from the DSFA window and paste them into the DS Storage Manager window. The third
option is to manually enter the activation codes in the DS Storage Manager from a printed
copy of the codes.
Figure 10-10 DS8800 Storage Manager GUI: select Apply Activation Codes
2. The Apply Activation Codes window opens (Figure 10-11). If this is the first time that you
are applying the activation codes, the fields in the window are empty. In our example, there
is only a 19 TB Operating Environment License (OEL) for FB volumes. You have an option
to manually add an activation key by selecting Add Activation Key from the drop-down
Select action menu. The other option is to select Import Key File, which you used when
you downloaded a file with the activation key from the IBM DSFA site, as explained in
10.2.2, “Obtaining activation codes” on page 209.
3. The easiest way is to import the activation key from the file, as shown in Figure 10-12.
Figure 10-12 Apply Activation Codes by importing the key from the file
5. Your license is now listed in the table. In our example, there is one OEL license active, as
shown in Figure 10-14.
7. To view all the activation codes that have been applied, from My Work navigation pane on
the DS Storage Manager Welcome window, select Manage hardware → Storage
Complexes, and from the drop-down Select action menu, click Apply Activation Codes.
The activation codes are displayed, as shown in Figure 10-15.
2. Obtain your license activation codes from the IBM DSFA website, as discussed in 10.2.2,
“Obtaining activation codes” on page 209.
3. Use the applykey command to activate the codes and the lskey command to verify which
type of licensed features are activated for your storage unit.
c. Enter an applykey command at the dscli command prompt as follows. The -file
parameter specifies the key file. The second parameter specifies the storage image.
dscli> applykey -file c:\2107_7520780.xml IBM.2107-7520781
d. Verify that the keys have been activated for your storage unit by issuing the DS CLI
lskey command, as shown in Example 10-2.
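A sketch of such a verification, with illustrative license values, formatted like the lskey output shown later in Example 10-3:
dscli> lskey IBM.2107-7520781
Activation Key               Authorization Level (TB) Scope
============================================================
Operating environment (OEL)  19                       All
Point in time copy (PTC)     19                       FB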
For more details about the DS CLI, refer to IBM System Storage DS: Command-Line
Interface User’s Guide, GC53-1127.
10.3.1 Why you get a choice
Let us imagine a simple scenario where a machine has 20 TB of capacity. Of this capacity,
15 TB is configured as FB and 5 TB is configured as CKD. If we only want to use
Point-in-Time Copy for the CKD volumes, then we can purchase just 5 TB of Point-in-Time
Copy and set the scope of the Point-in-Time Copy activation code to CKD. There is no need to buy a new PTC license if you no longer need Point-in-Time Copy for CKD but would like to use it for FB only; simply obtain a new activation code from the DSFA website by changing the scope to FB.
When deciding which scope to set, there are several scenarios to consider. Use Table 10-5 to
guide you in your choice. This table applies to both Point-in-Time Copy and Remote Mirror
and Copy functions.
Scenario 4: This function is currently only needed by open systems hosts, but we might use it for System z at some point in the future. Suggested setting: select FB and change the scope to All if and when the System z requirement occurs.
Scenario 5: This function is currently only needed by System z hosts, but we might use it for open systems hosts at some point in the future. Suggested setting: select CKD and change the scope to All if and when the open systems requirement occurs.
Scenario 6: This function has already been set to All. Suggested setting: leave the scope set to All. Changing the scope to CKD or FB at this point requires a disruptive outage.
Any scenario that changes from FB or CKD to All does not require an outage. If you choose to
change from All to either CKD or FB, then you must have a disruptive outage. If you are
absolutely certain that your machine will only ever be used for one storage type (for example,
only CKD or only FB), then you can also quite safely just use the All scope.
Example 10-3 Trying to use a feature for which you are not licensed
dscli> lskey IBM.2107-7520391
Date/Time: 05 October 2010 14:19:17 CET IBM DSCLI Version: 6.6.0.220 DS: IBM.2107-7520391
Activation Key Authorization Level (TB) Scope
============================================================
Metro mirror (MM) 5 All
Operating environment (OEL) 5 All
Point in time copy (PTC) 5 FB
dscli> lsckdvol
Date/Time: 05 October 2010 14:19:17 CET IBM DSCLI Version: 6.6.0.220 DS: IBM.2107-7520391
Name ID accstate datastate configstate deviceMTM voltype orgbvols extpool cap (cyl)
=========================================================================================
- 0000 Online Normal Normal 3390-3 CKD Base - P2 3339
- 0001 Online Normal Normal 3390-3 CKD Base - P2 3339
dscli> lsckdvol
Date/Time: 04 October 2010 15:51:53 CET IBM DSCLI Version: 6.6.0.220 DS: IBM.2107-7520391
Name ID accstate datastate configstate deviceMTM voltype orgbvols extpool cap (cyl)
=========================================================================================
- 0000 Online Normal Normal 3390-3 CKD Base - P2 3339
- 0001 Online Normal Normal 3390-3 CKD Base - P2 3339
10.3.4 Changing the scope from All to FB
In Example 10-5, we decide to increase storage capacity for the entire machine. However, we
do not want to purchase any more PTC licenses, because PTC is only used by open systems
hosts and this new capacity is only to be used for CKD storage. We therefore decide to
change the scope to FB, so we log on to the DSFA website and create a new activation code.
We then apply it, but discover that because this is effectively a downward change (decreasing
the scope), it does not apply until we have a disruptive outage on the DS8800.
dscli> lsckdvol
Date/Time: 05 October 2010 14:19:17 CET IBM DSCLI Version: 6.6.0.220 DS: IBM.2107-7520391
Name ID accstate datastate configstate deviceMTM voltype orgbvols extpool cap (cyl)
=========================================================================================
- 0000 Online Normal Normal 3390-3 CKD Base - P2 3339
- 0001 Online Normal Normal 3390-3 CKD Base - P2 3339
dscli> mkflash 0000:0001 But we are still able to create CKD FlashCopies
Date/Time: 05 October 2010 14:19:17 CET IBM DSCLI Version: 6.6.0.220 DS: IBM.2107-7520391
CMUC00137I mkflash: FlashCopy pair 0000:0001 successfully created.
In this scenario, we have made a downward license feature key change. We must schedule
an outage of the storage image. We should in fact only make the downward license key
change immediately before taking this outage.
Consideration: Making a downward license change and then not immediately performing
a reboot of the storage image is not supported. Do not allow your machine to be in a
position where the applied key is different than the reported key.
At this point, this is still a valid configuration, because the configured ranks on the machine
total less than 5 TB of storage. In Example 10-7, we then try to create a new rank that brings
the total rank capacity above 5 TB. This command fails.
To configure the additional ranks, we must first increase the license key capacity of every
installed license. In this example, that is the FlashCopy license.
To make the calculation, we use the lsrank command to determine how many arrays the rank
contains, and whether those ranks are used for FB or CKD storage. We use the lsarray
command to obtain the disk size used by each array. Then, we multiply the disk size (146,
300, 450, or 600) by eight (for eight DDMs in each array site).
In Example 10-8 on page 222, lsrank tells us that rank R0 uses array A0 for CKD storage.
Then, lsarray tells us that array A0 uses 300 GB DDMs. So we multiple 300 (the DDM size)
by 8, giving us 300 x 8 = 2400 GB. This means we are using 2400 GB for CKD storage.
Now, rank R4 in Example 10-8 is based on array A6. Array A6 uses 146 GB DDMs, so we
multiply 146 by 8, giving us 146 x 8 = 1168 GB. This means we are using 1168 GB for FB
storage.
dscli> lsarray
Date/Time: 05 October 2010 14:19:17 CET IBM DSCLI Version: 6.6.0.220 DS:
IBM.2107-75ABTV1
Array State Data RAIDtype arsite Rank DA Pair DDMcap (10^9B)
====================================================================
A0 Assigned Normal 5 (6+P+S) S1 R0 0 300.0
A1 Unassigned Normal 5 (6+P+S) S2 - 0 300.0
A2 Unassigned Normal 5 (6+P+S) S3 - 0 300.0
A3 Unassigned Normal 5 (6+P+S) S4 - 0 300.0
A4 Unassigned Normal 5 (7+P) S5 - 0 146.0
A5 Unassigned Normal 5 (7+P) S6 - 0 146.0
A6 Assigned Normal 5 (7+P) S7 R4 0 146.0
A7 Assigned Normal 5 (7+P) S8 R5 0 146.0
So for CKD scope licenses, we currently use 2400 GB. For FB scope licenses, we currently
use 1168 GB. For licenses with a scope of All, we currently use 3568 GB. Using the limits
shown in Example 10-6 on page 221, we are within scope for all licenses.
If we combine Example 10-6 on page 221, Example 10-7 on page 221, and Example 10-8,
we can also see why the mkrank command in Example 10-7 on page 221 failed. In
Example 10-7 on page 221, we tried to create a rank using array A1. Now, array A1 uses 300
GB DDMs. This means that for FB scope and All scope licenses, we use 300 x 8 = 2400 GB
more license keys.
In Example 10-6 on page 221, we had only 5 TB of FlashCopy license with a scope of All.
This means that we cannot have a total configured capacity that exceeds
5000 GB. Because we already use 3568 GB, the attempt to use 2400 more GB will fail,
because 3568 plus 2400 equals 5968 GB, which is more than 5000 GB. If we increase the
size of the FlashCopy license to 10 TB, then we can have 10,000 GB of total configured
capacity, so the rank creation will then succeed.
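The same check can also be made from the DS CLI before attempting the rank creation. The following sequence is only an illustrative sketch (the storage image ID is a placeholder and the inline remarks are not part of the commands): lskey lists the activated licensed functions with their capacities and scopes, while lsarray and lsrank supply the DDM sizes and the FB/CKD usage that feed the calculation described above.
dscli> lskey IBM.2107-75ABTV1   lists the activated license keys with their capacity and scope
dscli> lsarray                  shows the DDM capacity (10^9B) behind each array
dscli> lsrank -l                shows whether each rank is FB or CKD and its Extent Pool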
Part 3. Storage configuration
In this part, we discuss the configuration tasks required on your IBM System Storage
DS8800. We cover the following topics:
System Storage Productivity Center (SSPC)
Configuration using the DS Storage Manager GUI
Configuration with the DS Command-Line Interface
The customization worksheets are important and need to be completed before the
installation. It is important that this information is entered into the machine so that preventive
maintenance and high availability of the machine are maintained. You can find the
customization worksheets in IBM System Storage DS8000 Introduction and Planning Guide,
GC27-2297.
The customization worksheets allow you to specify the initial setup for the following items:
Company information: This information allows IBM service representatives to contact you
as quickly as possible when they need to access your storage complex.
Management console network settings: This allows you to specify the IP address and LAN
settings for your management console (MC).
Remote support (includes Call Home and remote service settings): This allows you to
specify whether you want outbound (Call Home) or inbound (remote services) remote
support.
Notifications (includes SNMP trap and e-mail notification settings): This allows you to
specify the types of notifications that you want and that others might want to receive.
Power control: This allows you to select and control the various power modes for the
storage complex.
Control Switch settings: This allows you to specify certain DS8800 settings that affect host
connectivity. You need to enter these choices on the control switch settings worksheet so
that the service representative can set them during the installation of the DS8800.
Important: The configuration flow changes when you use the Full Disk Encryption Feature
for the DS8800. For details, refer to IBM System Storage DS8700 Disk Encryption
Implementation and Usage Guidelines, REDP-4500.
1. Install license keys: Activate the license keys for the storage unit.
2. Create arrays: Configure the installed disk drives as either RAID 5, RAID 6, or RAID 10
arrays.
3. Create ranks: Assign each array to either a fixed block (FB) rank or a count key data
(CKD) rank.
The actual configuration can be done using either the DS Storage Manager GUI or DS
Command-Line Interface, or a mixture of both. A novice user might prefer to use the GUI,
while a more experienced user might use the CLI, particularly for some of the more repetitive
tasks, such as creating large numbers of volumes.
For a more detailed discussion about how to perform the specific tasks, refer to:
Chapter 10, “IBM System Storage DS8800 features and license keys” on page 203
Chapter 13, “Configuration using the DS Storage Manager GUI” on page 251
Chapter 14, “Configuration with the DS Command-Line Interface” on page 307
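For readers who prefer the command line, the same flow can be expressed as a short DS CLI sequence. The following script-style outline is a sketch only: the license key, array site, rank, pool, and volume identifiers are placeholders, and the host attachment commands are only indicated, without their full parameter lists, which are described in IBM System Storage DS: Command-Line Interface User's Guide, GC53-1127.
# 1. Activate the license keys retrieved from the DSFA website (key value is a placeholder)
applykey -key 1234-5678-9ABC-DEF0-1234-5678-9ABC-DEF0 IBM.2107-75ABTV1
# 2. Create an array from an unassigned array site
mkarray -raidtype 5 -arsite S1
# 3. Create a rank, create an Extent Pool, and assign the rank to the pool
mkrank -array A0 -stgtype fb
mkextpool -rankgrp 0 -stgtype fb FB_high_0
chrank -extpool P0 R0
# 4. Create volumes; volume groups and host connections (mkvolgrp, mkhostconnect) follow
mkfbvol -extpool P0 -cap 10 -name high_fb_0_#h 1000-1003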
Taking full advantage of the available and optional SSPC functions can result in:
Fewer resources required to manage the growing storage infrastructure
Reduced configuration errors
Decreased troubleshooting time and improved accuracy
SSPC hardware
The SSPC (IBM model 2805-MC5) server contains the following hardware components:
x86 server 1U rack installed
Intel Quadcore Xeon processor running at 2.53 GHz
8 GB of RAM
Two hard disk drives
Dual port Gigabit Ethernet
SSPC software
The IBM System Storage Productivity Center 1.5 includes the following preinstalled
(separately purchased) software, running under a licensed Microsoft Windows Server 2008
Enterprise Edition R2, 64-bit (included):
IBM Tivoli Storage Productivity Center V4.2.1 licensed as TPC Basic Edition (includes the
Tivoli Integrated Portal). A TPC upgrade requires that you purchase and add additional
TPC licenses.
Customers have the option to purchase and install the individual software components to
create their own SSPC server.
With version 4.2, IBM Tivoli Storage Productivity Center for Replication no longer supports
DB2 as the datastore for its operational data. IBM Tivoli Storage Productivity Center for
Replication uses an embedded repository for its operational data.
For detailed information, and additional considerations, see the TPC/SSPC Information
Center at the following address:
http://publib.boulder.ibm.com/infocenter/tivihelp/v4r1/index.jsp
After IBM Support physically installs the 2805-MC5 and tests IP connectivity with the
DS8000, SSPC is ready for configuration by either the client or IBM Services.
After the initial configuration of SSPC is done, the SSPC user is able to configure the remote
GUI access to all DS8000 systems in the TPC Element Manager, as described in
“Configuring TPC to access the DS8000 GUI” on page 235.
Figure 12-1 Entry window for installing TPC-GUI access through a browser
After you click TPC GUI (Java Web Start), a login window appears, as shown in Figure 12-2.
If “Change password at first login” was specified by the SSPC administrator for the Windows
user account, the user must first log on to the SSPC to change the password. The logon can
be done at the SSPC terminal itself or by Windows Remote Desktop authentication to SSPC.
Since DS8700 LIC Release 5, remote access to the DS8000 GUI is bundled with SSPC.
Note: For storage systems discovered or configured for CIMOM or native device interface,
TPC automatically defines Element Manager access. You need to specify the correct
username and password in the Element Manager to use it.
12.Click the Element Management button in the Welcome window. If this is not the first login
to the TPC and you removed the Welcome window, then click the Element Management
button above the Navigation Tree section. The new window displays all storage systems
(Element Managers) already defined to TPC. From the Select action drop-down menu,
select Add Element Manager, as shown in Figure 12-4.
Figure 12-4 TPC Element Manager view: Options to add and launch Elements
Figure 12-5 Configure a new DS8800 Element in the TPC Element Manager view
TPC tests the connection to the DS8000 Element Manager. If the connection was
successful, the new DS8000 Element Manager is displayed in the Element Manager table.
14.After the DS8000 GUI has been added to the Element Manager, select the Element
Manager you want to work with and, from the Select Action drop-down menu, click
Launch Default Element Manager, as shown in Figure 12-6.
To modify the password and re-enable remote GUI access through TPC:
1. Launch the TPC and navigate to Element Manager.
2. Select the DS8000 system for which the password needs to be modified and, from the
Select action drop-down menu, click Modify Element Manager.
3. Enter a modified password in the Modify Element Manager window matching the DS8000
system security rules, as documented in the DS8000 Information Center (go to
http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000ic/index.jsp, search for
User administration, and select Defining password rules). The password and its use
must meet the following criteria:
– Be six to 16 characters long.
– Must contain five or more letters, and it must begin and end with a letter.
– Must contain one or more numbers.
– Cannot contain the user's user ID.
– Is case-sensitive.
– Four unique new passwords must be issued before an old password can be reused.
– Allowable characters are a-z, A-Z, and 0-9.
After the password has been changed, the access to the DS8000 GUI is reenabled.
If SSPC will be used only for access to the DS8000 Storage Manager, the configuration of
SSPC with regard to DS8000 system administration and monitoring is complete.
If the SSPC user wants to use the advanced functions of TPC-BE, further configuration is
required, as described in the next sections.
There is an option to enable or disable the embedded CIMOM manually through the DS8000
HMC Web User Interface (WUI).
Example 12-1 DSCIMCLI commands to check CIMOM connectivity to primary and secondary HMC
Example 12-2 DSCIMCLI commands to offload DSCIMCLI logs from DS8000 HMC onto SSPC
C:\Program Files\IBM\DSCIMCLI\Windows> dscimcli collectlog -s
https://<DS8800_HMC_IP_addr>:6989 -u <valid ESSNI user> -p <associated ESSNI
password>
Old remote log files were successfully listed.
No one old log file on the DS Agent side.
New remote log file was successfully created.
Figure 12-8 shows the cascade of authentication from SSPC (on the Windows operating
system) to the DS8000 storage configuration.
Figure 12-8 Cascade of authentication from SSPC (on the Windows operating system) to the DS8000
storage configuration
The procedure to add users to the SSPC requires TPC Administrator credentials and can be
split into two parts:
Set up a user at the operating system level and then add this user to a group.
Set up TPC-BE to map the operating system group to a TPC Role.
TPC Administrator: Has full access to all operations in the TPC GUI.
Disk Administrator:
• Has full access to TPC GUI disk functions, including tape devices
• Can launch the DS8000 GUI by using the passwords stored in the TPC Element Manager
• Can add and delete volumes by TPC
Disk Operator:
• Has access to reports of disk functions and tape devices
• Has to enter a user name/password to launch the DS8000 GUI
• Cannot start CIMOM discoveries or probes
• Cannot take actions in TPC, for example, delete and add volumes
Figure 12-10 Assigning the Windows user group Disk Administrator to the TPC Role Disk Administrator
To set up a native device interface, in the left menu panel navigate to Administrative
Services → Data Sources → Storage Subsystems and click the Add button. In the
Configure Devices panel shown in Figure 12-11, fill in the required information, then click
Add.
Figure 12-11 Dialog for adding DS8000 using native device interface in TPC 4.2.1
The connection will be verified and the device will be added to the list. Click Next (at the
bottom of the dialog panel) to discover the new device. On the next panel, you can specify
how the device will be probed. There are predefined templates. Choose the option that best
fits your environment and click Next. A summary page will appear and after confirmation the
Result page is displayed. Click Finish. Device discovery will start immediately and depending
on your settings a pop-up window, as shown in Figure 12-12, might appear allowing you to
view the discovery logs.
Figure 12-12 Pop-up window will bring you to the jobs list
With TPC V4.2, there is a significant change in log management: now you can access all TPC
logs in one place through the Job Management window as shown in Figure 12-13.
Note: If not already defined, during this process TPC automatically defines GUI access for
the discovered DS8000 in the Element Manager.
In cases where CIMOM discoveries and probes are not scheduled to run continuously, or the
time until the next scheduled run is longer than desired, the TPC user can manually run the
CIMOM discovery and probe to re-enable TPC, ahead of the next scheduled probe, to display
the health status and to continue the optional performance monitoring. To do this task, the
TPC user must have an Administrator role. The steps to perform are:
1. In the TPC Enterprise Manager view, select Administrative Services → Data
Sources → CIMOM Agents. Then select the CIMOM connections reported as failed and
execute Test CIMOM Connection. If the connection status of one or more CIMOM
connections changes to SUCCESS, continue with step 2.
Figure 12-14 Drill-down of the topology viewer for a DS8000 Extent Pool
Figure 12-15 shows a graphical and tabular view with more information.
Figure 12-15 Graphical and tabular view of a broken DS8000 DDM set to deferred maintenance
Figure 12-16 on page 246 shows a TPC graphical and tabular view to an Extent Pool.
The DDM displayed as green in Figure 12-16 is a spare DDM, and is not part of the RAID 5
configuration process that is currently in progress.
The two additional DDMs displayed in Figure 12-17 in the missing state have been replaced,
but are still displayed due to the settings configured in historic data retention.
Figure 12-17 TPC graphical view to an Extent Pool configured out of three ranks (3x8=24 DDMs)
To display the path information shown in Figure 12-18 and Figure 12-19, execute these steps:
1. In the topology view, select Overview → Fabrics → Fabric.
2. Expand the Connectivity view of the devices for which you would like to see the physical
connectivity.
3. Click the first device.
4. Press Ctrl and click any additional devices to which you would like to display the physical
path (Figure 12-18).
5. To obtain more details about the connectivity of dedicated systems, as shown in
Figure 12-19, double-click the system of interest and expand the details of the system
view.
Figure 12-18 Topology view of physical paths between one host and one DS8000 system
In Figure 12-18, the display of the Topology view points out physical paths between the hosts
and their volumes located on the DS8000 system. In this view, only WWPNs are shown in the
left box labeled Other. To interpret the WWPNs of a host in the fabric, data agents must be
placed on that host. Upgrading TPC-BE with additional TPC licenses will enable TPC to
assess and also warn you about a lack of redundancy.
Figure 12-19 Topology view: detailed view of the DS8000 host ports assigned to one of its two switches
As shown in Figure 12-20, the health status function of TPC-BE Topology Viewer allows you
to display the individual FC port health inside a DS8000 system.
Figure 12-20 TPC graphical view of a broken DS8000 host adapter Card R1-I1-C5 and the associated
WWNN as displayed in the tabular view of the topology viewer
As shown in Figure 12-21, the TPC-BE Topology viewer allows you to display the connectivity
and path health status of one DS8000 system into the SAN by providing a view that can be
broken down to the switch ports and their WWPNs.
Figure 12-21 Connectivity of a DS8000 system drilled down to the ports of a SAN switch
1 http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000ic/index.jsp
However, if the TPC console is not regularly monitored for health status changes, configure
alerts to avoid health issues going unrecognized for a significant amount of time. To configure
alerts for the DS8000 system, in the navigation tree, select Disk Manager → Alerting and
right-click Storage Subsystem Alerts. In the window displayed, the predefined alert trigger
conditions and the Storage Subsystems can be selected. Regarding the DS8000 system, the
predefined alert triggers can be categorized into:
Capacity changes applied to cache, volumes, and Extent Pools
Status changes (online/offline) of storage subsystems, volumes, Extent Pools, and disks
Device not found for storage subsystems, volumes, Extent Pools, and disks
Device newly discovered for storage subsystems, volumes, Extent Pools, and disks
Version of storage subsystems changed
12.3.5 Display host volumes through SVC to the assigned DS8000 volume
With SSPC’s TPC-BE, you can create a table that displays the name of the host volumes
assigned to an SVC vDisk and the DS8000 volume ID associated with this vDisk, which gives
a fast view of the SVC VDisk → MDisk → DS8000 Volume ID relationship. To populate this
host Volume name → SVC → DS8000 Volume ID view, a TPC-BE SVC and DS8000 probe
setup is required. To display the table, as demonstrated in Figure 12-22, in TPC select
Disk Manager → Reporting → Storage Subsystems → Volume to Backend Volume
Assignment → By Volume, and click Generate Report.
Figure 12-22 Example of three DS8000 volumes assigned to one vDisk and the name associated to
this vDisk (tabular view split into two pieces for better overview of the columns)
For information about Copy Services configuration in the DS8000 family using the DS GUI,
see the following IBM Redbooks publications:
IBM System Storage DS8000: Copy Services for Open Systems, SG24-6788
IBM System Storage DS8000: Copy Services for IBM System z, SG24-6787
For information about DS GUI changes related to disk encryption, see IBM System Storage
DS8700: Disk Encryption Implementation and Usage Guidelines, REDP-4500.
For information about DS GUI changes related to LDAP authentication, see IBM System
Storage DS8000: LDAP Authentication, REDP-4505.
Note: Some of the screen captures in this chapter might not reflect the latest version of the
DS GUI code.
The new Internet protocol IPv6 supports access to the DS8000 HMC.
These different access capabilities, using Basic authentication, are shown in Figure 13-1. In
our illustration, SSPC connects to two HMCs managing two DS8000 storage complexes.
Although you have different options to access DS GUI, SSPC is the preferred access method.
Figure 13-1 DS8000 GUI access capabilities using Basic authentication (without LDAP)
The DS8800 supports the ability to use a Single Point of Authentication function for the GUI
and CLI through a centralized LDAP server. This capability is supported with SSPC running
on 2805-MC5 hardware that has TPC Version 4.2.1 (or later) preloaded. If you have an older
The different access capabilities of the LDAP authentication are shown in Figure 13-2. In this
illustration, TPC connects to two HMCs managing two DS8800 storage complexes.
Note: For detailed information about LDAP-based authentication, see IBM System Storage
DS8000: LDAP Authentication, REDP-4505.
With LDAP authentication, access is managed through the Authentication Server, a TPC
component, and a new Authentication Client at the HMC. The Authentication Server provides
the connection to the LDAP or other repositories.
Figure 13-2 LDAP authentication to access the DS8800 GUI and CLI
Note: Here we assume that the DS8000 storage subsystem (Element Manager) is already
configured in TPC, as described in 12.2, “SSPC setup and configuration” on page 233.
5. You are presented with the DS GUI Welcome window for the selected disk system, as
shown in Figure 13-5 on page 255.
In the Welcome window of the DS8000 Storage Manager GUI, you see buttons for accessing
the following options:
Show all tasks: Opens the Task Manager window, where you can end a task or switch to
another task.
Hide Task List: Hides the Task list and expands your work area.
Toggle Banner: Removes the banner with the IBM System Storage name and expands the
working space.
The DS GUI displays the configuration of your DS8000 in tables. There are several options
you can use:
To download the information from the table, click Download. This can be useful if you
want to document your configuration. The file is in comma-separated value (.csv) format
and you can open the file with a spreadsheet program. This function is also useful if the
table on the DS8000 Manager consists of several pages; the .csv file includes all pages.
The Print report option opens a new window with the table in HTML format and starts the
printer dialog box if you want to print the table.
The Select Action drop-down menu provides you with specific actions that you can
perform. Select the object you want to access and then the appropriate action (for
example, Create or Delete).
There are also buttons to set and clear filters so that only specific items are displayed in
the table (for example, show only FB Extent Pools in the table). This can be useful if you
have tables with a large number of items.
When performing the logical configuration, the following approach is likely to be the most
straightforward:
1. Start by defining the storage complex.
2. Create arrays, ranks, and Extent Pools.
3. Create open system volumes.
4. Create count key data (CKD) LSSs and volumes.
5. Create host connections and volume groups.
Figure 13-8 shows the successful completion of the different tasks (adding capacity and
creating new volumes). Click the specific task link to get more information about the task.
For each configuration task (for example, creating an array), the process guides you through
windows where you enter the necessary information. During this process, you have the ability
to go back to make modifications or cancel the process. At the end of each process, you get a
verification window where you can verify the information that you entered before you submit
the task.
In the My Work section of the DS GUI welcome window, navigate to Manage Hardware →
Storage Complexes. The Storage Complexes Summary window opens, as shown in
Figure 13-9.
You should have at least one storage complex listed in the table. If you have more than one
DS8800 system or any other DS8000 family model in your environment connected to the
same network, you can define it here by adding a new storage complex. Select Add from the
Select action drop-down menu in order to add a new storage complex (see Figure 13-10 on
page 260).
Provide the IP address of the Hardware Management Console (HMC) connected to the new
storage complex that you wish to add and click OK to continue. A new storage complex is
added to the table, as shown in Figure 13-12 on page 261.
Having all the DS8000 storage complexes defined together provides flexible control and
management. The status information indicates the health of each storage complex. By
clicking the status description link of any storage complex, you can obtain more detailed
health check information for various vital DS8000 components (see Figure 13-13).
Different status descriptions may be reported for your storage complexes. These descriptions
depend on the availability of the vital storage complex components. In Figure 13-14, we
show an example of different status states.
A Critical status indicates unavailable vital storage complex resources. An Attention status
may be triggered by some resources being unavailable. Because they are redundant, the
storage complex is still operational. One example is when only one storage server inside a
storage image is offline, as shown in Figure 13-15.
We recommend checking the status of your storage complex and proceeding with logical
configuration (create arrays, ranks, Extent Pools, or volumes) only when your HMC consoles
are connected to the storage complex and both storage servers inside the storage image are
online and operational.
Tip: You do not necessarily need to create arrays first and then ranks. You can proceed
directly with the creation of Extent Pools, as explained in 13.3.4, “Create Extent Pools” on
page 274.
Note: If you have defined more storage complexes or storage images, be sure to select the
right storage image before you start creating arrays. From the Storage image drop-down
menu, select the desired storage image you want to access.
In our example, the DS8000 capacity is still not assigned to open systems or System z.
3. In our example, all array sites are unassigned and therefore eligible to be used for array
creation. Each array site has eight physical disk drives. In order to discover more details
about each array site, select the desired array site and click Properties in the Select
Action drop-down menu. The Single Array Site Properties window opens. It provides
general array site characteristics, as shown in Figure 13-18.
5. All DDMs in this array site are in the Normal state. Click OK to close the Single Array Site
Properties window and go back to the Disk Configuration main window.
6. After we identify the unassigned and available storage, we can create an array. Click the
Array tab in the Manage Disk Configuration section and select Create Arrays in the
Select action drop-down menu, as shown in Figure 13-20.
8. The Create array verification window is displayed (Figure 13-23). It lists all array sites
chosen for the new arrays we want to create. At this stage, you can still change your
configuration by deleting the array sites from the lists and adding new array sites if
required. Click Create All once you decide to continue with the proposed configuration.
10.The window in Figure 13-25 shows the newly created arrays. You can see that the graph in
the Disk Configuration Summary section has changed accordingly and now includes the
new capacity we used for creating arrays.
Tip: You do not necessarily need to create arrays first and then ranks. You can proceed
directly with the creation of Extent Pools (see 13.3.4, “Create Extent Pools” on page 274).
Note: If you have defined more storage complexes/storage images, be sure to select
the right storage image before you start creating ranks. From the Storage image
drop-down menu, select the desired storage image you want to access.
The duration of the Create rank task is longer than the Create array task. Click the View
Details button in order to check the overall progress. It takes you to the Long Running
Task Summary window, which shows all tasks executed on this DS8800 storage
subsystem. Click the task link name (which has an In progress state) or select it and click
Properties from the Select action drop-down menu, as shown in Figure 13-30.
In the task properties window, you can see the progress and task details, as shown in
Figure 13-31.
The bar graph in the Disk Configuration Summary section has changed. There are ranks for
both CKD and FB, but they are not assigned to Extent Pools.
Note: If you have defined more storage complexes or storage images, be sure to select the
right storage image before you create Extent Pools. From the Storage image drop-down
menu, select the desired storage image you want to access.
Click the View Details button to check the overall progress. It takes you to the Long
Running Task Summary window, where you can see all tasks executed on this DS8800
storage subsystem. Click your task link name (which has the In progress state) to see the
task progress, as shown in Figure 13-39.
6. After the task is completed, go back to Disk Configuration and, under the Extent Pools tab,
check the list of newly created ranks (see Figure 13-40).
8. The Single Pool properties window opens (Figure 13-42). Basic Extent Pool information is
provided here as well as volume relocation related information. You can, if necessary,
change the Extent Pool Name, Storage Threshold, and Storage Reserved values and
select Apply to commit all the changes.
9. For more information about drive types or ranks included in the Extent Pool, select the
appropriate tab. Click OK to return to the Disk Configuration window.
10.To discover more details about the DDMs, select the desired Extent Pool from the Disk
Configuration window and, from the Select action drop-down menu, click DDM
Properties, as shown in Figure 13-43.
Use the DDM Properties window to view all the DDMs that are associated with the
selected Extent Pool and to determine the DDMs state. You can print the table, download
it in .csv format, and modify the table view by selecting the appropriate icon at the top of
the table.
Click OK to return to the Disk Configuration window.
3. Select the storage image for which you want to configure the ports and, from the Select
action drop-down menu, click Configure I/O Ports (under the Storage Image section of
the menu).
4. The Configure I/O Port window opens, as shown in Figure 13-45.
You get a warning message that the ports might become unusable by the hosts that are
currently connected to them.
5. You can repeat this step to format all ports to their required function. Multiple port selection
is supported.
Tip: You can use the View host port login status link to query the host that is logged
into the system or use this window to debug host access and switch configuration
issues.
If you have more than one storage image, you have to select the right one and then, to
create a new host, select the Create new host connection link in the Tasks section.
3. The resulting windows guide you through the host configuration, beginning with the
window in Figure 13-47.
6. In the Verification window, check the information that you entered during the process. If
you want to make modifications, select Back, or you can cancel the process. After you
have verified the information, click Finish to create the host system. This action takes you
back to the Host Connection window and Manage Host Connections table, where you can
see the list of all created host connections.
If you need to make changes to a host system definition, select your host in the Manage Host
Connections table and choose the appropriate action from the drop-down menu, as shown in
Figure 13-51.
Note: Be aware that you have other selection possibilities. We show only one way here.
4. The table in the Create Volumes window contains all the Extent Pools that were previously
created for the FB storage type. To ensure a balanced configuration, select Extent Pools in
pairs (one from each server). If you select multiple pools, the new volumes are assigned to
the pools based on the assignment option that you select on this window.
Click Next to continue. The Define Volume Characteristics window appears, as shown in
Figure 13-54.
5. Select the Volume type, Size, Volume quantity, Storage allocation method, Extent
allocation method, Nickname prefix, Nickname suffix, and one or more volume groups (if
you want to add this new volume to a previously created volume group).
When your selections are complete, click Add Another if you want to create more
volumes with different characteristics or click OK to continue. The Create Volumes window
opens, as shown in Figure 13-55 on page 288.
6. If you need to make any further modifications to the volumes in the table, select the
volumes you are about to modify and choose the appropriate action from the Select action
drop-down menu. Otherwise, click Next to continue the process.
7. We need to select LSS for all created volumes. In our example, we select the Automatic
assignment method, where the system assigns five volume addresses to LSS 00 and five
volume addresses to LSS 01 (see Figure 13-56).
10.The Creating Volumes information window opens. Depending on the number of volumes,
the process can take a while to complete. Optionally, click the View Details button in order
to check the overall progress. It takes you to the Long Running Task Properties window,
where you can see the task progress.
11.After the creation is complete, a final window opens. You can select View detail or Close.
If you click Close, you return to the main Open system Volumes window, as shown in
Figure 13-58.
4. In the Define Volume Group Properties window, enter the nickname for the volume group
and select the host type from which you want to access the volume group. If you select
one host (for example, IBM System p), all other host types with the same addressing
method are automatically selected. This does not affect the functionality of the volume
group; it supports the host type selected.
5. Select the volumes to include in the volume group. If you have to select a large number of
volumes, you can specify the LSS so that only these volumes display in the list, and then
you can select all.
6. Click Next to open the Verification window shown in Figure 13-62.
Important: The LCUs you create must match the logical control unit definitions on the host
I/O configuration. More precisely, each LCU ID number you select during the create
process must correspond to a CNTLUNIT definition in the HCD/IOCP with the same
CUADD number. It is vital that the two configurations match each other.
2. Select a storage image from the Select storage image drop-down menu if you have more
than one. The window is refreshed to show the LCUs in the storage image.
4. Select the LCUs you want to create. You can select them from the list displayed on the left
by clicking the number, or you can use the map. When using the map, click the available
LCU square. You have to enter all the other necessary parameters for the selected LCUs.
– Starting SSID: Enter a Subsystem ID (SSID) for the LCU. The SSID is a four character
hexadecimal number. If you create multiple LCUs at one time, the SSID number is
incremented by one for each LCU. The LCUs attached to the same operating system
image must have different SSIDs. We recommend that you use unique SSID numbers
across your whole environment.
– LCU type: Select the LCU type you want to create. Select 3990 Mod 6 unless your
operating system does not support Mod 6. The options are:
• 3990 Mod 3
• 3990 Mod 3 for TPF
• 3990 Mod 6
The following parameters affect the operation of certain Copy Services functions:
– Concurrent copy session timeout: The time in seconds that any logical device on this
LCU in a concurrent copy session stays in a long busy state before suspending a
concurrent copy session.
Note: You can assign all aliases in the LCU to just one base volume if you have
implemented HyperPAV or Dynamic alias management. With HyperPAV, the alias
devices are not permanently assigned to any base volume even though you initially
assign each to a certain base volume. Rather, they reside in a common pool and are
assigned to base volumes as needed on a per I/O basis. With Dynamic alias
management, WLM will eventually move the aliases from the initial base volume to
other volumes as needed.
If your host system is using Static alias management, you need to assign aliases to all
base volumes on this window, because the alias assignments made here are
permanent in nature. To change the assignments later, you have to delete and
re-create aliases.
In the last section of this window, you can optionally assign the alias nicknames for your
volumes:
– Nickname prefix: If you select a nickname suffix of None, you must enter a nickname
prefix in this field. Blanks are not allowed. If you select a nickname suffix of Volume ID
or Custom, you can leave this field blank.
– Nickname suffix: You can select None as described above. If you select Volume ID,
you have to enter a four character volume ID for the suffix, and if you select Custom,
you have to enter a four digit hexadecimal number or a five digit decimal number for the
suffix.
– Start: If you select Hexadecimal sequence, you have to enter a number in this field.
Note: The nickname is not the System z VOLSER of the volume. The VOLSER is
created later when the volume is initialized by the ICKDSF INIT command.
Click OK to proceed. The Create Volumes window shown in Figure 13-66 appears.
8. The Create LCUs Verification window appears, as shown in Figure 13-68, where you can
see a list of all the volumes that are going to be created. If you want to add more volumes or
modify the existing ones, you can do so by selecting the appropriate action from the Select
action drop-down list. Once you are satisfied with the specified values, click Create all to
create the volumes.
9. The Creating Volumes information window opens. Depending on the number of volumes,
the process can take a while to complete. Optionally, click the View Details button in order
to check the overall progress. This action takes you to the Long Running Task Properties
window, where you can see the task progress.
The bar graph in the System z Storage Summary section has changed.
The next window (Figure 13-71) shows that you can take actions at the volume level once you
have selected an LCU:
Increase capacity: Use this action to increase the size of a volume. The capacity of a 3380
volume cannot be increased. After the operation completes, you need to use the ICKDSF
REFORMAT REFVTOC command to adjust the volume VTOC to reflect the additional
cylinders. Note that the capacity of a volume cannot be decreased.
Add Aliases: Use this action when you want to define additional aliases without creating
new base volumes.
View properties: Here you can view the volume properties. The only value you can change
is the nickname. You can also see whether the volume is online from the DS8000 side.
Delete: Here you can delete the selected volume. This must be confirmed, because you
will also delete all alias volumes and data on this volume.
Tip: After initializing the volumes using the ICKDSF INIT command, you also will see the
VOLSERs in this window. This is not done in this example.
The Increase capacity action can be used to dynamically expand volume capacity without
needing to bring the volume offline in z/OS. It is a good way to start using 3390 Mod A,
because you can expand the capacity and change the device type of your existing 3390 Mod 3,
3390 Mod 9, and 3390 Custom volumes. Keep in mind that 3390 Mod A volumes can only be
used on z/OS V1.10 or later and that, after the capacity has been increased, you need to run
ICKDSF to rebuild the VTOC index so that it recognizes the new volume size.
2. The new Storage Complex window provides general DS8800 system information. It is
divided into four sections (see Figure 13-73):
a. System Summary: You can quickly identify the percentage of capacity that is currently
used, and the available and used capacity for open systems and System z. In
addition, you can check the system state and obtain more information by clicking the
state link.
b. Management Console information.
c. Performance: Provides performance graphs for host MBps, host KIOps, rank MBps,
and rank KIOps. This information is refreshed every 60 seconds.
d. Racks: Represents the physical configuration.
3. In the Rack section, the number of racks shown matches the racks physically installed in
the storage unit. If you position the mouse pointer over the rack, additional rack
information is displayed, such as the rack number, the number of DDMs, and the number
of host adapters (see Figure 13-74).
You can explore the DS8800 hardware components and discover the correlation between
logical and physical configuration by performing the following steps:
1. In the My Work section in the DS GUI welcome window, navigate to Manage Hardware →
Storage Complexes.
2. The Storage Complexes Summary window opens. Select your storage complex and, from
the Select action drop-down menu, click System Summary.
3. Select the Hardware Explorer tab to switch to the Hardware Explorer window (see
Figure 13-75).
4. In this window, you can explore the specific hardware resources installed by selecting the
appropriate component under the Search rack criteria by resources drop-down menu. In
the Rack section of the window, there is a front and rear view of the DS8800 rack. You can
interact with the rack image to locate resources. To view a larger image of a specific
location (displayed in the right pane of the window), use your mouse to move the yellow
box to the desired location across the DS8800 front and rear view.
6. After you have identified the location of array DDMs, you can position the mouse pointer
over the specific DDM to display more information, as shown in Figure 13-77.
8. Another very useful function in the Hardware Explorer GUI section is the ability to identify
the physical location of each FCP or FICON port. Change the search criteria to I/O Ports
and select one or more ports in the Available Resources section. Use your mouse to move
the yellow box in the rack image to the rear DS8800 view (bottom pane), where the I/O
ports are located (see Figure 13-79).
Click the highlighted port to discover its basic properties and status.
For information about using the DS CLI for Copy Services configuration, encryption handling,
or LDAP usage, refer to the documents listed here.
For Copy Services configuration in the DS8000 using the DS CLI, see the following books:
IBM System Storage DS: Command-Line Interface User's Guide, GC53-1127
IBM System Storage DS8000: Copy Services for Open Systems, SG24-6788
IBM System Storage DS8000: Copy Services for IBM System z, SG24-6787
For DS CLI commands related to disk encryption, see IBM System Storage DS8700 Disk
Encryption Implementation and Usage Guidelines, REDP-4500.
For DS CLI commands related to LDAP authentication, see IBM System Storage DS8000:
LDAP Authentication, REDP-4505.
The following list highlights a few of the functions that you can perform with the DS CLI:
Create user IDs that can be used with the GUI and the DS CLI.
Manage user ID passwords.
Install activation keys for licensed features.
Manage storage complexes and units.
Configure and manage Storage Facility Images.
Create and delete RAID arrays, ranks, and Extent Pools.
Create and delete logical volumes.
Manage host access to volumes.
Check the current Copy Services configuration that is used by the Storage Unit.
Create, modify, or delete Copy Services configuration settings.
Integrate LDAP policy usage and configuration.
Implement encryption functionality.
Note: The DS CLI version must correspond to the LMC level installed on your system. You
can have multiple versions of the DS CLI installed on your system, each in its own directory.
Important: For the most recent information about currently supported operating systems,
refer to the IBM System Storage DS8000 Information Center website at:
http://publib.boulder.ibm.com/infocenter/ds8000ic/index.jsp
The DS CLI is supplied and installed via a CD that ships with the machine. The installation
does not require a reboot of the open systems host. The DS CLI requires Java 1.4.1 or later.
Java 1.4.2 is the preferred JRE on Windows, AIX, and Linux, and is supplied on the CD. Many
hosts might already have a suitable level of Java installed. The installation program checks for
this requirement during the installation process and does not install the DS CLI if you do not
have the correct version of Java.
The installation process can be performed through a shell, such as the bash or korn shell, or
the Windows command prompt, or through a GUI interface. If performed via a shell, it can be
performed silently using a profile file. The installation process also installs software that
allows the DS CLI to be completely uninstalled should it no longer be required.
If you create one or more profiles to contain your preferred settings, you do not have to
specify this information each time you use DS CLI. When you launch DS CLI, all you need to
do is to specify a profile name with the dscli command. You can override the profile’s values
by specifying a different parameter value with the dscli command.
When you install the command-line interface software, a default profile is installed in the
profile directory with the software. The file name is dscli.profile, for example,
c:\Program Files\IBM\DSCLI\profile\dscli.profile for the Windows platform and
/opt/ibm/dscli/profile/dscli.profile for UNIX and Linux platforms.
Attention: The default profile file created when you install the DS CLI will potentially be
replaced every time you install a new version of the DS CLI. It is a good practice to open
the default profile and then save it as a new file. You can then create multiple profiles and
reference the relevant profile file using the -cfg parameter.
These profile files can be specified using the DS CLI command parameter -cfg
<profile_name>. If the -cfg file is not specified, the user’s default profile is used. If a user’s
profile does not exist, the system default profile is used.
Note: If there are two profiles with the same name, one in the system default directory and
one in your personal directory, your personal profile is used.
4. Notepad opens with the DS CLI profile in it. There are four lines you can consider
adding. Examples of these lines are shown in bold in Example 14-2.
Tip: The default newline delimiter is a UNIX delimiter, which may render text in notepad as
one long line. Use a text editor that correctly interprets UNIX line endings.
devid: IBM.2107-75ABCDE
hmc1: 10.0.0.250
username: admin
password: passw0rd
Adding the serial number by using the devid parameter, and the HMC IP address by using the
hmc1 parameter, is strongly suggested. Not only does this help you to avoid mistakes when
using multiple profiles, but you also do not need to specify these parameters for the dscli
commands that require them. Additionally, if you create a dscli profile for Copy Services usage,
using the remotedevid parameter is strongly suggested for the same reasons. To determine a
storage system’s ID, use the lssi CLI command.
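As a quick illustration (the HMC address and credentials are the placeholder values from the profile example above), the storage image ID to use for devid can be obtained with a single-shot lssi call; the output columns are not reproduced here because they vary slightly between DS CLI versions:
dscli -hmc1 10.0.0.250 -user admin -passwd passw0rd lssi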
Although adding the user name and password parameters will simplify the DS CLI startup, it
is not suggested that you add them, because the password is saved in clear text in the profile
file. It is better to create an encrypted password file with the managepwfile CLI command. A
password file generated using the managepwfile command is located at
user_home_directory/dscli/profile/security/security.dat.
Important: Use care if adding multiple devid and HMC entries. Only one should be
uncommented (or more literally, unhashed) at any one time. If you have multiple hmc1 or
devid entries, the DS CLI uses the one closest to the bottom of the profile.
There are other customization parameters that affect dscli output; the most important are
listed here (a sketch of the corresponding profile entries follows the list):
banner - the date/time and the dscli version are printed for each command.
header - column names are printed.
paging - for interactive mode, output is paused after a certain number of rows (24 by
default).
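As a sketch only (the exact option names and their defaults should be verified against the commented entries in your own dscli.profile), such settings use the same key: value syntax as the entries shown earlier:
devid: IBM.2107-75ABCDE
hmc1: 10.0.0.250
banner: off
header: on
paging: off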
You must supply the login information and the command that you want to process at the same
time. Follow these steps to use the single-shot mode:
1. Enter:
dscli -hmc1 <hostname or ip address> -user <adm user> -passwd <pwd> <command>
or
dscli -cfg <dscli profile> <command>
2. Wait for the command to process and display the end results.
Note: When typing the command, you can use the host name or the IP address of the
HMC. It is also important to understand that every time a command is executed in single-shot
mode, the user must be authenticated. The authentication process can take a
considerable amount of time.
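As a concrete illustration (the profile name and the command chosen are arbitrary), a single-shot invocation that relies on a profile for the HMC address and credentials looks as follows; dscli authenticates, runs the command, prints the output, and exits:
dscli -cfg ds8800.profile lsextpool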
Tip: In interactive mode for long outputs, the message Press Enter To Continue...
appears. The number of rows can be specified in the profile file. Optionally, you can turn off
the paging feature in the profile file by using the paging:off parameter.
In Example 14-5, we show the contents of a DS CLI script file. Note that it only contains DS
CLI commands, although comments can be placed in the file using a hash symbol (#). Empty
lines are also allowed. One advantage of using this method is that scripts written in this format
can be used by the DS CLI on any operating system into which you can install DS CLI.
For script command mode, you can turn off the banner and header for easier output parsing.
Also, you can specify an output format that might be easier to parse by your script.
In Example 14-6, we start the DS CLI using the -script parameter and specifying a profile
and the name of the script that contains the commands from Example 14-5.
Note: The DS CLI script can contain only DS CLI commands. Using shell commands
results in process failure. You can add comments in the scripts prefixed by the hash symbol
(#). It must be the first non-blank character on the line.
Only one single authentication process is needed to execute all the script commands.
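As an illustrative sketch (the file name and the commands chosen are arbitrary), a script file is a plain text file that contains only DS CLI commands and # comments, and it is executed with the -script parameter together with a profile:
Contents of listconfig.dscli:
# list the logical configuration; only DS CLI commands and # comments are allowed here
lsextpool
lsrank -l
lsfbvol
Invocation:
dscli -cfg ds8800.profile -script listconfig.dscli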
To obtain information about a specific DS CLI command, enter the command name as a
parameter of the help command. Examples of usage include:
help <command name> gives a detailed description of the specified command.
help -s <command name> gives a brief description of the specified command.
help -l <command name> gives syntax information about the specified command.
Example 14-7 Displaying a list of all commands in DS CLI using the help command
# dscli -cfg ds8800.profile help
applydbcheck lsframe mkpe setdbcheck
applykey lshba mkpprc setdialhome
chauthpol lshostconnect mkpprcpath setenv
chckdvol lshosttype mkrank setflashrevertible
chextpool lshostvol mkreckey setioport
chfbvol lsioport mkremoteflash setnetworkport
chhostconnect lskey mksession setoutput
chkeymgr lskeygrp mksestg setplex
chlcu lskeymgr mkuser setremoteflashrevertible
chlss lslcu mkvolgrp setrmpw
chpass lslss offloadauditlog setsim
chrank lsnetworkport offloaddbcheck setsmtp
chsession lspe offloadss setsnmp
chsestg lsportprof pausegmir setvpn
chsi lspprc pausepprc showarray
chsp lspprcpath quit showarraysite
chsu lsproblem resumegmir showauthpol
chuser lsrank resumepprc showckdvol
chvolgrp lsremoteflash resyncflash showcontactinfo
clearvol lsserver resyncremoteflash showenv
closeproblem lssession reverseflash showextpool
commitflash lssestg revertflash showfbvol
commitremoteflash lssi revertremoteflash showgmir
cpauthpol lsss rmarray showgmircg
Man pages
A man page is available for every DS CLI command. Man pages are most commonly seen in
UNIX-based operating systems and give information about command capabilities. This
information can be displayed by issuing the relevant command followed by the -h, -help, or
-? flags.
Example 14-10 on page 317 shows the output of the showioport -metrics command, which
illustrates the many important metrics returned by the command. It provides the performance
counters of the port and the FC link error counters, which are used to
determine the health of the overall communication.
Important: Remember that an array for a DS8000 can only contain one array site, and a
DS8000 array site contains eight disk drive modules (DDMs).
We can now issue the mkarray command to create arrays, as shown in Example 14-12. You
will notice that in this case we have used one array site (in the first array, S1) to create a single
RAID 5 array. If we wished to create a RAID 10 array, we would have to change the -raidtype
parameter to 10, and if we wished to create a RAID 6 array, we would have to change the
-raidtype parameter to 6 (instead of 5).
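As a sketch (the array site names are placeholders), the three supported RAID types would be requested as follows; the -arsite parameter names the array site that the new array consumes:
dscli> mkarray -raidtype 5 -arsite S1
dscli> mkarray -raidtype 6 -arsite S2
dscli> mkarray -raidtype 10 -arsite S3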
We can now see what arrays have been created by using the lsarray command, as shown in
Example 14-13.
We can see in this example the type of RAID array and the number of disks that are allocated
to the array (in this example 6+P+S, which means the usable space of the array is 6 times the
DDM size), as well as the capacity of the DDMs that are used and which array sites were
used to create the arrays.
Once we have created all the ranks, we run the lsrank command. This command displays all
the ranks that have been created, to which server the rank is attached, the RAID type, and the
format of the rank, whether it is Fixed Block (FB) or Count Key Data (CKD).
Example 14-14 shows the mkrank commands and the result of a successful lsrank -l
command.
Example 14-14 Creating and listing ranks with mkrank and lsrank
dscli> mkrank -array A0 -stgtype fb
CMUC00007I mkrank: Rank R0 successfully created.
dscli> mkrank -array A1 -stgtype fb
CMUC00007I mkrank: Rank R1 successfully created.
dscli> lsrank -l
ID Group State datastate Array RAIDtype extpoolID extpoolnam stgtype exts usedexts
=======================================================================================
R0 - Unassigned Normal A0 5 - - fb 773 -
R1 - Unassigned Normal A1 5 - - fb 773 -
For easier management, we create empty Extent Pools related to the type of storage that is in
the pool. For example, create an Extent Pool for high capacity disk, create another for high
performance, and, if needed, Extent Pools for the CKD environment.
When an Extent Pool is created, the system automatically assigns it an Extent Pool ID, which
is a decimal number starting from 0, preceded by the letter P. The ID that was assigned to an
Extent Pool is shown in the CMUC00000I message, which is displayed in response to a
successful mkextpool command. Extent pools associated with rank group 0 get an even ID
number. Extent pools associated with rank group 1 get an odd ID number. The Extent Pool ID
is used when referring to the Extent Pool in subsequent CLI commands. It is therefore good
practice to make note of the ID.
Example 14-15 shows one example of Extent Pools you could define on your machine. This
setup would require a system with at least six ranks.
Note that the mkextpool command forces you to name the Extent Pools. In Example 14-16,
we first create empty Extent Pools using the mkextpool command. We then list the Extent
Pools to get their IDs. Then we attach a rank to an empty Extent Pool using the chrank
command. Finally, we list the Extent Pools again using lsextpool and note the change in the
capacity of the Extent Pool.
Example 14-16 Extent Pool creation using mkextpool, lsextpool, and chrank
dscli> mkextpool -rankgrp 0 -stgtype fb FB_high_0
CMUC00000I mkextpool: Extent Pool P0 successfully created.
dscli> mkextpool -rankgrp 1 -stgtype fb FB_high_1
CMUC00000I mkextpool: Extent Pool P1 successfully created.
dscli> lsextpool
Name ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols
===========================================================================================
FB_high_0 P0 fb 0 below 0 0 0 0 0
FB_high_1 P1 fb 1 below 0 0 0 0 0
dscli> chrank -extpool P0 R0
CMUC00008I chrank: Rank R0 successfully modified.
After having assigned a rank to an Extent Pool, we should be able to see this change when
we display the ranks. In Example 14-17, we can see that rank R0 is assigned to extpool P0.
Example 14-17 Displaying the ranks after assigning a rank to an Extent Pool
dscli> lsrank -l
ID Group State datastate Array RAIDtype extpoolID extpoolnam stgtype exts usedexts
===================================================================================
R0 0 Normal Normal A0 5 P0 FB_high_0 fb 773 0
R1 1 Normal Normal A1 5 P1 FB_high_1 fb 773 0
Example 14-18 shows the creation of a repository. The unit type of the real capacity (-repcap)
and virtual capacity (-vircap) sizes can be specified with the -captype parameter. For FB
Extent Pools, the unit type can be either GB (default) or blocks.
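As an illustrative sketch only (the Extent Pool ID and the sizes are placeholders, and the exact parameter set should be checked against the mksestg description in the CLI User's Guide), a repository with 100 GB of real and 200 GB of virtual capacity in Extent Pool P53 might be created as follows:
dscli> mksestg -extpool P53 -repcap 100 -vircap 200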
You can obtain information about the repository with the showsestg command. Example 14-19
shows the output of the showsestg command. You might particularly be interested in how
much capacity is used within the repository by checking the repcapalloc value.
Note that some more storage is allocated for the repository in addition to repcap size. In
Example 14-19 on page 320, the line that starts with overhead indicates that 3 GB had been
allocated in addition to the repcap size.
In Example 14-20, we have created eight volumes, each with a capacity of 10 GB. The first
four volumes are assigned to rank group 0 and the second four are assigned to rank group 1.
Looking closely at the mkfbvol command used in Example 14-20 on page 321, we see that
volumes 1000 - 1003 are in extpool P0. That Extent Pool is attached to rank group 0, which is
reflected in the first two digits of the volume serial number, 10, an even number.
For volumes 1100 - 1103 in Example 14-20 on page 321, the first two digits of the volume
serial number are 11, which is an odd number, which signifies they belong to rank group 1.
Also note that the -cap parameter determines size, but because the -type parameter was not
used, the default size is a binary size. So these volumes are 10 GB binary, which equates to
10,737,418,240 bytes. If we used the parameter -type ess, then the volumes would be
decimally sized and would be a minimum of 10,000,000,000 bytes in size.
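A minimal sketch of the difference (the volume IDs, names, and pool are placeholders): the first command creates binary-sized volumes (multiples of 2^30 bytes), while the second, with -type ess, creates decimally sized volumes:
dscli> mkfbvol -extpool P0 -cap 10 -name bin_fb_0_#h 1004-1005
dscli> mkfbvol -extpool P0 -cap 10 -type ess -name ess_fb_0_#h 1006-1007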
In Example 14-20 on page 321 we named the volumes using naming scheme high_fb_0_#h,
where #h means you are using the hexadecimal volume number as part of the volume name.
This can be seen in Example 14-21, where we list the volumes that we have created using the
lsfbvol command. We then list the Extent Pools to see how much space we have left after
the volume creation.
Example 14-21 Checking the machine after creating volumes by using lsextpool and lsfbvol
dscli> lsfbvol
Name ID accstate datastate configstate deviceMTM datatype extpool cap (2^30B)
=========================================================================================
high_fb_0_1000 1000 Online Normal Normal 2107-922 FB 512 P0 10.0
high_fb_0_1001 1001 Online Normal Normal 2107-922 FB 512 P0 10.0
high_fb_0_1002 1002 Online Normal Normal 2107-922 FB 512 P0 10.0
high_fb_0_1003 1003 Online Normal Normal 2107-922 FB 512 P0 10.0
high_fb_1_1100 1100 Online Normal Normal 2107-922 FB 512 P1 10.0
high_fb_1_1101 1101 Online Normal Normal 2107-922 FB 512 P1 10.0
high_fb_1_1102 1102 Online Normal Normal 2107-922 FB 512 P1 10.0
high_fb_1_1103 1103 Online Normal Normal 2107-922 FB 512 P1 10.0
dscli> lsextpool
Name ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols
===========================================================================================
FB_high_0 P0 fb 0 below 733 5 733 0 4
FB_high_1 P1 fb 1 below 733 5 733 0 4
Important: For the DS8000, the LSSs can be ID 00 to ID FE. The LSSs are in address
groups. Address group 0 is LSS 00 to 0F, address group 1 is LSS 10 to 1F, and so on. The
moment you create an FB volume in an address group, then that entire address group can
only be used for FB volumes. Be aware of this fact when planning your volume layout in a
mixed FB/CKD DS8000.
You can also specify that you want the extents of the volume you are creating to be evenly
distributed across all ranks within the Extent Pool. This allocation method is called rotate
extents.
The extent allocation method is specified with the -eam rotateexts or -eam rotatevols option
of the mkfbvol command (see Example 14-22).
The showfbvol command with the -rank option (see Example 14-23) shows that the volume
we created is distributed across 12 ranks and how many extents on each rank were allocated
for this volume.
When listing Space Efficient repositories with the lssestg command (see Example 14-25),
we can see that in Extent Pool P53 we have a virtual allocation of 40 extents (GB), but that
the allocated (used) capacity repcapalloc is still zero.
This allocation comes from the volume just created. To see the allocated space in the
repository for just this volume, we can use the showfbvol command (see Example 14-26).
Note: The new capacity must be larger than the previous one; you cannot shrink the
volume.
Because the original volume had the rotateexts attribute, the additional extents are also
striped (see Example 14-28).
Important: Before you can expand a volume, you must delete all Copy Services
relationships for that volume.
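As an unnumbered sketch (assuming the chfbvol command with a -cap parameter is used for the expansion, and using placeholder values), growing volume 1000 from 10 GB to 12 GB might look as follows:
dscli> chfbvol -cap 12 1000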
Deleting volumes
FB volumes can be deleted by using the rmfbvol command.
Starting with Licensed Machine Code (LMC) level 6.5.1.xx, the command includes new
options to prevent the accidental deletion of volumes that are in use. A FB volume is
considered to be “in use”, if it is participating in a Copy Services relationship or if the volume
has received any I/O operation in the previous 5 minutes.
Volume deletion is controlled by the -safe and -force parameters (they cannot be specified
at the same time) as follows:
If neither of the parameters is specified, the system performs checks to see whether or not
the specified volumes are in use. Volumes that are not in use will be deleted and the ones
in use will not be deleted.
In Example 14-29, we create volumes 2100 and 2101 and then assign 2100 to a volume
group. We first try to delete both volumes with the -safe option, but the attempt fails without
deleting either volume because 2100 is assigned to a volume group. We are then able to
delete volume 2101 with the -safe option because it is not assigned to a volume group.
Volume 2100 is assigned to a volume group but is not otherwise in use, so we can delete it by
specifying neither parameter.
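A hedged sketch of the command sequence just described, with output omitted:
dscli> rmfbvol -safe 2100-2101
dscli> rmfbvol -safe 2101
dscli> rmfbvol 2100
The first command deletes nothing because volume 2100 is assigned to a volume group; the second deletes volume 2101; the third deletes volume 2100 after the in-use checks pass.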
Having determined the host type, we can now make a volume group. In Example 14-31, the
example host type we chose is AIX, and in Example 14-30, we can see the address discovery
method for AIX is scsimask.
In this example, we added volumes 1000 to 1002 and 1100 to 1102 to the new volume group.
We did this task to spread the workload evenly across the two rank groups. We then listed all
available volume groups using lsvolgrp. Finally, we listed the contents of volume group V11,
because this was the volume group we created.
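A hedged sketch of how such a volume group might be created and inspected (the volume group name is a placeholder, and the scsimask type follows the AIX discussion above):
dscli> mkvolgrp -type scsimask -volume 1000-1002,1100-1102 AIX_VG_01
dscli> lsvolgrp
dscli> showvolgrp V11
Because the volume group ID is assigned by the system, list the volume groups first to confirm the ID before using it in later commands.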
We might also want to add or remove volumes to this volume group at a later time. To achieve
this goal, we use chvolgrp with the -action parameter. In Example 14-32, we add volume
1003 to volume group V11. We display the results, and then remove the volume.
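A hedged sketch of that add, display, and remove sequence:
dscli> chvolgrp -action add -volume 1003 V11
dscli> showvolgrp V11
dscli> chvolgrp -action remove -volume 1003 V11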
Important: Not all operating systems can deal with the removal of a volume. Consult your
operating system documentation to determine the safest way to remove a volume from a
host.
All operations with volumes and volume groups described previously can also be used with
Space Efficient volumes.
In Example 14-33, we create a single host connection that represents one HBA in our
example AIX host. We use the -hosttype parameter using the hosttype we have in
Example 14-30 on page 326. We allocated it to volume group V11. At this point, provided that
the SAN zoning is correct, the host should be able to see the logical unit numbers (LUNs) in
volume group V11.
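A hedged sketch of such a host connection definition; the WWPN, the host type value, and the connection name are placeholders (check the lshosttype output for the exact host type string that applies to your AIX host):
dscli> mkhostconnect -wwname 10000000C912345F -hosttype pSeries -volgrp V11 AIX_Server_01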
You can also use simply -profile instead of -hosttype. However, this is not a best practice.
Using the -hosttype parameter actually invokes both parameters (-profile and -hosttype).
In contrast, simply using -profile leaves the -hosttype column unpopulated.
There is also the option in the mkhostconnect command to restrict access to only certain I/O
ports. This is done with the -ioport parameter. Restricting access in this way is usually
unnecessary. If you want to restrict access for certain hosts to certain I/O ports on the
DS8000, do this by way of zoning on your SAN switch.
When creating hosts, you can specify the -portgrp parameter. By using a unique port group
number for each attached server, you can easily detect servers with multiple HBAs.
In Example 14-34, we have six host connections. By using the port group number, we see
that there are three separate hosts, each with two HBAs. Port group 0 is used for all hosts that
do not have a port group number set.
In this section, we give examples for several operating systems. In each example, we assign
several logical volumes to an open systems host. We install DS CLI on this host. We log on to
this host and start DS CLI. It does not matter which HMC we connect to with the DS CLI. We
then issue the lshostvol command.
Important: The lshostvol command communicates only with the operating system of the
host on which the DS CLI is installed. You cannot run this command on one host to see the
attached disks of another host.
In fact, from this display, it is not possible to tell if MPIO is even installed. You need to run the
pcmpath query device command to confirm the path count.
Note: If you use Open HyperSwap on a host, the lshostvol command may fail to show any
devices.
Example 14-37 lshostvol on an HP-UX host that does not use SDD
dscli> lshostvol
Disk Name Volume Id Vpath Name
==========================================
c38t0d5 IBM.2107-7503461/1105 ---
c38t0d6 IBM.2107-7503461/1106 ---
Example 14-39 lshostvol on a Solaris host that does not have SDD
dscli> lshostvol
Disk Name Volume Id Vpath Name
==========================================
c6t1d0 IBM-2107.7520781/4200 ---
c6t1d1 IBM-2107.7520781/4201 ---
c7t2d0 IBM-2107.7520781/4200 ---
c7t2d1 IBM-2107.7520781/4201 ---
Example 14-40 lshostvol on a Windows host that does not use SDD or uses SDDDSM
dscli> lshostvol
Disk Name Volume Id Vpath Name
==========================================
Disk2 IBM.2107-7520781/4702 ---
Disk3 IBM.2107-75ABTV1/4702 ---
Disk4 IBM.2107-7520781/1710 ---
Disk5 IBM.2107-75ABTV1/1004 ---
Disk6 IBM.2107-75ABTV1/1009 ---
Disk7 IBM.2107-75ABTV1/100A ---
Disk8 IBM.2107-7503461/4702 ---
Example 14-41 lshostvol on a Windows host that uses SDD
dscli> lshostvol
Disk Name Volume Id Vpath Name
============================================
Disk2,Disk2 IBM.2107-7503461/4703 Disk2
Disk3,Disk3 IBM.2107-7520781/4703 Disk3
Disk4,Disk4 IBM.2107-75ABTV1/4703 Disk4
Note that there is one additional step, which is to create Logical Control Units (LCUs), as
displayed in the following list.
You do not have to create volume groups or host connects for CKD volumes. If there are I/O
ports in Fibre Channel connection (FICON) mode, access to CKD volumes by FICON hosts is
granted automatically.
Space Efficient repository creation for CKD Extent Pools is identical to that of FB Extent
Pools, with the exception that the size of the repository’s real capacity and virtual capacity are
expressed either in cylinders or as multiples of 3390 model 1 disks (the default for CKD
Extent Pools), instead of in GB or blocks, which apply to FB Extent Pools only.
You can obtain information about the repository with the showsestg command. Example 14-44
shows the output of the showsestg command. You might particularly be interested in how
much capacity is used in the repository; to obtain this information, check the repcapalloc
value.
Note that some storage is allocated for the repository in addition to repcap size. In
Example 14-44, the line that starts with overhead indicates that 4 GB had been allocated in
addition to the repcap size.
To display the LCUs that we have created, we use the lslcu command.
In Example 14-46, we create two LCUs using the mklcu command, and then list the created
LCUs using the lslcu command. Note that by default the LCUs that were created are 3990-6.
Also note that because we created two LCUs (using the parameter -qty 2), the first LCU,
which is ID BC (an even number), is in address group 0, which equates to rank group 0. The
second LCU, which is ID BD (an odd number), is in address group 1, which equates to rank
group 1. By placing the LCUs into both address groups, we maximize performance by
spreading workload across both rank groups of the DS8000.
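A hedged sketch of how these two LCUs might be created and listed (the subsystem ID given with -ss is a placeholder and must be unique in your environment):
dscli> mklcu -qty 2 -id BC -ss FF10
dscli> lslcu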
Note: For the DS8000, the CKD LCUs can be ID 00 to ID FE. The LCUs fit into one of 16
address groups. Address group 0 is LCUs 00 to 0F, address group 1 is LCUs 10 to 1F, and
so on. If you create a CKD LCU in an address group, then that address group cannot be
used for FB volumes. Likewise, if there were, for example, FB volumes in LSS 40 to 4F
(address group 4), then that address group cannot be used for CKD. Be aware of this
limitation when planning the volume layout in a mixed FB/CKD DS8000.
The major difference to note here is that the capacity is expressed in either cylinders or as
CKD extents (1,113 cylinders). In order to not waste space, use volume capacities that are a
multiple of 1,113 cylinders. Also new is the support of DS8000 Licensed Machine Code
5.4.xx.xx for Extended Address Volumes (EAV). This support expands the maximum size of a
CKD volume to 262,668 cylinders and creates a new device type, 3390 Model A. This new
volume can only be used by IBM z/OS systems running V1.10 or later versions.
Note: For 3390-A volumes, the size can be specified from 1 to 65,520 in increments of 1
and from 65,667 (next multiple of 1113) to 262,668 in increments of 1113.
Remember, we can only create CKD volumes in LCUs that we have already created.
You also need to be aware that volumes in even numbered LCUs must be created from an
Extent Pool that belongs to rank group 0. Volumes in odd numbered LCUs must be created
from an Extent Pool in rank group 1.
You can also specify that you want the extents of the volume to be evenly distributed across
all ranks within the Extent Pool. This allocation method is called rotate extents.
The extent allocation method is specified with the -eam rotateexts or -eam rotatevols
option of the mkckdvol command (see Example 14-48).
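A hedged sketch of CKD volume creation with rotate extents (the Extent Pool, capacity, and volume IDs are placeholders; the pool must belong to rank group 0 because LCU BC is even, and 10,017 cylinders is a multiple of 1,113):
dscli> mkckdvol -extpool P2 -cap 10017 -eam rotateexts -name ckd_bc_#h BC00-BC03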
Note: In DS8800 with Licensed Machine Code (LMC) 6.6.xxx, the default allocation policy
has changed to rotate extents.
The showckdvol command with the -rank option (see Example 14-49) shows that the volume
we created is distributed across two ranks, and it also displays how many extents on each
rank were allocated for this volume.
A Track Space Efficient volume is created by specifying the -sam tse parameter with the
mkckdvol command (see Example 14-50).
When listing Space Efficient repositories with the lssestg command (see Example 14-51),
we can see that in Extent Pool P4 we have a virtual allocation of 7.9 GB, but that the allocated
(used) capacity repcapalloc is still zero.
This allocation comes from the volume just created. To see the allocated space in the
repository for just this volume, we can use the showckdvol command (see Example 14-52).
Because the original volume had the rotateexts attribute, the additional extents are also
striped (see Example 14-54).
Note: Before you can expand a volume, you first have to delete all Copy Services
relationships for that volume, and you may not specify both -cap and -datatype for the
chckdvol command.
It is possible to expand a 3390 Model 9 volume to a 3390 Model A. You can do that just by
specifying a new capacity for an existing Model 9 volume. When you increase the size of a
3390-9 volume beyond 65,520 cylinders, its device type automatically changes to 3390-A.
However, keep in mind that a 3390 Model A can only be used in z/OS V1.10 and later (see
Example 14-55).
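A hedged sketch of such an expansion, assuming the chckdvol command accepts the new capacity in cylinders and using a placeholder volume ID:
dscli> chckdvol -cap 262668 BC00
Because the new size exceeds 65,520 cylinders, the device type of the volume changes to 3390-A, so the volume must only be used by z/OS V1.10 or later.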
You cannot reduce the size of a volume. If you try, an error message is displayed, as shown in
Example 14-56.
Deleting volumes
CKD volumes can be deleted by using the rmckdvol command. FB volumes can be deleted
by using the rmfbvol command.
Starting with Licensed Machine Code (LMC) level 6.5.1.xx, the command includes a new
capability to prevent the accidental deletion of volumes that are in use. A CKD volume is
considered to be in use if it is participating in a Copy Services relationship, or if the IBM
System z path mask indicates that the volume is in a “grouped state” or online to any host
system.
If the -force parameter is not specified with the command, volumes that are in use are not
deleted. If multiple volumes are specified and some are in use and some are not, the ones not
in use will be deleted. If the -force parameter is specified on the command, the volumes will
be deleted without checking to see whether or not they are in use.
In Example 14-57, we try to delete two volumes, 0900 and 0901. Volume 0900 is online to a
host, while 0901 is not online to any host and not in a Copy Services relationship. The
rmckdvol 0900-0901 command deletes just volume 0901, which is offline. To delete volume
0900, we use the -force parameter.
When IBM releases new microcode for the DS8800, it is released in the form of a bundle. The
term bundle is used because a new code release can include updates for various DS8800
components. These updates are tested together and then the various code packages are
bundled together into one unified release. In general, when referring to what code level is
being used on a DS8800, the term bundle should be used. Components within the bundle will
each have their own revision levels.
For a DS8000 Cross-Reference table of Code Bundles, visit the following site:
http://www.ibm.com/
Click Support & Downloads → Support by Product → Storage.
Select 1. Choose your products → Disk systems → Enterprise Storage Servers → DS8800.
Select 2. Choose your task → Downloads.
Select 3. See your results → View your page.
Click DS8800 Code Bundle Information.
The Cross-Reference Table shows the levels of code for Release 6, which is installed on the
DS8800. It should be updated as new bundles are released. It is important to always match
your DS CLI version to the bundle installed on your DS8800.
For the DS8800, the naming convention of bundles is PR.MM.AA.E, where the letters refer to:
P Product (8 = DS8800)
R Release (6)
MM Maintenance Level (xx)
AA Service Pack (xx)
E EFIX level (0 is base, and 1.n is the interim fix build above base level.)
Important: Licensed Machine Code is always provided and installed by IBM Service
Engineers. Licensed Machine Code is not a client-serviceable task.
It is likely that a new bundle will include updates for the following components:
Linux OS for the HMC
AIX OS for the CECs
Microcode for HMC and CECs
Microcode/Firmware for Host Adapters
While the installation process described above might seem complex, it does not require a
great deal of user intervention. The code installer normally simply starts the distribution and
activation process and then monitors its progress using the HMC.
Important: An upgrade of the DS8800 microcode might require that you upgrade the DS
CLI on workstations. Check with your IBM representative regarding the description and
contents of the release bundle.
Best practice: Many clients with multiple DS8800 systems follow the updating schedule
detailed here, wherein the HMC is updated 1 to 2 days before the rest of the bundle is
applied.
Prior to the update of the CEC operating system and microcode, a pre-verification test is run
to ensure that no conditions exist that need to be corrected. The HMC code update will install
the latest version of the pre-verification test. Then the newest test can be run and if problems
are detected, there are one to two days before the scheduled code installation window to
correct them. An example of this procedure is illustrated here:
Thursday: Copy or download the new code bundle to the HMCs.
Update the HMC(s) to the new code bundle.
Run the updated preverification test.
Resolve any issues raised by the preverification test.
Saturday: Update the SFIs.
Note that the actual time required for the concurrent code load varies based on the bundle
that you are currently running and the bundle to which you are updating. Always consult with
your IBM service representative regarding proposed code load schedules.
Additionally, it is good practice to check at regular intervals that multipathing drivers and SAN
switch firmware are at current levels.
CUIR allows the DS8800 to request that all attached system images set all paths required for
a particular service action to the offline state. System images with the appropriate level of
software support respond to these requests by varying off the affected paths, and either
notifying the DS8800 subsystem that the paths are offline, or notifying it that they cannot take
the paths offline. CUIR reduces manual operator intervention and the possibility of human error during
maintenance actions, at the same time reducing the time required for the maintenance
window. This is particularly useful in environments where there are many systems attached to
a DS8800.
15.8 Summary
IBM might release changes to the DS8800 series Licensed Machine Code. These changes
may include code fixes and feature updates relevant to the DS8800.
These updates and the information regarding them are detailed on the DS8000 Code
Cross-Reference website as previously mentioned.
It is important that the Code Bundle installations are planned and coordinated to ensure
connectivity is maintained to the DS8800 system; this includes the DS CLI and the SSPC.
SNMP provides a standard MIB that includes information such as IP addresses and the
number of active TCP connections. The actual MIB definitions are encoded into the agents
running on a system.
MIB-2 is the Internet standard MIB that defines over 100 TCP/IP specific objects, including
configuration and statistical information, such as:
Information about interfaces
Address translation
IP, Internet-control message protocol (ICMP), TCP, and User Datagram Protocol (UDP)
SNMP can be extended through the use of the SNMP Multiplexing protocol (SMUX protocol)
to include enterprise-specific MIBs that contain information related to a specific environment
or application. A management agent (a SMUX peer daemon) retrieves and maintains
information about the objects defined in its MIB and passes this information on to a
specialized network monitor or network management station (NMS).
The SNMP protocol defines two terms, agent and manager, instead of the terms client and
server, which are used in many other TCP/IP protocols.
Agents send traps to the SNMP manager to indicate that a particular condition exists on the
agent system, such as the occurrence of an error. An agent also generates traps when it
detects status changes or other unusual conditions for the network objects that it monitors.
You can gather various kinds of information about specific IP hosts by sending SNMP get and
get-next requests, and you can update the configuration of IP hosts by sending an SNMP set
request.
The SNMP agent can send SNMP trap requests to SNMP managers, which listen on UDP
port 162. The SNMP trap requests sent from SNMP agents can be used to send warning,
alert, or error notification messages to SNMP managers.
Note that you can configure an SNMP agent to send SNMP trap requests to multiple SNMP
managers. Figure 16-1 illustrates the characteristics of SNMP architecture and
communication.
Therefore, you should be careful about SNMP security. At the very least, do not allow
access to hosts that are running the SNMP agent from networks or IP hosts that do not
require access.
You might want to physically secure the network to which you send SNMP packets by using a
firewall, because community strings are included as plain text in SNMP packets.
Most hardware and software vendors provide you with extended MIB objects to support their
own requirements. The SNMP standards allow this extension by using the private sub-tree,
called enterprise specific MIB. Because each vendor has a unique MIB sub-tree under the
private sub-tree, there is no conflict among vendors’ original MIB extensions.
A trap message contains pairs of an OID and a value, as shown in Table 16-1, to indicate the
cause of the trap. You can also use type 6, the enterpriseSpecific trap type, when you have
to send messages that do not fit other predefined trap types, for example, DISK I/O error
and application down. You can also set an integer value field called Specific Trap on your
trap message.
The DS8000 does not have an SNMP agent installed that can respond to SNMP polling. The
default Community Name is set to public.
The management server that is configured to receive the SNMP traps receives all the generic
trap 6 and specific trap 3 messages, which are sent in parallel with the Call Home to IBM.
Before configuring SNMP for the DS8000, you are required to get the destination address for
the SNMP trap and also the port information on which the Trap Daemon listens.
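As a hedged sketch, the trap destinations might be configured on the Storage Complex with a command of the following form; the -snmp and -snmpaddr parameter names are assumptions suggested by the showsp output that appears later in this chapter, and the IP addresses are placeholders:
dscli> chsp -snmp on -snmpaddr 10.10.10.11,10.10.10.12
dscli> showsp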
A serviceable event is posted as a generic trap 6 specific trap 3 message. The specific trap 3
is the only event that is sent for serviceable events. For reporting Copy Services events,
generic trap 6 and specific traps 100, 101, 102, 200, 202, 210, 211, 212, 213, 214, 215, 216,
or 217 are sent.
The SNMP trap is sent in parallel with a Call Home for service to IBM.
For open events in the event log, a trap is sent every eight hours until the event is closed. Use
the following link to find explanations of all System Reference Codes (SRCs):
http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000sv/index.jsp
This chapter describes only the messages and the circumstances when traps are sent by the
DS8000. For detailed information about these functions and terms, refer to IBM System
Storage DS8000: Copy Services for IBM System z, SG24-6787 and IBM System Storage
DS8000: Copy Services for Open Systems, SG24-6788.
If one or several links (but not all links) are interrupted, a trap 100, as shown in Example 16-2,
is posted and indicates that the redundancy is degraded. The RC column in the trap
represents the return code for the interruption of the link; return codes are listed in Table 16-2
on page 355.
Example 16-2 Trap 100: Remote Mirror and Copy links degraded
PPRC Links Degraded
UNIT: Mnf Type-Mod SerialNm LS
PRI: IBM 2107-922 75-20781 12
SEC: IBM 2107-9A2 75-ABTV1 24
Path: Type PP PLink SP SLink RC
1: FIBRE 0143 XXXXXX 0010 XXXXXX 15
2: FIBRE 0213 XXXXXX 0140 XXXXXX OK
If all links are interrupted, a trap 101, as shown in Example 16-3, is posted. This event
indicates that no communication between the primary and the secondary system is possible.
Example 16-3 Trap 101: Remote Mirror and Copy links are inoperable
PPRC Links Down
UNIT: Mnf Type-Mod SerialNm LS
PRI: IBM 2107-922 75-20781 10
SEC: IBM 2107-9A2 75-ABTV1 20
Path: Type PP PLink SP SLink RC
1: FIBRE 0143 XXXXXX 0010 XXXXXX 17
2: FIBRE 0213 XXXXXX 0140 XXXXXX 17
When one or more of the interrupted links become available again and the DS8000 can
communicate with the secondary system, a trap 102, as shown in Example 16-4, is sent.
Example 16-4 Trap 102: Remote Mirror and Copy links are operational
PPRC Links Up
UNIT: Mnf Type-Mod SerialNm LS
PRI: IBM 2107-9A2 75-ABTV1 21
Table 16-2 lists the Remote Mirror and Copy return codes.
Example 16-5 Trap 200: LSS Pair Consistency Group Remote Mirror and Copy pair error
LSS-Pair Consistency Group PPRC-Pair Error
UNIT: Mnf Type-Mod SerialNm LS LD SR
PRI: IBM 2107-922 75-03461 56 84 08
SEC: IBM 2107-9A2 75-ABTV1 54 84
Trap 202, as shown in Example 16-6, is sent if a Remote Copy Pair goes into a suspend state.
The trap contains the serial number (SerialNm) of the primary and secondary machine, the
logical subsystem or LSS (LS), and the logical device (LD). To avoid SNMP trap flooding, the
number of SNMP traps for the LSS is throttled. The complete suspended pair information is
represented in the summary. The last row of the trap represents the suspend state for all pairs
in the reporting LSS. The suspended pair information contains a hexadecimal string of a
length of 64 characters. By converting this hex string into binary, each bit represents a single
device. If the bit is 1, then the device is suspended; otherwise, the device is still in full duplex
mode.
Example 16-6 Trap 202: Primary Remote Mirror and Copy devices on the LSS were suspended
because of an error
Trap 210, as shown in Example 16-7, is sent when a Consistency Group in a Global Mirror
environment was successfully formed.
Example 16-7 Trap210: Global Mirror initial Consistency Group successfully formed
2005/11/14 15:30:55 CET
Asynchronous PPRC Initial Consistency Group Successfully Formed
UNIT: Mnf Type-Mod SerialNm
IBM 2107-922 75-20781
Session ID: 4002
Trap 211, as shown in Example 16-8, is sent if the Global Mirror setup got into a severe error
state, where no attempts are made to form a Consistency Group.
Trap 212, shown in Example 16-9, is sent when a Consistency Group cannot be created in a
Global Mirror relationship. Some of the reasons might be:
Volumes have been taken out of a copy session.
The Remote Copy link bandwidth might not be sufficient.
The FC link between the primary and secondary system is not available.
Example 16-9 Trap 212: Global Mirror Consistency Group failure - Retry will be attempted
Asynchronous PPRC Consistency Group Failure - Retry will be attempted
UNIT: Mnf Type-Mod SerialNm
IBM 2107-922 75-20781
Session ID: 4002
Trap 213, shown in Example 16-10, is sent when a Consistency Group in a Global Mirror
environment can be formed after a previous Consistency Group formation failure.
Example 16-10 Trap 213: Global Mirror Consistency Group successful recovery
Asynchronous PPRC Consistency Group Successful Recovery
UNIT: Mnf Type-Mod SerialNm
IBM 2107-9A2 75-ABTV1
Session ID: 4002
Trap 215, shown in Example 16-12, is sent if, in the Global Mirror Environment, the master
detects a failure to complete the FlashCopy commit. The trap is sent after a number of commit
retries have failed.
Example 16-12 Trap 215: Global Mirror FlashCopy at Remote Site unsuccessful
Asynchronous PPRC FlashCopy at Remote Site Unsuccessful
A UNIT: Mnf Type-Mod SerialNm
IBM 2107-9A2 75-ABTV1
Session ID: 4002
Trap 216, shown in Example 16-13, is sent if a Global Mirror Master cannot terminate the
Global Copy relationship at one of its subordinates. This might occur if the master is
terminated with rmgmir but the master cannot terminate the copy relationship on the
subordinate. You might need to run rmgmir against the subordinate to prevent any
interference with other Global Mirror sessions.
Trap 217, shown in Example 16-14, is sent if a Global Mirror environment was suspended by
the DS CLI command pausegmir or the corresponding GUI function.
Trap 218, shown in Example 16-15, is sent if a Global Mirror has exceeded the allowed
threshold for failed consistency group formation attempts.
Example 16-15 Trap 218: Global Mirror number of consistency group failures exceed threshold
Global Mirror number of consistency group failures exceed threshold
UNIT: Mnf Type-Mod SerialNm
IBM 2107-9A2 75-ABTV1
Session ID: 4002
Example 16-16 Trap 219: Global Mirror first successful consistency group after prior failures
Global Mirror first successful consistency group after prior failures
UNIT: Mnf Type-Mod SerialNm
IBM 2107-9A2 75-ABTV1
Session ID: 4002
Trap 220, shown in Example 16-17, is sent if a Global Mirror has exceeded the allowed
threshold of failed FlashCopy commit attempts.
Example 16-17 Trap 220: Global Mirror number of FlashCopy commit failures exceed threshold
Global Mirror number of FlashCopy commit failures exceed threshold
UNIT: Mnf Type-Mod SerialNm
IBM 2107-9A2 75-ABTV1
Session ID: 4002
Trap 221, shown in Example 16-18, is sent when the repository has reached the user-defined
warning watermark or when physical space is completely exhausted.
Example 16-18 Trap 221: Space Efficient repository or overprovisioned volume has reached a warning
watermark
Space Efficient Repository or Over-provisioned Volume has reached a warning
watermark
UNIT: Mnf Type-Mod SerialNm
IBM 2107-9A2 75-ABTV1
Session ID: 4002
The network management server that is configured on the HMC receives all the generic trap
6 specific trap 3 messages, which are sent in parallel with any events that Call Home to IBM.
dscli> showsp
Date/Time: November 16, 2005 10:15:04 AM CET IBM DSCLI Version: 5.1.0.204
Name IbmStoragePlex_2
desc ATS #1
acct -
SNMP Enabled
SNMPadd 10.10.10.11,10.10.10.12
emailnotify Disabled
emailaddr -
emailrelay Disabled
emailrelayaddr -
emailrelayhost -
This plan of providing support remotely must be balanced with the client’s expectations for
security. Maintaining the highest levels of security in a data connection is a primary goal for
IBM. This goal can only be achieved by careful planning with a client and a thorough review of
all the options available.
IP network
There are many protocols running on Local Area Networks (LANs) around the world. Most
companies use the Transmission Control Protocol/Internet Protocol (TCP/IP) standard for their
connectivity between workstations and servers. IP is also the networking protocol of the
global Internet. Web browsing and e-mail are two of the most common applications that run
on top of an IP network. IP is the protocol used by the DS8000 HMC to communicate with
external systems, such as the SSPC or DS CLI workstations. There are two varieties of IP;
refer to 8.3.2, “System Storage Productivity Center and network access” on page 166 for a
discussion about the IPv4 and IPv6 networks.
SSH
Secure Shell is a protocol that establishes a secure communications channel between two
computer systems. The term SSH is also used to describe a secure ASCII terminal session
between two computers. SSH can be enabled on a system when regular Telnet and FTP are
disabled, making it possible to only communicate with the computer in a secure manner.
FTP
File Transfer Protocol is a method of moving binary and text files from one computer system
to another over an IP connection. It is not inherently secure as it has no provisions for
encryption and only simple user and password authentication. FTP is considered appropriate
for data that is already public, or if the entirety of the connection is within the physical
boundaries of a private network.
SFTP
SSH File Transfer Protocol is unrelated to FTP. It is another file transfer method that is
implemented inside a SSH connection. SFTP is generally considered to be secure enough for
mission critical data and for moving sensitive data across the global Internet. FTP ports
(usually ports 20/21) do not have to be open through a firewall for SFTP to work.
SSL
Secure Sockets Layer refers to methods of securing otherwise unsecure protocols such as
HTTP (websites), FTP (files), or SMTP (e-mail). Carrying HTTP over SSL is often referred to
as HTTPS. An SSL connection over the global Internet is considered reasonably secure.
VPN
A Virtual Private Network is a private “tunnel” through a public network. Most commonly, it
refers to using specialized software and hardware to create a secure connection over the
Internet. The two systems, although physically separate, behave as though they are on the
same private network. A VPN allows a remote worker or an entire remote office to remain part
of a company’s internal network. VPNs provide security by encrypting traffic, authenticating
sessions and users, and verifying data integrity.
IPSec
Internet Protocol Security is a suite of protocols used to provide a secure transaction between
two systems that use the TCP/IP network protocol. IPSec focuses on authentication and
encryption, two of the main ingredients of a secure connection. Most VPNs used on the
Internet use IPSec mechanisms to establish the connection.
Firewall
A firewall is a device that controls whether data is allowed to travel onto a network segment.
Firewalls are deployed at the boundaries of networks. They are managed by policies which
declare what traffic can pass based on the sender’s address, the destination address, and the
type of traffic. Firewalls are an essential part of network security and their configuration must
be taken into consideration when planning remote support activities.
Bandwidth
Bandwidth refers to the characteristics of a connection and how they relate to moving data.
Bandwidth is affected by the physical connection, the logical protocols used, physical
distance, and the type of data being moved. In general, higher bandwidth means faster
movement of larger data sets.
In the most secure environments, both of these physical connections (Ethernet and modem)
remain unplugged. The DS8000 serves up storage for its connected hosts, but has no other
communication with the outside world. This means that all configuration tasks have to be
done while standing at the HMC (there is no usage of the SSPC or DS CLI). This level of
security, known as an air gap, also means that there is no way for the DS8000 to alert anyone
that it has encountered a problem and there is no way to correct such a problem other than to
be physically present at the system.
So rather than leaving the modem and Ethernet disconnected, clients will provide these
connections and then apply policies on when they are to be used and what type of data they
may carry. Those policies are enforced by the settings on the HMC and the configuration of
client network devices, such as routers and firewalls. The next four sections discuss the
capabilities of each type of connection.
17.3.1 Modem
A modem creates a low-speed asynchronous connection using a telephone line plugged into
the HMC modem port. This type of connection favors transferring small amounts of data. It is
relatively secure because the data is not traveling across the Internet. However, this type of
connection is not terribly useful due to bandwidth limitations. Average connection speed in the
US mainland is 28-36 Kbps, and can be less in other parts of the world.
DS8000 HMC modems can be configured to call IBM and send small status messages.
Authorized support personnel can call the HMC and get privileged access to the command
line of the operating system. Typical PEPackage transmission over a modem line could take
15 to 20 hours depending on the quality of the connection. Code downloads over a modem
line are not possible.
The client has control over whether or not the modem will answer an incoming call. These
options are changed from the WebUI on the HMC by selecting Service Management →
Manage Inbound Connectivity, as shown in Figure 17-1.
The HMC provides several settings to govern the usage of the modem port:
Unattended Session
This check box allows the HMC to answer modem calls without operator intervention. If
this is not checked, then someone must go to the HMC and allow for the next expected
call. IBM Support must contact the client every time they need to dial in to the HMC.
Duration: Continuous
This option indicates that the HMC can answer all calls at all times.
Duration: Automatic
This option indicates that the HMC will answer all calls for n days following the creation of
any new Serviceable Event (problem).
Duration: Temporary
This option sets a starting and ending date, during which the HMC will answer all calls.
These options are shown in Figure 17-2. See Figure 17-3 on page 374 for an illustration of a
modem connection.
HMCs connected to a client IP network, and eventually to the Internet, can send status
updates and offloaded problem data to IBM using SSL sessions. They can also use FTP to
retrieve new code bundles from the IBM code repository. It typically takes less than an hour to
move the information.
Though favorable for speed and bandwidth, network connections introduce security concerns.
Care must be taken to:
Verify the authenticity of data, that is, is it really from the sender it claims to be?
Verify the integrity of data, that is, has it been altered during transmission?
Verify the security of data, that is, can it be captured and decoded by unwanted systems?
The Secure Sockets Layer (SSL) protocol is one answer to these questions. It provides
transport layer security with authenticity, integrity, and confidentiality, for a secure connection
between the client network and IBM. Some of the features that are provided by SSL are:
Client and server authentication to ensure that the appropriate machines are exchanging
data
Data signing to prevent unauthorized modification of data while in transit
Data encryption to prevent the exposure of sensitive information while data is in transit
See Figure 17-4 on page 375 for an illustration of a basic network connection.
Having the safety of running within a VPN, IBM can use its service interface (WebUI) to:
Check the status of components and services on the DS8000 in real time
Queue up diagnostic data offloads
Start, monitor, pause, and restart repair service actions
Performing the following steps results in the HMC creating a VPN tunnel back to the IBM
network, which service personnel can then use. There is no VPN service that sits idle, waiting
for a connection to be made by IBM. Only the HMC is allowed to initiate the VPN tunnel, and
it can only be made to predefined IBM addresses. The steps to create a VPN tunnel from the
DS8000 HMC to IBM are listed here:
1. IBM support calls the HMC using the modem. After the first level of authentications, the
HMC is asked to launch a VPN session.
2. The HMC hangs up the modem call and initiates a VPN connection back to a predefined
address or port within IBM Support.
3. IBM Support verifies that they can see and use the VPN connection from an IBM internal
IP address.
4. IBM Support launches the WebUI or other high-bandwidth tools to work on the DS8000.
Clients who work with many vendors that have their own remote support systems often own
and manage a VPN appliance, a server that sits on the edge of their network and creates
tunnels with outside entities. This is true for many companies that have remote workers,
outside sales forces, or small branch offices. Because the device is already configured to
meet the client’s security requirements, they only need to add appropriate policies for IBM
support. Most commercially-available VPN servers are interoperable with the IPSec-based
VPN that IBM needs to establish. Using a Business-to-Business VPN layout leverages the
investment that a client has already made in establishing secure tunnels into their network.
The VPN tunnel that gets created is valid for IBM Remote Support use only and has to be
configured both on the IBM and client sides. This design provides several advantages for the
client:
Allows the client to use Network Address Translation (NAT) so that the HMC is given a
non-routable IP address behind the company firewall.
Allows the client to inspect the TCP/IP packets that are sent over this VPN.
Allows the client to disable the VPN on their device for “lockdown” situations.
Note that the Business-to-Business VPN only provides the tunnel that service personnel can
use to actively work with the HMC from within IBM. To offload data or call home, the HMC still
needs to have one of the following:
Modem access
Non-VPN network access (SSL connection)
Traditional VPN access
See Figure 17-6 on page 377 for an illustration of a Business-to-Business VPN connection.
Call Home
Call Home is the capability of the HMC to contact IBM Service to report a service event. This
is referred to as Call Home for service. The HMC provides machine reported product data
(MRPD) information to IBM by way of the Call Home facility. The MRPD information includes
installed hardware, configurations, and features. The Call Home also includes information
about the nature of a problem so that an active investigation can be launched. Call Home is a
one-way communication, with data moving from the DS8000 HMC to the IBM data store.
Heartbeat
The DS8000 also uses the Call Home facility to send proactive heartbeat information to IBM.
A heartbeat is a small message with some basic product information so that IBM knows the
unit is operational. By sending heartbeats, both IBM and the client ensure that the HMC is
always able to initiate a full Call Home to IBM in the case of an error. If the heartbeat
information does not reach IBM, a service call to the client will be made to investigate the
status of the DS8000. Heartbeats represent a one-way communication, with data moving
from the DS8000 HMC to the IBM data store.
Call Home information and heartbeat information are stored in the IBM internal data store so
the support representatives have access to the records.
Modem offload
The HMC can be configured to support automatic data offload using the internal modem and
a regular phone line. Offloading a PEPackage over a modem connection is extremely slow, in
many cases taking 15 to 20 hours. It also ties up the modem for this time so that IBM support
cannot dial in to the HMC to perform command-line tasks. If this is the only connectivity option
available, be aware that the overall process of remote support will be delayed while data is in
transit.
Note: FTP offload of data is supported as an outbound service only. There is no active
FTP server running on the HMC that can receive connection requests.
When a direct FTP session across the Internet is not available or desirable, a client can
configure the FTP offload to use a client-provided FTP proxy server. The client then becomes
responsible for configuring the proxy to forward the data to IBM.
The client is required to manage its firewalls so that FTP traffic from the HMC (or from an FTP
proxy) can pass onto the Internet.
SSL offload
For environments that do not permit FTP traffic out to the Internet, the DS8000 also supports
offload of data using SSL security. In this configuration, the HMC uses the client-provided
network connection to connect to the IBM data store, the same as in a standard FTP offload.
But with SSL, all the data is encrypted so that it is rendered unusable if intercepted.
Client firewall settings between the HMC and the Internet for SSL setup require four IP
addresses open on port 443 based on geography as detailed here:
North and South America
129.42.160.48 IBM Authentication Primary
207.25.252.200 IBM Authentication Secondary
129.42.160.49 IBM Data Primary
207.25.252.204 IBM Data Secondary
All other regions
129.42.160.48 IBM Authentication Primary
207.25.252.200 IBM Authentication Secondary
129.42.160.50 IBM Data Primary
207.25.252.205 IBM Data Secondary
Loading code bundles from CDs is the only option for DS8000 installations that have no
outside connectivity at all. If the HMC is connected to the client network then IBM support will
download the bundles from IBM using either FTP or SFTP.
FTP
If allowed, the support representative will open an FTP session from the HMC to the IBM
code repository and download the code bundle(s) to the HMC. The client firewall will need to
be configured to allow the FTP traffic to pass.
After the code bundle is acquired from IBM, the FTP or SFTP session will be closed and the
code load can take place without needing to communicate outside of the DS8000.
IBM may need to trigger a data offload, perhaps more than one, and at the same time be able
to interact with the DS8000 to dig deeper into the problem and develop an action plan to
restore the system to normal operation. This type of interaction with the HMC is what requires
the most bandwidth.
If the only available connectivity is by modem, then IBM Support will have to wait until any
data offload is complete and then attempt the diagnostics and repair from a command-line
environment on the HMC. This process is slower and more limited in scope than if a network
connection can be used.
If a VPN is available, either from the HMC directly to IBM or by using VPN devices
(Business-to-Business VPN option), then enough bandwidth is available for data offload and
interactive troubleshooting to be done at the same time. IBM Support will be able to use
graphical tools (WebUI and others) to diagnose and repair the problem.
17.5 Scenarios
Now that the four connection options have been reviewed (see 17.3, “Remote connection
types” on page 367) and the tasks have been reviewed (see 17.4, “DS8000 support tasks” on
page 370), we can examine how each task is performed given the type of access available to
the DS8000.
17.5.1 No connections
If neither the modem nor the Ethernet connection is physically connected and configured,
then the tasks are performed as follows:
Call Home and heartbeat: The HMC will not send heartbeats to IBM. The HMC will not call
home if a problem is detected. IBM Support will need to be notified at the time of
installation to add an exception for this DS8000 in the heartbeats database, indicating that
it is not expected to contact IBM.
Data offload: If absolutely required and allowed by the client, diagnostic data can be
burned onto a DVD, transported to an IBM facility, and uploaded to the IBM data store.
Code download: Code must be loaded onto the HMC using CDs carried in by the Service
Representative.
See Figure 17-4 for an illustration of a modem and network connection without using VPN
tunnels.
Figure 17-4 Remote support with modem and network (no VPN)
See Figure 17-5 for an illustration of a modem and network connection plus traditional VPN.
Figure 17-5 Remote support with modem and network plus traditional VPN
Example 17-1 illustrates the DS CLI command offloadauditlog, which provides clients with
the ability to offload the audit logs to the DS CLI workstation in a directory of their choice.
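A hedged sketch of such an offload; the -logaddr value and the target file path are assumptions and should be adapted to your environment:
dscli> offloadauditlog -logaddr smc1 c:\temp\ds8800_audit.txt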
Example 17-2 Audit log entries related to a remote support event via modem
U,2009/10/05
18:20:49:000,,1,IBM.2107-7520780,N,8000,Phone_started,Phone_connection_started
U,2009/10/05 18:21:13:000,,1,IBM.2107-7520780,N,8036,Authority_to_root,Challenge
Key = 'ZyM1NGMs'; Authority_upgrade_to_root,,,
U,2009/10/05 18:26:02:000,,1,IBM.2107-7520780,N,8002,Phone_ended,Phone_connection_
ended
The Challenge Key shown is not a password on the HMC. It is a token shown to the IBM
support representative that is dialing in to the DS8000. The representative must use the
Challenge Key in an IBM internal tool to generate a Response Key that is given to the HMC.
The Response Key acts as a one-time authorization to the features of the HMC. The
Challenge and Response Keys change every time a remote connection is made.
For a detailed description about how auditing is used to record “who did what and when” in
the audited system, as well as a guide to log management, visit the following address:
http://csrc.nist.gov/publications/nistpubs/800-92/SP800-92.pdf
Note: Full Disk Encryption (FDE) drives can only be added to a DS8800 that was initially
ordered with FDE drives installed. See IBM System Storage DS8700 Disk Encryption
Implementation and Usage Guidelines, REDP-4500, for more information about full disk
encryption restrictions.
The disk drives are installed in Storage Enclosures (SEs). A storage enclosure interconnects
the DDMs to the controller cards that connect to the device adapters. Each storage enclosure
contains a redundant pair of controller cards. Each of the controller cards also has redundant
trunking. Figure 18-1 illustrates a Storage Enclosure.
Each storage enclosure attaches to two device adapters (DAs). The DAs are the RAID
adapter cards that connect the CECs to the DDMs. The DS8800 DA cards are always
installed as a redundant pair, so they are referred to as DA pairs.
Physical installation and testing of the device adapters, storage enclosure pairs, and DDMs
are performed by your IBM service representative. After the additional capacity is added
successfully, the new storage appears as additional unconfigured array sites.
You might need to obtain new license keys and apply them to the storage image before you
start configuring the new capacity; see Chapter 10, “IBM System Storage DS8800 features
and license keys” on page 203 for more information. You cannot create ranks using the new
capacity if this causes your machine to exceed its license key limits. Be aware that applying
increased feature activation codes is a concurrent action, but a license reduction or
deactivation is often a disruptive action.
Note: Special restrictions in terms of placement and intermixing apply when adding Solid
State Drives. Refer to DS8000: Introducing Solid State Drives, REDP-4522.
As a general rule, when adding capacity to a DS8800, storage hardware is populated in the
following order:
1. DDMs are added to underpopulated enclosures. Whenever you add 16 DDMs to a
machine, eight DDMs are installed into the upper storage enclosure and eight into the
lower storage enclosure. If you add a complete 48 pack, then 24 are installed in the upper
storage enclosure and 24 are installed in the lower storage enclosure.
2. After the first storage enclosure pair on a DA pair is fully populated with DDMs (48 DDMs
total), the next two storage enclosures to be populated will be connected to a new DA pair.
The DA cards are installed into the I/O enclosures that are located at the bottom of the
racks. They are not located in the storage enclosures.
3. After each DA pair has two fully populated storage enclosure pairs (96 DDMs total),
another storage enclosure pair is added to an existing storage enclosure pair.
When the -l parameter is added to these commands, additional information is shown. In the
next section, we show examples of using these commands.
For these examples, the target DS8800 has 2 device adapter pairs (total 4 DAs) and 4
fully-populated storage enclosure pairs (total 8 SEs). This means there are 128 DDMs and 16
array sites because each array site consists of 8 DDMs. In the examples, 10 of the array sites
are in use, and 6 are Unassigned meaning that no array is created on that array site. The
example system also uses full disk encryption-capable DDMs.
In Example 18-3, a listing of the storage drives is shown. Because there are 128 DDMs in the
example machine, only a partial list is shown here.
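As a hedged sketch of the kind of listing commands used here (the storage image ID is a placeholder and output is omitted):
dscli> lsddm -l IBM.2107-75XXXXX
dscli> lsarraysite -l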
There are various rules about CoD and these are explained in IBM System Storage DS8000
Introduction and Planning Guide, GC35-0515. This section explains aspects of implementing
a DS8800 that has CoD disk packs.
In many database environments, it is not unusual to have very rapid growth in the amount of
disk space required for your business. This can create a problem if there is an unexpected
and urgent need for disk space and no time to create a purchase order or wait for the disk to
be delivered.
With this offering, up to six Standby CoD disk drive sets (96 disk drives) can be
factory-installed or field-installed into your system. To activate, you logically configure the disk
drives for use. This is a nondisruptive activity that does not require intervention from IBM.
Upon activation of any portion of a Standby CoD disk drive set, you must place an order with
IBM to initiate billing for the activated set. At that time, you can also order replacement CoD
disk drive sets.
Contact your IBM representative to obtain additional information regarding Standby CoD
offering terms and conditions.
On the View Authorization Details screen, the feature code 0901 Standby CoD indicator is
shown for DS8800 installations with Capacity on Demand. This is illustrated in Figure 18-3 on
page 386. If instead you see 0900 Non-Standby CoD, it means that the CoD feature has not
been ordered for your machine.
In Example 18-6, you can see how the OEL key is changed. The machine in this example is
licensed for 80 TB of OEL, but actually has 82 TB of disk installed because it has 2 TB of CoD
disks. However, if you attempt to create ranks using the final 2 TB of storage, the command
will fail because it exceeds the OEL limit. After a new OEL key with CoD is installed, the OEL
limit will increase to an enormous number (9.9 million TB). This means that rank creation will
succeed for the last 2 TB of storage.
At that time, you can also order replacement Standby CoD disk drive sets. If new CoD disks
are ordered and installed, then a new OEL key will also be issued and should be applied
immediately. If no more CoD disks are desired, or the DS8800 has reached maximum
capacity, then an OEL key will be issued to reflect that CoD is no longer enabled on the
machine.
Capacity Magic can do the physical (raw) to effective (net) capacity calculations automatically,
taking into consideration all applicable rules and the provided hardware configuration
(number and type of disk drive sets).
Capacity Magic is designed as an easy-to-use tool with a single, main interface. It offers a
graphical interface that allows you to enter the disk drive configuration of a DS8800 and other
IBM subsystems, the number and type of disk drive sets, and the RAID type. With this input,
Capacity Magic calculates the raw and net storage capacities. The tool also has functionality
that lets you display the number of extents that are produced per rank, as shown in
Figure A-1.
Figure A-1 shows the configuration window that Capacity Magic provides for you to specify
the desired number and type of disk drive sets.
Note: Capacity Magic is a tool used by IBM and IBM Business Partners to model disk
storage subsystem effective capacity as a function of physical disk capacity to be installed.
Contact your IBM Representative or IBM Business Partner to discuss a Capacity Magic
study.
Disk Magic
Disk Magic is a Windows-based disk subsystem performance modeling tool. It supports disk
subsystems from multiple vendors, but it offers the most detailed support for IBM subsystems.
Currently, Disk Magic supports modeling of advanced-function disk subsystems, such as the
DS8000 series, DS6000, ESS, DS4000, DS5000, N series, and the SAN Volume Controller.
A critical design objective for Disk Magic is to minimize the amount of input that you must
enter, while offering a rich and meaningful modeling capability. The following list provides
several examples of what Disk Magic can model, but it is by no means complete:
Move the current I/O load to a different disk subsystem model.
Merge the current I/O load of multiple disk subsystems into a single DS8700.
Insert a SAN Volume Controller in an existing disk configuration.
Increase the current I/O load.
Implement a storage consolidation.
Increase the disk subsystem cache size.
With the availability of SSDs, Disk Magic supports modeling configurations that include SSD
ranks. In a z/OS environment, Disk Magic can provide an estimation of which volumes are
good SSD candidates and migrate those volumes to SSD in the model. In an Open Systems
environment, Disk Magic can model the SSDs on a server basis.
Note: Disk Magic is a tool used by IBM and IBM Business Partners to model disk storage
subsystem performance. Contact your IBM Representative or IBM Business Partner to
discuss a Disk Magic study.
HyperPAV analysis
Traditional aliases allow you to simultaneously process multiple I/O operations to the same
logical volume. The question is, how many aliases do you need to assign to the LCUs in your
DS8000?
It is difficult to predict the ratio of aliases to base addresses that is required to minimize IOSQ
time. If the ratio is too high, the number of volumes that can be addressed is limited because
of the 64 K device addressing limit. If the ratio is too low, you might see high IOSQ times,
which affect business service commitments.
HyperPAV can help improve performance by reducing IOSQ time. It also reduces the number
of aliases required in an LCU, which frees up more addresses for use as base addresses.
To estimate how many aliases are needed, a HyperPAV analysis can be performed using
SMF records 70 through 78; the results provide guidance about the number of aliases
required for each LCU. This analysis can be performed against IBM and non-IBM disk
subsystems.
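As a rough intuition for the kind of estimate such an analysis produces, the following sketch applies Little's law (concurrent I/Os are approximately the I/O rate multiplied by the response time) to a single LCU. The numbers and the headroom factor are assumptions for illustration; the actual analysis works from the SMF data itself.

def estimated_aliases(io_rate_per_sec, avg_response_ms, headroom=1.5):
    # Little's law: average concurrent I/Os = I/O rate x response time.
    # The headroom factor is an assumed allowance for peaks.
    concurrency = io_rate_per_sec * (avg_response_ms / 1000.0)
    return max(1, round(concurrency * headroom))

# Hypothetical LCU driving 2,500 I/Os per second at 2.4 ms average response time.
print(estimated_aliases(2500, 2.4))   # about 9 aliases in this simplified model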
Note: Contact your IBM Representative or IBM Business Partner to discuss a HyperPAV
study.
FLASHDA
FLASHDA is a tool written in SAS that helps you decide which datasets or volumes are the
best candidates to migrate from HDD to SSD.
The prerequisites for using this tool are APARs OA25688 and OA25559, which report DISC
time separately for read and write I/Os. The tool uses SMF 42 subtype 6 and SMF 74
subtype 5 records and provides a list by dataset, showing the amount of accumulated DISC
time for the read I/O operations during the selected time period.
If complete SMF records 70 through 78 are also provided, the report can be tailored to show
results by dataset for each disk subsystem. It can also show results by volume for each disk
subsystem.
Figure 18-4 lists the output of the FLASHDA tool. It shows the Dataset name with the Address
and Volser where the dataset resides and the Total DISC Time in milliseconds for all Read
I/Os. This list is sorted in descending order to show which datasets would benefit the most
when moved to an SSD rank.
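The ranking shown in Figure 18-4 is essentially an aggregation and sort of read DISC time by dataset. The following sketch illustrates that step with hypothetical input records; the real FLASHDA tool derives these values from SMF 42 subtype 6 and SMF 74 subtype 5 records in SAS.

from collections import defaultdict

# Hypothetical records: (dataset, volser, accumulated read DISC time in ms).
records = [
    ("PROD.DB2.TS001", "I10YY5", 1_250_000),
    ("PROD.DB2.TS001", "XAGYAA", 980_000),
    ("BATCH.SORTWK01", "XA2Y58", 310_000),
]

totals = defaultdict(int)
for dataset, _volser, read_disc_ms in records:
    totals[dataset] += read_disc_ms

# Highest accumulated read DISC time first: the best SSD candidates.
for dataset, read_disc_ms in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{dataset:<20} {read_disc_ms:>12,} ms")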
The next report, shown in Figure 18-5, is based on the preceding FLASHDA output and from
information extracted from the SMF records. This report shows the ranking of the Total Read
DISC Time in milliseconds by volume. It also shows the number of cylinders defined for that
volume and the serial number of the disk subsystem (DSS) where that volume resides.
Address  Volser   Total Rd DISC   # cyls  DSS
C1D2     I10YY5       6,592,229    32760  IBM-KLZ01
2198     XAGYAA       3,608,052    65520  IBM-MN721
783E     XA2Y58       3,032,377    65520  IBM-OP661
430A     Y14Y3S       2,654,083    10017  IBM-KLZ01
21B2     XAGYC6       2,648,126    65520  IBM-MN721
7A10     XA2Y76       2,389,512    65520  IBM-OP661
7808     XA2Y04       2,102,741    65520  IBM-OP661
22AA     XAGY84       1,458,696    65520  IBM-MN721
2193     XAGYA5       1,455,057    65520  IBM-MN721
2A13     X39Y60       1,444,708    65520  ABC-04749
21B5     XAGYC9       1,429,231    65520  IBM-MN721
2B60     X39Y12       1,387,409    65520  ABC-04749
Figure 18-5 Total Read DISC Time report by Volume
Using the report by dataset, you can select the datasets that are used by your critical
applications and migrate them to the SSD ranks.
If you use the report by volume, you can decide how many volumes you want to migrate to
SSD, and calculate how many SSD ranks are needed to accommodate the volumes that you
selected. A Disk Magic study can be performed to see how much performance improvement
can be achieved by migrating those volumes to SSD.
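The sizing step is simple capacity arithmetic, as the following sketch shows: convert the cylinder counts of the selected volumes to gigabytes and divide by the net capacity of one SSD rank. The rank capacity used here is a placeholder assumption; the actual value depends on the SSD size and RAID type.

import math

CYL_BYTES = 15 * 56_664   # one 3390 cylinder: 15 tracks x 56,664 bytes per track
RANK_NET_GB = 520         # assumed net capacity of one SSD rank (placeholder)

# Cylinder counts of the volumes selected from the report by volume.
selected_volume_cyls = [65_520, 65_520, 32_760, 10_017]

total_gb = sum(selected_volume_cyls) * CYL_BYTES / 1_000_000_000
ranks_needed = math.ceil(total_gb / RANK_NET_GB)
print(f"{total_gb:.1f} GB selected, {ranks_needed} SSD rank(s) needed")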
Note: Contact your IBM Representative or IBM Business Partner to discuss a FLASHDA
study.
Figure 18-6 shows the detailed analysis report by job name, sorted in descending order by
Disk Read Wait Total Seconds. This list can be used to select the data used by the jobs that
would benefit the most when migrated to SSD media.
Note: Contact your IBM Representative or IBM Business Partner to discuss an IBM i SSD
analysis.
IBM Tivoli Storage Productivity Center for Disk centralizes the management of networked
storage devices that implement the SNIA SMI-S specification, which includes the IBM System
Storage DS family, XIV®, N series, and SAN Volume Controller (SVC). It is designed to help
reduce storage management complexity and costs while improving data availability. It
centralizes the management of storage devices through open standards (SMI-S), enhances
storage administrator productivity, increases storage resource utilization, and offers proactive
management of storage devices.
For more information, see Managing Disk Subsystems using IBM TotalStorage Productivity
Center, SG24-7097. Also, refer to the following address:
http://www.ibm.com/servers/storage/software/center/index.html
The service executes a multipass overwrite of the data disks in the storage system:
It operates on the entire box.
It is a three-pass overwrite, which is compliant with the DoD 5220.22-M procedure for
purging disks (a minimal sketch of this pass sequence follows this list):
– Writes all sectors with zeros.
– Writes all sectors with ones.
– Writes all sectors with a pseudo-random pattern.
– Each pass reads back a random sample of sectors to verify that the writes are done.
With InitSurf, a fourth pass of zeros is performed.
IBM also purges client data from the server and HMC disks.
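The following Python sketch illustrates the pass sequence described above. It is an illustration only, not the code that the IBM service uses; for a safe demonstration, run it against a scratch image file rather than a real device.

import os
import random

SECTOR = 512

def secure_overwrite(device_path, sectors, sample=16):
    # Pass 1: zeros, pass 2: ones, pass 3: pseudo-random data.
    patterns = [b"\x00" * SECTOR, b"\xff" * SECTOR, None]
    with open(device_path, "r+b") as dev:
        for pattern in patterns:
            for s in range(sectors):
                data = pattern if pattern is not None else os.urandom(SECTOR)
                dev.seek(s * SECTOR)
                dev.write(data)
            dev.flush()
            # Read back a random sample of sectors to verify the pass.
            for s in random.sample(range(sectors), min(sample, sectors)):
                dev.seek(s * SECTOR)
                block = dev.read(SECTOR)
                if pattern is not None:
                    assert block == pattern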
Certificate of completion
After the overwrite process has been completed, IBM delivers a complete report containing:
A certificate of completion listing:
– The serial number of the systems overwritten.
– The dates and location the service was performed.
– The overwrite level.
– The names of the engineers delivering the service and compiling the report.
A description of the service and the report
For each data drive, by serial number:
– The G-list prior to overwrite.
– The pattern run against the drive.
– The success or failure of each pass.
– The G-list after the overwrite.
– Whether the overwrite was successful or not for each drive.
Drives erased
The erase service covers all disks in the storage system. Figure A-4 shows all the drives that
are covered by the Secure Data Overwrite Service.
For more information about available services, contact your IBM representative or IBM
Business Partner, or visit the following addresses:
http://www.ibm.com/services/
http://www.ibm.com/servers/storage/services/disk.html
For details about available IBM Business Continuity and Recovery Services, contact your IBM
Representative or visit the following address:
http://www.ibm.com/services/continuity
For details about educational offerings related to specific products, visit the following address:
http://www.ibm.com/services/learning/index.html
Select your country, and then select the product as the category.
http://www.ibm.com/systems/services/labservices/
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this book.
Other publications
These publications are also relevant as further information sources. Note that some of the
documents referenced here may be available in softcopy only.
IBM System Storage DS8000 Introduction and Planning Guide, GC27-2297
DS8000 Introduction and Planning Guide, GC35-0515
IBM System Storage DS: Command-Line Interface User's Guide, GC53-1127
System Storage Productivity Center Software Installation and User's Guide, SC23-8823
IBM System Storage Productivity Center Introduction and Planning Guide, SC23-8824
IBM System Storage DS8000 User's Guide, SC26-7915
IBM System Storage DS8000: Command-Line Interface User's Guide, SC26-7916
IBM System Storage DS8000 Host Systems Attachment Guide, SC26-7917
Online resources
These websites and URLs are also relevant as further information sources:
IBM Disk Storage Feature Activation (DSFA) website
http://www.ibm.com/storage/dsfa
Documentation for the DS8000
http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000ic/index.jsp
System Storage Interoperation Center (SSIC)
http://www.ibm.com/systems/support/storage/config/ssic
Security Planning website
http://publib16.boulder.ibm.com/doc_link/en_US/a_doc_lib/aixbman/security/ipsec_planning.htm
VPN Implementation, S1002693:
http://www.ibm.com/support/docview.wss?&rs=1114&uid=ssg1S1002693
High Density Storage Enclosure
8 Gbps Host Adapters
4-Port Device Adapter
This IBM® Redbooks® publication describes the concepts, architecture, and implementation
of the IBM System Storage® DS8800 storage subsystem. The book provides reference
information to assist readers who need to plan for, install, and configure the DS8800.
The IBM System Storage DS8800 is the most advanced model in the IBM DS8000 lineup. It
introduces IBM POWER6+-based controllers, with dual two-way or dual four-way processor
complex implementations. It also features enhanced 8 Gbps device adapters and host
adapters.
The DS8800 is equipped with high-density storage enclosures populated with 24 small form
factor SAS-2 drives. Solid State Drives are also available, as well as support for the Full Disk
Encryption (FDE) feature.
Its switched Fibre Channel architecture, dual processor complex implementation, high
availability design, and incorporated advanced Point-in-Time Copy and Remote Mirror and
Copy functions make the DS8800 system suitable for mission-critical business functions.
Host attachment and interoperability topics for the DS8000 series, including the DS8800, are
now covered in the IBM Redbooks publication IBM System Storage DS8000: Host Attachment
and Interoperability, SG24-8887.
IBM Redbooks are developed by the IBM International Technical Support Organization.
Experts from IBM, Customers and Partners from around the world create timely technical
information based on realistic scenarios. Specific recommendations are provided to help you
implement IT solutions more effectively in your environment.