FICON Planning and Implementation Guide
Bill White Wolfgang Fries Brian Hatfield Michal Holenia Dennis Ng Ewerson Palacio Rene Petry
ibm.com/redbooks
International Technical Support Organization FICON Planning and Implementation Guide September 2009
SG24-6497-02
Note: Before using this information and the product it supports, read the information in Notices on page ix.
Third Edition (September 2009) This edition applies to FICON features defined as CHPID type FC, supporting native FICON, High Performance FICON for System z (zHPF), and FICON Channel-to-Channel (CTC) on IBM System z10 and System z9 servers.
Copyright International Business Machines Corporation 2005, 2006, 2009. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
Notices
Trademarks

Preface
The team who wrote this book
Become a published author
Comments welcome

Part 1. Understanding FICON

Chapter 1. Introduction to FICON
1.1 Basic Fibre Channel terminology
  1.1.1 Node
  1.1.2 Port
  1.1.3 Switched fabric
  1.1.4 FC link
  1.1.5 World Wide Name
1.2 System z FICON
  1.2.1 FICON advantages over ESCON
  1.2.2 FICON operating modes and topologies
  1.2.3 Terms used with FICON Directors
  1.2.4 Terms used with the Input/Output architecture

Chapter 2. System z FICON technical description
2.1 Using the FICON architecture for I/O operations
  2.1.1 FICON initialization
  2.1.2 FICON I/O request
  2.1.3 Command mode
  2.1.4 Transport mode
  2.1.5 Modified Indirect Data Address Word
  2.1.6 Transport Indirect Data Address Word (TIDAW)
  2.1.7 Open exchange
  2.1.8 Buffer-to-buffer credit usage in FICON
  2.1.9 Extended distance FICON
2.2 System z FICON feature support

Chapter 3. FICON Director technical description
3.1 The role of the FICON Director
  3.1.1 Switched point-to-point configuration
  3.1.2 Cascaded FICON Director configuration
  3.1.3 Basic components of a FICON Director
  3.1.4 Basic functions of a FICON Director
3.2 Qualified FICON Directors
  3.2.1 IBM System Storage SAN b-type family components
  3.2.2 IBM System Storage SAN b-type family functions
  3.2.3 Cisco MDS 9500 Series
  3.2.4 Functions of the Cisco MDS 9500 Series

Part 2. Planning the FICON environment

Chapter 4. Planning the FICON environment
4.1 Structured approach for planning
4.2 Documentation
4.3 Requirements
4.4 Context
  4.4.1 Migrating from ESCON to FICON
  4.4.2 Moving to a high bandwidth environment (FICON Express8)
  4.4.3 Migrating from a single site to a multi-site environment
  4.4.4 Implementing a new FICON environment
4.5 Topologies and supported distances
  4.5.1 Point-to-point
  4.5.2 Switched point-to-point
  4.5.3 Cascaded FICON Directors
  4.5.4 Extended distance
4.6 Convergence
  4.6.1 Intermix fabric
  4.6.2 Fabric security
  4.6.3 High integrity
  4.6.4 Zoning
4.7 Management
  4.7.1 Command-line interface
  4.7.2 Element management
  4.7.3 Fabric management
  4.7.4 Storage management initiative specification
  4.7.5 System z management for FICON Directors
4.8 Virtualization and availability
  4.8.1 System z
  4.8.2 Control unit
  4.8.3 FICON Director
4.9 Performance
  4.9.1 Frame pacing
  4.9.2 Extended distance FICON
  4.9.3 Multiple allegiance
  4.9.4 Parallel Access Volume and HyperPAV
  4.9.5 Modified Indirect Data Address Word
  4.9.6 High Performance FICON
  4.9.7 Bandwidth management
  4.9.8 Traffic management
  4.9.9 Evaluation tools
4.10 Prerequisites and interoperability
4.11 Physical connectivity

Part 3. Configuring the FICON environment

Chapter 5. Configuring a point-to-point topology
5.1 Establishing a point-to-point topology
  5.1.1 Description of our environment
  5.1.2 Tasks and checklist
  5.1.3 Getting started

Chapter 6. Configuring a switched point-to-point topology
6.1 Establishing a switched point-to-point topology
  6.1.1 Description of our environment
  6.1.2 Tasks and checklist
  6.1.3 Getting started

Chapter 7. Configuring a cascaded FICON Director topology
7.1 Establishing a cascaded FICON Director topology
  7.1.1 Description of our environment
  7.1.2 Tasks and checklist
  7.1.3 Getting started

Chapter 8. Configuring FICON Directors
8.1 Configuration overview
  8.1.1 Configuration flowchart
  8.1.2 FICON Director management connectivity
8.2 Installing and using Data Center Fabric Manager
  8.2.1 Installing Data Center Fabric Manager
  8.2.2 Using the DCFM
8.3 Setting up a FICON Director
  8.3.1 Changing the IP addresses
  8.3.2 Enabling features (optional)
  8.3.3 Setting up a logical switch (optional)
  8.3.4 Configuring the Domain ID and Insistent Domain ID
  8.3.5 Setting up PBR, IOD, and DLS
  8.3.6 Enabling the Control Unit Port
  8.3.7 Changing port type and speed
  8.3.8 Changing buffer credits
  8.3.9 Setting up the Allow/Prohibit Matrix (optional)
  8.3.10 Setting up zoning (optional)
  8.3.11 Configuring Port Fencing (optional)
8.4 Setting up cascaded FICON Directors
  8.4.1 Setting up Inter-Switch Links
  8.4.2 Setting up a high integrity fabric
8.5 FICON Directors in an extended distance environment
8.6 FICON Directors in an intermixed environment
8.7 Channel swap
8.8 Backing up Director configuration data
8.9 Backing up DCFM configuration data

Part 4. Managing the FICON environment

Chapter 9. Monitoring the FICON environment
9.1 System Activity Display
9.2 Resource Measurement Facility monitoring
9.3 Introduction to performance monitoring
9.4 Introduction to Resource Measurement Facility
  9.4.1 Data gathering
  9.4.2 RMF reporting
9.5 RMF example reports
  9.5.1 DASD Activity Report
  9.5.2 I/O Queueing Report
  9.5.3 Channel Path Activity Report
  9.5.4 FICON Director Report
  9.5.5 ESS Link Statistics Report
  9.5.6 General performance guidelines
  9.5.7 Tape
9.6 DCFM performance monitor
  9.6.1 End-to-End Monitor
  9.6.2 Real-Time Monitor

Chapter 10. Debugging FICON problems
10.1 Preparing for problem determination activities
  10.1.1 Using the D M - Display system Matrix command
  10.1.2 Creating a CONFIG member
  10.1.3 Most common I/O-related problems
10.2 Problem determination approach for FICON
10.3 Environmental Record, Editing, and Printing program
10.4 FICON link incident reporting
10.5 FICON Purge Path Extended
10.6 Helpful z/OS commands
10.7 Node descriptor
  10.7.1 View node descriptors from DCFM
10.8 DCFM logs
10.9 DCFM data collection
10.10 Common z/OS FICON error message - interpretation

Part 5. Appendixes

Appendix A. Example: planning workflow
Appendix B. Configuration worksheets
Appendix C. Configuration and definition tools
  Hardware Configuration Definition
  Hardware Configuration Manager
  CHPID Mapping Tool
  I/O Configuration Program
  IOCDS statements and keywords used for FICON
Appendix D. Configuring the DS8000 for FICON
  Getting started
  DS8000 port layout
Appendix E. Using HMC and SE for problem determination information
  HMC and SE versions and user IDs
  Logging on to the HMC and SE
  Displaying System z server CPC details
  Finding a physical resource on a System z server
  Displaying individual FICON channel information
  Displaying the Analyze Channel Information panels
Appendix F. Useful z/OS commands
  Using z/OS commands for problem determination
  Displaying system status using D M=CPU
  Displaying additional z/OS information using D IPLINFO
  Displaying the I/O configuration using D IOS,CONFIG
  Displaying HSA usage using D IOS,CONFIG(HSA)
  Display units command D U
  D U,,,dddd,1
  D M=DEV(dddd)
  DEVSERV command - DS P,dddd,n
Appendix G. Adding FICON CTC connections

Related publications
  IBM Redbooks
  Other publications
  Online resources
  How to get Redbooks
  Help from IBM

Index
Notices
This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:
DB2, DS8000, ESCON, FICON, GDPS, Geographically Dispersed Parallel Sysplex, HyperSwap, IBM, Parallel Sysplex, PR/SM, Redbooks, Redbooks (logo), Resource Link, System Storage, System z10, System z9, System z, Tivoli, z/Architecture, z/OS, z/VM, z/VSE, z9
The following terms are trademarks of other companies: Disk Magic, and the IntelliMagic logo are trademarks of IntelliMagic BV in the United States, other countries, or both. Snapshot, NOW, and the NetApp logo are trademarks or registered trademarks of NetApp, Inc. in the U.S. and other countries. SAP, and SAP logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries. Java, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Active Directory, Convergence, Excel, Microsoft, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.
Preface
This IBM Redbooks publication covers the planning, implementation, and management of IBM System z FICON environments. It discusses the FICON and Fibre Channel architectures, terminology, and supported topologies. The book focuses on the hardware installation and the software definitions that are needed to provide connectivity to FICON environments. You will find configuration examples required to support FICON control units, FICON Channel-to-Channel (FCTC), and FICON Directors. It also discusses utilities and commands that are useful for monitoring and managing the FICON environment.

The target audience for this document includes IT Architects, data center planners, SAN administrators, and system programmers who plan for and configure FICON environments. The reader is expected to have a basic understanding of IBM System z10 and IBM System z9 hardware, HCD or IOCP, as well as a broad understanding of the Fibre Channel and FICON architectures.
experience with IBM Large Systems. His areas of expertise include System z server technical and customer support. Ewerson has been a System z hardware Top Gun course designer, developer, and instructor for the last five generations of IBM high-end servers.

Rene Petry is a Support Center Specialist in the System z Support Center in Germany. Since serving a three-year apprenticeship at IBM in Mainz, Rene has worked for nine years as an Account Customer Engineer for a large banking customer in Germany. His areas of expertise include System z servers, FICON, Fibre Channel Directors, IBM Storage products (disk and tape), and fiber optic infrastructures.

Thanks to the following people for their contributions to this project:

Bob Haimowitz
International Technical Support Organization, Poughkeepsie Center

Connie Beuselinck
IBM System z Product Planning, Poughkeepsie

Charlie Hubert, Brian Jacobs, Sam Mercier
IBM Vendor Solutions Connectivity (VSC) Lab, Poughkeepsie

Lou Ricci
IBM Systems Software Development, Poughkeepsie

Jack Consoli
Systems Engineer, Brocade Communications Systems, Inc.

Thanks to the authors of the previous editions of this book.

Authors of the first edition, FICON Implementation Guide, published in February 2005, were:
Hans-Peter Eckam, IBM Germany
Iain Neville, IBM UK

Authors of the second edition, FICON Implementation Guide, published in January 2006, were:
Hans-Peter Eckam, IBM Germany
Wolfgang Fries, IBM Germany
Iain Neville, IBM UK
Comments welcome
Your comments are important to us! We want our books to be as helpful as possible. Send us your comments about this book or other IBM Redbooks publications in one of the following ways:
- Use the online Contact us review Redbooks form found at:
  ibm.com/redbooks
- Send your comments in an e-mail to:
  [email protected]
- Mail your comments to:
  IBM Corporation, International Technical Support Organization
  Dept. HYTD Mail Station P099
  2455 South Road
  Poughkeepsie, NY 12601-5400
Part 1
Understanding FICON
This part introduces FICON and explains how it is exploited by the System z channel, the FICON Director, and the control unit. It also provides information about the FICON and Fibre Channel architectures and their uses in System z environments.
Chapter 1.
Introduction to FICON
The term Fibre Connection (FICON) represents the architecture as defined by the InterNational Committee for Information Technology Standards (INCITS) and published as ANSI standards. FICON also represents the names of the various System z server I/O features.

This chapter discusses the basic Fibre Channel (FC) and FICON terminology, as well as System z FICON support, benefits, operating modes, and topologies. Throughout this chapter we use the term FICON to refer to FICON Express8, FICON Express4, FICON Express2, and FICON Express, except when the function being described is applicable to a specific FICON feature type.
[Figure: Fibre Channel protocol levels - FC-4 Mapping and Upper Level Protocols (channels such as FICON/FC-SB-3/4, FC-FCP, IPI, SCSI, HIPPI, SB, IP; networks such as 802.2; audio/video and multimedia), FC-3 Common Services, FC-2 Signaling Protocol, FC-1 Transmission Protocol (8b/10b encode/decode), and FC-0 Interface/Media.]
The FICON channel architecture consists of the following Fibre Channel (FC) protocol levels:

FC-0 level - Interface and Media
The Fibre Channel physical interface (FC-0), specified in FC-PI, consists of the transmission media, transmitters, receivers, and their interfaces. The physical interface specifies a variety of media and associated drivers and receivers capable of operating at various speeds.

FC-1 level - Transmission Protocol
This is a link control protocol that converts each 8-bit byte into a 10-bit transmission character (8b/10b encoding); a unique bit pattern is assigned to each valid 8-bit value. Encoding is done by the N_Port when sending the character stream over the fiber, and decoding back to the 8-bit code is performed by the receiving N_Port.
FC-2 level - Signaling Protocol
Fibre Channel physical framing and signaling interface (FC-PH) describes the point-to-point physical interface, transmission protocol, and signaling protocol of high-performance serial links for support of higher-level protocols associated with HIPPI, IPI, SCSI, FC-SB-2/3/4 (FICON), and others.

FC-3 level - Common Services
This layer is intended to provide the common services required for advanced features.

FC-4 level - Mapping
The Upper Level Protocol (ULP) is part of FC-4 and describes IPI, FC-FCP (SCSI), HIPPI, SB, IP, and FICON. FICON was introduced with the Single-Byte Command Code Sets-2 (FC-SB-2) mapping protocol, and was later revised with FC-SB-3. In 2008 another revision was drafted, identified as FC-SB-4; this revision is used to support additional FICON functions. FC-SB-2, FC-SB-3, and FC-SB-4 architecture information and other FC documentation can be obtained from the following Web site:
http://www.t11.org

Fibre Channel provides the capability to build a configuration, as shown in Figure 1-2, that can operate in a point-to-point, arbitrated loop, or switched configuration.
[Figure 1-2: Fibre Channel topologies - a point-to-point connection between two N_Ports over an FC link, a switched fabric in which N_Ports attach to F_Ports and switches are interconnected by E_Ports over an ISL, and an arbitrated loop with NL_Ports.]
Note: System z FICON channels do not support the arbitrated loop topology.
1.1.1 Node
A node is an endpoint that contains information. It can be a computer (host), a device controller, or a peripheral device (such as a disk or tape drive). A node has a unique 64-bit identifier known as the Node_Name. The Node_Name is typically used for management purposes.
1.1.2 Port
Each node must have at least one port (hardware interface) to connect the node to the FC topology. This node port is referred to as an N_Port. Each N_Port has a Port_Name, which is a unique 64-bit identifier that is assigned at the time it is manufactured. The N_Port is used to associate an access point to a node's resources.

Other port types include:
E_Port - An expansion port is used to interconnect switches and build a switched fabric.
F_Port - A fabric port is used to connect an N_Port to a switch that is not loop-capable.
FL_Port - A fabric loop port is used to connect NL_Ports to a switch in a loop configuration.
G_Port - A generic port is a port that has not yet participated in a fabric.
L_Port - A loop port is a port in a Fibre Channel Arbitrated Loop (FC-AL) topology.
NL_Port - A node loop port is an N_Port with loop capabilities.
U_Port - A universal port is a port that has not yet assumed a specific function in the fabric. It is a generic switch port that can operate as an E_Port, an F_Port, or an FL_Port.
The port type is determined by the node's role in the topology, as shown in Figure 1-2 on page 5.
1.1.4 FC link
The port connects to the topology through an FC link. The FC link is a fiber optic cable that has two strands. One strand is used to transmit a signal and the other strand is used to receive a signal (see Figure 1-3 on page 7). An FC link is used to interconnect nodes and switches.
[Figure 1-3: An FC link - a two-strand fiber optic connection between two FC ports, in which the transmitter (Tx) of each port is connected to the receiver (Rx) of the other, providing an outbound and an inbound path.]
For example, an FC link (port-to-port connection) can be: Node-to-node link (N_Port-to-N_Port) Node-to-switch link (N_Port-to-F_Port) Switch-to-switch link (E_Port-to-E_Port)
[Figure: World Wide Port Names - each N_Port and F_Port in the configuration (for example WWPN12, WWPN21, WWPN23, WWPN24, and WWPN32) is identified by its own unique WWPN.]
On a System z server, the WWNN is constant for all FICON channels (ports); however, the WWPN is unique to each FICON channel on the server.
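To make the distinction concrete, the following minimal Python sketch (an illustration only; the helper name and the WWN values are invented) formats 64-bit World Wide Names in the colon-separated hexadecimal notation commonly used by SAN tools, with one WWNN for the server and a distinct WWPN per FICON channel.

```python
# Illustrative sketch (not IBM code): formatting 64-bit World Wide Names and
# grouping channel WWPNs under a single server WWNN. Sample values are made up.

def format_wwn(wwn: int) -> str:
    """Render a 64-bit WWN as the conventional colon-separated hex string."""
    raw = wwn.to_bytes(8, "big")
    return ":".join(f"{b:02x}" for b in raw)

# One WWNN for the server (the same for all of its FICON channels) ...
server_wwnn = 0x5005076400C12345                       # hypothetical value
# ... and a unique WWPN per FICON channel (port).
channel_wwpns = [0x5005076401A00010, 0x5005076401A00011]  # hypothetical values

print("WWNN :", format_wwn(server_wwnn))
for wwpn in channel_wwpns:
    print("WWPN :", format_wwn(wwpn))
```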
a. Maximum number of concurrent I/O operations can vary depending on the use of high performance FICON and FICON feature. b. The maximum link rate depends on the specific FICON feature. FICON Express8 provides up to 8 Gbps link rate. c. Standard unrepeated distance for most FICON LX features is 10 km; additional distance may be achieved with an RPQ on some FICON features. d. To avoid performance degradation at extended distances, FICON switches, directors, or channel extension devices may be required.
FICON-to-ESCON solutions
The FICON Express feature was the last FICON channel feature to support CHPID type FCV. The FICON Express channel in FCV mode allows access to control units with ESCON interfaces through a FICON Bridge adapter installed in a 9032 Model 5 ESCON Director. Many System z environments use only FICON Express4 features and later. The FICON Express features are supported on System z10 and System z9 servers only if carried forward on a server upgrade. Therefore, another solution is needed for FICON channel-to-ESCON control unit connectivity, such as the PRIZM FICON to ESCON Converter from Optica Technologies Inc. The PRIZM FICON to ESCON Converter is a channel-based appliance that converts native FICON protocol to native ESCON protocol, thereby allowing attachment of existing ESCON devices directly to FICON channels.
Unlike the Model 5 ESCON Director FICON Bridge solution, PRIZM supports native FICON channel protocol (CHPID type FC), and supports attachment to existing FICON fabrics. PRIZM provides strategic flexibility to those migrating from ESCON to FICON on their System z servers. For additional details, refer to the following Web site: http://www.opticatech.com/?page_id=19 The PRIZM solution is also offered through IBM GTS Site and Facilities Services: http://www.ibm.com/services/us/index.wss/itservice/igs/a1026000
[Figure: FICON topologies - point-to-point (a FICON channel connected directly to a FICON CU over an FC link), switched point-to-point (FICON channels and FICON CUs attached to a FICON Director over FC links), and cascaded FICON Directors (two FICON Directors connected by an FC link, with channels and CUs attached at either end).]
A FICON channel also supports channel-to-channel (CTC) communications. The FICON channel at each end of the FICON CTC connection, supporting the FCTC control units, can also communicate with other FICON control units, such as disk and tape devices.
Input/output channels
Input/output (I/O) channels are components of the System z Channel Subsystem (CSS) and z/Architecture. They provide a pipeline through which data is exchanged between servers, or between a server and external devices. z/Architecture channel connections are referred to as channel paths.
Control unit
The most common attachment to a System z channel is a control unit (CU) accessed via an Enterprise Systems CONnection (ESCON) or a FIbre CONnection (FICON) channel. The CU controls I/O devices such as disk and tape devices.
I/O devices
An input/output (I/O) device provides external storage, a means of communication between data processing systems, or a means of communication between a system and its environment. In the simplest case, an I/O device is attached to one control unit and is accessible through one channel path.
Channel-to-Channel
The Channel-to-Channel (CTC) function simulates an I/O device that can be used by one system control program to communicate with another system control program. It provides the data path and synchronization for data transfer between two channels. When the CTC option is used to connect two channels that are associated with different systems, a loosely coupled
multiprocessing system is established. The CTC connection, as viewed by either of the channels it connects, has the appearance of an unshared input/output device.
Channel Subsystem
The Channel Subsystem provides the functionality for System z servers to communicate with input/output (I/O) devices and the network. The CSS has evolved with the increased scalability of IBM System z servers. The CSS architecture provides functionality in the form of multiple Channel Subsystems (CSSs). Multiple CSSs can be configured within the same System z (up to four on z10 EC servers and z9 EC servers, and up to two on z10 BC servers and z9 BC servers).
Subchannels
A subchannel provides the logical representation of a device to the program. It contains the information required for sustaining a single I/O operation. A subchannel is assigned for each device defined to the logical partition. Note that Multiple Subchannel Sets (MSS) are available on System z10 and System z9 servers to increase addressability. Two subchannel sets are provided: subchannel set-0 can have up to 63.75 K subchannels, and subchannel set-1 can have up to 64 K subchannels.
Channel spanning
Channel spanning extends the Multiple Image Facility (MIF) concept of sharing channels across logical partitions to sharing channels across Channel Subsystems and logical partitions.
Chapter 2.
System z FICON technical description
Point-to-point
This is the initialization process for a point-to-point connection:
1. Channel N_Port Login process.
2. Request Node Identifier (RNID) function to provide specific neighbor node information.
3. Channel State-Change Registration (SCR) is sent to the control unit.
4. Channel Link-Incident-Record Registration (LIRR).
5. Process Login (PRLI) is used when channel and control unit support System z High Performance FICON (zHPF).
6. Establish Logical Path (ELP) to the control unit images that are stored in the Hardware Storage Area (HSA) when each channel image is initialized.
Switched point-to-point
This is the initialization process for a switched point-to-point connection:
1. Channel F_Port Login process.
2. Query Security Attributes (QSA) determines if a cascaded switch configuration is supported and if two-byte destination addresses can be used.
3. RNID function provides specific neighbor node information.
4. Channel SCR is sent to the fabric controller.
5. Channel LIRR to the management server.
6. Channel N_Port Login. The channel will also log in to the defined control unit N_Port. The channel performs this for each control unit N_Port link address defined on this channel path.
7. RNID to the control unit.
8. LIRR to the control unit.
9. PRLI is used when channel and control unit support zHPF.
10. ELP to the control unit images that are stored in the HSA.
[Figure: Channel initialization between a System z10 FICON channel, a FICON Director, and a control unit - FLOGI/PLOGI, QSA, and RNID to the Director, followed by PLOGI / LIRR / RNID and ELP / LPE to the control unit.]

Note: The HSA (Hardware Storage Area), also called the Hardware System Area, is a protected area of storage where the control blocks used by the channel subsystem are located.
Link initialization is described in FC-FS-2. When link initialization is complete, the N_Port or F_Port is in the active state. After link initialization is complete for an N_Port or F_Port, the port is considered to be operational as long as it remains in the active state. The link speed is also determined during this process, through auto-negotiation. The FICON link will start at the highest speed and work lower until both sides choose the same speed. FICON features support the following speeds:
- FICON Express2 will auto-negotiate to 2 Gbps or 1 Gbps.
- FICON Express4 will auto-negotiate to 4 Gbps, 2 Gbps, or 1 Gbps.
- FICON Express8 will auto-negotiate to 8 Gbps, 4 Gbps, or 2 Gbps.
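The following short Python sketch illustrates the outcome of that speed negotiation: each end offers the speeds it supports and the highest common speed wins. The function and the attached-port speed lists are assumptions for illustration; the per-feature speeds come from the list above.

```python
# Illustrative sketch of auto-negotiation: start at the highest supported speed
# and work downward until both ends match. Not channel or switch microcode.

SUPPORTED_GBPS = {
    "FICON Express2": [2, 1],
    "FICON Express4": [4, 2, 1],
    "FICON Express8": [8, 4, 2],
}

def negotiate(channel_speeds, attached_port_speeds):
    """Return the highest link data rate supported by both ends, or None."""
    common = set(channel_speeds) & set(attached_port_speeds)
    return max(common) if common else None

# A FICON Express8 channel attached to a 4 Gbps Director port negotiates 4 Gbps.
print(negotiate(SUPPORTED_GBPS["FICON Express8"], [4, 2, 1]))   # -> 4
# FICON Express8 has no 1 Gbps support, so a 1 Gbps-only port cannot connect.
print(negotiate(SUPPORTED_GBPS["FICON Express8"], [1]))         # -> None
```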
FICON registration
System z10 servers support platform and name server registration to the fabric. When registered, information about the channels connected to a fabric will be available to other nodes or SAN managers. The attributes that are registered for the System z10 servers include:

Platform information:
- World Wide Node Name (WWNN) - This is the node name of the platform, and it is the same for all channels belonging to the platform.
- Platform type (host computer).
- Platform name - The name includes Vendor ID, product ID, and vendor-specific data from the node descriptor.

Channel information:
- World Wide Port Name (WWPN).
- Port type (N_Port_ID).
- FC-4 types supported.
- Classes of service supported by the channel.

Platform registration is a service defined in the Fibre Channel - Generic Services - 4 (FC-GS-4) standard (INCITS (ANSI) T11 group).
[Figure: FICON I/O request flow - an application I/O request is passed by IOS through the UCB to the System z Channel Subsystem and the FICON channel (FC-4 protocol, FC-3 services, FC-2 framing, FC-1 encode/decode, FC-0 optics); the resulting FC-2 frames, carrying the FICON payload together with the frame header (S_ID and D_ID port addresses), CRC, and EOF, cross the Fibre Channel fabric to the FICON CU.]
A FICON I/O request flow is as follows:
1. An application or system component invokes an I/O request. The application or access method provides CCWs and additional parameters in the Operation Request Block (ORB).
2. The request is queued on the Unit Control Block (UCB), a control block in memory that describes an I/O device to the operating system. The Input Output Supervisor (IOS) will service the request from the UCB on a priority basis.
3. IOS issues a Start Subchannel (SSCH) instruction with the Subsystem Identification word (SSID) representing the device and the ORB as operands.
4. The ORB contains start-specific control information. It indicates whether the channel is operating in transport mode (zHPF support) or command mode, and also indicates the starting address of the channel program's channel command words (the channel program address, CPA).
5. The CSS selects the most appropriate channel and passes the I/O request to it.
6. The channel fetches from storage the Channel Command Words and associated data (for write operations).
7. The channel assembles the required parameters and fields of the FC-2 and FC-SB-3 or FC-SB-4 for the I/O request and passes them to the Fibre Channel adapter (which is part of the FICON channel). Device-level information is transferred between a channel and a control unit in SB-3 or SB-4 Information Units (IUs). Information units are transferred using both link-level and device-level functions and protocols. For example, when the channel receives an initiative to start an I/O operation, the device-level functions and protocols obtain the command and other parameters from the current CCW and insert them into the appropriate fields within a command IU. When the
command IU is ready for transmission, link-level functions and protocols provide additional information (for example, address identifiers and exchange ID in the frame header) and then coordinate the actual transmission of the frame on the channel path. 8. The Fibre Channel adapter builds the complete FC-FS FC-2 serial frame and transmits it into the Fibre Channel link. As part of building the FC-FS FC-2 frame for the I/O request, the FICON channel in FICON native (FC) mode constructs the 24-bit FC port address of the destination N_Port of the control unit, and the control unit image and device address within the physical CU.
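As an illustration of the destination addressing mentioned in step 8, the sketch below composes a 24-bit N_Port address in the layout commonly used by switched fabrics (switch domain in bits 23-16, port/area in bits 15-8, and a low byte of x'00' for fabric-attached N_Ports). That layout description and the sample domain and port values are assumptions for illustration, not taken from this text.

```python
# Illustrative sketch: composing a 24-bit destination port address (D_ID).
# Assumes the common domain/area layout for fabric-attached N_Ports; the sample
# Director domain and port values are hypothetical.

def n_port_id(domain: int, port: int, low_byte: int = 0x00) -> int:
    """Build a 24-bit FC port address from a two-byte link address plus low byte."""
    return (domain << 16) | (port << 8) | low_byte

d_id = n_port_id(0x65, 0x04)      # hypothetical Director domain x'65', port x'04'
print(f"D_ID = {d_id:06X}")        # -> 650400
```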
[Figure: FC-2 frame header fields used by FICON - the R_CTL field indicates whether the frame is a link control frame or a data frame (FICON), the S_ID and D_ID fields carry the 24-bit source and destination port addresses, and the Originator and Responder Exchange IDs identify the exchange pair used by an active command mode I/O operation.]
The FC header and its content determine how the frame is to be handled. The R_CTL field indicates the type of frame it is, as follows: Link Control Acknowledge Link Response Link Command Data Frame Video_Data Link Data Basic Link Service (BLS) Extended Link Service (ELS) Device_Data (Type - IP, IPI-3, SCSI, FCP, SB-3 or SB-4) SB-3 and SB-4 is used for FICON The FC header also contains the Exchange ID. In FICON command mode, two exchange IDs are used and are called an exchange pair. Command mode uses one exchange pair for one I/O operation. An open exchange represents an I/O operation in progress over the channel. A detailed description of the FC frame and FICON FC-SB-3 or SB-4 usage documentation can be obtained from the following Web site: http://www.t11.org
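The sketch below, assuming the standard 24-byte FC-2 frame header layout defined in FC-FS (R_CTL and D_ID in word 0, S_ID in word 1, TYPE in word 2, OX_ID and RX_ID in word 4), pulls out just the fields discussed here. The sample header bytes, including the TYPE value, are invented for illustration.

```python
# Illustrative parse of an FC-2 frame header (assumes the standard FC-FS layout;
# the sample bytes below are made up and not from a real trace).

import struct

def parse_fc2_header(header: bytes) -> dict:
    r_ctl = header[0]                                    # frame category
    d_id = int.from_bytes(header[1:4], "big")            # destination port address
    s_id = int.from_bytes(header[5:8], "big")            # source port address
    fc_type = header[8]                                  # FC-4 TYPE code (illustrative)
    ox_id, rx_id = struct.unpack(">HH", header[16:20])   # originator/responder exchange IDs
    return {"R_CTL": r_ctl, "D_ID": d_id, "S_ID": s_id,
            "TYPE": fc_type, "OX_ID": ox_id, "RX_ID": rx_id}

hdr = bytes([0x06, 0x65, 0x04, 0x00,   # R_CTL, D_ID
             0x00, 0x61, 0x0B, 0x00,   # CS_CTL, S_ID
             0x1B, 0x00, 0x00, 0x00,   # TYPE, F_CTL
             0x00, 0x00, 0x00, 0x01,   # SEQ_ID, DF_CTL, SEQ_CNT
             0x12, 0x34, 0xFF, 0xFF,   # OX_ID, RX_ID
             0x00, 0x00, 0x00, 0x00])  # parameter
print(parse_fc2_header(hdr))
```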
[Figure: Command mode I/O operation - z/Architecture CCWs (CCW1 through CCWn) are mapped into FC-SB-3 command and data IUs and carried as FC-2 frames between the FICON channel and the control unit; the control unit returns a command response (CMR), executes the commands at the device, transfers data to memory, and presents CE/DE status at the end of the chain.]
A fundamental difference with ESCON is the CCW chaining capability of the FICON architecture. ESCON channel program operation requires a Channel End/Device End (CE/DE) after executing each CCW. FICON supports CCW chaining without requiring a CE/DE at the completion of each CCW operation.
The ESCON channel transfers the CCW to the control unit and waits for a CE/DE presented by the control unit after execution of the CCW by the device (CCW interlock). After receiving CE/DE for the previous CCW, the next CCW is transferred to the control unit for execution. With a FICON channel, CCWs are transferred to the control unit without waiting for the first command response (CMR) from the control unit or for a CE/DE after each CCW execution. The device presents a logical end to the control unit after each CCW execution. After the last CCW of the CCW chain has been executed by the CU/device, the control unit presents a CE/DE to the channel. In addition, FICON channels can multiplex data transfer for several devices at the same time. This also allows workloads with low to moderate control unit cache hit ratios to achieve higher levels of activity rates per channel.
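A rough back-of-the-envelope model, shown below, illustrates why removing the per-CCW interlock matters as distance grows. The round-trip counts and the 10-microseconds-per-kilometer figure are simplifying assumptions for illustration, not measurements from this book.

```python
# Illustrative model only: ESCON-style interlock waits for CE/DE after every CCW,
# while FICON streams the chain, so link round trips stop scaling with chain length.

def round_trips(ccws: int, interlocked: bool) -> int:
    # interlocked: roughly one command/status round trip per CCW
    # streamed (FICON command mode): roughly one CMR plus final status for the chain
    return ccws if interlocked else 2

def protocol_delay_us(ccws: int, km: int, interlocked: bool) -> float:
    # assume ~10 microseconds of round-trip fiber latency per kilometre (5 us/km each way)
    return round_trips(ccws, interlocked) * km * 10

for mode, interlocked in (("ESCON-style interlock", True), ("FICON chaining", False)):
    print(mode, protocol_delay_us(ccws=6, km=20, interlocked=interlocked), "us")
```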
[Figure: Transport mode (zHPF) I/O operation - a channel program of four 4 KB read commands is sent to the control unit as a single FC-SB-4 command IU (a prefix command plus 64 bytes of data plus the four read commands), followed by the data transfer and a single status response, requiring far fewer frames and command exchanges than the equivalent command mode operation.]
The FICON Express8, FICON Express4, and FICON Express2 features support transport mode. A parameter in the Operation Request Block (ORB) is used to determine how the FICON channel will operate (command mode or transport mode). The mode used for an I/O operation also depends on the settings in the z/OS operating system.
The application or access method provides the channel program commands and parameters in the ORB. Bit 13 in word 1 of the ORB specifies how to handle the channel program in either command mode or transport mode.
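A minimal sketch of that mode test follows. It assumes z/Architecture bit numbering (bit 0 is the leftmost bit of the word) and that the bit being set selects transport mode; the constant and function names, and the sample word, are invented for illustration.

```python
# Illustrative sketch of the ORB mode selection described above.
# Assumption: "bit 13 of word 1", counted from the leftmost bit, selects
# transport mode (zHPF) when set; the names and sample values are made up.

ORB_WORD1_TRANSPORT_MODE = 1 << (31 - 13)   # bit 13 with bit 0 as the leftmost bit

def uses_transport_mode(orb_word1: int) -> bool:
    return bool(orb_word1 & ORB_WORD1_TRANSPORT_MODE)

print(uses_transport_mode(0x00040000))  # True  -> transport mode (TCWs)
print(uses_transport_mode(0x00000000))  # False -> command mode (CCWs)
```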
[Figure: Channel-to-control unit initialization for zHPF - PLOGI / LIRR, Send RNID, Accept RNID (indicating support for the FC-SB-4 process login), then PRLI is performed to determine transport mode support.]
A channel that supports the process login (PRLI) extended link service and transport mode operations will send a request to each control unit in its configuration that also supports the process login ELS. The PRLI is used to determine whether the control unit supports transport mode operations. zHPF provides additional capabilities such as being able to interrogate the CU before an actual missing interrupt occurs. Because zHPF uses TCWs (which do not time the channel operation in the same way as command mode), a mechanism must be used to prevent unnecessarily invoking the Missing Interrupt Handler (MIH) recovery actions. Transport mode provides an in-band method for z/OS to query the state of the I/O at the control unit without invoking error recovery or retry.
Normally when a channel does not get a CMR response from a started I/O operation within the MIH value, it invokes MIH recovery procedures, which can then terminate the I/O in progress. The transport mode operation called interrogate is used to query the state of the CU. (Note that the interrogate process does not affect the state of the primary operation, and is sent before the MIH value is exceeded.) The CU provides an interrogate response IU that contains extended status that describes the state of the primary operation. z/OS can then decide whether recovery is needed, or to reset MIH value for this I/O operation.
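The following sketch is only a conceptual illustration of that interrogate flow; the function names, the threshold, and the return strings are all assumptions. It shows how an in-band query of the control unit can decide between leaving the primary operation alone and invoking MIH recovery.

```python
# Conceptual sketch only (names and the 75% threshold are assumptions): query the
# CU in-band before the MIH interval expires and use the answer to decide whether
# recovery is needed, without disturbing the primary operation.

def check_missing_interrupt(elapsed_s, mih_limit_s, interrogate_cu):
    """interrogate_cu() returns True if the CU reports the primary operation active."""
    if elapsed_s < mih_limit_s * 0.75:
        return "keep waiting"                  # nowhere near the MIH value yet
    if interrogate_cu():                       # sent before the MIH value is exceeded
        return "operation still active - no recovery needed"
    return "no progress reported - invoke MIH recovery"

# Hypothetical example: the CU reports the I/O is still in progress at the device.
print(check_missing_interrupt(25, 30, lambda: True))
```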
[Figure: A CCW (command 04) with the IDAW flag set, with the ORB specifying 4 KB blocks - the IDAW address points to the data areas, and the corresponding MIDAW list describes the same transfer as discontiguous areas of varying length (for example 2 KB, 32 bytes, and 1 KB).]
TIDAWs are used when certain flag bits are set in the transport control word. Figure 2-8 illustrates an example of TIDAW usage.
[Figure 2-8: A TIDAW list - each TIDAW contains flags, a reserved field, a count, and a real address.]
TIDAWs and MIDAWs are used with z/OS extended format data sets that use internal structures (usually not visible to the application program) that require scatter-read or scatter-write operation.
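As an illustration of the idea, the sketch below builds a MIDAW/TIDAW-style scatter list for a record split across discontiguous storage areas. The field names, sizes, and sample addresses are assumptions for illustration, not the actual z/Architecture control block formats.

```python
# Illustrative sketch (field names and sizes are assumptions): a MIDAW/TIDAW-style
# list describes one data transfer as a series of (flags, count, real address)
# entries, so discontiguous storage areas of arbitrary length can be gathered
# into a single channel command.

from dataclasses import dataclass

@dataclass
class DataAddressWord:
    address: int          # real storage address of this fragment
    count: int            # number of bytes in this fragment
    last: bool = False    # flag marking the final entry in the list

def build_scatter_list(fragments):
    """fragments: iterable of (address, count) tuples covering one I/O buffer."""
    words = [DataAddressWord(addr, cnt) for addr, cnt in fragments]
    if words:
        words[-1].last = True
    return words

# Hypothetical extended-format record split across three storage areas
# (2 KB of data, a 32-byte suffix, then 1 KB of data), as in the earlier figure.
for w in build_scatter_list([(0x1000000, 2048), (0x2000040, 32), (0x3000800, 1024)]):
    print(w)
```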
accept one frame. At port initialization (login), buffer credit values are exchanged between two ports based on the number of buffers available for the ports. For example, the FICON Express8 feature has 40 buffer credits. This means that the FICON Express8 receiver is capable of storing up to 40 frames of data at any one time. The other FICON features support the following:
- FICON Express4 contains 212 buffer credits.
- FICON Express2 contains 107 buffer credits.
- FICON Express contains 64 buffer credits.
For a more complete explanation about how buffer credits work, refer to "Buffer credits" on page 39.
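The following small model illustrates how buffer-to-buffer credits throttle a transmitter: it may have at most the agreed number of frames outstanding, and each receiver ready signal (R_RDY) returns one credit. The class name and the simplified R_RDY handling are assumptions, not switch or channel microcode; the 40-credit value is the FICON Express8 figure quoted above.

```python
# Simplified illustration of buffer-to-buffer credit flow control (an assumption-
# level sketch, not real firmware).

class BBCreditLink:
    def __init__(self, credits: int):
        self.credits = credits        # value exchanged at login, e.g. 40 for FICON Express8
        self.available = credits

    def send_frame(self) -> bool:
        """Send one frame if a credit is available; otherwise the port must wait."""
        if self.available == 0:
            return False
        self.available -= 1
        return True

    def receive_r_rdy(self):
        """Receiver signals a freed buffer; one credit is returned."""
        self.available = min(self.credits, self.available + 1)

link = BBCreditLink(credits=40)
sent = sum(link.send_frame() for _ in range(45))
print(sent)                  # 40 frames go out, then the transmitter stalls
link.receive_r_rdy()
print(link.send_frame())     # one returned credit allows one more frame
```

If credits run out before any are returned, the link sits idle even though bandwidth is available, which is why sufficient buffer credits matter on long links.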
[Figure: IU pacing - after the channel sends its first burst of IUs, the CU responds that it has received the frames and that more IUs may be sent; the CU may modify the IU count at this time.]
At the start of every I/O, the IU count is reset to the default number. The IU pacing protocol, as defined, has the limitation that the first burst of IUs from the channel to the control unit may be no larger than a default value of 16. This causes a delay in the execution of channel programs with more than 16 IUs at long distances, because a round trip to the control unit is required before the remainder of the IUs can be sent by the channel, upon the receipt of the first command response, as allowed by the increased pacing count.

A channel may operate in the default IU pacing mode or in the persistent IU pacing mode. During initialization, the specific node descriptor information is exchanged between the channel and control unit. This information includes SB-4 support and will indicate whether the node supports concurrent enablement of the persistent IU pacing function.

When a channel that supports concurrent enablement of the persistent IU pacing function receives a node descriptor from a control unit with bit 7 of byte 1 equal to one (1), the channel enables persistent IU pacing for all currently established logical paths with the control unit. When a control unit that supports concurrent enablement of the persistent IU pacing function receives a node descriptor from a channel with bit 7 of byte 1 equal to one (1), the
control unit enables persistent IU pacing for all currently established logical paths with the channel. For logical paths that are established subsequent to the processing of the node descriptor, the persistent IU pacing bit in the ELP/LPE IU optional features field is used to enable or disable persistent IU pacing. Persistent IU pacing is a method for allowing a channel and control unit supporting the FC-SB-4 process login to retain a pacing count that can be used at the start of execution of a channel program. This may improve performance of long I/O programs at higher link speeds and long distances by allowing the channel to send more IUs to the control unit and eliminating the delay of waiting for the first Command Response. The channel retains the pacing count value, presented by the control unit in accordance with the standard, and uses that pacing count value as its new default pacing count for any new channel programs issued on the same logical path. Figure 2-10 illustrates an example of using persistent IU pacing for extended distance.
(Figure 2-10: persistent IU pacing for extended distance - at initialization the channel and the control unit (a DS8000 in this example) establish the logical path (ELP) and increase the IU pacing count, after which the channel can send up to x'FF' IUs of FC-4 commands and data, carried as FC-2 frames, before a control unit response is required; the CU may modify the IU count when it responds.)
Support is provided on the IBM System Storage DS8000 with the appropriate licensed machine code level, and is exclusive to System z10 servers.
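To get a feel for why persistent IU pacing matters at distance, the following sketch gives a rough, back-of-the-envelope estimate (it is not part of the FC-SB-4 standard or an IBM formula) of the wait that the default pacing count of 16 adds to a long channel program, compared with a retained persistent pacing count. It assumes roughly 5 microseconds of propagation delay per kilometer of fiber in each direction; the function name and parameters are illustrative only.

# Back-of-the-envelope estimate of the delay added when a channel program is
# larger than the IU pacing count and must wait one round trip for the first
# command response before the remaining IUs can be sent (illustrative only).

ONE_WAY_US_PER_KM = 5.0  # light in fiber travels roughly 1 km per 5 microseconds

def pacing_wait_ms(total_ius, distance_km, pacing_count=16):
    """Extra wait (ms) caused by IU pacing for one channel program at a given distance."""
    if total_ius <= pacing_count:
        return 0.0  # the whole program fits in the first burst, so there is no pacing wait
    round_trip_us = 2 * distance_km * ONE_WAY_US_PER_KM
    return round_trip_us / 1000.0

# Example: a 64-IU channel program over a 50 km link.
print(pacing_wait_ms(64, 50))                    # default pacing count of 16 -> about 0.5 ms wait
print(pacing_wait_ms(64, 50, pacing_count=255))  # retained persistent pacing count -> 0.0 ms

The benefit grows with both distance and the number of channel programs issued, which is why the improvement is most visible for long I/O programs at higher link speeds and long distances.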
Table 2-1 System z server FICON feature support (maximum number of channels)

Feature code  Channel feature             z9 EC   z9 BC (R07/S07)   z10 EC   z10 BC   Channels per feature   Channel increments (orderable)
2319          FICON Express LX            120     32/40             120      40       2                      2
2320          FICON Express SX            120     32/40             120      40       2                      2
3319          FICON Express2 LX           336     64/80 a           336 a    112 a    4                      4
3320          FICON Express2 SX           336     64/80 a           336 a    112 a    4                      4
3321          FICON Express4 10KM LX b    336     64/112            336      128      4                      4
3322          FICON Express4 SX           336     64/112            336      128      4                      4
3318          FICON Express4-2C SX        n/a     32/56             n/a      64       2                      2
3323          FICON Express4-2C 4KM LX    n/a     32/56             n/a      64       2                      2
3324          FICON Express4 4KM LX b     336     64/112            336      128      4                      4
3325          FICON Express8 10KM LX      n/a     n/a               336      128      4                      4
3326          FICON Express8 SX           n/a     n/a               336      128      4                      4

a. Carry forward on an upgrade
b. Effective October 27, 2009, withdrawn from marketing for System z10 servers
FICON Express8
The FICON Express8 features are exclusive to System z10 and are designed to deliver increased performance compared to the FICON Express4 features. The FICON Express8 features have four independent channels, and each feature occupies a single I/O slot, utilizing one CHPID per channel. Each channel supports 2 Gbps, 4 Gbps, and 8 Gbps link data rates with auto-negotiation. A link data rate of 1 Gbps is not supported.

All FICON Express8 features use Small Form Factor Pluggable (SFP) optics to permit each channel to be individually serviced in the event of a fiber optic module failure. The traffic on the other channels on the same feature can continue to flow if a channel requires servicing.

The FICON Express8 features are ordered in 4-channel increments and are designed to be added concurrently. This concurrent update capability allows you to continue to run workloads through other channels while the FICON Express8 features are being added. FICON Express8 CHPIDs can be defined as a spanned channel and can be shared among logical partitions within and across CSSs.

The FICON Express8 features are designed for connectivity to servers, switches, Directors, disks, tapes, and printers, and they can be defined as:
- CHPID type FC: Native FICON, High Performance FICON for System z (zHPF), and FICON Channel-to-Channel (CTC) traffic. Supported in the z/OS, z/VM, z/VSE, z/TPF, and Linux on System z environments.
- CHPID type FCP: Fibre Channel Protocol traffic for communication with SCSI devices. Supported in the z/VM, z/VSE, and Linux on System z environments.
FICON Express8 SX
Feature code 3326 is designed to support unrepeated distances up to 150 meters (492 feet) at 8 Gbps. Each channel supports 50/125 micrometer multimode fiber optic cable or a 62.5/125 micrometer multimode fiber optic cable terminated with an LC Duplex connector.
FICON Express4
The FICON Express4 features have four (or two, for the 2-port features) independent channels, and each feature occupies a single I/O slot, utilizing one CHPID per channel. Each channel supports 1 Gbps, 2 Gbps, and 4 Gbps link data rates with auto-negotiation.

All FICON Express4 features use Small Form Factor Pluggable (SFP) optics to permit each channel to be individually serviced in the event of a fiber optic module failure. The traffic on the other channels on the same feature can continue to flow if a channel requires servicing.

The FICON Express4 features are ordered in 4-channel (or 2-channel) increments and are designed to be added concurrently. This concurrent update capability allows you to continue to run workloads through other channels while the FICON Express4 features are being added. FICON Express4 CHPIDs can be defined as a spanned channel and can be shared among logical partitions within and across CSSs.

The FICON Express4 features are designed for connectivity to servers, switches, Directors, disks, tapes, and printers, and they can be defined as:
- CHPID type FC: Native FICON, High Performance FICON for System z (zHPF), and FICON Channel-to-Channel (CTC) traffic. Supported in the z/OS, z/VM, z/VSE, z/TPF, and Linux on System z environments.
- CHPID type FCP: Fibre Channel Protocol traffic for communication with SCSI devices. Supported in the z/VM, z/VSE, and Linux on System z environments.
Interoperability of 10 km transceivers with 4 km transceivers is supported, provided that the unrepeated distance does not exceed 4 km.
FICON Express4 SX
Feature code 3322 is designed to support unrepeated distances up to 270 meters (886 feet) at 4 Gbps. Each channel supports 62.5 micron or 50 micron multimode fiber optic cable terminated with an LC Duplex connector.
FICON Express4-2C SX
Feature code 3318, with two channels per feature, is designed to support unrepeated distances up to 270 meters (886 feet) at 4 Gbps. Each channel supports 62.5 micron or 50 micron multimode fiber optic cable terminated with an LC Duplex connector.
Note: FICON Express4-2C SX is only available on System z10 BC and System z9 BC servers.
FICON Express2
The FICON Express2 SX and FICON Express2 LX features have four independent channels, with each feature occupying a single I/O slot, utilizing one CHPID per channel and four
CHPIDs per feature, while continuing to support 1 Gbps and 2 Gbps link data rates. The link speed is auto-negotiated.

The FICON Express2 SX and LX features are ordered in 4-channel increments and are designed to be added concurrently. This concurrent update capability allows you to continue to run workloads through other channels while the FICON Express2 features are being added. FICON Express2 CHPIDs can be defined as a spanned channel and can be shared among logical partitions within and across CSSs.

The FICON Express2 features are designed for connectivity to servers, switches, Directors, disks, tapes, and printers, and they can be defined as:
- CHPID type FC: Native FICON, High Performance FICON for System z (zHPF), and FICON Channel-to-Channel (CTC) traffic. Supported in the z/OS, z/VM, z/VSE, z/TPF, and Linux on System z environments.
- CHPID type FCP: Fibre Channel Protocol traffic for communication with SCSI devices. Supported in the z/VM, z/VSE, and Linux on System z environments.

Note: FICON Express2 is supported on System z10 and System z9 servers only if carried forward on an upgrade.
FICON Express2 LX
Feature code 3319 is designed to support unrepeated distances up to 10 km (6.2 miles) at 2 Gbps. Each channel supports 9 micron single mode fiber optic cable terminated with an LC Duplex connector. Multimode (62.5 or 50 micron) fiber cable can be used with the FICON Express2 LX feature. The use of multimode cable types requires a mode conditioning patch (MCP) cable.
FICON Express2 SX
Feature code 3320 is designed to support unrepeated distances up to 500 meters (1640 feet) at 2 Gbps. Each channel supports 62.5 micron or 50 micron multimode fiber optic cable terminated with an LC Duplex connector.
FICON Express
The two channels residing on a single FICON Express feature occupy one I/O slot in the System z I/O cage. Each channel can be configured individually and supports a 1 Gbps link data rate.

The FICON Express features are designed for connectivity to servers, switches, Directors, disks, tapes, and printers, and they can be defined as:
- CHPID type FC: Native FICON and FICON Channel-to-Channel (CTC) traffic. Supported in the z/OS, z/VM, z/VSE, z/TPF, and Linux on System z environments.
- CHPID type FCP: Fibre Channel Protocol traffic for communication with SCSI devices. Supported in the z/VM, z/VSE, and Linux on System z environments.
- CHPID type FCV: FICON-converted traffic to ESCON control units through a FICON Bridge card in the ESCON Director. Supported in the z/OS, z/VM, z/VSE, z/TPF, and Linux on System z environments.

Note: FICON Express is supported on System z10 and System z9 servers only if carried forward on an upgrade.
FICON Express LX
Feature code 2319 is designed to support unrepeated distances up to 10 km (6.2 miles) at 1 Gbps. Each channel supports 9 micron single mode fiber optic cable terminated with an LC Duplex connector. Multimode (62.5 or 50 micron) fiber cable can be used with the FICON Express LX feature. The use of multimode cable types requires a mode conditioning patch (MCP) cable.
FICON Express SX
Feature code 2320 is designed to support unrepeated distances up to 860 meters (2822 feet) at 1 Gbps. Each channel supports 62.5 micron or 50 micron multimode fiber optic cable terminated with an LC Duplex connector. Table 2-2 summarizes the available FICON feature codes and their respective specifications. Notes: Mode Conditioning Patch (MCP) cables can be used with FICON features that can operate at a link data rate of 1 Gbps (100 MBps) only. FICON Express8 features do not support the attachment of MCP cables.
Table 2-2 FICON channel specifications

Feature code  Feature name              Connector type  Cable type                                     Link data rate                        Server
2319          FICON Express LX          LC Duplex       SM 9 µm (with MCP c: MM 50 µm or MM 62.5 µm)   1 or 2 Gbps a (1 Gbps with MCP)       z10 EC b, z10 BC b, z9 EC b, z9 BC b
2320          FICON Express SX          LC Duplex       MM 62.5 µm or MM 50 µm                         1 or 2 Gbps a                         z10 EC b, z10 BC b, z9 EC b, z9 BC b
3319          FICON Express2 LX         LC Duplex       SM 9 µm (with MCP c: MM 50 µm or MM 62.5 µm)   1 or 2 Gbps a (1 Gbps with MCP)       z10 EC b, z10 BC b, z9 EC b, z9 BC b
3320          FICON Express2 SX         LC Duplex       MM 62.5 µm or MM 50 µm                         1 or 2 Gbps a                         z10 EC b, z10 BC b, z9 EC b, z9 BC b
3318          FICON Express4-2C SX      LC Duplex       MM 62.5 µm or MM 50 µm                         1, 2, or 4 Gbps a                     z10 BC, z9 BC
3321          FICON Express4 10KM LX    LC Duplex       SM 9 µm (with MCP c: MM 50 µm or MM 62.5 µm)   1, 2, or 4 Gbps a (1 Gbps with MCP)   z10 EC, z10 BC, z9 EC, z9 BC
3322          FICON Express4 SX         LC Duplex       MM 62.5 µm or MM 50 µm                         1, 2, or 4 Gbps a                     z10 EC, z10 BC, z9 EC, z9 BC
3323          FICON Express4-2C 4KM LX  LC Duplex       SM 9 µm (with MCP c: MM 50 µm or MM 62.5 µm)   1, 2, or 4 Gbps a (1 Gbps with MCP)   z10 BC, z9 BC
3324          FICON Express4 4KM LX     LC Duplex       SM 9 µm (with MCP c: MM 50 µm or MM 62.5 µm)   1, 2, or 4 Gbps a (1 Gbps with MCP)   z10 EC, z10 BC, z9 EC, z9 BC
3325          FICON Express8 10KM LX    LC Duplex       SM 9 µm                                        2, 4, or 8 Gbps d                     z10 EC, z10 BC
3326          FICON Express8 SX         LC Duplex       MM 62.5 µm or MM 50 µm                         2, 4, or 8 Gbps d                     z10 EC, z10 BC

a. Supports auto-negotiate with neighbor node
b. Only supported when carried forward on an upgrade
c. Mode conditioning patch cables may be used
d. Supports auto-negotiate with neighbor node for link data rates 8, 4, and 2 Gbps only
Note: IBM does not support a mix of 50 µm and 62.5 µm fiber optic cabling in the same physical link.
Refer to Table 4-1 on page 67 for the allowable maximum distances and link loss budgets based on the supported fiber optic cable types and link data rates.
Chapter 3. FICON Director technical description
(Figure 3-1: FICON frames flowing between FICON channels through a FICON Director.)
The FICON channel supports multiple concurrent I/O connections. Each concurrent I/O operation can be to the same FICON control unit (but to different devices/control unit images), or to different FICON control units. A FICON channel uses the Fibre Channel communication infrastructure to transfer channel programs and data through its FICON features to another FICON-capable node, such as a storage device, printer, or System z server (channel-to-channel). See Chapter 2, System z FICON technical description on page 13 for more details. A FICON channel, in conjunction with the FICON Director, can operate in two topologies: 1. Switched point-to-point (through a single FICON Director to FICON-capable control units) 2. Cascaded FICON Directors (through two FICON Directors to FICON-capable control units)
The FICON channel determines whether the associated link is in a point-to-point or switched topology. It does so by logging into the fabric using fabric login (FLOGI ELS), and checking the accept response to the fabric login (ACC ELS). The FLOGI-ACC (accept) response indicates if the channel (N_Port) is connected to another N_Port (point-to-point) or an F_Port (fabric port). An example of a switched point-to-point topology is shown in Figure 3-2.
(Figure 3-2: switched point-to-point topology - System z server N_Ports connect to F_Ports on a FICON Director, which connects through other F_Ports to storage N_Ports; the channel determines the fabric connection type (point-to-point or switched) at fabric login, and a logical path is established between the channel and the control unit.)
Multiple channel images and multiple control unit images can share the resources of the FC link and the FICON Director, such that multiplexed I/O operations can be performed. Channels and control unit links can be attached to the FICON Director in any combination, depending on configuration requirements and available resources in the Director. Sharing a control unit through a FICON Director means that communication from a number of System z channels to the control unit can take place over one Director. A FICON channel can communicate with a number of FICON control units on different ports as well. The communication path between a channel and a control unit is composed of two different parts: the physical channel path and the logical path. In a FICON switched point-to-point topology (with a single Director), the physical paths are the FC links, or an interconnection of two FC links through a FICON Director, that provide the physical transmission path between a channel and a control unit. A FICON (FC-SB-3 or FC-SB-4) logical path is the relationship established between a channel image and a control unit image for communication during execution of an I/O operation and presentation of status.
Channels and control unit FC links can be attached to the FICON Directors in any combination, depending on configuration requirements and available Director ports. Sharing a control unit through a FICON Director means that communication from a number of channels to the control unit can take place either over one Director-to-CU link (in the case where a control unit has only one FC link to the FICON Director), or over multiple link interfaces (in the case where a control unit has more than one FC link to the FICON Director). A FICON channel can also communicate with a number of FICON control units on different ports in the second FICON Director. The communication path between a channel and a control unit is composed of two different parts: the physical channel path and the logical path. In a cascaded FICON Director topology, the physical paths are the FC links, interconnected by the Directors, that provide the physical transmission path between a FICON channel and a control unit. Figure 3-3 illustrates the configuration.
(Figure 3-3: cascaded FICON Director topology - FICON channels perform fabric login (FLOGI) and connect through two FICON Directors, joined by ISLs on E_Ports, to FICON control unit ports; the logical path spans both Directors.)
A FICON (FC-SB-3 or FC-SB-4) logical path is the relationship established between a FICON channel image and a control unit image for communication during execution of an I/O operation and presentation of status.
Backplane module
The backplane provides the connectivity within the chassis and between all system (I/O, switch, and control) modules, including the power supply assemblies and fan assemblies. The backplane gives you the capability to increase the number of ports by adding I/O modules.
Oversubscription

The term oversubscription implies that a port may not be able to drive at full speed due to limited backplane capacity. Oversubscription can only occur when simultaneous activities are present and backplane capacity is depleted. This usually applies to I/O modules with high port counts (for example, 48 ports operating at 8 Gbps).
I/O module
The main function of the I/O module is to provide the physical connectivity between FICON channels and control units that are being connected to the Director. Because the I/O module uses Small Form Factor Pluggable (SFP) ports instead of fixed optical port cards, port upgrades are simple and nondisruptive to other ports on the I/O module. Typically, any mix of short wavelength and long wavelength ports is allowed on any I/O module.
Switch module
The switch module on the FICON Director contains the microprocessor and associated logic that provide overall switching in the Director. The base configuration always contains two switch modules for redundancy reasons. If the active switch module fails or is uninstalled, the standby switch module automatically becomes the new active switch module. Failover occurs as soon as the active switch module is detected to be faulty or uninstalled. The active switch module performs the following control functions:
- Switch initialization
- High availability and switch drivers
- Name server
- SNMP
- Zoning
Typically, firmware upgrades can be performed concurrently with FICON Director operation.
Each FICON Director is identified by a Domain ID, a switch address, and a switch ID, each of which has a specific use in the System z server.

The switch address is a hexadecimal number used in defining the Director in the System z environment. The Domain ID and the switch address must be the same when referring to a Director in a System z environment. Each FICON Director in a fabric must have a unique switch address. The switch address is used in the HCD or IOCP definitions, and it can be any hex value between x'00' and x'FF'. The valid Domain ID range for the FICON Director is vendor-dependent. When defining the switch addresses in HCD or IOCP, ensure that you use values within the FICON Director's range.

The switch ID must be assigned by the user, and it must be unique within the scope of the definitions (IOCP and HCD). The switch ID in the CHPID statement is basically used as an identifier or label. Although the switch ID can be different than the switch address or Domain ID, we recommend that you use the same value as the switch address and the Domain ID when referring to a FICON Director.

The Domain ID is assigned by the manufacturer, and it can be customized to a different value. It must be unique within the fabric and insistent. Insistent means that the FICON Director uses the same Domain ID every time, even if it is rebooted. Insistent Domain IDs are required in a cascaded FICON Director environment. There is no need for a Domain ID to ever change in a FICON environment, and fabrics will come up faster after recovering from a failure if the Domain ID is insistent.
Port number
The port number is typically used in open systems environments to refer to a port on a Director. In some cases, port numbers and port addresses are the same. However, in many other cases they are different, depending on the Director port characteristics. Port addressing is platform- and vendor-specific.
Inter-Switch Link
Inter-Switch Links (ISLs) are the fiber optic cables that connect two Directors using expansion ports (E_Ports). ISLs carry frames originating from N_Ports, as well as frames that are generated within the fabric for management purposes. Multiple ISLs are usually installed between Directors to increase bandwidth and provide higher availability.
Note: FSPF (Fabric Shortest Path First) only provides basic utilization of ISLs with the same cost. It does not ensure that all ISLs are equally utilized, and it does not prevent congestion on an ISL even if the other ISLs between the Directors are not fully utilized.
Port swapping
Port swapping refers to the capability of a FICON Director to redirect traffic on a failed F_Port to a working F_Port without requiring a change in System z I/O configuration. In a Fibre Channel fabric, a failing port typically results in a cable being reconnected to another available port, followed by automatic discovery of the device through the name server. However, in a FICON environment, the control unit link address is defined in the channel configuration file (IOCP) of the System z server. Therefore, the FICON Director must ensure that the N_Port address for the control unit remains the same even after the cable is reconnected to a different switch port. After port swapping, the address assigned to the N_Port performing fabric login (FLOGI) by the alternate port should be the same as the one that would have been assigned by the original port. The original port will assume the port address associated with the alternate port.
Buffer credits
Buffer credits are used by the Fibre Channel architecture as a flow control mechanism to represent the number of frames a port can store. Buffer credits are mainly a concern when dealing with extended distances and higher speeds. However, a poorly designed configuration may also consume the available buffer credits and have a negative impact on performance. Buffer credit flow control is done at the link level (that is, between an N_Port and an F_Port) and is used by Class 2 and Class 3 traffic. It relies on the receiver-ready (R_RDY) signal to replenish credits. The total number of available buffer credits (BCs) that a receiver has is determined during the link initialization process (see Figure 3-4 on page 40).
(Figure 3-4: buffer credits exchanged at link initialization - in this example the channel N_Port receiver has 40 BCs, the Director F_Port receivers have 10 BCs each, and the control unit N_Port receiver has 5 BCs; the number of BCs can be different across ports.)
The sender monitors the receiver's ability to accept frames by managing a buffer credit count value of the receiver port. The buffer credit count value is decremented by the transmitter when a frame is sent (see Figure 3-5).
(Figure 3-5: the transmitting port maintains a buffer credit counter for the receiving port - in this example the control unit N_Port has 5 buffers - and decrements the counter each time it sends a frame.)
An R_RDY signal is returned for every received frame when a receive port buffer is made available. The buffer credit count value is incremented when an acknowledgment (R_RDY) is received (see Figure 3-6).
(Figure 3-6: the receiving port returns an R_RDY for each frame as a receive buffer becomes available, and the transmitter increments its buffer credit counter when the R_RDY arrives.)
If the buffer credit count is zero (0), frame transmission to the associated port is temporarily suspended. This flow control mechanism prevents data overruns.
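As a rough planning aid (not an IBM-provided formula), the number of buffer credits needed to keep a link fully utilized can be estimated from the amount of data in flight during one round trip: if fewer credits are available than full-size frames in flight, the transmitter stalls while waiting for R_RDY signals. The sketch below assumes about 5 microseconds of one-way propagation delay per kilometer of fiber and treats the link rate and frame size as parameters; the function name is illustrative only.

import math

ONE_WAY_US_PER_KM = 5.0  # approximate propagation delay in fiber

def buffer_credits_needed(link_gbps, distance_km, frame_bytes=2112):
    """Estimate the buffer credits needed to keep a link busy over one round trip."""
    round_trip_s = 2 * distance_km * ONE_WAY_US_PER_KM * 1e-6
    bits_in_flight = link_gbps * 1e9 * round_trip_s
    frames_in_flight = bits_in_flight / (frame_bytes * 8)
    return math.ceil(frames_in_flight)

# Example: an 8 Gbps link at 10 km carrying full-size (2112-byte) frame payloads.
print(buffer_credits_needed(8, 10))       # roughly 48 credits
# The same link carrying small 512-byte frames needs about four times as many credits.
print(buffer_credits_needed(8, 10, 512))  # roughly 196 credits

The second example illustrates why frame size matters as much as distance: smaller frames mean more frames in flight for the same amount of data, and therefore more credits consumed.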
It is important to note that not all applications utilize the same frame size. FICON frame size information can be tracked using the FICON Director Activity Report in RMF. For RMF to receive the information, the CUP must be configured and the FDR parameter in SYS1.PARMLIB must be set to Yes. The FICON frame size of 2112 bytes can be achieved through the use of zHPF with large sequential reads/writes, or with a very large block size such as 4 x 4 KB or 12 x 4 KB blocks per I/O.
(Figure 3-7: invalid attachment checking - System z servers, cascaded FICON Directors, and storage devices, with a miscabling event between the Directors; the numbers in the figure correspond to the steps listed below.)
The checking process proceeds in this way:
1. Channel initialization completes.
2. At some later time, miscabling occurs (for example, cables are swapped at a patch panel).
3. The Director port enters the invalid attachment state and notifies the System z server of the state change.
4. The System z server invokes channel logical path testing, reporting, isolation, and error recovery.
5. Any I/O requests to the invalid route are discarded until the error is corrected.
6. Data is protected.
Fabric binding
Fabric binding is a security feature that enables explicit control over which FICON Directors can be interconnected by preventing non-authorized Directors from merging, either accidentally or intentionally. This is done by manually defining the authorized Director in a fabric binding database. The fabric binding database contains the World Wide Node Name (WWNN) and Domain ID of the connecting FICON Director. The FICON Director that is allowed to connect must be added to the fabric binding database of the other FICON Director. Note: Activating fabric binding is a prerequisite for a cascaded FICON Director topology.
If a FICON Director has the insistent Domain ID feature enabled, and a new Director is connected to it (through an ISL) without the insistent Domain ID feature enabled, the new Director is segmented into a separate fabric and user data will not flow.
Zoning
Zoning is a method used in the FICON Director to enable or disable communication between different attached devices. A zone consists of a group of ports or WWNs. Connectivity is permitted only between connections to the Director that are in the same zone. There are two types of zoning:
- Name zoning, which permits connectivity between attached nodes based on WWN
- Port zoning, which restricts port connectivity based on port number
WWN zoning is typically used for open systems connections. Port zones should be used for FICON. Nodes that are not already defined to a zone are automatically put into the default zone. Conversely, default zone members are automatically removed whenever that member is added to an active zone.
Protocol intermix
An intermix of FICON and Fibre Channel Protocol (FCP) is supported by FICON Directors at the port level (each port can run either FICON or FCP). This means that the SAN infrastructure can be shared by both protocols, to create a commonly managed infrastructure.
Management capabilities
FICON Directors can be managed using different communication methods, such as:
- In-band management via a FICON channel (using the CUP)
- Out-of-band management via the following:
  - IP-based client/server management software
  - Simple Network Management Protocol (SNMP)
  - Trivial File Transfer Protocol (TFTP) to load firmware
  - Command-Line Interface (CLI) via Telnet
  - GUI provided by the Director vendor or other vendors
- CLI via the serial interface (RS232)
- Call home via a modem connection (for notification purposes)
Figure 3-8 on page 44 shows the various interfaces that can be used to manage the FICON Director.
(Figure 3-8: FICON Director management interfaces, including the serial interface.)
IBM and Cisco FICON Directors

IBM model   Product name                 Supported protocols   Protocol intermix   Number of ports   Port speed (Gbps)   ISL speed (Gbps)
2499-384    IBM System Storage SAN768B   FICON, FCP            Yes                 16 to 384         1, 2, 4, 8          2, 4, 8, 10
2499-192    IBM System Storage SAN384B   FICON, FCP            Yes                 16 to 192         1, 2, 4, 8          2, 4, 8, 10
2054-E11    Cisco MDS 9513               FICON, FCP            Yes                 12 to 528         1, 2, 4, 8          2, 4, 8, 10
2054-E07    Cisco MDS 9509               FICON, FCP            Yes                 12 to 336         1, 2, 4, 8          2, 4, 8, 10
2054-E04    Cisco MDS 9506               FICON, FCP            Yes                 12 to 192         1, 2, 4, 8          2, 4, 8, 10
Table 3-2 lists the available blade options and the associated feature codes.
Table 3-2 SAN768B and SAN384B blade options

Blade options                                           Feature code
FC 8 Gbps 16-ports a                                    3816
FC 8 Gbps 32-ports a                                    3832
FC 8 Gbps 48-ports a (does not support loop devices)    3848
FC 10 Gbps 6-ports                                      3870
FC Routing (16 + 2 ports)                               3850
a. All ports must be populated with SFPs (each with a minimum of 1 Gbps and a maximum of 8 Gbps).
Table 3-3 lists the available transceiver options and the associated feature codes.
Table 3-3 SAN768B and SAN384B transceiver options a

Transceiver options                         Feature code
4 Gbps Short Wave SFP Transceiver           2401
4 Gbps SW 8-Pack SFP Transceiver            2408
4 Gbps 10 km Long Wave SFP Transceiver      2411
4 Gbps 10 km LW 8-Pack SFP Transceiver      2418
4 Gbps 4 km Long Wave SFP Transceiver       2441
4 Gbps 4 km LW SFP Transceiver 8 Pack       2448
4 Gbps 30 km ELW SFP Transceiver            2480
10 Gbps FC SW XFP Transceiver               2510
10 Gbps FC LW XFP Transceiver               2520
1 GbE Copper SFP Transceiver                2550
8 Gbps Short Wave SFP Transceiver           2801
8 Gbps SW SFP Transceiver 8 Pack            2808
8 Gbps 10 km Long Wave SFP Transceiver      2821
8 Gbps 10 km LW SFP Transceiver 8-Pack      2828
8 Gbps 25 km ELW SFP Transceiver            2881

a. Maximum of 48 eight-packs or 255 singles
Table 3-4 lists the available licensed and miscellaneous options and associated feature codes.
Table 3-4 SAN768B and SAN384B licensed and miscellaneous options

Licensed and miscellaneous options          Feature code
SAN768B Inter-Chassis Cable Kit             7870
SAN768B Pair of Upgrade Power Supplies      7880
SAN768B Inter-Chassis License
SAN768B FICON w/CUP Activation
FCIP/FC High-Performance Extension
FICON Accelerator
Integrated Routing
Table 3-5 lists the available fiber optic options and associated feature codes.
Table 3-5 SAN768B and SAN384B fiber optic cable options a

Fiber optic cable options                             Feature code
Fiber Cable LC/LC 5 meter 50 µm multimode             5305
Fiber Cable LC/LC 25 meter 50 µm multimode            5325
Fiber Cable LC/LC 5 meter 50 µm multimode 4-Pack      5405
Fiber Cable LC/LC 25 meter 50 µm multimode 4-Pack     5425
Fiber Cable LC/LC 31 meter 9 µm single mode 4-Pack    5444
Fiber Cable LC/LC 31 meter 9 µm single mode           5731

a. Maximum of 96 four-packs or 255 singles
Table 3-6 lists the available rack mounting options and associated feature codes.
Table 3-6 SAN768B and SAN384B rack mounting options

Rack mounting options                          Feature code
First SAN768B installed in a 2109-C36 rack     9281
Second SAN768B installed in a 2109-C36 rack    9282
Standalone mode                                9284
DCFM (Data Center Fabric Manager) provides multiprotocol networking support for:
- Fibre Channel
- Fiber Connectivity (FICON)
- Fibre Channel over IP (FCIP)
- Fibre Channel Routing (FCR)
- Internet SCSI (iSCSI)
Inter-Chassis Link
An Inter-Chassis Link (ICL) is a licensed feature used to interconnect two System Storage SAN b-type family Directors. ICL ports in the core blades are used to interconnect the two Directors, potentially increasing the number of usable ports. The ICL ports are internally managed as E_Ports and use proprietary connectors instead of traditional SFPs. When two System Storage SAN b-type family Directors are interconnected by ICLs, each chassis still requires a unique Domain ID and is managed as a separate Director.
Traffic Isolation
Traffic Isolation (TI) zoning allows you to direct traffic to certain paths in cascaded FICON Director environments. For this reason, ISLs (E_Ports) must be included in the TI zone. TI zones are a logical AND with classic zoning (port or WWN zoning); TI zoning does not replace port or WWN zoning. TI zoning has two modes of operation:
- Failover enabled: traffic takes the path specified by the TI zone regardless of other paths that are available. If all paths in the TI zone fail, traffic is rerouted to another available path.
- Failover disabled: true traffic isolation is achieved. Traffic only takes the path specified by the TI zone. If all paths in the TI zone fail, traffic stops and is not rerouted to another path, even if other paths are available outside the TI zone.
Adaptive Networking
Adaptive Networking is a suite of tools and capabilities that enable you to ensure optimized behavior in the SAN. Even under the worst congestion conditions, the Adaptive Networking features can maximize fabric behavior and provide the necessary bandwidth for high-priority, mission-critical applications and connections. The following features are part of the Adaptive Networking suite:
- Traffic Isolation Routing
- QoS Ingress Rate Limiting
- QoS SID/DID Traffic Prioritization
Frame-level trunking
Frame-level trunking automatically distributes data flows over multiple physical Inter-Switch Link (ISL) connections and logically combines them into a trunk to provide full bandwidth utilization while reducing congestion. Frame-level trunking can:
- Optimize link usage by evenly distributing traffic across all ISLs at the frame level
- Maintain in-order delivery to ensure data reliability
- Help ensure reliability and availability even if a link in the trunk fails
- Optimize fabric-wide performance and load balancing with Dynamic Path Selection (DPS)
- Simplify management by reducing the number of ISLs required
- Provide a high-performance solution for network- and data-intensive applications
Up to eight ISLs can be combined into a trunk, providing up to 64 Gbps data transfers (with 8 Gbps ISLs).
- Change the existing path to a more optimal path
- Wait for sufficient time for frames already received to be transmitted
- Resume traffic after a failure
Default switch
Logical switch
Port numbering
A port number is a number assigned to an external port to give it a unique identifier within the FICON Director. Ports are identified by both the slot number in which the blade is located in the chassis and the port number on the blade (slot number/port number). Table 3-8 shows the port numbering scheme for each blade type.
Table 3-8 Port numbering scheme

Blade type   Blade port numbering
16-port      Ports are numbered from 0 through 15 from bottom to top.
32-port      Ports are numbered from 0 through 15 from bottom to top on the left set of ports, and 16 through 31 from bottom to top on the right set of ports.
48-port      Ports are numbered from 0 through 23 from bottom to top on the left set of ports, and 24 through 47 from bottom to top on the right set of ports.
6-port       Ports are numbered from 0 through 5 from bottom to top.
Any port can be assigned to any logical or default switch. When zero-based addressing is used, ports can have any number the user defines. Some forward planning, however, can simplify the task of numbering the ports and minimize service impacts.
Migration
The combination of logical switches and zero-based addressing makes for easy migration from IBM System Storage SAN b-type and m-type FICON Directors to a SAN768B or SAN384B platform. For each FICON Director to be migrated, a logical switch with its Domain ID and port numbers identical to those in the old FICON Director can be created. Therefore, you can perform a migration without making any changes to IOCP or HCD. Table 3-9 lists some of the functions discussed in this section and their associated hardware, software, and firmware prerequisites for the SAN768B and SAN384B.
Table 3-9 Hardware, software, and firmware prerequisites

Function                                                       Hardware prerequisites    Software and firmware prerequisites
Data Center Fabric Manager                                     All supported Directors   Refer to: ftp://ftp.software.ibm.com/common/ssi/pm/sp/n/tsd03064usen/TSD03064USEN.PDF
Inter-Chassis Link (ICL)                                       SAN768B or SAN384B        FOS 6.1
Traffic Isolation (TI)                                         SAN768B or SAN384B        FOS 6.0.0c and above
Virtual Fabric (logical partition)                             SAN768B or SAN384B        FOS 6.2
Quality of Service (QoS)                                       SAN768B or SAN384B        FOS 6.0.0c and above
Adaptive Networking (TopTalker, QoS, Ingress Rate Limiting)    SAN768B or SAN384B        FOS 6.0.0c
For more details regarding the features and functions provided by the System Storage SAN b-type family, refer to the following:
- IBM System Storage SAN768B Director Installation, Service and User's Guide, GA32-0574
- IBM System Storage SAN384B Director Installation, Service and User's Guide, GC52-1333
- Brocade Fabric OS Administrator's Guide, 53-1001185
- IBM System Storage SAN768B Web page: http://www-03.ibm.com/systems/storage/san/b-type/san768b/index.html
- IBM System Storage SAN384B Web page: http://www-03.ibm.com/systems/storage/san/b-type/san384b/index.html
Table 3-11 lists the available transceiver options and associated feature codes.
Table 3-11 Cisco MDS 9500 Series transceiver options

Transceiver options                                                          Feature code
FC 10 Gbps SRX2 Transceiver                                                  5030
FC Ethernet 10 Gbps SRX2 Transceiver                                         5032
FC 10 Gbps 10 km LWX2 SC Transceiver                                         5040
FC 10 Gbps 40 km ERX2 Transceiver                                            5050
FC 4 Gbps SW SFP Transceiver - 4 Pack                                        5434
FC 4 Gbps 4 km LW SFP Transceiver - 4 Pack                                   5444
FC 4 Gbps 10 km LW SFP Transceiver - 4 Pack                                  5454
FC 8 Gbps SW SFP+ Transceiver (requires OS 4.1.1 or later)                   5830
FC 8 Gbps SW SFP+ Transceiver - 4 Pack (requires OS 4.1.1 or later)          5834
FC 8 Gbps 10 km LW SFP+ Transceiver (requires OS 4.1.1 or later)             5850
FC 8 Gbps 10 km LW SFP+ Transceiver - 4 Pack (requires OS 4.1.1 or later)    5854
Tri-Rate SW SFP Transceiver                                                  5210
Tri-Rate LW SFP Transceiver                                                  5220
Gigabit Ethernet Copper SFP (feature code 2450 is required)                  5250
Table 3-12 lists the available hardware and software packages and associated feature codes.
Table 3-12 Cisco MDS 9500 Series hardware and software packages

Hardware and software packages            Feature code
MDS 9500 Enterprise Package               7021
MDS 9500 Fabric Manager Server Package    7026
MDS 9500 Mainframe Server Package         7036
Table 3-13 lists the available fiber optic cable options and associated feature codes.
Table 3-13 Cisco MDS 9500 Series fiber optic cable options

Fiber optic cable options                       Feature code
5 meter 50 µm LC/LC Fiber Cable (multimode)     5605
25 meter 50 µm LC/LC Fiber Cable (multimode)    5625
5 meter 50 µm LC/LC Fiber Cable - 4 Pack        5642
25 meter 50 µm LC/LC Fiber Cable - 4 Pack       5643
Table 3-14 lists the available rack mounting options and associated feature codes.
Table 3-14 Cisco MDS 9500 Series rack mounting options

Rack mounting options                   Feature code
Plant Install 9513 in 2109-C36 rack     9543
Field Merge 9513 in 2109-C36 rack       9544
Fabric Manager
The Fabric Manager is a set of network management tools that supports secure Simple Network Management Protocol version 3 (SNMPv3) and earlier versions. It provides a graphical user interface (GUI) that displays real-time views of your network fabric and lets you manage the configuration of Cisco MDS 9500 Series FICON Directors. Detailed traffic analysis is also provided by capturing data with SNMP. The captured data is compiled into various graphs and charts that can be viewed with any Web browser. The Cisco Fabric Manager applications are:
- Fabric Manager Server
- Device Manager
Device Manager
The Device Manager presents two views of a Director: 1. Device View displays a graphic representation of the switch configuration, and provides access to statistics and configuration information for a single switch. 2. Summary View displays a summary of E_Ports (Inter-Switch Links), F_Ports (fabric ports), and N_Ports (attached hosts and storage) on the Director.
Port numbering
A range of 250 port numbers is available for you to assign to all the ports on a Director. You can have more than 250 physical ports assigned, and the excess ports do not have port numbers in the default numbering scheme. You can have ports without a port number assigned if they are not in a FICON VSAN, or you can assign duplicate port numbers if they are not used in the same FICON VSAN. By default, port numbers are the same as port addresses, and the port addresses can be swapped. The following rules apply to FICON port numbers:
- Supervisor modules do not have port number assignments.
- Port numbers do not change based on TE ports.
- Because TE ports appear in multiple VSANs, chassis-wide unique port numbers should be reserved for TE ports.
Each PortChannel must be explicitly associated with a FICON port number. When the port number for a physical PortChannel becomes uninstalled, the relevant PortChannel configuration is applied to the physical port. Each FCIP tunnel must be explicitly associated with a FICON port number. If the port numbers are not assigned for PortChannels or for FCIP tunnels, then the associated ports will not come up.
Migration
The combination of VSANs and FICON port numbers makes for easy migration from previous-generation FICON Directors to a Cisco MDS 9500 infrastructure. For each FICON Director to be migrated, a FICON VSAN with its Domain ID and FICON port numbers identical to those in the old FICON Director can be created. Therefore, you can perform a migration without making any changes to IOCP or HCD.

For more details regarding the features and functions provided by the Cisco MDS 9500 Series, refer to the following:
- Cisco MDS 9513 for IBM System Storage: http://www-03.ibm.com/systems/storage/san/ctype/9513/index.html
- Cisco MDS 9509 for IBM System Storage: http://www-03.ibm.com/systems/storage/san/ctype/9509/index.html
- Cisco MDS 9506 for IBM System Storage: http://www-03.ibm.com/systems/storage/san/ctype/9506/index.html
For additional Cisco MDS 9500 Series-related information, go to:
http://www.cisco.com/en/US/partner/products/ps5990/tsd_products_support_series_home.html
Part 2
Chapter 4. Planning the FICON environment
Step 1 - Documentation
Current and accurate documentation (planning, design, installation, and final) is essential for a successful implementation and later operation of the FICON environment. However, because most information is not always readily available during the various phases of the planning process, there will be situations that require iterations of the process. To avoid delays during the planning and implementation process, ensure that the various types of documentation needed are always well-defined, complete, and current.
Step 2 - Requirements
Planning for any end-to-end solution is a process of iterations. It starts with gathering requirements and defining the desired outcome of the solution. After all relevant data has been collected, you will need to review and evaluate it, as well as outline the objectives of the solution. If you are migrating from an existing environment, you must also consider the current configuration, including the functions and features that are implemented and how they are being used.
Step 3 - Context
The output of the requirements-gathering phase should provide a solid base for mapping to one or more of the following scenarios:
- Migrating from ESCON to FICON
- Moving to an 8 Gbps environment (FICON Express8)
- Migrating from a single site to a multi-site environment
- Implementing a new FICON environment
Step 5 - Convergence
The I/O communication infrastructure (FICON fabric) should provide the ability to attain high availability and continuous operations in a predictable manner, with centralized management and scalability for future changes. This means that the switching platform must be robust enough to deliver such capability, as well as be ready to deploy new enhancements that improve throughput, security, and virtualization. Therefore, the FICON Director is the reference platform for FICON fabrics. Factors that influence the selection of the FICON Director platform are:
- Intermix of FICON and Fibre Channel Protocol (FCP), which enables you to consolidate the I/O infrastructure, maximize the utilization of the bandwidth, and consolidate the fabric management.
- Security, high integrity, and zoning, which are integral parts of the entire FICON infrastructure.
- Other relevant factors for the FICON fabric, such as power consumption, cooling, and space inside a data center.
Step 6 - Management
The selection of the systems management platforms and their integration with the operation support environment is very important to the continuous operation of the FICON infrastructure. The possibilities for FICON management include the following:
- Command-line interface
- Element management
- Fabric management
- Storage Management Initiative Specification
- System z management for FICON Directors
Step 8 - Performance
The selection of the right technology is determined by satisfying traffic patterns, further segmentation, and performance of the end-to-end solution. It is very important to understand the fan-in, fan-out, and oversubscription ratios for the entire FICON environment. Traffic management gives you the possibility to proactively manage any congestion inside the fabric and minimize latency. There are a number of design evaluation tools that can aid in this process.
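The fan-in, fan-out, and oversubscription ratios mentioned above can be sanity-checked with simple arithmetic: compare the aggregate bandwidth that could arrive at a point in the fabric (for example, the channels feeding a set of ISLs, or the channels sharing a control unit port) with the bandwidth available at that point. The short sketch below is a generic illustration with made-up link speeds and port counts; it is not a sizing tool and the function name is an assumption of this example.

def oversubscription_ratio(ingress_gbps, egress_gbps):
    """Ratio of potential inbound bandwidth to available outbound bandwidth at a fabric point."""
    return sum(ingress_gbps) / sum(egress_gbps)

# Example: sixteen 8 Gbps channel ports funneled over four 8 Gbps ISLs.
channels = [8.0] * 16
isls = [8.0] * 4
ratio = oversubscription_ratio(channels, isls)
print(f"ISL oversubscription ratio: {ratio:.1f}:1")  # 4.0:1

# Example: six 4 Gbps channels sharing one 4 Gbps control unit host adapter port (fan-in of 6).
print(f"CU port fan-in ratio: {oversubscription_ratio([4.0] * 6, [4.0]):.1f}:1")  # 6.0:1

A ratio greater than 1:1 is not necessarily a problem; whether it is acceptable depends on how often the attached channels drive their links at full speed at the same time, which is exactly the traffic-pattern question this planning step is meant to answer.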
4.2 Documentation
Planning, design, installation, and final documentation is a requirement for a successful implementation and later operation of the FICON environment. Based on best practices and requirements, we recommend the following types of documentation:

Planning documentation
- Current environment
- Requirements
- Available features and functions
- Decision points

Design documentation
- High level
- Detailed level

Installation documentation
- Detailed implementation plan

Final documentation
- Technical
- Operations
- Disaster recovery plan
- Change procedure

The number of documents that you must create depends on the complexity of the solution and the required level of detail. Creating and maintaining documentation throughout the planning, design, and implementation phases of your FICON infrastructure is an important part of running the environment without unplanned outages.

For the most part, the implementation, operations, and support (troubleshooting) responsibilities are owned by different groups. From a security viewpoint, there is usually a clear boundary between the responsibilities. Changes have to be monitored and audited to ensure that due diligence is observed; for example, execution must be strictly separated from auditing. And finally, you have to consider the planned and unplanned turnover of professionals at the workplace.

To reduce the risk of disruption to your FICON infrastructure, keep your documentation as current and accurate as possible and store a copy in a secure place, preferably in a different location in case of a disaster.
4.3 Requirements
It is important to clearly understand and accurately document all requirements. Such documentation will help you throughout the planning and design process. After the requirements are collected and documented, each one will have to be carefully evaluated. For existing environments (FICON or ESCON), it is also important to identify all equipment that is currently installed and how it is being used. This means physical and logical inventories should be carried out. The goal of the physical inventory is to identify and verify what is installed in your FICON or ESCON environment. Through onsite visual inspections and documentation, all ESCON or FICON channels, Directors and ports, control units, and cabling should be identified. Although the overall goal of the physical inventory is to identify and verify what you have physically installed, the goal of the logical inventory is to understand the functions that are being used and how they are defined. Determine whether the functions must be replaced by a FICON function. During the planning and design process, the requirements will be mapped to features, functions, topology, and technologies, which in turn will determine an approximate design of your FICON environment. This does not mean you will only have one solution, but rather multiple alternatives. The alternatives will have to be assessed using all the steps in this chapter.
4.4 Context
To define the context of your environment, you must take into consideration the existing and planned components (System z, FICON Director, and control units) that were determined and
documented in the requirements-gathering phase. You will then have to define your transport layer strategy, dependent on the upper layer requirements for high availability, disaster recovery, and business continuity. This all has to be synchronized with the communication layer requirements of your applications. After all relevant data is reviewed and analyzed, the output (alternatives) can be used to map to one or more of the following scenarios:
- Migrating from ESCON to FICON (changing the transport layer protocol)
- Moving to a high bandwidth environment (implementing FICON Express8)
- Migrating from a single site to a multi-site cascaded topology (building a highly available multi-site solution)
- Implementing a new FICON environment (building a completely new FICON environment)
The following sections provide additional information for each scenario.
A physical control unit that has multiple logical control units (specified by the CUADD parameter in IOCP/HCD) may be accessed more than once from the same FICON (FC) channel path, but the access is to different CUADDs (different logical control units) within the physical control unit. Configure the channel paths according to the quantity of resources available in the FICON channel and control unit. It is not possible to aggregate two or more ESCON channel paths that access the same logical control unit into only one FICON channel path. Even if there are the same number of paths from the operating system image to the disk subsystem in both ESCON and FICON configurations, the advantages of using FICON include:
- More concurrent I/Os to the same control unit
- Concurrent I/Os with other control units
- Link data rate (200 Mbps for ESCON versus up to 8 Gbps for FICON)
- Unrepeated distance from the channel (3 km for ESCON and 10 km for FICON)
For more detailed information, refer to:
- Chapter 2, System z FICON technical description on page 13
- Chapter 3, FICON Director technical description on page 33
- Implementing an IBM/Brocade SAN with 8 Gbps Directors and Switches, SG24-6116
- IBM System Storage SAN768B, TSD03037USEN
- Cisco MDS 9506 for IBM System Storage, TSD00069USEN
- Cisco MDS 9509 for IBM System Storage, TSD00070USEN
- Cisco MDS 9513 for IBM System Storage, TSD01754USEN
- Performance - I/O throughput and traffic patterns
- Availability - redundant ports, bandwidth, and logical paths
- End-to-end management
- Cabling infrastructure
- Distances between the components
- Control unit and device capabilities
- Interoperability of all components
The remaining sections in this chapter help you define a baseline for your FICON infrastructure, using best practices and industry standards that also align with vendor recommendations.
Table 4-1 Maximum allowable distances and link loss budgets

9 µm SM (10 km LX laser)
  1 Gbps: 10000 m (32736 ft), 7.8 dB
  2 Gbps: 10000 m (32736 ft), 7.8 dB
  4 Gbps: 10000 m (32736 ft), 7.8 dB
  8 Gbps: 10000 m (32736 ft), 6.4 dB
  10 Gbps ISL a: 10000 m (32736 ft), 6.0 dB

9 µm SM (4 km LX laser)
  1 Gbps: 4000 m (13200 ft), 4.8 dB
  2 Gbps: 4000 m (13200 ft), 4.8 dB
  4 Gbps: 4000 m (13200 ft), 4.8 dB
  8 Gbps: NA
  10 Gbps ISL a: NA

50 µm MM b (SX laser), link loss budget
  1 Gbps: 4.62 dB; 2 Gbps: 3.31 dB; 4 Gbps: 2.88 dB; 8 Gbps: 2.04 dB; 10 Gbps ISL a: 2.6 dB

50 µm MM c (SX laser), link loss budget
  1 Gbps: 3.85 dB; 2 Gbps: 2.62 dB; 4 Gbps: 2.06 dB; 8 Gbps: 1.68 dB; 10 Gbps ISL a: 2.3 dB

62.5 µm MM d (SX laser), link loss budget
  1 Gbps: 3.0 dB; 2 Gbps: 2.1 dB; 4 Gbps: 1.78 dB; 8 Gbps: 1.58 dB; 10 Gbps ISL a: 2.4 dB

a. Inter-Switch Link (ISL) between two FICON Directors
b. OM3: 50/125 µm laser-optimized multimode fiber with a minimum overfilled launch bandwidth of 1500 MHz-km at 850 nm as well as an effective laser launch bandwidth of 2000 MHz-km at 850 nm in accordance with IEC 60793-2-10 Type A1a.2 fiber
c. OM2: 50/125 µm multimode fiber with a bandwidth of 500 MHz-km at 850 nm and 500 MHz-km at 1300 nm in accordance with IEC 60793-2-10 Type A1a.1 fiber
d. OM1: 62.5/125 µm multimode fiber with a minimum overfilled launch bandwidth of 200 MHz-km at 850 nm and 500 MHz-km at 1300 nm in accordance with IEC 60793-2-10 Type A1b fiber
Note: IBM does not support a mix of 50 µm and 62.5 µm fiber optic cabling in the same physical link. Refer to 2.2, System z FICON feature support on page 25 for details about the FICON features available on the System z10 and System z9 servers.
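When validating a planned link against Table 4-1, two checks apply: the cable length must not exceed the maximum unrepeated distance for the fiber type and link data rate, and the measured (or calculated) end-to-end loss must stay within the link loss budget. The minimal sketch below uses the 9 µm single mode (10 km LX laser) figures from Table 4-1 as sample data; the dictionary layout and function name are illustrative assumptions, not part of the book.

# Sample entries from Table 4-1 for 9 micron single mode fiber (10 km LX laser):
# link data rate -> (maximum unrepeated distance in meters, link loss budget in dB)
SM_9UM_10KM_LX = {
    "1G": (10000, 7.8),
    "2G": (10000, 7.8),
    "4G": (10000, 7.8),
    "8G": (10000, 6.4),
}

def link_is_within_spec(rate, length_m, measured_loss_db, table=SM_9UM_10KM_LX):
    """Check a planned link against the distance and loss budget for its data rate."""
    max_distance_m, loss_budget_db = table[rate]
    return length_m <= max_distance_m and measured_loss_db <= loss_budget_db

# Example: an 8 Gbps LX link of 9.2 km with 5.9 dB of measured loss is within the budget,
# but the same fiber fails the check if its loss rises to 6.8 dB.
print(link_is_within_spec("8G", 9200, 5.9))  # True
print(link_is_within_spec("8G", 9200, 6.8))  # False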
4.5.1 Point-to-point
As illustrated in Figure 4-1, the maximum unrepeated distance of a FICON LX link using a single mode fiber optic cable is:
- 10 km (6.2 miles) for 1, 2, 4, and 8 Gbps LX links
- 20 km (12.43 miles) for 1 Gbps LX links with RPQ 8P2263
- 12 km (7.46 miles) for 2 Gbps LX links with RPQ 8P2263
(Figure 4-1: FICON point-to-point configurations - FICON links over 9 µm fiber directly connecting a System z server to FICON control units, with a maximum unrepeated distance of 10 km.)
Multiple ISLs between the FICON Directors may be required. Each ISL requires one fiber trunk (two fibers) between the FICON Directors. As illustrated in Figure 4-3, and using the example of 10 km ISL links between FICON Directors, the maximum supported distance of a FICON LX channel path using single mode fiber optic cables is:
- 30 km (18.64 miles) for 1, 2, 4, and 8 Gbps LX links
- 40 km (24.86 miles) for 1 Gbps LX links with RPQ 8P2263
- 32 km (19.88 miles) for 2 Gbps LX links with RPQ 8P2263
(Figure 4-3: cascaded FICON Directors - up to 10 km from the System z channel to the first Director, 10 km (or more with RPQ 8P2263) on the ISL between the Directors, and 10 km from the second Director to the FICON control units, all over 9 µm fiber.)
A FICON channel path through one or two FICON Directors consists of multiple optical fiber links. Each link in the channel path can be either long wavelength (LX) or short wavelength (SX), allowing the channel path to be made up of an intermix of link types as illustrated in Figure 4-4. This is possible because the FICON Director performs an optical-to-electrical-to-optical conversion (OEO) of the channel path as it passes through the Director. The transceiver types (LX or SX) at each end of a given link must match.
(Figure 4-4: intermixing link types across two sites - a channel path through one or two FICON Directors can combine LX links over single mode fiber and SX links over multimode fiber, because the Directors perform an optical-to-electrical-to-optical conversion; the transceiver types at both ends of each individual link must match.)
(Figure 4-5: FICON point-to-point and switched point-to-point maximum extended distance - up to 100 km using DWDM over single mode fiber between the two sites.)
By combining FICON cascaded Director technology with DWDM technology, the FICON implementation supports a maximum distance of 100 km between two sites, as illustrated in Figure 4-6 on page 71. DWDM technology also provides increased flexibility because multiple links and protocol types can be transported over a single dark fiber trunk.
(Figure 4-6: cascaded FICON Directors combined with DWDM - each site has a FICON Director with LX and SX control unit links, and the sites are connected through DWDM links of up to 100 km over single mode fiber.)
For more detailed information, refer to:
- Planning for Fiber Optic Links (ESCON, FICON, Coupling Links, and Open Systems Adapters), GA23-0367
- For the list of IBM qualified extended distance FICON Director solutions, see: http://www-03.ibm.com/systems/storage/san/index.html
- For the list of IBM qualified extended distance WDM solutions, see: http://resourcelink.ibm.com
4.6 Convergence
Some System z data center fabrics may require the use of a fully integrated multiprotocol infrastructure that supports mixed I/O and traffic types for simultaneous storage connectivity. A converged infrastructure enables you to consolidate multiple transport layers in a single physical interconnect. This consolidation provides the flexibility to build virtual server and storage environments supporting low-latency, high-bandwidth applications, and simplifies hardware and cabling. This approach also simplifies management and reduces power consumption, cooling, and space requirements inside a data center. Other relevant factors to consider are:
- High density port count (small form factor pluggable ports)
- Best energy effectiveness devices (per transferred byte)
- High availability (a redundant architecture of all components)
- High speed (switching capability - backplane throughput)
If you selected a point-to-point topology in the previous step, you can skip this section and go to 4.7, Management on page 75.
For more detailed information about these topics, refer to:
- Implementing an IBM/Brocade SAN with 8 Gbps Directors and Switches, SG24-6116
- IBM/Cisco Multiprotocol Routing: An Introduction and Implementation, SG24-7543
- IBM System Storage/Brocade Multiprotocol Routing: An Introduction and Implementation, SG24-7544
- Implementing an IBM/Cisco SAN, SG24-7545
Securing the long-distance SAN, by using:
- Physical separation
- Electronic separation
- Logical separation
The use of intrusion detection and incident response:
- Automatic event detection and management
- Event forensics
- Incident procedures to exercise due diligence
The use of fabric-based encryption
This technology enables you to encrypt specific or all disk data to prevent accidental data leaks when disk drives are replaced or disk arrays are refreshed, and to protect enterprises from data theft. Although this is not supported with FICON, you can use the encryption capabilities of System z.
4.6.4 Zoning
Zoning is a method used in the Director to restrict communication. There are two types of zoning: WWN zoning and port zoning. Zoning is used, for example, to separate FICON devices from each other, to restrict confidential data to specific servers, and to control traffic paths.

The default zone automatically puts nodes attached to a fabric that are not already in a zone into a single zone and allows any-to-any connectivity. Conversely, default zone members are automatically removed whenever that member is added to an active zone. The best practice recommendation is to disable the default zone and create port zones, even if the port zone includes all the ports in the fabric. By disabling the default zone and defining port zones, you ensure that the maximum number of zone members is not exceeded and that unintended connectivity is not inadvertently allowed.
WWN zoning permits connectivity between attached nodes based on WWN. A node can be moved anywhere in the fabric and it will remain in the same zone. WWN zoning is used in open systems environments and is not used for FICON. The IOCP effectively does what WWN zoning does in open systems environments. Adding WWN zoning on top of that adds an unnecessary layer of complexity, because every time a new channel card or control unit interface is added to the fabric, the WWN zoning would have to be modified.

Port zoning limits port connectivity based on port number. Port zoning is what should be used for FICON. Ports can easily be added to port zones even if there is nothing attached to the port. In purely FICON environments, you put all ports in a single port zone.

Define port zones for FICON and World Wide Name (WWN) zones for FCP. Disk and tape mirroring protocols are FCP even when the front end is FICON. We recommend that you separate this traffic from the FICON traffic, either with WWN zoning or virtual fabrics. In more complex fabrics, more zones can provide additional security. In intermixed environments, only the FICON ports are put in the port zone.

The FICON prohibit dynamic connectivity mask (PDCM) controls whether communication between a pair of ports in the Director is prohibited or allowed. If there are any differences between the zoning restrictions and the PDCM, the most restrictive rules are automatically applied (see the sketch following this list). Link incidents for FICON ports are reported only to registered FICON listener ports. The only exception to this is the loss-of-synchronization link incident.

To create a solid base for a SAN, use standard open systems best practices for creating WWN zones for the FCP traffic to ensure that the SAN is secure, stable, and easy to manage:
- Implement zoning, even if LUN masking is used.
- Persistently disable all unused ports to increase security.
- Use port WWN identification for all zoning configuration.
- Limit the aliases and names to allow maximum scaling.
- Use frame-based hardware enforcement.
- Use single initiator zoning with separate zones for tape and disk traffic if an HBA is used for both traffic types.
- Disable access between devices inside the default zone.
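The interplay between port zoning and the PDCM described above can be pictured as two independent filters in which the most restrictive answer wins: a pair of ports can communicate only if a zone allows it and the PDCM does not prohibit it. The sketch below is a conceptual illustration of that rule only; it does not represent any Director's firmware or configuration interface, and all names in it are made up for the example.

def connectivity_allowed(port_a, port_b, zones, prohibited_pairs):
    """A port pair may communicate only if some zone contains both ports
    and the PDCM does not prohibit the pair (most restrictive rule wins)."""
    in_same_zone = any(port_a in zone and port_b in zone for zone in zones)
    prohibited = frozenset((port_a, port_b)) in prohibited_pairs
    return in_same_zone and not prohibited

# Example: ports 4 and 9 share a FICON port zone, but the PDCM prohibits that pair.
ficon_zone = [{4, 5, 8, 9}]
pdcm = {frozenset((4, 9))}
print(connectivity_allowed(4, 5, ficon_zone, pdcm))  # True  - zoned together, not prohibited
print(connectivity_allowed(4, 9, ficon_zone, pdcm))  # False - the PDCM prohibition overrides the zone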
4.7 Management
We recommend that you manage the FICON fabric at various levels, where each level has a specific mission to fulfill. The simplest level is offered by the CLI; however, the CLI alone does not provide enough control over the entire environment. To operate the FICON infrastructure effectively, you need to automate the whole process. Together, all management levels create robust systems management and integrate with overall data center management.
Because all FICON Directors ship with the same default IP address, only one can be connected to the management network at a time. You will also need a sufficient number of IP addresses for each FICON Director, depending on the platform. If all FICON Directors will be connected to the same management network, they must have the same subnet mask and a common IP addressing range. For the recommended tasks executed from the CLI, refer to platform-specific documentation.
SMI-S does not require any modification or upgrade to already deployed fabrics. The implementation is vendor-specific: it can be a daemon running directly on the Director platform or a separate host application, and each fabric device needs its own specific agent. The advantage of using an end-to-end storage management platform is the possibility of automating the management of the whole storage network, including the FICON fabric. For more details about the IBM Storage Management Platform, refer to the following URL:
http://www.storage.ibm.com
When using the CUP on 256-port boxes, physical port addresses FE and FF cannot be used. Use the physical ports at these addresses for port swapping and for intermix ports instead. The following applies to FICON Directors with at least 256 ports in use:
- The FICON Director is seen by the System z server as a 256-port Director.
- The prohibit dynamic connectivity mask and port names apply to ports 0 through 253.
- Ports 254 and 255 are reserved.
For information about the CUP license, behavior, and addressing rules, refer to the FICON Director-specific documentation.
4.8.1 System z
The use of FICON channels gives you the ability to define shared channels that can request logical paths. However, control units can only allocate a limited number of logical paths in relation to the number of logical paths that FICON channels can request. In configurations
where channels request more logical paths than a control unit can allocate, you must manage logical paths to help ensure that the I/O operations take place. With proper planning, you can create I/O configuration definitions that allow control units in the configuration to allocate logical paths for every possible request made by channels in either of the following ways:
- Create a one-to-one correspondence between the logical path capacity of all control units in the physical configuration and the channels attempting to request them.
- Create I/O configurations that can exceed the logical path capacity of all or some of the control units in the physical configuration, but at the same time provide the capability to selectively establish logical connectivity between control units and channels as needed. This capability can be useful or even necessary in several configuration scenarios.
Several components of System z provide the capability for this virtual environment, as explained in the following sections.
Channel Subsystem
Each server has its own Channel Subsystem (CSS). The CSS enables communication from server memory to peripherals via channel connections. The channels in the CSS permit transfer of data between main storage and I/O devices or other servers under the control of a channel program. The CSS allows channel I/O operations to continue independently of other operations within the server. This allows other functions to resume after an I/O operation has been initiated. The CSS also provides communication between logical partitions within a physical server using internal channels.
I/O connectivity. This reduction in hardware requirements can apply to physical channels, Director ports, and control unit ports, depending on the configuration. MIF further improves control unit connection topologies for System z servers with multiple LPARs: it enables many LPARs to share a physical channel path, thereby reducing the number of channels and control unit interfaces required without a corresponding reduction in I/O connectivity.
Installations can take advantage of the MIF performance enhancements by:
- Understanding and utilizing I/O-busy management enhancements
- Planning for concurrent data transfer
- Understanding examples of MIF consolidation
Understanding and utilizing I/O-busy management enhancements
Before you can consolidate channels, you must be aware of the channel requirements of the particular control units you are configuring. The number of channels needed is independent of the number of LPARs on a system; it is based on the number of concurrent data transfers that the control unit is capable of. Although the recommended number of channels satisfies connectivity and performance requirements, additional channels can be added for availability.
Note that not all ESCON or FICON configurations benefit from the use of shared channels. There are some configurations where using an unshared channel is more appropriate, as explained here:
- When there are logical path limitations of the control unit. Although many ESCON control units can communicate with multiple LPARs at a time using multiple logical paths, some ESCON-capable control units can only communicate with one LPAR at a time.
- When the channel utilization of shared channels will be greater than that of unshared channels. If you use shared channels to consolidate channel resources, you must consider the channel utilization of all the channels you consolidate. The channel utilization of a shared channel will roughly equal the sum of the channel utilizations of each unshared channel that it consolidates. For example, consolidating three unshared channels running at 20%, 15%, and 10% utilization onto one shared channel results in roughly 45% utilization on the shared channel. If this total channel utilization is high enough to degrade performance, consider using unshared channels or a different mix of shared and unshared channels to meet your connectivity needs.
MIF allows you to use shared channels when defining shared devices. Using shared channels reduces the number of channels required, allows for increased channel utilization, and reduces the complexity of your IOCP input. You cannot mix shared and unshared channel paths to the same control unit or device.
Channel spanning extends the MIF concept of sharing channels across logical partitions to sharing channels across logical partitions and channel subsystems. When defined that way, the channels can be transparently shared by any or all of the configured logical partitions, regardless of the Logical Channel Subsystem to which the logical partition is configured (an illustrative IOCP statement is shown after the following list).
For more details see:
- System z Enterprise Class System Overview, SA22-1084
- z/Architecture Principles of Operation, SA22-7832
- System z Input/Output Configuration Program User's Guide for ICP IOCP, SB10-7037
- System z Processor Resource/Systems Manager Planning Guide, SB10-7153
- System z Support Element Operations Guide, SC28-6879
- IBM System z Connectivity Handbook, SG24-5444
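To make channel spanning concrete in IOCP terms, a spanned FICON channel is defined simply by listing more than one channel subsystem in the PATH keyword; the CHPID number and PCHID in this sketch are made-up illustrative values, not part of the configuration used in this book:
CHPID PATH=(CSS(0,1,2,3),40),SHARED,PCHID=100,TYPE=FC
Compare this with the CHPID statements in Example 6-1, where the same spanning syntax is used together with an explicit PARTITION list.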
For more information about these topics, refer to: IBM System Storage DS8000 Architecture and Implementation, SG24-6786 IBM System Storage DS8000 Introduction and Planning Guide, GC35-0515
Port characteristics
The port addressing mode is platform-specific. For more details, refer to Chapter 3, FICON Director technical description on page 33 and to the platform-specific documentation. For port addressing, use the worksheet provided in Appendix B, Configuration worksheets on page 251, to help document the physical layout of your FICON Director.
Ports automatically determine what they are connected to and at what speed they will operate. Manual settings should only be used with equipment that cannot auto-negotiate or log in properly. Typically, the only interfaces that require manual settings are older 1 Gbps interfaces and long-distance ISLs.
Before installing the FICON Director, consider where you will connect your FICON channels, based on your availability requirements:
- Distribute the channels among different port cards. If two channels are defined to access the same CU, plug the two fiber optic cables into different port cards in the Director.
- Distribute the CU ports among different port cards. If two paths are defined to attach the CU to the server through the Director, connect the two fiber optic cables to ports on different cards.
- Distribute the ISLs across different port cards. If two or more ISLs are to be attached between the Directors, connect the fiber optic cables to ports on different cards. For frame-based trunking, however, the ISLs need to be on the same ASIC.
Following these rules ensures that there is always one path available between the server and the CU in case of a defective port card in the Director.
If multiple FICON Directors are planned based on the number of required connections and paths, consider spreading those paths across different Directors.
Inter-switch link
The number of Inter-Switch Links (ISLs) must be carefully planned based on the existing and planned topology, workload characteristics, traffic patterns, and system performance:
- For availability, there should always be at least two ISLs between any two cascaded FICON Directors.
- A general guideline is to plan for one ISL for every four FICON channels that are routed over it. For example, 16 FICON channels routed between two cascaded Directors call for roughly four ISLs. The ISL speed must reflect the speed of the channels.
- Disk and tape should not share the same ISLs, and FICON and FCP traffic should not share the same ISLs. Use TI zones to direct traffic to different ISLs; this is vendor-specific.
- Plan on having enough ISL bandwidth so that the ISLs do not become congested when the maximum number of streaming tape devices are running, because that could result in tape traffic termination. Tape performance is adversely affected when tapes stop, and tapes are also more likely to break when stopping and starting.
Virtual ISL
When two Directors are connected, either an ISL connection between two logical switches or an extended ISL (XISL) connection between two base switches is created, depending on the configuration. When logical switches with the same Fabric ID are configured to use the XISL, the Director automatically creates a logical ISL (LISL) within the XISL. The LISL isolates traffic from multiple fabrics. XISL and LISL technology is not supported for FICON at this time. For Cisco, the equivalent of a virtual ISL is Trunking E_Ports, which can transport virtual SAN fabric information between Directors. For information regarding the support of virtual ISLs in FICON environments, refer to platform-specific documentation.
4.9 Performance
A single FICON channel can replace multiple ESCON channels. This means fewer channels, Director ports, and control unit ports, resulting in a simpler configuration to manage. Additionally, FICON supports the combined CTC and channel function. ESCON-to-FICON channel aggregation can be anywhere between 8:1 and 2:1. An understanding of the factors that affect performance must be established before deciding on your target configuration. For example, the characteristics of your workload have a direct impact on the performance you will realize in your FICON environment. In addition, keep your FICON channels at or below 50% utilization to maintain optimum response times. The following items require analysis:
- I/O rates
- Block sizes
- Data chaining
- Read/write ratio
For more information about these topics, read the following performance papers:
- Performance Considerations for a Cascaded FICON Director Environment, Version 0.2x, by Richard Basener and Catherine Cronin
- FICON Express2 Channel Performance, Version 1.0, GM13-0702
- IBM System z9 I/O and FICON Express4 Channel Performance, ZSW03005USEN
- IBM System z10 I/O and High Performance FICON for System z Channel Performance, ZSW03059USEN
I/O operations
Deploying FICON channels demands a further understanding of the I/O operations in your environment. The following terms are used to describe the different phases with respect to the measurements available for determining the duration of an I/O operation.
I/O supervisor queue time (IOSQ), measured by the operating system. The I/O request may be queued in the operating system if the I/O device, represented by the UCB, is already being used by another I/O request from the same operating system image (UCB busy). The I/O Supervisor (IOS) does not issue a start subchannel (SSCH) command to the Channel Subsystem until the current I/O operation to this device ends, thereby freeing the UCB for use by another I/O operation.
Pending time (PEND), measured by the channel subsystem. After IOS issues the start subchannel command, the channel subsystem may not be able to initiate the I/O operation if any path or device busy condition is encountered:
- Channel busy
- Director port busy
- Control unit adapter busy
- Device busy
Connect time, measured by the channel subsystem. This is the time that the channel is connected to the control unit, transferring data for the I/O operation.
Disconnect time. The channel is not being used for the I/O operation, because the control unit is disconnected from the channel, waiting for access to the data or to reconnect.
In the following sections we describe factors that will influence your FICON performance design.
You must also keep in mind that tape workloads have larger payloads in a FICON frame, while disk workloads might have much smaller ones. The average payload size for disk is often about 800 to 1500 bytes. By using the FICON Director activity reports, you can gain an understanding of your average read and write frame sizes on a port basis.
The buffer credit represents the number of receive buffers supported by a port for receiving frames; the minimum value is one (1). This value is used as a controlling parameter in the flow of frames over the link to avoid possible overrun at the receiver. The number of buffer credits does not affect performance until high data rates are attempted over long distances. If there are insufficient buffer credits, there may be a hard limit on the data rate that can be sustained. The number of buffer credits available for each port on the FICON Director is implementation-dependent. The optimal number of buffer credits is determined by the distance (frame delivery time), the processing time at the receiving port, the link data rate, and the size of the frames being transmitted.
There are four implications to consider when planning buffer credit allocation:
- Ports do not negotiate buffer credits down to the lowest common value. A receiver simply advertises buffer credits to the linked transmitter.
- The exhaustion of buffer credits at any point between an initiator and a target will limit overall performance.
- For write-intensive applications across an ISL (tape and disk replication), the buffer credit value advertised by the E_Port on the target cascaded FICON Director is the major factor that limits performance.
- For read-intensive applications across an ISL (regular transactions), the buffer credit value advertised by the E_Port on the local FICON Director is the major factor that limits performance.
Buffer credits are mainly a concern for extended distances; however, a poorly designed configuration may consume all available buffer credits and have an impact on performance. For example, assume you have an 8 Gbps FICON channel attached to two different control units running at lower link rates of 4 Gbps and 2 Gbps. Depending on the traffic pattern, the low speed device can consume all the available buffer credits of the 8 Gbps link, so that no more frames can be sent between the server and the FICON Director and the average link utilization goes down. For this reason, we recommend that you upgrade all low speed channels where possible, or physically separate low and high speed environments. For more information about this topic, refer to Buffer credits on page 39.
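As a rough illustration of the relationship between distance, link rate, and frame size (the figures below are approximations, not vendor specifications): light travels through fiber at roughly 5 microseconds per kilometer, so a 10 km link has a round-trip time of about 100 microseconds. At 8 Gbps, a full-size (approximately 2 KB) Fibre Channel frame is serialized in about 2.5 microseconds, so keeping that link fully utilized requires on the order of 100 / 2.5 = 40 buffer credits; at 2 Gbps the same link needs only about 10, and halving the average frame size roughly doubles the requirement in each case. This is why small average payloads over long distances exhaust buffer credits much faster than full frames do.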
Note, however, that Extended distance FICON does not extend the achievable physical FICON distances or offer any performance enhancements in a non-z/OS global mirror environment. For more information about this topic, refer to 2.1.9, Extended distance FICON on page 24.
When an I/O completes, the alias is returned to the pool for the logical control unit. It then becomes available to subsequent I/Os. The number of aliases required can be approximated by the peak I/O rates multiplied by the average response time. For example, if the average response time is 4 ms and the peak I/O rate is 2000 per second, then the average number of I/O operations executing at one time for that LCU during the peak is eight. Therefore, eight PAV-aliases should be all that is needed to handle the peak I/O rate for the LCU, along with all the other PAV-base addresses in the LCU. Depending on the kind of workload, there is a huge reduction in PAV-alias UCBs with HyperPAV. The combination of HyperPAV and EAV allows you to significantly reduce the constraint on the 64 K device address limit and in turn increase the amount of addressable storage available on z/OS. In conjunction with Multiple Subchannel Sets (MSS) on System z, you have even more flexibility in device configuration. For more information about this topic, refer to: Disk storage access with DB2 for z/OS, REDP-4187 http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD100311 Use the PAV analysis tool to achieve a better understanding of your workload profile and achieve a more precise split: http://www.ibm.com/servers/eserver/zseries/zos/unix/bpxa1ty2.html
In addition to reducing channel utilization, indirect addressing also has the effect of reducing the utilization of the control unit FICON channel. This effect is hidden, because the system does not have any way of measuring the utilization of the channel. The channel has to process every CCW, with or without data chaining, but indirect data addressing is transparent to the channel. MIDAWs help reduce channel utilization and affect the relative performance of different block sizes. Physical disk performance is irrelevant if the data permanently resides in cache, or if the channels are faster than the speed at which data can be sequentially read from the disks. However, block size can also affect FICON channel performance. In other words, in addition to eliminating the extended format performance penalty, using MIDAWs also eliminates the small block performance channel penalty, except to the extent that the record size affects the number of bytes that fit on a track. Small blocks are usually better for database transaction workloads, because the buffer hit ratio for random access tends to be higher with smaller blocks. MIDAWs affect the performance of update writes in the same manner as they affect reads. For more information about this topic, refer to: How Does the MIDAW Facility Improve the Performance of FICON Channels Using DB2 and Other Workloads?, REDP-4201
For more information about this topic, refer to: High Performance FICON for System z: Technical Summary for Customer Planning, ZSW03058USEN
Local switching
There are two Director architectures:
- Multi-stage switching with interconnected ASICs. This approach uses ASICs to both process and switch data traffic. Processing includes performance measurement, zoning enforcement, and other protocol-related tasks; switching is simply directing traffic from one port to another. If more user-facing ports are required, two or more ASICs are internally interconnected.
- Single-stage switching with one or more crossbars. ASICs process data on the port blades, but they do not switch the traffic. Switching is performed by one or more serial crossbars. This provides highly predictable performance across all ports and an easier design for higher port capacity. The bandwidth maximum for a Director chassis is the crossbar bandwidth.
There are three connection allocation possibilities:
- You can gain a local switching benefit by distributing storage, server, and ISL ports evenly across Director port blades, to minimize the impact of losing a port blade to a hardware fault or other issue.
- You can perform a slight modification by distributing storage, server, and ISL ports evenly across port blade ASICs. By going across port blades and alternating between the high and low ASICs, local switching occurs much more frequently.
- You can group storage, server, and ISL ports that will need to switch with each other within local switching groups. Ensure that array devices are mapped to a local switching group that also has the server ports that will require those devices.
Combinations of local and non-local traffic flows can occur simultaneously. The oversubscription rates can be optimized, depending on the design.
Frame-based trunking
Using frame-based trunking, data flows are automatically distributed over multiple physical Inter-switch link connections. Frame-based trunking combines up to eight ISLs into a single logical trunk, optimizes link usage by evenly distributing traffic across all ISLs at the frame
level, maintains in-order delivery to ensure data reliability, ensures reliability and availability even if a link in the trunk fails, and simplifies management by reducing the number of ISLs. Frame-based trunking is the most effective way to utilize ISLs for FICON traffic while maintaining in-order-delivery (IOD) of frames inside the fabric.
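On B-type (Brocade-based) Directors, for example, the ISLs and the trunk groups they have formed can typically be inspected from the CLI. The command names below assume Fabric OS and should be verified against the documentation for your platform:
islshow      (lists the ISLs with their negotiated speed and bandwidth)
trunkshow    (shows which ISLs have been grouped into frame-based trunks)
Checking these displays after cabling changes is a quick way to confirm that the ISLs intended to form a trunk actually did so, for example because they terminate on the same ASIC as noted earlier.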
Port-based DLS
DLS monitors link or trunk utilization to ensure load balancing. The choice of port-routing path is based only on the incoming port and destination domain. This feature is not supported for FICON Fabric.
Lossless DLS
Lossless Dynamic Load Sharing (DLS) allows you to rebalance trunk port paths without causing input/output (I/O) failures in cases where the end devices require in-order-delivery (IOD) of frames.
Port-channel
The Port-channel feature is an equivalent for link trunking on Cisco FICON Directors. The balancing is based either on a source-destination or source-destination-exchange ID policy. For FICON, only the source-destination policy is supported with guaranteed in-order-delivery of frames.
Traffic isolation
The term preferred path for FICON traffic has been replaced by the use of static routes and Traffic Isolation (TI) zones (which are not defined by the Fibre Channel standard). Traffic Isolation allows data paths to be specified. It is used to:
- Separate disk and tape traffic
- Select traffic for diverse ISL routes
- Guarantee bandwidth for mission-critical data
Traffic isolation can operate with failover enabled or disabled. Any traffic on ports that are not included in a TI zone follows the fabric shortest path first (FSPF) algorithm. A TI zone can be used for failover traffic from other TI zones when their ISLs fail, and from ports that are not in a TI zone. It is recommended that you either put all ports in TI zones or do not use TI zoning at all. General rules for traffic isolation zones are:
- An N_Port can be a member of only a single TI zone. If administrative domains (ADs) are configured, this checking is done only against the current AD's zone database.
- An E_Port can be a member of only a single TI zone.
If multiple E_Ports are configured that are on the lowest-cost route to a domain, load balancing is applied. Only source ports included in the zone are routed to zone E_Ports as long as other paths exist. If no other paths exist, the dedicated E_Ports are used for other traffic also.
Port fencing
Bad optics can cause errors to occur at a high enough frequency that error processing, and the sending and processing of RSCNs, can cause fabric performance problems. Port fencing allows the user to limit the number of errors that a port can receive by forcing a port offline when certain error thresholds are met. For FICON environments, port fencing is set only for CRC errors, invalid transmission words, and loss of sync. The recommended thresholds are:
- Five errors per minute for CRC errors
- 25 errors per minute for invalid transmission words
- Two errors per minute for loss of sync
These settings are high enough to ignore occasional errors and transient errors due to recabling, but low enough to stop problematic optics from causing fabric issues. By default, the alarms are set to fence the port, log an alert, send an e-mail, and set an SNMP trap. In most FICON environments, only fencing the port and logging the alert are desired.
Using the suite of evaluation tools, you can obtain the data and insight needed to effectively manage mainframe storage performance, optimize disk subsystem investments, and fine-tune z/OS tape workloads. The tools are listed and explained here:
- Processor evaluation - IBM zCP3000 study
- Channel evaluation - IBM FICON aggregation study. When planning to perform a capacity planning study for FICON channels, use the IBM System z Capacity Planning tool, zCP3000. zCP3000 is designed to support FICON channel aggregation (selecting channel candidates for aggregation onto FICON channels).
- Disk controller evaluation - IBM Disk Magic study. This is limited to the storage FICON channel port and port busy utilization numbers for workloads and configurations. You can model whether additional channels are needed.
- IBM zHPF evaluation study. This study is designed to quantify the amount of I/O in an environment that appears to be zHPF-eligible. With this information, the impact and benefit of zHPF can be assessed for a specific situation or workload.
For System z, the z/OS Resource Measurement Facility (RMF) can be used to analyze your environment. RMF provides online, interactive performance monitoring or, alternatively, long-term overview reporting with post-processor reports. Some reports that assist in your analysis are:
- Channel path activity report. This report identifies each channel path by identifier and channel path type and reports on the channel utilization by server and individual LPAR.
- Device activity report. This report provides activity and performance information for selected devices.
- I/O queuing activity report. This report provides information about the I/O configuration and activity rate, queue lengths, and percentages when one or more I/O components were busy.
- FICON Director activity report. This report provides information about Director latency caused by port contention.
- Shared device activity report. This report gives you an overall performance picture of disk and tape devices that are shared between MVS systems in a sysplex.
For further information, refer to z/OS RMF Report Analysis, SC33-7991.
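As an illustration, the post-processor reports named above can typically be requested with a batch job along the following lines. The data set name is a placeholder, and the report options (for example, FCD for the FICON Director activity report) should be verified against the RMF documentation for your z/OS level:
//RMFPP    EXEC PGM=ERBRMFPP
//MFPINPUT DD   DISP=SHR,DSN=hlq.SMF.RECORDS
//SYSIN    DD   *
  REPORTS(CHAN,IOQ,FCD,DEVICE(DASD))
/*
Running the job against SMF data that spans the measurement interval of interest produces the channel path, I/O queuing, FICON Director, and device activity reports described above.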
Directors providing connectivity for storage and servers must also have common services and settings to operate with stability and full performance. When interconnecting Directors, the name server, zoning database, FSPF, RSCNs, and time-out value information must be shared. Even if vendors follow the Fibre Channel standards, coordinating fabric information between heterogeneous platforms is still an issue for fabric interoperability. Vendor implementations of the same standard can be at different stages and include varying enhancements, which can lead to fabric connectivity issues. Native connectivity is required in the following situations:
- Migration between two platforms
- Port optimization within a fabric
- Temporary platform merge for an infrastructure change
- Dual-vendor policies
Issues introduced by native connectivity include:
- Management interface inconsistencies in fabric topology, node port states, zoning, and performance measurement
- Slow fabric recovery during adds, moves, and changes
- Failing fabric merges due to slight differences in standard implementations
- Name server synchronization issues
- Security features implementation
- Special vendor features to enhance fabric performance
E_Port connectivity in these situations is used to form a single fabric. Using Fibre Channel Routing (FCR) is an option to allow secure communication between hosts and storage in two or more separate fabrics that remain unmerged and independently managed. FCR is not supported by FICON; for this reason, any cascaded FICON Directors must be from the same vendor.
For System z details, refer to Chapter 2, System z FICON technical description on page 13. For FICON Director details, refer to Chapter 3, FICON Director technical description on page 33. See also:
- IBM System Storage Web site: http://www-03.ibm.com/systems/storage/san/index.html
- IBM System Storage SAN768B Fiber Backbone Interoperability Matrix
- IBM System Storage SAN384B Fiber Backbone Interoperability Matrix
- Cisco MDS 9506, 9509, 9513 for IBM System Storage Directors Interoperability Matrix
For more details regarding IBM System z qualification letters, see:
http://resourcelink.ibm.com
The supported standard media for System z FICON features are single mode OS1 and OS2, as well as multimode OM1, OM2, and OM3. With the increasing data rate on the link you need to utilize the latest fiber media types when designing the physical layer. New fiber media types provide more flexibility and better performance. A unified media platform is also very important inside the data center for simplified operations and troubleshooting. The highest flexibility and investment protection is with the single mode fiber. We recommend using the OS2 fiber type for System z FICON LX features, because of the lower attenuation compared to the OS1 fiber type. The System z FICON SX features support multimode fiber. For the best performance and investment protection for multimode, we recommend using the OM3 fiber type. There are higher bandwidth fiber cables on the market today, but their use could introduce a risk for interoperability and may not be supported by the equipment vendor. Note: OM1 and OM2 fiber types are not recommended for laser-based systems. When choosing the fiber media, we recommend using only qualified vendor fiber optic cabling components. Also, avoid mixing cabling components from different vendors even if they are all qualified. Qualified cabling components (connectors and cables) follow a standard color code. They are delivered with a measurement protocol directly from the manufacturer, which guarantees a high quality cable. This is important because the higher the bit rate on the channel, the more stringent the requirements for the fiber and the connectors. That means low attenuation, high return loss, and high number of cycles for reuse.
inside the cabinets. Failure to do so can cause multiple outages during the operations of the devices, when moving or laying cables. There are various types of high density connectors available that offer differing parameters such as robustness, ease of use, number of channels, and fiber link parameters. We recommend using a unified connector type for the entire infrastructure. The standard-based high density connectors are LC, MU, and MTP. The IBM high-density connector type is referred to as the small form factor duplex fiber optic connector for the data center (SCDC). The SCDC was especially developed for the mainframe fiber optic cabling environment, starting with ESCON. When designing a fiber infrastructure, also consider the polarity management. We recommend using the standard-based ANSI/TIA-568-B.1-7 Method C. This is the default for the IBM Fiber Transport System. Each cabling infrastructure needs to be carefully documented. You have to consider the naming conventions for each environment specifically. When creating a naming convention for System z, keep in mind the format used in the FICON environment, for ease of use and understanding by the operators. When planning the optical interfaces, pay extra attention to the power level alignment between the receiver and the transmitter on the channel between two components. Low power levels do not necessarily mean a bad cable, but rather the potential of port errors and resending of frames, which in turn will impact performance. The FICON Express8 features utilize the existing single-mode and multimode cable plants. However, the 8 Gbps channel is more sensitive to the condition of the cable plant. The cable plant must satisfy the industry standard specification to minimize connector loss and reflections. We strongly recommend you thoroughly analyze the fiber optic link to ensure that the cable plant specifications (total cable plant optical loss as well as connectors and splices return loss) are being met for that link length. The most common source of cable plant optical link problems is associated with the various connections in the optical link. Dust, dirt, oil, or defective connections may cause a problem with high speed channels such as 8 Gbps, although lower link data rates such as 1 Gbps, 2 Gbps, or 4 Gbps may not be affected. If you are experiencing excessive bit errors on a link (regardless of the data link rate), we recommend that you first clean and reassemble the connections. Refer to IBM Fiber Optic Cleaning Procedure, SY27-2604 for the procedure and materials required. The cleaning is best performed by skilled personnel. The cleaning procedure may need to be performed more than once to ensure that all dust, dirt, or oil is removed. For more detailed information about fiber optic link testing and measurement, refer to System z Maintenance Information for Fiber Optic Links (ESCON, FICON, Coupling Links, and Open System Adapters), SY27-2597. For additional information, refer to the following URLs: http://www.tiaonline.org http://www.cenelec.eu http://www.ibm.com/services/siteandfacilities For an example of how you can use this decision-making process when planning a FICON environment, we provide a walkthrough of our FICON environment with a cascaded FICON Director topology in Appendix A, Example: planning workflow on page 243.
Part 3
Chapter 5. Configuring a point-to-point topology
Figure 5-1 FICON point-to-point configuration: z10 EC server SCZP201, LPAR SC33 (A29) running z/OS V1R10, FICON Express8 LX channel in CSS2 (PCHID 5A3, CHPID 7F), connected to a DS8000 (LX host adapter ports 0132 and 0331) containing CU D000 (devices D0xx) and CU D100 (devices D1xx). All cable connectors are LC Duplex type.
The z10 EC server (SCZP201) has one LPAR defined and activated. The system name running in this partition is SC33, which corresponds to the LPAR name A29. The operating system running in this partition is z/OS V1R10. As a standard feature of the z10 servers, the zHPF protocol will be used for data transfer to the DS8000 storage device. Two FICON channels are defined as shared and spanned across CSS2 and CSS0 (only CSS2 is shown). The channels are FICON Express8 LX features (FC 3325), using PCHID 5A3 (CHPID 7F) and PCHID 5D3 (CHPID 83). An IBM System Storage DS8000 will be connected to the z10 server via CHPID 7F and CHPID 83. The two host adapters installed in the DS8000 are longwave (LX) laser. Port numbers 0132 and 0331 in the DS8000 are used to connect to the server. Two logical control
units (D000 and D100) are defined, which have devices D000-D0FF and D100-D1FF assigned. The DS8000 will have the zHPF feature enabled. A maximum unrepeated distance of 10 km (6.2 miles) is supported by the longwave laser (LX) feature when using 9 µm single mode (SM) fiber optic cable. The fiber optic cables have an LC Duplex connector at both ends to connect to the z10 FICON Express8 channels and to the DS8000 host adapters.
Tasks
Figure 5-2 shows the main steps required to define and activate a FICON point-to-point configuration.
- Verification checklist: Follow the verification checklist to ensure that all hardware and software prerequisites are met. Go to Verification checklist on page 101.
- Define the channel, CU, and devices: Information about defining the channel paths, control units, and devices is given in Defining the channel, CU, and storage device in IOCDS on page 103.
- Configure the storage CU: The configuration tasks for the DS8000 storage system are described in Configuring the IBM Storage System DS8000 on page 108.
- Connect the fiber optic cables: Information about fiber optic cables and plugging rules to achieve the desired configuration is given in Connecting the fiber optic cables on page 108.
- Verify the configuration: Information about how to verify that your actual configuration matches the desired configuration is given in Verifying the installation on page 110.
Figure 5-2 Main steps for configuring and verifying a FICON point-to-point configuration
Verification checklist
Before configuring the point-to-point topology shown in Figure 5-1 on page 100, the following list was checked. All steps in the checklist must be finished and corrected (if required) to ensure a smooth and successful configuration of the topology. Both hardware and software requirements must be checked.
- Check that the appropriate FICON features are available on the System z server. For details about each feature, see System z FICON feature support on page 25.
  FICON Express2: LX FC 3319; SX FC 3320
  FICON Express4: LX FC 3321, FC 3323, FC 3324; SX FC 3318, FC 3322
  FICON Express8: LX FC 3325; SX FC 3326
- If using the FICON Express8 feature, check the System z operating system requirements. z/OS V1.7 with the IBM Lifecycle Extension for z/OS V1.7 (5637-A01) is required to support the FICON Express8 feature in the z10 server. Check the 2097DEVICE PSP bucket for the latest information about FICON Express8 support on the operating system.
- Check the DS8000 storage hardware requirements to support FICON longwave (LX) or shortwave (SX) connectivity.
- Check that FC 0709 and FC 7092 are installed to support zHPF.
  Note: FC 0709 and FC 7092 must be ordered to obtain the license key to enable zHPF support in a DS8000 storage controller.
- Check that the DS8000 firmware is at level 6.1.16.0 or higher to support the zHPF feature.
- Check that the zHPF feature is enabled in the DS8000.
- Check that the correct fiber optic cables are available to connect the DS8000 storage controller to the z10 server.
  A 9 µm single mode (SM) fiber optic cable is required to support longwave laser (LX) for a maximum distance of 10 km (6.2 miles).
  A 50 µm or 62.5 µm multimode (MM) fiber optic cable is required to support shortwave laser (SX). See System z FICON feature support on page 25 for the maximum supported distance depending on cable type and speed.
  An LC duplex connector is required at both ends of the fiber optic cables to connect to the System z server FICON adapter and the DS8000 host adapter.
Note: All fiber optic cables used in a link must be of the same type. For example, they must be either all single mode or all multi-mode fiber optic cables.
For our scenario, we had an active partition in a System z server running z/OS with HCD. We used HCD to create, save, and activate our I/O definitions.
CHPID PATH=(CSS(2),7F),SHARED,PARTITION=((A29),(=)),PCHID=5A3,  *
      TYPE=FC
CHPID PATH=(CSS(2),83),SHARED,PARTITION=((A29),(=)),PCHID=5D3,  *
      TYPE=FC
The PATH keyword in the CHPID statement defines CHPID 7F in CSS 2 as SHARED. The LPAR name (A29) is specified in the PARTITION keyword to use CSS 2 to access CHPID 7F. LPAR names specified in the IOCDS correspond to system name SC33. PCHID 5A3 has been merged into the IOCDS by the CMT, which assigned CHPID 7F to PCHID 5A3. The TYPE of the channel is specified as FC (native FICON). The same definition rules apply to the definition of CHPID 83.
With the CHPIDs defined, we next show how the CUs attached to the CHPIDs are defined; Example 5-2 displays the CNTLUNIT statement and keywords.
Example 5-2 CNTLUNIT definition for point-to-point configuration
CNTLUNIT CUNUMBR=D000,                                           *
      PATH=((CSS(0),50,54,58,5C),(CSS(1),50,54,58,5C),(CSS(2),  *
      50,54,58,5C,7E,82,7F,83),(CSS(3),50,54,58,5C)),            *
      UNITADD=((00,256)),                                        *
      LINK=((CSS(0),2C,2C,0A,0A),(CSS(1),2C,2C,0A,0A),(CSS(2),  *
      2C,2C,0A,0A,6688,66CF,**,**),(CSS(3),2C,2C,0A,0A)),        *
      CUADD=0,UNIT=2107
CNTLUNIT CUNUMBR=D100,                                           *
      PATH=((CSS(0),51,55,59,5D),(CSS(1),51,55,59,5D),(CSS(2),  *
      51,55,59,5D,7E,82,7F,83),(CSS(3),51,55,59,5D)),            *
      UNITADD=((00,256)),                                        *
      LINK=((CSS(0),28,28,0E,0E),(CSS(1),28,28,0E,0E),(CSS(2),  *
      28,28,0E,0E,6688,66CF,**,**),(CSS(3),28,28,0E,0E)),        *
      CUADD=1,UNIT=2107
There are two control units defined by the CUNUMBR keyword: D000 and D100. The logical CU image number 0 is specified for control unit D000. For control unit D100, the logical CU image number 1 is specified in the CUADD keyword. CHPIDs 7F and 83 have access to CUs D000 and D100 because both CHPIDs (among others) are specified in CSS2 in the PATH keyword for both CUs. The LINK keyword is required because some other FICON channels access the CUs through a FICON Director. No link address is required for CHPIDs 7F and 83; two asterisks (**) are specified in the LINK keyword for point-to-point connectivity. After the CHPIDs and the CUs are defined, the next step is to define the devices owned by the CUs; Example 5-3 displays the IODEVICE statement and keywords.
Example 5-3 IODEVICE definition for point-to-point configuration
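As a sketch consistent with the description that follows (the UNIT values 3390B for base devices and 3390A for HyperPAV alias devices, and the exact keyword set, are illustrative assumptions and should be checked against your own IOCDS), the IODEVICE statements might look like this:
IODEVICE ADDRESS=(D000,113),CUNUMBR=(D000),STADET=Y,UNIT=3390B
IODEVICE ADDRESS=(D071,143),CUNUMBR=(D000),STADET=Y,UNIT=3390A
IODEVICE ADDRESS=(D100,113),CUNUMBR=(D100),STADET=Y,UNIT=3390B
IODEVICE ADDRESS=(D171,143),CUNUMBR=(D100),STADET=Y,UNIT=3390A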
The IODEVICE statement specifies the characteristics of devices D000 and D100. For control unit D000 there are 113 devices defined, starting at device address D000. The same definitions are done for the devices associated with control unit D100. These are base devices in the storage unit. In addition, 143 HyperPAV alias devices are defined for each control unit, starting at device addresses D071 and D171, respectively.
Based on those considerations, we are going to configure FCTCs in a point-to-point FICON environment. Figure 5-3 illustrates the desired FCTC configuration. To simplify the illustration only four CUs and two LPARs are shown in this sample configuration, although more than four CUs could be defined.
Figure 5-3 FCTC configuration (point-to-point): z10 EC server SCZP201 with LPAR SC30 (A23) running z/OS V1R10 and a FICON Express8 LX channel in CSS2 (PCHID 5A2, CHPID 7E).
Both LPARs have access to CHPIDs 7E and 82 because the CHPIDs are defined as shared. There is a direct connection between CHPID 7E and CHPID 82 by a fiber optic cable. Each LPAR will have access to a logical control unit and its logical devices defined in the corresponding LPAR. LPAR A23 will communicate via a FICON channel to CU 5034, which is assigned to LPAR A24. LPAR A24 will communicate via a FICON channel to CU 4044, which is assigned to LPAR A23. Example 5-4 shows the definition of FCTC control units and devices.
Example 5-4 FCTC definitions point-to-point
CNTLUNIT CUNUMBR=5034,PATH=((CSS(2),7E)),UNITADD=((04,004)),     *
      CUADD=23,UNIT=FCTC
IODEVICE ADDRESS=(5034,004),UNITADD=04,CUNUMBR=(5034),           *
      STADET=Y,PARTITION=((CSS(2),A24)),UNIT=FCTC
CNTLUNIT CUNUMBR=4044,PATH=((CSS(2),82)),UNITADD=((04,004)),     *
      CUADD=24,UNIT=FCTC
IODEVICE ADDRESS=(4044,004),UNITADD=04,CUNUMBR=(4044),           *
      STADET=Y,PARTITION=((CSS(2),A23)),UNIT=FCTC
One control unit is defined for each LPAR: CU 5034 for LPAR A23, and CU 4044 for LPAR A24. The PARTITION keyword in the IODEVICE statement specifies the logical partition that can access the device using the FICON channel. There are four devices specified for each LPAR (LPAR A23 and LPAR A24). This allows data traffic to flow between both LPARs using the corresponding devices:
SC30           SC31
4044  <--->  5034
4045  <--->  5035
4046  <--->  5036
4047  <--->  5037
The rules we followed for device numbering are described in FCTC device numbering scheme on page 306. Figure 5-4 illustrates the logical view of the FCTC configuration and the data path between LPARs. To simplify the illustration, only two logical CUs are shown.
Figure 5-4 FCTC data transfer (point-to-point): logical view of LPAR A23 and LPAR A24 exchanging data through CU 4044 and CU 5034 over CHPIDs 7E and 82.
Data is transferred over the FICON link in both directions between the CUs and the logical partitions (LPARs). LPAR A23 sends data to A24 via CU 5034. LPAR A23 receives data from A24 via CU 4044. The reverse applies to LPAR A24 when data is sent to or received from A23.
The options used to activate the changes are:
- Option 1. Build production I/O definition file
- Option 2. Build IOCDS
- Option 6. Activate or verify configuration dynamically
The Build production I/O definition file (IODF) function (option 1) saves the definition data into an IODFxx data set, where xx in the data set name is a suffix used to identify the IODF among other IODFs that may already exist. The suffix of the IODF data set is used in the IPL parameter when the z/OS image is brought up. This ensures that the desired IODF used by the operating system image matches the IOCDS that is loaded into the HSA.
Using Build IOCDS (option 2) allows you to build an IOCDS and save it to the CPC. The IOCDS is built using the definitions saved in the IODFxx data set.
Using Activate or verify configuration dynamically (option 6) allows you to dynamically activate changes in the I/O definitions. No power-on reset (POR) is required because the changes are updated directly in the HSA of the CPC. This is the preferred method for activating changes.
Another way to activate changes in I/O definitions is to perform a POR of the CPC with the newly built IOCDS. Keep in mind, however, that a POR causes a system outage. For this reason, if a POR is required, we suggest that you plan ahead and make any changes in the I/O definitions so that the new IOCDS is ready for use at the next POR. Changes can be done at any time and saved in an IODF data set. The newly built IOCDS based on the I/O definitions in the IODF should be sent to the CPC's Support Element and marked for use at the next POR. This ensures that the newly built IOCDS will be used the next time the CPC is activated.
To verify which I/O definitions (IODFs) are currently active, enter the D IOS,CONFIG command at the z/OS operator console. The response to the command is shown in Example 5-5.
IOS506I 15.20.05 I/O CONFIG DATA 345
ACTIVE IODF DATA SET = SC33.IODF01
CONFIGURATION ID = TEST2094    EDT ID = 01
TOKEN: PROCESSOR DATE     TIME     DESCRIPTION
SOURCE: SCZP201  09-05-11 20:50:41 SC33 IODF01
ACTIVE CSS: 2   SUBCHANNEL SETS CONFIGURED: 0, 1
CHANNEL MEASUREMENT BLOCK FACILITY IS ACTIVE
Example 5-5 shows that the IODF01 data set is currently active in z/OS image SC33. It also provides information about the CSSs used in this z/OS image. CSS2 is the active CSS in system SC33. Compare the information provided by the D IOS,CONFIG command with the data of the ID statement in the IOCDS, shown in Example 5-6.
Example 5-6 ID statement in IOCDS
ID
The suffix of the IODF data set shown in the IOCDS ID statement and displayed by the D IOS command should be the same number (that is, IODF01). This will prove that the IOCDS was built from the current active IODF data set and that the latest changes and new IO definitions were successfully activated.
For further information and considerations regarding fiber cabling and documentation, refer to 4.11, Physical connectivity on page 93 and 4.2, Documentation on page 62.
CF CHP(7F),ONLINE
IEE502I CHP(7F),ONLINE
IEE712I CONFIG PROCESSING COMPLETE
To achieve the best performance on the FICON channel, make sure that zHPF is enabled. Refer to 4.9.6, High Performance FICON on page 88 for considerations regarding how to exploit zHPF on System z servers. Enter D IOS,ZHPF at the z/OS console to display the zHPF settings for the z/OS image. If zHPF is disabled, enter SETIOS ZHPF=YES to enable it, as shown in Example 5-8.
Example 5-8 Enabling zHPF
D IOS,ZHPF
RESPONSE=SC33
IOS630I 13.27.23 ZHPF FACILITY 021
HIGH PERFORMANCE FICON FACILITY IS ENABLED
Using the SETIOS ZHPF=YES command enables zHPF only temporarily; after the next system IPL, the zHPF facility is reset to the default (disabled). To permanently enable zHPF for z/OS, change the FCX parameter in the SYS1.PARMLIB member IECIOSxx to FCX=YES. Now you can query the status and functional details of the channel by entering D M=CHP(7F) at the operator console. The command output is shown in Example 5-9. It provides information about the channel and the attached devices.
Example 5-9 D M=CHP(7F)
D M=CHP(7F)
IEE174I 11.58.59 DISPLAY M 290
CHPID 7F: TYPE=1A, DESC=FICON POINT TO POINT, ONLINE
DEVICE STATUS FOR CHANNEL PATH 7F
      0  1  2  3  4  5  6  7  8  9  A  B  C  D  E  F
0D00  +  +  +  +  +  +  +  +  +  +  +  +@ +@ +  +  +
0D01  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +
0D02  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +
...
0DFD  HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA
0DFE  HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA
0DFF  HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA
SWITCH DEVICE NUMBER = NONE
ATTACHED ND = 002107.922.IBM.75.0000000BALB1
PHYSICAL CHANNEL ID = 05A3
FACILITIES SUPPORTED = ZHPF
Example 5-9 shows that CHPID 7F is online and operating in a point-to-point configuration. Information is also displayed about the attached devices and the facilities (for example, zHPF) supported by the channel. To verify that communication to the attached devices is working properly, enter D M=DEV(xxxx), where xxxx is any device number. For example, to check the status of device D000, which is a storage device, enter D M=DEV(D000) on a z/OS console (see Example 5-10).
Example 5-10 D M=DEV(D000)
D M=DEV(D000)
IEE174I 12.08.55 DISPLAY M 296
DEVICE D000   STATUS=ONLINE
CHP                   50   54   58   5C   7E   82   7F   83
ENTRY LINK ADDRESS    0F   0F   1B   1B   6503 651B ..   ..
DEST LINK ADDRESS     2C   2C   0A   0A   6688 66CF 0D   0D
PATH ONLINE           N    N    N    N    N    N    Y    Y
CHP PHYSICALLY ONLINE N    N    N    N    N    N    Y    Y
PATH OPERATIONAL      N    N    N    N    N    N    Y    Y
MANAGED               N    N    N    N    N    N    N    N
CU NUMBER             D000 D000 D000 D000 D000 D000 D000 D000
MAXIMUM MANAGED CHPID(S) ALLOWED: 0
DESTINATION CU LOGICAL ADDRESS = 00
SCP CU ND      = 002107.900.IBM.75.0000000BALB1.0132
SCP TOKEN NED  = 002107.900.IBM.75.0000000BALB1.0000
SCP DEVICE NED = 002107.900.IBM.75.0000000BALB1.0000
HYPERPAV ALIASES CONFIGURED = 143
FUNCTIONS ENABLED = MIDAW, ZHPF
The response to D M=DEV(D000) displays all available paths to the devices and their status. Information about the device (for example, device type), the control unit number, and the functions supported by the device (for example, MIDAW and zHPF) is shown.
Essential information for PCHID 5A3 is shown on the PCHID Details panel:
- The PCHID status is Operating.
- The PCHID type is FICON Express8.
- CSS.CHPID 2.7F is assigned to PCHID 5A3.
- The owning image of PCHID 5A3 is A29.
- The CHPID assigned to PCHID 5A3 is shared.
Similar information can be found on the CHPID details panel, shown in Figure 5-7.
The CHPID Details panel provides information that is similar to the PCHID details panel. To display the CHPID details, select the channel list for an operating system image from the Support Element workplace. Notice that the information for PCHID 5A3 (CHPID 2.7F) provided on the details window matches the designated configuration. This proves that PCHID 5A3 has CHPID 2.7F assigned and that image A29 can access the channel. Repeat these checks on other channels that were recently configured. Important: If any of the data displayed on the PCHID or CHPID detail panel does not match the desired configuration, you must correct the definitions in the IOCDS.
Next, check that the FICON channels are connected to the correct host adapter port in the DS8000 storage controller. On either the PCHID details or CHPID details panel, click the Channel Problem Determination button. This will display the Channel Problem Determination panel, where you can select which information you want to display (see Figure 5-8).
Select Analyze channel information and click OK. The Analyze Channel Information window is displayed, as shown in Figure 5-9 on page 113, which provides information about the node attached to the FICON channel.
Information about the nodes is displayed at the bottom part of the window. The lower left side displays information about the node in the System z server. The lower right side displays information about the attached node. Important: Make sure that the Node status for both nodes is displayed as Valid. If any other status is shown, none of the displayed information is valid. Check that the Type/model information as well as the serial number (Seq. number) is as expected. Next, examine the Tag field for each node. The Tag provides information about the port number of the attached node. The two right-most digits of the Tag value represent the CHPID number for the channel node (7F). For the attached node, the four digits represent the port number (0132). Be aware, however, that the tag value is provided by the attached device during link initialization and may have different meanings, depending on the vendor. The World Wide Node Name (WWN) and the World Wide Port Name (WWP) are also shown for each port, and may be used to prove that the channel is connected to the correct FICON adapter port if the WWN or WWP of the attached device is known. If the node status is not Valid or the Tag value and WWPN value are not correct, check the fiber optic cable link between the z10 server and the FICON Director to ensure that it is plugged correctly.
In our scenario we are now sure that PCHID 5A3 has connectivity to port number 0132 at the DS8000 host adapter, which matches our desired configuration (see Figure 5-1 on page 100). If the displayed values are not as expected, the fiber optic cables may not be plugged correctly and must be checked. After completing the preceding steps and proving that the physical path to the DS8000 storage controller and the logical definitions of the link are correct, check that the path to the control unit image is initialized correctly and properly defined. On the Channel Problem Determination panel, shown in Figure 5-8 on page 112, select Analyze Serial Link Status and click OK. The Analyze Serial Link Status window is displayed, as shown in Figure 5-10.
The Analyze Serial Link Status window provides status information about the link to the control unit images defined in the IOCDS. Scroll through the list of CU images and check that the status for all CUs displays as Initialization Complete. Figure 5-10 shows a link status of Initialization Complete for all defined CU images on CHPID 7F. Although there is no FICON Director attached, a link address of 0D is displayed by default for a point-to-point FICON link. If the link status Initialization Complete is not shown, you must check that the ports in the CU are correctly configured, and that the fiber optic cable link to the CU has the correct cable type and plugging.
Chapter 6. Configuring a switched point-to-point topology
Figure 6-1 FICON switched point-to-point configuration: z10 EC server SCZP201, LPAR SC30 (A23) running z/OS V1R10, FICON Express8 LX channel in CSS2 (PCHID 5A2, CHPID 7E), connected through a FICON Director (switch number 65, switch address 65; ports 1B and 3F shown) to a DS8000 (LX host adapter ports 0003 and 0242) containing CU D000 (devices D0xx) and CU D100 (devices D1xx). All cable connectors are LC Duplex type.
The z10 EC server (SCZP201) has two LPARs defined and activated. The system names running in these partitions are SC30 and SC31, which correspond to the LPAR names A23 and A24, respectively. The operating system running in both partitions is z/OS V1R10. As a standard feature of the z10 server, the zHPF protocol will be used for data transfer to the DS8000 storage device. Two FICON channels are defined as shared in CSS2, and both LPARs have access to the two FICON channels. The channels are FICON Express8 LX features (FC 3325) using PCHID 5A2 (CHPID 7E) and PCHID 5D2 (CHPID 82).
The SAN768B FICON Director has connectivity to the z10 server on port 03 and port 1B, and connectivity to the DS8000 storage system on ports 28 and 3F. All the ports in the FICON Director are longwave (LX) ports. The switch number (Switch #) and the switch address (Switch @) are both set to 65. The switch number is specified in the IOCP, whereas the switch address is the Domain ID specified in the FICON Director. The two host adapters installed in the DS8000 are longwave (LX) lasers. Port number 0003 and port number 0242 are used to connect to the FICON Director. Two logical control units (D000 and D100) are defined and have devices D000-D0FF and D100-D1FF assigned. The DS8000 will have the zHPF feature enabled. A maximum unrepeated distance of 10 km (6.2 miles) is supported by the longwave laser (LX) feature when using 9 µm single mode (SM) fiber optic cable. The fiber optic cables have an LC Duplex connector at both ends to connect to the z10 FICON Express8 channels, to the ports in the FICON Director, and to the DS8000 host adapters.
Tasks
Figure 6-2 shows the main steps required to define and activate a FICON switched point-to-point configuration.
- Verification checklist: Follow the verification checklist to ensure that all hardware and software prerequisites are met. Go to Verification checklist on page 118.
- Define the channel, CU, and devices: Information about defining the channel paths, control units, and devices is given in Defining the channel, CU, and device in IOCP on page 119.
- Configure the storage CU: The configuration tasks for a DS8000 storage system are described in Configuring the IBM Storage System DS8000 on page 125.
- Configure the FICON Director: All required steps to set up the FICON Director are shown in 8.1.1, Configuration flowchart on page 152.
- Connect the fiber optic cables: Information about fiber optic cables and plugging rules is given in Connecting the fiber optic cables on page 126.
- Verify the configuration: Information about how to verify that your actual configuration matches the desired configuration is given in Verifying the installation on page 128.
Figure 6-2 Main steps for configuring and verifying a FICON switched point-to-point configuration
117
Verification checklist
Before configuring the switched point-to-point topology shown in Figure 6-1 on page 116, the following list was checked. All steps in the checklist must be completed and corrected (if required) to ensure a smooth and successful configuration of the topology. Both hardware and software requirements must be checked.
Check that the appropriate FICON features are available on the System z server. For details about each feature, refer to System z FICON feature support on page 25.
- FICON Express2: LX FC 3319; SX FC 3320
- FICON Express4: LX FC 3321, FC 3323, FC 3324; SX FC 3318, FC 3322
- FICON Express8: LX FC 3325; SX FC 3326
If using FICON Express8 features, check the System z operating system requirements. z/OS V1.7 with the IBM Lifecycle Extension for z/OS V1.7 (5637-A01) is required to support the FICON Express8 feature in a z10 server. Check the 2097DEVICE PSP bucket for the latest information about FICON Express8 support on the operating system.
Check that the number and types of FICON Director ports (LX SFPs, SX SFPs) match the configuration requirements.
Check the DS8000 storage hardware requirements to support FICON longwave (LX) connectivity to the z10 server:
- Check that FC 0709 and FC 7092 are installed to support zHPF. Note: FC 0709 and FC 7092 must be ordered to obtain the license key to enable zHPF support in a DS8000 storage controller.
- Check that the DS8000 firmware is at level 6.1.16.0 or higher to support the zHPF feature.
- Check that the zHPF feature is enabled in the DS8000.
Check that the correct fiber optic cables are available to connect the FICON Director to the System z server, and the DS8000 storage to the FICON Director:
- A 9 µm single mode (SM) fiber optic cable is required to support longwave laser (LX) for a maximum distance of 10 km (6.2 miles).
- A 50 µm or 62.5 µm multimode (MM) fiber optic cable is required to support shortwave laser (SX).
- See System z FICON feature support on page 25 for the maximum supported distance depending on cable type and speed.
- An LC duplex connector is required at both ends of the fiber optic cables to connect to the z10 server FICON channel, the ports in the FICON Director, and the DS8000 host adapter ports.
118
Note: All fiber optic cables used in a link must be of the same type. For example, they must be either all single mode or all multi-mode fiber optic cables.
119
First we explain how the CHPIDs are defined. See Example 6-1 for the CHPID statement and its keywords.
Example 6-1 CHPID definition for switched point-to-point configuration
CHPID PATH=(CSS(0,2),7E),SHARED,                                    *
      PARTITION=((CSS(0),(A01),(=)),(CSS(2),(A23,A24,A25,A29),     *
      (=))),SWITCH=65,PCHID=5A2,TYPE=FC
CHPID PATH=(CSS(1,2),82),SHARED,                                    *
      PARTITION=((CSS(1),(A11),(=)),(CSS(2),(A23,A24,A25,A29),     *
      (=))),SWITCH=65,PCHID=5D2,TYPE=FC
The PATH keyword in the CHPID statement defines CHPID 7E in CSS 2 as shared. The LPAR names A23 and A24 are specified in the PARTITION keyword to use CSS 2 to access CHPID 7E. The LPAR names specified in the IOCDS correspond to system names SC30 and SC31. The SWITCH keyword provides the switch number (65) of the Director to which the channel is connected. PCHID 5A2 has been merged into the IOCDS by the CHPID Mapping Tool (CMT), which assigned CHPID 7E to PCHID 5A2. The TYPE of the channel is specified as FC (native FICON). The same definition rules apply to the definition of CHPID 82. With the CHPIDs defined, we next show how the CUs attached to the CHPIDs are defined. Example 6-2 displays the CNTLUNIT statement and keywords.
Example 6-2 CNTLUNIT definition for switched point-to-point configuration
CNTLUNIT CUNUMBR=0065,PATH=((CSS(2),7E,82)),                        *
      UNITADD=((00,001)),LINK=((CSS(2),FE,FE)),UNIT=2032
CNTLUNIT CUNUMBR=D000,                                              *
      PATH=((CSS(0),50,54,58,5C),(CSS(1),50,54,58,5C),(CSS(2),     *
      50,54,58,5C,7E,82),(CSS(3),50,54,58,5C)),                     *
      UNITADD=((00,256)),                                           *
      LINK=((CSS(0),2C,2C,0A,0A),(CSS(1),2C,2C,0A,0A),(CSS(2),     *
      2C,2C,0A,0A,28,3F),(CSS(3),2C,2C,0A,0A)),CUADD=0,             *
      UNIT=2107
CNTLUNIT CUNUMBR=D100,                                              *
      PATH=((CSS(0),51,55,59,5D),(CSS(1),51,55,59,5D),(CSS(2),     *
      51,55,59,5D,7E,82),(CSS(3),51,55,59,5D)),                     *
      UNITADD=((00,256)),                                           *
      LINK=((CSS(0),28,28,0E,0E),(CSS(1),28,28,0E,0E),(CSS(2),     *
      28,28,0E,0E,28,3F),(CSS(3),28,28,0E,0E)),CUADD=1,             *
      UNIT=2107
There are three control units defined by the CUNUMBR keyword: 0065, D000, and D100. The link address of control unit 0065 is specified as FE in the LINK keyword, which is the port number of the CUP in the FICON Director. The destination ports assigned for control units D000 and D100 are ports 28 and 3F in the FICON Director.
120
The logical CU image number 0 is specified for control unit D000. The logical CU image number 1 is specified for control unit D100 in the CUADD keyword. CHPIDs 7E and 82 have access to all CUs because both CHPIDs are defined in the PATH keyword on all three CUs. The CHPIDs and the CUs are now defined. The next step is to define the devices owned by the CU. See Example 6-3 for the IODEVICE statement and its keywords.
Example 6-3 IODEVICE definition for switched point-to-point configuration
IODEVICE ADDRESS=065,UNITADD=00,CUNUMBR=(0065),STADET=Y,            *
      UNIT=2032
IODEVICE ADDRESS=(D000,113),CUNUMBR=(D000),STADET=Y,UNIT=3390B
IODEVICE ADDRESS=(D071,143),CUNUMBR=(D000),STADET=Y,UNIT=3390A
IODEVICE ADDRESS=(D100,113),CUNUMBR=(D100),STADET=Y,UNIT=3390B
IODEVICE ADDRESS=(D171,143),CUNUMBR=(D100),STADET=Y,UNIT=3390A
The IODEVICE statement specifies the characteristics of devices 0065, D000, and D100. Device 0065 belongs to control unit 0065, which is the CUP port in the FICON Director. On control unit D000 there are 113 devices defined, starting at device address D000. The same definitions are done for the devices associated with control unit D100. These are base devices in the storage unit. In addition, 143 HyperPAV alias devices are defined, starting at device addresses D071 and D171.
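The base and alias numbers fit together exactly: 113 base devices plus 143 HyperPAV aliases account for all 256 unit addresses of one logical control unit. The following minimal Python sketch is ours (it is not part of the IOCDS or of any z/OS tooling) and only makes that arithmetic explicit:
base_start, base_count = 0xD000, 113     # base devices, IODEVICE ADDRESS=(D000,113)
alias_start, alias_count = 0xD071, 143   # HyperPAV alias devices, IODEVICE ADDRESS=(D071,143)
assert alias_start == base_start + base_count   # aliases start right after the last base device
assert base_count + alias_count == 256          # together they fill UNITADD=((00,256))
print(f"base range  {base_start:04X}-{base_start + base_count - 1:04X}")     # D000-D070
print(f"alias range {alias_start:04X}-{alias_start + alias_count - 1:04X}")  # D071-D0FF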
121
(Diagram labels: z10 EC SCZP201; LPAR SC30 A23 z/OS V1R10; FICON Express8 LX, CSS2, PCHID 5A2, CHPID 7E; SAN768B; Port 03; Switch # 65 / Switch @ 65; Port 1B)
In this configuration there are two LPARs (A23 and A24) residing in the same z10 server. Both LPARs can access CHPIDs 7E and 82 because those channel paths are defined as shared. The channels are attached to FICON Director 65 on ports 03 and 1B. For FCTC communication, there are four logical control units defined: two for each partition for direct access in the same server, and two for each partition to access over the FICON link. Control units 4044 and 5044 allow direct access to LPAR A23. Control units 4034 and 5034 allow access to LPAR A24. Example 6-4 shows the definition of FCTC CUs and devices.
Example 6-4 IOCDS definitions for FICON CTC
CNTLUNIT CUNUMBR=5034,PATH=((CSS(2),7E)),UNITADD=((04,004)),        *
      LINK=((CSS(2),1B)),CUADD=23,UNIT=FCTC
IODEVICE ADDRESS=(5034,004),UNITADD=04,CUNUMBR=(5034),              *
      STADET=Y,PARTITION=((CSS(2),A24)),UNIT=FCTC
CNTLUNIT CUNUMBR=5044,PATH=((CSS(2),7E)),UNITADD=((04,004)),        *
      LINK=((CSS(2),1B)),CUADD=24,UNIT=FCTC
IODEVICE ADDRESS=(5044,004),UNITADD=04,CUNUMBR=(5044),              *
      STADET=Y,PARTITION=((CSS(2),A23)),UNIT=FCTC
CNTLUNIT CUNUMBR=4034,PATH=((CSS(2),82)),UNITADD=((04,004)),        *
      LINK=((CSS(2),03)),CUADD=23,UNIT=FCTC
IODEVICE ADDRESS=(4034,004),UNITADD=04,CUNUMBR=(4034),              *
      STADET=Y,PARTITION=((CSS(2),A24)),UNIT=FCTC
CNTLUNIT CUNUMBR=4044,PATH=((CSS(2),82)),UNITADD=((04,004)),        *
      LINK=((CSS(2),03)),CUADD=24,UNIT=FCTC
IODEVICE ADDRESS=(4044,004),UNITADD=04,CUNUMBR=(4044),              *
      STADET=Y,PARTITION=((CSS(2),A23)),UNIT=FCTC
Four control units are defined by the CNTLUNIT statement: 4044 and 5044 for direct access to LPAR A23, and 4034 and 5034 for LPAR A24. Because CHPIDs 7E and 82 are shared by the logical partitions, both CHPIDs can be used by both partitions for FCTC communication.
122
The PARTITION keyword specifies that devices in control units 5034 and 4034 allow direct access only by LPAR A24. These devices can be accessed by LPAR A23 over the FICON link by CHPIDs 7E and 82. The same rule applies to LPAR A24. This allows data traffic to flow between both LPARs using the corresponding devices, as shown here:
SC30  4044   4045   4046   4047
      <----> <----> <----> <---->
SC31  5034   5035   5036   5037

SC30  5044   5045   5046   5047
      <----> <----> <----> <---->
SC31  4034   4035   4036   4037
The rules we followed for device numbering are described in FCTC device numbering scheme on page 306. Figure 6-4 illustrates the logical view of the FCTC configuration and the data path between two corresponding FCTC devices. To simplify the illustration, only two logical CUs are shown.
Figure 6-4 Corresponding FCTC devices (diagram: LPAR A23, CU 4044, CHPID 7E, port 03, switch @65, port 1B, CHPID 82, CU 5034, LPAR A24; data transfer flows in both directions)
Data is transferred over the FICON link in both directions between the CUs and the logical partitions (LPARs). LPAR A23 sends data to A24 via CU 5034 and receives data from A24 via CU 4044. The reverse applies to LPAR A24 when data is sent to or received from A23.
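To make the device pairing concrete, here is a minimal Python sketch (our own illustration; the helper function name is hypothetical and nothing here is generated by HCD or z/OS) that lists the corresponding FCTC device pairs described above:
def fctc_pairs(devs_sc30, devs_sc31, count=4):
    # Pair the four devices of one CU used by SC30 with the four devices of the corresponding CU used by SC31.
    return [(devs_sc30 + i, devs_sc31 + i) for i in range(count)]

for a, b in fctc_pairs(0x4044, 0x5034) + fctc_pairs(0x5044, 0x4034):
    print(f"SC30 {a:04X} <----> SC31 {b:04X}")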
123
The options on the menu to activate the changes are:
Option 1. Build production I/O definition file
Option 2. Build IOCDS
Option 6. Activate or verify configuration dynamically
The Build production I/O definition file (IODF) function (option 1) saves definition data into an IODFxx data set, where xx in the data set name is a suffix to identify the IODF among other IODFs that may already exist. The suffix of the IODF data set is used in the IPL parameter when the z/OS image is brought up. This ensures that the desired IODF used by the operating system image matches the IOCDS that is loaded into the HSA. Using Build IOCDS (option 2) allows you to build an IOCDS and save it to the CPC. The IOCDS is built using the definitions saved in the IODFxx data set. Using Activate or verify configuration dynamically (option 6) allows you to dynamically activate changes in the I/O definitions. No Power-on Reset (POR) is required because the changes are directly updated to the HSA in the CPC. This is the preferred method for activating changes. Another way to activate changes in I/O definitions is to perform a POR at the CPC with the newly built IOCDS. Keep in mind, however, that a POR causes a system outage. For this reason, if a POR is required, we suggest that you plan ahead to make any changes in I/O definitions so that the new IOCDS is ready for use at the next POR. Changes can be made at any time and saved in an IODF data set. The newly built IOCDS, based on the I/O definitions in the IODF, should be sent to the CPC's Support Element and marked for use at the next POR. This ensures that the newly built IOCDS will be used the next time the CPC is activated.
124
To verify which I/O definitions (IODF) are currently active, enter D IOS,CONFIG at the z/OS operator console. The response to the command is shown in Example 6-5.
Example 6-5 D IOS,CONFIG command
IOS506I 09.08.24 I/O CONFIG DATA 030
ACTIVE IODF DATA SET = SC31.IODF73
CONFIGURATION ID = TEST2097   EDT ID = 01
TOKEN: PROCESSOR DATE     TIME     DESCRIPTION
SOURCE: SCZP201  09-04-21 16:12:24 SC31     IODF73
ACTIVE CSS: 2    SUBCHANNEL SETS CONFIGURED: 0, 1
CHANNEL MEASUREMENT BLOCK FACILITY IS ACTIVE
Example 6-5 shows that the IODF73 data set is currently active in z/OS image SC31. It also provides information about the CSSs used in this z/OS image. CSS2 is the active CSS in system SC31. Compare the information provided by the D IOS,CONFIG command with the data of the ID statement in the IOCDS, shown in Example 6-6.
Example 6-6 ID statement in IOCDS
ID
The suffix of the IODF data set shown in the IOCDS ID statement and displayed by the D IOS command should be the same number (that is, IODF73). This proves that the IOCDS was built from the currently active IODF data set and that the latest changes and new I/O definitions were successfully activated.
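To make the comparison concrete, here is a minimal Python sketch (our illustration only; the value assumed for the IOCDS side is an example, because the ID statement itself is not reproduced above) that checks that the suffix of the active IODF matches the IODF recorded in the IOCDS ID statement:
active_iodf = "SC31.IODF73"   # from the D IOS,CONFIG response in Example 6-5
iocds_iodf  = "IODF73"        # assumed example value taken from the IOCDS ID statement
assert active_iodf.split(".")[-1] == iocds_iodf, "the IOCDS was not built from the active IODF"
print("IODF suffix:", iocds_iodf[-2:])   # 73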
125
All the required steps to configure the FICON Director are described in detail in Chapter 8, Configuring FICON Directors on page 151. Go to 8.1.1 Configuration flowchart on page 152 and follow the procedures described there. Return to this section after the FICON Director is configured and ready for use.
CF CHP(7E)
IEE502I CHP(7E),ONLINE
IEE712I CONFIG PROCESSING COMPLETE
To achieve the best performance on the FICON channel, make sure that zHPF is enabled. Refer to 4.9.6, High Performance FICON on page 88 for considerations regarding how to exploit zHPF on System z servers. Enter D IOS,ZHPF at the z/OS console to display the zHPF settings for the system. If zHPF is disabled, enter SETIOS ZHPF=YES. This enables zHPF, as shown in Example 6-8.
Example 6-8 Enable zHPF
126
HIGH PERFORMANCE FICON FACILITY IS ENABLED
The SETIOS ZHPF=YES command enables zHPF temporarily. However, after the next system IPL, the zHPF facility will be reset to the default (disabled). To permanently enable zHPF for z/OS, change the FCX parameter in SYS1.PARMLIB member IECIOSxx to FCX=YES. Now you can query the status and functional details of the channel path by entering D M=CHP(7E) at the operator console. The command output is shown in Example 6-9. It provides information about the channel and the attached devices.
Example 6-9 D M=CHP(7E)
D M=CHP(7E)
IEE174I 13.22.46 DISPLAY M 144
CHPID 7E: TYPE=1B, DESC=FICON SWITCHED, ONLINE
DEVICE STATUS FOR CHANNEL PATH 7E
     0  1  2  3  4  5  6  7  8  9  A  B  C  D  E  F
0006 .  .  .  .  .  +  .  .  .  .  .  .  .  .  .  .
0D00 +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +
0D01 +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +
0D02 +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +
0DFD HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA
0DFE HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA
0DFF HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA
SWITCH DEVICE NUMBER = 0065
DEFINED ENTRY SWITCH - LOGICAL SWITCH ID = 65
ATTACHED ND = SLKWRM.DCX.BRD.CA.0AFX0642C00Z
PHYSICAL CHANNEL ID = 05A2
FACILITIES SUPPORTED = ZHPF
Example 6-9 shows that CHPID 7E is online and working in a switched point-to-point configuration. Information is also displayed about the attached node and the supported facilities (for example, zHPF). To check that communication to the attached devices is working properly, enter D M=DEV(xxxx), where xxxx is any device number. In Example 6-10 we checked the status of device 0065, which is the CUP in the FICON Director, and device D000, which is a storage device.
Example 6-10 D M=DEV(0065,D000)
IEE174I 14.11.57 DISPLAY M 161
DEVICE 0065   STATUS=ONLINE
CHP                   7E   82
ENTRY LINK ADDRESS    03   1B
DEST LINK ADDRESS     FE   FE
PATH ONLINE           Y    Y
CHP PHYSICALLY ONLINE Y    Y
PATH OPERATIONAL      Y    Y
MANAGED               N    N
CU NUMBER             0065 0065
MAXIMUM MANAGED CHPID(S) ALLOWED: 0
DESTINATION CU LOGICAL ADDRESS = 00
SCP CU ND      = SLKWRM.DCX.BRD.CA.1AFX0642C00Z.002F
SCP TOKEN NED  = SLKWRM.DCX.BRD.CA.1AFX0642C00Z.0000
SCP DEVICE NED = SLKWRM.DCX.BRD.CA.1AFX0642C00Z.0000
127
DEVICE D000   STATUS=ONLINE
CHP                   50   54   58   5C   7E   82
ENTRY LINK ADDRESS    0F   0F   1B   1B   03   1B
DEST LINK ADDRESS     2C   2C   0A   0A   28   3F
PATH ONLINE           N    N    Y    Y    Y    Y
CHP PHYSICALLY ONLINE N    N    Y    Y    Y    Y
PATH OPERATIONAL      N    N    Y    Y    Y    Y
MANAGED               N    N    N    N    N    N
CU NUMBER             D000 D000 D000 D000 D000 D000
MAXIMUM MANAGED CHPID(S) ALLOWED: 0
DESTINATION CU LOGICAL ADDRESS = 00
SCP CU ND      = 002107.900.IBM.75.0000000BALB1.0300
SCP TOKEN NED  = 002107.900.IBM.75.0000000BALB1.0000
SCP DEVICE NED = 002107.900.IBM.75.0000000BALB1.0000
HYPERPAV ALIASES CONFIGURED = 143
FUNCTIONS ENABLED = MIDAW, ZHPF
The response to D M=DEV(0065,D000) displays all available paths to the devices and their status. Information about the device (for example, device type), the control unit number, the link address, and the functions supported by the device (MIDAW and zHPF) is shown.
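The same check can be written down explicitly. The following minimal Python sketch is ours (not a z/OS facility); it compares the destination link addresses reported for CHPIDs 7E and 82 with the ports coded in the LINK keyword of Example 6-2, which is exactly what the visual verification above does:
defined_links  = {"7E": "28", "82": "3F"}   # from LINK=(...,28,3F) for CSS(2) in Example 6-2
displayed_dest = {"7E": "28", "82": "3F"}   # DEST LINK ADDRESS shown by D M=DEV(D000) for CHP 7E / 82

for chpid, port in defined_links.items():
    assert displayed_dest[chpid] == port, f"CHPID {chpid}: IOCDS definition and cabling do not match"
print("destination link addresses match the IOCDS definitions")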
128
Essential information for PCHID 5A2 is shown on the PCHID Details panel:
- PCHID status is Operating.
- PCHID type is FICON Express8.
- CSS.CHPID 2.7E (and 0.7E) is assigned to PCHID 5A2.
- The owning images of PCHID 5A2 are A23 and A24, among other images.
- CHPIDs assigned to PCHID 5A2 are shared across images.
Similar information can be found on the CHPID Details panel, shown in Figure 6-7.
The CHPID Details panel provides similar information to the PCHID Details panel. To display the CHPID details, select the channel list for an operating system image from the Support Element workplace. As you can see, the information for PCHID 5A2 (CHPID 2.7E) provided on the details window matches the designated configuration. This proves that PCHID 5A2 has CHPID 2.7E assigned and that images A23 and A24 can access the channel. Repeat this checking on other channels that were recently configured. Note: If any of the data displayed on the PCHID or CHPID detail panel does not match the desired configuration, you must correct the definitions in the IOCDS. Next, check that the FICON channels are connected to the correct FICON switch and port. On either the PCHID details panel or the CHPID details panel, click the Channel Problem Determination button. This will display the Channel Problem Determination panel, where you can select which information you want to display; see Figure 6-8 on page 130.
129
Select Analyze channel information and click OK. The Analyze Channel Information window is displayed, as shown in Figure 6-9, which provides information about the node attached to the FICON channel.
130
Information about the nodes is shown in the bottom part of the window. The lower left side displays information about the node in the System z server. The lower right side displays information about the attached node. Important: Make sure that the Node status for both nodes is displayed as Valid. If any other status is shown, none of the displayed information is valid. Check that the Type/model information and the serial number (Seq. number) are as expected. Next, examine the Tag field for each node. The Tag provides information about the port number of the node. The two right-most digits of the Tag value represent the CHPID number for the channel node (7E). For the attached node, the Tag value represents the port number (03). The World Wide Node Name (WWNN) and the World Wide Port Name (WWPN) are also shown for each port, and they can be used to prove that the channel is connected to the correct FICON Director if the WWNN or WWPN is known. If the node status is not Valid, or the Tag value and the WWNN or WWPN are not correct, check the fiber optic cable link between the z10 server and the FICON Director to ensure that it is plugged correctly. Next, check the Channel link address field, which displays the switch address and the port number of the attached Director. The two left-most digits represent the switch address (65). The two digits in the middle represent the port number (03). In our scenario, we are now sure that PCHID 5A2 has connectivity to port number 03 on switch address 65, which matches our desired configuration (see Figure 6-1 on page 116). If the displayed values are not as expected, the fiber optic cables may not be plugged correctly and must be checked. After completing the preceding steps and proving that the physical path to the FICON Director and the logical definitions of the link are correct, check that the path to the control unit image is initialized correctly and properly defined. On the Channel Problem Determination panel (shown in Figure 6-8 on page 130), select Analyze Serial Link Status and click OK. The Analyze Serial Link Status window is displayed, as shown in Figure 6-10 on page 132.
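The decoding rules above can be written down compactly. The short Python sketch below is purely illustrative; the sample values are assumptions (only the digit positions described in the text matter), and it splits a channel link address and a Tag the way the panel fields are described:
channel_link_address = "650300"   # assumed sample value: switch address, port number, low-order byte
print("switch address:", channel_link_address[0:2])   # 65
print("port number   :", channel_link_address[2:4])   # 03

tag = "007E"                      # assumed sample value; the two right-most digits are the CHPID
print("CHPID         :", tag[-2:])                    # 7E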
131
The Analyze Serial Link Status window provides status information about the link to the control unit images defined in the IOCDS. Figure 6-10 shows a link status of Initialization Complete for all defined CU images on link addresses FE and 28. Information is also displayed showing that PCHID 05A2 (CHPID 2.7E) is connected to port 03 on switch address 65 (switch number and channel link address). Link address FE is the CUP port in the FICON Director, which is used for communicating with and managing the FICON Director. Link address 28 is the destination port where the CU is physically attached. The link to CU addresses 00 and 01 (among other CU images) is initialized. CU addresses 00 and 01 correspond to control unit numbers D000 (CUADD=0) and D100 (CUADD=1), as defined in the IOCDS. If the link status Initialization Complete is not shown, you must check that the ports in the FICON Director and the CU are correctly configured, and that the fiber optic cable link between the Director and the CU has the correct cable type and plugging.
132
Chapter 7. Configuring a cascaded FICON Director topology
133
Figure 7-1 Cascaded FICON configuration (diagram labels): z10 EC SCZP201, LPAR SC30 (A23) z/OS V1R10, FICON Express8 LX CSS2 PCHID 5A2 CHPID 7E; SAN768B Switch # 65 / Switch @ 65, ports 03, 2D, 34, 1B; DS8000 host adapter ports LX 0003 and LX 0242, CU# D000 (D0xx) and CU# D100 (D1xx). All cable connectors are LC Duplex type.
The z10 EC server (SCZP201) has two LPARs defined and activated. The system names running in these partitions are SC30 and SC31, which correspond to the LPAR names A23 and A24. The operating system that is running in both partitions is z/OS V1R10. As a standard feature, the z10 server supports the zHPF protocol, which will be used for data transfer to the DS8000 storage device. Two FICON Express8 channels are defined as shared and assigned to CSS2. Both LPARs have access to the two FICON Express8 channels. The channel adapters are FICON Express8 LX features (FC 3325), using a longwave laser for data traffic through the fiber optic cable.
PCHID 5A2 has CHPID 7E assigned. PCHID 5D2 has CHPID 82 assigned. The SAN768B FICON Director has connectivity to the z10 server on ports 03 and 1B. All the ports in the FICON Director are longwave (LX) ports. The switch number (Switch #) and the switch address (Switch @) are both set to 65. The switch number is specified in the IOCDS. The switch address is the Domain ID specified in the Director. The SAN384B FICON Director has connectivity to the storage devices on ports 88 and CF. All the ports in the FICON Director are longwave (LX) ports. The switch number (Switch #) and the switch address (Switch @) are both set to 66. The switch number is specified in the IOCDS. The switch address is the Domain ID specified in the Director. The FICON Directors are attached by an Inter-Switch Link (ISL), which consists of two fiber optic cables connected to ports 2D and 34 in switch 65 and ports 8D and 34 in switch 66. When the Directors are connected, they build a high-integrity fabric, which is required to support cascaded FICON Directors. The ISLs are transparent in the path from the System z server to the CU and do not require any definitions in the IOCDS. An IBM System Storage DS8000 will be connected to the SAN384B FICON Director on ports 88 and CF. The two host adapters installed in the DS8000 support longwave (LX) lasers. Port numbers 0003 and 0242 in the DS8000 are used to connect to the FICON Director. Two logical control units (D000 and D100) are defined, which have devices D000-D0FF and D100-D1FF assigned. The DS8000 will have the zHPF feature enabled to communicate with the z10 server. A maximum unrepeated distance of 10 km (6.2 miles) is supported by the longwave laser (LX) feature when using 9 µm single mode (SM) fiber optic cables. The fiber optic cables have an LC Duplex connector at both ends to connect to the z10 FICON Express8 channels, as well as to the ports in the FICON Directors and to the DS8000 host adapters.
Tasks
Figure 7-2 on page 136 shows the main steps required to define and activate a cascaded FICON Director configuration.
135
Verification checklist
Follow the verification checklist to ensure that all hardware and software prerequisites are met. Go to Verification checklist on page 136. Information about defining the channel paths, control units, and devices is given in Defining channel, CU, and storage device in IOCDS on page 138.
Configure storage CU
The configuration tasks for a DS8000 storage system are described in Configuring the IBM Storage System DS8000 on page 143. All required steps to set up the FICON Directors are shown in 8.1.1, Configuration flowchart on page 152.
Information about fiber optic cables and plugging rules to achieve the desired configuration is given in Connecting the fiber optic cables on page 144. Information about how to verify that your actual configuration matches the desired configuration is given in Verifying the installation on page 146.
Verify configuration
Figure 7-2 Main steps for configuring and verifying a cascaded FICON Director configuration
Verification checklist
Before configuring the cascaded FICON topology shown in Figure 7-1 on page 134, the following list was checked. All steps in the checklist must be completed and corrected (if required) to ensure a smooth and successful configuration of the topology. Both hardware and software requirements must be checked.
Check that the FICON features needed to establish the desired configuration are available on the System z server. For details about each feature code, see System z FICON feature support on page 25.
- FICON Express2: LX FC 3319; SX FC 3320
- FICON Express4: LX FC 3321, FC 3323, FC 3324; SX FC 3318, FC 3322
- FICON Express8: LX FC 3325; SX FC 3326
If using the FICON Express8 feature, check the System z operating system requirements. z/OS V1.7 with the IBM Lifecycle Extension for z/OS V1.7 (5637-A01) is required to support the FICON Express8 feature in a z10 server.
136
Check the 2097DEVICE PSP bucket for the latest information about FICON Express8 support on the operating system.
Check that the number and types of FICON Director ports (LX SFPs, SX SFPs) match the configuration requirements.
Check that high-integrity fabric support is enabled on the FICON Directors.
Check the DS8000 storage hardware requirements to support FICON longwave (LX) or shortwave (SX) connectivity:
- Check that FC 0709 and FC 7092 are installed to support zHPF. Note: FC 0709 and FC 7092 must be ordered to obtain the license key to enable zHPF support in a DS8000 storage controller.
- Check that the DS8000 firmware is at level 6.1.16.0 or higher to support the zHPF feature.
- Check that the zHPF feature is enabled in the DS8000.
Check that the correct fiber optic cables are available to connect the FICON Director to the z10 server, and the DS8000 storage to the FICON Director:
- A 9 µm single mode (SM) fiber optic cable is required to support longwave laser (LX) for a maximum distance of 10 km (6.2 miles) at maximum speed.
- A 50 µm or 62.5 µm multimode (MM) fiber optic cable is required to support shortwave laser (SX).
- See System z FICON feature support on page 25 for the maximum supported distance, depending on cable type and speed.
- An LC duplex connector is required at both ends of the fiber optic cables to connect to the System z server FICON adapter, as well as the ports in the FICON Director and the DS8000 host adapter.
Note: All fiber optic cables used in a link must be of the same type. For example, they must be either all single mode or all multi-mode fiber optic cables.
137
CHPID PATH=(CSS(0,2),7E),SHARED,                                    *
      PARTITION=((CSS(0),(A01),(=)),(CSS(2),(A23,A24,A25,A29),     *
      (=))),SWITCH=65,PCHID=5A2,TYPE=FC
CHPID PATH=(CSS(1,2),82),SHARED,                                    *
      PARTITION=((CSS(1),(A11),(=)),(CSS(2),(A23,A24,A25,A29),     *
      (=))),SWITCH=65,PCHID=5D2,TYPE=FC
The PATH keyword in the CHPID statement defines CHPID 7E in CSS 2 as SHARED. The LPAR names A23 and A24 are specified in the PARTITION keyword to use CSS 2 to access CHPID 7E. The LPAR names specified in the IOCDS correspond to system names SC30 and SC31. The SWITCH keyword provides the switch number (65) of the Director to which the channel is connected. PCHID 5A2 has been merged into the IOCDS by the CMT, which assigned CHPID 7E to PCHID 5A2. The TYPE of the channel is specified as native FICON (FC).
138
The same definition rules apply to the definition of CHPID 82. With the CHPIDs defined, we illustrate next how the CUs that are attached to the CHPIDs are defined. Example 7-2 displays the CNTLUNIT statement and keywords.
Example 7-2 CNTLUNIT definition for a cascaded FICON configuration
CNTLUNIT CUNUMBR=0065,PATH=((CSS(2),7E,82)),                        *
      UNITADD=((00,001)),LINK=((CSS(2),65FE,65FE)),UNIT=2032
CNTLUNIT CUNUMBR=0066,PATH=((CSS(2),7E,82)),                        *
      UNITADD=((00,001)),LINK=((CSS(2),66FE,66FE)),UNIT=2032
CNTLUNIT CUNUMBR=D000,                                              *
      PATH=((CSS(0),50,54,58,5C),(CSS(1),50,54,58,5C),(CSS(2),     *
      50,54,58,5C,7E,82),(CSS(3),50,54,58,5C)),                     *
      UNITADD=((00,256)),                                           *
      LINK=((CSS(0),2C,2C,0A,0A),(CSS(1),2C,2C,0A,0A),(CSS(2),     *
      2C,2C,0A,0A,6688,66CF),(CSS(3),2C,2C,0A,0A)),CUADD=0,         *
      UNIT=2107
CNTLUNIT CUNUMBR=D100,                                              *
      PATH=((CSS(0),51,55,59,5D),(CSS(1),51,55,59,5D),(CSS(2),     *
      51,55,59,5D,7E,82),(CSS(3),51,55,59,5D)),                     *
      UNITADD=((00,256)),                                           *
      LINK=((CSS(0),28,28,0E,0E),(CSS(1),28,28,0E,0E),(CSS(2),     *
      28,28,0E,0E,6688,66CF),(CSS(3),28,28,0E,0E)),CUADD=1,         *
      UNIT=2107
There are four control units defined by the CUNUMBR keyword: 0065, 0066, D000, and D100. The LINK keyword defines the destination port address to which the control unit is attached. The first byte of the link address represents the switch address, and the second byte defines the port address in that switch. Control units 0065 and 0066 are the CUP ports (port number FE) in the FICON Directors defined on the LINK statement (in our example, 65FE and 66FE). The destination ports specified for control units D000 and D100 are 6688 and 66CF. The logical CU image number 0 is specified for control unit D000. The logical CU image number 1 for control unit D100 is specified in the CUADD keyword. CHPIDs 7E and 82 in CSS2 have access to all control units, as specified in the PATH keyword for all four CUs. The CHPIDs and the CUs are now defined. The next step is to define the devices owned by the CU. Example 7-3 displays the IODEVICE statement and keywords.
Example 7-3 IODEVICE statement for cascaded FICON configuration
IODEVICE ADDRESS=065,UNITADD=00,CUNUMBR=(0065),STADET=Y,            *
      UNIT=2032
IODEVICE ADDRESS=066,UNITADD=00,CUNUMBR=(0066),STADET=Y,            *
      UNIT=2032
IODEVICE ADDRESS=(D000,113),CUNUMBR=(D000),STADET=Y,UNIT=3390B
IODEVICE ADDRESS=(D071,143),CUNUMBR=(D000),STADET=Y,UNIT=3390A
IODEVICE ADDRESS=(D100,113),CUNUMBR=(D100),STADET=Y,UNIT=3390B
IODEVICE ADDRESS=(D171,143),CUNUMBR=(D100),STADET=Y,UNIT=3390A
139
The IODEVICE statement specifies the characteristics of devices 0065, 0066, D000, and D100. Device 0065 belongs to control unit 0065, which is the CUP port in FICON Director 65, and device 0066 belongs to control unit 0066, the CUP port in FICON Director 66. On control unit D000 there are 113 devices defined, starting at device address D000. The same definitions are done for the devices associated with control unit D100. These are base devices in the storage unit. In addition, 143 alias devices are defined, starting at device addresses D071 and D171.
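Because every link address in a cascaded configuration carries both the switch address and the port address, it can help to see the split written out. This minimal Python sketch is ours (not part of HCD or IOCP) and simply decomposes the two-byte link addresses used in Example 7-2:
def split_link(link):
    # First byte = switch address (Domain ID of the Director), second byte = port address in that switch.
    return link[:2], link[2:]

for link in ("65FE", "66FE", "6688", "66CF"):
    switch_address, port_address = split_link(link)
    print(f"link {link}: switch {switch_address}, port {port_address}")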
Figure 7-3 FCTC configuration (diagram labels): z10 EC SCZP201, LPAR SC30 (A23) z/OS V1R10, FICON Express8 LX CSS2 PCHID 5A2 CHPID 7E; z10 EC SCZP202, LPAR SC31 (A24) z/OS V1R10, CSS2, FICON Express8 LX PCHID 5D2 CHPID 82, CU# 5034; SAN768B Switch # 65 / Switch @ 65, ports 03, 2D, 34; ISL
Both z10 servers are attached to the cascaded FICON Directors: CHPID 7E to port 03 in switch 65, and CHPID 82 to port CF in switch 66. Each LPAR has access to a logical control unit and its logical devices defined in the corresponding LPAR in the other z10 server. LPAR A23 will communicate with CU 5034, which is accessed by LPAR A24. LPAR A24 will communicate with CU 4044, which is accessed by LPAR A23. Example 7-4 shows the definition of FCTC control units and devices.
Example 7-4 FCTC configuration
One control unit is defined for each LPAR: CU 5034 is defined for LPAR A23, and CU 4044 is defined for LPAR A24. CU 5034 resides in server SCZP202. CU 4044 resides in server SCZP201. The LINK keyword specifies the destination port used to communicate with the control unit in the other server: port 66CF for LPAR A23 to communicate with CU 5034, and port 6503 for LPAR A24 to communicate with CU 4044. The PARTITION keyword in the IODEVICE statement specifies the logical partition that can access the device. This allows data traffic to flow between the LPARs using the corresponding devices, as follows:
SC30  4044  4045  4046  4047
      <---> <---> <---> <--->
SC31  5034  5035  5036  5037
The rules we followed for device numbering are described in FCTC device numbering scheme on page 306. Figure 7-4 illustrates the logical view of the FCTC configuration and the data path between two LPARs. To simplify the illustration, only two logical CUs are shown.
Figure 7-4 FCTC data transfer (diagram: LPAR A23, CU 4044, CHPID 7E, port 03, switch @65, ISL, switch @66, port CF, CHPID 82, CU 5034, LPAR A24; data transfer flows in both directions)
Data is transferred over the FICON link, in both directions, between the CUs and the logical partitions (LPARs). LPAR A23 sends data to A24 via CU 5034 and receives data from A24 via CU 4044. The reverse applies to LPAR A24 when data is sent to or received from A23.
141
The tasks that are required to save definition data and activate the changes dynamically are performed via HCD. Refer to the HCD User's Guide, SC33-7988, for detailed descriptions of all the activation procedures. On the HCD start menu, select option 2 (Activate or process configuration data). Figure 7-5 shows the Activate or process configuration data menu.
The options on the menu to activate the changes are:
Option 1. Build production I/O definition file
Option 2. Build IOCDS
Option 6. Activate or verify configuration dynamically
The Build production I/O definition file (IODF) function (option 1) saves definition data into an IODFxx data set, where xx in the data set name is a suffix to identify the IODF among other IODFs which may already exist. The suffix of the IODF data set is used in the IPL parameter when the z/OS image is brought up. This ensures that the desired IODF used by the operating system image matches the IOCDS that is loaded into the HSA. Using Build IOCDS (option 2) allows you to build an IOCDS and save it to the CPC. The IOCDS is built using the definitions saved in the IODFxx data set. Using Activate or verify configuration dynamically (option 6) allows you to dynamically activate changes in the I/O definitions. No Power-on Reset (POR) is required because the changes are directly updated to the HSA in the CPC. This is the preferred method of activating changes. Another way to activate changes in I/O definitions is to perform a POR at the CPC with the newly built IOCDS. Keep in mind, however, that a POR causes a system outage. For this reason, if a POR is required, then we suggest that you plan ahead to make any changes in I/O definitions so that the new IOCDS is ready for use at the next POR. Changes can be
142
made at any time and saved in an IODF data set. The newly built IOCDS, based on the I/O definitions in the IODF, should be sent to the CPC's Support Element and marked for use at the next POR. This ensures that the newly built IOCDS will be used the next time the CPC is activated. To verify which I/O definitions (IODF) are currently active, enter the command D IOS,CONFIG at the z/OS operator console. The response to the command is shown in Example 7-5.
Example 7-5 D IOS,CONFIG command
IOS506I 13.10.28 I/O CONFIG DATA 003
ACTIVE IODF DATA SET = SC30.IODF78
CONFIGURATION ID = TEST2094   EDT ID = 01
TOKEN: PROCESSOR DATE     TIME     DESCRIPTION
SOURCE: SCZP201  09-05-07 09:26:49 SC30     IODF78
ACTIVE CSS: 2    SUBCHANNEL SETS CONFIGURED: 0, 1
CHANNEL MEASUREMENT BLOCK FACILITY IS ACTIVE
Example 7-5 shows that the IODF78 data set is currently active in z/OS image SC30. It also provides information about the CSSs used in this z/OS image. CSS2 is the active CSS in system SC30. Compare the information provided by the D IOS,CONFIG command with the data of the ID statement in the IOCDS, as shown in Example 7-6.
Example 7-6 ID statement in IOCDS
ID
The suffix of the IODF data set that is shown in the IOCDS ID statement and displayed by the D IOS command should be the same number (that is, IODF78). This proves that the IOCDS was built from the currently active IODF data set and that the latest changes and new I/O definitions were successfully activated.
143
All the required steps to configure the FICON Directors are described in detail in Chapter 8, Configuring FICON Directors on page 151. Go to 8.1.1, Configuration flowchart on page 152 and follow the procedures described there. Return to this section after the FICON Directors are configured and ready for use.
CF CHP(7E)
IEE502I CHP(7E),ONLINE
IEE712I CONFIG PROCESSING COMPLETE
To achieve the best performance on the FICON channel, make sure that zHPF is enabled. See 4.9.6, High Performance FICON on page 88 for considerations about exploiting zHPF on System z servers. Enter D IOS,ZHPF at the z/OS console to display the zHPF settings for the z/OS image. If zHPF is disabled, enter SETIOS ZHPF=YES. This enables zHPF, as shown in Example 7-8.
144
D IOS,ZHPF
RESPONSE=SC30
IOS630I 13.27.23 ZHPF FACILITY 021
HIGH PERFORMANCE FICON FACILITY IS ENABLED
Using the SETIOS ZHPF=YES command enables zHPF temporarily. However, after the next system IPL, the zHPF facility will be reset to the default (disabled). To permanently enable zHPF for z/OS, change the FCX parameter in SYS1.PARMLIB member IECIOSxx to FCX=YES. Now you can query the status and functional details of the channel path by entering D M=CHP(7E) at the operator console. The command output is shown in Example 7-9. It provides information about the channel and the attached devices.
Example 7-9 D M=CHP(7E)
D M=CHP(7E)
IEE174I 13.22.46 DISPLAY M 144
CHPID 7E: TYPE=1B, DESC=FICON SWITCHED, ONLINE
DEVICE STATUS FOR CHANNEL PATH 7E
     0  1  2  3  4  5  6  7  8  9  A  B  C  D  E  F
0006 .  .  .  .  .  +  .  .  .  .  .  .  .  .  .  .
0D00 +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +
0D01 +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +
0D02 +  +  +  +  +  +  +  +  +  +  +  +  +  +  +  +
0DFD HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA
0DFE HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA
0DFF HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA
SWITCH DEVICE NUMBER = NONE
ATTACHED ND = SLKWRM.DCX.BRD.CA.0AFX0642C00Z
PHYSICAL CHANNEL ID = 05A2
FACILITIES SUPPORTED = ZHPF
Example 7-9 shows that CHPID 7E is online and operating. Information is also displayed about the attached devices and the supported facilities (for example, zHPF). To check that communication to the attached devices is working properly, enter the command D M=DEV(xxxx), where xxxx is any device number. For example, to check the status of device 0065 (the CUP in FICON Director 65) and device D000 (a storage device), enter D M=DEV(0065,D000) on a z/OS console, as shown in Example 7-10.
Example 7-10 D M=DEV(0065,D000)
D M=DEV(0065,D000)
IEE174I 11.59.11 DISPLAY M 986
DEVICE 0065   STATUS=ONLINE
CHP                   7E    82
ENTRY LINK ADDRESS    6503  651B
DEST LINK ADDRESS     65FE  65FE
PATH ONLINE           Y     Y
CHP PHYSICALLY ONLINE Y     Y
PATH OPERATIONAL      Y     Y
MANAGED               N     N
CU NUMBER             0065  0065
145
MAXIMUM MANAGED CHPID(S) ALLOWED: 0
DESTINATION CU LOGICAL ADDRESS = 00
SCP CU ND      = SLKWRM.DCX.BRD.CA.0AFX0642C00Z.0003
SCP TOKEN NED  = SLKWRM.DCX.BRD.CA.0AFX0642C00Z.0000
SCP DEVICE NED = SLKWRM.DCX.BRD.CA.0AFX0642C00Z.0000
DEVICE D000   STATUS=ONLINE
CHP                   50   54   58   5C   7E    82
ENTRY LINK ADDRESS    0F   0F   1B   1B   6503  651B
DEST LINK ADDRESS     2C   2C   0A   0A   6688  66CF
PATH ONLINE           N    N    Y    Y    Y     Y
CHP PHYSICALLY ONLINE N    N    Y    Y    Y     Y
PATH OPERATIONAL      N    N    Y    Y    Y     Y
MANAGED               N    N    N    N    N     N
CU NUMBER             D000 D000 D000 D000 D000  D000
MAXIMUM MANAGED CHPID(S) ALLOWED: 0
DESTINATION CU LOGICAL ADDRESS = 00
SCP CU ND      = 002107.900.IBM.75.0000000BALB1.0300
SCP TOKEN NED  = 002107.900.IBM.75.0000000BALB1.0000
SCP DEVICE NED = 002107.900.IBM.75.0000000BALB1.0000
HYPERPAV ALIASES CONFIGURED = 143
FUNCTIONS ENABLED = MIDAW, ZHPF
The response to D M=DEV(0065,D000) displays all available paths to the devices and their status. Information is provided about the device (for example, device type), the control unit number, the link address, and the functions supported by the device (for example, MIDAW and zHPF).
146
Essential information for PCHID 5A2 is shown on the PCHID Details panel:
- The PCHID status is Operating.
- The PCHID type is FICON Express8.
- CSS.CHPID 2.7E (and 0.7E) is assigned to PCHID 5A2.
- The owning images of PCHID 5A2 are A23 and A24, among other images.
- The CHPIDs assigned to PCHID 5A2 are shared across images.
Similar information can be found on the CHPID Details panel (see Figure 7-7).
The CHPID Details panel provides similar information to the PCHID details panel. To display the CHPID details, select the channel list for an operating system image from the Support Element workplace. As shown, information for PCHID 5A2 (CHPID 2.7E) provided on the details window matches the designated configuration. This proves that PCHID 5A2 has CHPID 2.7E assigned and images A23 and A24 can access the channel. Repeat this checking on other channels that have been recently configured.
147
Important: If any of the data displayed on the PCHID or CHPID detail panel does not match the desired configuration, you must correct the definitions in the IOCDS. Next, check that the FICON channels are connected to the correct FICON switch and port. On either the PCHID details or CHPID details panel, click the Channel Problem Determination button. This will display the Channel Problem Determination panel, where you can select which information you want to display; see Figure 7-8.
Select Analyze channel information and click OK. The Analyze Channel Information window is displayed as shown in Figure 7-9 on page 149, and it provides information about the node attached to the FICON channel.
148
Information about the nodes is displayed in the bottom part of the window. The lower left side displays information about the node in the System z server. The lower right side displays information about the attached node. Important: Verify that the Node status shown for both nodes is Valid. If any other status is shown, none of the information displayed is valid. Check that the Type/model information, as well as the serial number (Seq. number), is as expected. Next, examine the Tag field for each node. The Tag provides information about the port number of the node. The two right-most digits of the Tag value represent the CHPID number for the channel node (7E), and for the attached node they represent the port number (0003). Be aware, however, that the Tag value is provided by the attached device during link initialization and may have different meanings, depending on the vendor. The World Wide Node Name (WWNN) and the World Wide Port Name (WWPN) are also shown for each port, and they can be used to prove that the channel is connected to the correct FICON Director (if the WWNN or WWPN of the attached device is known). If the node status is not Valid, or the Tag value and WWPN are not correct, you must check the fiber optic cable link between the z10 server and the FICON Director to ensure that it is plugged correctly.
149
Next, check the Channel link address field, which shows the switch address and the port number of the attached Director. The two left-most digits represent the switch address (65). The two digits in the middle represent the port number (03). In our scenario we are now sure that PCHID 5A2 has connectivity to port number 03 on switch address 65, which matches our desired configuration (see Figure 7-1 on page 134). If the displayed values are not as expected, it means that the fiber optic cables may not be plugged correctly and must be checked. After completing the preceding steps and proving that the physical path to the FICON Director and the logical definitions of the link are correct, check that the path to the control unit image is initialized correctly and properly defined. On the Channel Problem Determination panel, shown in Figure 7-8 on page 148, select Analyze Serial Link Status and click OK. The Analyze Serial Link Status window is displayed, as shown in Figure 7-10.
The Analyze Serial Link Status window provides status information about the link to the control unit images defined in the IOCDS. Scroll through the list of CU images and check that the status for all CUs is Initialization Complete. In this case, Figure 7-10 shows a link status of Initialization Complete for all defined CU images on link addresses 65FE and 6688. In a cascaded FICON configuration, you must use a two-byte link address to specify the link address. That means link address 65FE is the CUP port in switch 65, and link address 6688 points to a storage control unit attached to switch 66. Link address 6688 is the destination port where the CU is physically attached. The link to CU addresses 00 and 01 (among other CU images) is initialized. CU addresses 00 and 01 correspond to control unit numbers D000 (CUADD=0) and D100 (CUADD=1), as defined in the IOCDS. Information is also displayed that PCHID 05A2 (CHPID 2.7E) is connected to port 03 on switch address 65 (switch number and channel link address). If the link status Initialization Complete is not shown, you must check that the ports in the FICON Director and the CU are correctly configured, and that the fiber optic cable link between the Director and the CU has the correct cable type and plugging.
150
Chapter 8. Configuring FICON Directors
151
152
Depending on your requirements, some optional steps may not be needed, and the last two steps are only relevant to a cascaded FICON Directors topology. However, if you plan on installing a single FICON Director and want to use the 2-byte link addressing on the System z server for future use in a two-site configuration, you must configure the Switch Connection Control (SCC) as described in 8.4.2, Setting up a high integrity fabric on page 192. In the SCC policy, you would only have one WWNN entry (from that particular FICON Director). Before we started with the configuration of the FICON Directors, we installed the Data Center Fabric Manager (DCFM). DCFM was used to set up our FICON Directors, as well as to configure some optional functions.
153
Three IP addresses are required for each FICON Director: two for the CP cards and one for the Director itself (the virtual address). One IP address is required for each DCFM server. Note: IP addresses in the range of 10.0.0.1 to 10.0.0.255 are reserved for the internal communication of the IBM 2499 FICON Directors and must not be used for the CP cards or Directors. Because the DCFM server and client applications do direct polling of the fabric to gather configuration and status information, they must be able to access every Director in the fabric. Any Director the DCFM server and client cannot reach will go grey within their respective DCFM application. Our FICON Director management environment is shown in Figure 8-2. It also shows the IP addresses used for our FICON Directors, DCFM server, and firewall/router.
Figure 8-2 FICON Director management environment (diagram labels: Data Center Fabric Manager Server/Client, Data Center Fabric Manager Client, SW 65 CP0/CP1, SW 66 CP0/CP1, Firewall, Corporate Network; IP addresses 10.1.1.21, 10.1.1.31, 10.1.1.22, 10.1.1.32, 10.1.1.10, 10.77.77.62, 10.1.1.30, 10.1.1.1)
In this configuration, the DCFM server and all CP cards in the FICON Directors are connected through an Ethernet LAN (known as the service LAN). The subnet mask for the service LAN is set to a value of 255.255.255.x to ensure that it is isolated from the corporate network. The fourth octet (x) of the subnet mask can be adjusted based on the number of FICON Directors and DCFM servers that are connected to the service LAN. In our case, we are using 255.255.255.0, which supports up to 254 IP addresses. A router with firewall capabilities was used to allow remote access to the FICON Directors and the DCFM server only from the DCFM client, which resided in the corporate network.
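As a quick cross-check of the 254-address figure, the following minimal Python sketch (illustrative only; the network value is an assumption based on the addresses shown in Figure 8-2) counts the usable host addresses behind a 255.255.255.0 mask:
import ipaddress

service_lan = ipaddress.ip_network("10.1.1.0/255.255.255.0")   # service LAN with a 255.255.255.0 mask
print(service_lan.prefixlen)                 # 24
print(sum(1 for _ in service_lan.hosts()))   # 254 usable addresses for Directors and DCFM servers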
154
The DCFM is a client-server management software package. After the server software is installed, all other users can download the client part of the package by typing the server's IP address in a Web browser address field. Check the Data Center Fabric Manager (DCFM) data sheet for hardware and software prerequisites at:
ftp://ftp.software.ibm.com/common/ssi/pm/sp/n/tsd03064usen/TSD03064USEN.PDF
Ensure that Launch DCFM Configuration is selected, then click Done.
3. On the Welcome window, click Next.
4. If you do not need to migrate data from a previous installation, then select No and click Next (see Figure 8-4 on page 156). If you are migrating data, select the appropriate field and then type the location of the installation you wish to import. You can import data from an Enterprise Fabric Connectivity Manager (EFCM), a Fabric Manager (FM), or from a previous Data Center Fabric Manager (DCFM) installation.
155
5. Enter the serial number (from the DVD box) and the Server License (from the Key Certificate). If the window does not appear, then your installation does not require a license. The License Key field is not case sensitive. 6. Select Internal FTP Server, as shown in Figure 8-5, and click Next.
156
Note: You can change the FTP Server settings later in the DCFM by selecting SAN → Options. In the Options window, select Software Configuration → FTP/SCP and make the required changes.
7. From the pull-down menu in Figure 8-6, select the IP address that will be used by the clients as Return Address (10.1.1.10) and the address that will be used to connect to the Directors and switches as Preferred Address (10.1.1.10). If only one network adapter is installed in the server, then the address will be the same for both pull-down menus. Important: Do not select 127.0.0.1 as the Return Address; if it is selected, clients will not be able to connect to the server's Web application to download the client part of the software. If you would prefer to select the server name, make sure that it is configured in the DNS server.
8. After making your selection, as shown in Figure 8-6, click Next.
9. Enter the appropriate port numbers (see Figure 8-7 on page 158) if you are not using the default values. The Syslog Port # must remain 514, because this port number cannot be changed in the FICON Director. Note: If a firewall is installed between the DCFM server or client and the FICON Directors, then the configured ports need to be defined in the firewall to allow that traffic flow. Keep in mind that 16 consecutive ports will be needed, counting from the defined Starting Port # (see the sketch after this procedure). Also, do not use port 2638, because it is used internally by the DCFM server.
157
Tip: You can change these port numbers later in the DCFM by selecting SAN → Options. In the Options window, select Software Configuration → Server Port to make the changes. To change the SNMP Port #, use Monitor → SNMP Traps.
10. Click Next and select the size of your FICON/SAN installation that will be managed by the DCFM server, as shown in Figure 8-8 on page 159.
158
11. Click Next after you have made the selection.
12. Wait until the database is initialized, then click Next at the Server License Summary window.
13. At the Start Server window, select Start DCFM Client, then click Finish.
14. Wait until the login window appears (see Figure 8-9). If this is a migration or upgrade from a previous management software installation, then you need to use the user and password from that version. If it is a new install, then the user is administrator and the password is password. Click Login.
159
15. You will see the DCFM Login Banner, which you can change or remove if it is not needed (this can be done after clicking OK). After you click OK, the window shown in Figure 8-10 appears.
16. Select SAN → Options. In the Options window, select Security → Misc. and change the text in the Banner Message field, or simply remove the check mark if the banner is not needed.
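Regarding the port numbers chosen in step 9, the sketch below (a minimal Python illustration; the starting port shown is an assumed example, not a DCFM default) lists the 16 consecutive ports that would be taken and confirms that the internally used port 2638 is not among them:
starting_port = 24600                                    # assumed example value for the Starting Port #
dcfm_ports = range(starting_port, starting_port + 16)    # 16 consecutive ports are needed
assert 2638 not in dcfm_ports, "choose a different Starting Port #: 2638 is used internally by DCFM"
print(list(dcfm_ports))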
160
4. Enter the server IP address from the DCFM installation (in our example, 10.1.1.10). The Port Number is 2638 (this port is automatically assigned during the DCFM installation and cannot be changed). The user is guest and the default password is password.
5. Click Next, then Done. The installation is complete.
To import data from the DCFM database and create a customized report in Microsoft Excel, follow these steps:
1. Open Microsoft Excel and select Data → Import External Data → New Database Query.
2. At the Choose Data Source panel, select DCFM* (this entry was created with the installation of the ODBC Driver) and click OK.
3. Now you can select the required entries and add them to your selection by clicking the arrow pointing to the right. Click Next after your selections are made.
4. In the Filter Data panel you can set filters, if needed. Click Next when done.
5. Select a sort order, if needed. Click Next when done.
6. The last panel gives you the possibility to save your query setup. Click Finish.
7. Select the cell where you want to enter the queried data and click OK.
If you need to refresh the queried data of a saved form, select Data → Refresh Data.
161
162
C:\Program Files\DCFM 10.1.3\bin>dbpassword guest password test1test test1test
DB is updated Successfully
C:\Program Files\DCFM 10.1.3\bin>
Now you need to uninstall the ODBC driver on all workstations that use this function and install it again with the new password, as explained in Installing the Open DataBase Connectivity (ODBC) Driver (optional) on page 160.
163
164
4. Select Internet Protocol (TCP/IP) and click Properties (see Figure 8-14).
5. Select Use the following IP address and enter the address (10.77.77.70, in this case) to be in the same network as the Director; see Figure 8-15 on page 166. 6. Click OK on both Properties windows.
165
7. Connect the cable to the active CP card in the Director and open a Web browser. Enter the default IP address in the Web browser's address field (10.77.77.77, in this case) and press Enter. The default IP address for CP0 in Slot 6 is 10.77.77.75. The default IP address for CP1 in Slot 7 is 10.77.77.74.
8. In the login window, enter the default user (admin) and the default password (password), then click OK. (The Mozilla Firefox Web browser will ask whether to open or save the file. Select open with and click OK.) If you receive a message like the one shown in Figure 8-16, you will need to install a new Java version (Version 1.6 or later). If you do not receive this message, proceed to step 9 on page 167.
If Version 1.6 is installed, go to Start → Control Panel → Folder Options (in classic view). Select the File Types tab, then scroll down to and select JNLP. Click Advanced. In the Edit File Type window, select Launch and click Edit. The window shown in Figure 8-17 on page 167 will appear.
166
In the Application used to perform action field, you must change the location of your Java version; for example, "C:\Program Files\Java\jre1.6.0_05\bin\javaws.exe" "%1".
Click OK after the changes are made. The login window should appear after you enter the IP address in the Web browser's address field. Type in the user (admin) and password (password) and click OK.
9. At the Director's graphical user interface, select Manage → Switch Admin. Select the Network tab in the Switch Administration window to reach Figure 8-18 on page 168.
167
10.Fill in the required fields. Our example is based on Figure 8-2 on page 154. Table 8-1 lists the IP addressing scheme we used.
Table 8-1 IP address example
                 Ethernet IP   Ethernet mask   Gateway IP
Virtual address  10.1.1.20     255.255.255.0   10.1.1.1
CP0 address      10.1.1.21     255.255.255.0   n/a
CP1 address      10.1.1.22     255.255.255.0   n/a
Important: The Virtual address is later used to connect to the Director. This guarantees a connection even if a Control Processor switchover has occurred (for example, during a firmware upgrade). When connecting with the IP address of CP0 or CP1, only the active CP is manageable. IP addresses between 10.0.0.0 and 10.0.0.255 cannot be used because they are already used internally in the Director (a quick address check is sketched after this procedure).
11. To activate the new IP addresses, scroll down to the bottom of the window and click Apply.
12. Connect each Control Processor Management Port to the network (Ethernet switch). The Director needs to be discovered by DCFM, as described in Discovering a FICON Director on page 162.
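The address rules above (same service LAN as the DCFM server, outside the internally reserved 10.0.0.0-10.0.0.255 range) are easy to check mechanically. This is a minimal Python sketch of such a check, using the addresses from Table 8-1 and Figure 8-2; it is our illustration, not part of the Director or DCFM software:
import ipaddress

service_lan = ipaddress.ip_network("10.1.1.0/24")   # service LAN from Figure 8-2
reserved = ipaddress.ip_network("10.0.0.0/24")      # used internally by the Director
addresses = ["10.1.1.20", "10.1.1.21", "10.1.1.22", "10.1.1.10"]   # virtual, CP0, CP1, DCFM server

for a in map(ipaddress.ip_address, addresses):
    assert a in service_lan and a not in reserved, f"{a} is not usable on the service LAN"
print("all addresses are usable on the service LAN")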
168
3. Click Add, then enter the license key for the feature you want to enable, as shown in Figure 8-20 on page 170. Note: The license key is case sensitive and will only work on the Director to which it is assigned. Therefore, you need to compare the WWN on the license with your Director.
169
4. Click Add License to activate the license. If you have more than one license, then repeat step 3 on page 169 for each. The feature is now enabled and can be used or configured.
170
3. Read the warning message and select OK. Important: Clicking OK will immediately reboot the Director and create a logical switch, which is disruptive to the Director.
4. Wait for the Director to complete the reboot. You need to discover the Director again after the reboot, because the Director was changed and appears as a new Director. Refer to Discovering a FICON Director on page 162 for more details about this topic.
5. Select Configure → Logical Switches at the DCFM menu bar.
6. Wait for the Logical Switches configuration window to appear, as shown in Figure 8-22 on page 172.
7. In the Chassis drop-down menu, select the chassis where you want to create a new logical switch.
8. Select Undiscovered Logical Switch, then click the New Switch button. 9. Enter the desired settings for the fabric using the Fabric tab; see Figure 8-23.
The Logical Fabric ID (FID) can be a number from 1 to 128 (decimal). The FID must be unique in the selected chassis.
Note: Keep in mind that only logical switches with the same FID can be cascaded.
Do not select Base Switch or Base Fabric for Transport, and do not set the 256 Area Limit to Disable, because this is not supported by FICON. The 256 Area Limit can be set to Zero-Based Area Assignment: ports will then be numbered in ascending order, starting from 00 to FF, for each logical switch, and ports with a Port Index above 255 can be added, but only in Brocade Native Interoperability Mode. With Port Based Area Assignment, ports with a Port Index above 255 cannot be added; instead, the default port numbering will be used. You can change the port numbering for logical switches (except for the default logical switch) later in the Port Admin panel in the Element Manager by using the Bind PID and Un-Bind PID buttons.
10. Select the Switch tab and assign the Domain ID and the logical switch name. Also, check-mark the Insistent Domain ID field.
11. Click OK after all settings are complete.
12. Now you need to assign the desired ports to the new logical switch. Select the ports on the left side, and click the arrow pointing to the right; see Figure 8-24.
13.After the ports are assigned, click OK. 14.Verify the changes in the confirmation window, and select Re-Enable Ports after moving them (if not selected, all ports must be enabled manually), then click Start. 15.Wait for the changes to complete. After the status Success displays, click Close. 16.At this point you need to discover the new logical switch. Instructions that explain this process are provided in Discovering a FICON Director on page 162. The logical switch is now created and can be used like any other physical switch or Director.
Follow these steps to change the Domain ID to the desired value:
1. Go to the Switch Admin window in the Element Manager of the Director. This can be done from the DCFM by right-clicking the Director and selecting Configure → Element Manager → Admin.
2. Select the Switch tab, as shown in Figure 8-25.
3. Select Disable in the Switch Status field and click Apply. Important: Setting the Director to disable is disruptive. 4. Read the warning message, and confirm it by clicking Yes. 5. Now you can change the Domain ID field to the desired value. In our example it will be 65 in hex, which is 101 in decimal. The value has to be entered in decimal. You can also change the Name of the Director so it can be identified easily. Note: The Domain ID must match the switch address configured in HCD/IOCP on the System z side. Be aware that the switch address is configured in hex, but the Domain ID in the Director is defined with a decimal value. 6. Click Apply. After reading the warning message, click Yes to confirm the action. 7. If not already done, click Show Advanced Mode in the upper right corner of the Switch Administration window. 8. Click the Configure tab. 9. Click Insistent Domain ID Mode to check-mark it, as shown in Figure 8-26 on page 175.
10.Click Apply to save the changes. 11.To enable the Director at the Switch Status field, return to the Switch tab. (You may also skip this step and configure Port-Based-Routing (PBR), Dynamic Load Sharing (DLS) and In-Order Delivery (IOD), before you enable the Director.) 12.Select Enable in the Switch Status field and click Apply. 13.Read the warning message and click Yes to confirm the action. The Domain ID is now set. Note that it cannot be automatically changed if another Director joins the Fabric with the same Domain ID. Instead, the new Director will be segmented until another Domain ID is set at the joining Director.
Important: Setting the Director to disable is disruptive. 4. Read the warning message, and confirm it by clicking Yes. 5. Click Show Advanced Mode in the upper right corner. 6. Select the Routing tab, as shown in Figure 8-27.
7. Select Port-Based-Routing in the Advanced Performance Tuning (APT) Policy field. Set Dynamic Load Sharing (DLS) to Off if lossless DLS should not be used. Set In-Order Delivery (IOD) to On. Click Apply to save the changes.
8. If you have set DLS to On, then you also need to enable lossless DLS via the command-line interface (CLI). Otherwise, proceed to step 10. The following steps explain one method of using the CLI (a sample session is sketched after this procedure):
a. In the DCFM server or client, right-click the Director and select Telnet. A new window opens. (You can also use an ssh connection to the Director, if needed.)
b. Enter the user name (in our case, admin) and press Enter.
c. Enter the password and press Enter (the default is password).
d. Enter, for example, setcontext 1 to connect to the logical switch with FID 1.
e. Type the command iodset --enable -losslessdls and press Enter. To check the settings, enter iodset --show and dlsshow.
f. For long distance environments, execute portcfgcreditrecovery --disable slot/port for each ISL.
g. Type logout and close the window.
9. To enable the Director at the Switch Status field, return to the Switch tab. (You may also skip this step and configure the Control Unit Port (CUP) before enabling the Director.)
10. Select Enable in the Switch Status field and click Apply.
11.Read the warning message and click Yes to confirm the action. The Director is now ready to be used in a FICON environment.
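For reference, a CLI session for step 8 might look like the following sketch. The Director name, the FID shown in the prompt, and the slot/port value used with portcfgcreditrecovery are illustrative only and will differ in your installation:

login: admin
Password:
IBM_SAN768B:FID128:admin> setcontext 1
IBM_SAN768B:FID1:admin> iodset --enable -losslessdls
IBM_SAN768B:FID1:admin> iodset --show
IBM_SAN768B:FID1:admin> dlsshow
IBM_SAN768B:FID1:admin> portcfgcreditrecovery --disable 1/15
IBM_SAN768B:FID1:admin> logout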
7. Select Enable in the FICON Management Server Mode field. 8. Click Apply. 9. Read the message and click Yes to confirm the action. 10.Now you can alter the FICON Management Server Behavior Control (Mode Register) field. Select the desired functions you want to use with the CUP. 11.Click Apply after the changes are made. 12.Read the message and click Yes to confirm the action. 13.Return to the Switch tab, to enable the Director at the Switch Status field. 14.Select Enable in the Switch Status field and click Apply. 15.Read the warning message and click Yes to confirm the action. The CUP Port is now usable from the System z server.
IBM_SAN768B login: admin
Password:
----------------------------------------------------------------
IBM_SAN768B:FID128:admin> ficoncupshow mihpto
MIHPTO for the CUP: 180 seconds
IBM_SAN768B:FID128:admin> ficoncupset mihpto 181
The input value of the MIHPTO will be rounded down to the nearest value divisible by 10. This is due to IBM's MIHPTO specification.
MIHPTO has been changed to 180 seconds
IBM_SAN768B:FID128:admin> logout
5. To see the timeout that is currently set, enter ficoncupshow mihpto.
6. To change the value, enter ficoncupset mihpto xxx, where xxx is the new value. This value must be divisible by 10 if over 63, or it will be rounded down. A value between 15 and 600 seconds can be used.
7. Enter logout and close the window.
The MIHPTO for the CUP is now set and active.
6. Select the Port Type you want to configure, or accept the defaults if you only want to change the speed. Note the following points:
- E_Port is for ISLs.
- L_Port is for loop devices.
- F_Port is for all other devices.
Be aware that EX_Port is not selectable, because it is only available at the Base Switch in a logical switch configuration, which is not supported for a FICON environment.
7. Click Next and Figure 8-30 on page 181 will appear.
Note the following points:
- At the Speed drop-down menu, you can select Auto (for auto-negotiate) or 1, 2, 4, or 8 Gbps.
- The Ingress Speed Limit (MBps) drop-down menu is a Quality of Service function to limit the throughput of the port (200, 400, 600, 800, 1000, 1500, 2000, 3500, 4000, 5000, 7000, and 8000 are possible).
- With Long Distance Mode, you can configure the distance of the link (L0:normal is the default value). For more detailed information about this topic, refer to 8.3.8, Changing buffer credits on page 183.
8. Click Next after all selections are done.
9. At the Confirmation window, click Save after verifying the settings.
10. Click Close after the configuration complete message.
11. Click Enable to enable the port after the configuration is complete.
After the port type and speed are configured, it is useful to give the port a name by clicking Rename.
Important: Changing the mode is disruptive for the port. The following method illustrates one way to change the fill word for an 8 Gbps interface:
1. In the DCFM server or client, right-click the Director and select Telnet. A new window opens. (You can also use an ssh connection to the Director, if needed.)
2. Enter the user name (in this case, admin) and press Enter.
3. Enter the password and press Enter (the default is password).
4. Enter, for example, setcontext 1 if you need to connect to a logical switch with FID 1.
5. The portcfgfillword 1/8 0 command changes the fill word for Slot 1, Port 8 to Idles. The portcfgfillword 1/8 1 command changes the fill word for Slot 1, Port 8 to ARB(FF).
6. The portcfgshow command will list all ports of the Director. The portcfgshow 1/8 command will only list the settings for Slot 1, Port 8 (see Example 8-3).
Example 8-3 Changing fill words
DCXBOT login: admin
Password:
----------------------------------------------------------------
IBM_SAN768B:FID128:admin> portcfgfillword 1/8 0
IBM_SAN768B:FID128:admin> portcfgshow
Ports of Slot 1     0  1  2  3    4  5  6  7    8  9 10 11   12 13 14 15
-----------------+--+--+--+--+----+--+--+--+----+--+--+--+----+--+--+--
Speed              AN AN AN AN   AN AN AN AN   AN AN AN AN   AN AN AN AN
Fill Word           1  1  1  1    1  1  1  1    0  1  1  1    1  1  1  1
~~~
IBM_SAN768B:FID128:admin> logout
7. You can also use a script that changes all ports on a given port card, as shown in Example 8-4.
Example 8-4 Script to change the fill word for a 16-port card in slot 2
for ((i=0;i<16;i++)); do (echo 2/$i;portcfgfillword 2/$i 1); done
This script will change the settings for the first 16 ports of the card in Slot 2 to ARB(FF), which is specified by the 1 in the script. If a 32-port card is used, you must replace 16 with 32. The slot number also needs to be changed in both places in the script. For a 32-port card in slot 3, the script looks as shown in Example 8-5.
Example 8-5 Script to change the fill word for a 32-port card in slot 3
for ((i=0;i<32;i++)); do (echo 3/$i;portcfgfillword 3/$i 1); done
8. Enter logout and close the window.
The fill word is now changed, and the Director will use the configured mode.
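If you prefer not to edit the loop bound and the slot number in two places each time, a parameterized variant of the same script could be used. This is a sketch only; it assumes that the FOS shell accepts shell variable assignments (check your firmware level), and slot, ports, and fill are values you choose yourself:

slot=3     # card slot to change
ports=32   # number of ports on the card (16 or 32)
fill=1     # 0 = Idles, 1 = ARB(FF)
for ((i=0; i<ports; i++)); do (echo $slot/$i; portcfgfillword $slot/$i $fill); done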
6. After you make your selection and click Edit or New, the window shown in Figure 8-32 on page 185 appears.
At this point, you can modify the matrix by clicking in the fields to allow or prohibit a connection. Figure 8-32 shows that port 03 (hex) is prohibited from connecting to ports 00, 01, and 02. All other ports are allowed to connect to port 03. Note the following points:
- The Block button will block the selected port. It is the same function as a port disable.
- The Prohibit button will prohibit all connections to and from the selected port.
If you do not want to activate the changes right away, then click Save (not available for the active matrix), or Save As, to create a new configuration.
7. After making the desired changes, you can activate the changes by clicking Activate.
8. Click Yes to confirm the action.
2. In the Zoning Scope drop-down menu in the upper left corner of the window, select the Director for which you want to create Zoning.
3. To create a new Zone, click the New Zone drop-down menu and select New Zone.
4. Enter a name for the new Zone (in this case, Test_LPAR).
5. Ensure that Domain, Port Index is selected in the Type drop-down menu, because port zoning is used in FICON environments.
6. On the left side of the window, select the ports you want to include in the zone you just created. (You can select more than one port at a time by pressing and holding the Ctrl key while selecting the ports with the mouse. To select a range of ports, use the Shift key.)
7. To place the selected ports in the new zone in the middle of the window, click the arrow pointing to the right.
Repeat steps 3 to 7 for all zones you want to create. In our example, we created a Rest_of_the_world zone, which includes all other ports of the Director (except ISLs). The result is shown in Figure 8-34 on page 187.
8. After all zones are created, you must put them in a Config. Click the New Config button. Enter a name for the new Zoneconfig (in this case, FICON_Zoneset). 9. Select the zones in the middle of the window that you want to place in the newly created Zone Config, then click the arrow pointing to the right. The result is shown in Figure 8-35.
10. Now you must activate the new Zone Config. Select the Zone Config (in this case, FICON_Zoneset) and click Activate.
11. You will reach the Activate Zone Config window. Verify that all ports are in the correct Zones. Click OK after you verify all zones; this will activate the newly created Zone Config.
12. Click Yes when you reach the confirmation window.
13. After activation you will see that all active zones, as well as the active Zone Config, have green markers attached (see Figure 8-36 on page 188).
To verify the active zone, click the Active Zone Config tab. You will see the active Zone Config, as well as the active zones with all zone members. 14.To leave the Zoning dialog, click OK. 15.If you get a confirmation window, read the warning and click OK. Zoning is now completed and active in the Director.
2. Select the Fencing Policy from the Violation Type drop-down menu (in this case, the Invalid CRCs policy).
3. Select the policy on the left part of the window, or click Add to create a new one. If you want to change or verify the policy, click Edit.
4. Two selections are presented: m-EOS (which is for McDATA Directors) and FOS. In this case, we selected FOS for our SAN768B Director. There are two possible policy types in FOS: the Default and the Custom Policy. We selected the default policy. You can also change the policy name in this panel. (Some policies are only for FOS or m-EOS, as indicated in the drop-down menu.) Click OK after the settings are done.
5. Assign the policy by highlighting it on the left side, and highlighting the Director (or the fabric) on the right side, then clicking the arrow pointing to the right.
6. The policy is added to the Director, as indicated by the green plus (+) sign.
7. Click OK to activate the policy on the Director (or fabric).
You have now assigned the policy to the Director. When the threshold is reached for the assigned error type, the port will go offline. Refer to the Fabric OS (FOS) Administration Guide, the Data Center Fabric Manager (DCFM) User Manual, and the Fabric OS Command Reference Manual for other functions required for your installation but not covered in this section.
Trunking (optional)
If trunking is enabled for an ISL port, it will automatically form a trunk. For trunking, all links in the trunk (maximum 8) must be in the same port group and must be set up with the same speed (for example, 8 Gbps). All fiber optic links in the trunk must be about the same length (a maximum difference of 30 m). If two of four ports have Quality of Service (QoS) enabled, two trunks will be formed with two ports each (even if they are in the same port group): one trunk with QoS, and one trunk without QoS. Trunking will only work in Brocade native Interoperability Mode. To enable trunking, follow these steps:
1. Right-click the Director in the DCFM and select Element Manager → Ports.
2. In the Port Administration window, click Show Advanced Mode in the upper right corner.
3. From the list, select the port that you want to enable for trunking.
4. Click the Enable Trunking button.
Important: This will directly enable trunking for the selected port. If this port is online, the link will go offline for a short moment, which will interrupt the traffic on the link.
2. To activate the TI zone, right-click the TI zone and select Configured Enabled. You can also select Configured Failover. If Failover is enabled, the devices in the TI zone will use another available ISL (which is not in the TI Zone). When the configured ISL is available again, it will automatically fall back. Important: When Failover is disabled, then the devices in the TI zone will not use another available ISL outside its TI zone. 3. To activate one or more created TI zones in the Directors, click Activate in the Zoning window. 4. Compare the settings at the confirmation window and select OK to activate the zoning changes. 5. If a second confirmation window appears, read the message and select Yes. 6. Click OK after the changes are activated. 7. To verify that the TI zone is active, click the Active Zone Config tab.
To use Quality of Service zones, follow these steps:
1. Complete the steps in Setting up zoning (optional) on page 185, but select WWN instead of Domain, Port Index in the Type drop-down menu.
2. Right-click the created zone and select QoS Priority → QoS_High or QoS_Low, as shown in Figure 8-39. The default for all zones is QoS_Medium.
3. The prefix QoSH (for high priority) or QoSL (for low priority) will be added in front of the given zone name.
4. To activate the QoS zone, click Activate in the Zoning window.
5. Compare the settings at the confirmation window and select OK to activate the zoning changes.
6. If a second confirmation window appears, read the message and select Yes.
7. Click OK after the changes are activated.
8. To verify that the zone is active, click the Active Zone Config tab.
SCC policy, we show how to include the World Wide Node Name (WWNN) of the two Directors in the fabric. To do this, follow these steps:
1. Go to the Switch Admin window in the Element Manager of the Director. This can be done from the DCFM by right-clicking the Director and selecting Configure → Element Manager → Admin.
2. Click Show Advanced Mode in the upper right corner.
3. Select the Security Policies tab.
4. On the left side of the window, click FWCP, which is the Fabric Wide Consistency Policy.
5. Select Strict at the SCC Consistency Behavior drop-down menu.
6. Click Apply.
Note: This will change the security settings of the Directors. From now on, each System z server needs to use the 2-byte addressing scheme to initialize the link.
7. Select ACL on the left side of the window.
8. Click Edit to set up the SCC Policy.
9. Select only SCC and click Next.
10. Click Modify to add the WWNNs of the two Directors.
11. Select the Director on the left side, and click Add Switch (see Figure 8-40). This will add the WWNN of the Director you are connected to.
12. In the Add Other Switch field, enter the WWNN of the second Director and click Add. (The WWNN of the Director is shown at the top of the Switch Admin window.)
13. Click OK, then click Next.
14. Verify the settings you just made, and click Finish.
15. Back in the Switch Administration window, you now see the Defined Policy Set. Click Activate to activate the policy (see Figure 8-41).
16.Perform steps 1 on page 193 through 6 on page 193 on the second FICON Director. (Set the Fabric Wide Consistency Policy for SCC to Strict.) 17.Connect the ISLs. The ISLs will be initialized, and a cascaded fabric will be created. Remember to set up the ISLs on both sides to the same settings as described in Setting up Inter-Switch Links on page 190. The SCC Policy will be distributed to the second Director automatically. In DCFM, you will now see that the two Directors are merged and connected via ISLs, as shown in Figure 8-42 on page 195.
To verify the ISLs, right-click them in the DCFM Directors View and select Properties. You will see all active ISLs and their settings on both Directors, as shown in Figure 8-43.
If ISLs are missing, go to the Port Admin window of the two Directors and check the settings on both sides. Use the Advanced View to see all settings. Scroll to the right side and check the Additional Port Info field (this field also displays some error reasons for the ISL).
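As a reference, the following CLI commands illustrate the settings for one ISL port (slot 1, port 15 in this example); the long distance and fill word values shown are taken from this example and are not general requirements: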
portcfgqos --disable 1/15
portcfgcreditrecovery --disable 1/15
portcfglongdistance 1/15 LS 0 100
portcfgislmode 1/15, 1
portcfgfillword 1/15 0
portcfgshow 1/15
To restore, right-click the Director you want to restore and select Configuration → Restore. You will obtain a list of available backup files. Select the backup you want to restore and click OK.
Important: The restore of the Director's configuration data is disruptive, because the Director will perform a reboot to activate the new configuration.
Part 4
Chapter 9.
Note: The utilization reported by the Activity task for most channel types will correspond with the utilization reported by the Resource Measurement Facility (RMF). For Fibre Channels, however, this task considers the channel to be busy any time an operation is pending, even if the channel is waiting for a device to respond. RMF, in contrast, looks at the amount of work performed versus the amount of work that could be performed by the channel. This means that if you have devices that are relatively slow to respond, leaving the channel waiting for a response but otherwise idle, the Activity task will show a utilization that is significantly higher than that reported by RMF.
(The figure shows three z/OS systems - z/OS 4, z/OS 5, and z/OS 6 - on z10 EC, z9 EC, and z10 BC servers, connected through Switch 1 and Switch 2 to a FICON DASD CU.)
FICON exploits CCW chaining, so there will be fewer channel ends and device ends. Also, PEND time ends at the FICON channel when the first command response (CMR) is received from the CU.
Accessing data across the sysplex
We introduce data gathering and RMF reporting in the next sections.
(Excerpt from an RMF Direct Access Device Activity report for device 10017 (TBIG26) and LCU 0028.)
The fields of most interest are:
- AVG RESP TIME - This is the average response time, in milliseconds, for an I/O to this device or LCU. This can be considered as the overall measurement that reflects the health of the device. The measurement is composed of the sum of the following metrics:
- AVG CMR (command response) Delay - This delay indicates the time between a successful I/O operation being initiated on the channel and the device acknowledging the connection. These delays could be caused by contention in the fabric and at the destination port.
- AVG DB (device busy) Delay - This delay is due to the device being busy because of I/O from another sharing z/OS system.
- Channel path delay - Any time that is not accounted for in the preceding two measurements is due to a delay at the SAP or channel, or a CU busy.
- AVG DISC time - This reflects the time when the device was in use but not transferring data.
- AVG CONN time - This is the time measured by the channel subsystem during which the device is actually connected to the CPU through the path (channel, control unit, DASD) and transferring data.
Figure 9-3 on page 207 illustrates when and how IOSQ, Pend, and Connect times appear in a DASD I/O operation when a CACHE hit occurs.
In the figure, IOSQ time reflects UCB busy (PAV/HiperPAV reduces it with extra UCBs); Pend time reflects channel busy, FICON Director port busy, the number of buffer credits, CU/device busy, and the open exchange limit; Connect time is the time spent transferring data. FC frame multiplexing allows for better link utilization, but may extend some connect times; FICON connect time is not as predictable as ESCON connect time (more an awareness item than a problem).
Figure 9-3 FICON DASD I/O operation times with a CACHE hit
Figure 9-4 illustrates when and how IOSQ, Pend, ESS Logical time, and Connect times appear in a DASD I/O operation when a CACHE miss or an Extent conflict has to be resolved by the control unit.
(Excerpt from an RMF I/O Queuing Activity report showing IOPs 00-05 and LCUs 0001 through 000C.)
In a PR/SM environment (such as our configuration), the report is split into two sections. The top section reports PR/SM system activity. The rest of the report applies to I/O activity for the z/OS system being measured by RMF. The fields we focus on are: PR/SM fields from the I/O Queuing Report INITIATIVE QUEUE: This queue reflects the I/O initiation activity for each I/O processor (IOP). Average queue length reflects the average number of entries on the initiative queue for each IOP. The activity rate reflects the assignment of I/O requests. IOP UTILIZATION shows fields to measure the I/Os started and interrupted on each IOP and an overall percentage busy indicator. Note: IOP or I/O Processors refer to the System Assist Processors (SAPs) on your server.
I/O REQUESTS RETRIED reflects the ratio of retried I/O requests. Retries/SSCH (start subchannel) reflects the number of retried I/O requests per initially started request. The reasons for each are:
- CP - Channel path busy
- DP - Director port busy
- CU - Control unit busy
- DV - Device busy
z/OS System activity fields from the I/O Queuing Report LCU and CONTROL UNITS show the logical assignment to the physical resources. The Dynamic Channel Path Management (DCM) GROUP reports the minimum, maximum, and initially defined number of DCM channels for each LCU in this interval. CHPID taken shows how evenly I/O requests are spread across the available paths to the LCU.
The percentage busy fields refer to Director Port (DP) and control unit (CU). The value reflects the ratio of the number of times an I/O request was deferred due to either resource being busy. AVG CUB DLY indicates the time (in milliseconds) that an I/O operation was delayed due to the control unit being busy. AVG CMR DLY indicates the time (in milliseconds) between a successful I/O operation being initiated and the device acknowledging the connection. CONTENTION RATE indicates the rate at which the I/O processor places requests on the control unit header for this interval. This occurs when all paths to the subchannel are busy and at least one path to the control unit is busy. DELAY Q LENGTH reflects the average number of delayed requests on the control unit header for this interval. AVG CSS DLY reflects the time (in milliseconds) that an I/O is delayed from the acceptance of a start on the subchannel to the point that the channel subsystem actually initiates the operation on this LCU.
The fields of interest are: CHANNEL PATH ID reflects the CHPID number. TYPE reflects the channel type (FC_S is FICON switched). G reflects the generation type (2 denotes a 2 Gbps channel - FICON). SHR indicates whether the channel is shared between one or more partitions.
UTILIZATION PART reflects the utilization for this partition for this interval. TOTAL reflects the utilization from the entire processor for this interval. BUS reflects the percentage of cycles the bus was found to be busy for this channel in relation to the potential limit.
READ(MB/SEC) PART shows the data transfer rate in MBps for this channel from this partition to the control unit in this interval. TOTAL shows the data transfer rate in MBps for this channel from the entire processor to the control unit for this interval. WRITE(MB/SEC) PART shows the data transfer rate in MBps for this channel from this partition to the control unit in this interval. TOTAL shows the data transfer rate in MBps for this channel from the entire processor to the control unit for this interval. Note: On a machine running in LPAR mode, but with only one LPAR defined, the PART columns of this report will reflect a zero (0) value for the READ, WRITE, and UTILIZATION displays for FICON channels. FICON and zHPF Operations (Physical Channel) RATE is the number of native FICON or zHPF operations per second at the physical channel level. ACTIVE is the average number of native FICON or zHPF operations that are concurrently active. Often referred to as the number of open exchanges. DEFER is the number of deferred native FICON or zHPF operations per second. This is the number of operations that could not be initiated by the channel due to lack of available resources.
Fields of significance are: AVG FRAME PACING - This is the average time (in microseconds) that a frame had to wait before it could be transmitted due to no buffer credits being available. AVG FRAME SIZE READ/WRITE - The average frame size (in bytes) used to transmit and receive data during this interval. PORT BANDWIDTH READ/WRITE - The rate (in MBps) of data transmitted and received during the interval. ERROR COUNT - The number of errors that were encountered during the interval.
The fields are:
ADAPTER SAID reflects the System Adapter Identifier address.
TYPE reflects the characteristics of the connection (2/4 Gb fiber, in our example).
LINK TYPE will be one of the following:
- PPRC send and receive
- SCSI read and write
BYTES/SEC reflects the average number of bytes transferred for all operations on this link.
BYTES/OPERATION reflects the average number of bytes transferred for each individual operation.
OPERATIONS/SEC reflects the average number of operations per second.
RESP TIME/OPERATION reflects the average response time (in milliseconds).
I/O INTENSITY is measured in milliseconds/second. It reflects the duration of the interval for which the adapter was active. A value of 1000 indicates that the link was busy for the entire time period. A value greater than 1000 would indicate that concurrent operations had been active.
DASD
Here we discuss basic practices for DASD.
- Use I/O priority queuing (IOQ=PRTY in IEAIPSxx).
- Move or copy data sets to other volumes to reduce contention.
- Run requests sequentially to reduce contention.
9.5.7 Tape
The following list items reflect typical approaches to improving tape performance. These are basic recommendations. The significant developments in tape technology (such as the IBM Virtual Tape Server) will mitigate many of these points, and you should refer to the specific performance recommendations of your chosen solution for more guidance.
- Review blocksize; increase where applicable.
- Allocate data sets to DASD. Tape mount management (TMM) is a method that you can use to accomplish this, and thereby significantly reduce tape mount requirements.
- Increase host buffers for priority jobs. Increase buffers from the default of 5, to as high as 20 (but no more than 20).
- Use VIO for temporary data sets.
- Add channels (if not already one channel per control unit function).
- Reduce mount delay (for example, with an Automated Cartridge Loader).
- Use enhanced-capacity cartridges and tape drives.
- Restructure for more parallelism (for example, to allow an application to use multiple drives).
3. To create the monitor pair, click the arrow pointing to the right.
4. Click Apply to save the changes.
5. Now you can select the End-to-End monitor from the Monitored Pairs part of the window and select the Real-Time Graph or Historical Graph button.
6. You will get the real-time or historical data of the monitored device pair. You can also select more measurements from the Additional Measures drop-down menu.
The End-to-End Monitor is now set up and can be used.
4. Click OK to save the changes. 5. You will get the real-time data of the monitored devices. You can also select more measurements from the Additional Measures drop-down menu. The Real-Time Monitor is now set up and can be used.
Chapter 10.
CPU(0),ONLINE CPU(1),ONLINE CPU(2),ONLINE
CHP(00-3F),ONLINE
STOR(E=0),ONLINE STOR(E=1),ONLINE STOR(E=2),ONLINE
DEV(0200-02FF),ONLINE DEV(1000-2FFF),ONLINE
DEV(5100-6FFF),ONLINE DEV(9A00-BF00),ONLINE
...
Example 10-2 shows the output of a D M=CONFIG(xx) command where deviations from a desired configuration (S1) exist. If the current running configuration matches the definitions in the CONFIGxx SYS1.PARMLIB member, the output of the z/OS command would contain the message NO DEVIATION FROM REQUESTED CONFIGURATION.
Example 10-2 D M=CONFIG(xx) output
D M=CONFIG(S1)
IEE097I 11.38.13 DEVIATION STATUS 568
FROM CONFIGS1
DEVICE   DESIRED   ACTUAL
D2A6     ONLINE    OFFLINE
D080     BASE      HIPERPAV
E200     OFFLINE   ONLINE
F890     PAV       UNBOUND
levels of tools. Figure 10-1 illustrates the different levels of software and hardware, as well as the tools provided from a verification perspective.
(Figure 10-1 shows the problem determination approach by level: at the z/OS level, z/OS commands for status, the z/OS system log and LOGREC, and EREP with Purge Path Extended; the HMC and SE provide CHPID status, CU and device status, node descriptor information, and the SAD display; additional tools are available at the FICON Director level.)
passes this information to z/OS as Extended Subchannel Logout Data (ESLD), and an SLH record is cut in the LOGREC medium. The overall process is shown in Figure 10-2.
(Figure 10-2 shows the overall process: the FICON channel (FC) side supplies ESLD model-dependent channel data, the channel abort reason code, and the LESBs of the channel port and the channel-attached Director port; the FICON control unit (CU) side supplies the CU-attached port LESB data, the CU port LESB, the CU abort error code, and CU model-dependent data; the resulting SLH record is written to SYS1.LOGREC.)
This function provides advanced problem determination of Fibre Channel bit errors. A subset of a sample Subchannel Logout Handler (SLH) record, formatted by EREP, is included in Figure 10-3 on page 224.
(Figure 10-3 shows a sample EREP-formatted SLH record for a 3390 device (C07A) on CHANNEL PATH ID 57 of a 2084 server (physical channel ID 0131), including the SCSW, the unit status (interface control check), the software recovery status (soft fail), the channel error analysis with validity flags, and the CHANNEL LOGOUT DATA and CONTROL UNIT LOGOUT DATA areas.)
Figure 10-4 on page 225 shows the CHANNEL and CONTROL UNIT LOGOUT DATA fields extracted from the EREP SLH record shown in Figure 10-3. Some fields are interpreted to give you an idea of how this information can be broken down.
(Figure 10-4 shows the CHANNEL LOGOUT DATA and CONTROL UNIT LOGOUT DATA fields extracted from the SLH record, annotated to identify the LESB entry for the FICON Director entry port and the hex count of a specific error type, along the path from the FICON CHPID N_Port through the Director F_Ports to the FICON control unit N_Port and device C07A.)
For a detailed description regarding the debugging of these records, refer to the ANSI standard architecture, which can be found at:
http://www.t11.org
Examples of the z/OS commands can be found in Appendix F, Useful z/OS commands on page 287. Example 10-3 shows the output of a z/OS D M=SWITCH(xx) command.
Example 10-3 Output of the D M=SWITCH(65) z/OS command
D M=SWITCH(65)
IEE174I 12.45.15 DISPLAY M 215
SWITCH 0065, PORT STATUS
  0 1 2 3 4 5 6 7 8 9 A B C D E F
0 u u u c u u u u u u u u u u u u
1 u u u u u u u u u u u c u u u u
2 u u u u u u u u u u u u u x u u
3 u u u u x u u u u u u u u u u u
4 . . . . . . . . . . . . . . . .
5 . . . . . . . . . . . . . . . .
6 . . . . . . . . . . . . . . . .
7 . . . . . . . . . . . . . . . .
8 . . . . . . . . . . . . . . . .
9 . . . . . . . . . . . . . . . .
A . . . . . . . . . . . . . . . .
B . . . . . . . . . . . . . . . .
C . . . . . . . . . . . . . . . .
D . . . . . . . . . . . . . . . .
E . . . . . . . . . . . . . . . .
F . . . . . . . . . . . . . . . .
***************** SYMBOL EXPLANATION *****************
+ DCM ALLOWED          - DCM NOT ALLOWED BY OPERATOR
x NOT DCM ELIGIBLE     p DCM NOT ALLOWED DUE TO PORT STATE
c CHANNEL ATTACHED     $ UNABLE TO DETERMINE CURRENT ATTACHMENT
u NOT ATTACHED         . DOES NOT EXIST
The D M=SWITCH(xx) command provides important information about the configured and in-use ports at the specified switch, along with their current state. Notice in the example that Switch 65, ports 03 and 1B are in use and attached to a FICON Channel. Ports 2D and 34 display an x state; these ports are used as ISL ports.
You will see the Node Descriptor details in the DCFM for each port at the left-hand side of the window. To change the port view, right-click a Director (on the left of the window) and select Port Display. Check-mark the details you want to see. You can also see the Node Descriptor for a Director by right-clicking it and selecting Element Manager → Hardware. In the Director's Element Manager, click Name Server at the left side of the window. This opens the Name Server window displaying all connected devices, as shown in Figure 10-5.
Clicking the Accessible Devices button displays all devices that have access to the selected list entry (that is, each device that is in the same Zone as the selected list entry). Selecting the list entry and clicking Detail View produces a detailed view for the selected device, as shown in Figure 10-6 on page 228.
The Tag field for the System z server is a four-digit number, and it consists of two parts. The last two digits are the CHPID and the first digit gives you information about the Channel Subsystem (CSS) in which the CHPID is defined. In our example, the Tag is a07e, which means that the CHPID is 7E and is defined in CSS 0 and 2 (see Table 10-1).
Table 10-1   CSS mapping
First digit   CSS            First digit   CSS            First digit   CSS
1             3              6             1 and 2        B             0, 2 and 3
2             2              7             1, 2 and 3     C             0 and 1
3             2 and 3        8             0              D             0, 1 and 3
4             1              9             0 and 3        E             0, 1 and 2
5             1 and 3        A             0 and 2        F             0, 1, 2 and 3
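Because the first digit is effectively a bit mask (8 = CSS 0, 4 = CSS 1, 2 = CSS 2, 1 = CSS 3, which is consistent with Table 10-1), the Tag can also be decoded with a small script. The following sketch is illustrative only; it assumes a bash shell on a workstation and is not part of the Director or z/OS tooling:

tag=a07e                      # Tag value as shown in the Name Server detail view
digit=$(( 16#${tag:0:1} ))    # first digit, interpreted as hex
chpid=${tag: -2}              # last two digits are the CHPID
css=""
(( digit & 8 )) && css="$css 0"
(( digit & 4 )) && css="$css 1"
(( digit & 2 )) && css="$css 2"
(( digit & 1 )) && css="$css 3"
echo "CHPID $chpid is defined in CSS:$css"   # prints: CHPID 7e is defined in CSS: 0 2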
The Tag field details for a DS8000 are shown in DS8000 port layout on page 262.
You can display the log from a selected Director or logical Switch by right-clicking the Director and selecting Events.
Select the Director or logical switch at the left side of the window and click the arrow pointing to the right to put it in the list of selected switches for data collection. Click OK to start the data collection. Read the message at the confirmation window and then click OK. After data collection is complete (which may take up to 30 minutes, depending on the size of your configuration), you can select the zip file by going to Monitor → Technical Support → View Repository. Select the correct zip file by looking at the date and time stamp included in the file name (if the size is 0 KB, data collection is not finished yet and you need to wait a few minutes longer). Click the zip file and save it to your hard disk. If you are at the DCFM server, you can also go into the following directory to get the data: C:\Program Files\DCFM 10.1.3\data\ftproot\technicalsupport\. Data collection is now finished and the data can be sent to the Support Center.
Note: The messages covered here may change, depending on the z/OS release and version. APARs may also implement architectural changes regarding how messages are generated and displayed. We recommend referring to the current z/OS Messages library whenever a discrepancy is noted. This section provides insight into the most common FICON problem reporting messages.
Tip: Check for a disabled Control Unit interface, an altered FICON director configuration, or a disabled channel path. Refer to the explanations of messages IOS2001I or IOS2002I in this section for possible reasons why you might receive message IOS001E.
- This is the channel path identifier (CHPID), if known. Otherwise, this field will be set to asterisks (*).
- This is the failing command being executed by the device, if known. Otherwise, this field will be set to asterisks (*).
- This is the device and subchannel status, if known. Otherwise, this field will be set to asterisks (*).
- This is the System z server Physical Channel IDentifier.
Tip: Run EREP, or examine the SE CHPID work area for the IFCC Threshold Exceeded CHPID status. Alternatively, use the IFCC and Other Errors service task on the SE to further investigate possible reasons why this message was issued.
Tip: The IOS will check the state of the timeout bit (the timeout indication is reported by the channel in the Extended Status Word Timeout (ESWT)) before generating the IOS051I message for FICON channels. If timeout is not on, then IOS050I will be issued instead. Note that IOS050I and IOS051I are the most common z/OS FICON error messages. Various enhancements have been made via APARs OW47845 and OW57038 to provide more information about the possible error root cause. Both messages should be addressed at the hardware level.
- This is the failing command being executed by the device, if known. Otherwise, this field will be set to asterisks (*).
- This is the device and subchannel status, if known. Otherwise, this field will be set to asterisks (*).
- This is the System z server Physical Channel IDentifier.
Tip: This message indicates that the channel error has been recovered by the channel. It is logged only to SYSLOG (not to consoles) and is recorded in SYS1.LOGREC; no operator console message is issued. The message appears if the channel error occurred with Extended Subchannel Logout Data pending and the Logout-only bit ON.
This message indicates that the recursive MIH condition was not cleared by the previous MIH Clear Subchannel (CSCH) recovery sequence. It is possible that the original MIH message was lost.
Tips: If you specify an MIH HALT interval that is too short (such as 1 second), change it to at least 2 seconds, because the interval of the FICON Interface-Timeout IFCC (No Response from CU) is 2 seconds. CSCH was rescheduled due to the recursive MIH; this means that the recursive MIH condition was detected before the original MIH recovery completed. As a result, the original MIH message may not have been issued and this message was issued instead. The difference from message IOS077E is that this message is issued (instead of IOS077E) when the MIH message type is unknown (whether IOTIMING, Primary Status Missing, Secondary Status Missing, Start Pending, Halt Interrupt Missing, or Clear Interrupt Missing), as indicated by the CONDITION flag being OFF in the MIH message module. CSCH may not complete quickly if the CSS-selected CHPID (based on the LPUM of the subchannel) is in a permanent busy condition.
dddd,**,jobname, IDLE WITH WORK QUEUED dddd,pp,jobname, HALT SUBCHANNEL INTERRUPT MISSING Tip: This message means that another missing interrupt condition (MIH) was detected before the previous MIH recovery completed.
Tip: The Device Recovery routine invoked in the Channel Path Recovery detected one of these conditions as it tried to Re-Reserve/Assign the device after RCHP - Reset Channel Path (System Reset). Any new I/O request to the device will result in a permanent I/O error. This condition is generally associated with a hardware failure in the CU or device. Note that this condition could also result from BOX_LP=(xxx) being specified for HOTIO or TERMINAL in the IECIOSxx PARMLIB member; BOX_LP=(ALL) is the default from z/OS 1.4 (with ASSIGN or RESERVE LOST text).
D M=DEV(D003,(7E))
IEE174I 18.00.58 DISPLAY M 907
DEVICE D003   STATUS=ONLINE
CHP                       7E
ENTRY LINK ADDRESS        6503
DEST LINK ADDRESS         6688
PATH ONLINE               Y
CHP PHYSICALLY ONLINE     Y
PATH OPERATIONAL          Y
MANAGED                   N
CU NUMBER                 D000
DESTINATION CU LOGICAL ADDRESS = 00
SCP CU ND      = 002107.900.IBM.75.0000000BALB1.0300
SCP TOKEN NED  = 002107.900.IBM.75.0000000BALB1.0000
SCP DEVICE NED = 002107.900.IBM.75.0000000BALB1.0003
CHANNEL LINK LEVEL FACILITY DETECTED A LINK ERROR (90)
LINK RCVY THRESHOLD EXCEEDED FOR ALL LOGICAL PATHS OF DEST LINK (60)
HYPERPAV ALIASES CONFIGURED = 143
FUNCTIONS ENABLED = MIDAW, ZHPF
When message MSGIOS001E, MSGIOS002A, or MSGIOS450E is issued, IOS will retrieve the not-operational reason from the Channel Subsystem and issue message MSGIOS2001I (for MSGIOS001E or MSGIOS450E) or MSGIOS2002I (for MSGIOS002A), to SYSLOG only, with text describing the reason.
Tip: The message displays the STATUS FOR PATH(S) chp, chp,... not-operational reason text. The second line of text, if it exists, contains the specific reason for the not-operative path (as shown in Example 10-4 on page 237).
The paths specified were found to be not operational for the specified device. Some FICON-relevant Status Code message texts are listed here:
50: CHANNEL SUBSYSTEM DETECTED A LINK FAILURE CONDITION
    10: LOSS OF SIGNAL OR SYNCHRONIZATION CONDITION RECOGNIZED
    20: NOT OPERATIONAL SEQUENCE RECOGNIZED
    30: SEQUENCE TIMEOUT RECOGNIZED
    40: ILLEGAL SEQUENCE RECOGNIZED
60: CHANNEL LINK LEVEL FACILITY IN OFFLINE RECEPTION
    Note: This is the case when the Channel-to-CU logical path (H/W link-level) has been de-established (broken).
70: PORT REJECT WAS ENCOUNTERED
    10: ADDRESS INVALID ERROR
    11: UNDEFINED DESTINATION ADDRESS ERROR
    12: DESTINATION PORT MALFUNCTION
    13: DYNAMIC SWITCH PORT INTERVENTION REQUIRED
80: LINK LEVEL REJECT WAS ENCOUNTERED
    01: TRANSMISSION ERROR
    05: DESTINATION ADDRESS INVALID ERROR
    07: RESERVED FIELD ERROR
    08: UNRECOGNIZED LINK CONTROL FUNCTION
    09: PROTOCOL ERROR
    0A: ACQUIRE LINK ADDRESS ERROR
    0B: UNRECOGNIZED DEVICE LEVEL
90: CHANNEL LINK LEVEL FACILITY DETECTED A LINK ERROR
    10: CONNECTION ERROR
    20: TRANSMISSION ERROR
    30: PROTOCOL ERROR
    40: DESTINATION ADDRESS INVALID ERROR
    50: DEVICE LEVEL ERROR
    60: LINK RCVY THRESHOLD EXCEEDED FOR ALL LOGICAL PATHS OF DEST LINKS
    Note: For System z server FICON channels, this is the case where the CU logical path connection has been fenced because the up/down (flapping) error count has been exceeded (Channel Path or Link level threshold exceeded). You must enter the V PATH (dddd,pp),ONLINE command to issue a Reset Link Recovery Threshold CHSC command, or Config the CHPID OFF then ON (using a CF z/OS command or via the HMC).
A0: LOGICAL PATH IS REMOVED OR NOT ESTABLISHED
    01: PACING PARAMETERS ERROR
    02: NO RESOURCES AVAILABLE
    04: DESIGNATED CONTROL UNIT IMAGE DOES NOT EXIST
    05: LOGICAL PATH PRECLUDED BY CONFIGURATION AT CONTROL UNIT IMAGE
    06: LINK RECOVERY THRESHOLD EXCEEDED FOR LOGICAL PATH
    Note: For System z server FICON channels, this is the case where the CU logical path connection has been fenced because the up/down (flapping) error count has been exceeded (Channel Path or Link level threshold exceeded). You must enter the V PATH (dddd,pp),ONLINE command to issue a Reset Link Recovery Threshold CHSC command, or Config the CHPID OFF then ON (using a CF z/OS command or via the HMC).
B0: IN PROCESS OF INITIALIZING PATH
    10: CONTROL UNIT DEVICE LEVEL INITIALIZATION IS NOT COMPLETE
    20: LINK BUSY CONDITION LAST ENCOUNTERED
    30: PORT BUSY CONDITION LAST ENCOUNTERED
FF: NO FURTHER INFORMATION AVAILABLE OR UNKNOWN CONDITION
This is caused by the following conditions:
- CRW with Channel Path, Terminal condition (I/F hung)
- Reset-Event-Notification (Path Verification)
- Hot I/O, where the Action specified for its message is CHPK
IOS288A SYSTEM-INITIATED OFFLINE | ONLINE RECONFIGURATION IS IN PROGRESS FOR THE FOLLOWING CHPIDS: cc, cc-cc,...
Where cc is the channel path. System z servers allow the use of the Hardware Management Console (HMC) to configure CHPIDs on and off without stealing the channel from the configuration. This facility, called System Initiated CHPID Reconfiguration, uses a Support Element (SE) component to request a config off or on of all the CSS.CHPIDs associated with a particular channel ID. Normally, in a repair action, the operator has to take the channels offline using the z/OS commands to configure off or on all of the associated CHPIDs in all partitions. With this facility, z/OS handles the SE requests, configuring all the CSS.CHPIDs, with the exception of the last path to a device.
Part 5
Appendixes
Appendix A.
Step 1 - Documentation
Creating and maintaining documentation throughout all the planning, design, and implementation phases is very important. Throughout this book we provide information regarding the various features and functions offered by System z servers and the FICON Directors (and under which conditions they should be applied). That information can be used as a starting point for your documentation. For documentation of our implementation steps for a cascaded FICON Director environment see Chapter 7, Configuring a cascaded FICON Director topology on page 133, and Chapter 8, Configuring FICON Directors on page 151. Configuration worksheets for your FICON Director and FICON CTC environments can be found in Appendix B, Configuration worksheets on page 251.
Step 2 - Requirements
We based our requirements on the need to access non-business-critical data in a remote location from two z/OS LPARs in the main location. Isolating the new configuration from the existing one is essential to the solution. High availability is not a key requirement. Based on our physical and logical inventory, these were the components involved in the solution:
- An IBM System z10 Enterprise Class in the main location
  - Two z/OS V1R10 LPARs
  - Two FICON Express8 channels
- An IBM System Storage DS8000 in the remote location
  - Two storage control units and two devices
  - Two 4 Gbps ports that will be upgraded to 8 Gbps in the near future
- Two IBM System Storage SAN b-type family FICON Directors
  - SAN768B with four 8 Gbps ports (in the main location)
  - SAN384B with four 8 Gbps ports (in the remote location)
Figure A-1 displays the components used to create our new FICON environment.
(The figure shows the z10 EC SCZP201 with LPAR A23 running z/OS V1R10 and a FICON Express8 channel on PCHID 5A2, the SAN768B with ports 03, 1B, 2D, and 34, the SAN384B with ports 88, 8D, C4, and CF, and the System Storage DS8000 with ports 0003 and 0242.)
Figure A-1 Results of the inventory effort
Step 3 - Context
Taking into consideration the existing and planned components (System z, FICON Director, and storage control device), we determined that the scenarios 4.4.2, Moving to a high bandwidth environment (FICON Express8) on page 65 and 4.4.3, Migrating from a single site to a multi-site environment on page 66, best fit our requirements. In Figure A-2, ports 2D and 34 in the SAN768B and ports 8D and C4 in the SAN384B will be used for ISLs. All ports will be running at a data link rate of 8 Gbps, except for the ports in the DS8000; they will run at a data link rate of 4 Gbps. Ports 0003 and 0242 in the DS8000 will be upgraded at a later point.
(The figure shows the planned connectivity: the z10 EC SCZP201 (LPAR A23, z/OS V1R10, FICON Express8 PCHID 5A2) attaches to the SAN768B at ports 03 and 1B, ports 2D and 34 on the SAN768B connect over ISLs to ports 8D and C4 on the SAN384B, and SAN384B ports 88 and CF attach to DS8000 ports 0003 and 0242. A second diagram shows the same configuration with long wavelength (LX) transceivers on all links.)
Step 5 - Convergence
In our FICON environment we did not set up an intermix environment. However, the FICON Directors do offer this possibility, if needed. Because we were using existing environments, other relevant factors for the FICON fabric such as power consumption, cooling, and space inside the data center were not a concern. Our FICON environment was connected to an isolated network within a secured area, therefore we only used username and password authentication and authorization. However, we highly recommend that you change all the default passwords for all default accounts. To isolate the new configuration from the existing configuration, we secured the fabric using zoning and the fabric configuration server with strict SCC policies.
Step 6 - Management
For management purposes, we used the Data Center Fabric Manager (DCFM) for the entire FICON environment including setup and monitoring, as illustrated in Figure A-4.
Our FICON Director management environment is shown in Figure A-5. The figure also shows the IP addresses that we used for our FICON Directors, DCFM server, and firewall/router.
(The figure shows the DCFM server/client and an additional DCFM client reaching the FICON Directors through a firewall/router at 10.1.1.1 from the corporate network; Switch 65 has CP0 at 10.1.1.21 and CP1 at 10.1.1.22, and Switch 66 has CP0 at 10.1.1.31 and CP1 at 10.1.1.32; other addresses shown include 10.1.1.10, 10.1.1.30, and 10.77.77.62.)
We also used the CUP and SA for I/O to manage and control System z FICON connectivity.
Step 7 - Virtualization
In System z, we had two z/OS LPARs. Although the two z/OS LPARs use the same CSS, each FICON channel will be defined to an additional CSS (spanned) for future use. The channels were on two different FICON features. We defined redundant paths from both z/OS LPARs to the storage control unit.
(The figure shows LPAR SC30 (A23) running z/OS V1R10 on the z10 EC SCZP201, with a FICON Express8 LX channel, CHPID 7E in CSS 2 on PCHID 5A2, attached to port 03 of switch number/address 65 (SAN768B); control unit D000 with devices D0xx is reached through DS8000 port 0003, and control unit D100 with devices D1xx through DS8000 port 0242; ports 1B, 2D, and 34 on the SAN768B are also shown.)
Figure A-6 Virtualization
For the DS8000, we set up two ports for redundant access to all devices. We had two logical control units inside the DS8000, and we defined the DS8000 channels to different FICON features. Our FICON Directors were, by default, set up with virtualization support, and we used this technology to define port addresses beyond the FICON port addressing range. We created a new logical switch to be able to address the 48-port line cards. We also assigned our ports to two different cards for redundancy. The two virtual FICON Directors were interconnected with ISLs. The following two figures show the worksheets that were used to configure the FICON connectivity for the z10 EC, the two FICON Directors, and the DS8000. Figure A-7 on page 248 displays the configuration worksheet for FICON Director 1.
Configuration worksheet for FICON Director 1:
Cascaded Directors: Yes   Corresponding Cascaded Director Domain ID: x66   Fabric Name: Cascaded Fabric 1
Machine Type: ___   Model: ___   Serial Number: ___
Slot Number   Port Number   Transceiver   Attached device       Type
1             3             LX            z10 EC PCHID 5A2      CHNL
2             11            LX            z10 EC PCHID 5D2      CHNL
3             13            LX            ISL1 to SAN384B       FD
4             4             LX            ISL2 to SAN384B       FD

Configuration worksheet for FICON Director 2:
Cascaded Directors: Yes   Corresponding Cascaded Director Domain ID: x65   Fabric Name: Cascaded Fabric 1
Machine Type: ___   Model: ___   Serial Number: ___
Slot Number   Port Number   Transceiver   Attached device       Type
7             8             LX                                  CU
8             15            LX                                  CU
7             13            LX                                  FD
8             4             LX                                  FD
For high integrity, we set up Insistent Domain IDs and strict SCC policy. We used one port-based zone for all our FICON channels.
Step 8 - Performance
The short distance between the two locations allowed us to keep the default buffer credit assignment in the FICON Directors. For high performance, we used the zHPF feature in both the System z server and the DS8000. For better performance of the fabric links, we used lossless DLS with frame-based trunking. We also used port fencing to ensure that no loss of performance occurs due to excessive errors on the interfaces, and to make certain that the fabric remains stable.
To monitor and evaluate the utilization of the paths and the traffic patterns, we used RMF on our System z server.
(The figure shows the physical cabling, all with 9 µm single mode (SM) fiber and LX transceivers: the FICON Express8 LX channel (PCHID 5A2, CHPID 7E) on the z10 EC, SAN768B ports 03, 1B, 2D, and 34, and DS8000 ports 0003 and 0242.)
Figure A-9 Physical connectivity
For the cabling infrastructure we used IBM Fiber transport service products with single mode fiber.
Conclusion
After all these steps were completed, our final FICON design appeared as shown in Figure A-10 on page 250.
(The figure shows the final design: LPAR SC30 (A23) with z/OS V1R10 on the z10 EC SCZP201 and a FICON Express8 LX channel (CHPID 7E, CSS 2, PCHID 5A2), cabled with 9 µm single mode fiber to switch number/address 65 (SAN768B, ports 03, 1B, 2D, and 34); ISLs connect the SAN768B to switch number/address 66 (SAN384B, ports 88, 8D, C4, and CF); DS8000 ports 0003 and 0242 serve control unit D000 (devices D0xx) and control unit D100 (devices D1xx). All cable connectors are LC Duplex type.)
Figure A-10 Our final FICON design
Appendix B.
Configuration worksheets
This appendix contains the following worksheets: FICON Director Configuration Worksheet Use this worksheet to document the layout of your FICON Director. It can be applied as a tool to help you understand how the ports are allocated for configuration and problem determination purposes. FICON CTC Image-ID Worksheet Use this worksheet as an aid in planning and documenting the association between LPARs, MIF IDs, Logical Channel subsystems, and CTC Image-IDs.
HCD Defined Switch ID _____ (Switch ID) FICON Director Domain ID _____ (Switch @)
Slot Number
Port Number
Server: Type:
LPAR Name CSS MIF ID CTC-Image ID
Server: Type:
LPAR Name CSS MIF ID CTC-Image ID
Appendix C.
ID statement
The ID statement is used to specify identification information about the server and the IODF used to build the IOCDS. Of the keywords allowed in the ID statement, only those used later in this appendix are described here.
SYSTEM - The SYSTEM keyword specifies the machine limits and rules that are used for verification of the input data set. The device type of the server using the IOCDS has to be specified here.
TOK - The TOK keyword provides information about the source file used to build the IOCDS.
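A minimal sketch of an ID statement for a z10 EC (machine type 2097), consistent with the scenario in this book, might look like the following; the MSG1 text is illustrative, and the TOK keyword is normally generated by HCD rather than coded by hand:

ID MSG1='FICON test configuration',SYSTEM=(2097,1)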
RESOURCE statement
The RESOURCE statement provides information about logical partitions. The partition names, their MIF IDs, and their associated CSSs are defined here.
PART - The PART keyword allows you to specify an LPAR name and to assign the MIF ID to a partition. Optionally, a CSS number can be assigned to the logical partition.
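For example, the LPAR used in our scenario (A23, MIF ID 3 in CSS 2) could be described with a RESOURCE statement similar to this sketch:

RESOURCE PARTITION=((CSS(2),(A23,3)))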
CHPID statement
The CHPID statement contains keywords that define the characteristics of a channel. Note that not all keywords allowed in the CHPID statement are described here; only keywords used to define a FICON channel (FC) are listed.
PATH - The PATH parameter allows you to specify the CSSs and the CHPID number that will be assigned to a channel path.
PCHID - The PCHID keyword is used to identify the location of the physical channel in the processor.
TYPE - The CHPID type for FICON channels is required and must be specified as TYPE=FC. This indicates that the FICON channel will operate in FICON native (FC) mode.
SWITCH - The SWITCH keyword is required, and it specifies an arbitrary number for the FICON Director to which the channel path is assigned. Note that the switch number defined in the IOCDS is not the same as the switch address that is set up in the FICON Director. The switch address is the Domain ID specified in the FICON Director. It is recommended that you use the same value for the switch number and the switch address.
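As a sketch based on the values used in our configuration (CHPID 7E on PCHID 5A2 in CSS 2, attached to switch number 65), a CHPID statement could look like this; the SHARED keyword is shown only as one possible access-list choice, and the statement in our actual IODF may differ:

CHPID PATH=(CSS(2),7E),SHARED,PCHID=5A2,SWITCH=65,TYPE=FC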
CNTLUNIT statement
The CNTLUNIT statement contains keywords that define the characteristics of a control unit. Note that not all the keywords allowed in the CNTLUNIT statement are described here; only keywords used to define a control unit connected to a FICON channel (FC) are listed.
PATH - The PATH keyword specifies the channel path in each channel subsystem (CSS) attached to the control unit.
LINK - The LINK keyword is required to define the destination port address where the control unit is connected to the FICON Director. The link address can be specified with one or two bytes. Specifying a two-byte link address requires a special feature (High Integrity Fabric) and setup in the FICON Director.
CUADD - For FICON native (FC) channel paths, the logical address is specified as two hexadecimal digits in the range 00-FE. Not all FICON control units support logical addressing. To determine a product's logical addressing information, contact your System Service Representative supporting the FICON control units.
UNITADD - The UNITADD keyword specifies the unit addresses of the I/O devices that the control unit recognizes.
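A hedged sketch of a CNTLUNIT statement for control unit D000 in our scenario, using a two-byte link address (destination switch 66, port 88, as seen later in the D M=DEV output) and logical address 0; the values are illustrative of the style, not a copy of our IODF:

CNTLUNIT CUNUMBR=D000,PATH=((CSS(2),7E)),LINK=((CSS(2),6688)),UNIT=2107,CUADD=0,UNITADD=((00,256))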
IODEVICE statement
The IODEVICE statement specifies the characteristics of an I/O device and defines which CU the device is attached to.
CUNUMBR - The CUNUMBR keyword specifies the control unit number to which the device is attached.
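A matching IODEVICE sketch for the 256 base devices behind control unit D000; the device type shown is illustrative, and PAV alias devices would be defined with their own statements:

IODEVICE ADDRESS=(D000,256),CUNUMBR=(D000),UNIT=3390B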
Appendix D.
Getting started
The LX ports 0003 and 0242 in our DS8000 are used to connect to either z10 FICON Express8 ports or to FICON Director ports. The following steps describe how to set up the ports for FICON use: 1. Start a Web browser on a workstation that has network connectivity to the DS8000 HMC. 2. Type the address of the DS8000 in the Web browser address field, for example http://xxx.xxx.xxx.xxx:8451/DS8000/Login (where xxx.xxx.xxx.xxx is the IP address of the DS8000 HMC), and then press Enter. 3. When the logon panel is displayed, type in the user ID and password. The welcome window will appear as shown in Figure D-1.
4. Click Storage images to continue. The Storage images panel is displayed next (see Figure D-2 on page 261).
5. On the Storage Images window click the check box on the left of the image name to select the Storage Image. 6. In the Select Action pull-down menu, click Configure I/O Ports. The Configure I/O Ports panel will be displayed as shown in Figure D-3.
7. Click the check box of the port number to select the port you want to change. 8. In the Select Action pull-down menu, click Change to FICON. This will specify the port to use native FICON protocol for communication. 9. Complete these steps for all ports desired for native FICON protocol.
10.Finally, check that all ports using native FICON protocol are specified correctly. See Figure D-4 for an example of the Configure I/O Ports panel (port 0003 is set up and online).
For our configuration we need to have I/O ports 0003 and 0242 set up for FICON in the controller.
The logical port numbering is written as I0000 (it can also be written without the I in front). Table D-2 shows the last three digits (I0xxx) of the logical port layout for rack 1.
Table D-2 Logical port numbering for rack 1 (rear view)
I/O enclosure I1: C1 000 001 002 003, C2 010 011 012 013, C3 020 021 022 023, C4 030 031 032 033, C5 040 041 042 043, C6 050 051 052 053
I/O enclosure I3: C1 200 201 202 203, C2 210 211 212 213, C3 220 221 222 223, C4 230 231 232 233, C5 240 241 242 243, C6 250 251 252 253
I/O enclosure I2: C1 100 101 102 103, C2 110 111 112 113, C3 120 121 122 123, C4 130 131 132 133, C5 140 141 142 143, C6 150 151 152 153
I/O enclosure I4: C1 300 301 302 303, C2 310 311 312 313, C3 320 321 322 323, C4 330 331 332 333, C5 340 341 342 343, C6 350 351 352 353
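For example, using this table, logical port I0242 (the second DS8000 port used in our configuration) translates to rack 1, I/O enclosure I3, card C5, port 2, while I0003 translates to rack 1, I/O enclosure I1, card C1, port 3.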
Table D-3 on page 264 shows the last three digits of the logical port layout for rack 2.
Table D-3 Logical port numbering for rack 2 (rear view)
I/O enclosure I1: C1 400 401 402 403, C2 410 411 412 413, C3 420 421 422 423, C4 430 431 432 433, C5 440 441 442 443, C6 450 451 452 453
I/O enclosure I3: C1 600 601 602 603, C2 610 611 612 613, C3 620 621 622 623, C4 630 631 632 633, C5 640 641 642 643, C6 650 651 652 653
I/O enclosure I2: C1 500 501 502 503, C2 510 511 512 513, C3 520 521 522 523, C4 530 531 532 533, C5 540 541 542 543, C6 550 551 552 553
I/O enclosure I4: C1 700 701 702 703, C2 710 711 712 713, C3 720 721 722 723, C4 730 731 732 733, C5 740 741 742 743, C6 750 751 752 753
Appendix E.
Using HMC and SE for problem determination information
As highlighted in the CPC Instance Information panel, the following information can be retrieved (the values shown reflect our scenario): The IOCDS slot used to Activate the CPC: A2 The IOCDS name used during Activation: IODF00 The Activation Profile associated with the CEC: SCZP201 The Activation Profile last used to Activate the CEC: SCZP201 Figure E-2 on page 268 shows CPC product information.
As highlighted in the CPC Product Information panel shown in Figure E-2, the following information can be retrieved from this panel: The machine type and hardware model: 2097 / E28 The machine serial number: 02-001DE50 The Model-Capacity Identifier: 714 The Model-Temporary Capacity Identifier:714 The Model-Permanent Capacity Identifier: 714
Note: The HMC has a similar panel under CPC configuration tasks that allows the Frame Layout panel to be viewed.
In addition to showing search and different view modes, this panel displays information about the channels in use, such as:
Channel Location - Physical location of a channel, including cage, card slot, and jack (card port)
Book and Fanout - Provides information about what fanout is used to access the channel, its book location, and port
Channel State - The current state of the channel, such as Online and Offline
PCHID - The channel's corresponding PCHID number
CSS.CHPID - The CHPID and CSSs that this channel belongs to
Card Type - The channel's corresponding hardware
Figure E-5 shows the PCHID Details panel for PCHID 05A2. Note that there is a Status field in the panel. The status reflects the current state of the PCHID or CHPID.
This list shows the possible states or statuses that a particular PCHID/CHPID might be in:
Stopped - This state indicates that the channel is defined, but not in use.
Operating - This is the normal indication of a working CHPID.
Stand-by - This indicates that the CHPID has been taken offline.
Stand-by/Reserved - This means that the channel has been put in service mode for repair or test.
IFCC Threshold Exceeded - This is a channel status condition that will appear when the IFCC events have exceeded a coded preset value. For FICON channels, the established threshold limit is 4. For each IFCC detected, this value is decremented by the code. When a value of 0 (zero) is reached, this is indicated in the CHPID status/state. The threshold value is shown in the Analyze Channel Status SE PD panel.
Definition Error - This condition is caused by the defined device not matching the attached device. The channel is a FICON channel but it is not compatible with the channel type defined in the IOCDS.
Note: When a definition error is detected, the CHPID icon will default to the icon that represents a FICON Converter CHPID, and it is not necessarily representative of the defined channel type.
Sequence Time Out - A FICON sequence is a FICON frame or a group of FICON frames transmitted between the channel and the CU or between the CU and the channel. The number of frames that make up a sequence is determined by specific fields inside the frames themselves. Sequences are logically grouped to form an exchange. An exchange represents an I/O operation in the FICON implementation. A Sequence Time Out is detected by the channel or the CU when an ongoing exchange does not detect the arrival of a required sequence.
Sequence Not Permitted - This error is reported in the Analyze Channel Information panel as Illegal Sequence. It reports that the FICON channel-attached control unit or FICON Director port does not support the presented FICON service parameters.
Note the blocks that are highlighted:
On the top left side, in the block labeled Definition Information, notice that Partition ID 23 is the LPAR ID associated with the partition where this panel was selected from. MIF image ID 3 corresponds to the LPAR MIF ID on CSS 2. Also note that CHPID 7E is reported as Spanned, meaning that it is shared between LPARs in different CSSs and is associated with PCHID 5A2. It also reports that CHPID 7E is connected to switch number 65.
The block labeled Status Information indicates the status and the state information of this CHPID. This block also displays the Error code and the Bit Error Counter (Ber) accumulated values for the CHPID. The Error code, when not equal to 0 (zero), may be displayed using a panel button called Error details.
The block on the top right side labeled Hardware Card type Cascade definition indicates that this channel was defined using a two-byte control unit link address. Normally, this is an indication that a cascaded switch topology is being used. Note: HCD allows a dual-byte address definition even when a single FICON Director is used in the configuration. (In this case, the entry port and the destination port belong to the same switch number.)
The block labeled Error Status shows the link status, the connection port, and the SAP affinity associated with the CHPID. The IFCC threshold has an initial value of 4 and decrements at every occurrence of an IFCC on this channel. When the value reaches 0 (zero), the indication of an IFCC Threshold Exceeded will be associated with the CHPID icon on the SE desktop. The Temporary error threshold works the same way as the IFCC threshold. The SAP affinity indicates which of the SAPs this CHPID is associated with.
The block labeled Card Type Connection rate shows the FICON card description type associated with this CHPID. It has a direct relationship with the information provided about the Hardware type and Subtype presented in the top right corner of the panel.
World Wide Node Name (WWNN) - This is a unique 64-bit address that is used to identify the node in an FC topology. This address is assigned by the manufacturer.
World Wide Port Name (WWPN) - This is a unique 64-bit address that is used to identify the port in an FC topology. This address is assigned by the manufacturer.
Note the fields that are highlighted:
The field Irpt parm: 02362250 displays this subchannel's associated UCB absolute address in the SCP. During the IPL device mapping process, the subchannel is connected to its UCW, and this field contains the respective pointer to the UCB in SCP memory.
The Pathing information is displayed in the block in the center of the panel. The list of candidate CHPIDs is shown in the top right corner. This Pathing information has various fields that are bit-position-relative to the defined channel paths to this device. Each bit position represents one channel path. Bit 0, or the first position bit from the left, corresponds to CSS.CHPID0.PCHID. Bit 1, or the second position bit from the left, corresponds to CSS.CHPID1.PCHID, and so on. The following fields are shown:
CHPID - Associated CHPID number.
LPM - Logical Path Mask: the bits show which CHPIDs can be used to access the device.
LPE - Logical Path Established: this field indicates the paths to this device that successfully went through the pathing process.
PNOM - Path Non-Operational Mask: this field indicates which paths from the list of candidate CHPIDs are in the Not Operational state.
LPUM - Last Path Used Mask: this field indicates the last path used to access the device, whether the I/O operation completed successfully or an abnormal condition occurred. It is updated when the Ending Status is received from the device.
PIM - Path Installed Mask: this field indicates the channel paths that are defined in the IOCDS.
POM - Path Operational Mask: this field contains the paths online to the device. Initially (at System Reset), this value is set to FF (all ones) and the corresponding bit will be turned off when the first attempt to access the device fails (Inoperative).
PAM - Path Available Mask: this field indicates the physically available channel paths. The initial value will be the paths defined in the IOCDS for the LPAR. This bit will be turned off when:
The corresponding CHPID is configured OFF from the HMC.
The CF CHP(xx),OFFLINE z/OS command is issued from the operating system console.
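As an illustration of how to read these masks (hypothetical values, not taken from a live panel): with six defined channel paths, bits 0 through 5 are used, so a PIM of xFC (binary 1111 1100) means all six defined paths are installed, while an LPM of x3C (binary 0011 1100) means that only the third through sixth paths may be used to access the device.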
The starting point to obtain this data is provided by the device number entered when selecting this PD action. The output presented shows:
Path - This indicates all possible paths to get to the entered device.
CHPIDs - This indicates the channels associated to the displayed paths.
Link - This indicates the destination link address in use by each of the CHPIDs and respective paths.
Node Type - This indicates the type of attachment to the CHPIDs and paths.
Node Status - This indicates the current status of the node.
Type/Model, MFG, Plant, Seq. Number and Tag - All together, these represent the Control Unit RNID (Remote Node Identification) information.
All the possible paths to get to the entered device are displayed, along with the CHPIDs associated to these paths and their respective FICON Director links, where applicable. This device's control unit characteristics are also displayed, along with the tags (which represent the attaching Host Bay Adapters) that the channels connect to. In our example we highlighted the FICON CHPID 2.7E in the control unit header frame, which is connected to a 2107 using tag 0003. Refer to DS8000 port layout on page 262 for the logical port layout of a 2107. Using Table D-2 on page 263 to translate tag value 0003 to a physical port results in the following: Rack 1 (R1), I/O enclosure 1 (I1), Card 1 (C1), Port 3 (T3).
2. On the SE workplace, double-click Groups. 3. Double-click CPC if you know the PCHID number associated. Otherwise, double-click Images and select the LPAR you are working from. 4. Right-click the CPC or Image and select Channels. If you select CPC, the PCHIDs will display. If you select Images, the CHPIDs associated to a logical partition will display. 5. Highlight the channel you want to view. 6. Double-click Channel Problem Determination, which is located in the CHPID Operations task list. Note: The SE will post a panel for an image (LPAR) selection if you are working with a PCHID instead of a CHPID. 7. Select the option Analyze Paths to a Device and click OK. 8. Select the option Search by Device Number (you can also search by a unit address or subchannel number if appropriate) and click OK. 9. Enter the device number (or unit address or subchannel number, depending on what you have selected in the previous step) and select OK. The Analyze Path to a Device panel is shown in Figure E-9.
It can display important device path information, as follows:
The device's associated subchannel.
All paths by which the device can be accessed (this is relevant only for the partition the channel path has been selected from), with the following additional information:
Avail - This shows if that path is available at that instance.
CHPID - This is the associated CHPID number.
Switch - This shows if there is a FICON Director in the link.
The FICON Director number (if applicable).
The destination link address (the port address the CU is connected to).
The logical CU that the device is connected to.
The Analyze Device Status panel reports the current state of all devices pertaining to a Logical Control Unit. Note that a Filter Status button is provided to find a specific status for a particular device.
The following logical pathing information is displayed:
MIF Image ID - This is the MIF ID of the image we selected the channel from.
CHPID - This is the logical channel, consisting of CSS and CHPID.
Channel type - This is the channel type as defined in the IOCP.
Switch number - This is the switch number of the FICON Director that the channel is connected to.
Switch number valid - This is the total number of switches in the links.
Channel link address - This is the link address (port address) that the channel is connected to.
The status of all logical link addresses can be seen in this panel, as well. This provides information about the logical CU initialization from a channel point of view. The given CU address cannot be accessed if the status is other than INITIALIZATION COMPLETE. In our example, channel 2.7E from MIF image 3 is connected to switch number 65, port address 03. A series of logical control units (residing in the same physical CU) being accessed from channel 2.7E show the status INITIALIZATION COMPLETE. Any status other than INITIALIZATION COMPLETE indicates a potential problem that may need to be investigated. Note, however, that a status of INITIALIZATION PENDING is not necessarily an error indication. It depends on whether the logical CU is expected to be available. For example, if a channel is trying to establish a FCTC connection to another server or image but the LPAR is not activated, this would show a status of INITIALIZATION PENDING. However, the status is expected to change after the target LPAR is activated.
Other typical error messages are:
Initialization Complete - CU Busy
Initialization Complete - Dynamic Switch Port Busy
Initialization in Progress
CU Reset in Progress
Remove Logical Path in Progress
Link Configuration Problem - Port Reject - Address not Valid
Switch Port Not Defined - Port Reject - Undefined Destination Address
Switch Port Malfunction - Port Reject - Port Destination Malfunction
Switch Port Not Available - Port Reject - Port Intervention Required
Link Configuration Problem - Link Reject - Unrecognized Device Level
Link Configuration Problem - Link Reject - Uninstalled Link Control Function
Link Level Error - Link Reject - Transmission Error
Link Level Error - Link Reject - Destination Address not Valid
Link Level Error - Link Reject - Acquire Address Error
Link Configuration Problem - Initialization Failure - Channel/CU mismatch
CU Resources Exceeded - Init Failure - No Resources Available
Link Level Error - Channel Detected Error
Internal CTC Initialization Complete
Link Level Error - FICON FLOGI ELS Error
Link Level Error - FICON PLOGI ELS Error
Link Level Error - FICON RNID ELS Error
Link Level Error - FICON SCR ELS Error
Link Level Error - FICON LIR ELS Error
Link Level Error - FICON Invalid Attachment
Some of these messages are explained here:
CU resources exceeded - This indicates that the CU ran out of logical CUs.
CU reset in progress - This indicates that there is a reset initiated by the CU itself in progress.
Channel/CU mismatch - This indicates that the logical CU address (defined in the CU) does not match its definition in the IOCP.
PLOGI error - This indicates that the link initialization did not succeed to the destination port.
Note: The latest implementation of the System z Server SE Problem Determination facility improves the PLOGI and FLOGI error messages substantially. With this new implementation the panel will show, in plain English, what went wrong during the CU link initialization. Figure E-12 shows one of the new messages issued when initialization failures occur.
The Analyze Link Error Statistics Block (LESB) is a new implementation on the System z Servers Problem Determination SE panels. It displays the Channel N-Port LESB concurrently with channel operations.
The various types of errors are accumulated to provide you with a view of how the channel is operating. A refresh provides a dynamic display of the counters. The information gathered from the Channel N-Port is also recorded in the EREP SLH records when a Purge Path Extended (PPE) operation is performed. For more information about the PPE process, refer to 10.5, FICON Purge Path Extended on page 222.
Appendix F.
Figure F-1 Our FICON configuration (z10 EC SCZP201, LPAR SC30/A23 running z/OS V1R10, FICON Express8 LX CHPID 7E in CSS2 with MIF ID 3 on PCHID 5A2, cascaded FICON Directors connected by an ISL, switch number and address 65 on the SAN768B, and the DS8000 with CU D000/devices D0xx on LX port 0003 and CU D100/devices D1xx on LX port 0242; all cable connectors are LC Duplex type)
D M=CPU IEE174I 11.38.13 DISPLAY M 944 PROCESSOR STATUS ID CPU SERIAL 00 + 23DE502097 01 + 23DE502097 02 03 04 -A 05 -A 06 +I 23DE502097 07 -I
CPC ND = 002097.E26.IBM.02.00000001DE50 CPC SI = 2097.714.IBM.02.000000000001DE50 Model: E26 CPC ID = 00 CPC NAME = CZP201 LP NAME = A23 LP ID = 23 CSS ID = 2 MIF ID = 3 + ONLINE - OFFLINE . DOES NOT EXIST W WLM-MANAGED N NOT AVAILABLE A APPLICATION ASSIST PROCESSOR (zAAP) I INTEGRATED INFORMATION PROCESSOR (zIIP) CPC ND CENTRAL PROCESSING COMPLEX NODE DESCRIPTOR CPC SI SYSTEM INFORMATION FROM STSI INSTRUCTION CPC ID CENTRAL PROCESSING COMPLEX IDENTIFIER CPC NAME CENTRAL PROCESSING COMPLEX NAME LP NAME LOGICAL PARTITION NAME LP ID LOGICAL PARTITION IDENTIFIER CSS ID CHANNEL SUBSYSTEM IDENTIFIER MIF ID MULTIPLE IMAGE FACILITY IMAGE IDENTIFIER
The other fields in the display output have the following meanings:
CPC name - This reflects the CPC object name customized to the HMC at System z installation time. The CPC name is also defined in the HCD/HCM (a CPC can only be defined once). HCD/HCM also requires a Proc.ID to be specified for the CPC.
LP name - This is defined in the HCD/HCM logical partition definition or in the IOCP RESOURCE statement. The name has to be unique for each logical partition across all four Channel Subsystems (CSSs) on System z. Changing, adding, or deleting LPARs requires a new IOCDS and a Power-on Reset (POR) on System z9, and is fully concurrent on System z10.
Note: System z10 (2097 and 2098) implemented a fixed Hardware System Area (HSA) storage approach that allows logical partitions, channel subsystems, subchannel sets, logical processors, and cryptographic co-processors to be added concurrently. Adding logical processors to an LPAR requires z/OS 1.10 or later.
CSS ID - This indicates in which CSS this LPAR has been created. The number of CSSs that a System z supports depends on the machine model. The Business Class (BC) models for the z9 and z10 support two CSSs. The Enterprise Class (EC) models support four CSSs in total. Increasing or decreasing the number of CSSs in use requires a POR on a z9 and is concurrent on a z10. Each CSS supports two subchannel sets: SS0 and SS1.
MIF ID - This is a value, between 1 and 15, that is associated with a partition in one CSS. The MIF IDs are defined in HCD/HCM or in the IOCP RESOURCE statement. The MIF ID must be unique for each logical partition in a System z CSS. The MIF ID is used by the System z channel subsystem and channels to identify the source of an I/O
request. You need to know the MIF ID to resolve LP-related problems or to resolve failures in establishing a FICON logical path. LP ID The logical partition ID (LP ID) is specified in the Logical Partition Image Profile using the HMC. The LP ID number, from x00 to x3F, must be unique across all partitions on a System z.
D IPLINFO
IEE254I 09.58.13 IPLINFO DISPLAY 899
SYSTEM IPLED AT 09.16.10 ON 03/12/2009
RELEASE z/OS 01.10.00 LICENSE = z/OS
USED LOAD01 IN SYS0.IPLPARM ON C730
ARCHLVL = 2 MTLSHARE = N
IEASYM LIST = XX
IEASYS LIST = (00) (OP)
IODF DEVICE C730
IPL DEVICE DC0D VOLUME Z1ARC1
The output displays the following information:
The z/OS release (RELEASE z/OS 01.10.00) that this partition is running
The time stamp, which shows when this partition was IPLed (SYSTEM IPLED AT 09.16.10 ON 03/12/2009)
The LOAD member selected from the PARMLIB (USED LOAD01 IN SYS0.IPLPARM ON C730)
The system architecture level (ARCHLVL = 2), indicating a 64-bit operating system in this case
The IODF device
The IPL device and its VOLSER (IPL DEVICE DC0D VOLUME Z1ARC1)
Note: IPLing the system from the wrong device, or selecting an inappropriate IODF device, may lead to I/O-related and configuration-related problems.
D IOS,CONFIG IOS506I 10.00.19 I/O CONFIG DATA 901 ACTIVE IODF DATA SET = SYS6.IODF75 CONFIGURATION ID = TEST2094 EDT ID = 01 TOKEN: PROCESSOR DATE TIME DESCRIPTION SOURCE: SCZP201 09-05-04 09:50:17 SYS6 IODF75 ACTIVE CSS: 2 SUBCHANNEL SETS CONFIGURED: 0, 1 CHANNEL MEASUREMENT BLOCK FACILITY IS ACTIVE
D IOS,CONFIG(HSA) IOS506I 10.01.22 I/O CONFIG DATA 903 HARDWARE SYSTEM AREA AVAILABLE FOR CONFIGURATION CHANGES PHYSICAL CONTROL UNITS 8064 CSS 0 - LOGICAL CONTROL UNITS 3980 SS 0 SUBCHANNELS 51215 SS 1 SUBCHANNELS 65535 CSS 1 - LOGICAL CONTROL UNITS 3987 SS 0 SUBCHANNELS 51375 SS 1 SUBCHANNELS 65535 CSS 2 - LOGICAL CONTROL UNITS 4004 SS 0 SUBCHANNELS 52515 SS 1 SUBCHANNELS 65535 CSS 3 - LOGICAL CONTROL UNITS 4050 SS 0 SUBCHANNELS 56520 SS 1 SUBCHANNELS 65535 The output displays: The total number of physical control units defined in the configuration: 8064 The number of logical control units per Channel Subsystem The number of subchannels defined in each CSSs subchannel set (0 and 1)
VOLSER = NE800
Whenever a z/OS Display Units command (or any other general form of this command such as D U,DASD,,dddd,n or D U,,ALLOC,dddd,n) is entered, z/OS will display the status of the devices that the z/OS SCP has recorded at that time. If the devices are not defined to z/OS, or if they are exception devices, then z/OS will display one of the following pieces of information: The next device number or the next device number of that type, for a display units type request, or The next device of that status, for a display units by status, or If a combination of both type and status is used, then it will be the next device of that type and status (refer to Example F-6 on page 294) Because of this, always verify that the device (or devices) displayed in response to a z/OS Display Units command request is for your display units requested device number, or that it includes your requested device number.
D U,,,dddd,1
The Display Units command is useful for determining the z/OS status of a device. The output from this command is shown in Example F-5 on page 293. The example shows the normal result of displaying the z/OS status of a group of devices. Devices D000, D001, and D002 are online and not allocated; that is, the devices are not in use by a job or application.
Device D003 is both online and allocated. Being allocated means the device is in use by a job or an application. If the device is online and allocated, you can use the z/OS display allocation command, D U,,ALLOC,D003,1, to determine who the device is allocated to.
Example: F-5 z/OS display device status of a group of four devices defined and online
D U,,,D000,4
IEE457I 20.35.52 UNIT STATUS 727
UNIT TYPE STATUS VOLSER VOLSTATE
D000 3390 O Z1BRZ1 PRIV/RSDNT
D001 3390 O Z18DL2 PRIV/RSDNT
D002 3390 O Z18RB1 PRIV/RSDNT
D003 3390 A TOTD94 PRIV/RSDNT
There are also a few other device statuses that may be returned for a D U command:
AS - Automatically switchable.
BOX - Hardware failure. The device has been BOXED (refer to Boxed status notes: below).
BSY - Busy.
C - Console.
F - Offline. F indicates that more than one bit is set in the UCB. This is used when a combination of offline and some other status value needs to be displayed (for example, F-NRD).
L - The release on a device is pending and reserve may or may not have occurred.
M - A device managed by a device manager, such as JES3 or a non-IBM tape management subsystem.
MTP - Mount Pending.
NRD - Not Ready.
O - ONLINE.
OFFLINE - This is used when the only status value that needs to be displayed is OFFLINE.
P - Reserve Pending.
PND - Offline pending.
PO - Offline pending, and also not ready. This status value is displayed only if NRD is also displayed on the status line.
PUL - Unload pending.
R - Reserved, shared DASD or exclusively-assigned device.
RAL - Restricted to Allocation.
S - SYSRES.
SPD - Suspended (a paging volume). The channel program is temporarily suspended while the system is using the device.
SYS - Allocated to system.
UNAVL - The device has been marked as unavailable for allocation by the VARY xxxx,UNAVAIL operator command.
Boxed status notes: If the reported status of a device is O-BOX (Online and Boxed), that status can be cleared by using a vary online with unconditional parameter: V dddd,ONLINE,UNCOND. If the reported status is F-BOX (Offline and Boxed), the device can be brought back online with the VARY dddd,ONLINE command. This will enable the UCW and perform the online processing to the device. Assuming that the error condition has been resolved, the device will come online. If the error condition still exists, however, the device may remain in the boxed state.
D U,,,8045,1 IEE457I 08.57.21 UNIT STATUS 822 UNIT TYPE STATUS VOLSER VOLSTATE 8100 3390 O NW8100 PRIV/RSDNT The D U request shown in this example was for device 8045; however, notice that the actual display reports the status for device 8100.
D U,,,ALLOC,dddd,n
The D U,,,ALLOC,dddd,n command is one of the various formats of the Display Units z/OS device commands. Its main purpose is to display the allocations of a device when its status has been determined to be Allocated. Example F-7 shows device number D003 as online, allocated to two jobs, and currently in use.
Example: F-7 z/OS display device allocation
D U,,,ALLOC,D003,1 IEE106I 08.07.58 UNITS ALLOCATED 722 UNIT JOBNAME ASID JOBNAME ASID JOBNAME ASID JOBNAME ASID D003 DUMPSRV 0005 MVSRECVA 0028
The information displayed shows that the jobs using the device are: Jobname - DUMPSRV Jobname - MVSRECVA
V dddd,offline
In Example F-8 on page 295, the V dddd,OFFLINE command is issued against device D003. However, as shown in Example F-7, device D003 is allocated and in use. Therefore, the action of varying this device offline will be in a pending status.
V D003,OFFLINE IEF524I 8003, VOLUME TOTD94 PENDING OFFLINE .... D U,,,D000,4 IEE457I 08.16.21 UNIT STATUS 756 UNIT TYPE STATUS VOLSER VOLSTATE D000 3390 O Z1BRZ1 PRIV/RSDNT D001 3390 O Z18DL2 PRIV/RSDNT D002 3390 O Z18RB1 PRIV/RSDNT D003 3390 A-PND TOTD94 PRIV/RSDNT
D U,,,D003,1
IEE457I 18.27.12 UNIT STATUS 674
UNIT TYPE STATUS VOLSER VOLSTATE
D003 3390 OFFLINE /RSDNT
A device can be offline for a number of reasons:
It has been varied offline by the operator (this is the case shown in Example F-9).
It was defined to the operating system as being offline at system IPL time.
The device is defined to the operating system but not to the logical partition that the z/OS is running in; that is, there is no subchannel.
The device is defined to the operating system but not to the channel subsystem; that is, there is no subchannel.
There is no operational path to the control unit or device.
All the operational paths were logically varied offline.
The definition to access the control unit or device is not correct:
CHPID
Link - destination link address (destination port address)
CUADD - the CU logical image address. For an IBM 2107, the CUADD is unique for an LSS. For a CTC, the CUADD is the MIF ID of the target LPAR.
The device addressing definition is not correct.
The device is not ready.
The device is in an error state.
The defined device type and the physical device type (the actual device) do not match.
An error condition occurred during the use of the device (that is, a reserve or assign was lost) and the device was forced offline (it may have been boxed during the recovery process, as well).
V dddd,ONLINE
Use the z/OS command shown in Example F-10 on page 296 to bring a single device online. There are variations to the V dddd,online command that allow more than one device to be brought online simultaneously. A group of devices can be varied online using dddd,dddd or dddd-dddd as variables for the vary command.
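For example (illustrative device numbers based on our scenario), V D003,ONLINE brings a single device online, and V D000-D003,ONLINE brings the whole range online with one command.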
A failure to bring a device online can be caused by:
There is no UCB for the target device.
There is no UCW (subchannel) for the target device.
All paths to the device are offline.
None of the defined paths are operational.
The CU and/or the device addressing is not correct.
There is a mismatch of device type between the definition and the actual device.
D M=DEV(dddd)
The D M=DEV(dddd) command displays the device status for device dddd. Example F-11 shows the output of this display device matrix command. It shows the path status of all paths that have been defined to this single device address (device D003, in the example). The destination link address as well as the entry link address are individually provided for each channel accessing the device.
Example: F-11 z/OS display output for D M=DEV(dddd)
D M=DEV(D003) IEE174I 09.50.12 DISPLAY M 767 DEVICE D003 STATUS=ONLINE CHP 50 54 58 5C 7E 82 ENTRY LINK ADDRESS 0F 0F 1B 1B 6503 651B DEST LINK ADDRESS 2C 2C 0A 0A 6688 66CF PATH ONLINE N N Y Y Y Y CHP PHYSICALLY ONLINE N N Y Y Y Y PATH OPERATIONAL N N Y Y Y Y MANAGED N N N N N N CU NUMBER D000 D000 D000 D000 D000 D000 MAXIMUM MANAGED CHPID(S) ALLOWED: 0 DESTINATION CU LOGICAL ADDRESS = 00 SCP CU ND = 002107.900.IBM.75.0000000BALB1.0300 SCP TOKEN NED = 002107.900.IBM.75.0000000BALB1.0000 SCP DEVICE NED = 002107.900.IBM.75.0000000BALB1.0003 HYPERPAV ALIASES CONFIGURED = 143 FUNCTIONS ENABLED = MIDAW, ZHPF Example F-12 is a variation of the D M=DEV(dddd) command where a unique CHPID that accesses the device dddd is selected for display.
Example: F-12 Variation of D M=DEV(dddd,(cc))
D M=DEV(D003,(7E)) IEE174I 18.00.58 DISPLAY M 907 DEVICE D003 STATUS=ONLINE CHP 7E ENTRY LINK ADDRESS 6503 DEST LINK ADDRESS 6688 PATH ONLINE Y CHP PHYSICALLY ONLINE Y
PATH OPERATIONAL Y
MANAGED N
CU NUMBER D000
DESTINATION CU LOGICAL ADDRESS = 00
SCP CU ND = 002107.900.IBM.75.0000000BALB1.0300
SCP TOKEN NED = 002107.900.IBM.75.0000000BALB1.0000
SCP DEVICE NED = 002107.900.IBM.75.0000000BALB1.0003
HYPERPAV ALIASES CONFIGURED = 143
FUNCTIONS ENABLED = MIDAW, ZHPF
For both D M=DEV examples, the following important information is displayed:
The device current status: DEVICE D003 STATUS=ONLINE.
The FICON Director entry port: 6503 - FICON Director @ 65, Port 03.
The FICON Director exit port: 6688 - FICON Director @ 66, Port 88. Note that the FICON Director addresses are different for the entry and exit (Dest) ports. This is an indication that a FICON Director cascading topology is being used.
PATH ONLINE = Y. The PATH ONLINE field indicates whether the path is logically online to z/OS and will change as a result of the VARY PATH command. Information for this field is obtained from the UCBLPM (UCB Logical Path Mask).
CHP PHYSICALLY ONLINE = Y. The CHP PHYSICALLY ONLINE field indicates whether the path is physically available. This field will change as a result of the z/OS CF CHP command. Information for this field is obtained from the Path Available Mask (PAM) in the Unit Control Word (UCW) or subchannel.
Note: The CHP PHYSICALLY ONLINE field might not be accurate when the CHPID has been stolen from z/OS using the HMC/SE CHPID Config OFF/ON facility instead of the z/OS CF CHP(cc),OFFLINE command.
PATH OPERATIONAL = Y. The PATH OPERATIONAL field status is changed as a result of the channel subsystem attempting an I/O operation on a path and the path responding with a not operational I/O interface sequence. This response can be due to:
Disabling a channel interface Host Bay Adapter (HBA) at the control unit
Disabling a port on a FICON Director
Powering off a control unit or cluster
Powering off a DASD controller
Note: The PATH OPERATIONAL status can be easily misinterpreted. The information displayed is a direct reflection of a path mask called the Path Operational Mask (POM) in the UCW.
PATH NOT VALIDATED = Y. The PATH NOT VALIDATED field may be shown if the device was not online at IPL time. If a device is not online when the IPL device pathing process occurs, the UCBVALPH bit is set on to indicate that the paths to the device were not validated. Validation will occur when the device is varied online.
MANAGED = N. This field indicates whether the channel is using the Dynamic Channel Management (DCM) facility of z/OS.
CU NUMBER = D000. This field presents the Control Unit Number associated with the physically attached defined CU.
DESTINATION CU LOGICAL ADDRESS = 00. The DESTINATION CU LOGICAL ADDRESS field shows the logical link address of a control unit that is used to access the I/O device associated with the specified channel paths.
SCP CU ND = 002107.900.IBM.75.0000000BALB1.0300. This shows the Node Descriptor (ND) last obtained by the System Control Program (SCP).
ACTUAL CU ND = xxxxxx.xxx.xxx.xx.xxxxxxxxx.xxxx (not shown in the example). This shows the Node Descriptor of the attached subsystem read from the device in response to this command.
SCP TOKEN NED = 002107.900.IBM.75.0000000BALB1.0000. This shows the Node-Element Descriptor last obtained by the SCP.
ACTUAL TOKEN NED = xxxxx.xxx.xxx.xx.xxxxxxxxxx.xxxx (not shown in the example). This shows the Node-Element Descriptor read from the device in response to this command.
SCP DEVICE NED = 002107.900.IBM.75.0000000BALB1.0003. This shows the Node-Element Descriptor last obtained by the SCP.
ACTUAL DEVICE NED = xxxxx.xxx.xxx.xx.xxxxxxxxxx.xxxx (not shown in the example). This shows the Node-Element Descriptor read from the device in response to this command.
RNID = xxxxx.xxx.xxx.xx.xxxxxxxxxx.xxxx (not shown in the example). This is the Control Unit Remote Node Identifier obtained by the Channel Subsystem during Channel - CU initialization using Store Subsystem Channel Information (CHSC). This RNID information is kept in the Hardware System Area (HSA).
HYPERPAV ALIASES CONFIGURED = 143. This field shows the number of HyperPAV alias devices configured in this LCU. See Example F-14 on page 299 for a display of an alias device via the D M=DEV command.
FUNCTIONS ENABLED = MIDAW, ZHPF. This field shows the additional facilities that this device is capable of using.
D M=DEV
IEE174I 13.46.46 DISPLAY M 238
DEVICE STATUS: NUMBER OF ONLINE CHANNEL PATHS
     0  1  2  3  4  5  6  7  8  9  A  B  C  D  E  F
0006 .  1  1  1  1  2  2  .  .  .  .  .  .  .  .  .
00B0 #  #  #  #  #  #  #  #  #  #  #  #  #  #  #  #
0180 4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4
0181 4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4
0182 4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4
0183 4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4
.....
0404 DN DN DN DN 1  1  1  1  DN DN DN DN .  .  .  .
04B0 1  1  1  1  .  .  .  .  1  1  1  1  .  .  .  .
0504 DN DN DN DN 1  1  1  1  DN DN DN DN .  .  .  .
.....
0FB0 #  #  #  #  1  1  1  1  1  1  1  1  1  1  1  1
************ SYMBOL EXPLANATIONS ************
@ ONLINE, PHYSICALLY ONLINE, AND OPERATIONAL INDICATORS ARE NOT EQUAL
+ ONLINE                          . DOES NOT EXIST
# DEVICE OFFLINE                  SN SUBCHANNEL NOT AVAILABLE
BX DEVICE IS BOXED                PE SUBCHANNEL IN PERMANENT ERROR
DN DEVICE NOT AVAILABLE           UL DEVICE IS AN UNBOUND ALIAS
AL DEVICE IS AN ALIAS             HU HYPERPAV ALIAS UNUSABLE
HA DEVICE IS A HYPERPAV ALIAS
D M=DEV, which is shown in Example F-13 on page 298, provides symbols identifying the state of existing devices. Significant symbolic meanings are explained here:
DN - The DN symbol means that the device has been defined to the z/OS IOGEN (OSCONFIG) but has not been defined in the IOCP; that is, there is no subchannel for this device known to the channel subsystem. This normally occurs when the wrong IOCDS or OSCONFIG is loaded. The UCB for a DN device has the Not Connected bit left on during the device mapping time during IPL.
. - The dot (.) indicates that the device has not been defined to the z/OS in the config.
1 - The numeric digit (1) indicates the number of paths defined for the device. It is possible to have a combination of a digit and a symbol. For example, 4@ indicates an out-of-line path condition; online, physically online, and operational indicators are not equal with at least one of these paths, where:
Online - means the path is online.
Physically online - means the CHPID is physically online.
Operational - means the path is operational (POM).
D M=DEV(4000)
An indication if the device extended function status information is inconsistent between z/OS control blocks and the storage subsystem An indication if the defined (UCB) device type is inconsistent with the real device type Optionally, the total number of cylinders for each unique track format (3380, 3390, and 9345) for all of the devices within the scope of the request The following information, if the device belongs to a tape library: Device type equivalent to DTYPE from the DS P command Device status indicating online/offline and ready/not ready Device type and model Device serial number Library identification number An indication if the defined (UCB) device type is inconsistent with the real device type Example F-16 shows the output of a DEVSERV command. The response is a display of basic status information about a device, a group of devices, or storage control units, and optionally can include a broad range of additional information. The path status for all defined channel paths is included, as well. There are different ways of using the DEVSERV command: DS P,1C00 - Test device 1C00 only through all paths. DS P,2000,16,ON - Test only ONLINE devices in 2000-200F. DS QD,3200,1,UCB,DCE,SSSCB,DPCT - Display MVS control blocks (1 device). DS QD,4800,1,RDC,RCD,SNSS - Get hardware information and update control blocks.
Example: F-16 Devserv QDASD with options RDC and DCE
DS QD,9000,RDC,DCE IEE459I 09.53.04 DEVSERV QDASD 445 UNIT VOLSER SCUTYPE DEVTYPE CYL SSID SCU-SERIAL DEV-SERIAL EF-CHK 9000 IN9000 2107921 2107000 32760 1000 0113-00511 0113-00511 **OK** READ DEVICE CHARACTERISTIC 2107E833900A5E80 FFF720247FF8000F E000E5A205940222 1309067400000000 0000000000000000 24241F02DFEE0001 0677080F007F4A00 003C000000000000 DCE AT V020A2A20 3878807100C445C0 0000000001F0FED8 D8007FF87FF72424 1FF7080000410000 00FC24DC9400F0FE 001F3C1E00078000 0000000000000000 **** 1 DEVICE(S) MET THE SELECTION CRITERIA **** 0 DEVICE(S) FAILED EXTENDED FUNCTION CHECKIN DS QT,TYPE=3490<,DEFINED> - Display TAPE with 3490 information. You can use the QTAPE (QT) parameter to display tape information. Using the MED (medium) option DS QT,xxxx,MED,nnn allows you to display information for the device type, media type, and cartridge volume serial number. DS QPAVS,dddd,n - Display status of Parallel Access Volume devices. To learn about other uses of the DEVSERV command, refer to z/OS MVS System Commands, SA22-7627, or to DFSMS Storage Administration Reference, SC26-7402.
Note: The DEVSERV command is one of the very few commands that actually performs an I/O operation against the device or devices specified. Most display commands do not actually perform a real I/O operation and report back the state of specific fields in the UCB and UCW (subchannel). Example F-17 shows the DS P,dddd,nnn command.
Example: F-17 Output of one type of DEVSERV command: DS P,dddd,nnn
DS P,D000,4 IEE459I 18.33.40 DEVSERV PATHS 919 UNIT DTYPE M CNT VOLSER CHPID=PATH STATUS RTYPE SSID CFW TC DFW PIN DC-STATE CCA DDC CYL CU-TYPE D000,33909 ,O,000,Z1BRZ1,50=- 54=- 58=+ 5C=+ 7E=+ 82=+ 2107 89E0 Y YY. YY. N SIMPLEX 00 00 10017 2107 D001,33909 ,O,000,Z18DL2,50=- 54=- 58=+ 5C=+ 7E=+ 82=+ 2107 89E0 Y YY. YY. N SIMPLEX 01 01 10017 2107 D002,33909 ,O,000,Z18RB1,50=- 54=- 58=+ 5C=+ 7E=+ 82=+ 2107 89E0 Y YY. YY. N SIMPLEX 02 02 10017 2107 D003,33909 ,A,002,TOTD94,50=- 54=- 58=+ 5C=+ 7E=+ 82=+ 2107 89E0 Y YY. YY. N SIMPLEX 03 03 10017 2107 ************************ SYMBOL DEFINITIONS ************************ O = ONLINE + = PATH AVAILABLE - = LOGICALLY OFF, PHYSICALLY OFF The fields reported by the DASD DEVSERV command are listed here. DTYPE = 33909 This is the device type of the device reported by DEVSERV command. In this case, device D000 reported back as a 3390 model 9. RTYPE = 2107 This is the real (true) device type of the device reported by the DEVSERV command. 2107 is the true device type where the DEVSERV command has been executed. M=O This field represents the Device UCB status: A - Allocated F - Offline M - Mount Pending O - Online P - Offline Pending N - Not Allocated CNT This field indicates the number of data sets allocated on this volume. SSID This field indicates the subsystem ID of the device CU. VOLSER This field indicates the VOLSER of the device pointed to by the DEVSERV command.
CFW Cache Fast Write, which indicates the status of the CFW: Y - CFW is Active N - CFW is Inactive S - CFW is Suspended with Pinned Data TC (c,d) C=Device Cache; D=Subsystem Cache These two characters indicate the Device Cache and Subsystem Cache status: Y - Active N - Inactive A - Pending Active F - Pending Inactive M - Disabled for maintenance (this will override other statuses) P - Pending Inactive - Destage in Progress S - CU is running in Single Cluster mode (half of the cache in one cluster is not in use) T - Terminated due to a Subsystem Cache Error DFW (e,f) E=DASD Fast Write; F=NVS Status These two characters indicate the DASD Fast Write and the NVS status: Y - Active/Available N - Inactive/Unavailable F - NVS Pending Unavailable/Disable - Destage failed I - DFW - Deactivate Pending - Destage in Progress I - NVS - Battery Defective P - NVS Pending Unavailable - Destage in Progress or has failed S - DFW is temporarily Suspended with Pinned Data U - DFW Deactivate Pending with Destage failed U - NVS Terminated due to error PIN (p) This single character indicates whether pinned data exists: N - No Pinned Data Y - Pinned Data in Cache or NVS - CFW/DFW allowed S - Retriable Pinned Data in Cache/NVS - CFW/DFW temporarily suspended DC-STATE This field reports the Dual Copy current status: SIMPLEX (Simplex: Not Duplex-Pair) PRIMARY (Primary Device of Active Duplex-Pair) SECONDARY (Secondary Device of Active Duplex-Pair) PRI-PNDG (Establish/Copy in Progress as Primary) SEC-PNDG (Establish/Copy in Progress as Secondary) PRI-SDPL (Suspended, current and original Primary) - DDC same) SEC-SDPL (Suspended, current and original Secondary) - DDC same) PRI-SSEC (Suspended, was Secondary and now Primary - DDC Swapped) SEC-SPRI (Suspended, was Primary and now Secondary - DDC Swapped) SPARE (RAMAC: RDC byte 57=33/34 and SNSS byte 36 bit 4,5=01/10) SPAR-PNDG (Spare Device being copied or has been copied and Copy-Back is pending) SPAR-BRKN (Broken Device copied to Spare device) PPRIMARY (PPRC Primary) PPRI-PNDG (PPRC Primary Pending) PPRI-FAIL (PPRC Primary Fail) PPRI-SUSP (PPRC Primary Suspended) PSECONDRY (PPRC Secondary) PSEC-PNDG (PPRC Secondary Pending) PSEC-FAIL (PPRC Secondary Fail) PSEC-SUSP (PPRC Secondary Suspend)
MIRR-OPER|PEND|FAIL (mirroring status per SNSS byte 26 bit 6, 7) CCA-xx xx - This indicates the Channel Connection address: The xx value should match UNITADD-XX in the IOCP. DDC-yy yy - This indicates the Device-to-Director Connection address. CYL This field reports the number of cylinders of the subject device, when applicable. CU-TYPE This field reports the control unit type of the subject device.
Appendix G.
Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this book.
IBM Redbooks
For information about ordering these publications, see How to get Redbooks on page 309. Note that some of the documents referenced here may be available in softcopy only.
IBM System z Connectivity Handbook, SG24-5444
Getting Started with the IBM 2109 M12 FICON Director, SG24-6089
Implementing an IBM/Brocade SAN with 8 Gbps Directors and Switches, SG24-6116
Implementing the Cisco MDS9000 in an Intermix FCP, FCIP, and FICON Environment, SG24-6397
FICON Implementation Guide, SG24-6497
IBM System Storage DS8000 Architecture and Implementation, SG24-6786
Getting Started with the McDATA Intrepid FICON Director, SG24-6857
Getting Started with the INRANGE FC/9000 FICON Director, SG24-6858
IBM Tivoli System Automation for z/OS Enterprise Automation, SG24-7308
IBM System z10 Enterprise Class Technical Introduction, SG24-7515
IBM System z10 Enterprise Class Technical Guide, SG24-7516
IBM/Cisco Multiprotocol Routing: An Introduction and Implementation, SG24-7543
IBM System Storage/Brocade Multiprotocol Routing: An Introduction and Implementation, SG24-7544
Implementing an IBM/Cisco SAN, SG24-7545
IBM System z10 Enterprise Class Configuration Setup, SG24-7571
FICON CTC Implementation, REDP-0158
Disk Storage Access with DB2 for z/OS, REDP-4187
How Does the MIDAW Facility Improve the Performance of FICON Channels Using DB2 and Other Workloads?, REDP-4201
Multiple Subchannel Sets: An Implementation View, REDP-4387
Cisco FICON Basic Implementation, REDP-4392
Other publications
These publications are also relevant as further information sources:
z/OS MVS Diagnosis: Reference, GA22-7588
z/OS MVS Diagnosis: Tools and Service Aids, GA22-7589
System z Planning for Fiber Optic Links (ESCON, FICON, InfiniBand, Coupling Links, and Open System Adapters), GA23-0367
IBM System Storage SAN768B Installation, Service, and User's Guide, GA32-0574
IBM System Storage SAN384B Installation, Service, and User's Guide, GA52-1333
IBM System Storage DS8000 Introduction and Planning Guide, GC35-0515
FICON Express2 Channel Performance Version 1.0, GM13-0702
System z10 Enterprise Class System Overview, SA22-1084
z/OS MVS System Commands, SA22-7627
z/OS MVS System Messages, SA22-7637
z/Architecture Principles of Operation, SA22-7832
System z Input/Output Configuration Program User's Guide for ICP IOCP, SB10-7037
System z10 Processor Resource/Systems Manager Planning Guide, SB10-7153
DFSMS Storage Administration Reference, SC26-7402
HMC Operations Guide, SC28-6830
HMC Operations Guide, SC28-6873
System z10 Support Element Operations Guide, SC28-6879
HCD User's Guide, SC33-7988
z/OS RMF User's Guide, SC33-7990
z/OS RMF Report Analysis, SC33-7991
z/OS RMF Performance Management Guide, SC33-7992
z/OS RMF Programmer's Guide, SC33-7994
System z Maintenance Information for Fiber Optic Links (ESCON, FICON, Coupling Links, and Open System Adapters), SY27-2597
IBM Fiber Optic Cleaning Procedure, SY27-2604
z/OS RMF Reference Summary, SX33-9033
Brocade Fabric OS Administrator's Guide, 53-1001185
High Performance FICON for System z Technical Summary for Customer Planning, ZSW03058USEN
Performance Considerations for a Cascaded FICON Director Environment Version 0.2x, Richard Basener and Catherine Cronin
http://www-03.ibm.com/servers/eserver/zseries/library/techpapers/pdf/gm130237.pdf
IBM System z9 I/O and FICON Express4 Channel Performance, ZSW03005USEN IBM System z10 I/O and High Performance FICON for System z Channel Performance, ZSW03059USEN IBM System Storage SAN768B, TSD03037USEN IBM System Storage SAN768B Fiber Backbone Interoperability Matrix Cisco MDS 9506 for IBM System Storage, TSD00069USEN Cisco MDS 9509 for IBM System Storage, TSD00070USEN Cisco MDS 9513 for IBM System Storage, TSD01754USEN Cisco MDS 9506, 9509, 9513 for IBM System Storage Directors Interoperability Matrix
Online resources
These Web sites are also relevant as further information sources:
Fibre Channel standard Web site
http://www.t11.org
Brocade Communications Systems, Inc. Web site
http://www.brocade.com
Cisco Systems, Inc. Web site
http://www.cisco.com
Index
A
active CSS 108, 125, 143, 290 Active Zone Config 188 Advanced Performance Tuning Policy 176 American National Standard Institute (ANSI) 4 attached FICON Director error condition 44 CHPID Mapping Tool (CMT) 103, 120, 138, 255 CHPID number 113, 131, 149, 209, 257 CHPID statement 38, 103, 120, 138, 257 CHPID type FC 10, 26 FCP 26 FCV 9, 29 CHPIDS 12, 26, 103, 119, 138, 268 Cisco MDS 9506 44 Cisco MDS 9509 44 Cisco MDS 9513 44 CNTLUNIT CUNUMBR 104, 120, 139 CNTLUNIT statement 104, 120, 139, 257 CNTUNIT statement 122 command mode 14 CCW operation 20 channel program 21 command response (CMR) 204 CONFIG command 108, 125, 143 connecting FICON Director Domain ID 42 control processor (CP) 45, 153 Control unit logical link address 298 logical-path establishment 14 login 14 node-identifier acquisition 14 serial number 300 state-change registration 14 type 300 control unit 271 N_Port address 39 node descriptor 24 control unit (CU) 1, 8, 14, 34, 99, 115, 133, 185, 206, 218, 255, 272, 295 Control Unit Port (CUP) 39, 152 control units as specified 138 D000 120, 139 CPC icon 268 CRC errors 91 CSS 2 103, 120, 138, 273 CSS.CHPI D 2.7E 129, 147 2.7F 111 CU 4044 105, 123, 140 Cyclical Redundancy Checking (CRC) 41
B
buffer credit 23, 39, 152, 211 build IOCDS 107, 124, 142 Business Class (BC) 28, 289
C
Cache Fast Write (CFW) 302 cascaded FICON Director Additional configuration steps 164 CCW 17 other parameters 17 CCW chaining 204 CCW execution 20 control unit 20 Central Processor Complex (CPC) 107, 124, 202, 255, 266, 288 CF CHP 109, 126, 144, 297 changing passwords on Director 162 Channel feature 26 Channel Information PD panel 272 window 112, 130, 148 channel path functional details 127 shared definition 122 channel path (CP) 11, 14, 99, 115, 119, 133, 220, 235, 255, 266 Channel Path Identifier 12 channel paths, CUs (CPC) 138, 219 Channel Problem Determination 272, 275, 277, 279281, 284 Channel Subsystem channel path 257 logical control units 291 logical partition 289 not-operational reason 237 Channel Subsystem (CSS) 288 Channel-to-Channel (CTC) 11, 26 CHPID 100, 116, 135, 220, 270, 295 CHPID 7E 103, 116, 135, 273 current status 126 direct connection 105 CHPID detail 111, 129, 147
D
DASD Fast Write (DFW) 302 Data Center Fabric Manager (DCFM) 47, 152155, 226, 246 DCFM server 153, 229, 246 IP address 162
ODBC password 162 welcome window 162 defined channel path path status 301 Device Number 109, 127, 145, 220, 275, 292, 306 option Search 277 Device Status (DS) 109, 127, 145, 280, 293 device type 110, 128, 146, 224, 257, 292 cam display information 301 DEVSERV command 292 other uses 301 Devserv command 300 Director port (DP) 208 Domain ID 37, 135, 152, 257 DS P 225, 300 DS8000 storage control unit 108, 125, 143 controller 102, 118, 137 device 100, 116, 134 system 100, 116, 134 Dynamic Load Sharing (DLS) 49, 152 Dynamic Path Selection (DPS) 49, 233 Dynamic Pathing Switch (DPS) 231
E
E_Port 6, 179, 183 Enterprise Class (EC) 244, 289 Enterprise Fabric Connectivity Manager (EFCM) 155 Environmental Record, Editing, and Printing (EREP) 222 ESCON Director 39 ESCON solution 9 Establish Logical Path (ELP) 15 Extended Link Service (ELS) 19 Extended Subchannel Logout Data (ESLD) 223, 233
F
F_Port 6 fabric 6 Fabric Manager (FM) 54 fabric port (F_PORT) 6, 54, 179 FC link 6, 16, 35 FC-SB-2, SB-3, SB-4 Web site 5 FCTC control unit 11, 105, 140, 305 Feature Check 100, 116, 135 Feature code 26, 46, 136, 249 fiber optic cable 6, 38, 101102, 134135 CHPID 82 105 Fibre Channel adapter 17 Arbitrated Loop 6 architecture 1, 39 bit error 223 data traffic 55 fabric 39 link 18 logical switch 197 Physical 4 Physical and Signaling Standard 8 physical framing 5
physical interface 4 port 222 Protocol 10, 19, 43, 222 Protocol traffic 26 Routing 48 Security Protocol 54 Single Byte Command Sets-3 13 Single Byte Command Sets-4 13 Single-Byte-3 8 Single-Byte-4 8 standard 4 Switch Fabric and Switch Control Requirements 8 Fibre Channel (FC) 3, 13, 34, 197, 202 FICON 1, 3, 13, 33–34, 115, 151, 201, 255, 265, 305 FICON advantage 9 FICON channel 5, 14, 34, 99, 115, 133, 185, 220–221, 247, 257, 268, 271 CHPID type 257 different flavors 203 fiber links 108 full duplex data flow capabilities 306 IOS051I message 232 PEND time ends 204 physical connectivity 37 physical transmission path 36 topologies 34 FICON channel (FC) 210 FICON Channel-to-Channel (FCTC) 121, 140 FICON Director 1, 10, 15, 34, 104, 116, 133, 150–152, 203, 218, 244, 251, 257, 272, 297, 305 arbitrary number 257 Basic functions 37 control unit 35 cooling capability 37 CP cards 154 CUP port 121 default buffer credit assignment 248 detailed configuration 210 Domain ID range 38 fabric binding database 42 FC links 35 Fiber optic cables 144 IBM qualification testing 44 in-band management 39 ISL port 190 Other settings 125 physical port 11 port address 185 switch module 37 Switch number 282 FICON environment 4, 33, 57, 97, 99, 119, 133, 175, 199, 217, 225, 244, 287, 305 FICON Express 30 FICON Express2 3, 16, 101, 118, 136 FICON Express4 10KM LX 26 10KM LX feature 27 4 KM LX feature 28 4KM LX 26 CHPIDs 27
feature 9, 26 SX 26 FICON Express4 10KM LX 27 FICON Express4 4KM LX 28 FICON Express4 SX 28 FICON Express4-2C 4 KM LX 28 FICON Express4-2C SX 28 FICON Express8 3, 16, 100, 116, 134, 244 FICON FC-SB-3 and FC-SB-4 Web site 19 FICON feature 8, 13, 136, 247 FICON Purge Path Extended 222 FL_Port 6
G
G_Port 6 Gbps 16, 37, 181, 244 graphical user interface (GUI) 54, 154
H

Hardware Configuration Definition (HCD) 8, 38, 255, 289 Hardware Configuration Manager (HCM) 255, 266 Hardware Management Console 110, 128, 146, 219, 265, 289 Hardware System Area (HSA) 14, 103, 119, 138, 289 HCD User 103, 119, 138, 256 Host Bay Adapter (HBA) 221, 297 HYPERPAV ALIAS (HA) 299

I

I/O Configuration Data 255 Data Set 99, 115, 133, 255, 267, 289 Program 255, 282, 289 I/O definition 103, 119, 137 file 106–107, 124, 142, 206, 256 I/O definition file (IODF) 103, 119, 138 I/O device 11, 17, 255, 298 unit addresses 258 I/O information 206 I/O operation 12, 14, 34, 206, 231, 272, 292 exchange pair 19 FICON architecture 16 MIH value 22 I/O performance measurement points 203 I/O priority queuing (IOQ) 213 I/O processor (IOP) 209 I/O request 16, 42, 209, 235–236, 289 FC-FS FC-2 frame 18 I/O specification 103, 119, 138 IBM RMF Web site 203 IBM SAN384B 44 IBM SAN768B 44 IBM System Storage Web site 93 in-order delivery (IOD) 49, 152 Input Output Configuration Program (IOCP) 8, 38, 99, 115, 133 Input/output (I/O) 3, 13, 203, 256 Input/Output architecture 11 Input/Output Supervisor (IOS) 16, 126 Insistent Domain (ID) 152 Inter-Chassis Link (ICL) 48 Inter-Switch Link (ISL) 35, 38, 135, 152, 175 Invalid Transmission Words 91 IOCP User 256 IODEVICE Address 104, 121, 139 IODEVICE statement 103, 119, 138, 258 PARTITION keyword 105 IODF data (ID) 107, 123, 141, 290 IOS 220 IOS command 108, 125, 143 IP address 55, 152, 154, 246, 260 ISL Trunking 54 ISLs 35, 135, 152, 245

K

km transceivers 28

L

L_Port 6 LC Duplex 27 LC Duplex connector 27, 101, 117, 135 Link-Incident-Record Registration (LIRR) 14 logical control unit (LCU) 100, 117, 135, 207, 247, 281, 291 logical partition channel subsystem 292 high number 305 logical partition (LP) 12, 51, 105–106, 122, 141, 220, 257, 275, 289, 305 Logical Path 15, 35, 221, 276 Logical switch scalability.Partition Brocade SAN768B 50 logical switch 50, 152, 170, 229, 247 management isolation 50 port numbering 173 logical switch types 50 Long Wave (LW) 46 Longitudinal Redundancy Checking (LRC) 41 longwave laser 101, 117, 134–135 Loss of Sync 91 lossless DLS 175, 248 LPAR A23 105, 122, 140, 288 CU 5034 105 LPARs 104, 116, 134
M
MIF ID 257, 273, 289 Missing Interrupt Handler (MIH) 21, 233 MM 62.5 30 mode conditioning patch (MCP) 27–28 MULTIPLE IMAGE FACILITY (MIF) 288 Multiple Image Facility (MIF) 12
N

N_Port 4, 14 Native FICON 26, 103, 120, 138, 210 average number 210 NL_Port 6 node 6 Node Descriptor (ND) 16, 163, 222, 274, 288 Node-Element Descriptor (NED) 296

O

operating system 17, 39, 100, 116, 134, 277, 295, 306 FICON Express support 137 FICON Express8 support 102 Operation Request Block additional parameters 17 Operation Request Block (ORB) 17

P

Parallel Access Volume (PAV) 8, 212 Path Available Mask (PAM) 277 PATH keyword 103, 120, 138, 257 Path Not Operational Mask (PNOM) 231 Path Operational Mask (POM) 231, 277, 297 PCHID 5A2 116, 135, 273 Essential information 129 owning image 129 PCHID 5A3 100 Essential information 111 owning image 111 PCHID detail 110, 128, 146 PCHID number 270 PCHIDs 268 performance FICON 9, 13 performance monitoring 203 point-to-point configuration 11, 100, 116, 257 point-to-point topology 35, 99, 115, 255 FICON environment 99 port 6 port address 8, 38, 139, 177, 185, 247, 280 port fencing 91 port number 38, 100, 117, 120, 135, 149, 161, 197, 261 port type 5, 16, 152, 179, 214 Port-Based Routing (PBR) 152 Power-on Reset (POR) 103, 119, 138 PPRC Secondary Suspend (PSEC-SUSP) 303 problem determination (PD) 108, 126, 144, 217, 265 Processor Card (PC) 160 Purge Path Extended (PPE) 222, 285

Q

QoS SID/DID pairs 49 Quality of Service (QoS) 48, 152

R

Redbooks Web site 309 Contact us xiii Registered State Change Notification (RSCN) 177 Remote Node Identification (RNID) 274, 298 Resource Measurement Facility (RMF) 202–204 RMF 204

S

SCP CU Neodymium 110, 127, 146, 237, 296 Secure File Transfer Protocol (SFTP) 54 selected central processor complex data sets 256 Setting the display to hex 163 Short Wave (SW) 46 Simple Network Management Protocol (SNMP) 37 single mode (SM) 47, 101, 117, 134 Single Object Operation (SOO) 110, 128, 146, 266 Small Form Factor Pluggable (SFP) 37 State-Change Registration (SCR) 14 Storage Area Network (SAN) 43 storage device 34, 100, 116, 134, 222 Store System Information (STSI) 288 subchannel 276, 279 Subchannel Logout Handler (SLH) 223 subchannel number 275 subchannel set 12, 108, 125, 143, 289 Support Element (SE) 240, 255, 265 switch address 11, 37, 117, 135, 174, 257 Switch Connection Control (SCC) 153, 192 switched fabric 11 System Activity Display (SAD) 201 System Control Program (SCP) 11, 298 System Information (SI) 288 System Management Facility (SMF) 203 System z 3, 13, 33, 100, 116, 244, 255, 266, 305 High Performance FICON 26 System z environment 8 System z server 134 System z10 8, 13, 244, 289 Systems Automation (SA) 39

T

Tag 274 TE port 54 terminology 4 TI zone 190 Traffic Isolation (TI) zone 48 Traffic Isolation Routing feature 48 transceiver 126, 144 Transport Control Word (TCW) 16 Transport Indirect Data Address Word (TIDAW) 22 transport mode 14 I/O operation 14 Trivial file transfer protocol (TFTP) 43 trunk or ISL (TI) 152

U

UCB Logical Path Mask (UCBLPM) 297 unit address (UA) 9, 18, 258, 275 Unit Control Block (UCB) 17, 300 Unit Control Word (UCW) 291 Unrepeated distance 9, 27 Upper Level Protocol (ULP) 5
V
Virtual Fabric 51, 170 Virtual Fabrics feature 50 Virtual ISL 49 Virtual Storage Area Network (VSAN) 54
W
Wave Division Multiplexing (WDM) 196 working CHPID normal indication 271 World Wide Name 7 World Wide Node Name 16, 113, 131, 149, 193 World Wide Node (WWN) 113, 131, 149, 169 World Wide Node Name (WWNN) 42 World Wide Node_Name (WWNN) 7 World Wide Port Name 16, 113, 131, 149, 222 World Wide Port (WWP) 113, 131, 149 World Wide Port_Name (WWPN) 7
Z
z10 BC 12, 28, 291 z10 EC 12, 30, 247, 291 FICON connectivity 247 z10 server 100, 116, 134 configuration tasks 110 corresponding LPAR 140 designated ports 128 fiber optic cable link 113, 149 FICON Express8 feature 102 PCHID 5A2 128 PCHID 5A3 110 standard feature 100 zHPF feature 101, 117, 135, 248 zHPF on System z (z/OS) 100, 116, 134, 218 Zone Config 187
Back cover
Topologies, concepts, and terminology
Planning, implementation, and migration guidance
Realistic examples and scenarios
This IBM Redbooks publication covers the planning, implementation, and management of IBM System z FICON environments. It discusses the FICON and Fibre Channel architectures, terminology, and supported topologies. The book focuses on the hardware installation and the software definitions that are needed to provide connectivity to FICON environments. You will find the configuration examples required to support FICON control units, FICON Channel-to-Channel (FCTC), and FICON Directors. The book also discusses utilities and commands that are useful for monitoring and managing the FICON environment. The target audience for this document includes IT architects, data center planners, SAN administrators, and system programmers who plan for and configure FICON environments. The reader is expected to have a basic understanding of IBM System z10 and IBM System z9 hardware, HCD or IOCP, and a broad understanding of the Fibre Channel and FICON architectures.