Front cover
Merging Systems into a Sysplex
Position for PSLC and WLC benefits
Frank Kyne
Jeff Belot
Grant Bigham
Alberto Camara Jr.
Michael Ferguson
Gavin Foster
Roger Lowe
Mirian Minomizaki Sato
Graeme Simpson
Valeria Sokal
Feroni Suhood
ibm.com/redbooks
International Technical Support Organization
December 2002
SG24-6818-00
Take Note! Before using this information and the product it supports, be sure to read the general
information in “Notices” on page xi.
When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the information in
any way it believes appropriate without incurring any obligation to you.
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
The team that wrote this redbook. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvi
Chapter 1. Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Why this book was produced . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Starting and ending points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.1 BronzePlex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.2 GoldPlex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.3 PlatinumPlex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Structure of each chapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3.1 Are multiple subplexes supported or desirable . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3.2 Considerations checklist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3.3 Implementation methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3.4 Tools and documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4 Terminology and assumptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4.1 ‘Plexes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.5 “Gotchas” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.5.1 Duplicate data set names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.5.2 Duplicate volsers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.5.3 TCP/IP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.5.4 System Logger . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.5.5 Sysplex HFS sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.5.6 Legal implications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.6 Updates to this document . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
12.3 Considerations for merging TCPplexes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
12.4 Tools and documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
20.10 Eligible Device Table and esoterics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
20.11 Single or multiple OS Configs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
20.12 ESCON logical paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
20.12.1 DASD control unit considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
20.12.2 Tape control unit considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
20.12.3 Non-IBM hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
20.12.4 Switch (ESCON/FICON) port considerations . . . . . . . . . . . . . . . . . . . . . . . . . 344
20.13 CF connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
20.14 FICON/ESCON CTCs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
20.14.1 ESCON CTC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
20.14.2 FICON CTC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
20.14.3 Differences between ESCON and FICON CTC . . . . . . . . . . . . . . . . . . . . . . . 346
20.15 Sysplex Timer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
20.15.1 LPAR Sysplex ETR support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
20.15.2 Parmlib . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
20.16 Console connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
20.17 Hardware Management Console (HMC) connectivity . . . . . . . . . . . . . . . . . . . . . . . 348
20.18 Tools and Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
20.18.1 Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
20.18.2 Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, and
distribute these sample programs in any form without payment to IBM for the purposes of developing, using,
marketing, or distributing application programs conforming to IBM's application programming interfaces.
ActionMedia, LANDesk, MMX, Pentium and ProShare are trademarks of Intel Corporation in the United
States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems,
Inc. in the United States, other countries, or both.
C-bus is a trademark of Corollary, Inc. in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
SET, SET Secure Electronic Transaction, and the SET Logo are trademarks owned by SET Secure Electronic
Transaction LLC.
Other company, product, and service names may be trademarks or service marks of others.
This IBM Redbook provides information to help Systems Programmers plan for merging
systems into a sysplex. zSeries systems are highly flexible systems capable of processing
many workloads. As a result, there are many things to consider when merging independent
systems into the more closely integrated environment of a sysplex. This book will help you
identify these issues in advance and thereby ensure a successful project.
Frank Kyne is a Senior I/T Specialist at the International Technical Support Organization,
Poughkeepsie Center. He has been an author of a number of other Parallel Sysplex
Redbooks. Before joining the ITSO four years ago, Frank worked in IBM Global Services in
Ireland as an MVS Systems Programmer.
Jeff Belot is an OS/390 Technical Consultant at IBM Global Services Australia. He has 28
years of experience in the mainframe operating systems field. His areas of expertise include
sysplex, performance and security.
Alberto Camara Jr. is a Customer Support Representative working for Computer Associates
in Brazil. He has 15 years of experience in the operating systems and mainframe field. His areas
of expertise include storage, performance and security.
Michael Ferguson is a Senior I/T specialist in the IBM Support centre in Australia. He has 16
years of experience in the OS/390 software field. His areas of expertise include Parallel
Sysplex, OS/390 and Tivoli OPC. He has been an author of a number of Parallel Sysplex
Redbooks. Michael teaches both Parallel Sysplex and Tivoli OPC courses in the Asia Pacific
region, as well as providing consulting services to customers in these areas.
Gavin Foster is an OS/390 Technical Consultant working for IBM Global Services in
Australia. He has 16 years of experience in the mainframe operating systems field. His areas
of expertise include systems programming and consultancy on system design, upgrade
strategies and platform deployment.
Roger Lowe is a Senior S/390 Technical Consultant in the Professional Services division of
Independent Systems Integrators, an IBM Large Systems Business Partner in Australia. He
has 18 years of experience in the operating systems and mainframe field. His areas of
expertise include the implementation and configuration of the OS/390 and z/OS operating
system and Parallel Sysplex.
Valeria Sokal is a System Programmer at a large IBM customer in Brazil. She has 12 years of
experience working on many aspects of MVS, including building and managing very large
sysplexes. She has been involved in writing a number of other IBM Redbooks.
Feroni Suhood is a Senior Performance Analyst at IBM Global Services Australia. He has 20
years of experience in the mainframe operating systems field. His areas of expertise include
sysplex, performance and hardware evaluation.
Rich Conway
International Technical Support Organization, Poughkeepsie Center
Robert Haimowitz
International Technical Support Organization, Poughkeepsie Center
Jay Aiken
IBM USA
Cy Atkinson
IBM USA
Paola Bari
IBM USA
Frank Becker
Sparkassen-Informatik-Services West GmbH, Germany
Charlie Burger
IBM USA
Jose Castano
IBM USA
Angelo Corridori
IBM USA
Vic Cross
Independent Systems Integrators, Australia
Greg Daynes
IBM USA
Jerry Dearing
IBM USA
Dave Draper
IBM UK
Scott Fagen
IBM USA
Kingsley Furneaux
IBM UK
Johnathan Harter
IBM USA
Evan Haruta
IBM USA
Axel Hirschfeld
IBM Germany
Gayle Huntling
IBM USA
Michael P Kasper
IBM USA
John Kinn
IBM USA
Paul M. Koniar
Metavante Corporation, USA
Matti Laakso
IBM Finland
Tony Langan
IBM Canada
Jim McCoy
IBM USA
Jeff Miller
IBM USA
Bruce Minton
CSC USA
Marcy Nechemias
IBM USA
Mark Noonan
IBM Australia
Bill Richardson
IBM USA
Alvaro Salla
Maffei Informática, Brazil
Sim Schindel
IBM Turkey
Norbert Schlumberger
IBM Germany
William Schoen
IBM USA
Gregory Silliman
IBM USA
Dave Sudlik
IBM USA
Kenneth Trowell
IBM Australia
Tom Wasik
IBM USA
Bob Watling
IBM UK
Gail Whistance
IBM USA
Mike Wood
IBM UK
Bob Wright
IBM USA
Dave Yackel
IBM USA
Comments welcome
Your comments are important to us!
We want our Redbooks to be as helpful as possible. Send us your comments about this or
other Redbooks in one of the following ways:
Use the online Contact us review redbook form found at:
ibm.com/redbooks
Send your comments in an Internet note to:
[email protected]
Mail your comments to the address on page ii.
Chapter 1. Introduction
This chapter discusses the reason for producing this book and introduces the structure used
in most of the subsequent chapters. It also describes some terms used throughout the book,
and some assumptions that were used in its production. Summary information is provided
about which system components must be merged as a set, and the sequence of those
merges.
While no two consolidation exercises are identical, there are a number of aspects that apply
to many, if not all, situations. This book was produced in an effort to make these
consolidations easier to plan and implement, and to avoid customers having to constantly
“reinvent the wheel”.
This document discusses the things to be considered when an existing system (or group of
systems) is to be moved into an existing sysplex. It does not cover the considerations for
moving applications from one system image to another. Such moves are more common and
generally less complex than moving a complete system into a sysplex.
There are endless ways to configure z/OS environments, and it is unlikely that any two
customers’ environments are exactly the same. However, in order to write this book, we had
to make some assumptions about the starting configuration, and the objectives for doing the
merge.
Before you can proceed with the merge project, there is a very fundamental decision that
needs to be made: are you going to share all your user data between all the systems? This
decision needs to be based on many criteria, including:
Who uses the system? For example, if the users of the incoming system work for a
different company than the current users of the target sysplex systems, you will probably
not want to share data between the different systems.
An additional important consideration that we have to point out is that the original concept of a
sysplex is a group of systems with similar characteristics, similar service level objectives, and
similar workloads, sharing data between the systems and doing dynamic workload balancing
across all systems. While this is the ideal Parallel Sysplex implementation, which will allow a
customer to derive maximum benefit from the technology, we realize that many customers
may not, for business or technical reasons, be able to implement it. Therefore, asymmetric
configurations, where some of the workloads on each system are not shared, are certainly
supported. However, if you find that you have a Parallel Sysplex in which the majority of
systems and/or workloads are completely disjoint—that is, they have nothing in
common—then you should give additional consideration as to whether those systems should
really reside in the same sysplex.
Specifically, sysplex was not designed to support subplexes—that is, subsets of the sysplex
that have nothing in common with the other members of the sysplex except that they share
the sysplex couple data sets and Sysplex Timer. For example, the design point is not to have
development and production systems in the same sysplex. While some products do support
the idea of subplexes (DFSMShsm, for example), others do not (TCP/IP, for example).
While Parallel Sysplex provides the capability to deliver higher application availability than any
other solution, the closer relationship between the systems in a sysplex means that it is
possible for a problem on one system to have an impact on other members of the sysplex.
Such problems, while rare, are more likely to arise if the sysplex consists of widely disparate
systems (for example, test and production). For the highest levels of availability, therefore, we
recommend against mixing very different types of systems that are completely unrelated in
the same sysplex.
On the other hand, the pricing mechanisms implemented through Parallel Sysplex License
Charging (PSLC) and Workload License Charging (WLC) do unfortunately provide a financial
incentive to place as many systems as possible in the same sysplex.
At the end of the day, if you are in the position of deciding whether to merge a completely
unrelated system into a sysplex, you need to balance the financial and technical (if any)
benefits of the merge against the possible impact such a move could have on the availability
of all the systems in the sysplex.
Similarly, if the merge would result in a very large number of systems in the sysplex (and
those systems are not all related to each other), you need to consider the impact to your
business if a sysplex outage were to take out all those systems. While the software savings
could be significant (and are easily calculated), the cost of the loss of all systems could also
be significant (although more difficult to calculate, unfortunately).
Once you have decided how the user DASD are going to be handled, and have satisfied
yourself with the availability and financial aspects, you are in a position to start planning for
the merge. This document is designed to help you through the process of merging each of the
affected components. However, because there are so many potential end points (ranging
from sharing nothing to sharing everything), we had to make some assumptions about the
objective for doing the merge, and how things will be configured when the exercise is
complete. We describe these assumptions in the following sections.
1.2.1 BronzePlex
Some customers will want to move systems that are completely unrelated into a sysplex
simply to get the benefits of PSLC or WLC charging. In this case, there will be practically no
sharing between the incoming system and the other systems in the target sysplex. This is a
typical outsourcing configuration, where the sysplex consists of systems from different
customers, and there is no sharing of anything (except the minimal sharing required to be part
of a sysplex) between the systems. We have used the term “BronzePlex” to describe this type
of sysplex.
1.2.2 GoldPlex
Other customers may wish to move the incoming system into the sysplex, and do some fairly
basic sharing, such as sharing of sysres volumes, for example. In this case, the final
configuration might consist of more than one JES2 MAS, and two logical DASD pools, each of
which is only accessed by a subset of the systems in the sysplex. We have included in this
configuration all of the components that can easily be merged, and do not require a number of
other components to be merged at the same time. This configuration provides more benefits,
in terms of improved system management, than the BronzePlex, so we have called this
configuration a “GoldPlex”.
1.2.3 PlatinumPlex
The third configuration we have considered is where the objective is to maximize the benefits
of sysplex. In this case, after the merge is complete, the incoming system will be sharing
everything with all the other members of the sysplex. So there will be a single shared sysres
between all the systems, a single JES MAS, a single security environment, a single
automation focal point, and basically just one of everything in the sysplex. This configuration
provides the maximum in systems management benefits and efficiency, so we have called it
the “PlatinumPlex”.
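The difference between the three target configurations is essentially the set of resources that ends up being shared. As a rough illustration only (the resource names below are our own shorthand for the items discussed above, not official terms), the sharing scope of each configuration could be sketched as:

```python
# Illustrative sharing matrix for the three target configurations.
# BronzePlex shares only what sysplex membership itself requires;
# GoldPlex adds basic sharing such as sysres; PlatinumPlex shares everything.
SHARED_RESOURCES = {
    "BronzePlex": {"sysplex CDS", "Sysplex Timer"},
    "GoldPlex": {"sysplex CDS", "Sysplex Timer", "sysres"},
    "PlatinumPlex": {"sysplex CDS", "Sysplex Timer", "sysres",
                     "JES MAS", "security database", "user DASD",
                     "automation focal point"},
}

def is_shared(plex_type: str, resource: str) -> bool:
    """True if the named resource is shared in that target configuration."""
    return resource in SHARED_RESOURCES[plex_type]
```

For example, `is_shared("BronzePlex", "user DASD")` is `False`, reflecting the fact that a BronzePlex shares nothing beyond the minimum required to be part of a sysplex.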
Depending on which type of plex you want to achieve and which products you use, some or all
of the chapters in this book will apply to you. Table 1-1 contains a list of the topics addressed
in the chapters and indicates which ones apply to each plex type. However, we recommend
that even if your objective is a BronzePlex you should still read all the chapters that address
products in your configuration, to be sure you have not overlooked anything that could impact
your particular environment.
System Logger X X X
WLM X X X
GRS X X X
Language Environment X X
SMF X X
JES2 X X
Shared HFS X
Parmlib/Proclib X X
VTAM X X X
TCP X X X
RACF X
SMS X
HSM X
RMM X
OPC X X X
Automated operations X X X
Physical considerations X X X
Software Licensing X X X
Operations X X X
Maintenance X X X
In addition, in each chapter there is a table of considerations specific to that topic. One of the
columns in that table is headed “TYPE”, and the meaning of the symbols in that column is
as follows:
B This consideration applies if the target environment is a BronzePlex.
G This consideration applies if the target environment is a GoldPlex.
P This consideration applies if the target environment is a PlatinumPlex.
Some considerations apply across all target environments, and some only apply to a subset
of environments.
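For illustration, the B/G/P symbols could be used mechanically to filter a consolidated checklist down to the considerations relevant for a chosen target environment. The checklist entries below are hypothetical examples of our own, not taken from any chapter:

```python
# Illustrative use of the B/G/P symbols from the per-chapter considerations
# tables. The entries here are hypothetical examples, not from the book.
considerations = [
    {"item": "Check for duplicate data set names", "type": "BGP"},
    {"item": "Share sysres volumes",               "type": "GP"},
    {"item": "Merge security databases",           "type": "P"},
]

def applies_to(target: str, entry: dict) -> bool:
    """target is 'B' (BronzePlex), 'G' (GoldPlex), or 'P' (PlatinumPlex)."""
    return target in entry["type"]

# A BronzePlex target keeps only the considerations flagged with 'B'.
bronze_items = [e["item"] for e in considerations if applies_to("B", e)]
```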
We also thought it would be helpful to identify up front a suggested sequence for merging the
various components. For example, for some components, most of the merge work can and
should be done in advance of the day the system is actually moved into the sysplex. The
merge of other components must happen immediately prior to the system joining the sysplex.
And still others can be merged at some time after the system joins the sysplex. In Table 1-2
on page 6, we list each of the components addressed in this book, and indicate when the
merge work can take place. This table assumes that your target environment is a
PlatinumPlex. If your target is one of the other environments, refer to the specific chapter for
more information.
WLM Before
GRS Before
VTAM Before
TCP Before
Maintenance Before
In Table 1-2 on page 6, the entries in the “Sequence” column have the following meanings:
Before The bulk of the work required to merge these components can and
should be carried out in the weeks and months preceding the day
the incoming system is brought up as a member of the target
sysplex.
Immediately preceding At least some of the work to merge this component must be done
in the short period between when the incoming system is shut
down, and when it is IPLed as a member of the target sysplex.
The other chapters are not related to a specific product, and may address something more
esoteric, like data set naming conventions, or operator procedures. It does not make sense to
apply the same structure to those chapters as to the product ones, so each of those chapters
will be structured in a way that is suitable for that particular topic.
In this section in each of the relevant chapters, we discuss whether it is possible to run with
multiple subplexes, and if so, the advantages and disadvantages of running one or multiple
subplexes.
1.3.4 Tools and documentation
In this section, we describe tools that may be available to help you do the merge, and tell you
where they may be obtained. In addition, if there is other documentation that would be useful
during this exercise, we provide a list of those sources.
This book discusses the considerations for merging existing systems into a sysplex. A sysplex
is the set of systems that share a common sysplex CDS, and all have the same sysplex
name. A subplex is a subset of those systems that have something in common. Throughout
this document, you will see a lot of use of the term plex. The following section discusses the
subplexes that we talk about, and the terms that we use to refer to them.
1.4.1 ‘Plexes
In this document we frequently talk about the set of systems that share a particular resource.
In order to avoid repetitive use of long-winded terms like “all the systems in the sysplex that
share a single Master Catalog”, we have coined terms like CATALOGplex to describe this set
of systems. The following are some of these terms we use later in this document:
CATALOGplex The set of systems in a sysplex that share a set of user catalogs.
DASDplex The set of systems in a sysplex that all share the same DASD
volumes.
ECSplex The set of systems that are using Enhanced Catalog Sharing (ECS) to
improve performance for a set of shared catalogs. All the systems
sharing a given catalog with ECS must be in the same GRSplex.
GRSplex The set of systems that are in a single GRS complex and serialize a
set of shared resources using either a GRS Ring or using the GRS
Star structure in the Coupling Facility (CF).
JESplex The set of systems, either JES2 or JES3, that share a single spool. In
JES2 terms, this would be a single MAS. In JES3 terms, this would be
a Global/Local complex.
HFSplex An HFSplex is a collection of z/OS systems that share the OMVS
Couple Data Set (CDS). The OMVS CDS contains the sysplex-wide
mount table and information about all participating systems, and all
mounted file systems in the HFSplex.
HMCplex The set of Central Processing Complexes (CPCs—also sometimes
referred to as a CEC or CPU) that can be controlled from a single
HMC. It is possible to have just one sysplex in an HMCplex, many
sysplexes per HMCplex, or more than one HMCplex per sysplex.
HSMplex The set of systems that share a set of DFSMShsm Journals and
CDSs.
OAMplex The set of OAM instances that are all in the same XCF group. The
scope of the OAMplex must be the same as the DB2 data sharing group.
Table 1-3 summarizes how many of each of these ‘plexes are possible in a single sysplex.
Note that there are often relationships between various plexes - for example, if you have a
single SMSplex, you would normally also have a single RACFplex. These relationships are
discussed in more detail in the corresponding chapters of this book.
CATALOGplex X
DASDplex X
ECSplex X
GRSplex X
JESplex X
HMCplex X
HSMplex X
OAMplex X
OPCplex X
RACFplex X
RMFplex X
RMMplex X
SMSplex X
‘Plex 1 per sysplex >1 per sysplex
TCPplex X
VTAMplex X
WLMplex X
There are also other plexes, which are not covered in this book, such as BatchPipesPlex,
CICSplex, DB2plex, IMSplex, MQplex, TapePlex, VSAM/RLSplex, and so on. However, we
felt that data sharing has been adequately covered in other books, so we did not include
CICS, DB2, IMS, MQSeries, or VSAM/RLS in this book.
1.5 “Gotchas”
While adding completely new systems to an existing sysplex is a relatively trivial task, adding
an existing system (complete with all its workloads and customization) to an existing sysplex
can be quite complex. And you do not have to be aiming for a PlatinumPlex for this to be the
case. In some ways, trying to establish a BronzePlex can be even more complex, depending
on how your systems are set up prior to the merge and how stringent the requirements are to
keep them apart.
In this section, we review some situations that can make the merge to a BronzePlex or
GoldPlex configuration difficult or maybe even impossible. We felt it was important to highlight
these situations up front, so that if any of them apply to you, you can investigate these
particular aspects in more detail before you make a final decision about how to proceed.
1.5.1 Duplicate data set names
Even a BronzePlex (apparently the simplest of the three configurations) does not keep the
systems completely separated. In a BronzePlex, the systems in the two subplexes cannot see
each other’s DASD, and there are completely separate catalog structures, RACF databases,
production control systems, and so on. However, we need to look at what is not separate.
The first thing is GRS. In a multi-system environment, you would always treat data sets as
global resources, meaning that any time you want to update a data set, GRS will check that
no one else in the GRSplex (either on the same system or another system) is using that data
set. However, what happens if you have a situation like that shown in Figure 1-1?
Figure 1-1   Duplicate data set names: SUBPLEXA (systems SYSA and SYSB) and SUBPLEXB (system
FRED) each contain a data set named BIG.PAY.LOAD
In this case, systems SYSA and SYSB are in one subplex, SUBPLEXA (these might be the
original target sysplex systems). System FRED is in another subplex, SUBPLEXB (this might
be what was the incoming system). There are two data sets called BIG.PAY.LOAD, one that is
used by system FRED, and another that is shared between systems SYSA and SYSB.
If the SYSA version of the data set is being updated by a job on SYSA, and someone on
system FRED tries to update the FRED version of the data set, GRS will make the FRED
update wait because it thinks someone on SYSA is already updating the same data set.
This is called false contention, and while it can potentially happen on any resource that is
serialized by GRS, it is far more likely to happen if you have many duplicate data set names,
or even if you have a small number of duplicate names, but those data sets are used very
frequently.
To identify if this is likely to cause a problem in your installation, the first thing to do is to size
the magnitude of the problem. A good place to start is by looking for duplicate aliases in the
Master Catalogs. If you do not have a “clean” Master Catalog (that is, the Master Catalog
contains many data sets other than those contained on the sysres volumes), then you should
also check for duplicate data set entries in the Master Catalogs. If these checks indicate that
duplicates may exist, further investigation is required. You might try running DCOLLECT
against all the volumes in each subplex and all the DFSMShsm Migration Control Data Sets
in each subplex, then use something like ICETOOL to display any duplicate records. While
this is not ideal (it only reflects the situation at the instant the job is run, and it does not read
information from the catalogs, such as for tape data sets), at least it will give you a starting
place. For more information about the use of DCOLLECT, refer to z/OS DFSMS Access
Methods Services for Catalogs, SC26-7394.
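The DCOLLECT/ICETOOL step described above is essentially a duplicate-key search across
two inventories of data set names. As a rough illustration of that logic only (this is not part of
the book's procedure, and real DCOLLECT output would be parsed from its documented
record layout; the names below are invented), a sketch in Python:

```python
from collections import defaultdict

def find_duplicates(subplex_a, subplex_b):
    """Return data set names that appear in both subplex inventories.

    subplex_a / subplex_b are iterables of data set names, for example
    as extracted from DCOLLECT data set records for each subplex.
    """
    seen = defaultdict(set)          # dsname -> set of subplexes using it
    for name in subplex_a:
        seen[name].add("SUBPLEXA")
    for name in subplex_b:
        seen[name].add("SUBPLEXB")
    return sorted(n for n, plexes in seen.items() if len(plexes) > 1)

# Hypothetical inventories: BIG.PAY.LOAD exists in both subplexes,
# so it is a candidate for false contention after the merge.
dups = find_duplicates(
    ["BIG.PAY.LOAD", "PROD.APPL.LOAD"],
    ["BIG.PAY.LOAD", "DEV.TEST.LOAD"],
)
```

Any name that appears in both lists is a candidate for GRS false contention after the merge
and should be investigated further.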
1.5.2 Duplicate volsers
Just as duplicate data set names are something you should avoid at all costs, a similar
concern is duplicate volsers, whether they are on DASD or tape. The chapters on merging
DFSMShsm and DFSMSrmm discuss this further; however, if your target environment is a
BronzePlex or a GoldPlex, you may not be merging those components and therefore may not
read that material. Duplicate volsers can cause confusion, operational errors, delayed IPLs
(because of duplicate volser messages, each of which must be responded to manually), and
potentially, integrity problems.
Another concern with duplicate volsers is in relation to PDSE extended sharing. Depending
on how the library is being accessed, there will be normal ENQ-type serialization
mechanisms—PDF, for example, serializes on the data set and member name. However, the
PDSE code has its own serialization mechanism in addition to any used by the application.
Unlike PDF, however, the PDSE code does not use the data set name for serialization; rather,
it uses the volser and the TTR of the DSCB to serialize access to the data set during create,
delete, and member update-in-place processing. Therefore, it is possible that two different
PDSE data sets, with different names, that reside on the same place on volumes with
duplicate volsers could experience false contention during certain processes.
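To make the exposure concrete, the following hypothetical sketch models the PDSE
serialization key as a (volser, TTR) pair, as described above. The data set names, volser, and
TTR value are invented for illustration; this is not the actual PDSE locking code:

```python
def pdse_lock_key(volser, ttr):
    """PDSE-style serialization key: volser plus DSCB TTR, *not* the
    data set name (unlike the ENQ-based serialization used by PDF)."""
    return (volser, ttr)

# Two different PDSEs on two different volumes that happen to share
# the volser WORK01, whose format-1 DSCBs sit at the same TTR:
key_a = pdse_lock_key("WORK01", 0x00010E)   # SUBPLEXA: SYS1.PROD.PDSE
key_b = pdse_lock_key("WORK01", 0x00010E)   # SUBPLEXB: DEV.TEST.PDSE

# Same key, so create/delete/member update-in-place on one data set
# can falsely contend with the other, even though they are unrelated.
assert key_a == key_b
```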
1.5.3 TCP/IP
There have been many enhancements to TCP/IP in an OS/390 and z/OS environment to
improve both performance and availability. Many of these enhancements depend on a TCP/IP
capability known as Dynamic XCF.
In addition to enabling these new enhancements, however, Dynamic XCF also automatically
connects all TCP/IP stacks that indicate that they wish to use this feature. If you do not wish
the TCP/IP stack on the incoming system to connect to the TCP/IP stacks on the target
sysplex systems, then you cannot use Dynamic XCF on the incoming system. If the “incoming
system” is actually just a single system, then this may not be a significant problem. However,
if the “incoming system” actually consists of more than one system, and you would like to use
the TCP/IP sysplex enhancements between those systems, then this may be an issue.
If this is a potential concern to you, refer to Chapter 12, “TCP/IP considerations” on page 175
before proceeding.
1.5.4 System Logger
However, there is another consideration you must plan for if you have any Logger structures
(note that we said structures, not logstreams) that are connected to both subplexes. For most
logstreams, you can specify any name you like for the logstream, so it should be possible to
set up Logger structures so that all the connectors are in a single subplex. For example,
structure SUBA_CICS_DFHLOG could contain all the CICS DFHLOG logstreams from
SUBPLEXA, and structure SUBB_CICS_DFHLOG could contain all the CICS DFHLOG
logstreams from SUBPLEXB. In a BronzePlex and maybe in a GoldPlex, the only structures
that would be connected to by members of both subplexes should be those containing
logstreams that have a fixed name, and therefore you can only have one per sysplex—at the
time of writing, the only such logstreams are SYSPLEX.OPERLOG and
SYSPLEX.LOGREC.ALLRECS.
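One way to verify the convention described above is to check, for each Logger structure,
whether the logstreams assigned to it all come from a single subplex. A sketch of such a
check (the SUBA_/SUBB_ structure names and the layout below are hypothetical, following
the example in the text):

```python
def structures_spanning_subplexes(assignments):
    """Given a mapping of structure name -> list of (logstream, subplex)
    pairs, return the structures whose connectors span subplexes."""
    bad = []
    for structure, streams in assignments.items():
        subplexes = {subplex for _, subplex in streams}
        if len(subplexes) > 1:
            bad.append(structure)
    return sorted(bad)

# Hypothetical layout following the SUBA_/SUBB_ convention, plus the
# fixed-name OPERLOG logstream, which necessarily spans both subplexes.
layout = {
    "SUBA_CICS_DFHLOG": [("CICSA.DFHLOG", "SUBPLEXA")],
    "SUBB_CICS_DFHLOG": [("CICSB.DFHLOG", "SUBPLEXB")],
    "OPERLOG_STR": [("SYSPLEX.OPERLOG", "SUBPLEXA"),
                    ("SYSPLEX.OPERLOG", "SUBPLEXB")],
}
spanning = structures_spanning_subplexes(layout)
```

Only structures containing fixed-name logstreams, such as OPERLOG, should show up as
spanning both subplexes.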
An additional consideration is how to manage the offload data sets. You can’t use
DFSMShsm to migrate them unless you have a single HSMplex, and you would normally only
have a single HSMplex in a PlatinumPlex. The reason for this is that if DFSMShsm in
SUBPLEXA migrates the data set, the catalog will be updated to change the volser of the
migrated data set to MIGRAT. If a system in SUBPLEXB needs to access data in the migrated
offload data set, it will see the catalog entry indicating that the data set has been migrated,
call DFSMShsm on that system to recall it, and the recall will fail because the data set was
migrated by a DFSMShsm in a different HSMplex. If you wish to use DFSMShsm to manage
the Logger offload data sets, all systems that are connected to a Logger structure must be in
the same HSMplex.
Furthermore, if you currently place your offload data sets on SMS-managed volumes (as
many customers do), you will have to discontinue this practice if all the systems in the new
enlarged sysplex will not be in the same SMSplex.
Therefore, if you are considering a BronzePlex or a GoldPlex configuration, you must give
some thought to how your Logger structures will be used (especially for OPERLOG and
LOGREC), and how the offloaded Logger data will be managed.
1.5.5 Sysplex HFS sharing
If the incoming system is a single system, and you are moving into a BronzePlex or a
GoldPlex, you should not have a problem. In this case, there is no reason to enable sysplex
HFS sharing on the incoming system, so it will only be able to see its own HFSs, and the
systems in the target sysplex will only be able to see their HFSs, even if they have enabled
sysplex HFS sharing.
However, if the incoming system is actually a multi-system sysplex and you want to share the
HFSs in that sysplex using sysplex HFS sharing, then you could have a problem if the
systems in the target sysplex also want to use sysplex HFS sharing to share their HFSs. The
fact that the DASD containing the incoming system’s HFSs are offline to the target sysplex
systems does not provide any protection because all I/Os for a given HFS (that is being
shared using sysplex HFS sharing) are issued from a single system. So conceivably, you
could have the volume containing all your HFSs online to just one system, but every system in
the sysplex could still read and write to those HFSs if all systems have enabled sysplex HFS
sharing.
Therefore, there can only be one HFSplex per sysplex, and systems whose HFS data sets
should not be accessible from other systems must not enable sysplex HFS sharing.
1.5.6 Legal implications
Even in a BronzePlex environment, where the incoming system can only see its own DASD,
and the original members of the target sysplex can only see their own DASD, the fact that all
systems are in the same sysplex means that some information is available to all members of
the sysplex. One example is syslog data. The current design of z/OS is such that all
messages from every system get sent to every other system in the sysplex. Another example
is RMF—because RMF uses a fixed XCF group name, every RMF in the sysplex will
automatically communicate with every other RMF in the sysplex. This means that if you are
logged on to SYSA and have access to RMF, you can potentially get performance information
about any of the other members of the sysplex.
In most situations, this visibility of information is not a problem. However, if the two systems
belong to different companies, possibly in the defense or finance industries, there may be
legal restrictions on this level of access to information from the other company.
One of the parameters used when allocating any CDS (ARM, CFRM, LOGR, sysplex, OMVS,
WLM) is the number of systems in the sysplex. If you are increasing the number of systems in
the sysplex, you need to review all of the CDSs to check the current setting of the
MAXSYSTEM parameter, which can be different for each CDS. To check the setting of the
parameter, issue the D XCF,C,TYPE=cdstype command, as shown in Figure 2-1, for each type
of CDS in use in the target sysplex.
D XCF,C,TYPE=SYSPLEX
IXC358I 22.55.49 DISPLAY XCF 534
SYSPLEX COUPLE DATA SETS
PRIMARY DSN: SYS1.XCF.CDS01
VOLSER: #@$#X1 DEVN: 37AC
FORMAT TOD MAXSYSTEM MAXGROUP(PEAK) MAXMEMBER(PEAK)
05/15/2002 22:09:38 8 100 (52) 203 (12)
ADDITIONAL INFORMATION:
ALL TYPES OF COUPLE DATA SETS SUPPORTED
GRS STAR MODE IS SUPPORTED
ALTERNATE DSN: SYS1.XCF.CDS02
VOLSER: #@$#X2 DEVN: 37AD
FORMAT TOD MAXSYSTEM MAXGROUP MAXMEMBER
05/15/2002 22:09:43 8 100 203
ADDITIONAL INFORMATION:
ALL TYPES OF COUPLE DATA SETS SUPPORTED
GRS STAR MODE IS SUPPORTED
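If you are checking several CDS types, it may be convenient to script the comparison against
captured command output. The following sketch extracts the MAXSYSTEM value from output
in the IXC358I format shown above (the parsing is keyed to that message layout and is
illustrative only):

```python
def max_system(display_output):
    """Extract the first MAXSYSTEM value from D XCF,C,TYPE=... output.

    Relies on the value being the third token on the line that follows
    the 'FORMAT TOD MAXSYSTEM ...' header, as in message IXC358I.
    """
    lines = display_output.splitlines()
    for i, line in enumerate(lines):
        if "MAXSYSTEM" in line and i + 1 < len(lines):
            tokens = lines[i + 1].split()
            return int(tokens[2])      # date, time, MAXSYSTEM, ...
    return None

# Captured output for the primary sysplex CDS, as in the example above:
output = """FORMAT TOD MAXSYSTEM MAXGROUP(PEAK) MAXMEMBER(PEAK)
05/15/2002 22:09:38 8 100 (52) 203 (12)"""
```

Running max_system against the output for each CDS type lets you tabulate the
MAXSYSTEM settings side by side before deciding whether any CDS must be reformatted.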
A program on any system in the sysplex can communicate with another program on any other
member of the sysplex as long as they are both connected to the same XCF group. Some
products provide the ability to specify the name of the XCF group they will use, or provide the
ability to control whether they connect to the group or not. Other products do not provide this
level of control. This is an important consideration when determining whether your target
sysplex will be a BronzePlex, a GoldPlex, or a PlatinumPlex—if you wish to maintain
maximum separation between the incoming system and the other systems in the target
sysplex, how products use XCF may have a bearing on that decision.
If your target environment is a BronzePlex, the incoming system must share the sysplex
CDSs and take part in XCF signalling with the systems in the target sysplex. In fact, even if
you share absolutely nothing else, to be a member of the sysplex, the incoming system must
share the sysplex CDSs.
If your target environment is a GoldPlex, the incoming system will share the sysplex CDSs
and take part in XCF signalling with the systems in the target sysplex.
Finally, if your target environment is a PlatinumPlex, the incoming system will again share the
sysplex CDSs and take part in XCF signalling with the systems in the target sysplex.
The “Type” specified in Table 2-1 relates to the sysplex target environment—B represents a
BronzePlex, G represents a GoldPlex, and P represents a PlatinumPlex.
D XCF,G
IXC331I 23.35.26 DISPLAY XCF 568
GROUPS(SIZE): ATRRRS(3) COFVLFNO(3) CSQGMQ$G(3)
CSQGPSMG(3) D#$#(3) DR$#IRLM(3)
DSNGSGA(3) EZBTCPCS(3) IDAVQUI0(3)
IGWXSGIS(5) INGXSGA0(2) IRRXCF00(3)
ISTCFS01(3) ISTXCF(3) ITSOIMS(1)
IXCLO00C(3) IXCLO00D(3) IXCLO001(3)
IXCLO004(3) SYSBPX(3) SYSDAE(7)
SYSENF(3) SYSGRS(3) SYSGRS2(1)
SYSIEFTS(3) SYSIGW00(4) SYSIGW01(4)
SYSIGW02(3) SYSIGW03(3) SYSIKJBC(3)
SYSIOS01(3) SYSJES(3) SYSMCS(8)
SYSMCS2(8) SYSRMF(3) SYSTTRC(3)
SYSWLM(3) XCFJES2A(3) XCFJES2K(1)
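Comparing D XCF,G output captured on the incoming system with the same display from the
target sysplex shows which XCF groups the two environments would have in common after
the merge. A sketch of that comparison (the abbreviated group lists below are invented for
illustration; the parsing assumes the GROUPS(SIZE) layout shown above):

```python
def parse_groups(display_output):
    """Return the set of XCF group names from D XCF,G output.

    Each entry looks like NAME(size); everything after 'GROUPS(SIZE):'
    is a run of such tokens.
    """
    text = display_output.split("GROUPS(SIZE):", 1)[-1]
    return {tok.split("(")[0] for tok in text.split() if "(" in tok}

incoming = "GROUPS(SIZE): SYSGRS(1) SYSWLM(1) XCFJES2K(1)"
target   = "GROUPS(SIZE): SYSGRS(3) SYSWLM(3) XCFJES2A(3)"

# Groups present on both sides will merge into one sysplex-wide group
# once the incoming system joins, so review each of these.
shared = parse_groups(incoming) & parse_groups(target)
```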
2. If your target environment is a BronzePlex, you will not be sharing the Master Catalog, so
you must ensure that the CDSs are cataloged in the Master Catalog of each system.
Remember that
when the incoming system joins the target sysplex, it should use the same COUPLExx
Parmlib member as the systems in the target sysplex, or at least use a member that is
identical to the COUPLExx member used by those systems. If your target environment is a
GoldPlex or a PlatinumPlex, you will be sharing SYS1.PARMLIB, so all systems should
share the same COUPLExx member.
3. Prior to the merge, you should review your XCF performance in the target sysplex to
ensure there are no hidden problems that could be exacerbated by adding another system
to the sysplex. You should use WSC Flash 10011, Parallel Sysplex Performance: XCF
Performance Considerations, to help identify the fields in RMF reports to check. If any
problems are identified, they should be addressed prior to the merge.
On a system, each transport class must have a unique name. The class name is used in
system commands and shown in display output and reports.
By explicitly assigning an XCF group to a transport class, you give the group priority
access to the signaling resources (signaling paths and message buffer space) of the
transport class. All groups assigned to a transport class have equal access to the
signaling resources of that class.
You should check to see if any transport classes are defined in either the incoming system
or in the target sysplex. If there are, first check to see if they are still required—as a
general rule, we recommend against having many transport classes unless absolutely
necessary. If you determine that the transport classes are still necessary, you should then
assess the impact of the incoming system, and make sure you add the appropriate
definitions to the COUPLExx member that will be used by that system after it joins the
sysplex.
Product      XCF group name
CICS VR      DWWCVRCM
DAE          SYSDAE
ENF          SYSENF
HSM          ARCxxxxxx
IOS          SYSIOSxx
MQSeries     CSQGxxxx
OMVS         SYSBPX
RACF         IRRXCF00
RMF          SYSRMF
RRS          ATRRRS
TCP/IP       EZBTCPCS
Trace        SYSTTRC
VLF          COFVLFNO
WLM          SYSWLM
XES          IXCLOxxx
CFRM policies are stored in the CFRM CDS. In addition to the policies, the CDS also
contains status information about the CFs.
If your target environment is a BronzePlex, the incoming system’s use of the CF is likely to be
limited to sharing the XCF structures, IEFAUTOS for tape sharing (for systems prior to z/OS
1.2), and the GRS Star structure. You may also wish to use the OPERLOG and
SYSTEM_LOGREC facilities, in which case you should refer to 1.5.4, “System Logger” on
page 12. In addition, if the incoming system is using any CF structures prior to the merge, you
will probably set up those in the CF, with only the incoming system using those structures.
If your target environment is a GoldPlex, the incoming system will probably share at least the
XCF and GRS structures. It may also share other structures like the Enhanced Catalog
Sharing structure, OPERLOG and LOGREC, JES2 checkpoint, and so on. Once again, if you
are considering using OPERLOG and/or LOGREC, you should refer to 1.5.4, “System
Logger” on page 12. In addition, if the incoming system is using any CF structures prior to the
merge, you will probably set up those in the CF—in this case, it may or may not share those
structures with the other systems in the target sysplex.
If your target environment is a PlatinumPlex, the incoming system will probably share all the
structures in the CF with the other systems in the target sysplex. In addition, if the incoming
system is using any CF structures prior to the merge, you will probably set up those in the
CF—in this case, it may or may not share those structures with the other systems in the target
sysplex.
Consideration                                                          Note   Type
Ensure all systems in the sysplex have at least two links to all              B, G, P
attached CFs.
Check that the CFs have enough storage, and enough white space,               B, G, P
to handle the additional structures as well as existing structures
that will increase in size.
Check that the performance of the CFs is acceptable and that the        2     B, G, P
CFs have sufficient spare capacity to handle the increased load.
Check the size of all structures that will be used by the incoming      5     B, G, P
system.
The “Type” specified in Table 2-3 on page 28 relates to the sysplex target environment—B
represents a BronzePlex, G represents a GoldPlex, and P represents a PlatinumPlex.
DATA TYPE(CFRM)
ITEM NAME(POLICY) NUMBER(5)
ITEM NAME(STR) NUMBER(200)
ITEM NAME(CF) NUMBER(8)
ITEM NAME(CONNECT) NUMBER(32)
ITEM NAME(SMREBLD) NUMBER(1)
ITEM NAME(SMDUPLEX) NUMBER(1)
...
7. If the incoming system is using structures that are not being used in the target sysplex,
those structures (for example, the structure for a DB2 in the incoming system) must be
added to the CFRM policy in the target sysplex.
Other structures that may be in use in the incoming system, like the XCF structures, will
not be moved over to the target sysplex because they will be replaced with shared ones
that are already defined in the target sysplex. In that case, you must update any
references to those structure names in the incoming system.
8. For structures that existed in both the incoming system and the target sysplex, like the
XCF structures, make sure that the definition of those structures in the target sysplex is
compatible with the definitions in the incoming system. For example, if the incoming
system uses Auto Alter for a given structure and the target sysplex does not, you need to
investigate and make a conscious decision about which option you will use.
9. Some structures have fixed names that you cannot control. Therefore, every system in the
sysplex that wants to use that function must connect to the same structure instance. At the
time of writing, the structures with fixed names are:
– OPERLOG
– LOGREC
– GRS Lock structure, ISGLOCK
Some SFM actions are specified on the sysplex level, while others can be specified for
individual systems. SFM controls actions for three scenarios:
1. If there is a loss of XCF signalling connectivity between a subset of the systems, SFM can
decide, based on the weights you specify for the systems, which systems must be
removed from the sysplex in order to restore full connectivity. This is controlled by the
CONNFAIL keyword, and is enabled or disabled at the entire sysplex level.
2. In the event of the loss of a system, SFM can automatically partition that system out of the
sysplex, releasing any resources it was holding, and allowing work on other systems to
continue. This action can be specified at the individual system level. For example, you can
tell SFM to automatically partition SYSA out of the sysplex if it fails, however to prompt the
operator in case SYSB fails. This is controlled via the ISOLATETIME and PROMPT
keywords. As a general rule however, we recommend that ISOLATETIME is always
specified so that dead systems can be partitioned out of the sysplex as quickly as
possible.
3. If you use PR/SM reconfiguration to move processor storage from one LPAR to another, in
the event of a system failing, SFM will control that processing if you provide the
appropriate definitions.
Figure 2-3 shows a typical configuration, where systems SYSA and SYSB were in the original
target sysplex, and SYSC is the incoming system which is now a member of the sysplex. If
the link between systems SYSB and SYSC fails, the sysplex can continue processing with
SYSA and SYSB, or SYSA and SYSC. If the applications in SYSA and SYSB support data
sharing and workload balancing, you may decide to remove SYSB as the applications from
that system can continue to run on SYSA. On the other hand, if SYSA and SYSB contain
production work, and SYSC only contains development work, you may decide to remove
SYSC.
Figure 2-3   XCF signalling connectivity among SYSA, SYSB, and incoming system SYSC
You can see that the decision about how to handle this scenario depends on your specific
configuration and availability requirements. For each system, you then need to carefully
evaluate the relative weight of that system within the sysplex, and whether you wish SFM to
automatically partition it out of the sysplex (by specifying ISOLATETIME), or you wish to
control that process manually (by specifying PROMPT). If you can assign a set of weights that
accurately reflect the importance of the work in the sysplex, we recommend that you use
ISOLATETIME.
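Conceptually, the CONNFAIL decision keeps the fully connected set of systems with the
greatest total weight and partitions out the rest. The following simplified sketch illustrates that
idea; the weights are invented for illustration, and SFM's actual algorithm is more involved
than this brute-force search:

```python
from itertools import combinations

def best_surviving_set(weights, failed_links):
    """Pick the fully connected subset of systems with the highest
    total weight; all other systems would be partitioned out.

    weights:      dict of system name -> SFM weight
    failed_links: set of frozensets naming system pairs that have
                  lost XCF signalling connectivity
    """
    systems = list(weights)
    best, best_weight = frozenset(), 0
    for size in range(len(systems), 0, -1):
        for subset in combinations(systems, size):
            pairs = (frozenset(p) for p in combinations(subset, 2))
            if any(p in failed_links for p in pairs):
                continue                 # subset not fully connected
            total = sum(weights[s] for s in subset)
            if total > best_weight:
                best, best_weight = frozenset(subset), total
    return best

# SYSA/SYSB carry production, SYSC is the incoming development system;
# the SYSB<->SYSC link has failed, as in the Figure 2-3 scenario.
survivors = best_surviving_set(
    {"SYSA": 100, "SYSB": 100, "SYSC": 10},
    {frozenset({"SYSB", "SYSC"})},
)
```

With these weights, SYSC (the lower-weight development system) is the one removed,
matching the second outcome discussed above.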
If your target environment is a GoldPlex, your SFM decision will be based on the level of
workload sharing across the systems in the sysplex. Once again, you want to partition dead
systems out of the sysplex as quickly as possible. If the work is spread across all the systems
in the sysplex, we recommend that you specify CONNFAIL YES, to quickly remove any