Subarea to APPN Migration: HPR and DLUR Implementation
http://www.redbooks.ibm.com
SG24-5204-00
International Technical Support Organization
September 1998
Take Note!
Before using this information and the product it supports, be sure to read the general information in
Appendix E, “Special Notices” on page 323.
When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the information in any
way it believes appropriate without incurring any obligation to you.
Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
The Team That Wrote This Redbook . . . . . . . . . . . . . . . . . . . . . . . . xiii
Comments Welcome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv
B.1 Network Startup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
B.1.1 XID Exchange, LINK0001 . . . . . . . . . . . . . . . . . . . . . . . . . . 248
B.1.2 CP-CP Sessions with NNP61A . . . . . . . . . . . . . . . . . . . . . . 251
B.1.3 XID Exchange with NNP41A . . . . . . . . . . . . . . . . . . . . . . . . 254
B.1.4 CP-CP Sessions with NNP41A . . . . . . . . . . . . . . . . . . . . . . 256
B.1.5 Route Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
B.1.6 BIND for DLUR/S Session . . . . . . . . . . . . . . . . . . . . . . . . . 262
B.1.7 Dependent LU Activation . . . . . . . . . . . . . . . . . . . . . . . . . 266
B.1.8 Route Setup to RA39 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
B.1.9 Communications Server/2 Displays . . . . . . . . . . . . . . . . . . . 279
B.2 Logon to NetView on RAA . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
B.2.1 Communications Server/2 Displays . . . . . . . . . . . . . . . . . . . 298
B.3 Path Switch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
B.3.1 Route Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
B.3.2 Communications Server/2 Displays . . . . . . . . . . . . . . . . . . . 308
B.3.3 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
162. RTP Connections from RAA . . . . . . . . . . . . . . . . . . . . . . . . . . 172
163. Dependent LU RTP Pipe . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
164. Dependent LU Pipe from CS/2 . . . . . . . . . . . . . . . . . . . . . . . . 174
165. DLUR/S RTP Pipe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
166. DLUR/S RTP Connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
167. Displays Issued on 2216 after Session Establishment . . . . . . . . . . 177
168. Active Stations on NNP61A after Breaking the Token-Ring . . . . . . . 177
169. HPR Connections after Path Switch . . . . . . . . . . . . . . . . . . . . . 178
170. Path Switch on RAA Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
171. Newly Switched RTP Pipe . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
172. New Path after Token-Ring Break . . . . . . . . . . . . . . . . . . . . . . 180
173. HPR Connection after Path Switch . . . . . . . . . . . . . . . . . . . . . . 181
174. Active RTP Connections on 2216 after TG to NNP61A Failed . . . . . . 181
175. 2216 As DLUR and RTP Endpoint . . . . . . . . . . . . . . . . . . . . . . . 182
176. Enabling DLUR on 2216 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
177. Specifying a Different DLUS for a Station . . . . . . . . . . . . . . . . . . 183
178. Displays Issued on 2216 before CS/2 Activation . . . . . . . . . . . . . . 184
179. Logical Links on LEN Node . . . . . . . . . . . . . . . . . . . . . . . . . . 185
180. Logical Link Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
181. Displays on 2216 after CS/2 Activation . . . . . . . . . . . . . . . . . . . 187
182. DLUR/S Pipe Activation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
183. New DLUR/S Pipe from RAA . . . . . . . . . . . . . . . . . . . . . . . . . 188
184. NN and DLUR Node Display . . . . . . . . . . . . . . . . . . . . . . . . . . 189
185. DLUR-Owned PU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
186. Logical Unit Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
187. Owned LU in Session with Application . . . . . . . . . . . . . . . . . . . 191
188. Cross Domain LU in Session with Application . . . . . . . . . . . . . . . 192
189. Displays on 2216 after Opening Two Sessions . . . . . . . . . . . . . . . 193
190. Path Switch of DLUR/S Pipe . . . . . . . . . . . . . . . . . . . . . . . . . 194
191. Path Switch for LU-LU Pipe . . . . . . . . . . . . . . . . . . . . . . . . . . 194
192. Displays on 2216 After Link Failure . . . . . . . . . . . . . . . . . . . . . 195
193. Summary of DLUR Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
194. Network Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
195. Invoking 2210 APPN Configuration . . . . . . . . . . . . . . . . . . . . . . 198
196. 2210 Node Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
197. Downstream Port of 2210 . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
198. Definition of Link to NNP41A . . . . . . . . . . . . . . . . . . . . . . . . . 200
199. Link Definition to 2216 via Downstream Link . . . . . . . . . . . . . . . . 201
200. 2210 DLUR Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
201. Listing of APPN/HPR Defined Options . . . . . . . . . . . . . . . . . . . . 202
202. NDF Listing of CS/2 Used for 3270 Sessions . . . . . . . . . . . . . . . . 203
203. Display of Active Links on 2210 before CS/2 Connection . . . . . . . . 204
204. Displays Issued on 2210 before CS/2 Connection . . . . . . . . . . . . . 204
205. Active Stations on NNP41A . . . . . . . . . . . . . . . . . . . . . . . . . . 205
206. Active HPR Connections on NNP41A . . . . . . . . . . . . . . . . . . . . 205
207. DLUR/S Pipe from RA39 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
208. DLUR/S Pipe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
209. Displays Issued on 2210 after CS/2 Connection . . . . . . . . . . . . . . 208
210. Cross-Domain LU Display . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
211. Active RTP Connections on 2210 after LU-LU Session Establishment . 209
212. LU-LU Pipe after Switch . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
213. New Path for LU-LU RTP Pipe . . . . . . . . . . . . . . . . . . . . . . . . 211
214. DLUR/S Pipe New Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
215. Displays Issued on 2210 Side . . . . . . . . . . . . . . . . . . . . . . . . . 213
216. Node Definition for BX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Preface
This redbook is the second of two volumes containing detailed coverage of the
migration of a subarea network to an APPN network. It focuses on the migration
steps and requirements that can be used as guidelines by others to accomplish
the migration in a simple and constructive manner. The first volume
(SG24-4656-01) covers the implementation of the basic VTAM APPN functions
including Border Node, VR-TG and network management. The second volume
completes the coverage of a typical customer's network using HPR, DLUR and
APPN/HPR routers such as 3746, 2216 and 2210. In each case tested examples
and definitions illustrate the theory.
Daniela di Casoli is an IT Specialist with IBM Italy. She has eight years of
experience in defect and non-defect networking support on the MVS platform.
Thanks to the following people for their invaluable contributions to this project:
Andrew Arrowood
APPN Development, Research Triangle Park
Roy Brabson
CS OS/390 Development, Research Triangle Park
Paul Braun
Systems Management and Networking ITSO Center, Raleigh
Mac Devine
CS OS/390 Development, Research Triangle Park
Brian Dorling
Systems Management and Networking ITSO Center, Raleigh
Nancy Gates
Networking Systems Center, Gaithersburg, MD
Mike Haley
Systems Management and Networking ITSO Center, Raleigh
Johnathan Harter
CS OS/390 Development, Research Triangle Park
Lap Huynh
CS OS/390 Development, Research Triangle Park
Tim Kearby
Systems Management and Networking ITSO Center, Raleigh
Chris Mason
International Education Centre, La Hulpe, Belgium
John Parker
IBM United Kingdom
Sam Reynolds
CS OS/390 Development, Research Triangle Park
Juan Rodriguez
Systems Management and Networking ITSO Center, Raleigh
Carla Sadtler
Systems Management and Networking ITSO Center, Raleigh
Comments Welcome
Your comments are important to us!
This volume has two major objectives that were not addressed by the first
volume:
• To describe the implementation of high-performance routing (HPR). HPR is
an extension to APPN which can improve performance and availability with
minimal impact on network definition and administration. Although the
conversion from base APPN to HPR is much more straightforward than that
from subarea to APPN, we feel that HPR is sufficiently important to justify a
separate redbook.
• To extend the scope of our tested scenarios outside the narrow realm of
VTAM and NCP. In the first volume most of VTAM's partner APPN nodes
were PCs running Communications Manager or Communications Server,
with dependent LUs directly attached to a subarea node. Here we describe:
− The use of APPN/HPR routers such as the 3746, the 2216 and the 2210 to
build a comprehensive SNA network.
− The use of dependent LU requester/server functions to free the network
from the restrictions imposed by the subarea architecture. Although an
APPN network can support dependent LU sessions without this function,
DLUR/S permits such sessions to take the optimum route all the way to
the APPN node supporting the dependent LUs. By the same token, these
sessions can enjoy the benefits of HPR all the way from the dependent
LU requester node to the application.
Note
This pair of redbooks covers migration from subarea to APPN, and then to
HPR, in two separate volumes. We must emphasize that this is purely to
help the reader understand APPN and HPR, and is not intended as a
recommendation to perform the migration in two phases. The difference
between APPN and HPR is small compared to the difference between
subarea and APPN. You may find it advantageous to let VTAM's defaults
take you all the way to HPR as you implement APPN.
The intermediate nodes in an HPR network have only one function, to route data
quickly and efficiently from one link to another. This technique is known as
automatic network routing (ANR).
All the rest of the HPR function is implemented in the endpoints of a connection.
The major part of this function is called rapid transport protocol (RTP). RTP
provides a reliable end-to-end connection for sessions using HPR, and includes
the following components:
• End-to-end error recovery removes the need for intermediate routing nodes
to detect, check, acknowledge and retransmit packets in error. The only
checking done at intermediate nodes is for data integrity (CRC checking).
ANR is a source-routing protocol; the routing information (ANR label) for every
node on the path is contained in the packet header. Furthermore, the ANR label
represents the onward link for each node, not the session as with APPN ISR.
There is no session awareness in a node performing ANR routing. All it has to
do is inspect the first ANR label in the packet header, strip it off, and forward the
packet to the correct outbound link.
This label stripping technique is much more efficient than the label swapping
technique (ISR) used by base APPN nodes. ISR requires the node to inspect the
LFSID in the incoming packet, use it to look up a session table, swap it to
the outbound LFSID, and forward the packet.
Figure 1 on page 4 illustrates the way ANR labels are used. 1.3, “Rapid
Transport Protocol” on page 4 describes how they are assigned in the first
place.
1. Node A sends an NLP to node B with ANR labels 21 / 33 / 65 / FF in the NLP
header as shown.
2. Node B looks in the header for the first ANR label. This is 21, so node B
removes it from the header and transfers the truncated NLP to the link it
knows as 21.
3. Node C receives the NLP, removes the next ANR label (33), and sends the
NLP on the link it knows as 33.
4. Node D receives the NLP and recognizes that the next ANR label (65)
represents not a link but the endpoint of the RTP connection. Therefore, it
passes the data in the NLP to the higher protocol layers for processing.
5. The response to this message takes a similar course through the network in
the opposite direction.
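To make the label-stripping idea concrete, the following sketch contrasts ANR
forwarding with ISR label swapping. It is an illustrative model only; the
function and variable names are ours and are not part of the architecture.

def anr_forward(nlp_labels, links, local_rtp_endpoints):
    """Forward an NLP using ANR label stripping (illustrative sketch).

    nlp_labels          -- remaining ANR labels in the NHDR, e.g. ["21", "33", "65", "FF"]
    links               -- this node's ANR labels mapped to outbound links
    local_rtp_endpoints -- labels (NCEs) identifying RTP endpoints in this node
    """
    label = nlp_labels[0]                    # inspect only the first label
    if label in local_rtp_endpoints:
        # The label is an NCE: this node is the RTP endpoint, so the packet
        # is passed to the transport (RTP) layer instead of being forwarded.
        return ("deliver-to-rtp", label)
    return ("send", links[label], nlp_labels[1:])   # strip the label and forward


def isr_forward(lfsid_in, session_table):
    """Base APPN ISR label swapping, for comparison: the node keeps a session
    table and swaps the incoming LFSID for the outbound one."""
    outbound_link, lfsid_out = session_table[lfsid_in]
    return ("send", outbound_link, lfsid_out)


# Node B from the example: its label 21 represents the link towards node C.
print(anr_forward(["21", "33", "65", "FF"], {"21": "link-to-C"}, set()))
print(isr_forward("02A1", {"02A1": ("link-to-C", "03B7")}))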
On a FID-2 connection, the session information held in each node contains the
transmission priority. With HPR there is no session awareness in ANR nodes,
therefore the transmission priority is carried in the NLP headers. Priority
queueing is implemented on HPR links (or combined HPR/FID-2 links) in the
same way as on base APPN (FID-2 only) links.
ANR is significantly faster than current APPN (ISR) routing, as seen in VTAM
testing. The intermediate nodes execute between one-third and one-tenth as
much code to route a packet with ANR. This is offset to some extent by the
increase in code at the RTP
endpoints. Also, no intermediate node storage is required to maintain routing
tables and no pre-committed buffers are necessary for each session.
Nearly all the search and session setup mechanisms in HPR are exactly the
same as in base APPN. The searches flow the same way and the session RSCV
is calculated in exactly the same way, except that some extra HPR-related
information flows at certain times. Note that HPR capability is not a node or TG
Session endpoints may reside in HPR nodes, or in APPN nodes. If only part of a
session route is HPR-capable, there will be one or more APPN/HPR boundaries
on the path. The nodes containing the boundary function act as RTP endpoints
for the session. The RTP endpoints are identified by network connection
endpoints (NCEs); an NCE identifier looks like an ANR label and is the last label
in the routing part of the network layer header (NHDR). An NCE may represent
an LU, a group of LUs, or an APPN/HPR boundary. If the node supports Control
Flows over RTP (see 1.8, “HPR Control Flows over RTP Option” on page 12), the
NCE may also represent a CP or the Route Setup function. A successful reply to
a search will contain the NCE representing the destination LU, if CP(DLU)
supports RTP.
The point where base APPN and HPR processing diverge comes as the BIND is
about to flow across the network. In base APPN, each node on the session path
examines the RSCV in the BIND and builds suitable session tables before
sending the BIND to the next node. With HPR, this changes as soon as the BIND
reaches an RTP-capable node; this node may be CP(PLU) itself, the session
endpoint and the BIND origin.
When an RTP-capable node on the session path sees the BIND, it does the
following:
• It examines the session RSCV, starting with its own entry, for a continuous
sequence of ANR-capable nodes. Once it reaches the last ANR-capable
node on the path it scans back until it finds an RTP-capable node. This
sequence (between this node and the furthest RTP-capable node) is now a
candidate for an RTP connection. It has an RTP-capable node at each end
with any intermediate nodes able to perform ANR.
• If no such RTP-capable route is found, the BIND is forwarded into the
network in the base APPN fashion after the session tables have been built.
• If an RTP-capable route is found, the node then checks whether there is
already a suitable RTP connection that this new session can use. Such an
RTP connection must have the same RSCV (for this part of the route) and the
same APPN class of service as the new session. If the RTP partner is also
the session endpoint (and thus the NCE to be used for the new session is
known), then the NCEs must also match. If the RTP partner is an APPN/HPR
boundary function, then the existing pipe must also be to such a function.
• If a suitable RTP connection already exists, the BIND is sent on that pipe as
an NLP. The intermediate nodes on the RTP connection do not see the
BIND.
• If no suitable RTP connection exists, a request called a Route Setup is sent
to the RTP partner. Route Setup flows like a BIND, using the session RSCV
(or that part of it between the RTP partners) to navigate through the network.
The Route Setup flow is what establishes the ANR labels for an RTP pipe.
As Route Setup passes through the network, each node converts the RSCV
to ANR routing information; the RSCV tells it which TG leads to the next node
on the route, and the ANR node assigns a label to represent that TG. By the
time Route Setup has reached the destination node, the ANR routing
information is complete for this direction. The Route Setup Reply then
retraces the route in the opposite direction, picking up the ANR labels for
the return path in the same way.
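The check for a reusable RTP connection described above can be sketched as
follows. This is a simplified model with invented names; the matching rules
are the ones stated in the text (same route segment, same APPN class of
service and, where the partner is the session endpoint, the same NCE).

from dataclasses import dataclass
from typing import Optional

@dataclass
class RtpPipe:
    rscv_segment: tuple          # portion of the route covered by the pipe
    cos: str                     # APPN class of service
    partner_nce: Optional[str]   # NCE of the partner, if it is the session endpoint

def select_pipe(pipes, segment, cos, dlu_nce):
    """Return an existing RTP connection the new session may share, or None
    (in which case a Route Setup is needed to build a new pipe)."""
    for pipe in pipes:
        if pipe.rscv_segment == segment and pipe.cos == cos:
            if pipe.partner_nce is None or pipe.partner_nce == dlu_nce:
                return pipe
    return None

# One pipe already exists for COS #INTER over the route segment (C, D):
pipes = [RtpPipe(rscv_segment=("C", "D"), cos="#INTER", partner_nce="NCE-DLU")]
print(select_pipe(pipes, ("C", "D"), "#INTER", "NCE-DLU"))  # reused for the BIND
print(select_pipe(pipes, ("C", "D"), "#BATCH", "NCE-DLU"))  # None: Route Setup needed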
┌───────────┐                     ┌───────────┐
│           │                     │           │
│    OLU    │ A P P N  F l o w s  │    DLU    │
│           │                     │           │
└───────────┘                     └───────────┘
───────────────────────────────────────────────
Locate, Find, CDINIT
───────────────────────────────────────────────
Locate, Found, CDINIT
───────────────────────────────────────────────
BIND, RSCV
───────────────────────────────────────────────
BIND response, RSCV
───────────────────────────────────────────────
Locate, Find, CDINIT
───────────────────────────────────────────────
Locate, Found, CDINIT, NCE
───────────────────────────────────────────────
Route Setup, RSCV (picks up ANR labels)
───────────────────────────────────────────────
Route Setup Reply (picks up ANR labels)
───────────────────────────────────────────────
Connection Setup (NLP, assigns TCID)
───────────────────────────────────────────────
Connection Identifier (NLP, assigns TCID)
───────────────────────────────────────────────
BIND (NLP), RSCV
───────────────────────────────────────────────
BIND Response (NLP), RSCV
The TCIDs are sufficient to identify an RTP pipe, but not enough to identify a
session since many sessions may use the same pipe. Since there is no LFSID in
HPR, sessions are distinguished by a session address. This address is assigned
by the RTP endpoints as the BIND and BIND response flow, and is carried in a
new FID-5 header within the NLP. The session address is unique in each
direction.
Thus an NLP will contain the network layer header (with ANR labels), a transport
layer header (with a TCID) and, if there is data, a FID-5 header (with a session
address). Figure 4 illustrates.
┌─────┬─────┬─────┬─────┬──────────┬────────────┬────────────────┬───────────┐
│     │     │     │     │          │            │                │           │
│ ANR │ ANR │ NCE │X'FF'│   TCID   │ THDR info  │  Session Addr  │   Data    │
│     │     │     │     │          │            │                │           │
└─────┴─────┴─────┴─────┴──────────┴────────────┴────────────────┴───────────┘
  ──Network Header───     ──Transport Header──     ────FID 5─────    ──Data───
Figure 4. Format of Network Layer Packet
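The layout in Figure 4 can be modelled loosely in code as shown below. The
field names and types are ours; the real headers are variable-length binary
structures whose encodings are not reproduced here.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class NetworkLayerPacket:
    """Illustrative model of an NLP as shown in Figure 4."""
    anr_labels: List[str]                  # NHDR: one label per remaining hop
    nce: str                               # NHDR: last label, identifies the RTP endpoint
    tcid: str                              # THDR: identifies the RTP connection (pipe)
    session_address: Optional[str] = None  # FID-5: identifies the session, per direction
    data: bytes = b""                      # session data, if any

# A BIND for a new session travelling on an existing pipe as an NLP:
nlp = NetworkLayerPacket(anr_labels=["21", "33"], nce="65", tcid="TC01",
                         session_address="A1", data=b"BIND...")
print(nlp)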
As with base SNA, data can be segmented to fit into NLPs and reassembled at
the far end of the connection. Segmenting and reassembly is done by the RTP
endpoints, never at the intermediate nodes. However, NLP headers (NHDR,
THDR, FID5) cannot be segmented. As the reader will appreciate, these headers
can be quite lengthy; therefore, HPR demands that each link on an RTP
connection can support a minimum of 768 bytes in an NLP. The maximum
Note that the sender must buffer all packets sent until an acknowledgment is
received from the endpoint. The RTP acknowledgment is requested by the RTP
endpoints at the RTP level, not the session level.
The ARB algorithm is self-adjusting in that it will balance itself between the
traffic requirements and the load on the network. However, it needs to be
initialized with reasonable values at the start of the RTP connection. ARB finds
out the lowest capacity link on the HPR route, and uses 10% of that value as a
starting point for the sending rate. This is felt to be high enough to ensure that
capacity is attained quickly, but low enough not to cause congestion due to the
initial extra load on the network.
The Route Setup and its reply are used to determine the lowest capacity link on
the connection; each node on the path checks the APPN CAPACITY
characteristic of the next link on the RSCV, and substitutes it into the Route
Setup if it is lower than the value it finds there. It is important to get the
CAPACITY values correct; if ARB is initialized with too low a value (for example,
8K instead of 32M), it can take a very long time to reach capacity and the
performance gains from HPR may not be realized. Prior to the introduction of
HPR, CAPACITY was used only for the purpose of route calculation.
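As a worked sketch of this initialization, the following assumes each node
simply keeps the smaller of the value carried in the Route Setup and the
CAPACITY of its next link, and that the RTP endpoint then starts ARB at 10%
of the result:

def route_setup_min_capacity(link_capacities_bps):
    """Each node compares the CAPACITY of its next link with the value carried
    in the Route Setup and keeps the smaller of the two."""
    carried = None
    for capacity in link_capacities_bps:
        if carried is None or capacity < carried:
            carried = capacity
    return carried

def arb_initial_rate(link_capacities_bps):
    """ARB starts sending at 10% of the lowest-capacity link on the route."""
    return 0.10 * route_setup_min_capacity(link_capacities_bps)

# A 16 Mbps token-ring, a 2 Mbps serial link and a 4 Mbps link:
print(arb_initial_rate([16_000_000, 2_000_000, 4_000_000]))  # 200000.0 bits/s
# If CAPACITY were miscoded as 8K on a 32M link, ARB would start at only
# 800 bits/s, which is why correct CAPACITY values matter.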
Most other congestion control schemes merely react once congestion has
occurred. ARB is designed to detect the onset of congestion, thereby ensuring
that congestion leading to the discarding of data packets does not occur.
For a more detailed description of the way ARB flow control works, please see
Appendix A, “Adaptive Rate-Based Flow and Congestion Control” on page 231.
Path switch can also preserve a session across a failure even if there is no
alternate route available. If the network connection (typically a shared facility
such as frame relay or token-ring) recovers in a timely manner, the new path
may take the same route as the old one.
A path switch may be initiated when a link failure is detected, or when a timeout
procedure fails to elicit a response. A path switch will normally involve the
same flows as when the connection was initially set up: Locate searches and
Route Setup (but not, of course, the BIND).
The path switch need not be initiated by the node that requested the session (or
the RTP connection) in the first place; therefore, the new path may be calculated
by a different node from the one that calculated the original route.
At initial session setup, the route calculated by NNS(PLU) will take no account of
HPR, even if CP(PLU) is HPR-capable. This is to preserve compatibility between
APPN and HPR. However, at path switch time there is no choice; an
RTP-capable route must be selected. Therefore, each NN that is capable of ANR
must also be capable of calculating HPR-only routes. New bits in the TDUs
(CV45 and CV46) carry HPR-related information, which is understood and stored
only by HPR-capable nodes.
When all the nodes are active and all the desired CP-CP sessions have been
established, each NN has a representation of the whole network in its topology
database. Those nodes that do not understand HPR see no difference in
capability between themselves and the other NNs. Those nodes that do have
HPR capability are aware of the HPR capability in all other nodes. The APPN
protocols relating to topology exchange ensure that the bits in the TDUs
indicating HPR support are ignored, yet propagated, by the nodes that do not
understand them.
HPR and path switching can lead to some interesting session paths. For
example, suppose C and D were the only RTP-capable nodes in Figure 6, and E
and F were ANR-capable. A session from A to H might take the route ABCEFGH
if A decided that was the best route. In that case there would be no HPR within
the session since there are not two RTP nodes on the path. However, if the
route chosen was ABCDFGH, then an HPR portion would be established between
C and D. Now, if the link CD failed, a path switch would be attempted by C or D.
The new route chosen for the RTP pipe would be CEFD, since that is the only
valid alternative. Thus the session would take the route ABCEFDFGH. This
route traverses the same link twice: once as base APPN and once as part of an
RTP pipe. Only a path switch could cause such a route, since no network node
would ever calculate it at session setup.
A node that implements RTP can, if it wishes, also implement the Control Flows
over RTP APPN option. This addition to the architecture permits both CP-CP
sessions and Route Setup messages to flow over RTP connections.
Nodes supporting the Control Flows option use RTP connections (if both adjacent
nodes support it) for CP-CP sessions. These connections (one or two depending
on timing) will be dedicated to the CP-CP sessions because the CPSVCMG class
of service is used only for these sessions. As with FID-2 CP-CP sessions there is
no need for Locate flows, nor is there any need for Route Setup since the nodes
have determined all the relevant information at the time the XIDs flow after link
establishment.
When a link connecting two nodes that both support this option is needed for an
LU-LU session, a long-lived RTP connection is established between the adjacent
nodes solely for the purpose of forwarding Route Setup messages. This
connection is called long-lived because it stays up as long as the link stays up,
even though no sessions ever use it. The long-lived RTP connection is never
path-switched, even though the CP-CP sessions may be switched. If a link
breaks, its long-lived RTP connection will be deactivated, and may be
re-established if the link is recovered.
Aside from providing additional resilience, the Control Flows option is required if
the connection supports HPR only and does not permit FID-2 traffic.
Subarea networking has implemented MLTGs between NCPs for many years, but
base APPN does not support them. One reason for this is the requirement for
resequencing packets, which imposes extra processing overhead on the
network. Packets transmitted on an MLTG may not arrive at their destination in
the same sequence as they were sent, because the packet sizes, link speeds
and link utilizations will vary. Therefore, a node on the path somewhere must
buffer them and reorder them into their original sequence.
In subarea networking, the NCPs at either end of the MLTG perform the
resequencing function. In a modern high-speed network this is not possible
because the overhead on the intermediate nodes would be excessive.
Moreover, nodes implementing RTP already have the capability of buffering quite
large amounts of data because acknowledgements are end-to-end and can be
infrequent. Therefore, an APPN MLTG must support HPR to the exclusion of
FID-2 packets. In other words, ISR is not supported. This in turn implies
that the Control Flows over RTP option is also required on an MLTG, so that
CP-CP sessions and Route Setup messages can flow as NLPs.
                                                ┌───────────────────────┐
                                                │                       │
                                                │    M L T G Option     │
                                                │                       │
                ┌───────────────────────┬───────┴───────────────────────┤
                │                       │                               │
                │  Dedicated RTP Option │     Control Flows Option      │
                │                       │                               │
        ┌───────┴───────────────────────┴───────────────────────────────┤
        │                                                               │
        │                T r a n s p o r t   O p t i o n                │
        │                                                               │
┌───────┴───────────────────────────────────────────────────────────────┤
│                                                                       │
│                     H P R   B a s e   O p t i o n                     │
│                                                                       │
├───────────────────────────────────────────────────────────────────────┤
│                                                                       │
│            A P P N   N e t w o r k   o r   E n d   N o d e            │
│                                                                       │
└───────────────────────────────────────────────────────────────────────┘
Clearly any node implementing HPR must be an APPN end node or network
node.
The HPR Base (Option Set 1400) includes the following functions:
• ANR Routing (NNs only)
• Ability to use FID-2 packets for CP-CP sessions
• Ability to share links between FID-2 packets and NLPs
• HPR support indication in topology updates
• Ability to send and receive Route Setup
• Support for at least 768-byte packets on HPR-capable links
The HPR Transport option (Option Set 1401) includes the following functions:
• RTP, including end-to-end error recovery and ARB flow/congestion control
• APPN/HPR boundary function
• Nondisruptive path switch
• Ability to return NCE on successful search reply
The Control Flows option (option set 1402) comprises the ability to set up CP-CP
sessions over RTP connections, and to send Route Setup messages over RTP
connections. RTP (the Transport option) is a prerequisite for Control Flows.
Control Flows is required where a connection is unable to support packets other
than NLPs; examples include ATM, Enterprise Extender, MLTGs and some
implementations of MPC+.
The Dedicated RTP Connections option (Option Set 1403) allows a node to
establish an RTP connection that is only used for one session. This is useful for
applications that are able to specify their own quality of service. It is intended to
be used with APPN Option Set 2005, which permits an ATM SVC to be dedicated
to a single RTP pipe. Thus an application can have an ATM connection all to
itself. RTP capability is a prerequisite for this option set.
HPR MLTG (Option Set 1404) allows multiple links to be included in a single TG.
Because MLTG cannot accommodate base APPN packets, Control Flows is a
prerequisite for MLTG.
Some older products, particularly the 3174 with LIC C6 and the 6611, also
implement some HPR functions.
If one node says “not allowed” and the other says “required”, then the
connection becomes base APPN and HPR is suppressed. This is not normally a
problem because most nodes now default to “not preferred” unless the DLC
forces a particular value (required for channels or X.25, not allowed for native
ATM). Some older versions of HPR products either insisted on link-level error
recovery or prohibited it. This was a particular concern with early releases of
the 3746-9X0 (required on a token-ring) and Communications Manager/2
(prohibited). Today most nodes permit the default value of “not preferred” to be
overridden. If you do override these values, you need to make sure the partners
have not specified conflicting requirements.
In the figure:
• b, c, f and j are FID-4 subarea connections that carry the NCP ownership
sessions from the three VTAM NNs.
• l is a FID-4 connection on which a VR-TG has been defined between NN1 and
NN2.
• a, d, e, g, h and m are FID-2 (APPN BF-TG) connections between VTAM
nodes and other APPN nodes.
• i and k are peripheral subarea connections to type 2 nodes with dependent
LUs.
Assuming that all nodes and all TGs are configured to support RTP, the following
observations can be made:
• Control Flows over RTP is supported over connections a, d, e and g as long
as they are not XCA connections. All the other APPN links are either VR-TG
or NCP-attached.
• A session between EN6 and EN4 can run over an RTP connection using the
route h-l-m-f-e. NCP 7, NCP 8 and NCP 9 all act as ANR routers, as does
NN3. NCP 7 and NCP 8 translate HPR protocols between APPN BF-TGs and
subarea explicit routes.
• A session between LU2 and NN2 over the connections k-j-m-c will not use
HPR. The first RTP-capable node on this path is NN2, although to the APPN
network NN3 is RTP-capable. Similarly, a session between LU2 and NN1
using k-j-m-l-b will not use HPR.
• A session between LU2 and NN2 over the connections k-j-f-d will use HPR on
the d link alone. Both NN3 and NN2 are RTP-capable. A session between
LU2 and NN1 using k-j-f-d-c-l-b will also use HPR between NN3 and NN1.
• If the start option HPRNCPBF is set to YES on NN3, the session between LU2
and NN2 can be established using the path k-j-f-f-m-c. The HPR portion of
this route is f-m-c, which retraces the subarea (internal to the CNN) TG f. If
As stated above, VTAM and NCP support HPR over VR-TG connections. The
Route Setup flows are the same as for BF-TG connections, and the VTAMs or
NCPs at the APPN/subarea boundary assign ANR labels in the usual way.
However, the NLPs do not flow on a virtual route (despite the name virtual
route-based transmission group). A VR is used for the Route Setup flow itself,
but if it is not required, it is deactivated after the Route Setup reply has been
sent. Subsequently, NLPs flow on an ER between the VR-TG endpoints. Once
again, this is done to minimize the amount of processing an ANR router has to
do. There is no VR flow control over the VR-TG portion of an RTP connection.
There is a second issue relating to the fact that HPDT is independent of both
VTAM and TCP/IP. If two VTAM nodes at the requisite level (V4R4 or above) are
connected over an HPDT-capable link, this link will be established as HPDT by
default even before the VTAMs have exchanged capabilities. Thus, an HPDT
connection between VTAMs where one VTAM has been configured not to
support RTP will be usable only by TCP/IP. To run SNA over an HPDT
connection, both VTAMs must be capable of RTP (which also makes them
capable of Control Flows).
The RTP PUs are stored by VTAM in the major node ISTRTPMN. Three distinct
types of RTP connections are identified:
• LULU, for LU-LU sessions
• CPCP, for CP-CP sessions
• RSETUP, for Route Setup flows
Please refer to NCP, EP and SSP Resource Definition Reference for a complete
description of these parameters. Note that that manual refers to a “composite
ANR node”, but the parameters apply equally to a composite RTP node where
VTAM is the endpoint of the RTP connections passing through the NCP.
The MAE can usually be expected to support the same software levels as the
2216 within a month or two. At the time of writing a new release (V3R1) of the
Multiprotocol Access Services software has just become available on the 2216,
and will shortly be available on the MAE. This release adds support for session
services extensions and the extended border node function.
A 3746 can be connected to a VTAM host, via the MAE, using MPC+. This
requires RTP support in VTAM, and will prevent SNA sessions from crossing
between the MAE and a subarea network unless VR-TG is available within the
subarea network. 2.1.1, “High-Performance Data Transfer” on page 18
describes the reasons for this.
The 2216 supports HPR over a wide variety of connections, since it must be able
to communicate with other multiprotocol routers as well as SNA workstations
and servers. The connection types over which HPR is available are:
• Token-ring, Ethernet (DIX V2 or IEEE 802.3) and FDDI
• 100 Mbps token-ring and Ethernet
• Frame relay BNN and BAN, over serial or ISDN link
• PPP, over serial or ISDN link
• ATM, which is available in two flavors:
− Forum-compliant LAN emulation.
− Native. Native ATM support is an HPR-only connection and requires
Control Flows over RTP in both partner nodes. Currently VTAM, the
2216, 2210 and MAE are the only products that can communicate using
native SNA over ATM.
Because the 2216 is always a network node, if a session traverses the
native ATM link between the 2216 and VTAM, it cannot then continue into
the subarea network (unless VR-TG is used in the subarea network).
This is because of the architectural restriction described in 2.1.1,
“High-Performance Data Transfer” on page 18.
The 2210 software, Multiprotocol Routing Services, is essentially the same code
as 2216 MAS. However, the 2210 is a smaller and less powerful machine, and
cannot concurrently support as many networking protocols and options as the
2216. Presently the only major functional difference between the 2210 and 2216
is that the 2210 has no channel attachments (ESCON or parallel).
CS/2 can also use HPR over generalized DLC (GDLC) connections. GDLC is a
developer API for support of intelligent adapters that provide the link protocol
within the adapter.
CS/NT support for HPR is almost identical to that in CS/2, but there are four
main differences:
• CS/NT supports Enterprise Extender.
• CS/NT supports HPR over X.25 connections.
• CS/NT does not support APPN MLTGs.
• CS/NT does not support HPR over frame relay through the Wide Area
Connector (WAC card), although it does so using non-IBM cards.
Communications Server/AIX does not support Control Flows over RTP, therefore
it cannot establish APPN MLTGs.
HPR support (and therefore ANR or RTP depending on the overall node
capabilities) can be individually specified on each APPN connection. As with
CS/AIX, it is not possible to configure some links as supporting ANR only and
others supporting RTP. The connections capable of HPR are:
• SDLC
• X.25
• ISDN
• Token-ring
• Ethernet
• ATM (LAN Emulation)
• Frame relay
• FDDI and SDDI
• Wireless
Because the DLUS presents itself as the network node server for the dependent
LUs, it must always be a network node.
Figure 9 shows a node with dependent LUs, connected to two VTAM APPN
nodes via their NCPs. Without DLUR/S, the SSCP-PU and SSCP-LU sessions go
directly to the VTAM (VTAMA) that owns the PU, while sessions between
dependent LUs and applications on VTAMB must flow via NCP1 and NCP2 since
NCP1 provides the boundary function.
With DLUR/S, the SSCP sessions still flow the same way, except they are now
encapsulated in the DLUR/S pipe. However, the PU is now an APPN node and
itself provides the boundary function, so that LU-LU sessions to VTAMB flow
directly via NCP2.
With DLUR/S, all these sessions flow directly to VTAM from the 3174. The
AS/400 is now just an APPN network node on the session path; even the LU 2
sessions are routed by the AS/400 since they are carried in APPN FID-2 packets.
If the DLUR node was RTP-capable (the 3174 is not), then the AS/400 could
perform ANR instead of ISR.
Even if your dependent LUs cannot be upgraded to a software level that supports
DLUR, the DLUR/S architecture will allow you to gain some of the benefits. This
involves connecting the dependent LU nodes to a node that provides the DLUR
function on behalf of external type 2 nodes, instead of internal LUs. Such a
DLUR node can be located anywhere within the APPN network, thus the
dependent LU sessions can be APPN/HPR almost all the way to the actual nodes
containing the dependent LUs. Figure 11 on page 30 illustrates this DLUR
Passthrough function.
There is a major difference between DLUR passthrough and the SNA gateway
configuration, in which the gateway node happens to use DLUR for its dependent
LUs. In the gateway configuration, the dependent PUs and LUs are defined
internally to the DLUR node, and therefore the SNA sessions (SSCP-PU,
SSCP-LU and LU-LU) are split within the DLUR. For example, a session between
TSO and a DLUR LU is terminated in the DLUR node and a second session goes
from the DLUR node to the external dependent LU.
In the DLUR passthrough configuration all the sessions are preserved intact
through the DLUR node. The SSCP-PU and SSCP-LU sessions are passed in and
out of the DLUR/S pipe, so that the SNA identity of the dependent resources is
preserved for the DLUS and any management product that is running with it.
The LU-LU sessions, while still being required to traverse the DLUR node, are
simply converted from type 2.0 FID-2 packets to APPN FID-2 (or NLPs, if HPR is
used). This is similar to what an NCP would do under the same circumstances.
The NCP would forward the packets as either FID-4 or APPN FID-2 packets
depending on the session path; it could not convert them directly to NLPs
because it cannot act as an RTP endpoint.
The examples shown later in this book include both internal LU and external LU
configurations of DLUR nodes.
The DLUR/S sessions are established between the two control points in the
server and the requester, but they are distinct from any CP-CP sessions that
may be present if the DLUS and DLUR happen to be adjacent. The format of the
RUs flowing on the DLUR/S sessions is similar to that on the CP-CP sessions.
The DLUR/S function follows switched procedures, and thus the dependent
resources are defined to VTAM (if at all) in switched major nodes. This is quite
independent of whether the DLUR node, or any external downstream PUs, are
connected to the network using leased or switched connections. The use of
switched procedures in DLUR/S reflects the dynamic nature of APPN in general
and DLUR/S in particular.
Connections other than DLUR/S that use switched procedures to contact VTAM
(for example, token-ring connections or real switched SDLC lines) rely upon the
exchange of XID frames after contact to provide identification and operational
parameters. This identification allows VTAM to select suitable definitions to
represent the LUs and PUs. The identification information can be either the
node ID (IDBLK and IDNUM) or the CP name of the remote resource. When
contact is made via an NCP, the XID image is sent to the owning VTAM on a
REQCONT request.
In general, we recommend that the IDBLK/IDNUM method is used for all DLUR/S
connections. There is always a possibility that a node contacting VTAM as a
DLUR-supported PU is also an APPN or LEN node adjacent to VTAM. In such a
case the same CP name might be called upon to resolve two quite different
connections.
It is important to remember that the LU-LU sessions are not encapsulated; they
flow natively on whichever link is chosen for them by APPN route selection.
Figure 12 on page 32 shows the relationship of PUs, LUs and control sessions in
a DLUR/S environment.
The DLUR/S sessions and the SSCP sessions they encapsulate are always
present in a DLUR/S environment. The adjacent link station PU is only known to
VTAM if the DLUR node is directly connected to VTAM's domain. The CP-CP
sessions are present only if the node is directly connected, and if the two nodes
agree to establish them.
All that is necessary for a VTAM to provide DLUS function is that it is configured
as an APPN network node. There is no definition anywhere that turns the DLUS
function on or off. Definitions may be required, however, to allow the DLUS to
contact a DLUR and to provide session services for the dependent LUs on that
DLUR. Either the DLUR or the DLUS may initiate the DLUR/S connection, and
once that connection is established then either the DLUR or the DLUS may
initiate the dependent LU/PU activation process.
If the DLUR initiates the DLUR/S connection, it sets up its CONWINNER LU 6.2
session in the normal APPN fashion, with the aid of its network node server if
appropriate. VTAM responds by setting up its own CONWINNER session and the
DLUR/S pipe is in place.
If VTAM is to initiate the DLUR/S connection, the process is similar, but this time
VTAM needs a definition to enable it to locate the DLUR. In fact, VTAM treats
the operation like a switched dial-out operation driven by PU activation. Thus
the definition required is that of a PU in a switched major node. The PU defined
thus is one of the PUs served by the DLUR, but the definition differs from normal
dial-out practice in two ways:
• The DLURNAME on the PATH statement identifies the CP name of the DLUR
node. This is sufficient information for VTAM to identify the DLUR node,
since APPN searching will be used to locate it.
• The actual PU is identified using the DLCADDR keywords on the PATH
statement. The node ID or the CP name are coded here. This information is
transmitted to the DLUR node, on the DLUR/S pipe, when the PU is to be
contacted.
Once the DLUR/S pipe is active, the DLUR may send a REQACTPU on behalf of
any of its served PUs. This is treated by VTAM exactly like a switched type 2
connection and the same definition considerations apply:
• A switched major node may be defined in the normal way, specifying the
type 2 DLUR PU and its dependent LUs.
• The configuration services exit ISTEXCCS can be used to define the
resources dynamically, upon being presented with the IDBLK/IDNUM of the
type 2 PU.
• If the DLUR node supports it, the dependent LUs can be defined using the
dynamic dependent LU definition exit ISTEXCSD.
DLUR resources are treated by VTAM just like any other type 2.0 resources, and
their displays have almost no indication that they are on a DLUR node. There is
just one message in the PU type 2 display that tells you what the DLUR CP name
is.
The ability to release all the PUs on a DLUR with one command, together with
the FINAL enhancement, was introduced by APAR OW25386.
Some older products, particularly the 3174 with LIC C6, the 2217 and the 6611,
also implement DLUR.
The attached external nodes can be type 2.0 or 2.1. Any node (such as
SDLC-attached type 2.0) that does not support the exchange of XID information
can have the relevant information (node ID) predefined for it in the 3746.
Different PUs can be served by different DLU servers, and primary and backup
DLU servers can be defined for any PU.
The 3746 MAE provides exactly the same DLUR functions as the 2216. These are
identical to those of the NNP, with the addition of internal DLUR PUs for the
TN3270E server function.
The 2216 provides DLUR functions for external type 2 devices attached to it by
any supported method except PPP. This includes Ethernet, token-ring, frame
relay, SDLC, X.25, ATM, and FDDI. The upstream connection to the APPN
network can be any 2216-supported link including PPP, enterprise extender and
ESCON or parallel channel.
The 2210 DLUR support is the same as that provided by the 2216. Remember
though, that the 2210 has less capacity than the 2216 and may not be able to
support the same range of options concurrently.
DLUR support is available to LUA sessions (as used by PComm), TN3270 clients
and SNA gateway clients.
CS/2 treats DLUR as being one more type of logical link. Multiple internal PUs
can be defined, each with a different DLU server (or pair of primary/alternate
DLU servers). LU definitions are then assigned to the PUs at the user's
discretion. DLUR PUs can be assigned a user-specified node ID to identify them
to the DLUS.
CS/2 supports cross-border DLUR flows. It can also act as a DLUR on behalf of
downstream type 2.0 nodes when it is acting as a branch extender, but it still
uses gateway rather than passthrough functions for this.
The DLUR support in CS/NT is identical to that in CS/2, with the exception that
CS/NT provides DLUR passthrough. This preserves the SNA identity and the
SSCP control sessions all the way to the downstream node. There are no
internal PUs and LUs defined at which the sessions are split. Thus management
products such as NetView can see the correct session configuration.
OS/400 provides DLUR support for all its internal LUs except for dependent
APPC LUs. This support therefore includes 3270 devices, 5250 devices (via 3270
emulation), RJE, and other AS/400-unique applications.
The border node functions described in Subarea to APPN Migration: VTAM and
APPN Implementation are very extensive in the two major areas for which they
were designed: joining together two distinct APPN networks, and subnetting a
single network to reduce topology and search traffic. However, there is one
particular scenario where neither the peripheral border node nor the extended
border node function fits the requirements closely. This is the case where an
organization has many remote branch locations, each with a number of APPN
nodes. In this case:
• The gateway between each branch and the backbone network must be a
network node (or multiple network nodes for availability), else the branch
workstations will be unable to communicate across the backbone.
• If the network is not subnetted, then each branch NN will have to maintain
the topology database, and each will exchange broadcast search traffic and
topology updates with the backbone and with each other.
• If extended or peripheral borders are used to isolate the branches, then the
branch gateways are no longer in the backbone topology database. There
are three possibilities:
1. Extended borders throughout, in other words an EBN in each branch and
one or more EBNs in the backbone
2. Peripheral borders managed by the branch gateways, which appear as
ENs to the backbone
3. Peripheral borders managed by the backbone EBNs, which appear as
ENs to the branches
All three have drawbacks:
− In all three cases, resources in the branches cannot be registered to a
CDS in the backbone. A search for an unknown resource, whether from
a branch or from the backbone, could be sent (at worst) to every branch.
− In option 3, session routes may not take the best path. In particular, a
session between two branches could traverse the backbone more than
once, even if there is a direct route between the branches. This is
because the branch gateway will search the backbone EBN (which looks
like a served EN) before it searches other NNs in its own subnetwork.
Similarly, it is possible for a session between two nodes in the same
branch to pass through the backbone twice.
− Options 1 and 3 do not permit a connection network to be used across
the backbone. If the wide area backbone uses switched protocols, then a
connection network can save much definition work. A connection
network cannot cross an APPN border.
− In option 2, using PBNs rather than EBNs for the branch gateways will
greatly restrict the function. Sessions can cross only one border
managed by a PBN, and such a border cannot support the same network
ID on each side.
Like a PBN, a branch extender implements both the network node and the end
node functionalities. It appears as a network node to the APPN nodes
downstream of it, but presents an end node image to the APPN nodes upstream
of it. Since each BX is seen by the backbone as an end node, none of the
topology in the branches is maintained in the backbone topology database.
This keeps the backbone topology database small and reduces the topology and
broadcast search traffic that the backbone must carry.
Since the BX looks like an end node upstream, it must have a network node
server in the backbone network. The BX presents itself to its NN server as the
owner of all resources downstream of itself. It uses a process called resource
hierarchy modification, whereby it modifies search and session setup requests
that cross its border. Downstream CPs are downgraded to LUs as far as the
backbone is concerned. What is actually an LU on an EN served by the BX is
presented upstream as an LU on the BX. Additional control vectors are added to
the Locate requests to tell the upstream network of the true hierarchy, but this
information is used only for management purposes and is not required for route
calculation or session setup. Thus the nodes in the backbone network need not
be aware of the branch extender's presence or function.
4.1.2 BX Features and Restrictions
As can be gathered from the preceding discussion, the BX design assumes that
APPN nodes downstream are all end nodes (or LEN nodes, whose capability is a
subset of the EN capability). Network nodes are not permitted in the
downstream network, nor are border nodes, which may look like ENs
downstream of the BX. However, cascaded BXs are permitted.
Direct connections between branches are permitted, but these must be on the
upstream side. These appear to the backbone as EN-to-EN connections, and are
perfectly acceptable for session routes. A session from an EN in one branch to
an EN in another will appear to the backbone as being between adjacent ENs
(actually BXs), and should take the direct path provided the TG characteristics
and COS tables have been properly set up.
Similarly, a direct connection between BXs within the same branch must be on
the upstream side. This means that a session between ENs in the same branch
but on different BXs must pass through the upstream link. This link, of course,
can be physically within the branch so the backbone need not be troubled with
the session.
A BX can choose only one NN in the backbone as its NN server, and maintain
CP-CP sessions with it, at any one time. This is not true of a border node
managing a peripheral border, which can appear as a served EN to multiple NNs
in the backbone. This restriction is imposed to prevent search looping; an EBN
has additional logic to prevent this.
Since a session RSCV has a maximum length of 255 bytes, the number of hops
on a session path is restricted by this limit. In base APPN, a border isolates the
route segments so that each subnetwork sees only a subset of the complete
route. With HPR, however, the RTP endpoints must be aware of the entire route
so that the Route Setup message flows from end to end. An HPR-capable border
node, if faced with an excessive RSCV, can split the route into back-to-back RTP
connections to alleviate the problem. A branch extender does not have this
logic and so, in extreme cases, an HPR route across a BX may be restricted to
as few as four hops.
The 2210, 2216 and MAE are always network nodes if they are APPN-capable.
BX support can be configured at the node, adapter (physical port) or station
(logical connection) level. There are no restrictions on sharing adapters; any
combination of upstream and downstream connections can be configured on any
set of ports.
Chapter 5. Enterprise Extender
One of the major changes in corporate networks in recent years has been the
growth in Internet-based communication, whose explosive expansion has
resulted in huge new business opportunities. At the same time this technology
has made its way into internal business applications, since the same workstation
with the same software can be used to access both internal and external
application sites. The result has been a massive increase in the need for TCP/IP
communication.
At the same time, many large organizations have retained and upgraded their
SNA networks, because of the requirement for consistent and manageable
service levels for critical applications. TCP/IP, because of its underlying
connectionless Internet Protocol, is inherently unstable and unmanageable. Its
former major advantage over SNA, the ability to reroute automatically around
failure points, is now present in HPR. The challenge facing most customers
today is how best to integrate the SNA and TCP/IP worlds while preserving the
advantages of both.
There are many possible technical solutions to this challenge, most of which are
beyond the scope of this book. However, no document describing HPR can be
regarded as complete without at least an overview of the enterprise extender
technology, which is one of these solutions.
Such a design requires that the IP network somehow takes account of the SNA
class of service. Also, in order to compensate for the IP network's inherent
instability, it requires an end-to-end protocol that can tolerate lost packets and
network outages with minimum effort. Fortunately, the former requirement can
be met on many IP backbones thanks to the presence of a priority scheme in
most routers. The latter requirement is met by HPR, which also happens to be a
prerequisite for the full exploitation of a sysplex. The enterprise extender
technology, therefore, is based on running HPR over UDP/IP.
The enterprise extender function is very similar in concept to the way native SNA
over ATM is implemented:
• The underlying transport network appears as an APPN TG, but uses logical
data link control (LDLC) to exchange XIDs and NLPs. LDLC is a subset of
LLC2 that eliminates much of the error handling and acknowledging that RTP
makes unnecessary at link level. LDLC, used also for native SNA over ATM,
includes only the XID, TEST, DISC, DM and UI frame types.
• The UDP port number identifies the destination of the datagram as being the
partner IP host's ANR routing function. Five UDP ports have been registered
with the IANA for this purpose. One of these default ports is mapped to each
of the four APPN transmission priorities, and the remaining port is used for
LDLC signalling (the XID exchange).
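A sketch of this port mapping is shown below. The base port number and the
ordering of the priorities are placeholders chosen for illustration; they are
not taken from this document.

# Illustrative only: EE_BASE_PORT stands for the first of the five registered
# UDP ports, and the priority ordering is an assumption.
EE_BASE_PORT = 12000

EE_PORTS = {
    "LDLC signalling (XID)": EE_BASE_PORT,
    "network priority":      EE_BASE_PORT + 1,
    "high priority":         EE_BASE_PORT + 2,
    "medium priority":       EE_BASE_PORT + 3,
    "low priority":          EE_BASE_PORT + 4,
}

def ee_destination_port(transmission_priority):
    """Pick the UDP destination port for an NLP from its transmission priority,
    so that the IP network can apply its own priority scheme to the datagrams."""
    return EE_PORTS[transmission_priority]

print(ee_destination_port("high priority"))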
An enterprise extender connection can use ARB flow control, as described in 1.5,
“ARB Flow and Congestion Control” on page 8, to ensure the smooth passage
of SNA traffic across an RTP connection. However, the standard ARB algorithm
is designed to promote fairness between various types of SNA traffic; IP traffic
tends to take as much resource as it can and does not take account of any rival
traffic. Therefore, a new version of ARB has been developed to allow enterprise
extender traffic to compete fairly with IP traffic on the same IP network. This is
called responsive mode ARB, and is an option for nodes that implement
enterprise extender.
In the testing scenarios described in the latter chapters of this redbook, we start
by implementing HPR on VTAM alone, and gradually extend it to the furthest
reaches of the network via NCP, 3746, 2216 and 2210 all the way to the
workstation. At the same time we extend DLUR support outwards from the 3746
to the workstation. Our objective in this first test is to get HPR working among
MVS VTAM nodes.
HPR can be utilized on all connections supported by a VTAM or its owned NCPs,
including subarea connections if VR-TG is defined. HPR allows you to migrate
existing NCP connections to APPN without incurring the additional overhead
associated with converting FID-4 links to FID-2 links; NCP handles the former
more efficiently than the latter. If the connections are converted directly to HPR,
then NCP is not aware of the sessions passing across those connections, with a
corresponding saving in processing power and storage.
Of course, the other main advantage of HPR is that a failure in a connection can
be bypassed (without disrupting the sessions that were using that connection) if
there is an alternative path. This applies equally to VTAM-attached,
NCP-attached and VR-TG connections.
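As a reminder of the coding involved, a VR-TG is requested on the CDRM definition (or globally with the VRTG start option); the following single line is only a minimal sketch, written in the same style as the CDRM definitions shown later in this book:

RAK      CDRM  SUBAREA=20,CDRDYN=YES,CDRSC=OPT,VRTG=YES,HPR=YES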
For a stand-alone VTAM node, RTP and ANR support has been available on
BF-TGs (FID-2 connections) since V4R3. V4R4 introduced VR-TG support and
Control Flows over RTP. The Control Flows option is not available, however,
over XCA LAN connections even with CS OS/390 Release 5. HPR over an
XCA-connected LAN was first made available in APAR OW26732.
If the HPR start option is coded as above, all connections owned by the VTAM
node by default take on the HPR capability of the node itself. Thus a VTAM that
is capable of RTP can act as an RTP endpoint on all attached links. However,
you may want to downgrade the capability of particular links so that (for
instance) HPR is not permitted at all on some connections, or perhaps HPR
traffic arriving on an NCP link can only be rerouted out of the node instead of
terminating in VTAM. This downgrading can be done in two ways:
• The second operand on the HPR start option assigns the default HPR
capability for all APPN connections. Thus HPR=(RTP,ANR) makes VTAM
itself capable of RTP, but allows it to perform only ANR routing for traffic
carried on all the attached links. Clearly this setup is the same as
HPR=ANR unless at least one link is upgraded to RTP support. The
permitted combinations are (RTP,ANR), (RTP,NONE) and (ANR,NONE).
• The HPR keyword on the link definition statement (PU for BF-TGs and CDRM
for VR-TGs) can be used to upgrade a connection to VTAM's level of HPR
support, or to downgrade it to no HPR support. The allowable values are
HPR=YES (upgrade) and HPR=NO (downgrade). The greatest flexibility
comes if the start option is (RTP,ANR). Then each APPN link can be
upgraded to RTP (HPR=YES), downgraded to base APPN (HPR=NO) or left
as ANR (no HPR coded).
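A minimal sketch of this combination follows; the resource names are purely illustrative and omitted operands are indicated by ellipses:

In the start options (ATCSTRxx):
  HPR=(RTP,ANR)                    node is RTP-capable, links default to ANR

In the major node definitions:
RAKPU1   PU    ...,HPR=YES         this BF-TG may terminate RTP traffic in VTAM
RA39     CDRM  ...,HPR=NO          no HPR at all on this VR-TG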
There are three other definitions that you can use to influence the way VTAM
implements HPR:
• The HPRPST start option (the path switch timer) controls how long VTAM will
wait for a path switch to complete before giving up and terminating the
connection. HPRPST comes into effect as soon as VTAM has detected a
failure on an RTP connection.
VTAM does not always initiate a path switch when it detects an RTP failure.
When using multinode persistent sessions in a sysplex, the VTAM in the
sysplex declares itself as mobile when establishing the RTP connection,
whereas VTAM in all other circumstances declares itself as stationary. The
architecture demands that if one partner is stationary and one is mobile, only
the mobile partner can initiate a path switch. Thus the RTP partner outside
the sysplex must simply wait for the MNPS partner to switch the path.
There is one HPRPST value for each APPN transmission priority; the default
is HPRPST=(8M,4M,2M,1M) for low, medium, high, and network priorities
respectively. It is recommended that the network priority value is much less
than the others. This is because if CP-CP sessions are running on an RTP
connection, it is better to terminate them quickly than to wait a long time for
the path switch to fail. Those CP-CP sessions will be restarted after
termination, and they may be needed to switch the LU-LU sessions that are
waiting on the other HPRPST values.
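For example, a start option along the following lines keeps the LU-LU values at their defaults while cutting the network priority timer well below them; the 30-second figure is purely illustrative and assumes the timer accepts a value coded in seconds:

  HPRPST=(8M,4M,2M,30S)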
Once VTAM is running as an APPN node, there are very few changes you need
to make to the start options to enable HPR functions. We let them all default so
the values in use were:
• HPR=RTP, so full RTP support is available on all connections. Before VTAM
V4R4 this option was not valid on ICNs. In our case, if the ICN is in the
middle of a session path it can also act as an ANR router. HPR=RTP can
be overridden by the HPR keyword in the following definitions: local SNA,
NCP, switched major node, and CDRM. Before VTAM V4R4 the HPR keyword
was not valid on the CDRM statement.
• HPRPST=(8M,4M,2M,1M), so the time allowed to switch an RTP path is
lower for the higher-priority connections. Before VTAM V4R4 there were
only three values because the network priority was not supported.
• PSRETRY=(0,0,0,0) so no automatic path switch is to be attempted.
• Link-level error recovery will be done only where required by the partner
node or by the DLC itself.
1 You can see that we have three RTP pipes between RAA and RAK, all
labelled LULU. These are for LU-LU sessions that we have established with
TSO. There are several possible reasons why the sessions are spread over
three connections. Most likely they take different paths through the network.
Possibly they have different APPN COS values, or some of them are the results
of path switches.
2 These pipes, labelled RSTP, were created for Route Setup flows. They are
the long-lived pipes described in 1.8, “HPR Control Flows over RTP Option” on
page 12, and their presence proves that the partner VTAMs are using Control
Flows over RTP on these links. RSTP pipes are only ever established between
adjacent nodes, and are only ever used for Route Setup flows. They are never
path-switched. There is no RSTP pipe between RAA and RAK because Control
Flows are not supported over a VR-TG.
3 Because VTAM V4R4 supports Control Flows over RTP, the same pairs of
nodes have a CPCP RTP pipe as have an RSTP pipe. The CPCP pipe is only
used for CP-CP sessions, and will be path-switched if a suitable alternative route
is available. Again, there is no CPCP pipe between RAA and RAK because their
only connection is a VR-TG.
The CPCP and RSTP pipes are set up at different times and under different
circumstances. The CPCP pipe is established immediately after the XIDs have
been exchanged, provided both nodes request CP-CP sessions and the
architecture permits their establishment at this time. There may be one, or at
most two, CP-CP pipes between any two nodes. If there are two, it means that
each partner has independently established the RTP connection at the same
time as the other, even though the route and the APPN COS are the same.
Usually on an EN to NN connection there is only one pipe, because the NN does
not set up its CONWINNER session until the EN has done so; thus the NN will
already have a suitable RTP pipe to re-use.
The RSTP pipe is established over each eligible connection between two nodes,
when an LU-LU session requires that link. Therefore, there may be as many
RSTP pipes as there are Control Flows capable links between two nodes.
D NET,ID=ISTRTPMN,E
IST097I DISPLAY ACCEPTED
IST075I NAME = ISTRTPMN, TYPE = RTP MAJOR NODE
IST486I STATUS= ACTIV, DESIRED STATE= ACTIV
IST1486I RTP NAME STATE DESTINATION CP MNPS TYPE
IST1487I CNR0000F CONNECTED USIBMRA.RAA NO LULU
IST1487I CNR0000D CONNECTED USIBMRA.RAS NO LULU
IST1487I CNR0000B CONNECTED USIBMRA.RAA NO LULU
IST1487I CNR00009 CONNECTED USIBMRA.RA39 NO LULU
IST1487I CNR00006 CONNECTED USIBMRA.RAS NO LULU
IST1487I CNR00005 CONNECTED USIBMRA.RAA NO LULU
IST314I END
Figure 17. Display of ISTRTPMN on RAK
To see how many sessions and which LUs are mapped on to these RTPs, as well
as the physical path they are using, a further display is needed. See Figure 18
for a display of one particular RTP pipe on RAA.
DISPLAY NET,ID=CNR0004A,SCOPE=ALL
IST097I DISPLAY ACCEPTED
IST075I NAME = CNR0004A , TYPE = PU_T2.1 4
IST1392I DISCNTIM = 00010 DEFINED AT PU FOR DISCONNECT
IST486I STATUS= ACTIV--LX-, DESIRED STATE= ACTIV
IST1043I CP NAME = RAK , CP NETID = USIBMRA , DYNAMIC LU = YES
IST1589I XNETALS = YES
IST933I LOGMODE=***NA***, COS=#CONNECT 10
IST1476I TCID X'1B6DB739000000AF' - REMOTE TCID X'1EC14EC8000001EF' 5
IST1481I DESTINATION CP USIBMRA.RAK - NCE X'D000000000000000' 5
IST1587I ORIGIN NCE X'D000000000000000' 5
IST1477I ALLOWED DATA FLOW RATE = 18 KBITS/SEC 6
IST1516I INITIAL DATA FLOW RATE = 6400 BITS/SEC 6
IST1511I MAXIMUM NETWORK LAYER PACKET SIZE = 4046 BYTES 7
IST1478I NUMBER OF UNACKNOWLEDGED BUFFERS = 0
IST1479I RTP CONNECTION STATE = CONNECTED - MNPS = NO
IST1480I RTP END TO END ROUTE - PHYSICAL PATH
IST1460I TGN CPNAME TG TYPE HPR
IST1461I 255 USIBMRA.RAK VRTG RTP 8
IST231I RTP MAJOR NODE = ISTRTPMN
IST654I I/O TRACE = OFF, BUFFER TRACE = OFF
IST1500I STATE TRACE = OFF
IST355I LOGICAL UNITS: 9
IST080I RAKTX022 ACT/S----Y RAKTX017 ACT/S----Y RAKTX058 ACT/S----Y
IST080I RAKTX046 ACT/S----Y RAKTX034 ACT/S----Y
IST314I END
Figure 18. Display of Active Sessions Mapped to Pipe CNR0004A on RAA
Now we display another (different) RTP connection on the RAK side (see
Figure 19).
D NET,ID=CNR00005,E
IST097I DISPLAY ACCEPTED
IST075I NAME = CNR00005, TYPE = PU_T2.1 456
IST1392I DISCNTIM = 00010 DEFINED AT PU FOR DISCONNECT
IST486I STATUS= ACTIV--LX-, DESIRED STATE= ACTIV
IST1043I CP NAME = RAA, CP NETID = USIBMRA, DYNAMIC LU = YES
IST1589I XNETALS = YES
IST933I LOGMODE=***NA***, COS=#CONNECT
IST1476I TCID X'1EC14EBE000001F5' - REMOTE TCID X'1B6DB7350000008E'
IST1481I DESTINATION CP USIBMRA.RAA - NCE X'D000000000000000'
IST1587I ORIGIN NCE X'D000000000000000'
IST1477I ALLOWED DATA FLOW RATE = 14 KBITS/SEC
IST1516I INITIAL DATA FLOW RATE = 6400 BITS/SEC
IST1511I MAXIMUM NETWORK LAYER PACKET SIZE = 4046 BYTES
IST1478I NUMBER OF UNACKNOWLEDGED BUFFERS = 0
IST1479I RTP CONNECTION STATE = CONNECTED - MNPS = NO
IST1480I RTP END TO END ROUTE - PHYSICAL PATH
IST1460I TGN CPNAME TG TYPE HPR
IST1461I 255 USIBMRA.RAA VRTG RTP
IST231I RTP MAJOR NODE = ISTRTPMN
IST654I I/O TRACE = OFF, BUFFER TRACE = OFF
IST1500I STATE TRACE = OFF
IST355I LOGICAL UNITS:
IST080I RAAAN010 ACT/S----Y RAAAT02 ACT/S----Y RAAAN009 ACT/S----Y
IST080I RAAT420 ACT/S----Y
IST314I END
Figure 19. Display of Active Sessions Mapped to Pipe CNR00005 on RAK
You can tell this is not the same pipe as previously displayed, because the
TCIDs are different. The NCEs, the route and the APPN class of service are the
same. Both CNR00005 and CNR0004A are using the VR-TG between RAA and
RAK, but which ER is actually used for each pipe cannot usually be determined
from VTAM displays.
17:53:21 V NET,INACT,ID=RAACTCA
17:53:21 IST097I VARY ACCEPTED
17:53:21 IST1494I PATH SWITCH STARTED FOR RTP CNR0004A 11
17:53:21 2 IST1133I RAAPC13 IS NOW INACTIVE, TYPE = LINK STATION
17:53:22 IST526I ROUTE FAILED FROM 10 TO 20 - DSA ...
17:53:22 IST526I ROUTE FAILED FROM 10 TO 20 - DSA ...
17:53:22 IST526I ROUTE FAILED FROM 10 TO 20 - DSA ...
17:53:22 2 IST1133I RAACTCA IS NOW INACTIVE, TYPE = CA MAJOR NODE
17:53:22 IST526I ROUTE FAILED FROM 10 TO 20 - DSA ...
17:53:22 IST1097I CP-CP SESSION WITH USIBMRA.RAK TERMINATED
17:53:22 IST1280I SESSION TYPE = CONWINNER - SENSE = 08420001
17:53:22 IST314I END
17:53:22 IST1196I APPN CONNECTION FOR USIBMRA.RAK INACTIVE - TGN = 255
17:53:22 IST819I CDRM RAK COMMUNICATION LOST - RECOVERY IN PROGRESS
17:53:22 IST1097I CP-CP SESSION WITH USIBMRA.RAK TERMINATED
17:53:22 IST1280I SESSION TYPE = CONLOSER - SENSE = 08420001
17:53:22 IST314I END
17:53:22 IST663I INIT OTHER REQUEST FAILED , SENSE=80140001
17:53:22 IST664I REAL OLU=USIBMRA.RAA REAL DLU=USIBMRA.RAK
17:53:22 IST889I SID = F7FF61644FEBEA1A
17:53:22 IST314I END
17:53:22 IST1110I ACTIVATION OF CP-CP SESSION WITH USIBMRA.RAK FAILED
17:53:22 IST1280I SESSION TYPE = CONWINNER - SENSE = 80140001
17:53:22 IST1002I RCPRI=0004 RCSEC=0000
17:53:22 IST314I END
17:53:23 IST1494I PATH SWITCH COMPLETED FOR RTP CNR0004A 11
17:53:23 IST1480I RTP END TO END ROUTE - PHYSICAL PATH
17:53:23 IST1460I TGN CPNAME TG TYPE HPR
17:53:23 IST1461I 21 USIBMRA.RAS APPN RTP
17:53:23 IST1461I 255 USIBMRA.RAK VRTG RTP
17:53:23 IST314I END
Figure 20. Path Switch for CNR0004A. The IST526I messages have been truncated to
allow the time stamps to appear.
We only saw messages related to the CNR0004A pipe on RAA 11. The other
two LULU pipes (CNR00046 and CNR00048) were also recovered, but we must
look to RAK for the related messages. The pipes are called CNR00005 and
CNR0000B on RAK. See Figure 21 on page 57 for RAK′s log, where 12 and
13 show the path switch messages.
The path switch is started by the first RTP endpoint that detects the failure. The
criteria used to detect the failure include loss of the local link (the first link on
the pipe), or a timeout. It is possible for both endpoints to start the path switch
process at the same time because both detect the problem at the same time. In
this case the two nodes may even calculate (or have calculated on their behalf)
different alternative routes for the new RTP path. If this happens, the path
(RSCV) calculated on behalf of the active partner (the one that set up the pipe
originally) will be chosen. For a deeper understanding of the mechanism that
triggers a path switch, please refer to Inside APPN - The Essential Guide to the
Next-Generation SNA, SG24-3669-03, or to the APPN/HPR Architecture Reference,
SV40-1018-02.
When VTAM initiates the path switch, you will see the IST1494I message
sequence in the log (see 13 in Figure 21 for example). If that VTAM did not
initiate the path switch, the messages appear only on the partner node, so we
have to look at both logs to understand the full picture.
Let us now take a look at the pipe CNR0004A on RAA. All the sessions are still
active and it is now a two-hop path passing through RAS 14 (see Figure 22 and
Figure 23 on page 59).
DISPLAY NET,ID=CNR0004A,SCOPE=ALL
IST097I DISPLAY ACCEPTED
IST075I NAME = CNR0004A , TYPE = PU_T2.1
IST1392I DISCNTIM = 00010 DEFINED AT PU FOR DISCONNECT
IST486I STATUS= ACTIV--LX-, DESIRED STATE= ACTIV
IST1043I CP NAME = RAK , CP NETID = USIBMRA , DYNAMIC LU = YES
IST1589I XNETALS = YES
IST933I LOGMODE=***NA***, COS=#CONNECT
IST1476I TCID X'1B6DB739000000AF' - REMOTE TCID X'1EC14EC8000001EF'
IST1481I DESTINATION CP USIBMRA.RAK - NCE X'D000000000000000'
IST1587I ORIGIN NCE X'D000000000000000'
IST1477I ALLOWED DATA FLOW RATE = 6000 BITS/SEC
IST1516I INITIAL DATA FLOW RATE = 6400 BITS/SEC
IST1511I MAXIMUM NETWORK LAYER PACKET SIZE = 4046 BYTES
IST1478I NUMBER OF UNACKNOWLEDGED BUFFERS = 0
IST1479I RTP CONNECTION STATE = CONNECTED - MNPS = NO
IST1480I RTP END TO END ROUTE - PHYSICAL PATH
IST1460I TGN CPNAME TG TYPE HPR
IST1461I 21 USIBMRA.RAS APPN RTP 14
IST1461I 255 USIBMRA.RAK VRTG RTP
IST231I RTP MAJOR NODE = ISTRTPMN
IST654I I/O TRACE = OFF, BUFFER TRACE = OFF
IST1500I STATE TRACE = OFF
IST355I LOGICAL UNITS:
IST080I RAKTX022 ACT/S----Y RAKTX017 ACT/S----Y RAKTX058 ACT/S----Y
IST080I RAKTX046 ACT/S----Y RAKTX034 ACT/S----Y
IST314I END
Figure 22. CNR0004A Display on RAA
D NET,ID=CNR00005,E
IST097I DISPLAY ACCEPTED
DISPLAY NET,ID=CNR00005,E
IST075I NAME = CNR00005, TYPE = PU_T2.1
IST1392I DISCNTIM = 00010 DEFINED AT PU FOR DISCONNECT
IST486I STATUS= ACTIV--LX-, DESIRED STATE= ACTIV
IST1043I CP NAME = RAA, CP NETID = USIBMRA, DYNAMIC LU = YES
IST1589I XNETALS = YES
IST933I LOGMODE=***NA***, COS=#CONNECT
IST1476I TCID X'1EC14EBE000001F5' - REMOTE TCID X'1B6DB7350000008E'
IST1481I DESTINATION CP USIBMRA.RAA - NCE X'D000000000000000'
IST1587I ORIGIN NCE X'D000000000000000'
IST1477I ALLOWED DATA FLOW RATE = 6000 BITS/SEC
IST1516I INITIAL DATA FLOW RATE = 6400 BITS/SEC
IST1511I MAXIMUM NETWORK LAYER PACKET SIZE = 4046 BYTES
IST1478I NUMBER OF UNACKNOWLEDGED BUFFERS = 1
IST1479I RTP CONNECTION STATE = CONNECTED - MNPS = NO
IST1480I RTP END TO END ROUTE - PHYSICAL PATH
IST1460I TGN CPNAME TG TYPE HPR
IST1461I 255 USIBMRA.RA39 VRTG RTP 15
IST1461I 21 USIBMRA.RAA APPN RTP
IST231I RTP MAJOR NODE = ISTRTPMN
IST654I I/O TRACE = OFF, BUFFER TRACE = OFF
IST1500I STATE TRACE = OFF
IST355I LOGICAL UNITS:
IST080I RAAAN010 ACT/S----Y RAAAT02 ACT/S----Y RAAAN009 ACT/S-----Y
IST080I RAAT420 ACT/S----Y
IST314I END
Figure 24. CNR00005 Display on RAK
The TSO users continued to work during our test without noticing any failure or
delay. The HPR path switch was very quick as you can see from the log time
stamps.
This test highlights some of the new facilities offered by VTAM V4R4 and later
releases, which include:
• HPR over VR-TG
• The ability of an interchange node to act as an RTP endpoint
• Control Flows over RTP
The path switch for a given RTP connection can also be forced manually with the
MODIFY RTP command, whose general form is:

F netproc,ID=CNRnnnnn,RTP
To verify this function we first reactivated the subarea connection (and therefore
the VR-TG) between RAA and RAK, to ensure that the optimum route was
available again. You can see in Figure 25 that the SSCP session, the VR-TG and
the CP-CP sessions between RAA and RAK start as soon as the subarea link is
restored 16. As soon as the SSCP-SSCP session has been re-established the
two nodes are adjacent again (in APPN terms) so they set up the APPN
connection.
We then issued the MODIFY command to switch the path, with the result seen at
17. The RTP connection has been switched back to the original, direct path.
V NET,ACT,ID=RAACTCA
IST097I VARY ACCEPTED
IST1132I RAACTCA IS ACTIVE, TYPE = CA MAJOR NODE
IST464I LINK STATION RAAPC13 HAS CONTACTED RAK SA 20
IST1132I RAAPC13 IS ACTIVE, TYPE = LINK STATION
IST1086I APPN CONNECTION FOR USIBMRA.RAK IS ACTIVE - TGN = 21
IST1132I USIBMRA.RAK IS ACTIVE, TYPE = CDRM
IST1096I CP-CP SESSIONS WITH USIBMRA.RAK ACTIVATED 16
*
F NETA0,RTP,ID=CNR0004A
IST097I MODIFY ACCEPTED
IST1494I PATH SWITCH COMPLETED FOR RTP CNR0004A 17
IST1480I RTP END TO END ROUTE - PHYSICAL PATH
IST1460I TGN CPNAME TG TYPE HPR
IST1461I 255 USIBMRA.RAK VRTG RTP
IST314I END
Figure 25. RAA Log during Forced Path Switch
The following displays show what happened when we used the PSRETRY option
on RAA. In Figure 26 we enabled the automatic path switch feature 18,
specifying that RTP pipes of network priority should be switched every 60
seconds, while the others should be switched every 90 seconds. We then
displayed an RTP connection, CNR00001, between RAA and RAK. This
connection used the path via RAS, then a VR-TG between RAS and RAK,
because the direct route had been deactivated.
F NETA0,VTAMOPTS,PSRETRY=(90,90,90,60) 18
*
IST1189I PSRETRY = LOW 90S PSRETRY = MEDIUM 90S
IST1189I PSRETRY = HIGH 90S PSRETRY = NETWRK 60S
*
D NET,ID=CNR00001,E
IST097I DISPLAY ACCEPTED
IST075I NAME = CNR00001 , TYPE = PU_T2.1
IST1392I DISCNTIM = 00010 DEFINED AT PU FOR DISCONNECT
IST486I STATUS= ACTIV--LX-, DESIRED STATE= ACTIV
IST1043I CP NAME = RAK , CP NETID = USIBMRA , DYNAMIC LU = YES
IST1589I XNETALS = YES
IST933I LOGMODE=***NA***, COS=#CONNECT
IST1476I TCID X'2538DF2500000090' - REMOTE TCID X'1EC14ECF000001F9'
IST1481I DESTINATION CP USIBMRA.RAK - NCE X'D000000000000000'
IST1587I ORIGIN NCE X'D000000000000000'
IST1477I ALLOWED DATA FLOW RATE = 10 KBITS/SEC
IST1516I INITIAL DATA FLOW RATE = 6400 BITS/SEC
IST1511I MAXIMUM NETWORK LAYER PACKET SIZE = 4046 BYTES
IST1478I NUMBER OF UNACKNOWLEDGED BUFFERS = 1
IST1479I RTP CONNECTION STATE = CONNECTED - MNPS = NO
IST1480I RTP END TO END ROUTE - PHYSICAL PATH
IST1460I TGN CPNAME TG TYPE HPR
IST1461I 21 USIBMRA.RAS APPN RTP
IST1461I 255 USIBMRA.RAK VRTG RTP
IST231I RTP MAJOR NODE = ISTRTPMN
IST654I I/O TRACE = OFF, BUFFER TRACE = OFF
IST1500I STATE TRACE = OFF
IST355I LOGICAL UNITS:
IST080I RAKTX024 ACT/S----Y RAKTX016 ACT/S----Y RAKTX021 ACT/S----Y
IST314I END
Figure 26. Display of CNR00001 before Path Switch Takes Place
We then activated the direct VR-TG connection between RAA and RAK, as shown
in Figure 27 on page 62, and waited. The messages relating to APPN
connection and CP-CP session establishment are not shown in the display.
Exactly 90 seconds after our MODIFY VTAMOPTS command, the route was
switched 19. The PSRETRY timer runs individually for each RTP connection, to
ensure that the path switch attempts are not all tried at the same time. When
we initialized the PSRETRY timer a path switch was attempted immediately for
CNR00001, but did not work because there was no alternative path. After 90
seconds the switch was attempted again, and this time completed successfully
because the new path was available.
Note: Specifying too short an interval might have an adverse impact on network
performance, especially if there are a large number of RTP connections.
If you refer to Figure 23 on page 59, this time we have established LU-LU
sessions between RAA and RAS over the MPC connection between them. The
alternate path available is that comprising the two VR-TGs from RAA to RAK and
from RAK to RAS.
While all the connections were active, we displayed the RTP pipe major node
from RAA, as shown in Figure 28. This shows two LU-LU session pipes, and the
expected CP-CP and long-lived pipes because the MPC connection supports
Control Flows.
Next, we deactivated the MPC connection. The results are shown in Figure 29
on page 63.
However, the rest of the display was not expected. The LU-LU session pipes
were not switched 5, even though there seemed to be a valid alternate APPN
route available. Although we were careful to set up the APPN environment for
our tests, we had forgotten the old rules of subarea networking.
We displayed the active subarea routes (Figure 30) and active CDRMs
(Figure 31) from RAA to see what was going on.
D NET,ROUTE,DESTSA=28
IST097I DISPLAY ACCEPTED
IST535I ROUTE DISPLAY 4 FROM SA 10 TO SA 28
IST808I ORIGIN PU = ISTPUSA0 DEST PU = ***NA*** NETID = USIBMRA
IST536I VR TP STATUS ER ADJSUB TGN STATUS CUR MIN MAX
IST537I 0 0 INACT 0 8 1 INOP
IST537I 0 1 INACT 0 8 1 INOP
IST537I 0 2 INACT 0 8 1 INOP
IST537I 1 0 INACT 1 5 1 INOP
IST537I 1 1 INACT 1 5 1 INOP
IST537I 1 2 INACT 1 5 1 INOP
IST537I 2 0 INACT 2 5 1 INOP
IST537I 2 1 INACT 2 5 1 INOP
IST537I 2 2 INACT 2 5 1 INOP
IST537I 3 28 1 INOP
IST537I 3 0 INACT 4 28 1 INOP
IST537I 3 1 INACT 4 28 1 INOP
Figure 30. Routes between RAA and RAS
There were no virtual routes defined between RAA and RAS through RAK
(subarea 20).
D NET,CDRMS
IST097I DISPLAY ACCEPTED
IST350I DISPLAY TYPE = CDRMS
IST089I RAACDRM TYPE = CDRM SEGMENT , ACTIV
IST1546I CDRM STATUS SUBAREA ELEMENT NETID SSCPID
IST1547I RAA ACTIV 10 1 USIBMRA 99
IST1547I RAK ACTIV 20 1 USIBMRA 20
IST1454I 2 RESOURCE(S) DISPLAYED
Figure 31. Active CDRMs from RAA
Not surprisingly, there was no SSCP-SSCP session between RAA and RAS
either. That would have required a virtual route.
In fact we had no definitions for the missing routes and CDRMs, so we corrected
the problem by defining path tables on all three nodes (RAA, RAS, and RAK) and
adding an extra CDRM definition to both RAA and RAS. Figure 32 on page 65
shows the resulting CDRM major node for RAA.
*******************************************************************
* *
* CDRM MAJORNODE FOR RAA *
* *
*******************************************************************
VBUILD TYPE=CDRM
NETWORK NETID=USIBMRA
RAA CDRM SUBAREA=10,CDRDYN=YES,CDRSC=OPT
RAK CDRM SUBAREA=20,CDRDYN=YES,CDRSC=OPT,TGP=CHANNEL
RAS CDRM SUBAREA=28,CDRDYN=YES,CDRSC=OPT,TGP=CHANNEL
* VRTG=YES,HPR=YES
RA39 CDRM SUBAREA=39,CDRDYN=YES,CDRSC=OPT,TGP=CHANNEL
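The PATH statements we added are not reproduced here; as an indication of their shape, an entry on RAA consistent with the display in Figure 33 might look like the following sketch (the ER and VR numbers are ours):

RAS28    PATH  DESTSA=28,ER0=(20,1),ER1=(20,1),VR0=0,VR1=1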
We now activated the new path tables. A display of the path table from RAA
includes the entries seen in Figure 33.
D NET,PATHTAB
IST097I DISPLAY ACCEPTED
IST350I DISPLAY TYPE = PATH TABLE CONTENTS
IST516I DESTSUB ADJSUB TGN ER ER STATUS VR(S)
IST517I 28 20 1 0 INACT 0
IST517I 28 20 1 1 INACT 1
Figure 33. Path Table from RAA
Now we had some VRs but no SSCP-SSCP session. We activated our new
CDRMs to see the display in Figure 34.
VARY NET,ACT,ID=RAACDRM1
IST097I VARY ACCEPTED
IST1132I RAACDRM1 IS ACTIVE, TYPE = CDRM SEGMENT
IST1086I APPN CONNECTION FOR USIBMRA.RAS IS ACTIVE - TGN = 255 6
IST1132I USIBMRA.RAS IS ACTIVE, TYPE = CDRM
IST1096I CP-CP SESSIONS WITH USIBMRA.RAS ACTIVATED 7
Figure 34. CDRM and VR-TG Activation
As soon as the SSCP-SSCP session was established, both the VR-TG 6 and the
CP-CP sessions 7 were set up. The APPN connections were now as shown in
Figure 35 on page 66.
After the corrections were made, we restored the failed MPC connection, set up
the LU-LU sessions again, and restarted our test. Again, we started by
displaying the RTP pipes from RAA as shown in Figure 36.
D NET,ID=ISTRTPMN,E
IST097I DISPLAY ACCEPTED
IST075I NAME = ISTRTPMN , TYPE = RTP MAJOR NODE
IST486I STATUS= ACTIV , DESIRED STATE= ACTIV
IST1486I RTP NAME STATE DESTINATION CP MNPS TYPE
IST1487I CNR0007A CONNECTED USIBMRA.RAS NO LULU
IST1487I CNR00079 CONNECTED USIBMRA.RAS NO RSTP
Figure 36. RTP Connections from RAA
We see again the LU-LU session pipe, and the long-lived pipe which was used to
set up the LU-LU session pipe. Although the Route Setup messages flow over
an RTP connection (RSTP pipe), the CP-CP sessions themselves are on the
VR-TG and therefore have no CPCP pipe displayed.
Next, we inactivated the MPC connection again and this time we saw the
messages in Figure 37 on page 67.
A display of the RTP major node (Figure 38) shows that the LU-LU RTP pipe
CNR0007A is now the only active RTP connection between RAA and RAS.
D NET,ID=ISTRTPMN,E
IST097I DISPLAY ACCEPTED
IST075I NAME = ISTRTPMN , TYPE = RTP MAJOR NODE
IST486I STATUS= ACTIV , DESIRED STATE= ACTIV
RTP NAME STATE DESTINATION CP MNPS TYPE
CNR0007A CONNECTED USIBMRA.RAS NO LULU
Figure 38. RTP Major Node after Path Switch
However, if you look at the scenario described in 6.5, “Path Switch over VR-TG”
on page 62, you will see that this does not happen. In Figure 29 on page 63, the
LU-LU session pipe is terminated (the path switch timer expires) for CNR00076
5 before RAA attempts to set up new CP-CP sessions 12. RAA cannot make
this attempt before the existing CP-CP sessions are terminated. Those sessions
are currently on a CP-CP RTP connection, and will not be terminated until their
path switch timer expires 3. Therefore, you must make sure that CP-CP
sessions that traverse an RTP connection either have an alternative Control
Flows capable connection available, or will be terminated quickly.
Note
Ensure that the path switch timer for network priority is much lower than
those for the other three priorities. This will give failed CP-CP sessions a
chance to recover before LU-LU session pipes time out.
Note, however, that some products will terminate CP-CP sessions immediately if
there is no Control Flows-capable alternative connection available. This
removes the need to tune the path switch timer for CP-CP sessions.
This will not usually be a problem if you are converting a working subarea
network to APPN, but is quite likely to occur if you add a subarea-capable
sysplex to an existing subarea network.
Note
Also, every pair of subarea nodes between which an LU-LU session will be
established must have a suitable VR defined between them. If this is not so,
session setup will fail.
Note
VTAM is not aware of the actual speed of a VR-TG, since a VR-TG can comprise
a number of routes of various speeds. It assumes a CAPACITY of 8
kbps for such a VR-TG unless you tell it otherwise. Therefore, it is important that
you make sure that VTAM is aware of the true capacity. This can be done by
coding CAPACITY= on each CDRM definition for a VR-TG connection. However,
a better way is to use TG profiles as we illustrate in Figure 32 on page 65.
When we corrected the problem of the missing CDRM definitions, we added the
keyword TGP=CHANNEL to both CDRM definitions. This points to an entry
named CHANNEL in the TG profiles member of VTAMLST. An extract from the
TG profiles we were using (IBMTGPS) is shown in Figure 39.
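The extract itself is in that figure; as an indication of the format, a TG profile entry of the kind we used might look like the following sketch, where the 36M capacity matches the discussion below and the remaining operands are typical values we have assumed:

CHANNEL  TGP   CAPACITY=36M,COSTTIME=0,COSTBYTE=0,                     X
               PDELAY=NEGLIGIB,SECURITY=UNSECURE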
Why does the display show a value of 35M when the definition is 36M? This is
because the CAPACITY value is stored internally (as are all the TG and node
characteristics) as a single byte. Thus precision may be lost, and numbers that
are close together will be represented by the same value. The encoding of the
CAPACITY value is in a form similar to floating point encoding, and has the effect
that high values are less granular than low values. The actual coded value can
be seen in the display of an APPN link station (message IST1106I). The
CAPACITY is the second byte of the string of hex under TG characteristics.
Note
The ARB parameters required for an RTP connection are normally obtained from
the XID exchanges on the TGs between nodes, and made known to the RTP
endpoints via the Route Setup and connection setup flows. If the HPR
connection is actually a VR-TG, this is still true, since the SSCP-SSCP session is
used to convey the XID parameters. However, if the HPR connection is simply
an ER within a CNN there is no XID exchange and the ARB parameters must be
coded. They are coded within the NCP source, on the BUILD statement.
Another reason for additional NCP coding relates to the fact that there is no
subarea flow control across an HPR VR-TG. NLPs across the subarea network
flow on ERs, whereas subarea flow control is done using VR pacing. Therefore,
additional parameters are provided in the NCP to address this issue.
HPRMPS defines the largest packet size that can be sent across the CNN without
being segmented on any of the subarea links along the path. This value must be
at least 768 bytes for HPR to work at all. If all the NCPs in the CNN are V7R3 or
above, they are capable of determining the value for themselves so code
HPRMPS=0 to let them do this. If some of the NCPs are at an earlier level, the
later NCPs will not be able to work out the maximum size for those links owned
by the back-level NCPs. There is a table in NCP, SSP and EP Resource
Definition Reference to enable you to work out the correct value. If the HPRMPS
value is too large, then extra overhead may be incurred by unnecessary
segmenting and reassembly.
HPRATT is used to define the average time in microseconds that it takes to route
1200 bits across the composite ANR node′s subarea network. NCP estimates
this time for the subarea network if the value is greater than or equal to 200,000
microseconds. If the value is less than 200,000 microseconds, code a value on
the HPRATT keyword. The value you provide is used when transmission begins,
being sent in the ARB segment at connection setup time. After transmission has
begun the actual value, which the RTP endpoints track, is used.
There is a table in NCP, SSP and EP Resource Definition Reference to help you
work out the correct values for HPRATT in your network.
HPRMLC specifies the capacity in kilobits per second of the slowest subarea
transmission group in the composite ANR node′s subarea network that can carry
APPN HPR data. Again, the NCP, SSP and EP Resource Definition Reference
provides useful assistance in calculating the correct values. Note that the
default is 9 kbps, which means that the initial data flow rate allowed by ARB will
be just 900 bits per second. You should always code a value here corresponding
to your actual network, otherwise performance may suffer.
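A sketch of how these keywords might appear on the BUILD statement follows; HPRMPS=0 reflects the recommendation above for a CNN whose NCPs are all V7R3 or later, while the HPRATT and HPRMLC figures are illustrative assumptions that should be replaced by values worked out from the tables in NCP, SSP and EP Resource Definition Reference:

RA6NCR0  BUILD ...,                                                    X
               HPRMPS=0,          V7R3+ NCPs calculate the packet size X
               HPRATT=50000,      assumed 50 ms to route 1200 bits     X
               HPRMLC=256         assumed slowest HPR-capable TG, kbps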
The ARB mechanism determines, at Route Setup time, what traffic rate it will use
when the RTP connection is initialized. The Route Setup flows will contain the
lowest CAPACITY value of any link in the RTP path, and the RTP endpoint will
use a figure of 10% of that value as the ARB starting point. As the Route Setup
traverses the RTP path, each node checks the CAPACITY value of the next link
and substitutes that value in the Route Setup if it is found to be smaller than the
previous value; thus the Route Setup reaches its destination with the minimum
CAPACITY value.
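As a worked example, if the slowest link on the path has CAPACITY=64K, the RTP
endpoint starts ARB at 6,400 bits per second; this is consistent with the
INITIAL DATA FLOW RATE = 6400 BITS/SEC values seen in the displays earlier in
this chapter.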
If an NCP is on the Route Setup path, it cannot use the APPN CAPACITY value
as it does not know it. There is no CP in an NCP and therefore no knowledge of
the TG characteristics. Therefore, until V7R5 the NCP used the link SPEED value
(if a FID-2 connection) or the HPRMLC value from the BUILD (if a VR-TG
connection). Note that Route Setup does not flow between CPs; like a BIND, it
flows on what will become the session path.
The issue arises because the SPEED keyword is not coded correctly, if at all, on
many NCP definitions. There is usually no need to code it unless internal
clocking is used or it is required for performance monitoring.
NCP V7R5 learns the true CAPACITY values of its FID-2 links from its owning
VTAM (at VTAM V4R4 or above), and therefore uses the correct values in the
Route Setup for the ARB algorithm. This function was implemented by APAR
IR33946.
The corresponding thresholds for subarea TGs are defined in the PATH
statements. The first ERn keyword on the PATH statement specifies the subarea
flow control thresholds for the three subarea transmission priorities as well as a
total threshold for all priorities combined. The sixth and last operand, the total
threshold, is also used by NCP to limit the amount of HPR data that can be
queued on the TG at any time. When the threshold is exceeded, additional data
is discarded.
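Based on this description, an NCP PATH entry with explicit thresholds might be sketched as follows; the threshold values (and their units) are illustrative assumptions only, and the exact format should be checked in the NCP definition manuals:

         PATH  DESTSA=10,                                              X
               ER0=(6,1,5000,5000,5000,30000),                         X
               VR0=0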
While RTP will recover lost data packets, you should try to ensure that packet
loss is a rare occurrence. Lost data will cause drastic cuts in the ARB flow
control rates, thus affecting throughput.
NCP supports the use of link-level error recovery on all its HPR-capable
connections, but allows it to be turned off only on the following:
• Frame relay (subarea or peripheral)
• Token-ring (peripheral, and subarea only on TIC-3)
• ISDN (subarea or peripheral)
This is because, as you move from LEN or APPN to HPR, session awareness
moves from the NCP boundary function to the RTP endpoint (which is never
NCP).
You can monitor your session control block usage with NTuneMON.
The two CNNs have three subarea connections, on which a VR-TG is defined: a
CTC, a channel link from RAK to RA6NCR0, and a token-ring connection between
the NCPs. These connections form part of VR0, VR1 and VR2 respectively
between the host subareas. We also have an APPN connection over MPC
between the two VTAMs; thus there are four physical paths between them. In
APPN terms, as shown in Figure 42 on page 78 (the topology as seen from RAA),
only two TGs appear between the two VTAMs: the VR-TG (TG 255) and the MPC
connection (TG 21).
D NET,TOPO,ID=RAA,LIST=ALL
IST097I DISPLAY ACCEPTED
IST350I DISPLAY TYPE = TOPOLOGY
IST1295I CP NAME NODETYPE ROUTERES CONGESTION CP-CP WEIGHT
IST1296I USIBMRA.RAA NN 128 NONE *NA* *NA*
IST1579I ------------------------------------------
IST1297I ICN/MDH CDSERVR RSN HPR
IST1298I YES YES 115782 RTP
IST1579I ------------------------------------------
IST1223I BN NATIVE TIME LEFT
IST1224I NO YES 15
IST1299I TRANSMISSION GROUPS ORIGINATING AT CP USIBMRA.RAA
IST1357I CPCP
IST1300I DESTINATION CP TGN STATUS TGTYPE VALUE WEIGHT
IST1301I USIBMRA.RAK 255 OPER INTERM VRTG YES *NA* 2
IST1301I USIBMRA.RAK 21 OPER INTERM YES *NA* 1
Figure 42. APPN View of the Configuration
The RTP connections CNR00007 and CNR00006 4 are the session pipe and the
long-lived Route Setup pipe respectively.
CNR00007 is carrying our 3270 session and is using the APPN BF-TG 3. As the
subarea CTC link was activated before the local SNA major node for RAK, the
CP-CP sessions were started over the VR-TG and therefore there is no CP-CP
pipe in the ISTRTPMN major node 4.
Next, we brought down the MPC connection between RAA and RAK, as seen in
Figure 44 on page 80.
V NET,INACT,ID=RAAAHHK
IST097I VARY ACCEPTED
IST1196I APPN CONNECTION FOR USIBMRA.RAK INACTIVE - TGN = 21
IST1494I PATH SWITCH STARTED FOR RTP CNR00007
IST1133I RAAIRAK IS NOW INACTIVE, TYPE = PU_T2
IST1488I INACTIVATION FOR RTP CNR00006 AS PASSIVE PARTNER COMPLETED
IST1416I ID = CNR00006 FAILED - RECOVERY IN PROGRESS
IST1133I RAAAHHK IS NOW INACTIVE, TYPE = LCL SNA MAJ NODE
IST1136I VARY INACT CNR00006 SCHEDULED - UNRECOVERABLE ERROR
IST1133I CNR00006 IS NOW INACTIVE, TYPE = PU_T2.1
IST871I RESOURCE CNR00006 DELETED
IST1494I PATH SWITCH COMPLETED FOR RTP CNR00007
IST1480I RTP END TO END ROUTE - PHYSICAL PATH
IST1460I TGN CPNAME TG TYPE HPR
IST1461I 255 USIBMRA.RAK VRTG RTP 5
IST314I END
Figure 45. Network Log on RAA
As we expected, the long-lived pipe CNR00006 was deactivated while the session
pipe was switched to the VR-TG connection 5. The CP-CP sessions were
unaffected by the failure so there is no message related to them.
Now we verified the active virtual routes between RAA and RAK, as seen in
Figure 46 on page 81 on the RAA side.
In our case we have VR0 active because the SSCP and CP-CP sessions are
flowing over it. VR1 is inactive, but this does not prove that the HPR connection
is also flowing across VR0. In fact, the only alternative path at the time is VR1,
whose ER has not yet been activated 11; VR2 via the token-ring has not yet
been defined. The fact that ER1 was never activated indicates that ER0 is,
indeed, being used for the HPR connection. However, in the current
implementation, the only way to be sure of the actual link being used by HPR
over a VR-TG is to trace the physical link and verify the presence of NLPs.
V NET,INACT,ID=RAACTCA
IST097I VARY ACCEPTED
IST526I ROUTE FAILED FROM 10 TO 20 - DSA
IST526I ROUTE FAILED FROM 10 TO 20 - DSA
IST1196I APPN CONNECTION FOR USIBMRA.RAK INACTIVE - TGN = 255 6
IST819I CDRM RAK COMMUNICATION LOST - RECOVERY IN PROGRESS
IST1110I ACTIVATION OF CP-CP SESSION WITH USIBMRA.RAK FAILED
IST1280I SESSION TYPE = CONWINNER - SENSE = 80200007
IST1002I RCPRI=0004 RCSEC=0000
IST314I END
IST1133I RAACTCA IS NOW INACTIVE, TYPE = CA MAJOR NODE
IST1086I APPN CONNECTION FOR USIBMRA.RAK IS ACTIVE - TGN = 255 7
IST1132I USIBMRA.RAK IS ACTIVE, TYPE = CDRM
IST1096I CP-CP SESSIONS WITH USIBMRA.RAK ACTIVATED
IST1494I PATH SWITCH STARTED FOR RTP CNR00007
IST1494I PATH SWITCH COMPLETED FOR RTP CNR00007
IST1480I RTP END TO END ROUTE - PHYSICAL PATH
IST1460I TGN CPNAME TG TYPE HPR
IST1461I 255 USIBMRA.RAK VRTG RTP 8
IST314I END
Figure 47. Network Log on RAA
You can see that the TG 255 between RAA and RAK is deactivated 6 but it is
then opened again 7. Of course, it now traverses a different subarea VR.
You can see that CNR00007 has been switched correctly, as expected, to the
same VR-TG 8. You are not told the new subarea route after the path switch
message; TG 255 is always shown.
We then verified what happened to the subarea routes from RAA, as shown in
Figure 48.
D NET,ROUTE,DESTSUB=20
IST097I DISPLAY ACCEPTED
IST535I ROUTE DISPLAY 8 FROM SA 10 TO SA 20
IST808I ORIGIN PU = ISTPUSA0 DEST PU = ***NA*** NETID = USIBMRA
IST536I VR TP STATUS ER ADJSUB TGN STATUS
IST537I 0 0 INACT 0 20 1 INOP
IST537I 0 1 INACT 0 20 1 INOP
IST537I 0 2 INACT 0 20 1 INOP
IST537I 1 0 ACTIV 1 6 1 ACTIV3
IST537I 1 1 INACT 1 6 1 ACTIV3
IST537I 1 2 ACTIV 1 6 1 ACTIV3
Figure 48. Active Subarea Routes on RAA
You can see that the only active explicit route now is ER1, which goes through
NCP RA6NCR0. All sessions are flowing over VR1, while the RTP pipe CNR00007
must also flow across ER1 as ER0 is inoperative.
We started by disabling the NCP connection between RA6NCR0 and RAK, then
established a session, and an HPR connection, between RAK and RAA through
the VR-TG as shown in Figure 49 on page 83.
The above display verifies that we had only two routes available for the VR-TG.
ER0 was the CTC connection between RAA and RAK, and ER2 was through the
token-ring connection. Since ER0 was the only active ER, the RTP pipe must
have been flowing through the CTC.
The above display shows that the connection is still going through TG255, so we
did another display of the route (see Figure 52) to check where the RTP pipe
now was. Since ER2 was the only active ER, CNR00008 must have taken that
path.
D NET,ROUTE,DESTSUB=20
IST097I DISPLAY ACCEPTED
IST535I ROUTE DISPLAY 12 FROM SA 10 TO SA 20
IST808I ORIGIN PU = ISTPUSA0 DEST PU = ***NA*** NETID = USIBMRA
IST536I VR TP STATUS ER ADJSUB TGN STATUS CUR MIN MAX
IST537I 0 0 INACT 0 20 1 INOP
IST537I 0 1 INACT 0 20 1 INOP
IST537I 0 2 INACT 0 20 1 INOP
IST537I 1 0 INACT 1 6 1 INOP
IST537I 1 1 INACT 1 6 1 INOP
IST537I 1 2 INACT 1 6 1 INOP
IST537I 2 0 INACT 2 6 1 ACTIV3
IST537I 2 1 INACT 2 6 1 ACTIV3
IST537I 2 2 ACTIV 2 6 1 ACTIV3 11 10 30
Figure 52. Route Showing ER2 Active
If you want such a session to use an RTP connection once it leaves RAA's
domain, the session path must traverse RAA itself because that is the only
RTP-capable subarea node in the domain. Now if the connection from RA6NCR0
to RAK is the only one available, the session will have to go back on itself
through RA6NCR0 to reach RAK by means of an RTP pipe. Thus the path will be
(W05333 - RA6NCR0 - RAA) as the subarea portion and (RAA - RA6NCR0 - RAK)
as the HPR portion.
HPRNCPBF defaults to NO, and is modifiable. If the value is changed from YES to
NO, new sessions can use existing RTP pipes, but no new RTP pipes will be set
up that cause the traffic to go through an NCP twice. Sessions will still be
established but will use base APPN until they happen to reach an RTP-capable
node.
D NET,ROUTE,DESTSUB=20
IST097I DISPLAY ACCEPTED
IST535I ROUTE DISPLAY 22 FROM SA 10 TO SA 20
IST808I ORIGIN PU = ISTPUSA0 DEST PU = ***NA*** NETID = USIBMRA
IST536I VR TP STATUS ER ADJSUB TGN STATUS
IST537I 0 0 INACT 0 20 1 INOP
IST537I 0 1 INACT 0 20 1 INOP
IST537I 0 2 INACT 0 20 1 INOP
IST537I 1 0 ACTIV 1 6 1 ACTIV3
IST537I 1 1 INACT 1 6 1 ACTIV3
IST537I 1 2 ACTIV 1 6 1 ACTIV3
Figure 53. Subarea Routes between RAA and RAK
From an APPN point of view, as seen in Figure 54, there was only one active TG.
D NET,TOPO,ID=RAA,LIST=ALL
IST097I DISPLAY ACCEPTED
IST350I DISPLAY TYPE = TOPOLOGY
IST1295I CP NAME NODETYPE ROUTERES CONGESTION CP-CP WEIGHT
IST1296I USIBMRA.RAA NN 128 NONE *NA* *NA*
IST1579I ------------------------------------------
IST1297I ICN/MDH CDSERVR RSN HPR
IST1298I YES YES 115782 RTP
IST1579I ------------------------------------------
IST1223I BN NATIVE TIME LEFT
IST1224I NO YES 14
IST1299I TRANSMISSION GROUPS ORIGINATING AT CP USIBMRA.RAA
IST1357I CPCP
IST1300I DESTINATION CP TGN STATUS TGTYPE VALUE WEIGHT
IST1301I USIBMRA.RAK 255 OPER INTERM VRTG YES *NA*
IST1301I USIBMRA.RAK 21 INOP INTERM YES *NA*
Figure 54. Display of Topology Database
Next we displayed the ISTRTPMN major node on RAA to verify the current
number of active RTP connections to RAK and which sessions they carried (see
Figure 55 on page 86).
We only had one active pipe, and it was carrying three sessions between RAA
and RAK. These sessions had been established before we broke the MPC and
the CTC. They were still there because HPR always managed to find an
alternative path for them.
Then we activated the workstation W05333, with its dependent LUs, connected to
NCP subarea 6. From the USS10 message displayed by RAA we logged on to
the TSO subsystem on RAK.
The session path for the TSO session, as expected, went via RA6NCR0 directly to
RAK. The display in Figure 56 on page 87 confirms that no new RTP
connections have been set up, so our new session uses base APPN.
To see where the session went, we displayed the workstation dependent LU from
RAA, as shown in Figure 57 on page 88.
In fact this display does not prove whether the session used base APPN or HPR;
all it shows is that W0533302 was an LU owned by RAA and connected to
RA6NCR0, and that it was in session with RAK′s TSO. What did prove that the
session was base APPN was a similar display from RAK. This showed that the
link station being used by the CDRSC named W0533302 was the FID-2 connection
to RA6NCR0 and not a PU of the form CNRxxxxx.
Now we enabled the HPRNCPBF VTAM start option as you can see in Figure 58.
F NETA0,VTAMOPTS,HPRNCPBF=YES
IST097I MODIFY ACCEPTED
IST223I MODIFY COMMAND COMPLETED
D NET,VTAMOPTS,OPT=HPRNCPBF
IST097I DISPLAY ACCEPTED
IST1188I ACF/VTAM V4R4 STARTED AT 12:46:02 ON 01/28/98
IST1349I COMPONENT ID IS 5695-11701-401
IST1348I VTAM STARTED AS INTERCHANGE NODE
IST1189I HPRNCPBF = YES
IST314I END
Figure 58. Modify Start Option Command Issued on RAA
We can see that the new connection CNR00045 carries the session to TSO 9
and uses the VR-TG 10. The session path is shown in Figure 60 on page 90,
and comprises (W05333 - RA6NCR0 - RAA - RA6NCR0 - RAK). The RTP
connection forms only part of this route.
Enabling the CS/2 workstation to be an RTP endpoint allows any session to use
HPR all the way. However, if dependent LU sessions are to traverse an HPR
path all the way they must first be capable of APPN. This means using DLUR,
because only DLUR allows that vital first hop (CS/2 to NCP) to be APPN and
therefore HPR. Figure 61 on page 91 illustrates this setup.
The CS/2 machine CP05153 was configured as a network node, although the
node type did not affect the HPR or DLUR configuration. CP05153 was connected
to both NCPs (and therefore to both VTAM APPN nodes), and we wanted it to be
able to use both VTAMs as dependent LU servers at the same time. This was
not necessary; we could have defined one VTAM as the primary DLU server and
the other as backup. However, we wanted to demonstrate the full flexibility of
the DLUR configuration on CS/2.
Since both our VTAMs were NNs, no further definitions were required to give
them DLUS capability. All VTAM NNs from V4R2 onwards are automatically able
to perform DLUS functions.
For the real link stations, we used no VTAM definitions and allowed them to be
created dynamically using the configuration services exit ISTEXCCS. These link
stations had no dependent LUs, so each VTAM acquired a link station
represented by the PU W05153, as named by ISTEXCCS from the node ID.
As to the DLUR PUs, we allowed the configuration services exit to define the one
on RAK but we coded a manual definition on RAA as seen in Figure 62 on
page 93. All DLUR/S connections appear to VTAM as switched connections; the
actual physical connectivity is irrelevant.
The IDNUM 1 in this definition matched the IDNUM we defined for the type 2
PU which used RAA as its DLUS. Note that although MAXDATA was coded, it is
irrelevant for an internal DLUR PU. Sessions using DLUR resources flow using
APPN protocols and the maximum PIU size is negotiated by the DLUR node with
its adjacent nodes. However, an external PU has a real link to the DLUR and on
this link MAXDATA (and other parameters such as MAXOUT) can be used to
configure the link.
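Our actual definition is the one in Figure 62; purely as an illustration of the shape of such a definition (the names, IDBLK/IDNUM and LOCADDR below are hypothetical), a DLUR-served PU looks much like any other switched PU:

         VBUILD TYPE=SWNET,MAXGRP=1,MAXNO=1
W5153DLR PU    ADDR=01,IDBLK=05D,IDNUM=00153,MAXDATA=521,              X
               PUTYPE=2,ANS=CONT
W5153L02 LU    LOCADDR=2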
Figure 64 on page 95 shows our configuration for the two DLUR/S connections.
Each logical connection has the DLUS name and the node ID defined. CS/2
needs no more information than this to get the dependent LUs activated. APPN
is used to locate the defined DLU server, and the node ID is used to identify the
appropriate PU from the IDBLK and IDNUM keywords in a switched major node.
If you select Create or Change, you get the Dependent LU Server Parameters
panel, as in Figure 65 on page 96. This demonstrates what you can define for a
DLUR/S connection.
The link name and local PU name are known only to this CS/2 node, and need
not match anything outside the node. The backup DLUS name is optional. If the
DLUR/S pipe to the primary DLU server breaks, CS/2 is able to connect to a
backup DLU server without disrupting existing dependent LU sessions. This is
comparable to the existing support for SSCP takeover of real link stations
defined in an NCP.
Once the DLUR/S connection(s) have been defined, the dependent LUs
themselves are defined on the appropriate logical links. Figure 66 on page 97
shows that we defined two LUs on each DLUS. The LU names specified here are
not the LU names known to VTAM. These (local to CS/2) names are used to
connect the LU definitions to a product that uses the LUA API, PComm being the
prime example. When you define SNA LUs to PComm you define only high-level
parameters such as screen sizes and graphics capability. All the rest comes
from the CS/2 definitions which you refer to using this LU name.
When you select Create or Change you get the LUA API Parameters panel as
shown in Figure 67 on page 98. This panel allows you to enter the LU name
(locally known only), the host link name, and the NAU address which must, of
course, correspond to the VTAM LOCADDR keyword on the LU definition. The
LU model name, an optional parameter, allows you to supply a name to the
ISTEXCSD exit which can define the dependent LUs dynamically. ISTEXCSD and
ISTEXCCS can be used together, to provide the most flexible way to define
resources dynamically. ISTEXCCS can define adjacent link stations (and PUs)
based on XID fields, while ISTEXCSD can define dependent LUs based on NMVT
requests sent on the SSCP-PU session.
After starting CS/2, we first verified the connectivity. Figure 68 shows the
messages issued by RAA when the CS/2 node was connected. Figure 69 on
page 99 shows the corresponding messages on RAK.
17:30:43 IST1488I ACTIVATION FOR RTP CNR00710 AS PASSIVE PARTNER COMPLETED 6
17:30:43 IST1488I ACTIVATION FOR RTP CNR00711 AS ACTIVE PARTNER COMPLETED 7
17:30:45 IST1132I PU05153 IS ACTIVE, TYPE = PU_T2 8
We displayed one of the new RTP pipes from RAA, as shown in Figure 70 on
page 100.
Note that:
• This pipe connects VTAM with remote LU CP05153 14 using COS
SNASVCMG 12, as we expect for a DLUR/S connection.
• The route is via the MPC connection to RAK 13, as we deduced from the
order of activation of the various connections.
• We have not checked our TG characteristics 15. Somewhere on this RTP
pipe there is a connection whose capacity VTAM does not know. We did not
investigate the cause of this because our purpose was to demonstrate
function, not performance.
D NET,ID=CNR00714,E
IST097I DISPLAY ACCEPTED
IST075I NAME = CNR00714 , TYPE = PU_T2.1
IST1392I DISCNTIM = 00010 DEFINED AT PU FOR DISCONNECT
IST486I STATUS= ACTIV--LX-, DESIRED STATE= ACTIV
IST1043I CP NAME = CP05153 , CP NETID = USIBMRA , DYNAMIC LU = YES
IST1589I XNETALS = YES
IST933I LOGMODE=***NA***, COS=#CONNECT
IST1476I TCID X'310BB4CA000000C3' - REMOTE TCID X'000000000000005A'
IST1481I DESTINATION CP USIBMRA.CP05153 - NCE X'80'
IST1587I ORIGIN NCE X'D000000000000000'
IST1477I ALLOWED DATA FLOW RATE = 9000 BITS/SEC
IST1516I INITIAL DATA FLOW RATE = 1000 BITS/SEC
IST1511I MAXIMUM NETWORK LAYER PACKET SIZE = 2224 BYTES
IST1478I NUMBER OF UNACKNOWLEDGED BUFFERS = 0
IST1479I RTP CONNECTION STATE = CONNECTED
IST1480I RTP END TO END ROUTE - PHYSICAL PATH
IST1460I TGN CPNAME TG TYPE HPR
IST1461I 21 USIBMRA.CP05153 APPN RTP
IST231I RTP MAJOR NODE = ISTRTPMN
IST654I I/O TRACE = OFF, BUFFER TRACE = OFF
IST1500I STATE TRACE = OFF
IST355I LOGICAL UNITS:
IST080I RA5153L1 ACT/S
IST314I END
Figure 72. New RTP Connection
D NET,ID=CP05153,E
IST097I DISPLAY ACCEPTED
IST075I NAME = USIBMRA.CP05153 , TYPE = ADJACENT CP
IST486I STATUS= ACT/S----Y, DESIRED STATE= ACTIV
IST1402I SRTIMER = 120 SRCOUNT = 60
IST1447I REGISTRATION TYPE = NO
IST977I MDLTAB=***NA*** ASLTAB=***NA***
IST1333I ADJLIST = ***NA***
IST861I MODETAB=***NA*** USSTAB=***NA*** LOGTAB=***NA***
IST934I DLOGMOD=CPSVCMG USS LANGTAB=***NA***
IST597I CAPABILITY-PLU ENABLED ,SLU ENABLED ,SESSION LIMIT NONE
IST231I CDRSC MAJOR NODE = ISTCDRDY
IST1184I CPNAME = USIBMRA.CP05153 - NETSRVR = ***NA***
IST1044I ALSLIST = ISTAPNPU
IST082I DEVTYPE = INDEPENDENT LU / CDRSC
IST654I I/O TRACE = OFF, BUFFER TRACE = OFF
IST1500I STATE TRACE = OFF
IST171I ACTIVE SESSIONS = 0000000006, SESSION REQUESTS = 0000000000
IST206I SESSIONS:
IST1081I ADJACENT LINK STATION = CNR009D6 17
IST634I NAME STATUS SID SEND RECV VR TP
IST635I RAK ACTIV-S ECBF5D353E4D269F 000A 000D 0 0
IST1081I ADJACENT LINK STATION = CNR009D4 18
IST634I NAME STATUS SID SEND RECV VR TP
IST635I RAK ACTIV/SV-S ECBF5D353D4D269F 0001 0001 0 0
IST635I RAK ACTIV/DL-S ECBF5D35394D269F 0016 0000 0 0
IST1081I ADJACENT LINK STATION = W05153 19
IST634I NAME STATUS SID SEND RECV VR TP
IST635I RAK ACTIV/CP-S ECBF5D35374D269F 0055 0001 0 0
IST635I RAK ACTIV/CP-P F8D3D164311C85EA 0001 0056 0 0
IST1081I ADJACENT LINK STATION = CNR009D5 20
IST634I NAME STATUS SID SEND RECV VR TP
IST635I RAK ACTIV/DL-P F8D3D164311C85EB 0000 0016 0 0
IST1355I PHYSICAL UNITS SUPPORTED BY DLUR USIBMRA.CP05153
IST089I WDDD54 TYPE = PU_T2 21
IST924I -----------------------------------------------------
IST075I NAME = USIBMRA.CP05153 , TYPE = DIRECTORY ENTRY
IST1186I DIRECTORY ENTRY = DYNAMIC NN
IST1184I CPNAME = USIBMRA.CP05153 - NETSRVR = ***NA***
IST1402I SRTIMER = 120 SRCOUNT = 60
IST314I END
Figure 73. Display of CS/2 CP and Its DLUR PU
There were actually four logical links between RAK and CP05153:
• The new RTP pipe CNR009D6 17 carries the new APING LU 6.2 session.
Because this session is the only one that uses the APPN COS #INTER, it has
an RTP pipe to itself.
• The RTP pipe CNR009D4 18 carries one of the two DLUR/S sessions. The
notation ACTIV/DL indicates a DLUR/S session. The APPN COS used by
DLUR/S pipes is SNASVCMG, so an RTP connection carrying a DLUR/S
session is never shared with ordinary LU-LU session traffic.
Note also that CP05153 is identified 21 as the DLU requester that looks after
the type 2 PU WDDD54. This name was also created by ISTEXCCS from the
IDNUM we gave it in our CS/2 setup. Figure 74 illustrates the structure of the
sessions and RTP pipes between RAK and CP05153 at this time.
Having established all the sessions and displayed all the details of the RTP and
DLUR connections, we deactivated the connection between RAA and CP05153
across the token-ring. Figure 75 on page 104 shows the APPN view of what
happened. Figure 76 on page 104 shows the resulting display.
We see that the APPN connection is broken, the CP-CP sessions are terminated
and the PU W05153 is deactivated. But a display of the RTP pipe carrying the
dependent LU session (Figure 77 on page 105) shows that the session is still
alive.
The session has moved to the path via RAK and RA7NCPB.
This shows that, with HPR and DLUR, dependent LU sessions can receive
exactly the same availability benefits from end to end as independent LU
sessions. Without DLUR the HPR resilience only works between RTP-capable
nodes, which means that the portion of the session between the workstation and
the nearest RTP node on the path is not recoverable in case of failure.
In this chapter we have replaced the NCPs with 3746 network node processors,
and we describe how HPR and DLUR may be implemented on the 3746-9X0
platform. Before we discuss how the APPN functions are configured in the 3746,
we show in Figure 78 the test scenario that we used.
Here there is no need for a VR-TG connection as most of the reasons for it are no
longer valid. There are no NCPs, no subarea MLTGs, no SSCP takeover and no
BSC 3270 sessions. The two VTAM NNs we used, RAA and RA39, are connected
by an APPN MPC link. Two 3746 NNPs, NNP61A and NNP41A, have replaced the
NCPs and are connected to the VTAMs in a square as shown. All connections
are FID-2. Strictly speaking, they should be called APPN rather than FID-2
because they can all carry NLPs as well as FID-2 traffic. The connection
between the two 3746s is a token-ring, while the connections between 3746s and
VTAMs are ESCON channels.
The workstation we used, with its dependent LUs, was configured in two ways to
illustrate various methods of connecting such a device; both configurations are
described later in this chapter.
The 3746s we used were both Model 900s with NNPs attached to 3745s. As there
were no NCP-owned resources in use, Model 950 stand-alone machines would
have worked in exactly the same way.
Each of the configurations depicted in this section requires network node, port
and link station definitions on the 3746 NN. A port provides access to the
physical medium (ESCON, SDLC, frame relay, token-ring) enabling
communication to a destination node. A link station represents the adjacent
node on an APPN link and specifies the characteristics of the connection. Apart
from VTAM, all the products we used (3746, 2216, 2210, CS/2) have a similar
pattern when it comes to defining link resources.
First you define a port, which corresponds to an adapter, a protocol and possibly
a local SAP used by that protocol. Next, in a node that will initiate contact, you
define the adjacent link station that will be contacted. Except for non-switched
SDLC connections, the adjacent link station that initiates the contact is not
defined at the node that receives the contact. In general, an adjacent link station
representing a connection between a workstation EN and an NN is defined only
in the EN.
The configuration definition is shown on the basis of the type of attachment used
to connect our equipment to the 3746 Model 900:
• ESCON coupler:
− Connection between a VTAM network node and a 3746 NN using a single
ESCON port
• Token-ring coupler:
− TIC3 connection between two 3746-900 machines
− Connection between a PS/2 (CS/2) node supporting dependent LUs and a
3746 NN
− Connection between a PS/2 (CS/2) network node with DLUR support and
a 3746 NN
The two main components of the CCM application are the configurator and the
management interface. The management interface allows the user to manage
the APPN NN, for example starting/stopping the APPN node, ports and link
stations. The configurator is used to define all resources used by the 3746-9x0.
It enables the user to customize the APPN NN, define APPN and non-APPN
attachments, and configure DLUR functions. In addition, there is support for IP
and associated routing protocols, and for frame relay as a frame handler (FRFH).
The configurator produces a 3746 NN configuration file that is used by the APPN
control point (CP). It also produces files to define all supported interfaces, such
as ESCON, serial and token-ring.
The CCM program runs like any other application. To start CCM double-click on
the CCM icon that is automatically created when the CCM program is installed.
Before opening a configuration, a list of available configuration data files is
presented. After selecting the appropriate configuration, the topology is
displayed on the primary configuration window. This reflects the hardware
configuration of the 3746-9x0; different configurations can be customized to your
particular requirements.
CCM creates a set of output files. The files of a given configuration are
compressed into a single configuration file, which will be referred to as the 3746
NN configuration file. Several configurations may be defined by the user but
only one can be active (in use by the 3746) at a time. Configuration files cannot
be edited by the customer other than by using the CCM facilities. Facilities exist to
export the configuration file to disk, or import the configuration file from disk.
Import and export functions are especially important when creating a
configuration on a stand-alone PC.
Once CCM is active it can be used to configure and manage the 3746 NN, start
and stop the APPN control point, and perform adapter traces.
To import a configuration from a diskette, select File on the CCM primary menu.
From the pull-down menu click on Import a configuration. You will then be
prompted to insert a diskette in the drive. Once you click on Yes the
configuration file will be copied to the hard disk of the network node processor(s)
and the service processor. If the configuration file already exists, you have the
option to overwrite the file already stored.
From the configuration files displayed, select the one you want to activate and
click on Activate. You will then be prompted to confirm your activation request,
as seen in Figure 81 on page 112. This is because some functions (such as CP
The CCM window will show the active APPN configuration that the network node
processor is using and the configuration opened for customization.
The CCM primary screen also displays the installation and configuration status
of each coupler for the opened configuration. Each coupler is represented by an
icon with the coupler address and coupler type (if installed and known) indicated
just below it.
Figure 82 shows a pristine 3746 configuration with nothing defined as yet. All
the couplers are shown as unshaded, with no coupler description, except for
three that CCM knows about even without any predefinition:
• Addresses 2048 and 2112 are the CBSPs (controller bus and service
couplers) that connect the two 3745 CCUs to the 3746.
• Address 2080 is the token-ring coupler that connects the NNP to the service
processor.
After completing the task of configuring all our adapters, our CCM primary
display for this particular 3746 was updated to Figure 83 on page 114. A
transparent icon with a check mark indicates a coupler that has been configured
(ports 2144 and 2176). These couplers now have their correct coupler type
shown beneath the coupler icon. At the same time, coupler 2208 has been
shaded to denote that it is not available to be configured. CCM knows that an
ESCON processor can have only one coupler attached to it, so the second
coupler slot of the pair has been shaded.
The file name you filled in will appear as the opened APPN configuration file.
You can now start to configure the node, DLUR and the couplers as needed.
Figure 85 shows how the NN, DLUR, and focal point definitions have been
specified for our test environment.
The level of HPR support can be specified on the scroll bar labelled HPR
support. Figure 86 shows that the options available range from no HPR support
to Control Flows over RTP. The default value is ANR support. HPR capability
can be turned on or off for each individual port or link station.
The observant reader will see that the panel in Figure 86 differs slightly from
that used in Figure 85 on page 115. We displayed a similar configuration using
the two latest releases of CCM, to show that the panels you see may not be
identical to ours. Figure 86 is the later (F12380). If you select NN characteristics
from this panel you will see Figure 87 on page 117, whereas selecting DLUR
Retry parameters will give you Figure 88 on page 117.
The DLUR retry algorithm on the 3746 is the same as that on the 2216/2210. If
the cause of failure of the DLUR/S pipe is a non-disruptive UNBIND, the 3746
attempts to contact the DLUS or the backup DLUS at intervals determined by the
long retries setting. With other failures, it tries more often; it performs a
sequence of attempts based on the short retries timer and count, this sequence
being repeated at intervals based on the long retries setting.
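As a worked example, suppose (purely for illustration; these are not the 3746
defaults) a short retries timer of 30 seconds, a short retries count of 3 and a
long retries timer of 5 minutes. A disruptive failure of the DLUR/S pipe would
then produce the following pattern of contact attempts:

   t+0:30   attempt 1  ─┐
   t+1:00   attempt 2   ├─ short retry sequence (timer 30s, count 3)
   t+1:30   attempt 3  ─┘
   t+6:30   attempt 4  ─┐
   t+7:00   attempt 5   ├─ sequence repeated after the long retries interval
   t+7:30   attempt 6  ─┘
   ...and so on until the DLUS or backup DLUS answers.

After a non-disruptive UNBIND, by contrast, there is simply one attempt every
5 minutes (the long retries interval).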
You can also define the RTP parameters such as the path switch timers, by
means of the RTP Parameters button. Figure 89 on page 118 shows the choices
you have.
The ESCON configuration has three levels of hierarchy rather than the two
normally associated with APPN links. The physical port can have multiple host
links, which are effectively logical ports connecting the 3746 with different hosts.
Each host link can have multiple link stations for various purposes (for example,
APPN parallel TGs and NCP connections).
We needed only one connection (to RAA), so we defined a single host link with a
single link station. The host link was CHPID 18 and the link station was address
92E on RAA.
The names assigned in the 3746 were APPN2176 for the port, HL2176A for the
host link and ST92E for the link station.
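The resulting three-level hierarchy for this coupler can be pictured as follows:

   Port APPN2176 (ESCON coupler 2176)
    └── Host link HL2176A (CHPID 18 to RAA)
         └── Link station ST92E (address 92E on RAA)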
In the next section we show how port 2176, its host link and the link station have
been defined on the 3746 NN.
Once a coupler has been configured for ESCON, this window will be skipped the
next time the ESCON port, host links, or link stations are customized.
The critical fields that we had to define correctly on the ESCON panels were:
• We needed APPN but not TCP/IP.
• The ESCON connection was running in basic mode (no LPARs, no EMIF).
• The CHPID must correspond to the PATH keyword on the CHPID statement in
the IOCP definition.
• The host link address can safely be left to the ESCON subsystem to define
dynamically, although we chose to enter it manually. Only if you
plan to IPL an NCP over this connection must you specify a real address.
• The ESCD port field defines the port in the ESCON director to which the host
is connected.
Note that, in this example, the ESCD number is the same as the host link
address. This is purely coincidental.
We selected the Add option to complete this panel. Once a host link is defined
(and selected in the Already Configured window) then you can proceed to define
APPN parameters and link stations. Table 2 on page 313 gives more details on
the parameters on this panel.
We took the default for ERP support (required), enabling HPR support over the
channel.
*********************************************************************
* IOCP ESCON GENERATION SUBSET: Multipro
* FOR HOST: SYS6
* COMMUNICATION CONTROLLER: 3745-61A
*********************************************************************
CHPID PATH=(18),SWITCH=E1,TYPE=CNC
CNTLUNIT CUNUMBR=92E, X
UNITADD=((01,16)), X
PATH=(18), X
LINK=E0, X
UNIT=3745
IODEVICE CUNUMBR=92E, * Same as CNTLUNIT CUNUMBR * X
UNIT=3745, X
ADAPTER=TYPE7, X
UNITADD=0F, X
ADDRESS=(92E,2) *xxx must match the CUADDR of a PCCU
* * in NCP , when requested
**********************************************************************
* LOCAL MAJOR NODE FROM RA39 TO 3746 NNP41A *
**********************************************************************
CP900D VBUILD TYPE=LOCAL
**
*-------CHANNEL PU
**
CP9DD41A PU PUTYPE=2,CUADDR=90F,XID=YES, X
CONNTYPE=APPN,MAXBFRU=15, X
CPCP=YES
Note also that the CUADDR keyword in the VTAM PU definition must correspond
with the ADDRESS keyword in the IODEVICE statement in the IOCP. The
CUNUMBR keywords in the IOCP are used to match the CNTLUNIT to the
IODEVICE; they have no relationship with the CUADDR and it is merely a
coincidence that they are the same.
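To summarize the matching rules (the RAA-side PU statement is not shown in this
book, so the CUADDR value sketched here is our assumption based on the link
station address of 92E):

   CNTLUNIT CUNUMBR=92E     ──must match──  IODEVICE CUNUMBR=92E   (within the IOCP)
   IODEVICE ADDRESS=(92E,2) ──must match──  PU CUADDR=92E          (IOCP to VTAM)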
The coupler type needs to be entered only once per coupler. The port
configuration parameters are required for each token-ring (TIC3) port. Default
token-ring station parameters can be used, or they can be individually
configured for each station. Stations must be defined for dial-out. Dynamically
defined link stations automatically use the default station parameters. In cases
where non-default parameters are to be used, a station must be predefined, and
the appropriate parameters set. Multiple connection networks can be defined
per token-ring port; connection networks can span a single or multiple ports.
By clicking on APPN parameters we were presented with the panel in Figure 104
on page 132. This panel is very similar to the ESCON Host Link parameters
panel (Figure 96 on page 125), except that the TG characteristics are more
suitable to a LAN connection and there are more choices in the HPR scroll bar.
Figure 105 on page 133 shows that link-level ERP can be prohibited or allowed
to be negotiated with the adjacent node. You must make sure that the adjacent
node definitions are consistent. We allowed all the parameters to remain at
their default values.
We did not define a connection network because we only ever had a small
number of nodes on our LAN, mostly network nodes. The Connection Network
panel accessible from the Port Configuration panel (Figure 103 on page 131)
prompts you for the fully qualified names of all the virtual nodes you wish to
define on this port.
Note: The remote SAP defaults to 08. You have to make sure that the MAC
address and the SAP correspond with the remote station.
The LAN-attached CS/2 node (PU05170) was not explicitly defined as a token-ring
station in the 3746-61A in the first test, so the 3746 dynamically created a link
station when the CS/2 node connected to it. We just defined a link in CS/2
pointing to the 3746 MAC/SAP address.
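As a sketch of what such a CS/2 link definition looks like in the node
definitions file (NDF), the keywords follow the pattern of the complete NDF
files shown later in this book. The link name here is hypothetical, and the
destination address assumes the 3746-61A TIC3 MAC address (400037462144) used
elsewhere in this book, with the 3746 default local SAP of 08; check your own
values before using anything like this:

DEFINE_LOGICAL_LINK LINK_NAME(LINK61A)
     DLC_NAME(IBMTRNET)
     ADAPTER_NUMBER(0)
     DESTINATION_ADDRESS(X'40003746214408')
     CP_CP_SESSION_SUPPORT(NO)
     SOLICIT_SSCP_SESSION(YES)
     ACTIVATE_AT_STARTUP(YES);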
We defined in the 3746 the network node used for the second test, CM5HPRNN.
As stated previously, the link station can be defined at either partner node. So,
if you are defining the link station at the 3746 NN you do not have to define a link
station at the PC and vice versa. The side that does not have explicit definitions
for its partner cannot initiate the connection itself and must wait for the other
side to do so.
Note: If the link station in CS/2 is dynamically defined, care must be taken not
to specify a value of zero in the Percent of Incoming Calls field in the DLC
Adapters definition.
The panel also allows you to enter DLUR parameters. This information is
applicable only if the remote station contains dependent LUs, and the default
Once the station APPN parameters were set we selected the DLC Parameters to
verify them. We left them alone because the defaults (shown in Figure 106 on
page 134) were acceptable.
V NET,ID=RACAHHC,SCOPE=ALL,ACT
IST097I VARY ACCEPTED
IST093I RACAHHC ACTIVE
IST1086I APPN CONNECTION FOR USIBMRA.RAA IS ACTIVE - TGN = 21
IST093I RACHRAA ACTIVE
IST1488I ACTIVATION FOR RTP CNR00001 AS PASSIVE PARTNER COMPLETED 1
IST1096I CP-CP SESSIONS WITH USIBMRA.RAA ACTIVATED
Figure 109. VTAM-to-VTAM HPDT Connection
A display of ISTRTPMN confirmed that the RTP pipe was indeed for CP-CP
sessions. There was as yet no LU-LU pipe because there were no LU-LU
sessions between the VTAMs. Nor was there an RSTP pipe because there were
no LU-LU sessions using this link (please see Figure 110).
D NET,ID=ISTRTPMN,E
IST097I DISPLAY ACCEPTED
IST075I NAME = ISTRTPMN, TYPE = RTP MAJOR NODE 584
IST486I STATUS= ACTIV, DESIRED STATE= ACTIV
IST1486I RTP NAME STATE DESTINATION CP MNPS TYPE
IST1487I CNR00001 CONNECTED USIBMRA.RAA NO CPCP
IST314I END
Figure 110. RTP Major Node on RA39
D NET,ID=CNR00001,E
IST097I DISPLAY ACCEPTED
IST075I NAME = CNR00001, TYPE = PU_T2.1 587
IST1392I DISCNTIM = 00010 DEFINED AT PU FOR DISCONNECT
IST486I STATUS= ACTIV--LX-, DESIRED STATE= ACTIV
IST1043I CP NAME = RAA, CP NETID = USIBMRA, DYNAMIC LU = YES
IST1589I XNETALS = YES
IST933I LOGMODE=***NA***, COS=CPSVCMG 2
IST1476I TCID X′38273EAD0000009D′ - REMOTE TCID X′310BB5170000009C′
IST1481I DESTINATION CP USIBMRA.RAA - NCE X′ D400000000000000′ 3
IST1587I ORIGIN NCE X′ D400000000000000′ 3
IST1477I ALLOWED DATA FLOW RATE = 1876 KBITS/SEC 4
IST1516I INITIAL DATA FLOW RATE = 3200 KBITS/SEC
IST1511I MAXIMUM NETWORK LAYER PACKET SIZE = 20479 BYTES
IST1478I NUMBER OF UNACKNOWLEDGED BUFFERS = 0
IST1479I RTP CONNECTION STATE = CONNECTED - MNPS = NO
IST1480I RTP END TO END ROUTE - PHYSICAL PATH
IST1460I TGN CPNAME TG TYPE HPR
IST1461I 21 USIBMRA.RAA APPN RTP
IST231I RTP MAJOR NODE = ISTRTPMN
IST654I I/O TRACE = OFF, BUFFER TRACE = OFF
IST1500I STATE TRACE = OFF
IST355I LOGICAL UNITS:
IST080I RAA ACT/S----Y 5
IST314I END
Figure 111. Details of CP-CP RTP Pipe
V NET,ACT,ID=NNP61A
IST097I VARY ACCEPTED
IST1488I ACTIVATION FOR RTP CNR0076F AS ACTIVE PARTNER COMPLETED
IST1096I CP-CP SESSIONS WITH USIBMRA.NNP61A ACTIVATED
Figure 112. Activation of Link to 3746 NN
Once again, the HPR connection CNR0076F was activated before the CP-CP
sessions were established. Both VTAM and the 3746 support Control Flows over
RTP over a channel connection.
A display of the RTP connection showed (Figure 113) very similar characteristics
to those of the VTAM-VTAM connection. Note that the 3746 uses different
formats for TCIDs and NCEs.
D NET,ID=CNR0076F,E
IST097I DISPLAY ACCEPTED
IST075I NAME = CNR0076F, TYPE = PU_T2.1 393
IST1392I DISCNTIM = 00010 DEFINED AT PU FOR DISCONNECT
IST486I STATUS= ACTIV--LX-, DESIRED STATE= ACTIV
IST1043I CP NAME = NNP61A, CP NETID = USIBMRA, DYNAMIC LU = YES
IST1589I XNETALS = YES
IST933I LOGMODE=***NA***, COS=CPSVCMG
IST1476I TCID X′310BB525000000AA′ - REMOTE TCID X′000000000B447050′
IST1481I DESTINATION CP USIBMRA.NNP61A - NCE X′ D0201025′
IST1587I ORIGIN NCE X′ D400000000000000′
IST1477I ALLOWED DATA FLOW RATE = 1876 KBITS/SEC
IST1516I INITIAL DATA FLOW RATE = 3200 KBITS/SEC
IST1511I MAXIMUM NETWORK LAYER PACKET SIZE = 2074 BYTES
IST1478I NUMBER OF UNACKNOWLEDGED BUFFERS = 0
IST1479I RTP CONNECTION STATE = CONNECTED - MNPS = NO
IST1480I RTP END TO END ROUTE - PHYSICAL PATH
IST1460I TGN CPNAME TG TYPE HPR
IST1461I 21 USIBMRA.NNP61A APPN RTP
IST231I RTP MAJOR NODE = ISTRTPMN
IST654I I/O TRACE = OFF, BUFFER TRACE = OFF
IST1500I STATE TRACE = OFF
IST355I LOGICAL UNITS:
IST080I NNP61A ACT/S----Y
IST314I END
Figure 113. RTP Pipe to 3746
Displaying the 3746 control point yielded Figure 114 on page 139.
Note 6 that both CP-CP sessions used the same RTP pipe. On NNP41A, as it
happened, the CP-CP sessions used separate pipes. This depends purely on the
timing of the activation requests from the partner nodes. The APPN COS and the
route taken for this pipe are always the same.
To display the link stations known to the 3746, we clicked on Stations to see
Figure 115 on page 140.
The first connection on the list was not relevant to our tests, but:
• ST92E is the ESCON link station to RAA. RAA is a network node, and is in
contact with the 3746.
• LINE1 is the connection we defined to the CS/2 network node. As this station
has not been contacted, its details (except for the defined MAC address) are
not known.
• P2144AP is the connection we defined to NNP41A. This is also in contact
with NNP61A. The MAC address was predefined but the CP name has been
discovered by the 3746 during XID exchange. We did not define it.
Now we checked the active HPR connections ending in this NNP by selecting
Management, then APPN Specifics followed by HPR Connections. This resulted
in the panel in Figure 116.
We can see two CP-CP session pipes (to each adjacent APPN node that supports
Control Flows) and two Route Setup pipes (because by this time we had
established some LU-LU sessions across those links).
CCM also allows you to verify how many LU 6.2 sessions starting or ending in
this NNP are active. From the CCM main panel select Management and then
Non Intermediate Sessions to view the display as in Figure 117 on page 141.
The only sessions of which the 3746 was aware were the CP-CP sessions to
adjacent nodes. As yet there were no DLUR sessions; the LU-LU sessions that
caused the Route Setup pipes to be established are transparent to the 3746,
which only performs ANR routing for them.
The node PU05170 was not predefined on the 3746, so the link station was
implicitly defined as soon as the XIDs were exchanged. We had defined in CS/2
a link to the 3746, giving the TIC3 MAC and SAP address as the destination.
PU05170 is in fact an end node with a LEN connection to NNP61A.
Figure 119 on page 142 shows what happened to the active HPR connections.
The last RTP connection on the display, with APPN COS SNASVCMG, has been
created to carry the DLUR/S LU 6.2 sessions. Of course, we immediately
displayed the Non Intermediate Sessions panel again (Figure 120) to see these.
We now had four sessions between NNP61A and RAA. The two new ones (the
last two) were the two DLUR/S sessions using mode CPSVRMGR.
D NET,ID=ISTRTPMN,E
IST097I DISPLAY ACCEPTED
IST075I NAME = ISTRTPMN, TYPE = RTP MAJOR NODE 636
IST486I STATUS= ACTIV, DESIRED STATE= ACTIV
IST1486I RTP NAME STATE DESTINATION CP MNPS TYPE
IST1487I CNR0077C CONNECTED USIBMRA.NNP61A NO LULU
IST1487I CNR0076F CONNECTED USIBMRA.NNP61A NO CPCP
IST1487I CNR0076C CONNECTED USIBMRA.NNP61A NO RSTP
Figure 121. RTP Pipes from DLU Server
Remember that RAA was the DLU server for the dependent LU, but not the
application owner. We therefore expected an LU-LU session pipe for the DLUR/S
sessions, but no pipe for the dependent LU session itself.
D NET,ID=CNR0077C,E
IST097I DISPLAY ACCEPTED
IST075I NAME = CNR0077C, TYPE = PU_T2.1 645
IST1392I DISCNTIM = 00010 DEFINED AT PU FOR DISCONNECT
IST486I STATUS= ACTIV--LX-, DESIRED STATE= ACTIV
IST1043I CP NAME = NNP61A, CP NETID = USIBMRA, DYNAMIC LU = YES
IST1589I XNETALS = YES
IST933I LOGMODE=***NA***, COS=SNASVCMG 7
IST1476I TCID X′310BB532000000A3′ - REMOTE TCID X′000000000B46BBA8′
IST1481I DESTINATION CP USIBMRA.NNP61A - NCE X′ D0201025′
IST1587I ORIGIN NCE X′ D000000000000000′
IST1477I ALLOWED DATA FLOW RATE = 2800 KBITS/SEC
IST1516I INITIAL DATA FLOW RATE = 3200 KBITS/SEC
IST1511I MAXIMUM NETWORK LAYER PACKET SIZE = 2074 BYTES
IST1478I NUMBER OF UNACKNOWLEDGED BUFFERS = 0
IST1479I RTP CONNECTION STATE = CONNECTED - MNPS = NO
IST1480I RTP END TO END ROUTE - PHYSICAL PATH
IST1460I TGN CPNAME TG TYPE HPR
IST1461I 21 USIBMRA.NNP61A APPN RTP 8
IST231I RTP MAJOR NODE = ISTRTPMN
IST654I I/O TRACE = OFF, BUFFER TRACE = OFF
IST1500I STATE TRACE = OFF
IST355I LOGICAL UNITS:
IST080I NNP61A ACT/S----Y
IST314I END
Figure 122. RTP Pipe for DLUR/S
The APPN COS was SNASVCMG 7 and the partner node was NNP61A 8. To
prove that this was the DLUR/S HPR pipe, we displayed the CP name of the
partner node as in Figure 123 on page 144.
There were four sessions between RAA and NNP61A. Two of them, using the
RTP pipe CNR0077C 9, were labelled ACTIV/DL meaning that they were
DLUR/S sessions. The other two, using RTP pipe CNR0076F 10, were the
CP-CP sessions. The message IST1355I 11 shows that this node is a served
DLUR that is acting on behalf of the type 2 PU W05170.
A display of the PU W05170 (Figure 124 on page 145) showed that a DLUR PU is
seen by VTAM almost as any other peripheral type 2 link station. The only
difference is the presence of the message IST1354I 12 showing that the PU is
on a served DLUR node.
The display of the LU, in fact, shows absolutely nothing that indicates it is on a
DLUR node.
When we displayed the RTP connections from RA39 we saw Figure 125.
D NET,ID=ISTRTPMN,E
IST097I DISPLAY ACCEPTED
IST075I NAME = ISTRTPMN, TYPE = RTP MAJOR NODE 360
IST486I STATUS= ACTIV, DESIRED STATE= ACTIV
IST1486I RTP NAME STATE DESTINATION CP MNPS TYPE
IST1487I CNR0000F CONNECTED USIBMRA.NNP61A NO LULU 12
IST1487I CNR0000E CONNECTED USIBMRA.NNP41A NO RSTP
IST1487I CNR0000D CONNECTED USIBMRA.NNP41A NO CPCP
IST1487I CNR0000C CONNECTED USIBMRA.NNP41A NO CPCP
IST1487I CNR00003 CONNECTED USIBMRA.RAK NO LULU
IST1487I CNR00002 CONNECTED USIBMRA.RAA NO RSTP
IST1487I CNR00001 CONNECTED USIBMRA.RAA NO CPCP
Figure 125. RTP Connection to DLUR LU
We saw that a new LU-LU session pipe, CNR0000F, has been created 12.
There is, of course, no Route Setup pipe to NNP61A because NNP61A is not
adjacent to RA39. The RSTP pipe CNR00002 was probably used to set up
CNR0000F. A detailed display of CNR0000F gave us Figure 126 on page 146.
This pipe used APPN COS #CONNECT 13 and contained a session to the
dependent LU W0517002 16. However, the RTP partner was NNP61A 14, the
DLUR node. The path taken by the pipe (and therefore the session) 15 is
shown graphically in Figure 127 on page 147.
The session path was calculated by RA39 and took the optimum route through
the APPN network. The fact that it happened to pass through its DLU server
(RAA) was purely coincidental.
Next, we displayed the dependent LU from RA39. Figure 128 on page 148
showed that it used RTP pipe CNR0000F.
Now we checked the 3746 HPR display again to see how the new pipe showed
up, as in Figure 129.
The last line of the display shows the RTP connection carrying the CS/2 session
to RA39′s NetView, using APPN COS #CONNECT. It is interesting to note that
this RTP pipe originates in the TIC (Port 2112/2144), rather than in the NNP as do
the control sessions.
V NET,INACT,ID=RAAAHHD
IST097I VARY ACCEPTED
IST1196I APPN CONNECTION FOR USIBMRA.RA39 INACTIVE - TGN = 21
IST1494I PATH SWITCH STARTED FOR RTP CNR00761 18
IST1133I RAAHRAC IS NOW INACTIVE, TYPE = PU_T2
IST1133I RAAAHHD IS NOW INACTIVE, TYPE = LCL SNA MAJ NODE
IST1488I INACTIVATION FOR RTP CNR00763 AS PASSIVE PARTNER COMPLETED
IST1416I ID = CNR00763 FAILED - RECOVERY IN PROGRESS 17
IST1136I VARY INACT CNR00763 SCHEDULED - UNRECOVERABLE ERROR
... ... ... ... ...
IST1097I CP-CP SESSION WITH USIBMRA.RA39 TERMINATED
Figure 130. Deactivate MPC Connection
The Route Setup pipe to RA39, CNR00763, was deactivated immediately 17. A
path switch was initiated for the CP-CP pipe CNR00761 18, but failed after the
path switch timer expired because there were no alternative routes (let alone
Control Flows-capable routes) to RA39. When the pipe was deactivated the
CP-CP sessions also failed.
The DLUR/S pipe, CNR0077C, was not switched because it did not cross the
failing connection. No messages were issued regarding the dependent LU
session, because RAA (as an ANR node on the session path) was not aware that
the session passed through it.
RA39, as the RTP endpoint, detected the failure. In fact it was the first RTP
endpoint to detect the failure because its adjacent link failed. Figure 131 shows
what happened on RA39.
RA39 initiated a path switch, and the dependent LU session was moved to a new
route. Figure 132 on page 150 shows the new route, now passing through both
3746s. The session no longer even passes through the VTAM that owns the SLU.
However, it must pass through its DLUR (NNP61A), because the DLUR provides
the subarea boundary function which VTAM or NCP would have provided without
DLUR. To obtain the maximum benefit from APPN routing you must go one
stage further and put the DLUR function in the workstation itself.
A display of the switched pipe from RA39 confirms the new route, as seen in
Figure 133 on page 151.
We took some traces on the 3746 and the CS/2 node during these tests, which
showed the subarea session setup protocols flowing natively between the CS/2
and the 3746, but being encapsulated in LU 6.2 sessions between the 3746 DLUR
and VTAM DLUS. Appendix B, “A Complete Scenario” on page 247 shows
extracts from these traces.
The configuration of the VTAMs and the 3746s was identical to the previous
configuration; only the workstation was changed. 7.5, “HPR on Communications
Server/2” on page 90 shows how to set up DLUR on a CS/2 node.
D NET,ID=CNR00790,E
IST097I DISPLAY ACCEPTED
IST075I NAME = CNR00790, TYPE = PU_T2.1 454
IST1392I DISCNTIM = 00010 DEFINED AT PU FOR DISCONNECT
IST486I STATUS= ACTIV--LX-, DESIRED STATE= ACTIV
IST1043I CP NAME = CM5HPRNN, CP NETID = USIBMRA, DYNAMIC LU = YES
IST1589I XNETALS = YES
IST933I LOGMODE=***NA***, COS=SNASVCMG
IST1476I TCID X′310BB546000000CE′ - REMOTE TCID X′000000000000001A′
IST1481I DESTINATION CP USIBMRA.CM5HPRNN - NCE X′ 8 0 ′
IST1587I ORIGIN NCE X′ D000000000000000′
IST1477I ALLOWED DATA FLOW RATE = 1597 KBITS/SEC
IST1516I INITIAL DATA FLOW RATE = 1597 KBITS/SEC
IST1511I MAXIMUM NETWORK LAYER PACKET SIZE = 2058 BYTES
IST1478I NUMBER OF UNACKNOWLEDGED BUFFERS = 0
IST1479I RTP CONNECTION STATE = CONNECTED - MNPS = NO
IST1480I RTP END TO END ROUTE - PHYSICAL PATH
IST1460I TGN CPNAME TG TYPE HPR
IST1461I 21 USIBMRA.NNP61A APPN RTP
IST1461I 25 USIBMRA.CM5HPRNN APPN RTP
IST231I RTP MAJOR NODE = ISTRTPMN
IST654I I/O TRACE = OFF, BUFFER TRACE = OFF
IST1500I STATE TRACE = OFF
IST355I LOGICAL UNITS:
IST080I CM5HPRNN ACT/S----Y
Figure 134. DLUR/S RTP Pipe from CS/2
The path used for this pipe was as shown in Figure 135.
A display of CM5HPRNN from RAA shows that it is a DLUR node serving one
type 2 PU WAA61A 1. The only sessions between CM5HPRNN and RAA are
the two DLUR/S (ACTIV/DL) sessions on RTP connection CNR00790 2 (please
see Figure 136 on page 153).
A display of the PU WAA61A (Figure 137) confirms that it is on a DLUR node 3.
D NET,ID=WAA61A,E
IST097I DISPLAY ACCEPTED
IST075I NAME = WAA61A, TYPE = PU_T2 653
IST486I STATUS= ACTIV---X-, DESIRED STATE= ACTIV
IST1043I CP NAME = ***NA***, CP NETID = USIBMRA, DYNAMIC LU = YES
IST1589I XNETALS = YES
IST1354I DLUR NAME = CM5HPRNN MAJNODE = ISTDSWMN 3
IST136I SWITCHED SNA MAJOR NODE = ISTDSWMN
IST654I I/O TRACE = OFF, BUFFER TRACE = OFF
IST1500I STATE TRACE = OFF
IST355I LOGICAL UNITS:
IST080I WAA61A02 ACTIV---X- WAA61A03 ACTIV---X- WAA61A04 ACTIV---X-
IST080I WAA61A05 ACTIV---X- WAA61A06 ACTIV---X- WAA61A07 ACTIV---X-
IST080I WAA61A08 ACTIV---X- WAA61A09 ACTIV---X- WAA61A0A ACTIV---X-
IST080I WAA61A0B ACTIV---X- WAA61A0C ACTIV---X- WAA61A0D ACTIV---X-
IST080I WAA61A0E ACTIV---X- WAA61A0F ACTIV---X- WAA61A10 ACTIV---X-
IST080I WAA61A11 ACTIV---X-
IST314I END
Figure 137. DLUR PU on CS/2 Node
We can see now that the predefined LINE1 station has been connected to the
network node CM5HPRNN. Figure 139 then shows the active RTP connections.
You can see the two CP-CP pipes (each set up
independently by one of the two NNs), and the single Route Setup pipe used to
establish the DLUR/S sessions. Those sessions themselves, as well as their
RTP pipe, are not known to NNP61A. Figure 140 shows details of the sessions
ending in NNP61A.
Next, we logged on to TSO on RA39 from one of the dependent LUs on CM5HPRNN.
We saw no new RTP pipes on RAA (the DLUS) but one new one, CNR00021, was
established on RA39. Figure 141 shows what we saw when we displayed it.
D NET,ID=CNR00021,E
IST097I DISPLAY ACCEPTED
IST075I NAME = CNR00021, TYPE = PU_T2.1 862
IST1392I DISCNTIM = 00010 DEFINED AT PU FOR DISCONNECT
IST486I STATUS= ACTIV--LX-, DESIRED STATE= ACTIV
IST1043I CP NAME = CM5HPRNN, CP NETID = USIBMRA, DYNAMIC LU = YES
IST1589I XNETALS = YES
IST933I LOGMODE=***NA***, COS=#CONNECT
IST1476I TCID X′38273ECD000000AC′ - REMOTE TCID X′000000000000001B′
IST1481I DESTINATION CP USIBMRA.CM5HPRNN - NCE X′ 8 0 ′
IST1587I ORIGIN NCE X′ D000000000000000′
IST1477I ALLOWED DATA FLOW RATE = 1597 KBITS/SEC
IST1516I INITIAL DATA FLOW RATE = 1597 KBITS/SEC
IST1511I MAXIMUM NETWORK LAYER PACKET SIZE = 2058 BYTES
IST1478I NUMBER OF UNACKNOWLEDGED BUFFERS = 0
IST1479I RTP CONNECTION STATE = CONNECTED - MNPS = NO
IST1480I RTP END TO END ROUTE - PHYSICAL PATH
IST1460I TGN CPNAME TG TYPE HPR
IST1461I 21 USIBMRA.NNP41A APPN RTP 4
IST1461I 23 USIBMRA.CM5HPRNN APPN RTP
IST231I RTP MAJOR NODE = ISTRTPMN
IST654I I/O TRACE = OFF, BUFFER TRACE = OFF
IST1500I STATE TRACE = OFF
IST355I LOGICAL UNITS:
IST080I WAA61A02 ACT/S----Y
IST314I END
Figure 141. RTP Pipe to CS/2 Dependent LU
You can see that this particular pipe went all the way from RA39 to CM5HPRNN,
via NNP41A 4. Figure 142 on page 156 shows the route the session took. It
went nowhere near the DLUS VTAM, RAA.
A display of the LU, Figure 143 on page 157, from RA39 shows it simply as an
APPN LU (a CDRSC 5 with owner RA39 6) using the link station CNR00021
7 (an HPR connection).
The session path has moved to the route shown in Figure 145 on page 159. In
this case there were no path switch messages on VTAM as the CS/2 node
detected the failure (an adjacent link deactivation) long before RA39 could do so
(by waiting for a timer to expire). Therefore, CM5HPRNN started and completed
the path switch before VTAM′s timer expired. VTAM only issues a path switch
message if it initiates the switch itself.
After that, we displayed the dependent LU from RA39. There was no change
from the display shown in Figure 143 on page 157.
For this test, we performed a comprehensive audit (traces and displays) of what
went on in the CS/2 node. The results of this exercise are documented in
Appendix B, “A Complete Scenario” on page 247.
In this chapter we extend our network to a remote site in which a 2216 router
has been installed. We used token-ring connections to represent the wide area
network between the 3746s and the 2216. The actual DLC used has only a
marginal effect on the HPR and DLUR customization and operation in such a
configuration.
Once again, the VTAM nodes RA39 and RAA are connected by an MPC link. As
in the previous chapter, each VTAM is connected to one 3746 NNP, the NNPs
being joined together via a token-ring. This time the 2216 has a connection to
each 3746. The 2216 is configured as a network node (there is no choice in the
matter), and it has a separate downstream token-ring to which our workstation is
attached. We ran two distinct tests, as in the 3746-only case:
1. First, we defined DLUR support in the CS/2 workstation and used the 2216
   and the 3746 purely for ANR switching.
2. Second, we attached the workstation as a LEN node with dependent LUs and
   let the 2216 provide the DLUR function.
The 2216 also has a GUI method of configuration. Using this, the configuration
files are created on a PC without requiring online access to the 2216. When
complete, the files are downloaded to the 2216 using either SNMP over the IP
network, or directly through the service port.
When the ASCII terminal is connected to the 2216, the command prompt
displayed is an asterisk (*). To enter configuration mode, type talk 6 at the
prompt, as in Figure 147. The basic 2216 parameters such as the token-ring
MAC addresses had already been configured, so we invoked the APPN
configuration function by entering protocol APPN at the configuration prompt
( Config>), as shown in Figure 147.
*talk 6
Config>protocol appn
APPN user configuration
APPN config>
Figure 147. Invoking APPN Configuration on 2216
At the APPN Config> prompt, you can enter the configuration commands as shown
below. To return to the base prompt, type exit.
The 2216′s configuration has basically the same structure as that of the 3746.
First you define the node, then each port, then the link stations on the ports.
Once again, only those link stations where the 2216 will make the connection
need to be defined.
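Putting these commands together, a complete configuration session on the 2216
has the following overall shape (set node, add port and add link each start a
prompted dialog; the node definition dialog is shown in full in Figure 148):

*talk 6
Config>protocol appn
APPN config>set node
APPN config>add port
APPN config>add link
APPN config>list all
APPN config>exit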
Config>protocol appn
APPN config>set node
Enable APPN (Y)es (N)o (Y)? y
Network ID (Max 8 characters) ( )? USIBMRA 1
Control point name (Max 8 characters) ( )? NN2216A 1
Enable branch extender (Y)es (N)o (N)? 2
Route addition resistance(0-255) (128)?
XID ID number for subarea connection (5 hex digits) (00000)? 3
Use enhanced #BATCH COS (Y)es (N)o (Y)? 4
Use enhanced #BATCHSC COS (Y)es (N)o (Y)?
Use enhanced #INTER COS (Y)es (N)o (Y)?
Use enhanced #INTERSC COS (Y)es (N)o (Y)?
Write this record? (Y)? y
Figure 148. 2216 Node Definition
At the end of the node configuration we entered y to the last question so the
2216 would store the details for later use. Next, we entered the port
configurations. Figure 149 on page 164 shows the definitions for the token-ring
port which we called trn1. This was the port we used to connect the 2216 to both
3746s. The other port (trn0) was used for the separate LAN on which the
workstation lived. Enter add port at the APPN Configuration prompt to define a
port.
In this figure:
• We specified 5 that this port could accept any incoming connection
requests.
• HPR is enabled on the port by default 6.
• We needed to edit the LLC characteristics 7 in order to change the remote
SAP 8. You may remember that the 3746 local SAPs default to 8, while the
2216′s remote SAP defaults to 4. The remote SAP, of course, applies to the
station rather than the port, but (as with the 3746) the station defaults may be
set at the port level.
On the port trn1 we defined the two link stations connecting the 2216 to the
3746s. You do this by typing add link at the APPN Configuration prompt. The
2216 asks you on which port you want to define this station. Figure 150 on
page 165 shows the definition of the connection to NNP61A.
We entered a very similar definition for the other 3746, NNP41A. We then
defined a second port, trn0, for the downstream token-ring connection to
CM5HPRNN. We did not define a link station on port trn0 because the CS/2 node
would be making the connection.
Once we had entered the whole APPN configuration, we were able to display a
summary of it by typing list all at the APPN Configuration prompt, as shown in
Figure 151 on page 166.
MODE:
MODE NAME COS NAME
---------------------
PORT:
INTF PORT LINK HPR SERVICE PORT
NUMBER NAME TYPE ENABLED ANY ENABLED
------------------------------------------------------
0 TRN0 IBMTRNET YES YES YES
1 TRN1 IBMTRNET YES YES YES
STATION:
STATION PORT DESTINATION HPR ALLOW ADJ NODE
NAME NAME ADDRESS ENABLED CP-CP TYPE
------------------------------------------------------------
T61A TRN1 400037462144 YES YES 0
T41A TRN1 400437462176 YES YES 0
LU NAME:
LU NAME STATION NAME CP NAME
------------------------------------------------------------
Figure 151. Listing of APPN/HPR Configuration
There were several other options we could have defined: DLUR, additional
modes and COS entries, connection networks, and LUs. If the 2216 is serving a
LEN node, independent LUs on that node need to be defined to the 2216 so that it
can respond to search requests for them.
DEFINE_DEFAULTS IMPLICIT_INBOUND_PLU_SUPPORT(YES)
DEFAULT_MODE_NAME(BLANK)
MAX_MC_LL_SEND_SIZE(32767)
DIRECTORY_FOR_INBOUND_ATTACHES(*)
DEFAULT_TP_OPERATION(NONQUEUED_AM_STARTED)
DEFAULT_TP_PROGRAM_TYPE(BACKGROUND)
DEFAULT_TP_CONV_SECURITY_RQD(NO)
MAX_HELD_ALERTS(10)
DEFAULT_ROUTING_PREFERENCE(NATIVE_FIRST)
RETRY_COUNT(6)
ALIVE_TIMER(60)
PATH_SWITCH_TIMER_LOW(480)
PATH_SWITCH_TIMER_MEDIUM(240)
PATH_SWITCH_TIMER_HIGH(120)
PATH_SWITCH_TIMER_NET(60)
ROUTE_SETUP_TIMEOUT(10)
MOBILE(NO)
TN3270E_PORT(23)
TN3270E_KEEPALIVE_TYPE(NONE)
TN3270E_AUTOMATIC_LOGOFF(0)
DISABLE_DLUR_REGISTRATION(NO);
START_ATTACH_MANAGER;
SET_DISCOVERY_SERVER ADAPTER_NUMBER(0)
GROUP_NAMES(IROUTSNA)
ROUTING_CAPABILITIES(NN);
In this file we can see a real logical link 1 to the 2216, specifying the MAC and
SAP address 2 of the 2216′s adapter known as trn0. There is also a DLUR
logical link 3 specifying the CP name of the desired DLUS 4. Defined on the
DLUR logical link are three dependent LUs 5.
We issued list link to see the active APPN links, as shown in Figure 154.
The 2216, as expected, had direct connections to both 3746s. To display the
active CP-CP sessions, we issued list cp-cp_sessions as seen in Figure 155.
Both the 2216 and the 3746 support Control Flows over RTP, and each adjacent
pair of nodes has initiated CP-CP session activation from each side concurrently.
Therefore, we have four RTP pipes for four sessions.
Next, we verified the same information from one of the 3746s, NNP41A.
Figure 158 shows the list of link stations known to NNP41A.
We can see that the connection to NNP61A is not active at this time, but the
connections to VTAM RA39 and to the 2216 are active. The 2216 connection has
been defined dynamically by the 3746 (the 2216 initiated it and it was not
predefined in the 3746), so it has a 3746-assigned name of @@1.
Using the CS/2 Subsystem Management facility, we displayed the active logical
links from CM5HPRNN as shown in Figure 159.
This shows that the CS/2 node has an active real link to the 2216 (TG 21) and an
active DLUR/S pipe to RAA.
A display of the LU 6.2 sessions known to CM5HPRNN shows (Figure 160) that it
has a pair of CP-CP sessions with NN2216A and a pair of DLUR/S sessions with
RAA.
Another display, this time of the RTP connections (Figure 161 on page 172)
shows two independent CP-CP pipes to the adjacent 2216. This is usual for
adjacent NNs. The DLUR/S sessions are carried on an RTP pipe with APPN COS
SNASVCMG. The long-lived pipe (TCID 12F) was used to set up the DLUR/S
sessions across the link to NN2216A.
This shows two HPR connections 6 and 7 to CM5HPRNN. One is the DLUR/S
pipe and the other is the pipe carrying the dependent LU session, which has a
different APPN COS. Note also the two Route Setup pipes connecting RAA with
NNP61A 8. This is unusual because there is only one physical link between
them, but it could happen if each end needs to establish an LU-LU session at the
same time.
To see which HPR pipe to CM5HPRNN was which, we displayed CNR007FF from
RAA (see Figure 163 on page 173).
This pipe represents the dependent LU session, and takes the following path:
┌──────────────────────┐ ┌───────┐ ┌──────┐ ┌─────┐
│(WAA61A04) CM5HPRNN ├───TG21────┤NN2216A├──TG21──┤NNP61A├─TG21─┤ RAA │
└──────────────────────┘ └───────┘ └──────┘ └─────┘
Displaying the same connection from the CS/2 node shows the same path, as in
Figure 164 on page 174.
The other HPR pipe between RAA and CM5HPRNN is confirmed as the DLUR/S
pipe by Figure 165 on page 175. It has APPN COS SNASVCMG and carries one
or more sessions to the CS/2 CP itself, over the same route as the other pipe.
At this time we also took displays in the 2216 to see how the picture had
changed. Figure 167 on page 177 showed the connections that were created
after the CS/2 node was linked into the network.
The only new logical link was to the CS/2 node CM5HPRNN. Using this link were
the CP-CP 9 and long-lived RTP pipes 10, and the CP-CP sessions 11. The
2216 was not aware of the dependent LU sessions or of the RTP connections that
they were using.
A display of the newly switched LU-LU session pipe, CNR007FF, from RAA shows
the new route (Figure 171 on page 179).
The new route for both the DLUR/S sessions and the dependent LU session is
shown in Figure 172 on page 180.
The HPR Connection Details display from CS/2 (Figure 173 on page 181)
confirms this route.
Finally, we displayed the RTP connections once again from the 2216 (Figure 174).
As the 2216 was an ANR node for the switched sessions, it recorded no
information about those sessions or their RTP pipes. The only difference now is
that the CP-CP and long-lived pipes to NNP61A have disappeared.
See Figure 176 on page 183 for details of how we enabled DLUR for our 2216.
1 is the minimum you need to enter. We also entered RAA as a primary
2216-wide DLU server 2 and RA39 as a backup server 3.
The Perform retries settings determine how the 2216 attempts to recover a failed
DLUR/S pipe. A full description of the algorithm is in Multiprotocol Access
Services Protocol Configuration and Monitoring Reference. The action taken by
the 2216 differs depending on the cause of the failure: a nondisruptive UNBIND
from the DLUS (less frequent attempts at recovery) or any other cause (more
frequent attempts at recovery).
Because we only used one downstream node that required DLUR services from
the 2216, these definitions were enough for our purposes. Figure 177 shows how
you might override the DLUR/S configuration parameters for a particular link.
Here we take the station whose MAC address is 400052005115 and assign it a
primary DLUS of RA39 6 and a backup DLUS of RAA 7. To get these
questions asked, you need to respond y to the question 5 about editing the
Apart from recustomizing the 2216, we configured our PC as a LEN node with CP
name PU05170 and four dependent LUs on a single logical link to the 2216. In
fact, CS/2 cannot be configured specifically as a LEN node; what you have to do
is configure it as an APPN node but turn off APPN support on the host (2216 in
this case) link.
At this stage the 2216 has CP-CP sessions with the 3746s 8. There are no ISR
sessions (meaning sessions passing through the 2216) 9. The only RTP
connections active are the CP-CP pipes 10, because no LU-LU sessions have
been started yet. The DLUR/S pipe will not be set up until some dependent LUs
arrive on the scene. The only LU 6.2 sessions are the four CP-CP sessions 12,
and there is not yet a connection to CS/2 11.
We then started the CS/2 node and performed some displays using the
Subsystem Management Facility.
Displaying the details of this link (Figure 180 on page 186) confirms the
MAC/SAP address of the 2216′s downstream port and the fact that CP-CP
sessions are not supported.
As soon as the link from CS/2 to 2216 was started, we displayed the 2216
connection status as seen in Figure 181 on page 187.
This shows that the CS/2 connection has been dynamically defined 1; we
defined this only on the CS/2 node. HPR is not available because APPN is not
available. Indeed, no CP-CP sessions are shown 2 between the 2216 and the
CS/2.
The SNASVCMG RTP connection 3 has been established to RAA, and carries
the two DLUR/S sessions 4. As soon as the CS/2 node contacted the 2216, the
2216 knew (from the absence of the ACTPU not supported bit in the XID) that
DLUR services were required. Therefore, the 2216 looked up its default DLUS
name (USIBMRA.RAA), set up the RTP pipe, established the two DLUR/S
sessions, and forwarded the XID information from CS/2 in a REQACTPU request
unit. VTAM RAA did the rest.
Note also the presence of the Route Setup pipe to NNP61A; this indicates that
the DLUR/S RTP pipe and its sessions are routed via NNP61A.
The console log on RAA (Figure 182 on page 188) shows what happened on the
DLUS at this time.
The RTP pipe 5 to carry the DLUR/S sessions was established first. When the
REQACTPU with the XID information hit RAA, it invoked the configuration
services exit ISTEXCCS to define the type 2 PU and its LUs, since all DLUR/S
flows use switched procedures. Because the dynamically defined PU was the
first one on this VTAM node, the major node ISTDSWMN was created for it 6.
This feature (creation of ISTDSWMN on demand) is new in VTAM V4R4, and also
allows ISTEXCCS to specify an alternative major node name for dynamically
created PUs. Previous releases of VTAM created ISTDSWMN at startup time and
placed all dynamic PUs in the one major node.
A display of ISTRTPMN, the RTP major node, from RAA confirmed that CNR00801
linked RAA with NN2216A. We then took a detailed display as in Figure 183.
DISPLAY NET,ID=CNR00801,SCOPE=ALL
IST097I DISPLAY ACCEPTED
IST075I NAME = CNR00801 , TYPE = PU_T2.1
IST1392I DISCNTIM = 00010 DEFINED AT PU FOR DISCONNECT
IST486I STATUS= ACTIV--LX-, DESIRED STATE= ACTIV
IST1043I CP NAME = NN2216A , CP NETID = USIBMRA , DYNAMIC LU = YES
IST1589I XNETALS = YES
IST933I LOGMODE=***NA***, COS=SNASVCMG
IST1476I TCID X′310BB5B7000000C3′ - REMOTE TCID X′0000000001E2EBC8′
IST1481I DESTINATION CP USIBMRA.NN2216A - NCE X′8280 ′
IST1587I ORIGIN NCE X′ D000000000000000′
IST1477I ALLOWED DATA FLOW RATE = 1597 KBITS/SEC
IST1516I INITIAL DATA FLOW RATE = 1597 KBITS/SEC
IST1511I MAXIMUM NETWORK LAYER PACKET SIZE = 2048 BYTES
IST1478I NUMBER OF UNACKNOWLEDGED BUFFERS = 0
IST1479I RTP CONNECTION STATE = CONNECTED - MNPS = NO
IST1480I RTP END TO END ROUTE - PHYSICAL PATH
IST1460I TGN CPNAME TG TYPE HPR
IST1461I 21 USIBMRA.NNP61A APPN RTP
IST1461I 21 USIBMRA.NN2216A APPN RTP
IST231I RTP MAJOR NODE = ISTRTPMN
IST654I I/O TRACE = OFF, BUFFER TRACE = OFF
IST1500I STATE TRACE = OFF
IST355I LOGICAL UNITS:
IST080I NN2216A ACT/S----Y
IST314I END
Figure 183. New DLUR/S Pipe from RAA
A display of W05170 itself from RAA showed (Figure 185 on page 190) that it was
a switched PU supported by DLUR NN2216A 7.
Both RAA and RA39 immediately set up RTP connections to NN2216A for the new
sessions. These were CNR00802 on RAA and CNR0005C on RA39. Displays on RAA and
RA39 showed the paths the sessions were taking.
We displayed the dependent LUs from both RAA and RA39. Figure 187 shows
RAA′s view of the LU in session with its own NetView.
DISPLAY NET,ID=W0517003,SCOPE=ALL
IST097I DISPLAY ACCEPTED
IST075I NAME = USIBMRA.W0517003 , TYPE = LOGICAL UNIT
IST486I STATUS= ACT/S---X-, DESIRED STATE= ACTIV
IST1447I REGISTRATION TYPE = NETSRVR
IST977I MDLTAB=***NA*** ASLTAB=***NA***
IST861I MODETAB=ISTINCLM USSTAB=US327X LOGTAB=***NA***
IST934I DLOGMOD=D4C32XX3 USS LANGTAB=***NA***
IST597I CAPABILITY-PLU INHIBITED,SLU ENABLED ,SESSION LIMIT 00000001
IST136I SWITCHED SNA MAJOR NODE = ISTDSWMN
IST135I PHYSICAL UNIT = W05170
IST1131I DEVICE = LU
IST654I I/O TRACE = OFF, BUFFER TRACE = OFF
IST1500I STATE TRACE = OFF
IST228I ENCRYPTION = NONE
IST1563I CKEYNAME = LUMOD05D CKEY = PRIMARY CERTIFY = NO
IST1552I MAC = NONE MACTYPE = NONE
IST171I ACTIVE SESSIONS = 0000000001, SESSION REQUESTS = 0000000000
IST206I SESSIONS:
IST634I NAME STATUS SID SEND RECV VR TP NETID
IST635I RAAAN008 ACTIV-P F7FF6164529F95DE 0004 0005 USIBMRA
IST314I END
Figure 187. Owned LU in Session with Application
Figure 188 on page 192 shows the corresponding LU displayed from RA39.
You can see that two RTP pipes with APPN COS #CONNECT have been set up
9. Each links this DLUR node with one of the session partners of the
dependent LUs.
This is not the same terminology as used in APPN, where APPC means LU 6.2
and ISR means intermediate session routing (that is, routing that is not
ANR). The sessions listed on the various RTP pipes are:
• One APPC to NNP61A (CP-CP)
• One APPC to NNP61A (CP-CP)
• One APPC to NNP41A (CP-CP)
• One APPC to NNP41A (CP-CP)
• Two APPC to RAA (DLUR/S pair)
• One ISR to RA39 (W0517002 to TSO)
• One ISR to RAA (W0517003 to NetView)
There were no such messages for the LU-LU RTP connection, CNR00802.
Presumably NN2216A initiated the path switch, in which case VTAM would issue
no message. We displayed CNR00802 (Figure 191) to confirm that it had indeed
been switched.
DISPLAY NET,ID=CNR00802,SCOPE=ALL
IST097I DISPLAY ACCEPTED
IST075I NAME = CNR00802 , TYPE = PU_T2.1
IST1392I DISCNTIM = 00010 DEFINED AT PU FOR DISCONNECT
IST486I STATUS= ACTIV--LX-, DESIRED STATE= ACTIV
IST1043I CP NAME = NN2216A , CP NETID = USIBMRA , DYNAMIC LU = YES
IST1589I XNETALS = YES
IST933I LOGMODE=***NA***, COS=#CONNECT
IST1476I TCID X′310BB5B8000000D0′ - REMOTE TCID X′0000000001E57108′
IST1481I DESTINATION CP USIBMRA.NN2216A - NCE X′8280 ′
IST1587I ORIGIN NCE X′ D000000000000000′
IST1477I ALLOWED DATA FLOW RATE = 1597 KBITS/SEC
IST1516I INITIAL DATA FLOW RATE = 1597 KBITS/SEC
IST1511I MAXIMUM NETWORK LAYER PACKET SIZE = 2048 BYTES
IST1478I NUMBER OF UNACKNOWLEDGED BUFFERS = 0
IST1479I RTP CONNECTION STATE = CONNECTED - MNPS = NO
IST1480I RTP END TO END ROUTE - PHYSICAL PATH
IST1460I TGN CPNAME TG TYPE HPR
IST1461I 21 USIBMRA.RA39 APPN RTP
IST1461I 21 USIBMRA.NNP41A APPN RTP
IST1461I 21 USIBMRA.NN2216A APPN RTP
IST231I RTP MAJOR NODE = ISTRTPMN
IST654I I/O TRACE = OFF, BUFFER TRACE = OFF
IST1500I STATE TRACE = OFF
IST355I LOGICAL UNITS:
IST080I W0517003 ACT/S---X-
IST314I END
Figure 191. Path Switch for LU-LU Pipe
Nothing was observed on RA39, and nothing changed as far as it was aware. Its
sessions were not affected by the failure.
This display shows that the link to NNP61A is in RESET status 1, and the CP-CP
sessions no longer exist 2. The RTP connections 3 with their LU-LU sessions
are all still there, but the only remaining Route Setup pipe is 4 to NNP41A,
where the newly switched sessions now go.
Figure 193 on page 196 summarizes this section, where we set up and switched
LU-LU sessions with the 2216 acting as the DLUR node on behalf of a workstation
attaching as a peripheral node, just as it would attach to a subarea boundary
function.
In this chapter we extend the HPR and DLUR implementation to the 2210
platform. Figure 194 shows the network we built for this configuration.
We installed the 2210 router in parallel with the 2216, to simulate an alternative
gateway to a remote site. We cross-connected the 2216 and the 2210 to each of
the central 3746s, as would probably be the case if the wide area network was a
shared facility such as frame relay. The VTAM and 3746 configurations were the
same as in the previous chapter. Both 2216 and 2210 were configured as DLUR
nodes, and a CS/2 node PU05170, configured for LEN attachment, served as the
remote workstation as in the previous chapter. All connections in the diagram
are in fact token-ring except the ESCON channels to the hosts. The CS/2 node is
on its own separate LAN, to conform more closely with the typical customer
environment. The backbone LAN, which connects the 3746s and the remote
routers, represents the wide area network.
As with the 2216, the major steps in the 2210 configuration are:
1. APPN/HPR node configuration
2. APPN/HPR port configuration
3. APPN/HPR link configuration
4. APPN/HPR DLUR/DLUS configuration
Figure 195 shows that invoking the APPN configuration for the 2210 is exactly the
same as for the 2216. You enter talk 6 at the basic “*” prompt, then protocol
appn. As with the 2216, the commands can be abbreviated (p for protocol, for
example) but in general we used longer forms for clarity.
*talk 6
Config>protocol appn
APPN user configuration
APPN config>
Figure 195. Invoking 2210 APPN Configuration
Next, we defined the two ports trn0 and trn5, the downstream and upstream
ports respectively. Figure 197 shows the definitions for trn0. The definitions for
trn5 are identical except for (of course) the port name and interface number.
Next, we configured the link stations. We defined both upstream stations to the
3746s, as we wanted the remote routers to initiate these connections. We also
defined the connection to the 2216 on the downstream port, as we had not
defined this previously on the 2216. Figure 198 on page 200 shows the
connection to NNP41A on trn5.
Note the remote SAP of 8 1, necessary because the 3746s used their own
defaults instead of the 2210′s default of 4.
The definition of the connection to NNP61A was the same except for the station
name (st61a) and the MAC address of the remote node.
Figure 199 on page 201 shows the definition of the trn0 link station to the 2216
NN2216A.
After the 2216 displays of earlier tests, this was quite familiar to us.
DEFINE_LOGICAL_LINK LINK_NAME(HOST0001)
ADJACENT_NODE_TYPE(LEN) 6
DLC_NAME(IBMTRNET)
ADAPTER_NUMBER(0)
DESTINATION_ADDRESS(X′40002210A00004′ ) 4
ETHERNET_FORMAT(NO)
CP_CP_SESSION_SUPPORT(NO) 7
SOLICIT_SSCP_SESSION(YES) 5
NODE_ID(X′05D05170′ ) 3
ACTIVATE_AT_STARTUP(YES)
USE_PUNAME_AS_CPNAME(NO)
LIMITED_RESOURCE(USE_ADAPTER_DEFINITION)
LINK_STATION_ROLE(USE_ADAPTER_DEFINITION)
MAX_ACTIVATION_ATTEMPTS(USE_ADAPTER_DEFINITION)
EFFECTIVE_CAPACITY(USE_ADAPTER_DEFINITION)
COST_PER_CONNECT_TIME(USE_ADAPTER_DEFINITION)
COST_PER_BYTE(USE_ADAPTER_DEFINITION)
SECURITY(USE_ADAPTER_DEFINITION)
PROPAGATION_DELAY(USE_ADAPTER_DEFINITION)
USER_DEFINED_1(USE_ADAPTER_DEFINITION)
USER_DEFINED_2(USE_ADAPTER_DEFINITION)
USER_DEFINED_3(USE_ADAPTER_DEFINITION);
Note the node ID 3, specified at the node level and confirmed (unnecessarily)
at the link level. This will be used by RA39 to identify the PU and thus to define
the PU and LUs dynamically. Note also the destination MAC and SAP address of
the 2210 4 and the request for SSCP sessions 5.
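For readers more familiar with subarea definitions, the node ID X'05D05170' is
the value VTAM splits into IDBLK and IDNUM. A switched PU predefined to match
this XID would look like the hypothetical sketch below; we did not need such a
definition, because the PU and its LUs were built dynamically:

* Hypothetical PU statement in a switched major node - not used in our tests
W05170   PU    PUTYPE=2,IDBLK=05D,IDNUM=05170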
Because the adjacent node type has been specified as LEN 6 (by not
requesting APPN support on the host link using the GUI configuration), CS/2 will
pretend to be a LEN node at XID exchange time. The XIDs will state that parallel
TGs are not supported between CS/2 and the 2210; this is all that is needed to
ensure that a connection is treated as LEN, and imposes the requirement that
CP-CP sessions are not supported 7.
All ports and link stations are active, both to the host gateway 3746s and to the
adjoining 2216 node.
Next we displayed the RTP connections and active sessions, as in Figure 204.
There were pairs of CP-CP sessions with each partner network node ( 8 and
10), as we hoped. The RTP connections table 9 showed that all were using
RTP pipes (all four nodes support control flows); those to the 2216 had both
CP-CP sessions on a single RTP pipe but those to the 3746s had separate pipes.
This is purely a timing consideration. The Route Setup pipe was from a previous
LU-LU session; there were no DLUR/S sessions active at this time as 10
proved.
The links to the 2210 and the 2216 are defined dynamically on the 3746 (that is,
on the partner side only), so they have 3746-defined names such as @@3 and
@@11. The links to NNP61A and RA39 are explicitly defined. All are active.
Figure 206 shows the active RTP connections at the start of the test. Apart from
two Route Setup pipes from previous LU-LU sessions, the only active
connections are the CP-CP pipes to the four adjacent network nodes.
This was the DLUR/S pipe, and used the following path:
┌───────┐ ┌──────┐ ┌──────┐
│CP2210T├───TG29─────┤NNP41A├───TG21─────┤RA39 │
└───────┘ └──────┘ └──────┘
Further proof that this is the DLUR/S pipe is given by Figure 208 on page 207.
The node CP2210T has two sessions with status ACTIV/DL on link station
CNR00060 11, and is the DLUR for W05170 12, which is the type 2 PU in the
CS/2 node.
Next, we issued the displays in Figure 209 on page 208 on the 2210 to see the
newly created RTP connection.
Once again, as soon as the peripheral node sent an XID to the 2210 requesting
SSCP sessions, an RTP connection was established to the DLUS defined in the
2210 node. This connection 15, using APPN COS SNASVCMG, was to RA39.
The two sessions it carried are shown at 14. The actual link station to PU05170
13 is dynamically defined (its name is generated by the 2210), and does not
support HPR because it appears as a LEN connection.
A display of W0517003 from RAA (Figure 210 on page 209) shows that RAA sees
it as an APPN LU owned by CP2210T and served by RA39.
At this stage we looked at the 2210 displays again to see the connectivity status.
The RTP connections display can be seen in Figure 211.
The new session is represented by the RTP pipe with APPN COS #CONNECT
16.
The observant reader will notice the absence of a long-lived Route Setup pipe to
NNP61A. If the new session was set up over this connection, where is the Route
Setup pipe?
You can see that the new path now goes through both remote routers:
┌─────────┐ ┌───────┐ ┌───────┐ ┌──────┐ ┌────┐
│W0517003 ├─────┤CP2210T├──TG29───┤NN2216A├──TG21─┤NNP61A├─TG21─┤RAA │
└─────────┘ └───────┘ └───────┘ └──────┘ └────┘
Figure 213 on page 211 shows the old and new session paths after the link
failures.
From RA39, we displayed the RTP connection that the DLUR/S sessions were
using, as in Figure 214 on page 212.
Most of the CP-CP sessions had disappeared. The only ones left were to the
2216 17 over an RTP connection. The two RTP pipes that concerned us were
active: the DLUR/S pipe to RA39 18 and the LU-LU pipe to RAA 19.
The first new answer here is that to the BX question 1. This immediately
results in the 2216 asking whether to allow a search for unregistered LUs from
its backbone NN server 2. Please refer to 3.3, “DLUR/S Design
Considerations” on page 32 for an explanation of why this might be necessary.
Next, we defined the port to be used for the uplink (upstream connection to the
backbone network). Figure 217 on page 216 shows the port definitions we used
for the trn1 port.
We responded in the affirmative to the question 1 about using this port as a BX
uplink. This sets the default value to be used by link stations on this port. Each
link station can be defined individually as an upstream or downstream
connection, provided the BX rules are obeyed.
In this dialog, we specified 2 that the adjacent node was a network node. This
immediately dictates that the link station is an upstream connection, so the 2216
did not ask whether it was upstream or downstream. The other link station, to
NNP41A, was defined in a similar fashion.
There is another way to activate the new configuration: in the APPN config prompt
(after talk 6 and protocol appn), type activate. This command compares the
active configuration with the new one, and either restarts the 2216 (as above) or
dynamically updates the parameters, depending on what has changed.
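For example, after changing the configuration you would enter:

*talk 6
Config>protocol appn
APPN config>activate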
Aside from the BX support, our 2216 and 2210 configurations were the same as
in the previous chapter. In particular, the 2216 was acting as a DLUR for
downstream nodes, with RAA as its DLUS. The 2210 was configured as a DLUR
node with RA39 as its DLUS.
Figure 220 (Part 1 of 3). NDF File for CS/2 End Node
Figure 220 (Part 2 of 3). NDF File for CS/2 End Node
DEFINE_LUA LU_NAME(@LUA0001)
HOST_LINK_NAME(HOST0001)
NAU_ADDRESS(2);
DEFINE_LUA LU_NAME(@LUA0002)
HOST_LINK_NAME(HOST0001)
NAU_ADDRESS(3);
DEFINE_LUA LU_NAME(@LUA0003)
HOST_LINK_NAME(HOST0002)
NAU_ADDRESS(4);
DEFINE_LUA LU_NAME(@LUA0004)
HOST_LINK_NAME(HOST0002)
NAU_ADDRESS(5);
Figure 220 (Part 3 of 3). NDF File for CS/2 End Node
Link station HOST0001 3 is connected to the 2216′s downstream port 4, and
supports both CP-CP sessions 5 and SSCP sessions 6. We have an APPN
node with dependent LUs, so we require both. The PU that this link represents
is given the IDNUM of 04444 7, because we need to distinguish it from the
other link, which will inherit the node′s IDNUM. If the IDNUMs on the two links
Link station HOST0002 8 is connected to the 2210′s downstream port 9, and
also supports both CP-CP sessions 10 and SSCP sessions 11. The CP-CP
sessions, of course, will only be established over one link at a time.
After we started the CS/2 node, we displayed the active connections using the
Subsystem Management facility as in Figure 222 on page 221.
We defined two dependent LUs on each link, to use the DLUR support in the 2210
and 2216. We did not use these in the test, but the display shows their SSCP-PU
and SSCP-LU sessions (total three sessions on each link). The two extra
sessions on HOST0001, of course, are the CP-CP sessions to the 2216, which the
CS/2 node has chosen as its network node server.
The display of LU 6.2 sessions (Figure 223) shows that the extra sessions on
HOST0001 are indeed the CP-CP sessions.
Figure 224 on page 222 then showed us the network view as seen from the 2216.
We performed similar displays from the 2210 console, as shown in Figure 225.
On the 2210:
• There are three active connections 5; one to each 3746 and one to BREN1.
• The only active CP-CP sessions 6 are with NNP41A. BREN1 chose
NN2216A to be its server, and CP2210T has chosen NNP41A.
• The 2210 has its own DLUR/S connection, over an RTP pipe 7, to RA39.
This connection serves two dependent LUs on BREN1.
We also displayed the connections as seen from the 3746s. Figure 226 shows
those on NNP41A.
Note that NNP41A sees the 2210 (CP2210T) as an end node. Similarly, NNP61A
is connected to both routers (Figure 227) and sees both as end nodes.
Both connections from CP2210T to the 3746s are known to the APPN network (the
local topology databases in this case), even though only one of them at a time
can carry CP-CP sessions.
On RAA, the DLUR/S sessions were carried on RTP pipe CNR008C7, which took
the route:
┌───────┐ ┌──────┐ ┌──────┐
│NN2216A├───TG21───┤NNP61A├──TG21───┤ RAA │
└───────┘ └──────┘ └──────┘
We displayed the details of the #INTER session (Figure 228) to see that it was
using HPR as its logical link.
The two expected RTP pipes to RA39 are there. We asked for the details of the
#INTER pipe to see its route, as shown in Figure 230 on page 226.
Displaying the #INTER RTP connection (CNR00077) from RA39 confirms the
session path, which is:
┌─────┐ ┌───────┐ ┌──────┐ ┌──────┐ ┌─────┐
│BREN1├─TG23─┤NN2216A├──TG21──┤NNP61A├──TG21──┤NNP41A├─TG21─┤RA39 │
└─────┘ └───────┘ └──────┘ └──────┘ └─────┘
As expected, only the link to NNP61A and its CP-CP sessions remain. A similar
display from the 2210 (Figure 232) is more interesting.
list cp cp
CP Name Type Status Connwinner ID Conloser ID
========================================================================
USIBMRA.BREN1 EN Active B8930F86 1 B8930F82
USIBMRA.NNP41A NN Active B8930D60 B8930D62
USIBMRA.NNP61A NN Inactive 00000000 00000000
APPN >list appc
LU Name Mode Type FSM
======================================
USIBMRA.NNP41A CPSVCMG Pri ACT
USIBMRA.NNP41A CPSVCMG Sec ACT
USIBMRA.RA39 CPSVRMGR Pri ACT
USIBMRA.RA39 CPSVRMGR Sec ACT
USIBMRA.BREN1 CPSVCMG Sec ACT 1
USIBMRA.BREN1 CPSVCMG Pri ACT 1
Figure 232. Displays on 2210 after Link Failure
The display reveals 1 that the CS/2 end node has changed its allegiance by
establishing CP-CP sessions with CP2210T. If it has CP-CP connectivity into the
network, it can initiate a path switch, so we displayed the RTP connection details
again from BREN1. Figure 233 on page 228 shows what we saw.
RTP path switch has occurred, and the new route goes via the new BX gateway
(as it must) and then NNP41A. To confirm this, we displayed the same RTP pipe
from RA39, as in Figure 234.
DISPLAY NET,ID=CNR00077,SCOPE=ALL
IST097I DISPLAY ACCEPTED
IST075I NAME = CNR00077 , TYPE = PU_T2.1
IST1392I DISCNTIM = 00010 DEFINED AT PU FOR DISCONNECT
IST486I STATUS= ACTIV--LX-, DESIRED STATE= ACTIV
IST1043I CP NAME = BREN1 , CP NETID = USIBMRA , DYNAMIC LU = YES
IST1589I XNETALS = YES
IST933I LOGMODE=***NA***, COS=#INTER
IST1476I TCID X′38273F230000009D′ - REMOTE TCID X′0000000000000189′
IST1481I DESTINATION CP USIBMRA.BREN1 - NCE X′80 ′
IST1587I ORIGIN NCE X′D000000000000000′
IST1477I ALLOWED DATA FLOW RATE = 3085 KBITS/SEC
IST1516I INITIAL DATA FLOW RATE = 1597 KBITS/SEC
IST1511I MAXIMUM NETWORK LAYER PACKET SIZE = 2048 BYTES
IST1478I NUMBER OF UNACKNOWLEDGED BUFFERS = 0
IST1479I RTP CONNECTION STATE = CONNECTED - MNPS = NO
IST1480I RTP END TO END ROUTE - PHYSICAL PATH
IST1460I TGN CPNAME TG TYPE HPR
IST1461I 21 USIBMRA.NNP41A APPN RTP
IST1461I 31 USIBMRA.CP2210T APPN RTP
IST1461I 34 USIBMRA.BREN1 APPN RTP
IST231I RTP MAJOR NODE = ISTRTPMN
IST654I I/O TRACE = OFF, BUFFER TRACE = OFF
IST1500I STATE TRACE = OFF
IST355I LOGICAL UNITS:
IST080I BREN1 ACT/S----Y
IST314I END
Figure 234. New Path after BX Switch
In fact, the newly switched path is shorter than the original path. This is simply
because the newly acquired NN (BX) server of BREN1 happens to be nearer the
RTP partner than the original server.
This path switch worked well because the CP-CP sessions between BREN1 and
its NN servers did not flow over RTP pipes, and were therefore terminated
immediately after the link failure. The 2216 and 2210, in fact, will terminate such
sessions at once even if they do flow over RTP connections. If there is no valid
alternative path for CP-CP sessions the 221X nodes will not wait for the path
switch timer to expire, thus allowing a timely RTP path switch for LU-LU sessions
just as we saw.
The ARB mechanism used by HPR for both flow control (keeping the receiving
node running smoothly) and congestion control (keeping the network running
smoothly) is described in Chapter 9 of Inside APPN - the Essential Guide to the
Next-Generation SNA. To assist the reader′s understanding, we offer a
shortened description here, together with some examples of traces taken during
our tests that help to illustrate the concepts.
A.1 Introduction
The adaptive rate-based (ARB) congestion and flow control algorithm allows RTP
connections to make more efficient use of network resources such as links and
buffers. Input traffic (offered load sent by an RTP connection endpoint) entering
the network is regulated by the ARB algorithm based on the conditions in the
network and the partner RTP endpoint. When the network or partner RTP
endpoint approaches congestion (in other words, there is increasing delay and
throughput fails to keep pace with incoming traffic), input traffic is reduced.
When the capacity of the network or partner RTP endpoint increases, input traffic
is increased.
Figure 235 shows the relationship between network throughput and offered load
for a given path.
The knee (point K) is the point beyond which the path starts to become
congested (in other words, link transmission queues develop along the path
resulting in higher network delays). Beyond point K, an increase in the send
rate does not result in an increase of throughput. ARB detects this
precongestion condition (saturation) and adjusts the sending rate accordingly.
The data being regulated by the ARB algorithm always flows from a sender (ARB
sender) to a receiver (ARB receiver). The sender continually queries the
receiver, by sending it a rate request along with the data, in order to obtain
information about the state of the network and the state of the node containing
the receiver. The receiver responds by sending back a rate reply, which the
sender uses to adjust its send rate.
The fixed characteristics of the path (the speed of the slowest link along the path
and the total transmission time over the entire path) are used to calculate two
very important parameters used in the ARB algorithm: the range begin time and
the range end time. The range begin time (which we refer to as K time) is the
amount of delay that will cause the network to reach the knee point (point K in
Figure 235 on page 231). The range end time (which we call C time) is the
amount of delay that will cause the network to reach the cliff (point C in
Figure 235 on page 231).
The K time is the time taken to transmit 8000 bits over the slowest link in the
RTP path between sender and receiver. The C time is the time taken to transmit
80000, 120000 or 160000 bits over the same link. Which of these three values is
chosen depends on the number of hops and the number of slow-speed links in
the path. This is so that longer paths can compete fairly with shorter paths for
the same traffic.
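As a worked example of these two values, the following sketch (our own
illustration, not taken from the test network) computes the K and C times for an
assumed slowest link of 16 Mbps, using the 80000-bit constant:

   # Sketch: deriving the ARB range begin (K) and range end (C) times.
   # The link speed and the choice of 80000 bits are assumptions for this
   # example; the real choice among 80000, 120000 or 160000 bits depends on
   # the number of hops and slow-speed links in the RTP path.
   def arb_range_times(slowest_link_bps, c_bits=80000):
       k_time = 8000 / slowest_link_bps     # seconds to transmit 8000 bits
       c_time = c_bits / slowest_link_bps   # seconds to transmit the C constant
       return k_time, c_time

   k, c = arb_range_times(16000000)         # 16 Mbps slowest link (assumed)
   print("K time = %.2f ms, C time = %.2f ms" % (k * 1000, c * 1000))
   # prints: K time = 0.50 ms, C time = 5.00 ms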
The rate request, rate reply, and setup messages are transmitted in the ARB
optional segment in the transport header of the NLP.
The receiver also takes into account previous delays remembered from earlier
rate request messages. Based on this network delay information, the receiver
will recommend appropriate actions to be taken by the sender. These actions
are communicated in a rate reply segment that enables the sender to adjust its
send rate appropriately. The rate reply segment may either be sent stand-alone
or be carried along with data. The receiver, in addition to deriving its
recommendations based on network delays, can also tell the sender to reduce
its sending rate based on conditions within the receiver node (for example,
buffer shortage). Figure 237 on page 234 illustrates these principles.
When the receiver gets the second rate request message it compares t1 (the
interval between the sending of consecutive rate requests) with t2 (the interval
between receipt of the same two requests). Depending on the relationship
between t1 and t2, the receiver sends a rate reply recommending an action to
the sender.
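As a minimal sketch of this receiver-side decision (our own simplification: we
assume the receiver accumulates the t2 - t1 differences and compares the result
with the K and C times; the exact rules are in Inside APPN), the logic might
look like this:

   def arb_rate_reply(t1, t2, k_time, c_time, accumulated_delay):
       # t1: interval between the sending of two consecutive rate requests
       # t2: interval between receipt of the same two requests
       # A growing t2 relative to t1 means queues are building along the path.
       accumulated_delay += (t2 - t1)
       if accumulated_delay < k_time:
           return "NORMAL", accumulated_delay      # below the knee
       elif accumulated_delay < c_time:
           return "RESTRAINT", accumulated_delay   # between knee and cliff
       else:
           return "SLOWDOWN1", accumulated_delay   # approaching the cliff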
The action taken by the sender on receipt of the rate reply indicators is as
follows:
• Normal.
If the mode is green, the send rate is increased by an amount determined by
the link capacity (not a percentage of the current send rate). If the mode is
yellow or red, the send rate is left alone but the mode is changed (to green
or yellow respectively).
• Restraint.
The send rate is left unchanged, but the mode is changed from red to yellow
or yellow to green, as appropriate.
• Slowdown1.
Table 1 summarizes this algorithm. The first column represents the event that
takes place, and the other columns show the result depending on what the
current operating mode was. The numbers in the results columns indicate to
which state the sender is switched: 1 means green, 2 means yellow, 3 means
red. The words in brackets denote the effect on the sending rate of each event.
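A sender-side sketch of these rules follows. The mode transitions for Normal and
Restraint are taken from the description above; the size of the capacity-based
increment and the Slowdown1 handling are assumptions for illustration only, since
the Slowdown1 text is abbreviated here.

   GREEN, YELLOW, RED = 1, 2, 3                 # operating modes as numbered in Table 1

   def arb_sender_action(reply, mode, send_rate, capacity_increment):
       if reply == "NORMAL":
           if mode == GREEN:
               send_rate += capacity_increment  # increase by a link-capacity-based amount
           elif mode == YELLOW:
               mode = GREEN                     # rate unchanged, mode relaxes
           else:
               mode = YELLOW                    # red -> yellow, rate unchanged
       elif reply == "RESTRAINT":
           if mode == RED:
               mode = YELLOW                    # rate unchanged
           elif mode == YELLOW:
               mode = GREEN
       elif reply == "SLOWDOWN1":
           send_rate *= 0.5                     # assumed reduction, illustrative only
           mode = YELLOW if mode == GREEN else RED
       return mode, send_rate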
┌───────┐ ┌───────┐
│ RTPa │ │ RTPb │
└───────┘ └───────┘
Notes:
1. RTPa establishes a connection with RTPb by sending a connection setup
segment. The ARB setup segment is included (along with the switching
information segment - not shown) so that ARB parameters can be initialized.
2. RTPa sends data to RTPb. No ARB rate request segment is included at this
time.
Notes:
1. If the minimum link capacity 4 is 16000 kbps, the initial allowed send rate is
10% of this (1600 kbps 1). The ARB mechanism will adjust this value until
the operating region (between knee and cliff) is reached.
2. 3 indicates the ARB setup segment.
3. 2 shows that the current ARB operating mode is green.
At 1 the last measurement interval is sent to the receiver. This is the first
rate request on an RTP connection.
In this section we show a detailed illustration of an HPR and DLUR scenario, with
traces and displays. The purpose is to give you an example with which to
compare your own environment when a problem occurs and problem
determination is called for. Figure 244 shows the network setup which we used.
The traces and displays were taken on the CS/2 node, mainly because
formatting a trace is so easy on this platform. Not all the activity was traced; we
confined ourselves to the main flows as this redbook is quite long enough
already.
Subfield X′81′
Subfield X′80′
CV X′61′
Subfield X′80′
Line: 91 Recv MU
Time stamp: 08:26:55.79
DLC type: HPR
||TH: FID5, Exp, OIS, SNF=0x0003, R SA2586f52ddc291f93
||RH: +RSP, SC, FI, RQD1
||BIND +rsp
|| (0x0001) Type = Negotiable
|| (0x0002) FM profile = 19
|| (0x0003) TS profile = 7
|| FM usage - primary:
|| (0x0004) Chaining use = Multiple-RU chains allowed
|| Request control mode = Immediate request mode
|| Chain response protocol = Definite or exception response
|| Two-phase commit = Not supported
|| Compression = Will not be used
|| Send end bracket = Will not send
|| FM usage - secondary:
|| (0x0005) Chaining use = Multiple-RU chains allowed
1 The destination hop index contains the index (integer) into the
RSCV (CV 2B) for the destination node.
2 The Path Switch time indicates the maximum time (in milliseconds) that
the destination requires for a path switch.
CV X′80′
Here we show the first flow on the DLUR/S pipe, which starts with a REQACTPU
request from the DLUR.
Line: 3343 Send MU
Time stamp: 08:27:32.42
DLC type: HPR
||TH: FID5, OIS, SNF=0x0001, R SA00000000f2000004
||RH: RQ, FMD, FI, OIC, RQE1, PI, CEBI
||FMH-5
|| Command code = Attach
|| User ID already verified = No
|| Password is substituted = No
|| PIP present = No
|| Conversation type = Basic
|| Synchronization level = None
|| Transaction program name = ?006 (APPN Receive_Encap_Msg)
|| Logical unit of work identifier:
|| LU name = USIBMRA.CM5HPRNN
|| Instance number = 0xafff44a61c0d
|| Sequence number = 0x0001
|| Conversation correlator = 0x58451f93a4f58625
||FID2 Encapsulation Variable
|| TH: FID2, OIS, LFSID=0x00000, SNF=0x0000
|| RH: RQ, FMD, FI, OIC, RQD1
|| REQACTPU rq
|| Format = Internal PU
|| DLUR/S Capabilities control vector
|| (0x0002) Dependent LU support level = 0x01
|| (0x0003) Node type = Network Node
|| (0x0004) DLUR/S node type = DLUR
|| (0x0005) Flow reduction sequence number = 0x00000000
|| (0x0009) RECEIVE_TDU_TP = Not supported
|| Network name control vector:
|| (0x0002) Network name type = PU
|| (0x0003) Name = MPU00001
|| XID Image FID2 Encapsulation control vector
|| (0x0002) XID I-field image:
|| Hex dump:
|| 020605da a61a <...... > <....
|| TG descriptor control vector
|| DLC Signaling Type TG Descriptor subfield
Once the REQACTPU and the ACTPU (not shown) have completed, the
ACTLU can proceed.
The links used for the CP-CP sessions and the DLUR/S sessions are all active.
At the same time, we displayed the active RTP connections, as in Figure 248.
TCID 7D (Figure 251 on page 282) is the DLUR/S pipe to RA39, which takes the
following route:
Similarly, TCID 7B (not shown) is the DLUR/S pipe to RAA via the route:
RAA has received the request and performs a Route Setup to establish the
HPR path on which this new session will be set up. The Route Setup message
flows on the existing long-lived pipe from NNP61A.
Then the SESSST (session started) RU is sent to the SSCP. This flows on
the SSCP-LU session in the DLUR/S pipe.
Next, a Route Setup is received for the new session with RAAAN007.
The previous RTP connection was closed when the last (and only) session
on it ended, so we need a new one. Note that this would not happen with
TSO which does the CLSDST PASS before the BIND is sent.
Now the session begins with SDT (start data traffic), followed by some 3270
data.
A new HPR connection is now present with the TCID 74, the COS used being
#CONNECT as we saw in the Route Setup flows. The details of this connection
are shown in Figure 254 on page 299.
Clearly a path switch must occur, and the first node to notice the problem
(CM5HPRNN, because an adjacent link failed) will initiate it. Before it can do the
path switch it must calculate a new route (being a network node) and send a
Route Setup message to obtain the ANR labels and ARB information for the new
route. Because both OLU and DLU are on network nodes, CM5HPRNN can do
this without sending any Locates into the network.
Another path switch occurs for the DLUR/S session with RAA, which was
also on the failing link. Here is the new Route Setup.
During this time we are still using the 3270 session. We press the Enter key
and the application responds.
TCID 7B is the DLUR/S pipe to RAA, so we display its details in Figure 257 on
page 309.
The Previous path button allows us to view the old route, so without hesitation
we use it to see Figure 258 on page 310.
TCID 74 is the LU-LU session pipe, so we display its details in Figure 259 on
page 311.
B.3.3 Summary
Figure 260 on page 312 summarizes the path switch described in the previous
section. It is the same scenario as Figure 193 on page 196.
Parameter Keyword Statement Description
Network n/a n/a Selects support for APPN, IP, and/or subarea traffic.
NPA eligible NPACOLL LINE Eligible for NPM performance data collection (requires NPM
V2R3).
Attachment n/a n/a Indicates whether the ESCON port is attached directly, via an
ESCD, or via a chain of ESCDs.
ESCD number n/a n/a Identifies the ESCON Director to which this port is attached.
ESCD model n/a n/a Identifies which type of ESCON director is being used.
Control unit link address n/a n/a Specifies the port at the ESCON director to which the optical
fiber from the 3746 ESCON coupler is attached.
Network n/a n/a Selects support for APPN, IP, and/or subarea traffic.
Host mode n/a n/a Mode of the host configuration with respect to partitioning.
Basic, LPAR or EMIF.
Host name n/a n/a Specifies the machine to which the ESCON channel adapter is
attached and is used to identify IOCP output.
Partition name n/a n/a Specifies the partition to which the CHPID is attached.
CHPID n/a n/a Specifies the Channel Path Identifier generated in the IOCP at
the Host for the ESCON host link.
Partition number n/a n/a Identifies the logical host within a physical host. Valid only
when host mode EMIF or LPAR is selected.
Host link address (HLA) n/a n/a Identifies the port in the ESCON Director to which the optical
fiber from the host is attached.
Accept any incoming call? CALL LINE Defines whether link stations can be created dynamically on
this host link. Not available for ESCON.
NPA eligible NPACOLL LINE Eligible for NPM performance data collection.
Maximum received PIU size TRANSFR*BFRS-18 LINE/BUILD Maximum frame size 3746 is able to receive.
Maximum sent PIU size MAXBFRU*IOBUF HOST/PU Maximum frame size 3746 is allowed to send.
Relative cost per byte COSTBYTE LINE Used for APPN route calculation.
Relative cost per unit of time COSTTIME LINE Used for APPN route calculation.
PU type PUTYPE LINE Used to define the role of the adjacent node.
IPL through that station? IPL LINE Enables NCP loading and dumping through that station.
On which CCU? n/a n/a Indicates owner (APPN, CCU-A, CCU-B or the link station).
Parameter Keyword Statement Description
NPA eligible NPACOLL PU Eligible for NPM performance data collection (future).
HPR support LLERP / HPR PU Select HPR support as none or yes (ERP required).
Relative cost per byte COSTBYTE PU Used for APPN route calculation.
Relative cost per unit of time COSTTIME PU Used for APPN route calculation.
XID receipt supported? XID PU No meaning for ESCON links. Relevant only when using
dependent LU requester function.
Channel adapter slowdown timer (CASDL) CASDL PU Maximum amount of time that the ESCON station can
block inbound traffic due to slowdown before signaling that the station is inoperative.
Attention timer (TIMEOUT) TIMEOUT PU Amount of time to wait for a response to an attention signal
sent to the host before initiating channel disconnect.
Delay timer (delay) DELAY PU Maximum amount of time to wait between the time data is
available to the host and the time the attention signal is sent to
a host node.
Total transmit threshold SRT PU Number of transmissions associated with the station before
informing the host that the threshold was reached.
Total retry threshold SRT PU Number of retries associated with the station before informing
the host that the threshold was reached.
Parameter Keyword Statement Description
APPN local SAP n/a (always 4) n/a DLC service access point (SAP) of 3746.
HPR local SAP n/a n/a DLC service access point (SAP) of 3746 for HPR use.
Accept any incoming call? CALL LINE Defines whether link stations can be created dynamically on
this host link.
NPA eligible NPACOLL LINE Eligible for NPM performance data collection (requires NPM
V2R3).
Maximum received PIU size RCVBUFC LINE Maximum frame size 3746 is able to receive.
Maximum sent PIU size MAXTSL LINE Maximum frame size 3746 is allowed to send.
Relative cost per byte COSTBYTE LINE Used for APPN route calculation.
Relative cost per unit of time COSTTIME LINE Used for APPN route calculation.
T1 reply timer T1TIMER BUILD Value specifying time within which reply should be received.
T2 acknowledgement timer T2TIMER PU Value specifying maximum time within which reply is returned.
Inactivity timer TITIMER PU Value specifying maximum time within which data should be
received.
Maximum received frames MAXOUT or 127 PU Maximum number of frames received before partner requires
acknowledgment.
RNR limit RNRLIMT PU Specifies how long a remote station is allowed to refuse data
before being identified as inoperative.
Retries per sequence RETRIES(m,,) PU Number of retry attempts in a sequence after a transmission
has failed. Total attempts within a sequence is m+1.
Retry sequence RETRIES(,,n) PU The number of retry sequences. The total number of
sequences is n+1.
Pause between retry sequences RETRIES(,t,) PU The period between two retry sequences.
HPR remote SAP n/a n/a DLC service access point (SAP) of station for HPR use.
HPR support ERP / HPR PU HPR support, no or yes with ERP type required.
Relative cost per byte COSTBYTE PU Used for APPN route calculation.
Relative cost per unit of time COSTTIME PU Used for APPN route calculation.
Primary dependent LU server (DLUS) n/a n/a Fully qualified name of primary DLUS. Relevant only
when using dependent LU requester function.
Backup DLUS n/a n/a Fully qualified name of backup DLUS (if appropriate).
T1 reply timer T1TIMER PU Value specifying time within which reply should be received.
T2 acknowledgement timer T2TIMER PU Value specifying maximum time within which reply is returned.
Inactivity timer TITIMER PU Value specifying maximum time within which data should be
received.
Maximum received frames 127 or MAXOUT PU Maximum number of frames received before partner requires
acknowledgement.
RNR limit RNRLIMT PU Specifies how long a remote station is allowed to refuse data
before being identified as inoperative.
Retries per sequence RETRIES(m,,) LINE Number of retry attempts in a sequence after a transmission
has failed. Total attempts within a sequence is m+1.
Retry sequence RETRIES(,,n) PU The number of retry sequences. The total number of
sequences is n+1.
Pause between retry sequences RETRIES(,t,) PU The period between two retry sequences.
There are various types of packets that can flow between HPR nodes. A packet
on an HPR-capable link may contain an XID-3 I-frame, a FID-2 PIU, or a network
layer packet (NLP). Figure 261 illustrates.
┌────XID3 packet───┐
┌──────────────────┐
│ XID3 I-frame │
└──────────────────┘
A receiver can distinguish a FID-2 PIU from an NLP by examining the first bits of
the packet. For FID-2 PIUs, these bits are always set to B′0010′ indicating a
format identifier of 2. The first four bits of an NLP will never be B′0010′.
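The check is simple enough to sketch in a few lines (the function and sample
bytes below are our own illustration, not part of any product):

   def classify_hpr_packet(frame):
       # The first four bits of a FID-2 PIU are always B'0010' (format identifier 2);
       # an NLP never begins with that bit pattern.
       return "FID-2 PIU" if (frame[0] >> 4) == 0b0010 else "NLP"

   print(classify_hpr_packet(bytes([0x2D, 0x00])))   # first four bits B'0010' -> FID-2 PIU
   print(classify_hpr_packet(bytes([0xC6, 0x00])))   # anything else           -> NLP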
Both FID-2 PIUs and NLPs use the same transmission priorities: network, high,
medium, and low. Link outbound queues (one for each priority) are used to
implement priority routing. FID-2 PIUs and NLPs are enqueued according to
their priority and so any given priority queue can contain both FID-2 PIUs and
NLPs. There is no additional priority implied based on the type of packet. FID-2
PIUs and NLPs are treated equally so the order they appear in the queue is the
same as the arrival order.
FID-2 PIUs are assigned a priority based on the ISR routing tables in the
intermediate node, because the FID-2 header does not carry the priority. An RTP
connection has no such tables in the intermediate (ANR) nodes, so for NLPs the
priority is carried in the NHDR itself.
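A small sketch of such link outbound queues (the names and structure are
illustrative only) may make the behavior clearer:

   from collections import deque

   PRIORITIES = ("network", "high", "medium", "low")

   class LinkOutboundQueues:
       """One FIFO queue per transmission priority; FID-2 PIUs and NLPs share them."""
       def __init__(self):
           self.queues = {p: deque() for p in PRIORITIES}

       def enqueue(self, packet, priority):
           # For a FID-2 PIU the priority comes from the ISR routing tables;
           # for an NLP it is taken from the priority field in the NHDR.
           self.queues[priority].append(packet)

       def dequeue(self):
           # Serve the highest priority first; within a priority, arrival order is kept.
           for p in PRIORITIES:
               if self.queues[p]:
                   return self.queues[p].popleft()
           return None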
Whether both FID-2 PIUs and NLPs actually flow over a given link depends on
several factors:
• CP-CP sessions flow on NLPs or FID-2 PIUs depending on whether both sides
support the Control Flows over RTP option.
• Route Setup messages flow as NLPs or FID-2 PIUs depending on whether
both sides support the Control Flows option.
• If the link is a peripheral subarea connection then SSCP-PU, SSCP-LU, and
the route extension parts of LU-LU sessions use FID-2 PIUs with dependent
session addresses (OAF and DAF).
D.2 Formats
This section summarizes the new and changed data formats for HPR. For a
detailed description of the formats, please refer to SNA Formats.
D.2.1 XID
HPR uses the XID-3 exchange just as base APPN does. Additional information is
exchanged between the partner nodes in a new control vector:
• HPR capabilities control vector (CV61)
The presence of this CV indicates that the link is requested to run HPR
protocols. It is used in the XID exchange and can include:
− IEEE 802.2 LLC subfield (X′80′)
− Control Flows Over RTP Tower subfield (X′81′)
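Purely as an illustration of how a node could act on these indications once the
XID-3 control vectors have been parsed (the parsing itself is omitted, and the
dictionary layout below is our own assumption; the exact byte encodings are in
SNA Formats):

   HPR_CAPABILITIES_CV = 0x61
   LLC_SUBFIELD = 0x80              # IEEE 802.2 LLC subfield
   CONTROL_FLOWS_SUBFIELD = 0x81    # Control Flows over RTP Tower subfield

   def link_capabilities(local_cvs, partner_cvs):
       # local_cvs/partner_cvs: dict mapping CV key -> list of subfield keys,
       # as extracted from the XID-3s exchanged on the link (an assumed layout).
       def caps(cvs):
           subfields = cvs.get(HPR_CAPABILITIES_CV)
           return (subfields is not None,
                   subfields is not None and CONTROL_FLOWS_SUBFIELD in subfields)
       local_hpr, local_cf = caps(local_cvs)
       remote_hpr, remote_cf = caps(partner_cvs)
       # HPR runs on the link only if both sides sent CV61; Control Flows over
       # RTP is used only if both sides also included subfield X'81'.
       return {"hpr": local_hpr and remote_hpr,
               "control_flows_over_rtp": local_cf and remote_cf}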
Information in this book was developed in conjunction with use of the equipment
specified, and is limited in application to those specific hardware and software
products and levels.
IBM may have patents or pending patent applications covering subject matter in
this document. The furnishing of this document does not give you any license to
these patents. You can send license inquiries, in writing, to the IBM Director of
Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785.
Licensees of this program who wish to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information which has been exchanged, should contact IBM Corporation, Dept.
600A, Mail Drop 1329, Somers, NY 10589 USA.
The information contained in this document has not been submitted to any
formal IBM test and is distributed AS IS. The information about non-IBM
(″vendor″) products in this manual has been supplied by the vendor and IBM
assumes no responsibility for its accuracy or completeness. The use of this
information or the implementation of any of these techniques is a customer
responsibility and depends on the customer′s ability to evaluate and integrate
them into the customer′s operational environment. While each item may have
been reviewed by IBM for accuracy in a specific situation, there is no guarantee
that the same or similar results will be obtained elsewhere. Customers
attempting to adapt these techniques to their own environments do so at their
own risk.
Reference to PTF numbers that have not been released through the normal
distribution process does not imply general availability. The purpose of
including these reference numbers is to alert IBM customers to specific
information relative to the implementation of the PTF when it becomes available
to each customer according to the normal IBM PTF distribution process.
Microsoft, Windows, Windows NT, and the Windows 95 logo are trademarks
or registered trademarks of Microsoft Corporation.
The publications listed in this section are considered particularly suitable for a
more detailed discussion of the topics covered in this redbook.
This information was current at the time of publication, but is continually subject to change. The latest
information may be found at http://www.redbooks.ibm.com/.
Redpieces
For information so current it is still in the process of being written, look at ″Redpieces″ on the Redbooks Web
Site ( http://www.redbooks.ibm.com/redpieces.html). Redpieces are redbooks in progress; not all redbooks
become redpieces, and sometimes just a few chapters will be published this way. The intent is to get the
information out much quicker than the formal publishing process allows.
IBMMAIL Internet
In United States: usib6fpl at ibmmail [email protected]
In Canada: caibmbkz at ibmmail [email protected]
Outside North America: dkibmbsh at ibmmail [email protected]
• Telephone Orders
• 1-800-IBM-4FAX (United States) or USA International Access Code -408-256-5422 (Outside USA) — ask for:
Redpieces
For information so current it is still in the process of being written, look at ″Redpieces″ on the Redbooks Web
Site ( http://www.redbooks.ibm.com/redpieces.html). Redpieces are redbooks in progress; not all redbooks
become redpieces, and sometimes just a few chapters will be published this way. The intent is to get the
information out much quicker than the formal publishing process allows.
C
CCM 109
CCM parameters 313
communication storage manager 18
Communications Server/2 23, 37, 43, 90
Communications Server/AIX 24, 37
Communications Server/NT 23, 37, 43, 47
control flows over RTP 12, 14, 53
controller configuration and management
   See CCM
CPSVRMGR mode 27, 30
CS/2 configuration 93, 166, 203, 218
CS/2 displays
   logical link details 186
   logical links 171, 185, 221
   logical units 190
   LU 6.2 session details 225
   LU 6.2 sessions 171, 221
   RTP connection details 174, 181, 228
   RTP connection for DLUR/S 176
   RTP connections 172, 225, 226
D
definitions
   See VTAM definitions
dependent LU requester/server
   3746 configuration 115
   and HPR 90
   branch extender considerations 42
   connections 91, 102
   contact procedures 31
   cross-border considerations 27, 32
   description 27
   design considerations 32
   example trace 247
   GDS variable 31
E
enterprise extender
   benefits 45
   connection network 47
   description 46
   product implementations 47
   responsive mode ARB 47
   UDP port mapping 47
ESCON channel 118, 121
ESCON director 119
ESCON generation assistant 110
ESCON host connection options 121
F
FID-5 header 7
H
high-performance data transfer
   See HPDT
high-performance routing
   3746 configuration 115
   and DLUR 90
   and VR-TG 18, 67
   APPN/HPR boundary 5, 11
   benefits 2
   control flows over RTP 14, 53
   dedicated RTP connections 14
   error recovery 8
   example trace 247
   flow control across a CNN 75
   implementation options 13
   MLTG 12, 23
   NCP capability 16
   NCP definitions 73
M
MAE
   See 3746 multiaccess enclosure
mobile RTP partner 10, 50
MODIFY RTP command 60
MPC 19, 22
multinode persistent sessions 50
multipath channel
   See MPC
N
NCE 5
NCP HPR capability 73
NCP HPR definitions 73
NCP parameters for HPR 21
S
SESSACC keyword 76
session address 7
session services extensions 32
start options
   See VTAM start options
stationary RTP partner 10, 50
T
TCID 6, 318
TG characteristics 70, 75
transmission priority 4, 318
transport connection identifier
   See TCID
V
VR-TG 64
VTAM
HPR capability 16, 73
VTAM definitions
3746 ESCON connection 129
CDRM for VR-TG 65
for DLUR/S 34
HPR connections 19
switched major node for DLUR 93
TG profile 70
VTAM displays
3746 NN CP 139
attempted path switch over VR-TG 63
CDRMs 64
DLUR LU 148, 157, 191, 192, 209
DLUR node 102, 144, 153, 189, 207
DLUR PU 145, 153, 190
DLUR/S connection 98
LU using base APPN 88
MPC activation 136
path switch 56, 57, 60, 62, 149, 158, 178, 194
path switch with VR-TG 67, 80, 81
path table 65
RTP activation for DLUR/S 188
RTP connection 54, 55, 58, 59, 61, 79, 83, 86, 87,
89, 101, 105, 137, 146, 151, 155, 173, 179, 194, 210
RTP connection for DLUR/S 100, 101, 143, 152,
175, 188, 206, 212
RTP connection with BX 228
RTP major node 53, 54, 145, 172
subarea routes 64, 81, 82, 83, 84, 85
topology 70, 78, 85
VR-TG activation 65
VTAM start options
HPR 16, 20, 49, 52
HPRNCPBF 16, 17, 20, 84, 88
HPRPST 20, 50, 52
PSRETRY 20, 51, 52, 61
X
XID 15, 319
Your feedback is very important to help us maintain the quality of ITSO redbooks. Please complete this
questionnaire and return it using one of the following methods:
• Use the online evaluation form found at http://www.redbooks.ibm.com
• Fax this form to: USA International Access Code 914 432 8264
• Send your comments in an Internet note to [email protected]
Please rate your overall satisfaction with this book using the scale:
(1 = very good, 2 = good, 3 = average, 4 = poor, 5 = very poor)
Overall Satisfaction ____________
Was this redbook published in time for your needs? Yes____ No____