CMS/TSO Pipelines
Author’s Edition
1.1.12
SL26-0018-06
Note:
Before using this information and the product it supports, read the information in “Notices” on page 918.
| This edition, SL26-0018-06, applies to CMS/TSO Pipelines Version 1 Release 1 Modification Level 12 sublevel 06 and to all subsequent releases
| and modifications of this product until otherwise indicated in new editions or Technical Newsletters.
| Changes or additions to the text and illustrations are indicated as described in “Revision Codes” on page xxvi.
Copyright International Business Machines Corporation 1986, 2010. All rights reserved.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Table of Contents
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviii
What Is CMS/TSO Pipelines? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviii
Who Is CMS/TSO Pipelines for? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Skills Expected . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
What this Book Contains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Before Reading this Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
How to Use this Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
: When Viewing this Book with Adobe Acrobat . . . . . . . . . . . . . . . . . . . . . xxii
: Web Links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxii
Additional Information, Download Site . . . . . . . . . . . . . . . . . . . . . . . . . xxii
Syntax Notation and Typography . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxii
Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
Stage Separator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
Supported Operating Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
: VM Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
z/OS Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiv
Compatibility with z/VM and Virtual Machine/Enterprise Systems Architecture . . . xxiv
| Significant Documentation Fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxv
How to Send Your Comments to IBM . . . . . . . . . . . . . . . . . . . . . . . . . . xxv
Revision Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxvi
Part 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Editing Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Using FMTP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Using SC/SCM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Issuing the PIPE Command from a FILELIST Panel . . . . . . . . . . . . . . . . . . . 27
Sample Pipelines and REXX Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Compatibility Between TSO Pipelines and CMS Pipelines . . . . . . . . . . . . . . . . 28
Chapter 11. Accessing and Maintaining Relational Databases (DB2 Tables) . . 136
sqlselect—Format a Query . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Creating, Loading, and Querying a Table . . . . . . . . . . . . . . . . . . . . . . 137
Using spec to Convert Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
About the Unit of Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
Using Multiple Streams with sql Stages . . . . . . . . . . . . . . . . . . . . . . . 141
Using Concurrent sql Stages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
CMS Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Obtaining Help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
! Chapter 19. CMS/TSO Pipelines Built-in Programs supporting Data Spaces . . . 217
Chapter 22. Scanning a Pipeline Specification and Running Pipeline Programs 242
Pipeline Scanner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
Pipeline Dispatcher . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
States of a Stage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
Commit Level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
Reading, Writing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
Delaying the Record . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
Device Drivers that Wait for External Events . . . . . . . . . . . . . . . . . . . . 248
Return Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
¡ Appendix E. Generating and Using Filter Packages with CMS/TSO Pipelines . . 894
¡ Note for MVS Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 894
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 894
: Specifying Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 895
Contents of a Filter Package . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 896
Glue Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 896
Entry Point Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 896
FPLEPTBL—Generate Entry Point Table Object Module . . . . . . . . . . . . . 897
Message Text Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 897
¡ FPLMSGTB—Generate Message Text Table Object Module . . . . . . . . . . . . 898
Keyword Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 899
FPLKWDTB—Generate a Keyword Table Object Module . . . . . . . . . . . . . 899
Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 900
: Generating a Sample Type-1 Filter Package . . . . . . . . . . . . . . . . . . . . . . . 900
Appendix F. Pipeline Compatibility and Portability between CMS and TSO . . 902
¡ TSO Commands Supplied with TSO Pipelines . . . . . . . . . . . . . . . . . . . . . 902
¡ FPLRESET . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 902
¡ FPLDEBUG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 902
¡ FPLUNIX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 903
¡ Using the PIPE Command from Unix System Services . . . . . . . . . . . . . . . . 903
Pipeline Specifications—The PIPE Command . . . . . . . . . . . . . . . . . . . . . . 903
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 918
Programming Interface Information . . . . . . . . . . . . . . . . . . . . . . . . . . . 918
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 919
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 920
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 925
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 927
Preface
This book has a dual purpose: to introduce new users to CMS/TSO Pipelines, and to
provide reference information for all CMS/TSO Pipelines users.
CMS/TSO Pipelines provides a CMS and TSO command, PIPE. The argument string to the
PIPE command is called a pipeline specification. PIPE selects programs and “bolts” them
together in a pipeline to pump data through them. The pipeline module has a built-in
library of programs that can be called in a pipeline specification; these programs interface
to z/OS and CP/CMS, and perform many utility functions. For example, read a file, select
particular records, reformat each record, and display the result on the terminal; CMS/TSO
Pipelines takes the chores out of this task because it has utility functions to read files and
write to your terminal. It might even have programs to perform the selection and editing
you want, but if it does not, all you do is write a program to complement the built-in
programs rather than start from scratch.
CMS/TSO Pipelines users issue pipeline commands from the terminal or in EXEC proce-
dures; they can write programs in REXX to augment the programs built into CMS/TSO
Pipelines. The PIPE module can also run as a job step in z/OS batch.
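By way of illustration (the file name and mode are invented for this sketch), a pipeline that displays the first ten lines of a file on the terminal might be issued as:

```
pipe < profile exec a | take 10 | console
```

Here < reads the file, take 10 selects the first ten records, and console writes them to the terminal.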
Programmers often find that CMS/TSO Pipelines helps them write better programs
faster and cheaper. Access to files and devices is greatly simplified; programs are
reused without change. Of course, CMS/TSO Pipelines does not make a sloppy
programmer better overnight, but pipethink makes it easy to break a complex task into
smaller ones and thus reduce complexity.
Toolsmiths write tools: programs to help programmers and users be more productive
when pursuing their business objectives. Toolsmiths have a field day with CMS/TSO
Pipelines; it takes the toil out of writing CMS and TSO tools. The toolsmith concen-
trates on the real problem instead of, for instance, how to access files most efficiently.
There is an avalanche effect when the toolsmith makes more and better tools for the
programmer who, in turn, writes more functions for end users!
Skills Expected
You should be familiar with the timesharing system on which you are going to use
CMS/TSO Pipelines: CMS is described in the z/VM: CMS User’s Guide; TSO is described in
TSO Extensions Version 2 User’s Guide, SC28-1880. Some experience with REXX
(described in the z/VM: REXX/VM Reference and the TSO Extensions Version 2 REXX
Reference, SC28-1883) is recommended, at least to the point of writing simple command
procedures. Some CMS or TSO skill, but no (systems) programming expertise is required to
understand the concepts of CMS/TSO Pipelines and use most of the built-in programs.
Some built-in programs, however, expose operating system interfaces that require an
understanding of device architectures or the format of data sent to a device.
TSO users are sometimes expected to perform a mental transformation when an example is
explained in CMS terms. This transformation is often a matter of different file naming
conventions, or of substituting ISPF/PDF for XEDIT as the editor being used; CMS/TSO
Pipelines is designed to provide a common set of functions for the two timesharing
systems.
Part 2, “Task Oriented Guide” on page 21 describes how to perform related functions.
Chapter 3, “Where Do I Start?” on page 22 explains how you can verify that CMS/TSO
Pipelines is installed; it gives hints on how to obtain help; and it describes some tools that
may be useful when creating command procedures that use CMS/TSO Pipelines.
Chapter 4, “Building a PIPE Command” on page 29 explains how to connect the pipeline
to your terminal and to your files; it explains how to issue commands and process their
responses in the pipeline; and it shows many examples of built-in programs that filter data.
Chapter 5, “Using Multistream Pipelines” on page 74 explains how to specify a network
of interconnected pipelines with programs that support more than a single input and output
! stream. Chapter 6, “Processing Structured Data” on page 91 describes how to refer to
! data in records symbolically rather than by column, word, or field number. Chapter 7,
“Writing a REXX Program to Run in a Pipeline” on page 97 explains how to write a
REXX program to process data (a REXX filter). Chapter 8, “Using Pipeline Options” on
page 120 explains how to specify options that control the pipeline itself. Chapter 9,
“Debugging” on page 124 explains how to cope with trouble in the pipeline (the data
plumber’s guide to blocked drains). Chapter 10, “Pipeline Idioms—or—Frequently Asked
Questions” on page 127 explains some pipeline idioms and annotated answers to some
frequently asked questions.
Part 3, “Specialised Topics, Tutorials” on page 135 contains descriptions that may not all
be relevant in your installation. It also contains a tutorial for specs, to which you might
wish to pay particular attention. Chapter 11, “Accessing and Maintaining Relational Data-
bases (DB2 Tables)” on page 136 explains how to access relational databases and process
the result of a query. Chapter 12, “Using CMS/TSO Pipelines with Interactive System
Productivity Facility” on page 143 explains how to access Interactive Systems Productivity
Facility to get or set function pool variables and lines in tables. Chapter 13, “SPOOL Files
and Virtual SPOOL Devices on VM” on page 147 explains unit record equipment
! and SPOOL files on VM/CMS. Chapter 14, “Using VMCF with CMS Pipelines” on
! page 154 describes how to build a VMCF server. Chapter 15, “Event-driven Pipelines in
Clients and Servers” on page 158 explains the use of pipelines in client/server applications.
Chapter 16, “spec Tutorial” on page 163 explains the workings of specs in easy steps.
: Chapter 17, “Rita, the CMS Pipelines Runtime Profiler” on page 205 introduces the CMS
: Pipelines performance tool. Chapter 18, “Using VM Data Spaces with CMS Pipelines” on
! page 207 describes how to use data spaces with CMS/TSO Pipelines. Chapter 19,
! “CMS/TSO Pipelines Built-in Programs supporting Data Spaces” on page 217 introduces
! built-in programs that support ALET operands.
Part 4, “Reference” on page 219 is the formal specification for the parts of CMS/TSO
Pipelines seen by the user at the terminal and by the REXX programmer. Chapter 20,
“Syntax Notation” on page 220 explains how to read syntax diagrams. Chapter 21,
“Syntax of a Pipeline Specification Used with PIPE, runpipe, ADDPIPE, and CALLPIPE”
on page 234 defines the syntax of the PIPE command, the ADDPIPE pipeline command, the
CALLPIPE pipeline command, and the input to the runpipe built-in program. Chapter 22,
“Scanning a Pipeline Specification and Running Pipeline Programs” on page 242 describes
how the pipeline specification parser and the pipeline dispatcher go about processing a
pipeline specification. Chapter 23, “Inventory of Built-in Programs” on page 250
describes the programs that are supplied with CMS/TSO Pipelines; it describes the syntax
of the argument string as well as the operation of the program. Chapter 24, “spec
Reference” on page 692 describes all features of specs. Chapter 25, “Pipeline
Commands” on page 723 describes pipeline commands. Pipeline commands are processed
by the default command environment in pipeline filters that are written in REXX.
Chapter 26, “Message Reference” on page 746 lists CMS/TSO Pipelines messages in
numerical order and explains what they mean. Chapter 27, “PIPMOD Command (CMS
Pipelines only)” on page 834 describes the functions performed by the PIPMOD command
(the main pipeline module). Chapter 28, “Configuring CMS/TSO Pipelines” on page 839
describes CMS/TSO Pipelines configuration variables, which control the actions CMS/TSO Pipelines takes.
Part 5, “Appendices” on page 847 contains miscellaneous information that does not fit
into the previous parts of this book. Appendix A, “Summary of Built-in Programs” on
page 848 contains the synopses for the built-in programs, ordered by keywords; it can be
useful when looking for a program to perform a particular function. Appendix B,
“Messages, Sorted by Text” on page 868 contains a reference to message numbers in the
order of their text. Appendix C, “Implementing CMS Commands as Stages in a Pipeline”
on page 888 describes how some CMS commands can be formulated as pipeline
specifications; it shows device drivers and filters that can be used to accomplish the equiv-
alent function of a CMS command. Appendix D, “Running Multiple Versions of CMS
Pipelines Concurrently” on page 891 describes how CMS Pipelines initialises itself when
the first PIPE command is issued in a CMS session. It also explains how different versions
of CMS Pipelines can be active concurrently in a virtual machine. Appendix E, “Gener-
ating and Using Filter Packages with CMS/TSO Pipelines” on page 894 describes filter
packages and explains how to generate them. Appendix F, “Pipeline Compatibility and
Portability between CMS and TSO” on page 902 describes compatibility and portability
between the CMS and z/OS environments. Appendix G, “Format of Output Records from
runpipe EVENTS” on page 908 contains information useful for the authors of PIPEDEMO,
RITA, and others who process the detailed trace of CMS/TSO Pipelines operation that is
produced by runpipe EVENTS.
New users of CMS and CMS/TSO Pipelines should read Chapter 2, “A Walk Through a
Pipeline” on page 7 and progress to the task-oriented guide.
Experienced CMS users may find the introduction too slow; go directly to Part 2, “Task
Oriented Guide” on page 21 instead.
Having mastered the topics explained in Part 2, you might like to peruse selected chapters
of Part 3, “Specialised Topics, Tutorials” on page 135; and you might want to familiarise
yourself with the tutorial on specs.
Though Part 4 is intended for reference use, you should read the first chapter if you find
that the syntax notation is not intuitive. The information in Part 4 is available in the help
library, which you can access from your terminal.
: Intra-book links include references to CMS/TSO Pipelines terms and concepts. For
: example, all mention of a built-in program also includes a link. Try this: console.
: Thus, if you are wondering what something means, try hovering the mouse pointer over it
: and see whether it offers a link. Be sure to use the hand tool; other tools may not offer
: the links.
: Web Links
: The default for Acrobat is to include web links into the document you are viewing
: (editing, actually), but this is unlikely to be what you want to happen when viewing this
: document.
: Select Edit, Preferences, Web capture and ensure that the “Open Weblinks” drop-down is
: set to “In Web Browser”.
| From this site you can download the latest CMS Pipelines “Runtime Library”, also known
| as the “Field Test Version” and the “Princeton Distribution”.
: The CMS Pipelines discussion list is also hosted by Marist College. To join the list, send
: mail with a subject line that contains “SUBSCRIBE CMS-PIPELINES” to:
: [email protected]
A reference to a built-in program is written in italics type, for instance spec. It is in lower
case, even at the beginning of a sentence. A keyword option is set in small capitals: for
instance, ANYOF.
A complete command is written in double quotes in Gothic type: for instance, “pipmod
msglevel 15”.
Examples
This book contains many example terminal sessions and command procedures. Examples
are set in monospaced Gothic type.
The first position of a line of an example terminal session has a character to show whether
the line is typed by the user or is a system response; this character is not part of the line
you see or write on your terminal. Commands and input lines written on the terminal, by
the user, have a blank (space) in the first column; responses have an arrowhead. Most
CMS examples were done on CMS in line mode; they show the PIPE command in front of
the pipeline specification.
| Though not identified, 388 of the CMS examples were run while formatting this book.
| Figure 265 on page 163 shows the version of CMS/TSO Pipelines used. What you see is
| what it does, even if an error should slip by the author.
Other examples are written as fragments of REXX programs; you can tell by the comment
(/* comment */) on the first line. This indicates that the program is written in REXX.
Stage Separator
The solid vertical bar (|) is an important character when writing CMS/TSO Pipelines
commands; it indicates the end of the specification of one program and the beginning of
the next. This character is also used as the logical OR operator in REXX and PL/I. Not all
terminals display the code point (X'4F') as a solid vertical bar; refer to “Find the Stage
Separator on Your Terminal” on page 23 for more information.
: VM Environment
: CMS Pipelines supports CMS on these VM products:
: z/VM Version 3 (5654-A17)
: z/VM Version 4 (5739-A03)
: z/VM Version 5 (5741-A05)
| z/VM Version 6 (5741-A07)
sql was developed with SQL/DS Version 1 Release 3.5, IBM Program Number 5748-XXJ; it
has been tested with Version 2 Releases 1 and 2, IBM Program Number 5688-004,
Version 3 Release 1, IBM Program Number 5688-103, and IBM Database 2 Server for VSE
& VM Version 5. ispf was developed for Version 2 Release 2 of Interactive System
Productivity Facility, IBM Program Number 5664-282.
When CMS Pipelines waits for an external event, it uses thread suspend/resume when it
senses that some other application has entered multitasking mode, so as not to lock out
such an application.
z/OS Environment
¡ TSO Pipelines 1.1.12 supports z/OS Enterprise Systems Architecture and z/OS with JES2 or
JES3.
Level 1.1.9 of TSO Pipelines shipped under the name BatchPipeWorks* as part of
BatchPipes/MVS and subsequently SmartBatch/MVS, IBM Program Number 5655-A17,
but those products are no longer marketed by IBM.
Prior to CMS Pipelines level 1.1.10 and Virtual Machine/Enterprise Systems Architecture
Version 2 Release 3.0, the versions shown in Figure 1 are considered equivalent, which
means that pipelines written according to the documentation of one of the environments
will also work with the other one. There may be undocumented aspects that do not work
in equivalent environments; traditionally, CMS Pipelines would quietly accept undocu-
mented behaviour that is compatible with some past specification.
Built-in programs that are described in this book are also available in previous releases of
Virtual Machine/Enterprise Systems Architecture unless revision codes indicate new func-
tion added since 1.1.10.
: Notes:
| 1. CMS Pipelines 1.1.10 is carried forward unchanged into z/VM versions 3, 4, 5, and 6.
| However, each release of z/VM has increased its sublevel; thus, for example, 5.4
| reports level 110A002A.
: 2. CMS Pipelines 1.1.11 and later are not shipped with z/VM. Function not present in
| z/VM is indicated by an exclamation point or an inverted exclamation point as the
: revision code in this book.
3. You cannot report errors with the Runtime Library (Field Test Version) through
| normal IBM program support procedures. Even so, IBM programming service may
| ask you to recreate a problem detected in CMS using the Field Test Version to help in
| problem determination.
If you would like a reply, be sure to also include your name, postal or e-mail address,
telephone number, or FAX number.
When you send information to IBM, you grant IBM a nonexclusive right to use or
distribute the information in any way it believes appropriate without incurring any obli-
gation to you.
Revision Codes
This book uses several revision codes (characters or symbols in the left margin) to show
changes from previous editions.
Summary
In essence, CMS/TSO Pipelines is a command, PIPE. The PIPE command interprets its
argument string as a pipeline specification, which is a list of programs to run. A program
has a name and often an argument string. A solid vertical bar (|) marks the end of the
specification of one program and the beginning of the specification of the next. Programs
are either built into CMS/TSO Pipelines or written by the user (usually in REXX). There is
a connection from the output stream of the program to the left of the vertical bar to the
input stream of the program on the right of the vertical bar. The order of programs in a
pipeline specification defines how data are passed from one program to the next: data are
pumped from left to right in a pipeline.
The pipeline specification is scanned by CMS/TSO Pipelines, and the programs are started.
A particular program can be used several times in a pipeline; each instance of a program
in a pipeline is called an invocation. An invocation of a program is also called a stage.
Each stage runs independently of all other ones; there is a pipeline dispatcher to coordinate
it all and make sure that data flow through the pipeline. Programs obtain data from the
pipeline dispatcher or from a host interface (an interface to the underlying operating
system); they deliver data to the pipeline dispatcher or a host interface. Programs
accessing a host interface are called device drivers because the interface often reads or
writes a device or file. Programs that do not interact with the host are called filters; they
process data in the pipeline in some particular way.
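As a sketch (the file names are invented), a pipeline with a device driver at each end and a filter between them might be:

```
pipe < input data a | locate /error/ | > errors data a
```

< and > are device drivers that read and write files; locate is a filter that selects the records containing the string between the delimiters.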
Many Streams
CMS/TSO Pipelines supports an unlimited number of interconnected pipelines. Multiple
pipelines are specified in a PIPE command. A program using more than one stream is
declared with a label on its primary pipeline; subsequent references to the label specify
where additional streams are connected to surrounding stages.
Pipelines are added to the set of running ones in two ways. A program can call a subrou-
tine pipeline to process its input data or generate output data, or both; or a program can
transfer a stream to a new pipeline that is added to the set and runs in parallel with the
current pipeline set.
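As a sketch (again with invented file names), a two-pipeline specification in which the secondary output stream of locate (the records that do not contain the string) is connected through the label l:

```
pipe (end ?) < input data a | l: locate /error/ | > errors data a ? l: | > other data a
```

The end character declared in the option (end ?) separates the two pipelines; the first occurrence of l: declares the label, and the second occurrence connects locate’s secondary output stream to the > stage in the second pipeline.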
Writing Programs
Though you can accomplish many tasks with built-in programs and combinations of
built-in programs (cascades of filters), there will no doubt be times when you need a func-
tion for which there is no program readily available. You (or someone else) must then
write a program to perform this function. Such programs are often written in the REXX
language. They can be compiled with the REXX/370 Compiler or run by the REXX Inter-
preter.
Pipeline programs in REXX read and write the pipeline using pipeline commands; other
pipeline commands add pipelines to the set of running pipelines, run a subroutine pipeline,
and perform many other functions.
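A minimal sketch of such a REXX filter, using the pipeline commands readto and output (return code 12 is the end-of-file indication):

```rexx
/* NULL REXX -- copy records from input to output unchanged */
signal on error            /* leave the loop when a pipeline command fails */
do forever
   'readto record'         /* read a record from the primary input stream  */
   'output' record         /* write it to the primary output stream        */
end
error: exit RC*(RC<>12)    /* report end-of-file (RC 12) as return code 0  */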
A Pipeline Example
Assume you are giving a presentation and you wish to know the number of words in your
manuscript. The manuscript is stored in a file in the format used by Script/VS. Figure 2
shows a way to do this.
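Figure 2 is not reproduced in this extract; from the description that follows (the file name SEAS88 SCRIPT is taken from the TSO discussion later in this chapter), the command it shows has this shape:

```
pipe disk seas88 script | count words | console
```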
The first line shows the PIPE command; the second line is the response from CMS/TSO
Pipelines; the third line is the CMS ready message indicating that the command has
completed without error. The arrowhead on the left side of the last two lines is a conven-
tion in this book to indicate that these lines are written by the system; the arrowheads are
not displayed on your terminal when you issue CMS or TSO commands. The blank in front
of the PIPE command indicates that this line is typed on the terminal by the user.
The command specifies that three programs are to be run: one to read a file (disk), one to
count words (count), and the last one (console) to display the result on your terminal. The
three programs are separated by two solid vertical bars. Because a program in a pipeline
is called a stage, and because the bars separate pipeline programs, the solid vertical bar is
also called the stage separator.
2. CMS/TSO Pipelines scans the argument string to see which programs should be
selected. It finds three; via an internal table, it resolves the names to built-in programs
residing within the pipeline module.
3. The pipeline specification has no errors; thus, the programs are run. Conceptually, the
three programs are run in parallel, but in reality, control must pass from one to the
other. The pipeline dispatcher takes care of this; you seldom need to concern yourself
with the way it is done.
4. Anyhow, the pipeline dispatcher starts the leftmost program first: disk is called.
5. disk reads the file and calls the pipeline dispatcher to write one line at a time to its
primary output stream; the pipeline dispatcher looks for someone to read the line.
6. count is started. It calls the pipeline dispatcher to read a record. The two sides of the
stage separator are now in a state where a line can be passed from one to the other;
the program on the left is writing a record and the program on the right is reading.
7. The pipeline dispatcher passes the line from disk to count, makes a note that disk has
now written the line, and runs count again.
8. count counts the number of words in the line, discards the line, and calls the pipeline
dispatcher to read another one.
9. The pipeline dispatcher finds that there is no line being written by disk. It suspends
(stops running) count for a while and resumes disk which reads another line from the
file and writes it to the pipeline, and so on.
10. Eventually, CMS reflects end-of-file on a call to read from the file. disk then returns to
the pipeline dispatcher from the call in step 4.
11. count is waiting for an input record, but there are no more. The pipeline dispatcher
resumes count with a return code to indicate end-of-file. count now writes a line
containing the count of words.
12. The pipeline dispatcher finds console, starts it, and passes the line to console, which
writes the result to the terminal. console reads another line from the pipeline,
resuming count.
13. count returns.
14. The pipeline dispatcher reflects end-of-file to console, which also stops.
15. All programs are now complete. The pipeline dispatcher returns to CMS (or TSO) with
a return code, in this case 0.
Returning to the command in Figure 2 on page 3, note that the first two programs have
arguments to indicate what they should do. The first one (disk) needs to know the
DSNAME (z/OS) or the name and type of the CMS file to read; it looks for the file on all
accessed minidisks and directories. count has a keyword option (WORDS) to make it count
words; it counts bytes and lines when other options are used.
The response of a number with no accompanying text to explain it may seem a bit terse at
first. There are good reasons why count writes a number without, for instance, “words”
after it; it is simpler to add text than to remove it.
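For instance, one way to add such text (a sketch using the spec built-in program; the file name is the one assumed earlier):

```
pipe disk seas88 script | count words | spec 1-* 1 /words/ nextword | console
```

spec copies the number produced by count and appends the literal “words” after it, separated by a blank.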
You can tell from the ready message that this particular example was run on CMS. Had it
been run on TSO, the command would have read the member SEAS88 from the data set
allocated to DDNAME SCRIPT. (There are other ways to read z/OS data sets.) And the
ready message would have been all capitals with no trailing semicolon.
The most important observation has been kept to the end. The first example shows the
independence of the programs in the pipeline. When first in a pipeline, disk reads a file, no
matter what is put after it in the pipeline to process the data. Likewise, console is not
choosy about what it writes to your terminal; there could be anything in between the two
programs to process a file and type the result. So what happens if you put nothing
between disk and console? Well, then you have the CMS TYPE command. Looking at it
backwards, the sample also shows how easy it is to adapt a pipeline to some other needs:
Think of a CMS command that almost does what you want.
Refer to Appendix C, “Implementing CMS Commands as Stages in a Pipeline” on
page 888 to see how it is done with a pipeline.
Add filters to the pipeline to tweak it to perform your task.
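As a sketch of the TYPE equivalent just mentioned (the file name is invented):

```
pipe disk profile exec a | console
```

With nothing between them, disk and console simply display the file, much as the CMS TYPE command would; inserting a filter between the two stages adapts the pipeline to a new task.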
Another Example
Your SEAS presentation is a success; you are asked to publish a book on the subject. To
get an indication of its size you wish to count the number of words in the book which you
have almost written. You cannot use the command shown in Figure 2 on page 3 directly
because the book is made from several files. CMS users store collections of files in a
different way than TSO users do; let us treat the two cases separately.
The approach in Figure 4 is to ask CMS which Script files on disk H have a UG in the
name, read the contents of the files into the pipeline, and then count the number of words
in the aggregate of the files.
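The body of Figure 4 does not reproduce here; a sketch of the command it describes might look like this. (The LISTFILE pattern for Script files with UG in the name, and disk mode H, are taken from the surrounding text; the exact pattern is an assumption.)

```rexx
pipe cms listfile *ug* script h | getfiles | count words | console
```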
The command is a pipeline that uses the last two programs from Figure 2 on page 3 to
count the number of words; the first two stages get the contents of the required files.
cms has an argument string that looks remarkably like a CMS command. It runs the CMS
command to list files and traps the response CMS would normally write to the terminal; in
this case it is the list of files in your book.
This list is passed to getfiles, which reads the contents of each of the files into the pipeline.
You can visualise this as replacing the name of the file with the contents of the file.
This pipeline uses listpds to read the list of members from the PDS directory of the data set
allocated to DDNAME UGSCR, which we assume contains the Script files for the book. The
output records contain all information present in the directory; the member name occupies
the first eight columns; chop truncates the record after the member name.
Thus, the input to readpds contains the names of all members of the PDS. readpds reads
members, one at a time, from the PDS into the pipeline; thus, the output from readpds
contains all the files in the book. count counts the words in the aggregate and console
displays it on the terminal.
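A sketch of the TSO pipeline just described, assuming the DDNAME UGSCR is passed as the argument to both listpds and readpds:

```rexx
pipe listpds ugscr | chop 8 | readpds ugscr | count words | console
```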
We walk you through the development of a small application, explaining things and
pointing out important considerations on the way. The commands and programs shown
represent what is required to perform the tasks at hand; no attempt is made to give a
complete description of all features of each.
TSO users should not be put off by the clear VM/CMS bias of the examples in this chapter.
The aim is to show how to craft a pipeline to perform a task and how to adapt it by
stepwise refinement. The cp device driver used in the following example is simply a
convenient source of data which are stored in a file to be processed at our leisure. The
following examples could have used a report file as their input just as well. The point is
that after the cp device driver has written the command response into the pipeline, the
following stages do not “know” and certainly do not care if their input data come from CP,
from a file, or, indeed, from the moon.
The examples you are going to see show how the response to a CP command is processed
to provide information not directly available from standard CP and CMS commands.
Though few CMS/TSO Pipelines users are expected to be interested in the application for
its own sake, it is hoped that the examples shown illustrate the process of developing a
pipeline application.
The CP command “query names” displays information about users logged on to your
system. CP writes the response to your terminal screen when you issue the command
directly to CP, or via CMS which forwards it to CP; the response can also be captured and
processed in the pipeline. Figure 6 on page 8 shows how the command is issued with
CMS/TSO Pipelines; the response is written to the file LOGGED USERS A and displayed on
your terminal.
Figure 6. CP Command with Response Logged to Disk and Written on the Terminal
pipe cp Q N | > logged users a | console
ICSTAT3 - DSC , FINNS - DSC , EPLDICT - DSC , EPLCS - DSC
DKEMERG - DSC , CPO - DSC , CISLTLX - DSC , BUYDBM - DSC
ARKIV - DSC , SQLDBA3 - DSC , YVETTE - DSC , VM3ACCT - DSC
TRT - DSC , TORSTEN2 - DSC , VANILLA - L001, IMPOSIT - DSC
DRIFTNYT - DSC , VMBCKUP3 - DSC , VMAVAIL - DSC , VMASMON2 - DSC
VMASMON - DSC , SQLMON - DSC , SNAOPER - DSC , RSCSPC - DSC
RSCS - DSC , PVMB - DSC , PVM - DSC , NETOPER - DSC
NET - DSC , ISPVM - DSC , VMOPER - 05A0, COLPRT2 - DSC
AP2SVP - DSC , VMAUTO - DSC , VMCLASSI - DSC , SMART - DSC
DATAMOVE - DSC , DIRMAINT - DSC , SHRMGR - DSC , AUTPWMON - DSC
AUTOOPER - DSC , VMAUDIT - DSC , AUTOLOG1 - DSC , OPERATOR - DSC
VMTODDY - DSC , VMBSYSAD - DSC , CARLCH - L003, JOHLJUNG - L002
TOMMYJ - DSC , EPC - L000, SCHEEL - DSC , KURTKR - DSC
OPRATNSA - DSC , SEN - DSC , POE - DSC , TOMS - DSC
SPCPRO - DSC , SPCENT - DSC , SCRSRV - DSC , QASTAT - DSC
PESERV - DSC , NPSM - DSC , JOHN - 05A2
Ready;
What happened? The CMS command PIPE is issued. It scans its arguments and finds two
solid vertical bars separating the specifications of three programs to run in a pipeline.
Such programs pass data to each other via a standard interface in what is called the pipe-
line dispatcher in PIPE. A program running in a pipeline is called a stage. Of the three
stages, the first one, cp, issues its argument string as a command to CP and writes the
response to what is called the primary output stream, one record for each line in the
response. > replaces the contents of a file with records read from its primary input stream;
on CMS, the arguments specify the file name, type, and mode; on TSO, and with this partic-
ular format of the parameter list, the member LOGGED is replaced in the PDS allocated to
the DDNAME USERS (possibly not the best choice of names, but it emphasises the portability
between CMS and TSO). > also copies the file to the primary output stream; console reads
it and displays it on the terminal.
Figure 7 shows the layout of the pipeline in Figure 6. Each block represents a stage; the
line between them represents the connection between output and input streams.
Recall that each stage communicates with the pipeline dispatcher; a stage does not call a
neighbour directly. This simplifies the programming of a pipeline program considerably;
for instance, the result of the CP query can be inserted into a DB2 database directly from
the pipeline without ever touching a file. Consider that cp was written before SQL/DS was
even announced; but once the sql driver was written for CMS/TSO Pipelines, all pipeline
programs acquired support for SQL/DS (now DB2 Server for VM) without change to a single
one of them. Such is the power of the pipeline concept.
The first thing one might ask is, who is connected? CMS/TSO Pipelines has many
programs to select lines with or without a string, label, or what not, but the response in
Figure 6 on page 8 needs a bit of massaging before it is in a form where the connected
users can be selected. Selection stages select complete lines; the response in Figure 6 on
page 8 is four abreast and must be split up with a line for each virtual machine.
Use split to display each virtual machine on a separate line. take selects the first 5 records
to limit the number of records shown. Figure 8 shows the command and the response.
< reads the test case stored previously and writes it into the pipeline. The comma (,)
after split makes it split records at commas; the commas are discarded. Thus, lines split
off have a leading blank, which explains why the response is ragged. Use strip to remove
leading blanks as shown in Figure 9; as used here, strip also removes trailing blanks, but
they are a bit more difficult to see.
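Assuming the test case was stored in LOGGED USERS A as in Figure 6, the two commands behind Figures 8 and 9 would read roughly like this; take limits the output to the first five records:

```rexx
pipe < logged users a | split , | take 5 | console
pipe < logged users a | split , | strip | take 5 | console
```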
Now you have a line for each virtual machine logged on. The first eight bytes contain the
name of the virtual machine; the name is followed by a hyphen (-). The last word is the
address of the terminal from which the virtual machine is logged on; DSC is displayed if
the virtual machine is disconnected. A take stage was added to the pipeline to limit the
output for this figure.
Removing lines where the terminal is shown as DSC excludes disconnected virtual
machines from the list; presumably what is left is a list of connected virtual machines. So,
Figure 10 on page 10 shows how to see which virtual machines are connected:
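A sketch of such a command, reading the stored test case rather than issuing the live query:

```rexx
pipe < logged users a | split , | strip | nlocate /- DSC/ | console
```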
So far, you have seen how to issue a CP command and process its response, storing it in a
file, and displaying it on your terminal. You have seen how to process the response and
you have fine-tuned a suite of filters to create a command to perform a function not readily
available with standard commands.
It is indeed the CMS/TSO Pipelines way to write long pipelines with many stages, but
typing such long commands on the keyboard is not the way to do it. Store pipeline
specifications in REXX programs instead. Two types of REXX programs are now presented:
subroutine pipelines and “normal” CMS command procedures (EXECs).
Subroutine Pipelines
Today you wish to see who is connected, but maybe some other day you would like to see
who is connected at a particular control unit; the sequence of split, strip, and nlocate
seems to be something you might do often when processing the response to CP commands.
The program in Figure 12 on page 11 is a REXX program; it has a comment on the first
line to indicate that it is indeed a REXX program. However, it differs from normal
command procedures (EXECs) in two respects:
CMS Pipelines resolves the program automatically when the file type is REXX rather
than EXEC; TSO Pipelines resolves the program from the data set allocated to FPLREXX
rather than SYSEXEC. The program is not stored as a normal EXEC because the
commands in the program are pipeline commands, not CMS or TSO commands; having
a different file type makes it more difficult to use a program in the wrong context.
The command itself is probably not like any CMS or TSO command you have seen.
A subroutine pipeline normally contains the single pipeline command CALLPIPE with argu-
ments, followed by the REXX instruction exit to specify the return code from the program
as the return code from the subroutine pipeline.
Figure 12. CNCTD REXX: Subroutine Pipeline to Select Connected Virtual Machines
/* Select connected virtual machines */
'callpipe *:| split , | strip | nlocate /- DSC/ |*:'
exit RC
The command itself probably looks a bit unfamiliar, but you recognise the three programs
used to select the connected virtual machines. The asterisk followed by a colon (*:) is
called a connection. It is put as a stage at the beginning and the end of the subroutine to
indicate that the caller’s input is connected to the left side of the subroutine, and that the
output of the subroutine is to be written to the caller’s output. The command is said to be
in landscape format2 because it is a single line.
Figure 13 shows the layout of the pipeline when it is started and after the subroutine pipe-
line is active.
Use count LINES to see how many users are connected. count reads all input lines and
then writes a single number to its output. Figure 14 shows how.
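With the subroutine pipeline cnctd from Figure 12, the command might read:

```rexx
pipe < logged users a | cnctd | count lines | console
```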
2 This metaphor is from painting: landscapes are wider than they are tall, while portraits are taller than they are wide.
Often the address of the terminal is not interesting. Use chop to truncate records.
Figure 15 on page 12 shows just the names of the users connected. Each line is 8 bytes
long with trailing blanks you cannot see.
The sequence of cnctd and chop seems to be so useful that you might wish to write a
subroutine to perform this function. You could extend CNCTD REXX to include the chop
filter and save it under another name, but this is not recommended because you would
have two copies of the subroutine to maintain. Subroutine pipelines can be nested to any
depth. Figure 17 shows CNCTDN REXX which writes just the names of connected users.
Figure 17 shows the subroutine pipeline written in portrait format with a line for each
stage; there is a comma (,) after each stage to indicate to REXX that the command is
continued on the following line. There are comments at the right of the line; REXX can
cope with comments after the comma for continuation.
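A sketch of CNCTDN REXX consistent with the description, shown here in landscape form for brevity; it nests cnctd and appends chop:

```rexx
/* Select connected virtual machines; keep only the names */
'callpipe *:| cnctd | chop 8 |*:'
exit RC
```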
Two XEDIT macros were used to help write the program in Figure 17. The pipeline is
written in landscape form, the cursor is moved to the prefix area for the line, and the prefix
command FMTP (format pipe) is issued to transform the pipeline specification to portrait
form. Comments are then appended to the end of each line; the command SC (SCM in
Virtual Machine/Enterprise Systems Architecture) is issued from the prefix area of the first
line to align comments on the right; write a number after SC to shift comments on several lines at
the same time. The XEDIT macros can also be issued from the command line, in which
case they operate on the current line.
You can also use command procedures when developing the pipeline; add stages to a PIPE
command in a command procedure as you fine-tune your pipeline specification.
As an example of a command procedure, CNTLOG EXEC counts the number of users logged
on and connected to a terminal. It displays the result with a few additional words. The
EXEC uses the “live” CP query and the subroutines developed earlier to process the result of
a query. Figure 19 shows the EXEC; Figure 18 is an invocation.
You recognise the comment on the first line to indicate that the program is written in
REXX.
The first line that is not a comment (signal on novalue) tells REXX to branch to the label
novalue when referencing a variable that has had no value assigned, but there is no such
label in the program. This is deliberate to force a syntax error at the point where the
variable without a value is referenced; REXX writes the procedures active at the point of
failure when a syntax error occurs. The second instruction (address command) requests
that REXX send commands directly to the CMS command environment; this is where the
PIPE command is, so some processing time is saved in the invocation.
If you do not issue the signal on novalue instruction, REXX treats a reference to a vari-
able that has not been set as the literal string consisting of the variable’s name in upper
case. This can be handy in small simple programs, but it can be the source of subtle
errors when a subroutine defines a variable, which causes a different command to be issued
somewhere else in the REXX program. Literal constants must be in quotes when signal
on novalue is active. In Figure 19 the PIPE command is such a literal. Another reason to
write a pipeline specification as a quoted string is the abundance of solid vertical bars.
They would be interpreted by REXX as inclusive OR operators if they were not put in
quoted strings.
Most stages in the PIPE command in Figure 19 have been described already; cons is an
abbreviation of console. A few program names have an abbreviation, but most must be
spelt out.
The text around the number of logged on users is added by spec. This is a versatile
program that you are going to meet again many times. It tends to have a long argument
string; in this case the spec stage requires three lines when formatted for comfortable
reading. For each input line, spec goes through the list of items in its argument string and
performs each item once, in the order written. An item specifies a field; it can be data
from the input record, or a literal. A literal field is a delimited string: that is, between two
occurrences of a delimiter character, which cannot appear in the string itself. An input
field is a column range; *-* means the whole input record. The second part of an item
indicates where the field is placed in the output record. A number selects a specific
column; the keyword NEXTWORD (which can be abbreviated to NEXTW—it has a synonym
NWORD that can be abbreviated to NW) appends a blank and the field to the output record.
Thus, the spec stage used above puts a literal before and after the number of users logged
on.
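As a sketch, the spec stage might be appended after count lines like this; the wording of the literals is an assumption:

```rexx
pipe < logged users a | cnctd | count lines | spec /There are/ 1 *-* nextword /users connected./ nextword | cons
```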
Maybe you would like to see the IDs of the virtual machines. A single line of eight char-
acters for each is a bit unsophisticated; Figure 20 shows how to format the result of the
query with eight users per line. join joins lines; as used here, 7 lines are appended to a
line with a single blank added between each. The blank is written in a delimited string,
just like the constants in spec in the previous example.
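The join stage as described, used after cnctdn; the single blank to put between joined lines is written as a delimited string:

```rexx
pipe < logged users a | cnctdn | join 7 / / | console
```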
But what’s that? Who is LOGO05A4? Quickly store another test case:
The test case is stored; find selects the offending line to make sure it is in the new test
case. The count of lines shows that someone logged off before you could capture the new
test case, but it seems to have the data you need.
The line means that terminal 5A4 is in the state where no one is logged on, but the VM
logo is not displayed. In this state, a user can send and receive messages without being
logged on, for instance to ask the operator to call on the phone. So, as far as CP is
concerned, this represents a user even though the true identity is not known. Most of the
time, however, no one sits at the terminal; it has been left in this state after the previous
session. You do not wish to include such a terminal in the list of logged on users.
How can you exclude lines for virtual machines that have no user logged on? If your
system has no users whose IDs begin with “LOGO”, you can use nfind to exclude lines
beginning with this string as shown in Figure 22. (Because the argument to nfind has three
trailing blanks, only user IDs that are seven or eight characters long are discarded.)
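Such a command might read as follows; note the three trailing blanks after LOGO, which are part of the argument to nfind:

```rexx
pipe < logged users a | cnctd | nfind LOGO   | console
```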
But if one of the users of your system is Logodan, then this approach is too simplistic. On
VM/System Product, the user IDs to exclude are of the form LOGONxxx where xxx is the
same as the device address; on VM/High Performance Option the device address is four
characters as shown above. So, which CMS/TSO Pipelines filter excludes lines where the
right characters of the user ID are the same as the terminal address? Though pick is close,
there is no such filter; you must write one yourself.
Figure 24 shows realuser, a REXX program for VM/High Performance Option to exclude
lines for users of the class described above.
The REXX program REALUSER REXX has the same file type as a subroutine pipeline; the two
are the same as far as REXX and CMS/TSO Pipelines are concerned, though they may look
different to you.
This program iterates reading an input line into a REXX variable, testing it, and writing the
input line to the output if it does not look like a user ID of the type you wish to exclude.
The instruction do forever opens the iteration; it is closed with the end instruction.
Inside the loop, the variable in receives the contents of the next record in the pipeline each
time the command READTO IN is issued. Note that the name of the variable to receive the
value is a literal. It is important to write the name in a way where its value is not substi-
tuted by REXX. READTO sets the variable as a side effect. REXX sets the variable RC to the
return code. End-of-file is indicated by a return code 12.
The Parse instruction separates the components of the user ID. The leftmost four charac-
ters of the user ID go into one variable, the rightmost four go into another variable. The
hyphen in quotes instructs REXX to skip to the position after the next hyphen (or the end of
the variable). Finally, the address is assigned to a variable. Parse has done the hard work;
it only remains to test if the leftmost four characters are not equal to the constant “LOGO”
or if the rightmost four characters are not equal to the device address.
The input record is written to the output using the command OUTPUT unless the test fails.
Note the difference between the READTO and the OUTPUT command. OUTPUT writes its
argument string to the pipeline. You can compute an expression to write to the pipeline;
for instance, putting a timestamp in front of each record. You can also write a constant.
In this example, the input record is copied unchanged to the output if it is selected.
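The body of Figure 24 does not reproduce here; this sketch of REALUSER REXX follows the description above (the variable names are assumptions):

```rexx
/* Exclude terminals with no one logged on (VM/HPO form) */
signal on novalue
do forever
   'readto in'                      /* Read the next input record  */
   if RC \== 0 then leave           /* Return code 12 is end-of-file */
   parse var in head 5 tail 9 '-' addr .
   if head \== 'LOGO' | tail \== addr then
      'output' in                   /* Copy the selected record    */
end
exit RC*(RC\==12)                   /* End-of-file is the normal ending */
```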
Having written the function to suppress these unwanted lines, add the filter to the subrou-
tine pipeline in CNCTD REXX. Figure 25 shows the subroutine converted to portrait form.
Note that the new function is retrofitted to all uses of cnctd.
Returning to the list of logged on users in Figure 20 on page 14, do you wish it sorted
ascending instead? Figure 26 on page 17 shows the new test case sorted ascending.
cnctdn trims the terminal address from the line; it does not matter that the lines are already
split with one per user.
It is undeniably sorted, but you want the presentation to be transposed so that user IDs are
ordered ascending in the columns as you see it in Figure 27.
pad ensures that each record is 9 characters. snake formats a “page” to put the input data
into six columns. The column depth is adjusted to ensure that all columns contain at least
one line of data.
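A sketch of the command behind this display; the operand giving snake the number of columns is an assumption:

```rexx
pipe < logged users a | cnctdn | sort | pad 9 | snake 6 | console
```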
Figure 29 shows a display of the names of users logged on, based on a names file.
user2nam in Figure 30 shows how it was done. The program issues the CMS command to
look a user up in a names file, writing the first line of the response to the pipeline. It
writes a line of question marks if the NAMEFIND command should produce no output.
Thus, user2nam writes one output record for each input record.
The command NAMEFIND is issued for each input line. It instructs CMS to look in
COTTAGE NAMES for an entry describing the user ID, and to display the name of the user.
The response is trapped by CMS Pipelines and written to the output from cms. (The some-
what cryptic append literal ensures that a default is provided in case NAMEFIND produces
no response; take ensures the response is precisely one line.) Name files are described
further in z/VM: CMS Primer.
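A sketch of USER2NAM REXX consistent with the description; the NAMEFIND operands and the option naming the COTTAGE NAMES file are assumptions:

```rexx
/* Look up each user ID in a names file; one output line each */
signal on novalue
do forever
   'readto line'
   if RC \== 0 then leave           /* Return code 12 is end-of-file */
   parse var line id .
   'callpipe cms namefind :userid' id ':name (file cottage',
      '| append literal ??????????',   /* Default if no response */
      '| take 1',                      /* Precisely one line     */
      '| *:'
end
exit RC*(RC\==12)
```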
Multistream Pipelines
But you really want a display with a line for each connected user. The line should have
the terminal address followed by the user name. There does not seem to be any way to
make NAMEFIND do this, so more craftiness is required; Figure 31 shows the result.
Figure 33 on page 19 shows the multistream subroutine pipeline to make this display.
The approach is to split the line in two parts, the user ID and the terminal address. Each
part is processed by itself: the name as you have already seen; the address of the terminal
is shifted to the left. The lines are merged at the end of the subroutine pipeline.
Figure 32 on page 19 shows the topology of the pipelines. The primary pipeline is at the
top and the secondary is at the bottom. chop and spec are shown as tall stages because
they can transmit data on both pipelines. chop only reads from the primary input stream;
the secondary input stream is left unconnected. In the same way, spec reads from all input
streams but writes only to the primary output stream. The spec stage shifts the name right
to column 10 and inserts the terminal address in the first columns. strip removes leading
blanks and hyphens from the terminal address (or LU) to make it begin in column 1 when
it arrives at the secondary input stream to spec.
Pipeline commands are strings, and as such inherently linear; how is the multistream
topology transformed to an argument string?
There are two pipelines in Figure 32. The topmost is defined first; the bottom pipeline is
defined after the end character. The program USERTERM REXX is shown in Figure 33.
Figure 33. USERTERM REXX: Display User Name with Terminal Address
/* Display user name with terminal address. */
signal on novalue
'callpipe (end ?)',
'*:|', /* Input records here */
'c:chop 8|', /* Split after column 8 */
'user2nam|', /* Get the name of the user */
's:spec 1-* 10', /* Shift response right */
'select 1', /* Read other stream */
'1-* 1.8|', /* Put first */
'*:', /* To the output */
'?c:|', /* The rest of the input record */
'strip leading anyof /- /|', /* Remove - and blanks */
's:' /* Pass to spec */
exit RC
Parentheses at the beginning of the argument string to CALLPIPE contain a global option to
control how CMS/TSO Pipelines scans the argument string. This one defines the question
mark (?) to be a special character to delimit pipelines, called the end character.
chop was used in Figure 15 on page 12 to truncate records after eight characters. It
writes the truncated record (the part of it to the left) to the primary output stream; if the
secondary output stream is defined and connected, the part of the record chopped off is
written there. In this example, the first position of the record on the secondary output
stream is a blank or a hyphen, depending on how the terminal is attached. strip removes
leading blanks, hyphens, or both. The delimited string after the keyword ANYOF specifies
a list of characters to be removed from the beginning of the record. strip repeatedly
removes a leading character that is present in the enumerated list of characters; it stops at
the first character that is not in the list.
spec takes the name from the primary input stream and combines it with the device
address on the secondary input stream. The resulting line is written to the primary output
stream.
A DB2 Query
CMS Pipelines interfaces to DB2 Server for VM if your virtual machine is registered with
DB2 and you have run the SQLINIT procedure. TSO Pipelines interfaces to DB2 for z/OS in
a similar way. Records are passed between DB2 and CMS/TSO Pipelines without changing
the format of fields (for instance, integers are not made printable in a query). This gives
you complete control over the format of data going to and from the database, but on the
other hand you must worry about data formats and null values in general. Conversely,
sqlselect formats a query with column headings and converts data from internal represen-
tation to character strings. Figure 34 shows a query in one of the sample databases.
For further information, refer to Chapter 11, “Accessing and Maintaining Relational Data-
bases (DB2 Tables)” on page 136 and to sql.
IBM Manuals
You may wish to start with Virtual Machine/Enterprise Systems Architecture CMS Pipe-
lines User’s Guide, SC24-5609.
The present book covers everything you need to know to use CMS/TSO Pipelines from
your terminal and when you write REXX programs to issue pipeline commands or process
data.
“Bibliography” on page 925 lists other books you may find useful.
| Refer to “Additional Information, Download Site” on page xxii for additional pointers.
| This world is not perfect, however; you could see other responses (though that would be
| highly unlikely on CMS). Figure 36 on page 23 shows what may go wrong. Contact your
system support staff to install CMS/TSO Pipelines.
There are too many variations to list what the solid vertical bar is on other terminals and
PC terminal emulators; in all cases, it is the character used as the OR operator in a REXX
expression. The solid vertical bar has the code point X'4F', which is displayed as an
exclamation mark (!) on many European and Latin American terminals. Some PC
keyboards do not have a solid vertical bar. Instead, the terminal emulator maps the split
vertical bar (¦) into the solid vertical bar. Create an EXEC like SAYBAR (see Figure 37) if
you are in doubt what the solid vertical bar is on your terminal.
Figure 37. SAYBAR EXEC Displays the Stage Separator on Your Terminal
/* SAYBAR EXEC */
say 'The solid vertical bar is:' '4f'x'.'
saybar
The solid vertical bar is: |.
Ready;
Note: 3270 terminals in some countries have both a solid vertical bar (|) and a split
vertical bar (¦). The solid vertical bar is the stage separator on such terminals.
| SYSTSPRT REXX issues error messages from reentrant environments to this DDNAME. The
| importance of allocating this data set cannot be overstressed.
STEPLIB The library that contains the PIPE load module, if it is not in Link Pack Area
or in the link list.
You should also issue PROFILE WTPMSG. When you have done so, REXX is at least able to
remind you if you forget to allocate SYSTSPRT.
Pipe Help
Help files are generated as part of the installation procedure for CMS/TSO Pipelines; “pipe
help menu” displays the help menu for built-in programs.
There is help for each of the programs listed in the inventory. As an example, “pipe
help <” displays help for the device driver to read a file.
¡ CMS/TSO Pipelines stores information between commands. This includes a list of the last
eleven messages issued and the last eleven SQL error codes received. Help for messages is
most conveniently obtained through the pipeline infrastructure: the command “pipe help”
invokes help for the last message issued. “pipe help 1” invokes help for the second to
last message issued, and so on. There are 11 messages in this memory. Help for message
11 is displayed if you type “pipe help 11”; the number is taken to be a message number
: when it is larger than 10. TSO Pipelines is unable to display help for SQL as z/OS does
: not provide the table of messages.
Issue “pipe help msg <number>” to get help for the message with the number specified.
Often the return code from a command is the same as the number of the last message issued.
Type the commands shown in Figure 38 on your terminal to try out the help facilities.
If you are using Virtual Machine/Enterprise Systems Architecture Version 1 Release 2.2 or
later, you might also try PIPE ahelp. This displays the Author’s Help, which in essence is
the reference material in this book.
Editing Tools
CMS/TSO Pipelines users soon find themselves entering pipeline specifications in EXECs or
CLISTs. The pipeline specifications become longer and more complex as a “plumber” gains
experience.
To help with this task, CMS/TSO Pipelines supplies two edit macros that you may find
: improve your productivity when editing pipeline specifications. These macros support the
: editors ISPF on TSO and XEDIT on CMS.
FMTP converts a pipeline from landscape format where the pipeline is specified on a
single line to portrait format where each stage is stored in a separate record.
Having one line per stage means that you can easily add, delete, or move stages in
a pipeline specification. It also means that there will be room to the right of the
line for a running commentary.
SC lines comments up nicely on the right. It also adds the ending */ when a
comment is not terminated. (SC is called SCM in Virtual Machine/Enterprise
Systems Architecture.)
On CMS, the macros are intended to be used from the prefix area.
Using FMTP
FMTP XEDIT is a useful tool when you are entering a pipeline specification into an EXEC.
Simply insert the PIPE command followed by an option string and some stages:
Now move the cursor to the prefix area of the line containing the command and type fmtp:
You can see that the macro has converted the pipeline specification. The commas at the
end of the lines indicate continuation to REXX; though the pipeline specification is now
spanned over four lines, it is still just a single REXX expression. FMTP also added the
option NAME to identify the program containing the pipeline specification; you will find
this very useful when debugging an oil refinery of pipes because error messages will refer
you to the file containing the pipeline specification issuing an error message.
This particular style having the stage separators to the right is sometimes called a right-
handed pipeline (some have modified FMTP to put the bars to the left of the line, making
left-handed pipelines). To REXX it makes little difference; it is still all just a character
string.
Add lines to insert stages in the pipeline. You can insert a landscape pipeline segment and
then convert it to portrait form. Remember to begin with a quote and end with a comma
to indicate continuation:
Press ENTER.
Isn’t all that blank space to the right inviting? Add comments!
Using SC/SCM
The macro SC XEDIT (which is shipped with Virtual Machine/Enterprise Systems Architec-
ture as SCM XEDIT) shifts REXX comments to align them on the right. Simply type the
beginning of the comments wherever convenient:
Then position the cursor on the prefix area of the line containing the PIPE command and
enter the prefix command to format the lines:
Press ENTER.
Voilà!
Specify the file name followed by the remainder of the pipeline specification:
| QDI XEDIT displays the current screen with comments and constants highlighted in different
| colours.
Many of the samples are also shipped with Virtual Machine/Enterprise Systems Architec-
ture by default on MAINT’s 193 minidisk.
See Appendix F, “Pipeline Compatibility and Portability between CMS and TSO” on
page 902 for more information.
Although you can type a complete pipeline specification as a command at your terminal, it
is often easier to write an EXEC to do a given function. Such an EXEC includes a pipeline
specification usually issued by the PIPE command. On CMS, the PIPE command should be
issued to the COMMAND environment (Address command); use the ATTACH environment
(Address Attach) or the LINK environment (Address link) on TSO. The EXEC can issue
additional CP and CMS (or TSO) commands to complement the pipeline function performed.
The return code from PIPE is the “worst” of the return codes received from each of the
stages. If any stage’s return code is negative, then the PIPE return code is the minimum of
all stages’ return codes; otherwise it is the maximum one.
The file is read when disk is first in the pipeline specification; otherwise the file is
appended. For reading, the file name and type form the minimal specification; this also
suffices when you wish to append to an existing file. (Use > if you wish to replace an
existing file; disk does not erase a file.) You may write a record format after the file
mode. For fixed format files you can also specify the record length. For a new file, the
default is variable format; CMS/TSO Pipelines ensures that an existing file is compatible
with the format and record length you specify. The following example shows one way to
copy a file to another minidisk (making the new file variable record format). The syntax
is suitable for FILELIST: the output disk mode is the first argument; it is followed by the
name, type, and mode of the input file.
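A sketch of such an EXEC (the name COPYV and the argument layout are illustrative,
not part of the product; the trailing V asks for variable record format, assuming >
accepts a record format after the file mode the way disk does):

   /* COPYV EXEC: copy a file to another disk in variable format */
   arg outmode fn ft fm .
   address command
   'PIPE',
      '<' fn ft fm '|',
      '>' fn ft outmode 'V'
   exit RC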
You can inadvertently append to a file by putting a stage in front of the one intended to
read the file; the combination of literal and disk is particularly alluring. To guard against
this, < is an entry point to disk that issues an error message if it is not first in the pipeline,
thus ensuring that the file is always read. Conversely, > and >> must not be first in a
pipeline because the two programs replace a file and append to a file, respectively. When
using >, you must specify the file mode of the disk to receive the file. Note that a blank
character must delimit the command verb from the file name.
To update a file on CMS, it is possible to use < and > for the same file in a pipeline
specification. When the file exists, > writes a utility file and does not erase the existing
file before processing is complete. Figure 50 shows an example.
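A minimal sketch of such an update in place (SOME FILE A is a hypothetical file; sort
buffers the whole file, so the input is completely read before > begins writing):

   'PIPE < SOME FILE A | sort | > SOME FILE A'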
This processing is safe as long as the input file is completely read before > receives end-
of-file on its input. Using the secondary output stream from, for instance, take or drop can
cause the output file to be created too early. In such a case you can use buffer to ensure
that the file is read completely before being processed. When the file is too large to buffer
in storage, you must write a REXX program that creates a utility file and renames it after
the pipeline has completed. You can use the pipeline command COMMIT 1 in a subroutine
pipeline to test if all data transport has completed without error before you erase the
original file.
When replacing a large file, consider erasing the existing file before starting the pipeline if
the existing file is not needed to create the new file. This reduces the disk space required
because two copies do not exist when the new file is created. On the other hand, this has
potentially undesirable consequences for SFS files that are accessed through a mode letter;
refer to the usage notes for >.
You can read and write a file that is stored in the Shared File System (SFS) in two ways.
You can use the ACCESS command to access the directory as a mode letter and then refer
to the file using the mode letter, or you can use the directory path directly without
accessing the directory first.
When you use a mode letter, CMS Pipelines uses the original minidisk interface to the file
system, even when the file is in SFS.
When you specify a directory path, CMS Pipelines uses the callable interface to SFS.
disk is also available on TSO; it behaves as < when it is first in a pipeline and as >> when
it is in other positions.
By data set name: When the data set name is not enclosed in single quotes, TSO
Pipelines applies the prefix, if any has been set by the TSO command SET PREFIX. A
member name can be specified in parentheses after the data set name. These are ways to
read from a data set:
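For instance (the data set names are hypothetical):

   'PIPE < notes.text | console'             /* Prefix applied, e.g. userid.NOTES.TEXT */
   "PIPE < 'SYS1.MACLIB(GETMAIN)' | console" /* Fully qualified, reading a member */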
By DDNAME: An already allocated data set can be referenced by its DDNAME. The
DDNAME is prefixed by the keyword DDNAME= or any abbreviation down to DD=. A
member can be specified in parentheses after the DDNAME. This usage is parallel to the
way members are specified with DSNAMEs.
The last line shows the “CMS-compatible” way to specify a member of a PDS that is
already allocated to a DDNAME. It reads the member C from the data set allocated to
ADMSYMBL. This follows the GDDM standard for the symbol set library.
To access such files, use the <, >>, and > device drivers as if there were nothing special
about the file. CMS/TSO Pipelines inserts line end characters when you write a file and it
deblocks the file automatically when you read it.
When reading or writing a hierarchical file, the argument is a single word or a quoted
string, which specifies the path to the file using the normal OpenExtensions conventions.
A path that begins with a forward slash (/) specifies the fully qualified path from the root
of the file system; a path that omits the leading forward slash is relative to the present
working directory.
To read the file sample.c from your current working directory on CMS and using the full
path:
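For example (assuming the current working directory is /u/john):

   'PIPE < sample.c | count chars lines | console'
   'PIPE < /u/john/sample.c | count chars lines | console'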
The byte count plus the line count should be equal to the file size as reported by
OpenExtensions.
On z/OS, sample.c is a perfectly valid name for a sequential data set, whereas
john/sample.c is not a valid name for a z/OS sequential data set. Thus, if the word
contains a forward slash, it is taken to be an OpenExtensions path. Clearly, the full path
from the root contains a leading forward slash and will always be interpreted as a
reference to an OpenExtensions file. You can always construct a path from the current
working directory that contains a forward slash by prefixing ./, which makes the path
explicitly relative to the current working directory:
OpenExtensions file names may contain blanks. To support this, CMS/TSO Pipelines
supports enclosing the path in quotes. On CMS, you can use single quotes or double
quotes, as you find most convenient, but on z/OS you must use double quotes, because
single quotes denote a fully qualified data set name:
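For instance, to read a hypothetical file whose name contains a blank:

   'PIPE < "my notes.txt" | console'           /* CMS or z/OS */
   "PIPE < '/u/john/my notes.txt' | console"   /* CMS only; on z/OS single quotes denote a data set name */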
CMS/TSO Pipelines also provides device drivers to read and write binary files in an
OpenExtensions file system; refer to hfs. See also hfsdirectory, hfsquery, hfsstate, and
hfsxecute.
Libraries
CMS/TSO Pipelines can write information about the contents of a partitioned data set into
the pipeline (that is, information from the directory); and it can write the contents of
specified members into the pipeline.
listpds writes an output record for each member of a partitioned data set. On CMS, the file
name and file type of the library are specified; on TSO, the data set name or DDNAME is
specified.
The first eight bytes of each output record contain the member name; the remainder of the
record is undefined as far as CMS/TSO Pipelines is concerned. The double quotes repre-
sent unprintable binary data.
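For example, to list the member names of a library in alphabetical order (DMSGPI
MACLIB is used for illustration):

   'PIPE listpds DMSGPI MACLIB | spec 1.8 1 | sort | console'

spec 1.8 1 keeps just the eight-byte member name, discarding the binary directory data.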
members reads specified members from a partitioned data set. It can read the names of
members to process from its input as well as processing the members specified on its
parameter list. This example shows reading a member of a z/OS partitioned data set:
This example shows that members can read the member list from its input.
You can combine device drivers to copy a data stream to several devices or files. In this
example, the data are copied to the console as well as to the file “A B” on disk C:
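One way to write the cascade just described (the input file is hypothetical):

   'PIPE < INPUT DATA A | console | > A B C'

console, not being first in the pipeline, types each record on the terminal and passes it
on to >, which writes the file A B C.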
3 This file is also used as the input file in the xlate examples in “Translate Characters” on page 44.
console reads lines you type on the terminal when it is first in the pipeline specification.
console stops reading when you enter a null line (just hit enter); this line is discarded.
Note: z/OS users should note that console reads and writes the logon terminal; CMS is
indeed a “master console operator” application in the sense that it is the program that is
IPLed in the virtual machine. On TSO, console cannot access a z/OS console; nor does it
issue WTO macros to the master console (it uses route code 11 to write to the programmer
when the PIPE command is invoked directly from JCL). Use the synonym terminal if you
find that a more appropriate name.
There are two ways to read from the console stack when the device driver is first in the
pipeline. Use console to read lines until a null line is read; use stack to read as many lines
as there are on the stack.
The first word of the command (PIPE) is upper case because the default command environ-
ment has been set to COMMAND, which makes the case of CMS commands important. spec
removes the carriage control character from the beginning of each record. Further note
that records that have carriage control X'03' (no operation) are included in the data typed.
The first line is the tag of the SPOOL file.
Figure 64 shows how to print a file already containing machine control characters, for
instance SCRIPT output:
Lines read by reader can be written back to SPOOL with printmc without further processing
if the SPOOL file is a printer file:
The pipeline specification in Figure 65 copies the reader file to a copy on the printer (if all
CCW operation codes are valid for the output device), except that the tag of the reader file
appears as an additional no operation record in the printer SPOOL file. (See Figure 166 on
page 79 for a command that retains the tag.)
MVS SPOOL
In a batch job, TSO Pipelines can read a SYSIN data set by specifying its DDNAME to <.
But TSO Pipelines is unable to read data sets that have been written to the SPOOL,
because the underlying interface to read the SPOOL file requires the task to be authorised.
You can create a SYSOUT data set in two ways. You can allocate the data set and use
DDNAME= with >; or you can use sysout to allocate the data set dynamically.
For those with a CMS bent, printmc and punch are synonyms for sysout. printmc expects
the input records to have machine carriage control (like RECFM=VM); whereas punch
assumes that no carriage control is present (like RECFM=V).
Accessing Variables
CMS/TSO Pipelines can access the variable pools in REXX, EXEC2, and CLIST programs.
We recommend that you use REXX programs and on TSO issue the PIPE command by
Address Attach.
Use stem to read and write variables in the program that calls CMS/TSO Pipelines. As
with EXECIO, <stem>0 is set to the count of records and <stem>n (where n is a positive
number) contains the nth individual record. Figure 67 shows how to load a file into a
stemmed array beginning with file.1.
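A sketch of such a pipeline, loading a hypothetical file into the array file.:

   'PIPE < PROFILE EXEC A | stem file.'
   say 'Loaded' file.0 'lines; the first is:' file.1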
stem is handy to run subroutines as pipelines without going via the stack or a file.
Figure 68 on page 37 shows how to sort the contents of REXX variables. The count of
records in the array with stem unsorted. must be stored in the variable unsorted.0
before running the pipeline. The period after the stem name indicates the use of a
stemmed array. No period is added by stem; this means that stem can also be used with
EXEC2 or CLIST.
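A sketch of such a sort:

   unsorted.0 = 3
   unsorted.1 = 'pear'
   unsorted.2 = 'apple'
   unsorted.3 = 'orange'
   'PIPE stem unsorted. | sort | stem sorted.'
   /* sorted.0 is 3; sorted.1 contains apple */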
It is possible to read and write the same variables in a pipeline with two stem stages that
refer to the same stem. This is safe as long as there is a buffering stage (for instance,
sort) or no records are added to the file in the pipeline, but it is better to be safe and write
to a different stem unless the file is so large that two copies cannot fit in storage.
The device driver var reads the contents of a single variable into the pipeline when it is
first in a pipeline. When var is not a first stage, it loads the first record into the variable
and copies all input to any following stage; the variable is dropped if there is no input.
The filter spec will be described later; as used here, it converts the record from the internal
: IBM System/390* hexadecimal floating point representation (eight bytes) to a character
string that contains the number in scientific notation. After conversion the contents of the
variable can be processed by REXX.
Refer to varset for a way to set many variables that are not a stemmed array that has
consecutive numeric subscripts. varfetch reads variables from the REXX environment; the
names of the variables are specified in varfetch’s input. vardrop drops variables.
The first stem in Figure 70 reads the stemmed array as described in “Accessing Variables”
on page 36. append copies the primary input stream to the primary output stream and
then runs the argument stage, connected to the primary output stream. The input to the
argument stage is not connected; it is a first stage and does read the stemmed array.
preface runs the argument stage before it copies the primary input stream to the primary
output stream.
preface and append run the arguments in the REXX environment in effect when they are
invoked irrespective of the number of REXX programs in the pipeline.
You can also use < with append to concatenate two files:
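For example, to concatenate two hypothetical files into a third:

   'PIPE < FIRST FILE A | append < SECOND FILE A | > BOTH FILE A'

append first copies the records of FIRST FILE A to its output and then runs its argument
stage, which reads SECOND FILE A.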
Figure 72 (Page 1 of 2). Reading Two or More Files into the Pipeline
/* Catenate two CMS files */
arg fn1 ft1 fm1 fn2 ft2 fm2 fn3 ft3 fm3 .
address command
'PIPE',
'literal' fn2 ft2 fm2 '|',
'literal' fn1 ft1 fm1 '|',
'getfiles |',
'>' fn3 ft3 fm3
exit RC
Figure 72 (Page 2 of 2). Reading Two or More Files into the Pipeline
/* Catenate two MVS files */
arg fn1 fn2 fn3 .
address command
'PIPE',
'literal' fn2 '|',
'literal' fn1 '|',
'getfiles |',
'>' fn3
exit RC
/* Catenate many MVS files */
arg output rest
address command
'PIPE',
'literal' rest '|',
'split |',
'getfiles |',
'>' output
exit RC
Issuing Commands
Several device drivers issue commands and provide the response as their output, a line at a
time. These device drivers issue the argument string, if any, as the initial command; the
primary input stream is then read and a command is issued for each line.
Other device drivers issue commands without trapping the response; these are useful to
invoke programs that use full screen mode, for instance XEDIT.
If the primary output stream from cp, command, cms, or tso is not connected, the output
from the command is discarded. This provides a way to issue commands and suppress
any error messages that might otherwise have been issued.
When the host command interfaces are used without a secondary output stream, they
aggregate the return codes from the individual commands and provide this aggregate as
their return code. CP return codes are zero or positive; a return code of 1 indicates an
unknown CP command; for other return codes, the aggregate is the maximum return code.
For CMS, if any return code is negative, the aggregate of the return codes is the minimum
return code; otherwise the aggregate return code is the maximum of the return codes.
The built-in programs to issue host commands support a secondary output stream. When
the secondary output stream is defined, the program writes the return code received on a
command to this stream after the output from the command (or the command itself) has
been written to the primary output stream. The return codes can be aggregated by passing
them to aggrc. When the secondary output stream is defined, the return code from the
host command interface is zero unless the program itself detects an error. You cannot
specify an initial command as the argument when the secondary output stream is defined.
CP
The device driver cp sends commands to the Control Program (CP) and writes the response
to the pipeline. If an argument string is present, it is issued first; then the primary input
stream is read and issued.
CP command names are in upper case; giving CP a command whose name is in lower case
results in return code 1 (unknown CP command). To help you, cp inspects the first word
of each command before it is issued; if the word is all upper case, cp issues the command
as you have written it, possibly with mixed case arguments. If the first word of the
command is completely or partly in lower case, cp translates the complete command to
upper case before issuing it to CP. This is why the first line in Figure 73 shows the
response to the QUERY command; the second example is interpreted by CP as a request to
look for the user logged in as “files” in lower case. The last example shows how to issue
multiple commands.
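Sketches of the three cases just described:

   'PIPE cp query files | console'   /* First word in lower case: issued as QUERY FILES */
   'PIPE cp QUERY files | console'   /* First word upper case: CP sees the user ID "files" */
   'PIPE literal QUERY USERS | literal QUERY TIME | cp | console'

In the last example, cp reads its input and issues each line as a command; note that the
second literal writes its own record before copying its input, so QUERY TIME is issued
first.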
It is seldom useful to cascade cp device drivers because the output of the first cp device
driver would be interpreted as commands by the second instance of cp, and you would
most likely just see a return code of 1 indicating that the line is not a valid CP command.
However, the response to a CP command is often used to build another CP command;
Figure 74 shows how to close a punch and put the resulting SPOOL file first in the reader
queue.
For a QUERY command, the response buffer is extended automatically to accommodate the
length of the reply. For example, you need not worry about the number of reader files in
your reader when you process the response to QUERY RDR * ALL. The default length of the
response buffer is 8K for commands other than QUERY; put a number as the first argument
to cp to use a different size for the buffer. The number specifies the number of bytes to
allocate to the buffer to make it larger (or indeed smaller) than the default 8K buffer.
CMS
cms and command issue CMS commands and intercept the response normally written to the
terminal. cms is recommended for casual work because it issues commands with the
search order you are used to when you type CMS commands on your terminal. This is
equivalent to Address CMS in REXX. Figure 75 shows an example.
CMS forwards unrecognised commands to CP. Thus, cms can be used to issue CP
commands, but the response is written to your terminal by CP; use cp to issue CP
commands when you wish to process the response. On the other hand, use subcom CMS to
issue CMS commands without trapping the CMS response.
command issues the command in the same way that Address COMMAND in REXX does; the
argument string and input lines to command should be upper case unless you wish to
manipulate objects with names in mixed case. Figure 76 shows how to create and erase a
file with a mixed case name.
listfile * map
load MAP A5
Ready;
listfile * map
DMSLST002E File not found
Ready(00028);
TSO
The command device driver issues TSO commands without trapping the command response.
It produces no output on the primary output stream. The tso device driver issues TSO
commands while trapping the response (to the extent that the REXX function OUTTRAP can
trap a response).
The first command in the example shows that the response is written directly to the
terminal by TSO. The count stage does not receive any records from command and thus
writes 0 to its output. In the second example, the response is indeed intercepted; we do
not see it on the terminal. The number 1 indicates that one line was written into the
pipeline where it was counted by count.
Subcommand Environments
subcom directs the commands to a specified subcommand environment and copies the
command to the output; in general, it is not possible to trap responses from subcommand
environments. It is unspecified which subcommand environments are available; some envi-
ronments on CMS may not support standard CMS parameter lists and may cause CMS fail-
ures if invoked with standard parameter lists.
Use subcom CMS to issue CMS commands without intercepting console output.
The pipeline specification in Figure 78 issues the STATE command for each file mentioned
in a file with file type FILELIST, issuing error messages for files that do not exist.
The example above shows that the command specified as the argument string is not copied
to the output; the response is displayed on the terminal, and the count of lines written into
the pipeline is zero.
starmsg connects to the message system service provided by CP to intercept console output.
Each output line from starmsg has a 16-byte prefix, which contains the message class and
the name of the originating virtual machine; this is followed by the message or response
from CP.
starmsg operates differently when it is first in a pipeline and when it is not first in a pipe-
line. When it is not first in a pipeline, it will terminate when it reaches end-of-file on its
input; use this form to implement clients. When it is first in a pipeline, it will continue
waiting for messages until it is terminated by a command or its output is severed; use this
form to implement servers.
When starmsg is not a first stage, it issues each input record as a CMS command and
terminates when it reaches end-of-file. To issue a single command and trap the response,
both from CP and CMS:
When a command ends with a message, for instance “Command complete”, it may be
possible to use tolabel to stop at this point; the line is discarded. Note that no CMS ready
message is issued since the PIPE command is still running; use the Say REXX instruction at
the end of a command procedure to write a line that can be used to stop processing.
Figure 81 on page 44 shows an example that invokes a command procedure to issue both
CP and CMS commands; the response is stored in a file. The command “complex” issues a
complex set of CP and CMS commands; it writes “Done?” to the terminal when complete.
starmsg sets up an immediate command (HMSG by default); you can issue this command to
stop the stage. Another way to stop an asynchronous pipeline is to issue the command
PIPMOD STOP to CMS; you can do that with an immediate command or from a filter written
in REXX. This terminates all stages waiting for an external event.
Using Filters
Programs that process data in the pipeline without reference to a host interface are called
filters. These functions are typical examples of tasks performed by filters:
Translate characters, mapping one character to another.
Count characters, words, and lines.
Edit the record to rearrange its contents.
Change the record format and transform between CMS and formats used in other oper-
ating systems.
Select records. You can select records that start with a given string in the same way
as the FIND XEDIT subcommand, or ALL XEDIT subcommand. Other filters emit the
records that do not match rather than the matching ones.
There are others; sort is a distinguished example of the remaining filters.
Translate Characters
xlate replaces each character with another one based on the mapping in a translate table.
This is useful, for instance, to change the collating sequence or to blank out unwanted
delimiter characters. Translation can be restricted to specified input ranges. The translate
table is built by modification to an initial table, the neutral one by default; other initial
tables are selected by keyword.
The following xlate examples operate on the file shown in Figure 59 on page 33.
Figure 83 shows how to specify two column ranges. Characters within the specified
ranges are translated to upper case. Note that ranges may be written in any order and that
an asterisk (*) identifies the beginning or the end of the record, as appropriate. A hyphen
(-) separates the begin and end column of a range; use a period to append a column count
to the begin column number. To reverse the case:
Translation is performed in the order column ranges are written. All or part of a record
can be translated more than once; this is noticeable when the translation has no closure:
Here the first 10 columns are translated twice, and the letters go back to their original case.
The neutral translate table is used when no keyword is specified, as in this example
removing special characters:
It is a good idea to use an explicit input range when the default translate table is used.
This ensures that the first item of the translation specification is not taken to be a single
column range:
The first token is interpreted as a desire to translate the contents of columns 1 to 80 inclu-
sive, though the intent was to translate that range to blank characters.
Figure 89 shows a transliteration of the lower case letters. Upper case letters are not
affected. The simpler specification a-y b-z z a is not used because the “holes” in the
EBCDIC collating sequence would turn i and r into characters that are not letters.
Counting
count counts the amount of data in the input stream. It counts characters, words, and lines.
The result is a single record, which is written to an output stream. The result is written to
the secondary output stream, if defined. Input lines are copied to the primary output
stream when the secondary output stream is defined.
The xlate stage turns all special characters into blanks. The counts are always in the order
characters, words, and lines irrespective of the order in which the options are specified.
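For example, to count a hypothetical file:

   'PIPE < SOME FILE A | count lines words chars | console'

The single output record nevertheless contains the counts in the order characters, words,
lines.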
change
change works like the CHANGE XEDIT subcommand; as in Figure 78 on page 42, it is
often used to put a literal in front of each line.
The default, however, is to change all occurrences; specify the maximum number of substi-
tutions after the change string specification.
Figure 91 shows how to change all occurrences of the letter “l” to five asterisks and how
to remove ll. You see that the default scope is the complete record.
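Sketches of the two substitutions, using literal to supply an input record:

   'PIPE literal hello world | change /l/*****/ | console'
   'PIPE literal hello world | change /ll// | console'

The first changes every l to five asterisks; the second deletes each occurrence of ll.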
In Figure 92, only the first l is changed; a range is specified in the second part. This is
similar to the ZONE XEDIT subcommand setting, but you can have more than one range in
parentheses (not shown).
change ANYCASE supports mixed case change strings. That is, the twenty-six letters are
compared irrespective of their case when change looks for a string to replace. If the first
string contains no letters or contains one or more upper case letters, the second string
replaces occurrences of the first string without further change. When the first string is in
lower case, change tries to preserve the case of the string being replaced:
specs
spec has evolved into a program that is rich in function; so much so that it has been
given two separate chapters in this book (Chapter 16, “spec Tutorial” on page 163 and
Chapter 24, “spec Reference” on page 692). Here we show the original simple form of
specs, which is still useful in its own right.
As used originally, spec builds an output record for each input record. The output record
contains one or more fields, which can contain literal data or data from a field of the input
record. The specification list (from which specs got its name) consists of pairs of input
and output specifications:
... | specs <input-1> <output-1> <input-2> <output-2> | ...
For each input record, specs goes through the specification list and performs the actions
specified. By the time it reaches the end of the list, it has built the output record.
To put the contents of the input record into the output record at column 11:
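For instance, using literal to supply the input record:

   'PIPE literal abc | spec 1-* 11 | console'

The output record consists of ten blanks followed by abc.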
This specification list contains one pair of input and output specifications. The input
range (1-*) represents the entire input record beginning in column 1 and extending to
infinity (the asterisk is
idiomatic for the largest integer that the computer can handle). spec does not pad the input
record; it takes the shorter of the actual record and the range specified. It puts the input
field into the output record starting at column 11 and fills the unspecified part of the output
record with blanks. Again, it does not pad the output record beyond the area filled by the
input field.
To prefix the contents of each input record with a literal that is repeated in all output
records:
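For example, to prefix each record with the (arbitrary) literal “=> ”:

   'PIPE literal some text | spec /=> / 1 1-* next | console'

The literal goes into columns 1 to 3; next appends the input record immediately after it,
producing “=> some text”.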
In this example, the literal field, which is specified between the two slashes, is inserted into
the beginning of the output record followed by the contents of the input record (1-*). The
keyword NEXT specifies that the field should be appended to the rightmost character
inserted into the output record so far.
Literal fields are delimited by a special character, which is traditionally a forward slash (/).
The delimiter character is deleted when the field is inserted in the output record.
For example, specs can be used to prefix a record with an identification of its origin, as
in Figure 96. The file name, type, mode, and record number are prefixed to each input
record.
This is done by putting three literal fields containing the file name, file type, and file mode
into columns 1, 10, and 20, respectively; the record number into columns 25 through 34;
and finally appending the contents of the input record in column 41 onwards:
A field of the input record can be specified as beginning or ending relative to the end of
the record rather than the beginning. (Or both beginning and ending relative to the end of
the record.) Put a minus sign in front of the column number to make it relative to the end
of the record. You must specify a beginning and ending column number separated by a
semicolon when using this notation. There is no provision for a column count in this
format.
Figure 97 shows a target that is relative both to the beginning and end of a record; the
first and last character are removed from each record. A record with two characters or less
becomes a null record, because specs ignores fields that have zero or negative lengths.
When the field is ignored, the output record is not padded to the position where the field
would have gone if it were not null.
Finally, Figure 98 shows how to replace the null string with five asterisks. Note the
difference between spec and change in the last record. spec inserts the string in the last
record as well; change does not because the last record does not extend to the column
range.
The specification of an input range grew from the simplistic range you can specify with
the COPYFILE command. With COPYFILE ( SPECS, you could specify only the first and the
last column of a range. Thus, COPYFILE ranges are always specified relative to the
beginning of the record.
In contrast, CMS/TSO Pipelines allows you to specify an input range relative to either the
beginning or to the end of the record. A range that is relative to the end of the record is
specified with a leading hyphen (-). And you can specify more than just a column count:
Blank-delimited words.
Tab-delimited fields.
If you prefix the range with the keyword WORD, the number(s) specify words, which are
separated by one or more blanks.
You can even specify a different word separator character if you prefix the WORD keyword
with the keyword WORDSEPARATOR and then specify which character you want used to
delimit words. This can be useful when you want to treat a run of delimiter characters as
a single delimiter:
Tab-delimited fields are very similar to blank-delimited words; the difference is that two
consecutive field separators specify a null field. By default, the field separator is X'05',
the horizontal tab character. You will probably change the field separator more often than
you change the word separator:
This may be complex already, but you will soon find yourself wishing to extract the
second character of the third word:
You can even take the substring of a substring, and so on ad infinitum. (See Figure 145
on page 69)
Selecting Records
Several built-in programs select records based on the contents of the records and the argu-
ments specified. To select records based on their contents, there are filters that work like:
REXX built-in functions Abbrev and Verify.
XEDIT subcommands Find, NFind, and Locate.
COPYFILE options FRlabel and TOlabel.
Overview
take, drop, frlabel, and tolabel partition the data stream by deleting or copying records to
or from a specified position in the data stream. between, inside, notinside, outside, and
whilelabel do this for ranges of records based on leading characters.
all, asmfind, asmnfind, find, nfind, locate, nlocate, and unique select individual records
based on contents.
Four built-in programs require a selection stage as their argument string. These programs
(or prefixes) modify the behaviour of the specified selection stage. This concept may be
slightly mind-boggling at first, but you will soon see examples of their use.
casei is specified as a prefix to a selection stage to make the selection stage disregard case
while it performs its operation, for example to select records that contain a character string
irrespective of the case. zone is specified as a prefix to a selection stage that does not
support a column range otherwise; a specified range of each record is tested rather than the
beginning of the record. casei and zone prefixes can be combined to perform the
composite operation; the order of the prefixes does not influence which records are
selected.
Be sure to use the selection stage’s own facilities for caseless operation and input ranges,
if it supports such; this will be more efficient and you are likely to find more facilities than
are offered by casei and zone.
frtarget and totarget are also specified as a prefix to a selection stage. frtarget selects
records starting with the first one that is selected by the selection stage; totarget selects
records up to (but not including) the first record selected by the argument stage. Thus,
frtarget and totarget partition the file even if the specified selection stage does not.
All selection filters operate like a railway junction; each record is sent to the primary
output stream or the secondary output stream, depending on whether the contents of the
record satisfy a condition specified as a parameter to the filter. Though definitely useful
(see “Decoding Trees” on page 82), the secondary pipeline is not defined most of the time;
records destined for it are then discarded. In this chapter, however, we are only concerned
with what happens on the primary pipeline.
This locate stage selects all records that somewhere contain the string between the forward
slashes (/). You get at least all lines with the operation code PIPERM because an operation
code must have at least one blank character on each side.
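Such a stage might be written like this (the file name is illustrative):

   pipe < prog assemble a | locate / PIPERM / | console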
locate supports only one string; use all to select lines that contain one of several strings, or
contain several strings.
The exclamation sign specifies that a record is selected if one of the strings is contained
within it (or both are).
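A sketch with two illustrative strings:

   pipe < prog assemble a | all / PIPERM / ! / PIPDEF / | console

Records containing either string (or both) are selected.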
find selects records where the leading string is equal to the parameter. As with XEDIT,
blank characters in the argument correspond to positions to be ignored during the compar-
ison, and underscores are positions where there must be a blank character to match.
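A minimal sketch:

   pipe literal abcdef| find abc| console

The record abcdef is selected because it begins with abc.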
In the example in Figure 105, all records starting with “abc” are selected, as you would
expect. Suppose, however, that the parameter had been written with three trailing blank
characters. How should CMS/TSO Pipelines interpret this? Blank characters mean “don’t
care” positions; do trailing blank characters make any difference in find?
Experiment when wondering about things like this. In the example in Figure 106, no lines
were entered with trailing blank characters; the records were as long as what you see.
The line with abc only was not echoed; thus, trailing blank characters are significant in the
parameter list for find.
Use pick to select records which satisfy some relation between the contents of a field and a
literal, or between two fields in the record. To select records from July 1994 and later
(assuming dates in the ISO format):
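Assuming the date occupies the first ten columns of each record, a sketch might be:

   pipe < log file a | pick 1.10 >= /1994-07-01/ | console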
This also discards null and blank lines, because a missing word is considered to contain no
characters; and two null words are considered equal.
pick can (unlike locate and find) reject records that are exactly equal to some string and
select records that contain further data:
You can use verify to select or reject records that contain characters from those specified
in a list; for example, to verify that the record is numeric:
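A minimal sketch:

   pipe literal 12345| verify /0123456789/| console

The record is selected because it contains only characters from the list.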
take copies up to a specified number of records to the output, discarding any further
records. drop does the opposite: it discards records and copies the balance of the file to
the output. Both filters work on the last part of the data stream if instructed by the
keyword LAST. To see the last lines of a linkage editor output:
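For example (the file name is illustrative):

   pipe < prog listing a | take last 10 | console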
Use frlabel to copy records starting with the first one that has a given string in the first
positions. This is equivalent to the COPYFILE option FRLabel. tolabel is the converse
operation: it copies records up to but not including the matching one. Both stages match
the parameter string verbatim in the case entered; blank characters and underscores mean
just that. To see who Tim is (note that the label must be in the correct case and that not
finding it gives no diagnostic):
If, on the other hand, you want to see what is immediately before Tim:
Figure 115 on page 56 shows the use of inside with the second argument a number.
| Note that an occurrence of the first string within the records being selected is not treated as
| a recursion. That is, even though there were two records that begin with a, the first record
| that begins with c terminates the range of selected records.
Selection stages can be cascaded to fine-tune the set of records selected as shown in
Figure 112. As a further example, a CMS file with file type COPY may contain several
members. Each member is prefixed by a record that contains “*COPY <name>”. To select
one such member:4
The trailing blank characters on frlabel and tolabel are deliberate. Note that this example
fails without drop: tolabel would match the first record it sees, which would cause no
records to be selected.
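The cascade described might be sketched like this (file and member names are illustrative; the blank after each label string is the deliberate trailing blank mentioned above):

   pipe < mylib copy a | frlabel *COPY member | drop 1 | tolabel *COPY | > member copy a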
4 The command is printed on two lines because it is longer than the figure width. The two ellipses (...) are not part of the command issued.
Caseless Operation
All built-in programs, for which it is appropriate, support the ANYCASE option to specify
that they should ignore the case of the letters a through z; that is, treat “a” and “A” as the
same character for purposes of comparison:
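For example:

   pipe literal Hello| locate anycase /hello/| console

The record is selected even though the cases differ.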
In the first pipeline in Figure 118, case is respected and both input records are rejected.
For a selection stage that does not support the ANYCASE option (it would have been written
by a user), you can instead use casei as a prefix to the selection stage to select records
independent of the case of the contents of the record and the case of the search argument:
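With locate standing in for such a user-written stage, the prefix is written like this:

   pipe literal Hello| casei locate /hello/| console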
Refer to “Destructive Testing” on page 84 when your alphabet contains more characters
than the twenty-six used by the English. It explains how to use a derivative of the actual
record to control the selection.
The split family of filters matches a string or a single character. For a string, write the
keyword STRING (which can be abbreviated to STR) followed by the data between two
occurrences of a delimiter character in the standard CMS way.
The pattern is “reversed” with the keyword NOT (or TO in the case of strip). When used
with a single character pattern, NOT means the complement set with respect to the universe
of all 256 values that can be stored in a byte. NOT used with a string pattern means, skip
occurrences of the string; the pattern matched is considered to have length 1.
strip further lets you say which side (or both) you want stripped and optionally a
maximum count of characters stripped. A record is not discarded by strip if all of it has
been matched; a null record (having zero bytes) is written.
The default for split is to do it AT the target, which is then removed; use the keywords
BEFORE or AFTER to designate that the target should remain and be included in the second
or first record, respectively. chop truncates before the target by default; use the keyword
AFTER to truncate after the target. For both chop and split, the options BEFORE and AFTER
are further modified by a number. This is an adjustment to go past the target when the
number is positive (left for BEFORE, right for AFTER); a negative number moves “through”
the target in the opposite direction. Note that AFTER is provided as a convenience; n
AFTER <pattern> is just a way of expressing m BEFORE <pattern>, where m is
-n-length(<pattern>). This definition sidesteps the question, how long is a string that
is not there?
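Two minimal sketches; the first splits at blanks (the default target), the second truncates before the first period:

   pipe literal one two three| split| console
   pipe literal abc.def| chop before string /./| console

The first writes one word per output record; the second writes abc.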
Joining
join puts records together. Specify one less than the number of input records to be used
when building an output record. You can add a literal string between records joined.
The first example in Figure 126 shows the default of joining two records with no added
characters; the second one shows the effect of “join 2”; the third one shows adding a
string between the records joined. The fourth one shows how to limit the length of the
output record. This can be used to flow text if the input records contain words, but it does
not provide for flowing of text in general because join never splits input records.
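Sketches of the first three forms described, using split to manufacture three input records:

   pipe literal one two three| split| join| console
   pipe literal one two three| split| join 2| console
   pipe literal one two three| split| join 1 /, /| console

The first writes onetwo and three; the second writes onetwothree; the third writes one, two and three.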
join also supports a key field at the beginning of the record. Only records that have the
same key are joined.
Records from multiple streams are joined by spec with SELECT, and overlay (possibly after
offsetting one of the streams with spec).
Use joincont to join records when continuation is indicated by the presence (or absence) of
a string at the end of a record being continued or at the beginning of the continued line.
To join lines using the C convention of suppressing line end (“splicing lines”):
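A sketch (the file name is illustrative; this assumes joincont removes the matched backward slash as it joins):

   pipe < prog c a | joincont trailing /\/ | console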
This is too simplistic for real C programs, however, because the line could end in an even
number of backward slashes. In this case, the line is not to be spliced. To prevent
joincont from being fooled by double backward slashes, we turn double backward slashes
into something else before joining; and we remember to turn the backward slashes back
again. For this to succeed, we need to know that some particular character does not occur
in any input record (or we are prepared to accept that such a character is turned into a
backward slash):
In the example above, double backward slashes are turned into two null characters.
Because change works from left to right, an odd number of contiguous backward slashes
will leave one backward slash at the right (it would not have been so good if it were at the
left).
Use deblock LINEEND or deblock STRING when a particular character or string separates
logical records. This deblocking operation is a combination of blocking and deblocking
because it joins lines together when the end of line sequence is not at the beginning or end
of a record. deblock removes the delimiter character or string; it must be reinserted (for
instance with spec or change) if it is to be retained.
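For example, to make one record per statement in a stream where semicolons end statements (the file name is illustrative):

   pipe < stmts file a | deblock string /;/ | console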
Finally, asmcont processes an ASSEMBLE file to join all lines of a continued statement into
one record. Columns 72 to 80 are discarded, as are columns 1 to 15 of continued statements.
pack does the reverse of unpack. To pack variable record format files with unknown
record length, the secondary output stream from pack is connected to the secondary input
stream to disk. This ensures that the header in the first record indicates the correct logical
record length for the file; in general, this is only known when the complete input stream
has been processed.
When you read from the reader or a tape, what you get is not always unblocked records;
often you see blocks in a format peculiar to the program that made the file, as with
DISK DUMP, TAPE DUMP, OS record descriptor words, or utilities. And these are often
nested. Here is a simple example:
Use the option Fixed on deblock if the tape is fixed blocked. Sometimes you need more
than one deblocking stage to get records completely unwrapped if they are in a Chinese
box (for instance a partitioned data set that has been sent in netdata format from z/OS).
Sorting
sort processes files of moderate size that can be held in virtual storage for the duration.
You may be able to use dfsort to sort large files.
The sort filter reads the file to sort from its input and writes the sorted file to its output;
the only options specified on sort are the sort fields, which default to the complete record.
sort normally compares sort fields as binary data; though you can specify the ANYCASE
keyword to make it ignore the case in records, this is inefficient for large files. To sort
efficiently on a field irrespective of its case you must generate a sort key that has the data
folded to the case required. You must also do this if you are dealing with text in a
language where the rules for capitalisation are different from English. Remove the key
after sort:
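One way to sketch this (assuming records of at most 80 bytes; the key in columns 1 to 80 is an upper-cased copy of the record, removed after sorting):

   pipe < input file a | spec 1.80 1 1-* 81 | xlate 1.80 upper | sort 1.80 | spec 81-* 1 | > sorted file a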
The translate stage is extended like this to support the Danish collating sequence:
sort reflects the beauty of the pipeline because it only has to sort. Changing the collating
sequence is done elsewhere in the pipeline specification, so there is no need for the exits
one sees in sorts that also have to read and write files.
Note that a range is a single word, unlike the CMS command where you specify the begin-
ning and the ending column of a range as separate words. However, sort accepts up to 10
ranges and it is perfectly proper to have a “sort 1 10”, but then you are asking for a sort
on columns one and ten only.
This is of course naive in the extreme: numbers, examples, DCF control words, and GML
tags are also considered words5.
collate merges detail records into a master file; lookup retrieves records from a master file,
based on keys; and merge merges records from multiple streams according to a sort key.
How to define such streams is described in Chapter 5, “Using Multistream Pipelines” on
page 74.
Cascading Filters
As you have seen by now, it is normal to combine several filters to do the job one wants
done. One advantage of using CMS/TSO Pipelines is that you can do this easily.
This section shows how CMS/TSO Pipelines is normally used by combining several filters.
You see examples with more stages than previously, but that is not necessarily the best
way to combine filters in real life.
One way is to combine the result of REXX functions that each return part of a pipeline
specification. Figure 66 on page 36 shows how to tag and SPOOL a device and punch a
file on it. A better approach for the pipeline could be:
'pipe disk' file pchprim(node user)
Figure 134 on page 64 shows a possible PCHPRIM EXEC.
5 See the “programming pearls” column in the May 1985 issue of Communications of the ACM (28:5). Reprinted in Jon Bentley, Programming
Pearls, Addison-Wesley 1986; ISBN 0-201-10331-1.
Figure 134. Using a REXX Function Reference to Generate Part of a Pipeline Specification
/* PCHPRIM EXEC */
/* Prime the punch and set up the pipeline */
arg node user .
address command
'IDENTIFY(LIFO'
parse pull . . . . net .
'CP SPOOL D' net 'PURGE NOCONT CL A'
'CP TAG DEV D' node user
return '|chop 80|punch'
Figure 135 shows a different, and in many ways better, approach. It is a REXX stage that
performs the required CP commands and then redefines itself to punch the data stream.
The same CP commands are issued in the two examples; the difference is in the way the
pipeline specification is issued.
In the first example, the function returns a character string that is made part of the pipeline
specification. Thus the stage separator must be a solid vertical bar (or made an argument).
Errors are reported (though not shown) by not returning data, which forces a syntax error
in the calling REXX program.
The second example runs as a stage; it uses CALLPIPE6 in the second-to-last line to replace
itself with a new pipeline specification. The pipeline specification is independent of the
stage separator specified in the first pipeline and errors can be reported with a return code.
(The stages with “*:” are required to show that the new pipeline connects to the existing
one.)
Netdata Format
Assume that you have a netdata file in your reader. The reader device driver reads the
SPOOL file, but the output is not the data set in a format you normally want.
The CCW operation code is in the first position of the record. Records with data you are
interested in have X'41' in the first position. The remaining records have X'03' and
should be ignored.
A more subtle difference is that CP discards trailing blank characters from the records in
the SPOOL system. The physical transmission format does not take this into account and
the deblocking stage fails if it gets short records.
The block size is indeed in a transmission header, but it is not required that it be in the
first physical record. Thus, the strategy adopted by deblock NETDATA is to ignore the
problem and insist that the data set be padded to the appropriate block size before the
deblock stage. (If CP has saved the original length in the SPOOL file, reader finds it and
pads the record.)
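Putting the pieces together, a sketch (the spool file number is illustrative; x41 selects the data records, the CCW operation code is then dropped, and pad restores short records to card-image length as discussed above):

   pipe reader file 1234 | locate 1.1 x41 | spec 2-* 1 | pad 80 | deblock netdata | ...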
Now you can recreate a sequential data set with variable record length containing control
records and data records. This data stream can be inspected by a stage that redefines itself
to either create the desired data set or send the data directly into XEDIT for peek.
Here the pipeline specification is pushed so that it is issued when there is an active XEDIT
session. However, it is not recommended to stack the complete command; for one thing, it
might be longer than XEDIT’s truncation limit of 255. Instead, write an XEDIT macro that
issues the pipeline command to CMS. Then stack a call to this macro.
To be truly compatible with PEEK, add “take 200|” to the pipeline after nfind. This shows
only the first 200 records of a large file so that XEDIT does not run out of storage. Of
course, what you see is indeed the first 200 records, not the first 200 card images in the
transmission data set.
Use deblock TEXTUNIT to deblock the text units in the control record into separate records
(see Figure 137 on page 66). The first halfword defines the type of data, the second is
the number of fields; and each field is preceded by a halfword length.
Records starting with X'E0' (the backward slash, \) are control records, which are
selected. Text units can be processed in parallel with loading a file into XEDIT by using
the secondary output stream from find.
Creating a netdata file for transmission is in principle the reverse procedure. But now you
need to be concerned with generating the control records with the proper format and
contents. For a variable record format file, the record format declared in the transmission
header should be X'0002', meaning variable records without descriptor words. Refer to
INMR123 REXX S2 for an example of how to construct this header.
OSPDS REXX in PIPSAMP PACKAGE shows how to process the IEBCOPY unloaded data set
after the physical blocking has been taken care of. The example creates a file for each
member of the partitioned data set, or a stacked file where members are separated by
*COPY delimiter records.
The first specs stage puts the second word of the record into the first ten columns of the
output record; the original input record is appended from column 11 and onward. The
xlate stage works on the first ten columns only. It translates everything to a blank, except
for the number sign, the at sign, and the dollar sign, which are translated to “x”. The
locate stage then selects the records that contain an “x” anywhere within the first ten
columns. The second specs stage extracts the original record, dropping the selection key.
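The cascade just described might be sketched as follows (the hexadecimal translate table is an assumption: everything in the first ten columns becomes a blank except the number sign, at sign, and dollar sign, which become “x”):

   ...| spec word 2 1.10 1-* 11| xlate 1.10 00-FF 40 7B A7 7C A7 5B A7| locate 1.10 /x/| spec 11-* 1|...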
| While the previous example of pipethink is ingenious and the best way at the time of
| writing, the selection can be performed directly as of CMS/TSO Pipelines 1.1.10:
| '|...
| '|locate word 2 anyof /#@$/',
| '|...
Selecting, Revisited
Previously, you saw how to select records with the string “ PIPERM ”. This selects all
such macro instructions when applied to an assemble file, but it might also select lines
where the comment includes the word. To improve on this, first consolidate continuations
and remove comment lines beginning with * and .*:
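A sketch of such a cascade (the file name is illustrative):

   pipe < prog assemble a | asmcont | nlocate 1.1 /*/ | nfind .*| locate / PIPERM / | console

Note that nfind .* is written without a blank before the stage separator, because trailing blanks are significant in the parameter of find and nfind.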
Adding a column range to locate finds exactly the lines you want if the program has all
operation codes in column 10. The three commands in Figure 140 give identical results:
However, the operation codes do not have to begin in column 10. Figure 141 shows how
to locate lines with a particular operation code, independent of its position in the input
record.
Here the label is removed by stripping to the first blank character. The next strip removes
blank characters after the label (and at the end of the record); find selects lines with the
operation code followed by at least one blank.
But you really want the first positional operand, so continue as follows:
Here the operation code is deleted in the same way that the label was removed above. The
record is truncated before the first blank character or comma, which is the end of the first
positional operand. Each line is now just a single word.
This may not seem overly useful. But when the file name and type are added by spec,
fanin joins the lines from all programs, and the stream is sorted a few ways, then you have
one of the files in the cross reference in CMS Pipelines Installation and Maintenance
Reference, SL26-0019. The complete set of selection stages is shown in Figure 143:
You may find this approach contorted, but no doubt you agree that a message cross refer-
ence can be useful. The question is, would you have written a program to do the same or
would you have relied on a manual system?
Though many built-in programs have been added to CMS/TSO Pipelines since the section
above was written in 1985, the usefulness of a cascade of filters is not in dispute; learning
pipethink is just as important today as it was then. Still, using vintage 1994 technology,
this selection might be performed more naturally by this cascade:
The first spec stage puts the operand field into column 1 of the record. The second spec
stage inserts the file name and extracts the message number in a rather subtle way. A
question mark is inserted into column 10 in case there is no first positional operand; the
field separator is set to a comma and the first field is extracted and inserted into the output
record, overlaying the question mark. If, however, the first positional operand is omitted,
the input record will contain a comma in column 1, the field will be null, and the question
mark will be left unchanged.
You can easily create complex record validation functions. For example, to ensure that
columns 1 to 10 contain only one word of numeric data without regard to the placement of
the number within the ten columns:
The verify stage selects records that contain blanks and digits in the first ten columns, but
this could still contain a string that is not a single number. Thus, the nlocate stage is
employed to discard records that contain more than one word in the first ten columns.
This is still not perfect, for Figure 145 accepts a record that contains all blanks in the first
ten columns. To be sure that there is a word you must add a locate stage:
To validate a number with a decimal point, simply tag on another stage to reject the third
field, using a period as the field separator:
CMS
Use command LISTFILE to obtain file status information on files.
In Figure 148 the standard CMS command was run and the output intercepted. This works
fine when few files are listed, but because the command has to complete before the lines
can be processed by the pipeline, you may run out of storage if you wish to process all
files on all accessed mode letters. You may be able to list files on one mode at a time to
minimise buffer requirements.
Use state and statew to obtain information about specific files. Note in Figure 149 the
similarity to the command drivers: an initial item can be specified with the stage; further
input lines list more files you wish information about.
Figure 150 on page 71 shows how the function of VMFDATE is implemented using state.
/*********************************************************************/
/* Change activity: */
/*12 Sep 1993 +++ Rename. Make sensitive to system. */
/*********************************************************************/
signal on novalue
compid=left(myfname, 3)
If where¬='CMS'
Then signal notcms
notcms:
'callpipe (name FPLDATE)',
'|listispf dd='ft fn,
'|stem f.'
If f.0=0
Then exit 28
If f.1=''
Then exit
TSO
state determines whether a data set or an allocation exists. When the data set or allocation
exists, the fully qualified data set name is written to the primary output.
You can use REXX functions or issue TSO commands to obtain other information about a
data set:
Figure 152. Using TSO Commands to Obtain Information about a Data Set
pipe tso lista | take 3 | cons
DPJOHN.TSO.LOAD
SYS1.HELP
DPJOHN.PIPE.HELPLIB
READY
pipe tso listds names.text | cons
DPJOHN.NAMES.TEXT
--RECFM-LRECL-BLKSIZE-DSORG
VB 255 8192 PS
--VOLUMES--
TA922A
READY
Three device drivers provide information without issuing TSO commands. listcat provides
data set names that share a qualifier or part of one; listdsi provides detailed information
about individual data sets; and sysdsn tests for the presence of specified data sets.
To see which data sets you have catalogued that begin with the letter T:
The pipeline was run while the prefix was set to PIPER. Note that you do not append an
asterisk to the search criterion.
Under the covers, listdsi uses the REXX function by the same name and then writes the
variables set by REXX into the pipeline.
sysdsn exposes the REXX function by the same name; the function result is written to the
pipeline.
Though it has not been mentioned explicitly yet, the pipeline specification “works” by
specifying a sequence of transformations and functions that are applied to the data as they
flow from left to right through pipelines such as those presented thus far in the book. In
the simple pipelines, putting the definitions of two stages next to each other with a stage
separator between them has been sufficient to define the input/output relations. The output
stream of one stage is connected to the input stream of the following one. That is all that
is necessary to define the connection on the primary stream.
Streams are defined in pairs; a stage always has a primary input stream and a primary
output stream. When the stage is first in a pipeline, the primary input stream is not
connected, but it is there all the same. Likewise a stage that is last in a pipeline has an
output stream, even though it is not connected.
The generalisation to multiple streams in a stage is just that the stage can have multiple
streams; the difficult part is how to specify the topology. CMS/TSO Pipelines has taken
the model that a multistream pipeline consists of two or more individual pipelines, each of
which is specified just like a simple pipeline is. The trick is that a multistream stage is in
more than one pipeline.
ENDCHAR can be abbreviated as END; and it usually is. Refer back to Figure 156 on
page 74. It looks very much like two pipelines, one for the master file and one for the
transactions, does it not? If the update program reads and writes the master file on the
primary stream (update does that), clearly the device drivers to read and write these files
should be connected to the primary streams for update; this is very much like a simple
pipeline. Likewise, the second pipeline would read the transactions into the update
program’s secondary stream and write its output records to the log file.
The pipeline in Figure 159 is written as a left-handed pipeline; that is, with the stage
separators and the end characters aligned on the left. As far as REXX is concerned, it is a character
string all the same. The pipeline has also adopted the convention of prefixing the first
pipeline with the end character. Note that the left-handed style makes for easy recognition
of the pipeline end character.
The pipeline specification above almost does the update, but there is one serious flaw in it:
the update program is invoked twice with one set of streams on each invocation, rather
than once with two sets of streams. We need to specify a connection between the two
stage positions where the update program connects to its neighbours. This is done with a
label, which is a string of one to eight characters followed by a colon. This example uses
a label that is one character:
All occurrences of a label in a pipeline specification (the formal name for the argument to
the PIPE command) refer to a single stage. The stage has as many pairs of input and
output streams as there are occurrences of the label. The actual program to run and its
arguments are specified the first time the label is used. This is called the definition of the
label. The next time the label occurs is a reference back to the previous definition.
Because the stage has already been specified completely, the label is written by itself at the
point in the pipeline topology where the stream should be connected. The first reference to
the label creates the secondary input and output stream; the next one creates the tertiary
streams; and so on.
Note in Figure 160 that both the label definition and the label reference are in the middle
of a pipeline; each connects an input stream as well as an output stream. The most
common beginner’s error is to specify two label references where only one should have
been used.
In summary: A stage can be in two or more pipelines at the same time and have access to
two or more data streams, one in each pipeline. Two concepts are introduced to support
this:
A stage can be referenced in several places in a set of pipelines if its definition is
prefixed with a label (up to eight characters followed by a colon). The first occurrence of a
label defines the primary stream (number 0) for that stage. The secondary stream is
defined the second time the label appears, and so on.
To be able to use more than one device driver that must be first in a pipeline, the
character defined by the option ENDCHAR is used to delimit multiple pipelines in a
single command string. (ENDCHAR is usually abbreviated to END).
locate was the first selection stage to be modified to write rejected records to its secondary
output stream; this opened the field of transformations that depend on record contents,
because records could be processed differently when they were selected than when they
were rejected.
In Figure 161 the individual characters represent records. The relative sequence is shown
by their order from left to right. locate passes the records that contain “a” to its primary
output stream and the records that do not contain “a” to its secondary output stream.
But how do we merge these records back into one file after they have been processed?
faninany does precisely this:
Figure 163 shows how to combine the two ideas, to prefix selected lines of a file with a
marker. The first part of the figure is the EXEC that was run (the command is too long to
type directly). It is followed by the response and a topology diagram.
This is all proper, but with variable format files on CMS, performance of the second
pipeline can be inferior to the first one. More importantly, there are cases where you wish to
block the composite data stream and put it to a medium where you cannot write records
piecemeal. Think of a tape file, for instance; you may not be able to obtain the result
desired simply by running more than one pipeline, because this might create a short block
in the middle of the output file.
The gateway fanin combines the input on all its input streams, writing the data in the order
specified. Output is a single data stream:
Here you see two pipelines defined. The option ENDCHAR (abbreviated to END) specifies
that the question mark (?) separates pipelines. The end character can be any character
(that has no other special meaning to the scanner), but it cannot be used in the individual
pipelines unless it is escaped.
The intersection occurs in the fanin stage, which has the label a:. The label is defined on
the first line of the pipeline specification; it is referenced on the next line. The primary
pipeline is selected when fanin starts. It passes all input records to the primary output
stream; when fanin reaches end-of-file on the primary input stream, it switches to the
secondary input stream and then passes those records to the primary output stream.
The example in Figure 165 on page 78 shows how to use multiple streams in general.
This particular example can also be done with append as was shown in Figure 71 on
page 38.
The file is read from the reader and split into two streams by drop, which removes the tag
record from the primary input stream and sends it to the secondary output stream. The
primary stream carries the file itself; it is sent to the printer. The secondary stream
extracts the tag from the first record of the file (discarding the CCW operation code in the
first position) and sets the tag for device 00E using cp. This works because the tag in the
SPOOL file is set when the CP command “close” is issued after the pipeline has completed
processing.
Figure 168 shows GENMLIB EXEC, which generates a macro library from a packed COPY
file. The first two lines of the pipeline specification read the file and unpack it (if it needs
to be unpacked). The last stage names the REXX program in Figure 169 on page 81.
8 maclib also creates a TXTLIB, acceptable to members, from TEXT decks, as long as there is a *COPY record between members.
Figure 169 shows a subroutine pipeline to generate a macro library. Subroutine pipelines
are described in Chapter 7, “Writing a REXX Program to Run in a Pipeline” on page 97.
The construct “*:” at the beginning of the subroutine pipeline means that it should be
connected to the current input of the stage that issued the CALLPIPE pipeline command.
The maclib stage has one input stream (its primary) and three output streams. The primary
output stream is connected to the primary input stream of >. The secondary output stream
is connected to a buffer stage where it is accumulated. The tertiary output stream is
connected to the secondary input stream of >.
Figure 167 on page 80 shows a diagram of the topology of the part of the subroutine that
creates the library. The last pipeline (which begins '?o:',) is not shown; it processes a
copy of the directory (made with fanout) to determine whether the file has duplicate
member names. It writes a diagnostic to the terminal for each name that occurs more than
once.
GENMLIB also accepts variable record format SCRIPT files that have *COPY records to iden-
tify members.
If you wish to process the contents of the newly generated macro library without writing
the file to disk, you must use buffer to delay the main part of the file until the correct first
record is written to the secondary output stream. Figure 170 on page 82 shows how to
create a single stream with the contents of a macro library. It relies on the ability to
specify the order fanin is to process its inputs.
Decoding Trees
Selection stages emit data on one of two pipelines. They can be cascaded to any depth.
Create multiple identical streams with the gateway fanout. (And you can write your own
in REXX if these do not meet your needs.) You now have a way to process the file with
particular filters, depending on the contents of the record. Here is a decoding tree:
An input record appears on exactly one output line as long as the decoding tree consists
exclusively of selection stages; records can be processed differently on each stream out of
the decoding tree. Though you can write a file at the end of each stream, you can also
gather it all into a single stream again. Use the gateway faninany to gather the records as
they appear on the various lines. If you want a different ordering, you must buffer records
you wish to defer. buffer and sort do that; then use fanin as already shown.
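A sketch of such a tree (the selection criterion and the file names are illustrative): records that contain “ERROR” are translated to upper case on the upper branch; the rest pass through unchanged; faninany gathers both branches into one stream.

```
pipe (end ?)
   < input file a | l: locate /ERROR/ | xlate upper | f: faninany | > output file a ?
   l: | f:
```

Because locate, xlate, and faninany do not delay the record, the output file keeps the order of the input file.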
The first word of the user record contains the word “user”, which can be abbreviated to
one letter. Likewise the first word of the resource record contains “link”, also with a
potential abbreviation to one letter. To allow for pretty formatting, leading blanks are
allowed in this file; case is ignored.
The question is, how to construct a file that lists which users have access to what
resources. You could write a REXX filter to perform this function, but since writing filters
in the REXX programming language is described in a later chapter, you must resort to
pipethink to develop a solution that consists entirely of built-in programs.
The solution seems to hinge on selecting records based on abbreviations and how to
remember the user ID that applies to a resource record.
A quick check in the help menu shows abbrev conveniently near the upper left corner; it
seems to do the job. As for remembering the user ID, the built-in program is juxtapose
(which means putting things next to each other). The topology of the solution (after the
initial strip and capitalisation) is:
The first (the larger) abbrev stage selects user records, which travel on the upper pipeline
where the second word is extracted and delivered to juxtapose’s primary input stream
padded out to ten characters. juxtapose stores the contents of the record in a buffer and
discards the record.
Resource records travel on the lower pipeline, where the second and third words are
extracted and passed to the secondary input stream of juxtapose.
When juxtapose senses a record on its secondary input stream, it appends this record to the
one stored in its buffer and writes the composite record to its primary output stream.
Thus, the user ID has been prefixed to the resource name.
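A sketch of the specification just described (the file name, the ten-column pad width, and the exact spec field ranges are illustrative; refer to the reference part for the precise syntax):

```
pipe (end ?)
   < access data a | strip | xlate upper |
      u: abbrev USER 1 | spec word 2 1.10 | j: juxtapose | > report file a ?
   u: | abbrev LINK 1 | spec word 2 1 word 3 nextword | j:
```

The secondary output stream of the second abbrev is not connected, so records that match neither pattern are discarded.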
Nota bene: In this example, the input records to juxtapose were derived from the same
data stream and the filters in the multistream path did not allow records to get out of order;
that is, they do not “delay the record”. (This concept is described later.)
juxtapose is likely to give unexpected results when the two input streams are derived from
sources, such as independent device drivers, that can produce records concurrently. When
records are available at both input streams at the same time, it is unspecified and unpre-
dictable in which order the streams will be read. Given a chance, juxtapose might drain
all records from its primary input stream before reading any records from the secondary
input stream.
You can use synchronise to make records on two streams march in lockstep, but it is prob-
ably easier to use spec or overlay to combine sets of input records into one output record.
Destructive Testing
Examples in earlier chapters have shown how to prefix a key field to records, perform
some processing, and delete the key field afterwards. This, for example, allows for sorting
or selection based on a transformation of the record or parts of the record. To be able to
delete the key field, it must have a fixed length or be terminated by some particular char-
acter, or in other ways be identifiable.
When adding a key field cannot bend a selection stage to your needs, you may consider
the technique of testing by destruction. The crux of this technique is to keep a copy of the
original record in a safe place while the record to be tested is transformed in an
irreversible way and then passed to the selection stage. But the output from the selection
stage is not used as output data; rather, it is used to control which output stream is to
receive the original record.
In Figure 175, fanout and predselect represent the built-in programs, whereas xform and
select are selected by you to perform the particular transformations and selection required.
xform may represent a cascade of filters.
Thus, if the transformation and the selection do not delay the record, the pipeline segment
above as a whole will behave like a selection stage.
But there is one small but extremely important detail that you must remember if you build
your own selection stages using this approach: Do specify the option STOP ANYEOF on fanout;
this makes sure that end-of-file can propagate backwards, both from the stages that follow
in the pipeline and from the selection stage. If you make this kind of pipeline, it might be
a good idea to test it with tolabel as the selection stage.
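As a sketch of this shape, with xlate upper standing in for the transformation and locate for the selection (the stream wiring of predselect is a simplification here; refer to the reference part for its exact stream contract):

```
pipe (end ?)
   < input file a | f: fanout stop anyeof | p: predselect | > selected file a ?
   f: | xlate upper | s: locate /PATTERN/ | p: | > rejected file a ?
   s: | p:
```

The originals travel from fanout directly to predselect; the transformed copies drive the selection, whose two outputs tell predselect which output stream should receive each original record.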
Update
update applies a single update to a data stream. The master file is on the primary pipeline;
the update records are on the secondary one. Multilevel update is implemented as a
cascade of update stages.
In this two level update, the master file travels on the top pipeline. The individual update
files are read into the secondary input stream of the update stages that apply the updates.
Unlike the CMS command, the update stages apply the updates in parallel as the master file
travels along the top pipeline.
This means that the log files are created in parallel too. But you probably do not wish to
see the composite log file in the order that individual lines are written; presumably you
would like (or more likely insist) that the first update stage’s log is first in the log file,
followed by the second one, and so on.
To generate a file that aggregates files that are generated in parallel, all except the first file
must be buffered in a buffer stage. The individual files are then fed to fanin, which will
copy each file from its input streams as a whole to its primary output stream. In the figure
we have taken the artistic licence to show the buffer stage backwards; records flow from
right to left through it. We have also taken the liberty to show the secondary input stream
to fanin at the top.
Note that all built-in programs are reentrant. It is perfectly proper to use a program
several times; each invocation has (in general, at least) no information about other invoca-
tions; and it has no inclination to find out.
Multilevel update pipelines are usually generated by a program that reads the control file
and the auxiliary control files to determine which updates are to be applied and then
generate the appropriate pipeline. (Yes, you can even write a pipeline to do that.) Still,
you might be curious how the topology in Figure 176 is coded up in an EXEC. This is
how:
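One way to code a two-level update as a sketch (file names are illustrative; the second log stream passes through buffer so that the two logs are appended to the log file in order):

```
pipe (end ?)
   < source assemble a | u1: update | u2: update | > source new a ?
   < first update a | u1: | f: fanin | > update log a ?
   < second update a | u2: | buffer | f:
```

Each label reference connects both the secondary input stream (the update file) and the secondary output stream (the log) of the corresponding update stage.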
Merge
merge reads records from all its input streams and writes the merged file to the primary
output stream. When each input file is already sorted, the output file will also be sorted.
When merge sees records with the same key on two or more input streams, it writes the
record in the order of increasing stream numbers. merge supports up to ten input streams.
You could cascade merge stages, but you might run out of storage if this means reading a
large number of disk files concurrently.
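For example, two files that are already sorted can be merged like this sketch (file names are illustrative; merge uses the whole record as the key by default):

```
pipe (end ?) < first sorted a | m: merge | > both sorted a ? < second sorted a | m:
```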
Collate
collate merges detail records into a master file. By default, the detail records follow the
master record on the primary output stream. Master records for which there are no detail
records are written to the secondary output stream. Unmatched detail records are written
to the tertiary output stream. Additional plumbing is required to insert master records for
which there are no detail records in the primary output stream; refer to the description in
the reference part of this book.
In contrast to merge, collate supports only two input streams; it assumes that there is only
one master record that has a particular key; and it allows you to specify whether the detail
records should precede or follow the master record.
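A sketch of the three output streams (file names are illustrative; the key here is assumed to default to the whole record, whereas real uses normally specify key ranges as described in the reference part):

```
pipe (end ?)
   < master file a | c: collate | > matched file a ?
   < detail file a | c: | > lone masters a ?
   c: | > lone details a
```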
Lookup
Use lookup to select records whose key field is one of several (or is not one of several).
The secondary input stream to lookup contains the reference records. These records are
read into lookup when it starts and are stored in a buffer internally while the file on the
primary input stream is processed. When a record from the primary input stream has the
same key as one of the reference records, the input record and the matching reference
record are written to the primary output stream. Unmatched records from the primary
input stream are written to the secondary output stream. When lookup gets end-of-file on
its primary input stream, the reference records that were never matched are written to the
tertiary output stream.
lookup is often used to remove “stop words” (words that are too common to be of interest)
from a set of words being indexed:
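A sketch of this use (file names are illustrative; the words arrive one per record, so the whole record serves as the key):

```
pipe (end ?)
   < document words a | l: lookup | hole ?
   < stop words a | l: | > index words a
```

Matched words, which are the stop words, leave on the primary output stream and are discarded by hole; the words to index are the unmatched records on the secondary output stream.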
lookup also supports dynamic change to the reference while it processes records from the
primary input stream. This is useful when it is used in a server to validate the clients’
authorities against a database that is updated dynamically. A record on the tertiary input
stream is added to the reference, in effect adding a user or a privilege to the table; a record
on input stream 3 causes the corresponding key to be deleted from the reference, in effect
removing privileges.
The following sections contain a synopsis; the full story is in Chapter 22, “Scanning a
Pipeline Specification and Running Pipeline Programs” on page 242.
Often a stall occurs because a stage that accesses multiple streams is trying to read from a
stream on which no record will become available or is trying to write to a stream that will
not be read. To stall, a pipeline must also have more than one path between some stages.
In general, you must consider the possibility of the pipeline stalling when there are one or
more meshes in the pipeline topology; that is, when the pipeline contains two stages
between which there is more than one path.
The buffer stage in Figure 167 on page 80 ensures that the inner loop in the pipeline does
not stall; the fact that maclib severs its primary output stream and its secondary output
stream before writing the final record to the tertiary output stream ensures that the outer
loop does not stall either.
If a selection stage puts a fork in the pipeline, it is normal to join the two branches with
faninany. Because faninany is able to read from any input stream that has a record avail-
able, it will be able to keep records flowing out of the multistream segment.
When records from a stage can travel over two or more paths to a fanin stage, you must
ensure that records arrive at the inputs of fanin in the correct order, which is all records on
the primary input stream, then all records on the secondary input stream, and so on.
Buffer stages (buffer or sort) or elastic stages connected to all input streams other than the
first will prevent stalls, at the expense of storage.
This section describes how you can ensure the order of the output records from a decoding
network that has been joined into one stream with faninany.
Two concepts are important to reason about the relative ordering of records:
locate (and indeed every selection stage) passes the input record unmodified to an
output stream. When both output streams are connected, a selection stage writes an
input record once, and only to one of the output streams.
locate (and indeed every selection stage), spec, and faninany are examples of built-in
programs that do not delay the record. This means that output records are in the same
sequence as the corresponding input record, and that an output record is produced
before the corresponding input record is consumed. If the stages in the mesh do not
delay the record, the output from faninany will be in the same order as the input to the
selection stage.
Thus, in Figure 163 on page 77, locate does not read another input record until console
has written the previous record to the terminal.
In general, to ensure that output records remain in the order they enter a pipeline segment
having parallel paths, any part of the pipeline specification that can pass a record on
parallel paths must consist entirely of stages that do not delay the record.
Some built-in programs (notably specs) synchronise their input streams; that is, they ensure
that there is a record available on all their inputs before beginning a processing cycle.
Connecting the outputs from fanout or chop to a synchronising stage without delaying the
record will result in a stall. To ensure that the resulting output record is not delayed, you
should insert a delay of one record into the topmost of the parallel paths between the chop
and the spec stage. For example, this pipeline fragment reverses the first word of each line
and puts it at the end of the line:
Had the words in the output record been in the same order as in the input record, you
could have used a cascade of faninany and join to perform the operation that is done with
specs above.
When you wish to trace data flowing in the pipeline to debug it, you should be careful not
to insert a device driver directly in the main pipeline path, for a device driver does not
propagate end-of-file backwards. After all, writing the output file is quite productive and
there is no need to terminate just because further processing has terminated.
You can use fanoutwo to obtain a copy of the records that flow in the pipeline and be sure
that end-of-file is propagated backwards and that the secondary output stream does not
interfere with the main stream.
If your CMS/TSO Pipelines does not have fanoutwo you can approximate its behaviour
with fanout STOP ANYEOF. This will not isolate the main pipeline from the device driver;
if the device driver terminates for some reason, the main pipeline will also shut down.
! You can reference each field by its column number, for example, the virtual device number
! begins in column 23, but perhaps it would be clearer to refer to it simply as Vdev.
! Someone would have to make this definition manually, as the format is not present in a
! machine readable form. One possible definition is shown in Figure 181 below. Assume it
! is stored in the file QACC RECORD. We show one structure in this example, but such a file
! can contain multiple structure definitions.
! The file is in free format. You can span the definition across as many or as few lines as
! you like.
! A structure is defined by a colon, which is followed by the name of the structure (blanks
! are optional after the colon). The fields are then defined on subsequent lines in this
! example.
! Structure names and field names are case sensitive unless the structure is defined as
! caseless, that is, STRUCT ADD ANYCASE was specified to define it. They must begin with a
! letter from the English alphabet or one of the characters “@#$!?_”. Subsequent characters
! may also include the digits 0 through 9. This is the same syntax as a valid REXX simple
! variable, but unlike REXX, it can be case sensitive. Note that the special characters are
! codepage sensitive; your terminal may show them differently.
! Refer to the reference article for structure for the complete syntax of a structure definition.
! Note that the structure definition is the input stream to structure; it need not come from a
! file; it could have been generated by a conversion utility upstream in the pipeline.
! You can even annotate the structure definition, as long as you remove your comments
! before passing the record to structure.
! We referenced the field “fully qualified” in this example. Were you to reference several
! fields, you can specify the structure name with the keyword QUALIFY, as shown in the
! slightly contrived example in Figure 186.
! You may also specify the option QUALIFY to define a default qualifier for all stages of the
! pipeline specification, but the option must be spelt in full; no abbreviation is available.
! C A character string.
! D Binary integer (big endian) in two’s complement notation. The input field may
! have any length.
! F System/360 hexadecimal floating point. The input field may have any length,
! but only the first sixteen bytes are used (corresponding to extended precision).
! If it is present, the eighth byte is ignored; it is the characteristic of the lower
! half.
! P System/360 packed decimal integer. A scale may optionally be associated
! with the member by specifying a signed number in parentheses. A positive
! scale specifies the number of decimal places; a negative one specifies the
! number of integer digits to drop on the right.
! R Byte-reversed (little endian) binary integer in two’s complement notation. The
! input field may have any length.
! U Unsigned binary integer. The input field may have any length.
! A blank type (meaning that no type is defined), as well as any other letter, is treated the
! same as a character string.
! Using Arrays
! A member of a structure can be defined as an array of fixed or variable dimensions. Such
! a member is referenced using a subscript in parentheses after the member name. The
! subscript must be a positive number, except that spec is able to use computed subscripts in
! some contexts.
! Our example structure has a slightly contrived array on top of the directory name. To
! select the second member of this array:
! The entire array is selected when you specify a member that is an array without providing
! a subscript.
! pipe cms query accessed | pick qualify qacc member Vdev == /DIR / | cons
! Structure not defined: qacc.
! ... Issued from stage 2 of pipeline 1.
! ... Running "pick qualify qacc member Vdev == /DIR /".
! Ready(01392);
! The example also shows that the structure is no longer known to CMS/TSO Pipelines.
! Structure Scopes
! You can define structures in caller, set, or thread scope. A number of structures are
! predefined in CMS/TSO Pipelines; they are in the built-in scope. Structure names are
! resolved in the order caller, set, thread, built-in.
! Use caller or set scope for production strength applications, unless they are run by an EXEC
! that contains multiple PIPE commands (you may consider changing that to a single PIPE that
! runs a REXX program to issue the multiple pipelines with CALLPIPE). Thread scope is
! appropriate for interactive use. Refer to the usage notes for structure for further details.
! Caller Scope
! Structures in caller scope must be defined by a CALLPIPE specification, which
! logically makes the structures local to the stage issuing this subroutine pipeline and all its
! descendants. The scope is dismantled when the stage terminates. Structures defined in
! caller scope obscure, for the stage's callees only, all structures of the same name in all
! other scopes.
! There can be any number of caller scopes within a pipeline set. In general, the caller
! scopes form a forest that has the pipeline set at its root.
! Set Scope
! Set scope is the default. Structures defined in set scope last until the end of the current
! PIPE command or until the current record being processed by runpipe is consumed. At that
! point the pipeline set is dismantled and all its contents are discarded, including structure
! definitions in set scope.
! Structures being defined in set scope can embed any already defined structure, except for
! structures defined in a caller scope in the innermost pipeline set.
! A new set scope is established on a recursion into CMS/TSO Pipelines and by runpipe.
! A structure defined in set scope temporarily obscures a structure by the same name in all
! nesting pipelines, in thread scope, and in built-in scope.
! Thread Scope
! Thread scope is effectively permanent. Structures defined in thread scope remain defined
! until the end of the CMS process or the z/OS task.
! Structures in thread scope can embed only structures in thread scope and built-in scope.
! Built-in Scope
! Built-in structures are searched last when a structure name is resolved; thus, they may be
! obscured by structures you define, but they cannot obscure a structure defined by you.
! You can reference built-in structures freely in structure definitions; you can list the
! contents of a built-in structure using structure LIST; you can list the names of the built-in
! structures using structure LISTALL BUILTIN; but you can neither add nor delete a built-in
! structure.
On CMS, the maximum nesting of 200 CMSCALLs does not apply to REXX filters; you can
have as many REXX programs running in a pipeline as you have virtual storage for. On
z/OS, REXX filters run in separate reentrant environments. There is a predetermined
maximum number of possible concurrent REXX environments in an address space. The
installation can set this number.
Each REXX stage is a separate REXX program. It has its own set of variables which are
distinct from all other variables in all other invocations of REXX programs; other REXX
programs run without disturbing a program’s variable pool.
A REXX filter reads its input stream(s) and writes its output stream(s) as and when it
chooses; the program decides when its task is complete and it should exit. The pipeline
dispatcher runs the pipeline stages so that data move through the pipeline. When a filter
reads a record, the dispatcher often turns around and runs some other stage so that it in
turn can produce the record to be read.
See Chapter 25, “Pipeline Commands” on page 723 for a reference of all pipeline
commands.
Concentrate on getting a simple program working first. Postpone complex programs until
you understand the environment REXX filters run in.
Figure 190 (Page 1 of 2). HELLO REXX, a Simple REXX Filter with Usage
/* HELLO REXX: REXX filter */
'output' 'Hello, World!'
pipe hello | console
Hello, World!
Ready;
Figure 190 (Page 2 of 2). HELLO REXX, a Simple REXX Filter with Usage
pipe hello | console
Hello, World!
READY
Figure 191 shows the basic copy filter. Add instructions to it to build a filter processing a
data stream.
The pipeline command READTO reads from the pipeline. The argument (record) is the
name of the variable that receives the contents of the next record. The assignment is a
side effect of issuing the pipeline command, as is setting the variable RC to the return code.
This is why the name is a literal inside the quotes. A record is discarded if READTO is
issued without an argument.
OUTPUT writes the argument string as a record to the pipeline. An expression is evaluated
by REXX before the pipeline command is processed by CMS/TSO Pipelines. Note the
difference between READTO and OUTPUT: the latter has the record to write as its argument
string; the former has the name of a variable as its argument.
Return code 12 on a READTO or OUTPUT pipeline command means end-of-file; REXX trans-
fers control to the error: label and the stage exits with return code 0.
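Putting READTO and OUTPUT together with the error: label, a minimal copy filter might read as follows (a sketch; the variable name RECORD is arbitrary):

```
/* A minimal copy filter (sketch) */
signal on error              /* return code 12 leaves the loop    */
do forever
   'readto record'           /* read the next record into RECORD  */
   'output' record           /* write it to the output stream     */
end
error: exit RC*(RC¬=12)      /* end-of-file exits with RC zero    */
```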
A REXX filter using only the READTO and the OUTPUT pipeline commands is suitable for a
| pipeline specification that contains just one pipeline, but it has the potential to “delay the
record” (see “Keep the Order of Records” on page 89). Such a delay can lead to unex-
pected results in a multistream pipeline network; thus we recommend that you learn to
write robust REXX filters from the beginning. Figure 192 on page 99 shows a copy
program that does not delay the record.
Figure 192. COPY Program that Does not Delay the Record
/* COPYND REXX -- Copy without potential to delay */
Signal on novalue
do forever
'peekto line' /* Look for next input line */
/* Process line here */
'output' line /* Pass it to the output */
'readto' /* Consume the record */
end
The PEEKTO pipeline command sneaks a peek at the next input record without consuming
it. When control returns after the PEEKTO pipeline command and the return code is zero,
the stage that produced the record is now waiting in an OUTPUT pipeline command. You
can peek as often as you like; the same record will be shown until you issue a READTO
pipeline command to consume the record. The producer can then resume after its OUTPUT
pipeline command.
Figure 193 shows a simple variation of the copy filter: a program that prefixes its argument
string to each record being copied.
This can be generalised to perform an arbitrary operation on the record (see Figure 194 on
page 100) where the argument string is an expression computing the record to write to the
pipeline.
Figure 195 shows rxp used with REXX built-in functions to manipulate the data stream.
Note that the output expression is enclosed in parentheses. This ensures correct operation
even when the expression contains a relational or Boolean operator, which has lower
precedence than the blank operator used to concatenate the command (OUTPUT) to the
string to be written.
Performance of rxp improves if the complete loop is interpreted rather than each OUTPUT
pipeline command:
But as you can see, this performance improvement comes at the price of making the
program much harder to read. We recommend that you keep things simple until you really
have a performance problem resulting from interpreter overhead.
CMPF EXEC reads the first file into the primary input stream of scmp; it reads the second
file into the secondary input stream of scmp; and it displays the output on the terminal.
When you write a multistream REXX filter, you use PEEKTO, etc., to perform I/O operations
just as you would do with a pipeline specification that contains just one pipeline. Use
SELECT to specify which stream is used by subsequent I/O commands. SELECT INPUT 1
switches to the secondary input stream for subsequent PEEKs and READTOs. SELECT INPUT
0 switches back to the primary input stream. Output streams are selected similarly; for
example, SELECT OUTPUT 1. SELECT ANYINPUT switches to any input stream that has a
record available; if there is no record available, it waits for one.
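As a sketch, a filter that copies records alternately from its two input streams to the primary output stream might read:

```
/* Interleave the two input streams (sketch) */
signal on error
do forever
   'select input 0'          /* primary input stream    */
   'peekto line'
   'output' line
   'readto'                  /* consume the record      */
   'select input 1'          /* secondary input stream  */
   'peekto' 'line'
   'output' line
   'readto'
end
error: exit RC*(RC¬=12)
```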
scmp, which is shown in Figure 198 on page 102, discards the first run of identical
records from the primary input stream and from the secondary input stream. A single
record containing the count of discarded records is written to the primary output stream.
The return code is 0 when both streams are at end-of-file (and thus their contents are
identical); it is 4 when one of the streams is at end-of-file; and it is 8 when neither stream
is at end-of-file.
peek:
parse arg which
'select input' which /* Select stream */
If RC ¬=0 /* Not defined? */
Then exit -abs(RC)
If i>0
Then 'readto' /* Discard previous unless first time */
'peekto data.which' /* Sneak a peek */
If RC<0 | find('0 12', RC)=0
Then exit RC /* Serious trouble? */
eof.which=(RC=12) /* EOF? */
return
The interesting part is in the subroutine peek. Its argument is the number of the stream to
read. The stream is selected with SELECT. The previous record is discarded except for the
first time, and the next record is loaded into the data variable with PEEKTO, which has a
peek at the record without consuming it. The return code sets a variable indicating end-of-
file. The data variable data.which is dropped at end-of-file, which is why the test for
end-of-file is performed inside the loop separate from the test for identity.
Controlling Streams
When a REXX filter is specified with secondary streams (or more), these streams will be
available to it. The program can read from them and write to them as and when it pleases;
it can reference them in subroutine pipelines (to be described later); and it can even throw
them away.
In a multistream REXX filter, the SELECT pipeline command specifies which stream to read
and which one to write. Subsequent PEEKTO, READTO, and OUTPUT pipeline commands
will refer to the stream you selected until you select another one. You can select the input
stream independently of the output stream; or you can select both input and output with
one command. The primary input stream and the primary output stream are selected when
the REXX filter starts.
Use SELECT ANYINPUT to select whichever stream has a record available. If more than one
stream has a record available when you issue the SELECT ANYINPUT pipeline command, it
is unspecified which input will be selected.
Use SELECT OUTPUT to select the stream to which OUTPUT writes its argument string.
SELECT BOTH selects a stream at both the input and the output side. SELECT ANYINPUT
selects whatever input stream has a record available; STREAMNUMBER INPUT sets the return
code to the number of the stream currently selected.
Use the MAXSTREAM pipeline command to determine the highest stream number available.
Thus, MAXSTREAM returns 1 when you have secondary streams, but not tertiary streams.
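For instance, a filter that requires a secondary input stream can verify that one is defined before doing any work (a sketch; the return code 16 is an arbitrary choice):

```
'maxstream input'            /* RC is the highest input stream number */
if RC<1
   then exit 16              /* no secondary input stream defined     */
```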
Write a pipeline specification after the command verb (see Figure 199). This particular
subroutine pipeline issues CP commands and translates the response to lower case. Use
cplower whenever you wish a cascade of cp and xlate.
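Such a filter might be written as this sketch; the REXX continuation commas join the clauses into a single pipeline specification:

```
/* CPLOWER REXX -- issue CP commands; response in lower case (sketch) */
signal on error
'callpipe (name cplower)',
   '*:',                     /* commands from the selected input   */
   '| cp',                   /* issue them to CP                   */
   '| xlate lower',          /* fold the response to lower case    */
   '| *:'                    /* response to the selected output    */
error: exit RC*(RC¬=12)
```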
The REXX program waits for all stages of the new pipeline to complete before it continues.
The variable RC is set to the “worst” return code from any of the stages.
Specify where to connect the input and output streams of the running stage to the new
pipeline with connectors of the form “*:”. For simple subroutines, put one of these at each
end of the pipeline specification to indicate that the new pipeline should be connected to
the currently selected streams.
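Along those lines, CPLOWER REXX might be written as follows (a sketch, not the manual's Figure 199):

/* CPLOWER REXX: issue CP commands; fold the response to lower case */
signal on novalue
'callpipe (name cplower) *: | cp | xlate lower | *:'
exit RC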
Using PEEKTO in Figure 198 on page 102 to see the record without consuming it means
that the program can be called as a simple front end to a more sophisticated compare
program. This is because the records that do not match stay in the producer’s output
stream and can be read again, for instance by a control stage.
This subroutine pipeline copies records until tolabel reads a record that contains :body. in
the first six columns. tolabel then terminates without consuming the record. This causes
end-of-file to propagate from within the subroutine pipeline towards the outside. The input
and output streams are reconnected to the REXX program at this point, and the program can
now process the body of the file.
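Such a subroutine pipeline might look like this (a sketch based on the description above):

/* Copy the header; stop at the record that begins with :body. */
'callpipe (name header) *: | tolabel :body. | *:'
If RC<>0 Then exit RC
/* The :body. record is now the next one on the input stream */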
Short Circuits
A subroutine pipeline with two connectors and no stages short circuits the streams; that is,
it connects the two neighbour stages as if the stage that issued the CALLPIPE pipeline
command were not there. That is, records are passed from the neighbour to the left
directly to the one to the right. The calling stage waits while records fly overhead and
resumes when end-of-file is reflected, at which time the output stream is connected back to
the neighbour on the right.
One use of this is to write a variation of literal where the literal record is written after the
input stream is copied to the output. You could have written a loop to copy the stream but
the short circuit is simpler and faster. Figure 201 shows LITAFTER REXX, the REXX formu-
lation of append literal.
signal on novalue
'callpipe (name litafter) *:|*:' /* Copy the file. */
if RC=0
then 'output' arg(1) /* Write literal text */
exit RC
Use the pipeline command SHORT, rather than a short circuit pipeline, when you are not
going to write to the output stream after the input stream has been copied to it. (SHORT is
more efficient than the short circuit subroutine.) literal can be formulated in REXX using
SHORT (see Figure 202 on page 105).
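The formulation might look like this (a sketch, not the manual's Figure 202; it mirrors LITAFTER in Figure 201 with the two steps reversed):

signal on novalue
'output' arg(1)              /* Write the literal record first  */
If RC=0 Then 'short'         /* Then short the streams          */
exit RC*(RC<>12)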
Figure 203.
pipe rexxvars | take 1 | console
s CMS COMMAND PIPE XEDIT * PIPE XEDIT
rexxvars writes the source string in its first output record. It then writes two records for
each variable in the environment. These lines are discarded in the example above, because
take only copies the first record. (rexxvars is smart enough to detect that its output is
being discarded and terminates quickly.)
Making such a scanning routine available to the REXX filter programmer ensures consist-
ency between built-in programs and REXX filters.
The scanning routines are called through pipeline commands from a REXX filter; all param-
eters are specified in the command string and all results are fed back through variables,
which are set as a side effect.
Thus, the argument string on the scanning pipeline commands consists of three parts, each
separated from the next by a single blank.
1. Keywords, where required.
2. Literal variable names for the result and the residual string. CMS/TSO Pipelines sets
the variables as a side effect of the pipeline command.
3. The string to be processed, after exactly one blank.
Using this model, CMS/TSO Pipelines supports pipeline commands to scan the argument
string and to get the contents of an input range in a record.
As an example, consider a simplified version of the insert built-in program with this
syntax:
──INSERT──delimitedString──┬────────────┬──
└─inputRange─┘
Figure 204 shows the beginning of a REXX filter that scans its argument string according
to the syntax diagram above:
The first command scans the beginning of the argument string for a delimited string. The
result is two strings, the one scanned and the remaining argument string after the delimited
string. These are stored into the variables string and rest, respectively.
The second command scans what remains after the delimited string for an inputRange.
That is, it determines the position in the input record where the string should be inserted.
The keyword OPTIONAL specifies that an omitted range should be treated as the complete
record. Except for the keyword, SCANRANGE is similar to SCANSTRING. The range is
stored into the variable range and the remaining string is stored into rest. (Note that
REXX referred to the value of the variable when it built the command string; you can reas-
sign its value, just as you can in the REXX Parse instruction.)
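Assembled from the description above, the two scanning commands might read as follows (a sketch; the exact operand order is an assumption to be checked against the figures):

'scanstring string rest' arg(1)        /* Delimited string into STRING */
If RC<>0 Then exit RC
'scanrange optional range rest' rest   /* Range token into RANGE       */
If RC<>0 Then exit RC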
Both the string and the range variables contain the result of the scanning routine, but
whereas you can use the string directly in REXX (and no doubt you will), the range is a
“token” in the original English meaning of the word. It is something that CMS/TSO
Pipelines has given you to hold for a while; when you wish to refer to the part of an input
record that is defined by this particular input range, you hand the token back to CMS/TSO
Pipelines.
The arguments to the GETRANGE pipeline command consist of three words and a string.
The first word specifies the name of the variable that contains the token that represents the
input range; this variable was set by SCANRANGE. The second word is a keyword; it
specifies that you wish the result in a stemmed array. The third word specifies the stem of
the compound variables into which the result is to be stored. The remainder of the
command (after exactly one blank) is the input line from which the input range is to be
extracted.
The variable stem.0 is set to one or three. It is set to one when the input range is not
present in the record; stem.1 is then set to the entire record. When stem.0 is set to three,
the part of the input record up to the beginning of the range is stored into stem.1; the
contents of the range are stored into stem.2; and the part of the input record after the
range is stored into stem.3.
Scanning Arguments
There is one more twist to scanning arguments. Suppose you want to pass part of the
argument string on to a built-in program, for example, an inputRange. You could use the
scanning commands already described in “Scanning the Argument String” on page 105,
but the resulting token is not of much use.
Instead, you can infer the input string that was scanned by the length of the residual string:
Be careful when you use part of the argument string in a pipeline specification; the string
may contain a stage separator or an end character, which will need to be doubled up to be
escaped. The probability of this happening in an inputRange is low, but not zero; still,
you may wish to accept this restriction. (For example, the word separator could be
specified as a vertical bar.)
When scanning for a delimitedString, however, the exposure is real, but the cure is
different. The idiomatic way to pass a delimited string to a stage is shown in Figure 207.
The trick is to convert the string to hexadecimal notation, which means that the string in
the pipeline specification will not contain any special character at all.
CMS/TSO Pipelines provides two pipeline commands to issue messages: MESSAGE and
ISSUEMSG; and CMS has the XMITMSG command.
The MESSAGE pipeline command simply writes its argument string to the current message
disposition. This is usually your terminal, but see also runpipe. You must supply the
entire message, including the message prefix, for example:
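A sketch (the message prefix shown is invented for illustration):

'message XYZ123E Unexpected record format.'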
The advantage of the MESSAGE pipeline command is its simplicity. The disadvantage is
that you have the message text deep in your code, which makes it difficult to change and
almost impossible to support translation to national languages (NLS) and multiple message
repositories.
Setting things up for ISSUEMSG is best done in a subroutine, as shown in Figure 210. The
first argument string contains the message number; subsequent argument strings contain
strings to be substituted:
The advantage of ISSUEMSG is that it uses the CMS/TSO Pipelines infrastructure and thus
allows for NLS to the extent that CMS/TSO Pipelines does. The disadvantage is that it may
be cumbersome to add your own message to CMS/TSO Pipelines’s repository. Prior to
level 1.1.10/0015 it would entail making a filter package to contain the repository. From
1.1.10/0015, you can add your messages to the FPLUME REPOS repository and install this
user repository. (See Chapter 28, “Configuring CMS/TSO Pipelines” on page 839.)
Use the CMS XMITMSG command to issue a message using a CMS message repository. This has the
advantages of using standard message repositories, but the disadvantage that the message
will be written to the terminal irrespective of the CMS/TSO Pipelines message disposition.
If your program has allocated resources before committing, it must deallocate those
resources when it discovers that the pipeline is being abandoned, just as it must when it
terminates normally.
Propagating End-of-file
You should make an effort to avoid unnecessary processing. For example, if an output
stream has been severed by its consumer, there is no point in producing output on it once
you know that it has been severed. And when you realise that all output streams are gone,
you should terminate unless you can do useful work (which would imply that your REXX
filter acts as a device driver rather than as a true filter).
So how do you realise that it is time to call it quits? You may wish to check the status of
your streams from time to time. You can do this in several ways:
Issue the STREAMSTATE pipeline command to determine the state of a particular
stream. You can loop over all defined streams to get the whole picture. This is a bit
cumbersome.
Issue the STREAMSTATE ALL command with the name of a variable to be set.
CMS/TSO Pipelines then stores the status of all defined streams into this variable.
You still need to write a loop to process the status of the individual streams.
Issue the STREAMSTATE SUMMARY pipeline command to get a return code that can be
used directly for your decision. If the return code is 8, either all inputs are gone or all
outputs are gone.
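The third way can be as terse as this (a sketch):

'streamstate summary'
If RC=8 Then exit 0          /* All inputs or all outputs are gone */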
But even with these very sharp tools, you are still not able to emulate what the built-in
programs can achieve. The remaining problem is to discover that an output stream has
been severed while the REXX program is waiting for an input record. Once you have
issued the SELECT ANYINPUT or the PEEKTO pipeline command, you will not get control
until a record is available (or there is end-of-file on the input).
The solution is the EOFREPORT pipeline command, which modifies the semantics of the
commands to read and write. Issue EOFREPORT ALL to be alerted when all output
streams have been severed while you are waiting for an input record. The return code will
be 8 when all outputs are gone. You should issue this command in all your production
strength REXX filters.
If the EOFREPORT pipeline command is issued to a version of CMS/TSO Pipelines that does
not support the command, the return code will be -7. You can ignore this error. Since
REXX traces negative return codes by default, you should turn trace off to avoid a nuisance
message in this case.
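Putting the two preceding paragraphs together (a sketch):

trace off                    /* REXX would trace the -7 reply     */
'eofreport all'              /* Alert on severed output streams   */
If RC<>0 & RC<>-7 Then exit RC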
But even this is not always enough. Sometimes you may wish to propagate end-of-file on
individual streams, as is done, for example, by gate. Issue EOFREPORT ANY to be alerted to
any change of pipeline connections while you are waiting for a record. Return code 4
means that some as yet unknown stream has been severed; you must issue STREAMSTATE
ALL and parse the variable it sets to discover which stream(s) need severing.
With EOFREPORT ANY we are speaking fine detail. Even an OUTPUT pipeline command
will terminate with return code 4 when a stream has been severed before the consuming
stage has seen the record. (That is, you can in some circumstances produce an output
record and then later retract it!) If the consuming stage has seen the output record, it is
too late. The producing stage must remain blocked (it cannot be resumed) until the record
is consumed or the consumer severs the stream.
Note that return codes 4 and 8 are set only when the stage is blocked at the time the
stream is severed. If a stage is ready to run, but not dispatched, there will be no indication
that a stream has been severed, because the pipeline dispatcher can reflect only one return
code at a time.
signal on error
do forever
'peekto line' /* Get some input */
'getrange range stem parts.' line /* Split it up */
If parts.0=1 /* Was range in record at all? */
Then 'output' string || line /* No, just insert first */
Else 'output' parts.1 || string || parts.2 || parts.3
'readto' /* Consume record */
end
error: exit RC*(wordpos(RC, '8 12')=0)
err:
parse arg msgno .
sub=''
Do i=2 to arg()
sub=sub '00'x || translate(arg(i),, '00'x) || '00'x
End
parse source . . myfn .
trace off /* Be quiet on MVS */
'issuemsg' msgno myfn sub
exit RC
The program can be read from any input stream; this stream will be at end-of-file when
the program starts. Instead of using the Interpret instruction, the program in Figure 194
on page 100 can be generated in a subroutine pipeline like this:
The argument to the rexx stage specifies that it should read the program from the
secondary input stream (*.1:) and that the file name returned by the Parse Source instruc-
tion should be RXP.
Note that each input record becomes a line of the program; semicolons separate REXX
instructions on a line.
mode. The macro can also be transformed; for instance, to remove Address
instructions.
The BEGOUTPUT pipeline command, when issued in the macro, has the effect that
subsequent commands are written directly to the output stream rather than being proc-
essed by CMS/TSO Pipelines as pipeline commands. Thus, what the user writing the
macro thinks of as commands is passed to the following stage as data records.
The macro processor can use the PRODUCER option on the device drivers for REXX
variables (for instance stem) to access the REXX variable pool of the macro rather than
its own variable pool.
The SETRC pipeline command allows the macro processor to set the return code in the
macro.
It is certainly possible to write an editor that supports XEDIT macros to process data in the
pipeline. Such a REXX filter will (if written correctly) be directly transportable between
CMS and TSO. It should also be possible to write an XEDIT macro processor to allow XEDIT
macros to be used with ISPF/PDF.
Miscellaneous Issues
If you issue a command directly from a REXX filter to the host system, CMS/TSO Pipelines
has no way to know that you have given control over to the host system. In many cases,
this makes no difference, but there are two pitfalls you should try to avoid:
1. The time spent in the host is charged to the stage by RITA. After all, CMS/TSO
Pipelines does not know that you have given control over to the host; how could it tell
Rita?
2. On CMS, the delay stage cannot recover from a program that uses the clock
comparator, such as the command invoked by the PROFS EXEC.
Instead of addressing commands directly to CMS, use the command stage in a subroutine
pipeline to issue them when you are not sure whether the command will interfere with
CMS Pipelines or not. (Or use subcom CMS.) This lets CMS Pipelines in on what is going
on and it may keep you out of trouble.
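For example, to issue a CMS command through a subroutine pipeline and discard its response (a sketch; the command shown is arbitrary):

'callpipe (name docmd) command LISTFILE * * A | hole'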
Note that a REXX filter has no way in general to discover that a pipeline is being timed or
that the pipeline contains a delay filter. Therefore, if you are writing production strength
REXX filters, avoid the use of the address instruction if you can. One severely burnt
plumber wrote:
Me? I'm just going to NEVER AGAIN use Address COMMAND in a Rexx
filter - if it occasionally costs me a few extra microseconds, then
so be it.
Now that you have been duly warned, let us discuss how to address commands to partic-
ular environments in a REXX filter.
In a REXX filter, you can use the Address instruction to issue a command to other
command environments on CMS. As an example, on CMS, this clears the screen before
writing a line:
When a subroutine returns, REXX restores the default command environment to the one in
effect when the subroutine (or function) was called. Thus, it is safe to change the default
command environment in a subroutine that does not issue pipeline commands and does not
call a subroutine that issues pipeline commands.
clear:
address '' /* COMMAND environment */
'VMFCLEAR' /* Clear screen */
If RC=0 /* OK? */
Then return
exit RC
Do not change the command environment permanently unless you know how to get back.
It is safer to issue all commands to other environments with the Address instruction so that
you are sure that you retain the pipeline command environment. It is not recommended to
use the Address instruction without operands to toggle between command environments.
The initial command can be specified as the argument; additional commands are read from
the input.
Red Neon!
You cannot issue pipeline commands from a REXX program that has been called as an
external function or invoked with the Address instruction because this implies a
CMSCALL. Do not pass the result of Address() to an external function; results are
unpredictable if you issue pipeline commands from REXX programs that are neither
invoked as stages nor called by the REXX pipeline command from a stage. You are
likely to encounter a disabled CMS wait when the REXX program tries to return.
CMS and REXX on TSO give return code -3 when they do not recognise the command you
have issued to them. This is likely to happen when you issue a pipeline command to the
host. Be sure you keep the original command environment intact if you issue the Address
instruction to select a new environment permanently.
CMS/TSO Pipelines gives return code -7 when it does not recognise a pipeline command.
The two most likely reasons are:
A CP or CMS command is addressed to the pipeline command environment. Use the
Address instruction with CMS or COMMAND to issue the command to CMS rather than
as a pipeline command.
A continuation comma is missing in a pipeline specification written over several lines.
By default, REXX traces commands that give a negative return code. Since REXX’s
message is quite explicit, CMS/TSO Pipelines does not issue further messages in this case.
(Thus, if you wish to handle this condition quietly, you can do so.)
On TSO, trace from REXX filters is written to the DDNAME SYSTSPRT; be sure to allocate it!
Be sure to test the return code or use signal on error to trap errors.
Pitfalls
Here are some pointers for when things break inexplicably. Be sure also to read the first
section of “Issuing Commands from a REXX Filter on CMS” on page 114.
When the file type of the REXX program is not EXEC and REXX resolves an external func-
: tion that is not in a loaded function package, REXX on CMS first searches for a file with a
file type like the calling program.
Thus, if you call the external function myfunction, REXX looks first for MYFUNCTI REXX
and then for MYFUNCTI EXEC. Thus, if you try to hide an existing EXEC with a REXX filter,
it simply will not work; you will get an unending recursion instead. If you are lucky you
run out of storage before CMS reaches the limit on nested SVCs and ABENDs you.
: On z/OS, REXX searches only the data set from where the calling function was loaded.
: Thus, you may need to maintain two copies if an external function is called both from a
: CLIST and from a REXX stage.
But note that the search for a REXX filter is after the search for a built-in program. If you
choose easily understandable names for your REXX filters, it may well happen that a new
release of CMS/TSO Pipelines has a built-in program with the same name as your REXX
filter. And then the built-in program “wins”; and your users get frustrated.
The obvious recommendation is to use an explicit rexx stage to run the REXX program; this
will even save you an infinitesimal amount of CPU time. But if you later decide to incor-
porate the REXX filter in a filter package, it will not be resolved; the contents of a filter
package are effectively built-in programs, even when they are written in REXX.
Thus, for production strength, choose the file names for REXX filters carefully. You might
consider prefixing the names with your company’s acronym or your own initials; this
should reduce the probability of a naming clash.
So what can you do after you have been bitten? Suppose you have an IF REXX which
begins to fail with the most strange error messages after you install a new level of
CMS/TSO Pipelines (or a new level of CMS). Chances are that if is now a built-in
program. And you have literally thousands of references to it scattered over hundreds of
files. You can give yourself time to think by putting your REXX filter into a filter package
that has the magic file name PIPPTFF. Filters in this package override the built-in
programs.
Still, this may not be a good idea either: Many parts of CMS run pipelines under the
covers; if you replace a built-in program with a program of your own by putting it in
PIPPTFF filter package, this will also affect the pipelines that are written with the real
built-in program in mind.
Performance
Remember these points when writing programs for CMS/TSO Pipelines:
Make sure the function cannot be performed with a built-in filter or a cascade of such
filters. For instance, spec should be used instead of the program shown in Figure 193
on page 99.
Make a program do one thing, and do it well. Decompose a complex task into a suite
of simpler generalised programs. Write one small program to perform what is unique
to the task.
Measure, if performance is a concern. Only when convinced of substantial savings
should you contemplate writing a filter to combine function already available in sepa-
rate programs.
Figure 217 shows the difference between a REXX filter and an equivalent built-in program.
Words four and five of the ready message show CPU used on a lightly loaded IBM
3081KX. The first command shows the overhead of reading the largest file in the system
and counting the records in it. The second test uses the program shown in Figure 193 on
page 99, whereas the last two examples use spec and change to do the same thing.
Issues other than performance may influence your decision in favour of compiling:
The compiler may support a higher language level than the interpreter does.
You may wish to hide the source for a REXX filter so that a user cannot tamper with it.
The compiler can detect syntax errors in the program that may go unnoticed in testing,
because some part of the program is not exercised by the test cases.
We recommend that you compile complex REXX filters even if you decide to use the inter-
preter when they are run. The compiler finds many errors that would otherwise have gone
unnoticed for a while. We have discovered many spelling errors that would not have
caused an interpreter diagnostic, by scanning the cross reference listing from the compiler
for unreferenced variables and strings that look very much alike.
MVS Considerations
When TSO Pipelines is running a REXX filter, it runs in a reentrant environment. Output to
the terminal (for example, from the Say instruction) in such an environment is written to
the DDNAME SYSTSPRT rather than directly to the terminal.
Be sure to allocate this data set in your logon procedure or in the Job Control Language
for the job step that invokes TSO Pipelines. If the DDNAME is not allocated, REXX will
issue a message to the programmer to this effect. But you will only see this message in
TSO if you have PROFILE WTPMSG.
Thus, if the REXX filter fails and SYSTSPRT is not allocated, you will most likely just see a
broken pipe that has no data coming out. This can lead to much head-scratching and
finger-pointing.
Control the Pipeline Specification Parser: The default stage separator is the solid vertical
bar that you have by now seen many times. Sometimes you may wish to use the vertical
bar as an argument to a filter; for instance to find records starting with a solid vertical bar.
There are three ways to do this, shown in Figure 218:
Use an additional stage separator character for a self-escaping sequence. Two adjacent
stage separator characters are treated as a single normal character, which is passed to a
program as part of the argument string.
Define an escape character to put in front of solid vertical bars and other characters
that are not to be taken as stage separators.
Use the option SEPARATOR to redefine the stage separator to a different character.
The escape character is defined with the option ESCAPE; when defined, it can be put in
front of any character, not just the stage separator. The escape character itself is ignored
and any special meaning the following character might have to the pipeline specification
parser is suppressed, so the second character becomes just a “normal” one. Use two
escape characters to specify a single escape character in an argument string.
All three examples in Figure 218 select records with a solid vertical bar in column 1.
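The three formulations might look like this (a sketch, not the manual's Figure 218; the file name is illustrative):

pipe < input file | locate 1.1 /||/ | console
pipe (escape ~) < input file | locate 1.1 /~|/ | console
pipe (separator !) < input file ! locate 1.1 /|/ ! console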
The option ENDCHAR (which is often abbreviated to “end”) is the last of the special charac-
ters you can define. It is used to delimit pipelines when using multiple streams. The end
character is also self-escaping; use two abutted end characters to provide an end character
as the argument to a stage.
The word after the option SEPARATOR, option ENDCHAR, or option ESCAPE is an xorc,
which is a single character, as we have seen, or a two-character hex value. This is partic-
ularly useful in REXX programs where you can use a character that the user cannot type on
the terminal. However, remember that it is only the specification of the character in the
global option that can use the two-character hex value; it must be a single character in the
pipeline specification proper, but with REXX this can be coded as a hex constant.
Figure 219 is an example.
Figure 219. Using Two-character Hex Values for the Stage Separator
/* Using 01 as the stage separator character */
'pipe (separator 01) < some file' '01'x,
'find |' || '01'x,
'> revised lines a'
Parentheses, the asterisk (*), the colon (:), the period (.), and the blank (X'40') have
special meaning to the pipeline specification parser; these characters are rejected when
used for the scanner characters.
The escape character is not effective when scanning pipeline options; it is not possible to
use a right parenthesis as the value of a pipeline option.
Name the Pipeline Specification: The option NAME followed by a blank-delimited word
associates a name with a pipeline specification. This name is displayed in messages, but
has no other effect. The name need not be unique among the current set of pipelines.
Though of limited use when the pipeline specification is typed at the terminal, the name
option is useful in nested subroutine pipelines. If you write the name of the EXEC in the
option NAME, CMS/TSO Pipelines can tell you where there is trouble. FMTP XEDIT auto-
matically inserts the file name as the pipeline name when it converts a pipeline from land-
scape to portrait format.
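For instance (the EXEC name is illustrative):

'PIPE (name MYEXEC) < profile exec | count lines | console'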
Get More Informational Messages: You may wish further information when the PIPE
return code is not zero and you get no error message from any stage. Three levels of
additional information messages can be requested from the pipeline dispatcher:
LISTERR Issue a message when a stage returns with a nonzero return code. Use this
option to see which stages return “quietly” with a nonzero return code.
LISTRC A message is issued when a stage is started and when it returns, be the return
code zero or not.
TRACE This option causes a large amount of trace data to be written. All calls to the
pipeline dispatcher are traced as are its actions. Using this trace you might be
able to relate messages issued by other commands or REXX Say instructions to
a specific stage.
You can also reduce the number of messages issued:
MSGLEVEL Enables or disables additional messages that are issued to pinpoint the stage or
command that issued a message.
Though it is not exactly providing an informational message, the following option may be
useful when you are debugging an Assembler program running in CMS Pipelines.
STOP A message is issued when each stage is started and an address stop (or similar
trace) is activated to stop in CP console function mode as soon as the first
instruction in the stage is issued. You can set up traces within the program
under test. Be sure to have SET RUN OFF when using this option; the stage
runs away from you if RUN is ON!
Use runpipe as shown in Figure 221 to redirect CMS/TSO Pipelines messages to a file.
Specify a local option to activate trace for a single stage of a pipeline, or selected stages.
Turning off the rightmost three bits in the message level suppresses the normal messages
to identify the stage issuing the message.
Chapter 9. Debugging
CMS/TSO Pipelines issues a message when it detects an error. But sometimes the mistake
is not a syntactical one. A few hints are given below about the things you can do to find
out what went wrong when you get no output or get an unexpected return code.
Error Messages
A filter issues an error message when the parameter list is in error or when an error occurs
during processing. Figure 223 is a sample run.
set emsg on
Ready;
Lines are read from the console and blocked with OS record descriptor words (or at least
that was the intent). block decides it cannot do what is asked and issues error message
115. CMS/TSO Pipelines adds two messages to help you find the error.
The error in block is discovered before any of the stages begin running; no stage is started.
After the error in this example, issue “pipe help” to display more information about
message 115.
Use runpipe to issue a pipeline specification and capture all messages issued from it. This
may be a more convenient way to document a problem than console SPOOL. Be sure to
use diskslow if the problem causes an ABEND. This ensures that the output file will be
readable and will contain all records.
Other Hints
Use the option LISTERR to list stages giving a nonzero return code. This lets you find
stages that do so “quietly” without issuing an error message.
Here you see the effect of the option LISTERR. maclib gives return code 12 without issuing
a message when there is no stage to read its output.
The last sample in Figure 224 shows how to name a pipeline. This is particularly useful
when running a pipeline from an EXEC to indicate which EXEC invoked CMS/TSO
Pipelines; knowing which stage issues a message may not be too helpful if one does not
know which EXEC contains the pipeline specification being run.
The example in Figure 225 shows how to issue a trivial pipeline (it has only the stage,
hole) through runpipe. Normally, you would build the pipeline specification in a REXX
variable and then insert it into the pipeline with a var stage.
If this still leaves you without a clue, try runpipe EVENTS. Refer to Appendix G, “Format
of Output Records from runpipe EVENTS” on page 908 for the format of the output
records. Be prepared to sift through large amounts of data.
No Output
Sooner or later, you run a pipeline and get no output. You can run the pipeline one stage
at a time using utility files to notice where the data dry up. This may be a fine approach when
developing programs for a pipeline, because you can test each stage individually and as
cheaply as possible (and to completion) when the input to the stage under test is simply
the file generated when the previous stage tested OK. However, to see what went wrong
in an existing pipeline, add > stages around selected stages to take a snapshot of the data
flying by.
Pipeline Stall
A multistream pipeline stalls when, for instance, a stage refuses to produce output.
What happens next depends on the setting of the two configuration variables STALLACTION
and STALLFILETYPE. Refer to Chapter 28, “Configuring CMS/TSO Pipelines” on
page 839. The following sections describe the default behaviour.
You get messages displaying the state of the stages when this occurs. A snapshot of the
pipeline control blocks can be written to disk. This snapshot is provided as a service aid;
the format is “undefined”. On CMS, the snapshot is appended to the file PIPDUMP LISTING
unless the global variable PIPDUMP is set to OFF. (Use GLOBALV to set global variables.)
How Can I Do xxx and Get the Result into REXX Variables?
This question is probably the first one you will ask when you start to use CMS/TSO
Pipelines.
For example, to store the file names, types, and modes of all files on all modes that are
accessed read/write into a list of REXX variables starting with FILE.1 and setting FILE.0 to
the number of items stored in the “stemmed array”:
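A sketch of such a pipeline follows. The response layouts and column positions are assumptions; check the output of the CMS commands on your system before relying on them:

'PIPE (name RWFILES)',
   '| command QUERY ACCESSED',    /* One response line per accessed mode */
   '| drop 1',                    /* Discard the heading line */
   '| locate /R/W/',              /* Keep only the read/write modes */
   '| spec /LISTFILE * */ 1 word 1 nextword', /* Build a LISTFILE command per mode */
   '| command',                   /* Issue the generated commands */
   '| stem file.'                 /* Load FILE.1, FILE.2, ... and set FILE.0 */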
The solution shows some amount of pipethink by running one CMS command and proc-
essing the response to generate a list of other CMS commands, which are then issued. Note
the filtering of the response from the first command: the heading line must be discarded.
The solution above is a good starting point for developing your business application. But
do not stop here; read on! The asking of the question per se also deserves comment.
Always do as much in the pipeline as you can once you have your data in there; it is
much more efficient to bring CMS/TSO Pipelines built-in programs to bear on your file
in the pipeline than to write the same function as REXX procedural code. And it is
also a lot fewer keystrokes, because the CMS/TSO Pipelines notation is much more
compact than the corresponding procedural code.
Moving function from procedural REXX to functional CMS/TSO Pipelines pipeline
specifications will make both you and your system more productive.
If you are enhancing an existing program to use CMS/TSO Pipelines for I/O, look
further in the program and you will most likely see a loop over the stem just loaded
with data. Are you sure this loop could not be performed with CMS/TSO Pipelines
built-in programs?
Knowing how to apply CMS/TSO Pipelines built-in programs is the hallmark of a skilled
pipeline programmer. Once you can bring the full power of CMS/TSO Pipelines to bear
on your data while they are in the pipeline, you can call yourself a journeyman plumber.
But all does more. It supports an expression of targets which are separated by OR operators
and AND operators, using the normal precedence that AND groups closer than OR. Use
parentheses to specify a different grouping. The NOT operator negates a target; the line is
selected if it does not contain the string. Finally, the target can be specified as a
hexadecimal or binary literal.
... | all (/a/ ! /b/) & ¬ x02 | ...
In this example, records that are selected do not contain X'02', but do contain either “a”
or “b”.
If the default upper case translation is not appropriate or the function you wish to perform
is not listed above (or you wish to optimise performance for a large file), you must resort
to brute force: Prefix the record with a temporary field that contains the data in upper
case; perform the operation you wish to perform, using this temporary field; and delete the
field when you are done:
... | spec 1.3 1 1-* 4 | xlate 1.3 upper | sort 1.3 | spec 4-* | ...
Numeric Sorting
The sort built-in program sorts by the binary contents of the key field. You can use the
PAD option to extend shorter keys with a pad character on the right for purposes of
comparison, but for sorting numbers you need to pad on the left and sort does not support
this.
As always, when a built-in program does almost what you want, but not quite, put on your
pipethink cap and create a sort key that is aligned to the right. spec does that easily
enough.
If the numbers are unsigned integers and you know an upper limit to the field length, you
can use the approach in Figure 227 on page 129:
Exit RC
ns0
Poul 50,000
Bob 100,000
Donna 150,000
Ready;
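The key-building stages of such an EXEC can be sketched like this, following the pattern of Figure 228 below (the pic mask and the field positions are illustrative only):

'|spec word 2 1 1-* nextword',    /* Make copy of number first */
'|change word 1 /,//',            /* Remove the comma in the number */
'|spec a: w1 . print a pic 99999999999999 1', /* Right-align with leading zeros */
'  fieldseparator blank field 2-* nextword',  /* Rest of record */
'|sort 1.15',
'|spec 16-* 1',                   /* Remove the sort key again */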
The sort key is put in the first fifteen positions of the record, aligned to the right, before
the sort; it is removed after the sort.
Putting the sort key first in the record as a fixed-length field also has the advantage of
improving sort’s performance.
A sort key is definitely required when the data can contain both positive and negative
numbers or even decimal fractions. The trick to sorting a mixture of negative and positive
numbers is to add (or subtract) some very large number to all the keys so that the resulting
number is positive for all key values. The example in Figure 228 shows a revision of the
previous example:
Figure 228 (Page 1 of 2). NS1 EXEC—Sort Fractional Signed Decimal Numbers
/* Numerical sort of signed number */
Signal on novalue
Address COMMAND
'PIPE (name NS1)',
'|literal 2,735.8 7.99 -15 -3,586.99', /* Some random numbers */
'|literal 0.0001 0 -0.0001', /* Not random numbers */
'|split', /* Make them records */
'|spec word 1 1 1-* nextword', /* Make copy of number first */
'|change word 1 /,//', /* Remove commas in number */
'|spec a: w1 .', /* Convert first word to counter */
'print a+1000000000 pic 9999999999v9999 1', /* Make aligned */
'fieldseparator blank field 2-* nextword', /* Rest of record */
'|sort 1.15',
'|console'
Exit RC
Figure 228 (Page 2 of 2). NS1 EXEC—Sort Fractional Signed Decimal Numbers
ns1
09999964130100 -3,586.99
09999999850000 -15
09999999999999 -0.0001
10000000000000 0
10000000000001 0.0001
10000000079900 7.99
10000027358000 2,735.8
Ready;
In the example above, the output record contains the sort key in the first column and the
original number in the remainder of the record. The number was converted to excess-
10000000000 notation by adding this huge number to all keys. You can see that keys that
are zero or positive are simply prefixed by “1”, whereas negative keys are now in ten’s
complement notation.
Internally, spec uses a floating point decimal format that has thirty-one digits precision and
(for practical purposes) an infinite range of exponents. You can use all of spec’s facilities to
round, truncate, and so on. The number you add can be any number as long as it is equal
to or larger than the negative of the smallest key.
If the input numbers are in the Continental European notation (periods for the thousands;
comma for the decimal point), you can delete the periods with change and then use xlate
to turn the comma into a period.
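A sketch of those two stages (how change handles a number that contains more than one period deserves a test on your own data):

... | change /.// | xlate 1-* , . | ...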
The records must be shorter than 64K. Use the keyword CMS4 to prefix a four-byte
binary length.
addrdw was added in CMS/TSO Pipelines level 1.1.9. You might see an earlier idiom to
achieve this:
The pad stage compensates for spec considering a null record to contain no data, rather
than containing a null field. Use an additional spec stage to convert the binary halfword to
decimal:
... | spec 1.2 c2d 1 3-* nextword | ...
zone can also be used to run a selection stage against a range (column, word, or field).
The general solution is to split the record into three pieces; apply the operation; and join
the parts back together. This approach will work as long as the operation does not delay
the record:
If chop is not appropriate to select the part of the record you wish, you must turn to spec
(or a REXX filter of your own):
This solution is not as robust as the previous one, because multiple blanks surrounding the
| third word are lost. Splitting the record with threeway may prove to be more robust.
The reason is that the cms and command host command interfaces each maintain a separate
CMSTYPE flag and set it to RT when they start. Use this subterfuge to obtain the CMSTYPE
setting that applies outside the pipeline:
pipe subcom cms query cmstype ( lifo | hole | append stack | take 1 | ...
You can indeed change the flag while the host command interface is running, even by
sending it commands to that effect, but the flag is discarded when the stage terminates.
Thus, you cannot effect a permanent change to the CMSTYPE flag through the cms and
command host command interfaces.
If you wish to chop long records into 80-byte chunks, you should investigate one of these
approaches:
deblock 80. This produces as many records as needed to write all data in each input
record into output records that are at most eighty bytes long. An input record that is
shorter than eighty bytes is passed unchanged. deblock 80 respects input record
boundaries; if the length of the input record is not a multiple of eighty, the last record
of the batch produced from an input record will be a short record. deblock 80 is
probably what you need.
fblock 80. This ignores input record boundaries; that is, it treats the input data as a
byte stream. This byte stream is then written eighty bytes at a time. Thus, an output
record can contain data from adjacent input records; the only possible short output
record is the last one.
specs 1-80 1 WRITE 81-* NEXT. This writes two records for each input record. The first
output record contains the first eighty bytes of the input record (or the entire input
record if it is shorter than eighty bytes). The balance of the input record is passed in
the second output record (or it is null). You can use locate 1 to discard null records.
chop 80. This produces the first eighty bytes on the primary output stream and the
remainder of the record (or a null one) on the secondary output stream. You can use
faninany to merge the two parts of the original record into sequence:
'PIPE (end ? name PIPFAQ.SCRIPT:244)',
'?...',
'| c: chop 80',
'| i: faninany',
'| ...',
'? c:',
'| i:'
If you use stem to read and write the same stemmed array in a pipeline, you must be sure
that writing back to the array does not overtake reading from the array. If it does, you
will not get the result you expect (or at least the one we expect you will expect), because
you might have a “destructive overlap”.
Be sure to buffer the array before writing it back to the stem when your process can
produce more records than it reads:
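A sketch of that pattern, in which duplicate stands for any stage that can write more records than it reads:

'PIPE (name STEMBUF)',
   '| stem line.',      /* Read the existing array */
   '| duplicate 1',     /* Each record now appears twice */
   '| buffer',          /* Hold all records until input is complete */
   '| stem line.'       /* Write the array back, setting LINE.0 */

Without the buffer stage, the final stem could begin overwriting elements of LINE. before the first stem had finished reading them.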
| However, no points can be awarded for the artistic impression. Surely you can move more
| of the processing of the stem into the pipeline to take advantage of the speed of built-in
| filters relative to interpreted REXX; it is likely that you can do away with the stem entirely.
Wondering If It Is a Bug?
If something “does not work”, it could be an error in CMS/TSO Pipelines. Though they
have been observed occasionally, errors in CMS/TSO Pipelines are rare. It is more likely
that your expectations of how CMS/TSO Pipelines works are different from how it was
designed to work. But documentation is just as important a part of a product as is the
code; if you read the documentation and it led you to believe that the code should work
differently than it does, the documentation must be improved. Append to the forum to let
us know where we failed.
If you are still convinced the problem is in CMS/TSO Pipelines, try to gather as much
information as possible before reporting the suspected bug:
Query the level of CMS/TSO Pipelines you are using. (PIPE QUERY LEVEL will tell you
this.) Which operating system is it running under and what is the release of this oper-
ating system?
Try to reduce the pipeline that produces the failure to a few simple stages using little
data. With a simple test case we are able to understand the problem faster and thus
provide you with a fix in a more timely manner.
Show the exact pipeline specification that causes the failure. If you cannot send the
EXEC itself, use cut and paste to be sure you get the exact command you typed.
Show your input data and the output result you obtained. Also show the output result
you expected or indicate where the actual is different from your expectations.
Collectively these products are referred to as DB2. sql processes statements in the Struc-
tured Query Language (SQL).
Several tasks are performed before a CMS/TSO Pipelines user can issue SQL statements
through sql:
DB2 must know about CMS/TSO Pipelines. On CMS, this process is called preparing
the access module. It is performed once by your system support staff. On z/OS, it is
called binding the plan. Help for sql as well as CMS/TSO Pipelines installation proce-
dures describe this process.
If you are going to use Distributed Relational Database Access (DRDA), you must
ensure that all other systems know about CMS/TSO Pipelines. Unload the plan from
the system where you have installed CMS/TSO Pipelines and bind it at the other
systems.
You must be registered as a DB2 user. Contact your database administrator if you are
not already registered. Your installation may have granted everyone connect authority;
you can query tables once you have connect authority.
To create tables, you must have a DBSPACE or write privileges to a space owned by
someone else. Your database administrator allocates a space to you.
On CMS, you must issue the command SQLINIT before you can access SQL tables; this
establishes the connection to the database server. On z/OS, the option SUBSYSID
specifies which subsystem you wish to connect to, if it is different from the default for
your installation.
The following description is slanted towards CMS. z/OS users should substitute “DB2
subsystem” for “DB2 server”.
sqlselect—Format a Query
CMS/TSO Pipelines provides sqlselect, which formats a query for presentation on the
terminal. The filter takes a query as the argument, describes the query, and formats the
result; see Figure 232. The first line of the response contains the names of the columns
padded with hyphens to their maximum length; the remaining lines represent the result of
the query.
Two ways to create a table are shown in Figure 233. The first example shows how to
issue a single SQL statement; the second example shows that sql EXECUTE reads statements
from its primary input stream. The point is that you can supply many SQL statements to a
single invocation of the sql device driver.
pipe literal create table ktest (kwd char(8), text varchar(80))|sql execute
Ready;
Use sql INSERT to load data in the table (see Figure 234 on page 138). The first eight
characters of each record are stored in the column kwd; the remainder of the record is
loaded into the column text.
To insert the values, build complete insert statements using literal data:
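That could be sketched as follows. KTEST DATA A is a hypothetical input file, and the quoting assumes that the data contain no apostrophes:

'PIPE (name KLOAD)',
   '| < KTEST DATA A',
   '| spec /insert into ktest (kwd, text) values (''/ 1',
   '    1.8 next /'', ''/ next 9-* next /'')/ next',
   '| sql execute'

Each input record is thus turned into a complete INSERT statement with the first eight characters as the first value and the remainder as the second.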
Chapter 11. Accessing and Maintaining Relational Databases (DB2 Tables) 137
Using DB2
Note that you must enter the names of the columns, even when you are setting all of them.
On z/OS, you will get a strange SQLCODE if you omit the column names.
On CMS you can use a faster underlying interface (inserting on a cursor) by omitting the
value clause and supplying the values for all columns in the appropriate format. spec is
used with a conversion option to generate the halfword length required for the variable
character string:
All columns defined for the table are loaded with data from the input record when sql
INSERT is used without further operands. sql obtains the length of each column from DB2
Server for VM; data loaded must be in the format used by DB2 Server for VM, which in
general involves conversion. The spec stage copies the first eight characters of each record
without change; it then inserts a halfword field with the number of bytes remaining in the
input record and copies the rest of the input record after this halfword. This is the format
required by DB2 Server for VM for a row with a fixed and a varying length character vari-
able.
Figure 236 shows how to use sql DESCRIBE SELECT to see the format of the input record
or the result of a query.
Each line describes a column in the table. The first column of the record is the numeric
SQL field code. It is decoded in the next column. A column with the length (or precision)
of the field as perceived by DB2 is next. The following number is the number of characters
required to represent the field when loading with sql INSERT and when queried with sql
SELECT. Note that the varying character field has two bytes reserved for the length prefix.
Finally, the name of the column is shown.
The double quotes in Figure 237 represent unprintable binary data. The first two positions
of each column contain the indicator word that specifies whether the column is null or
contains data. This information may be required to process the result of a query of a table
that contains columns that can contain the null value (no data). Figure 238 shows how
indicator words are suppressed in the output record; the query seen by DB2 is the same in
both cases.
The remaining two unprintable bytes contain the length, in binary, of the varying field.
Use spec to discard these columns. As an alternative, Figure 239 shows how to use spec
to format binary data.
spec supports conversion between character and binary or floating point, as well as
constructing varying length character fields.
In Figure 240 on page 140, sqlselect formats a query against the sample table.
Use spec to convert from readable formats to the internal ones. The sample program
sqlselect shows how to format DB2 data on output. Figure 241 shows how to convert
some DB2 data types. The input record is assumed to contain a single field.
The unit of work can also be rolled back. That is, the database is restored to the state
before the unit of work began. sql automatically rolls the unit of work back when it
receives an error code from DB2; use sql ROLLBACK WORK to perform an explicit rollback,
possibly in response to a CMS or pipeline error condition.
The result of the first query is written to the primary output stream. If the secondary
output stream is defined and connected, the result of the second query is written there, and
so on. More queries are allowed than there are streams defined. The output records from
the last queries are written to the highest numbered stream defined.
The option NOCOMMIT must be specified when multiple sql stages are running concur-
rently. Each stage uses its own cursor; the module is prepared for up to ten cursors.
If one of the stages fails with a DB2 error, the unit of work is rolled back and all other sql
stages fail if they access DB2 after the error occurred. Use a buffer stage to isolate the
programs when building SQL statements from the result of a query. This ensures that the
initial query is complete before a subsequent stage starts processing. You can also process
the query and store the result in a REXX stemmed array; test the return code and issue the
second sql pipeline only when the first one completes OK.
CMS Considerations
Obtaining Help
DB2 Server for VM stores help information in tables. If you have connect privileges and
have run SQLINIT, you can use help SQL to access these tables. Specify the topic about
which you wish help as the argument. This may be an SQL statement or a numeric return
code. Use help SQLCODE to obtain help for the last return code received from SQL; help
SQLCODE 1 displays help for the second last return code received, and so on.
Because CMS HELP has no interface to receive the information to display, it is displayed in
an XEDIT session.
Figure 243 on page 142 shows a session where a user accesses DB2 for the first time.
The first attempt to obtain help fails. Simply issuing the SQLINIT command does not help
because the EXEC is not available. Having linked and accessed (minidisks and mode letters
may be different in your installation), the user runs the initialisation procedure and obtains
help.
sqlinit
Unknown CP/CMS command
acc 195 t
T (195) R/O
Ready; T=0.01/0.01 13:20:23
sqlinit db(sqldba)
Ready; T=0.09/0.14 13:20:39
help SQL uses sql; Figure 244 shows the response when the access module has not been
generated by your systems support staff.
The example in Figure 244 was run in the PIP style. If the access modules are generated
for the DMS style, it may help to use this style instead.
ISPF is also a dialog manager. If you normally work within an ISPF dialog, you can define
a PIPE command to ISPF, as described in “Defining PIPE to ISPF” on page 146.
On z/OS, there is no choice: subcom ISPEXEC is the only way to issue ISPF service requests
from a REXX filter because a REXX filter executes in a reentrant environment, which is not
merged with TSO and therefore has no ISPEXEC environment defined. On the other hand,
subcom selects subcommand environments in the default REXX environment.
Once a table is open you can read rows from it and you can add or replace rows in it. As
an example, Figure 246 on page 144 shows how to create a table that contains a row for
each allocated DDNAME. It is assumed that the table is defined to contain the variables
DDNAME and DSNAME.
In this example, the stages up to the last one transform the response to the TSO query into
a file that has one line for each DDNAME that is allocated to a real data set. ispf TBADD
then performs these steps for each input record:
It peeks at the input record without consuming it.
It issues a VREPLACE service request to set the variable DDNAME to the contents of
columns 3 through 10 and the variable DSNAME to the contents of the record from
column 12 onwards.
It issues a TBADD service request to copy the contents of the function pool variables
into a row of the table. Note that the table can have more columns than the two that
are specified as the argument. The additional columns would be set from the current
contents of the respective variables (which would be specified when the table was
defined).
It copies the input record to the primary output stream. In this example, the output
stream is not connected; the output record is discarded.
It consumes the input record.
Only one column of the table is read into the pipeline in the example below:
This process continues until ISPF sets a return code to indicate that the end of the table has
been reached.
Note that ISPF sets a function pool variable for each column in the table even though the
example above copies only one variable into the pipeline. The remaining variables remain
in the function pool where they can be used by other requests, for instance to update a
table:
xlate does not delay the record. It is important that no stage delays the record between the
one that reads a line from the table up to the one that replaces the line in the table. If the
data were to be delayed, the wrong line in the table might be updated.
On TSO, the function pool can never be the variable pool of a REXX filter; on CMS it might
be. To import values from the ISPF function pool into the variable pool of a REXX filter:
Note that ispf VCOPY needs an input record to trigger a cycle; without the input record it
would produce no output.
Chapter 12. Using CMS/TSO Pipelines with Interactive System Productivity Facility 145
ISPF
Interaction (on TSO) Between ISPF and Stages that Access REXX Variables
The function pool that ISPF maintains for REXX variables is in the REXX environment that
ISPF creates when it initialises. On the other hand, REXX filters run in separate reentrant
environments which each contain their own variable pools. Thus, ispf may be accessing a
different variable pool than does, for example, var.
You can do this by adding a line to the ISPCMDS table. You must define the PIPE command
as a line mode command to ensure that ISPF refreshes the screen when the pipeline is done.
Figure 250 shows the ISPF variables you should set before adding the row to the ISPCMDS
table.
You may wonder why VM supports these strange files; here is the story. You can skip the
introduction if you still remember how to program an IBM 1401.
The storage medium used was a punched card, in which holes were punched in a twelve
by eighty array: the punched card had eighty columns of twelve rows.
Master files were stored as card files and transactions were punched into cards before
being processed. A typical operation would sort the transactions, collate them into the
master file (which was already sorted), print invoices and update the master file, and finally
remove the transactions from the master file and collate the updated master records into the
new master file.
Operators attending to accounting machines performed the tasks of taking decks of cards
from the stacker (where they come out) of one device and putting them into the hopper
(where they are read) of another device. An operator was expected to handle stacks of
2,000 cards with his bare hands, often turning a stack upside down in the process; there
would be trouble if the cards fell on the floor. CP implements the virtual card operator by
transferring SPOOL files from one queue to another one.
Originally, a card file was something real that you could carry around with you. An able-
bodied person can comfortably carry a box of 2,000 cards under each arm. Programs that
were larger than 4kloc required a trolley for transportation (or several programmers).
Punched cards were not made redundant overnight by the introduction of electronic
computers, however. Input was in cards well into the Nineteen-seventies. Though real
cards are no longer used to store files, they are still very much in evidence in VM/CMS to
support virtual reader/punch and printers.
A system running VM/370 would typically have an IBM 2540 card reader/punch and an
IBM 1403 line printer. To share these devices, virtual machines were given virtual unit
record devices. These are simulated by CP and have no real counterpart. A card file is
simulated by a file in CP SPOOL; it is read by a virtual reader. A SPOOL file is created by a
real reader (now extinct), a virtual punch, or a virtual printer.
In early VM days, most SPOOL files were quickly transcribed to the external medium; few
users, if any, used SPOOL files for messages. The SPOOL system was not used as a reposi-
tory in those days because all SPOOL files were lost if the system went down without
saving warm start data; that is, without being shut down properly, as would happen on a
power drop.
A SPOOL file of punched cards contains records with up to 80 bytes. A column that has no
holes punched is read as a blank (X'40'); you can think of short cards as padded with
blanks: punching nothing leaves the column blank.
Print files contain control information in addition to character data; a printer has a carriage
that moves the paper past the stationary printing station. When not writing text on one or
more lines, the computer told the printer to skip a number of lines, or to the next page, or
to the end of the page; the printer carriage then moved the paper faster than when printing.
This dual speed carriage improved elapsed time for printing jobs, especially on sparsely
printed pages.
How can a printer tell the beginning of a page? No doubt, you look for the perforation,
but that was not so easy for a 1403: it had a carriage tape: a paper loop as long as the
page or multiples thereof and about two inches wide. This paper loop was installed in the
printer in a special device to read it and move it synchronised with paper movement. Each
form had its own carriage tape. While printing, the printer read the carriage tape for
punched holes with twelve brushes. The programmer punched holes in the carriage tape
with an IBM Carriage Tape Punch and glued the ends together with IBM glue to form a
band. The computer instructed the printer to skip to a channel, which meant until a hole
was detected in the corresponding column of the carriage tape. Convention soon became
that a hole in the first channel meant the top of the page. The end of the page was indi-
cated by a hole in channel 12; it would be punched where one would print subtotals.
It was good practice to have at least one hole punched in any channel, but programmers
are always too busy to remember small details like this; a printer took the skip instruction
literally: when there was no hole to stop the carriage, the printer would spew out paper at
high speed until the operator intervened. The original 1403 skipped heavy paper faster
than it could stack it, so the paper tended to hit the lid if one skipped a long way. To
avoid jams from this, the printer was normally run with the cover open: a runaway skip
was quite spectacular.
With the inception of IBM System/360, control units were put between the computer and
the printers as part of the standard I/O architecture which still applies. Control units are
attached to channels that are programmed in a limited instruction set called channel
commands. Each channel command word (abbreviated CCW) contains a command code
(one byte), a buffer address, a byte count, and flag bits to control the channel. The
command can be immediate with no data transfer, or it can write a line of text and then
start paper movement. The command codes select the particular type of carriage move-
ment.
On the IBM 3211 the forms control buffer, often abbreviated to FCB, replaced the paper
carriage tape of the 1403. This electronic buffer is loaded by a channel command; data
sent to the printer with the write FCB command is not the same for all printer device types.
CP SPOOL stores the CCW command code along with the data, so logically each record of a
SPOOL file has a leading character which is called the carriage control character. It is a
machine carriage control character, because it is the CCW command code.
We hope this long preamble explains why carriage control is important and why attention
to detail is required when dealing with CP SPOOL; if you lose the carriage control character,
you have lost the layout of the page, though not the words on it.
The eXtended Attribute Buffer. As far as CP is concerned, the XAB can contain up to
32K of arbitrary characters. Print Services Facility uses the XAB to store additional
file attributes used for printing. The tag cannot be used, because the file might need to
be transmitted by RSCS to the actual print destination.
The data stored in the file. That is, the records that were written to the file.
A SPOOL file is created by writing records to a unit record output device; that is, a virtual
punch or a virtual printer. The CLOSE command is issued when the file is complete. At this time,
information is copied from the virtual device and associated with the file. This includes
the SFBLOK information, the tag, and the extended attribute buffer. That is, you can
change the characteristics of the device while the file is being created and the updated
characteristics will be associated with the file when it is closed.
There are many attributes associated with a virtual SPOOL device. In general, a SPOOL file
is created each time the device is closed; the file gets the attributes associated with the
device at the time it is closed. Three attributes are particularly important when using
CMS/TSO Pipelines unit record device drivers:
The class. A spool file’s class is a letter or a digit. A reader device can read files of
a particular class only or it can read all files (class *). You can define multiple
readers and use a different class with each.
The hold status. A SPOOL file in hold cannot be read. You can create a held file or
you can change a file to be held.
The hold status for a reader is interpreted differently. When the reader is NOHOLD, a
file is purged after it has been read. Be sure to SPOOL the reader HOLD to retain the
file.
The continuous setting. The CLOSE command has no effect when an output device is
set to be continuous. This is useful to suppress CLOSE commands in CMS commands
so that you can issue your own CLOSE command, which could include the NAME
operand to name the SPOOL file.
A reader that is spooled continuous reads all files of its class that are not in hold. By
default, a reader reports end-of-file after each file it reads. When a reader is both held
and continuous, the files are put in hold after they have been read; otherwise the
reader would read the first file forever. It is unlikely that you will want to SPOOL your
reader CONT.
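For instance, a sketch of reading one file and keeping it, assuming the virtual reader is at its default address 00C:

'CP SPOOL 00C CLASS * HOLD'
'PIPE (name READF) reader | > RECEIVED FILE A'

The HOLD setting on the reader ensures that the file is put in hold, rather than purged, after it has been read.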
printmc Create a print file. The first column contains a machine carriage control
character.
punch Create a punch file. No carriage control is required, because only one
operation is allowed by CP on a punch device.
uro Create a print or a punch file. The first column contains a machine
carriage control character.
reader Read a SPOOL file. The file can be a printer or a punch file.
xab Manage the eXtended Attribute Buffer of a device or a SPOOL file.
The device address can be specified as the argument to these device drivers. The default
address is 00E for printmc and uro; it is 00D for punch.
To give you complete control, no CP commands are issued to the virtual device: you must
issue SPOOL, TAG, and CLOSE commands as required. You must also use xab if you wish
to change the extended attribute buffer associated with a device. The SPOOL file is created
by CP when you issue the CLOSE command.
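For example, an EXEC might punch a file and name the resulting SPOOL file with a sequence like this sketch (the file name and SPOOL options are illustrative):

   'CP SPOOL 00D TO * CLASS A'
   'PIPE < SOME FILE A | punch'
   'CP CLOSE 00D NAME SOME FILE'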
Use punch to create a punch file. Each input line is written to the punch with X'41'
carriage control, which is the only one allowed by CP except for the X'03' no operation.
Note that CP truncates punch lines after column 80 without issuing a message or giving
other indication of error.
Use printmc or uro to create a SPOOL file where you specify the machine carriage control
as the first byte of each record. The carriage control character controls the carriage
movement or the stacker selection, depending on the device type. printmc supports only printer
devices, whereas uro supports both. You can find the command codes under the heading
I/O Command Codes in the IBM System/370 Reference Summary, GX20-1850, and in the
IBM Enterprise System Architecture/370 Reference Summary, GX20-0406.
Though there is only one set of carriage control CCW codes, there are two kinds of
carriage control associated with listing files: machine carriage control, which is the CCW
operation code described earlier; and the more user-friendly ASA carriage control.
CMS/TSO Pipelines filters mctoasa and asatomc convert between the two formats. If the
first position of a record with carriage control contains any of these characters it has ASA
carriage control:
1 (X'F1') Skip to new page. The line is printed at the top of the next page.
The numbers 2 through 9 and the letters A through C are defined for the other
channels, but are seldom used.
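A listing file that carries ASA carriage control must be converted before it is written with printmc or uro, which expect machine carriage control. A sketch (the file name is invented):

   pipe < report listing a | asatomc | printmc

Use mctoasa in the opposite direction, for instance on print files obtained from reader.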
CMS/TSO Pipelines reports errors on the unit record output devices with a message that
includes sense data; in most cases you can ignore these hexadecimal values and
concentrate on message 293. (But please supply all data when you report an error.)
For all three output drivers, you will see Intervention Required if the SPOOL is full or the
limit on SPOOL files is exceeded. This also occurs when you issue the CP command
NOTREADY. Issue READY to make the device ready.
ready d
Ready;
Another kind of error is to write a record with carriage control that CP does not like.
Because X'41' is the only valid carriage control on a virtual punch, this one fails with
X'F1':
The condition is known as “Command Reject”. Ignore message 292 when message 293
shows a decoded value.
As we shall soon see, the output from reader has carriage control in the first position of
each output record; this can be fed directly to uro to copy a spool file, but the virtual
output device must match the type of spool file; you get the command reject error if the
device and the file are incompatible.
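A minimal sketch of such a copy, assuming a printer file in the reader and the default output device address:

   cp spool 00c hold
   pipe reader | uro
   cp close 00e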
It is practical to concentrate this in a single subroutine pipeline so that your main EXEC is
not required to handle such tasks.
Cardboard read by a real card reader has a format similar to a virtual punch file; such
files are now almost extinct, but it is likely that the carriage control is X'42'.
How one wishes to process a file depends on the format of the file. Since SPOOL files are
mostly used for electronic mail, the most common format is the virtual punch format.
However, there are many protocols for the contents of a punch file with mail in it; some,
such as VMSG and MAIL, are not blocked further and have a record for each line in the
mail file, but other formats (notably NOTE) block the message before it is punched in cards.
! A VMCF transaction comprises sending the request from a vmclient (or even vmc) stage to
! a vmclisten stage, typically in a different virtual machine. The server uses vmcdata in
! some contexts to reject, receive, and reply.
! Two similar data areas, both 40 bytes in length, are central to the workings of the VMCF
! stages; one of the two is present at the beginning of all records passing in and out of VMCF
! stages. They are documented in appendix C of CP Programming Services and a structure
! definition of each is built into CMS/TSO Pipelines.
! VMCPARM: The VMCF parameter list is passed to CP with the VMCF diagnose, 68.
! VMCMHDR: The VMCF message header is stored as part of reflecting a VMCF interrupt
! to a virtual machine.
! For all practical purposes, the difference between the two is restricted to the first two
! bytes.
! In VMCF terminology, the source virtual machine originates the request and the target
! virtual machine processes the request. They are client and server, respectively, in normal
! parlance.
! vmclient is used in the source virtual machine. vmclisten and vmcdata are used in the
! target virtual machine. A particular virtual machine can at the same time be target and
! source, but there can be only one vmclisten stage active in a virtual machine at any time.
! Supported Functions
! Supported functions are send, sendx, send/receive, and identify.
! Identify
! The parameter list is transferred to the target. This completes the transaction. The server
! cannot reject the message. The identify function offers only eight bytes payload.
! Sendx
! You can think of sendx as identify with appended data. The message header and the data
! are stored as part of reflecting the interrupt. The transaction is then complete. CMS
! Pipelines sets an arbitrary limit of 512 bytes of sendx data. As with identify, the server
! cannot reject the message.
! Send
! At the VMCF level, send transmits the parameter list to the target, where it is reflected as
! the message header. The server then inspects the message header and decides whether to
! receive the message or reject it.
! When RECEIVE is omitted from VMCLISTEN, the output record contains the message header
! only. Its function code must be modified to VMCPRJCT or VMCPRECV and the record must
! be passed to vmcdata to complete the transaction. Note that the receive function is
! mandatory with a send function.
! Send/receive
! The send/receive message, if any, is first received, as you would do for send; the
! transaction is completed by passing a VMCPREPL function including reply data to VMCDATA.
! The receive function is optional with a send/receive function, as it is the reply that
! completes the transaction.
! Parameter lists
! CMS Pipelines exposes underlying message headers and parameter lists in the records it
! produces; and it expects properly formatted parameter lists as input records. While this
! applies to all three stages, the user is usually concerned with building a parameter list only
! for vmclient, as the input to vmcdata is often derived from the output from vmclisten.
! vmclient: The following fields in the parameter list must be filled in by the producer:
! VMCPFLG1 Usually zero.
! VMCPFUNC Function code. Specify
! VMCPSEND (send, X'0002'),
! VMCPSENR (send/receive, X'0003'),
! VMCPSENX (sendx, X'0004'), or
! VMCPIDEN (identify, X'000A'), as appropriate.
! VMCPUSER Specify the user ID of the target virtual machine unless a user ID is
! specified as an operand of VMCLIENT.
! VMCPLENB For send/receive, specify the maximum reply size required. If specified
! as zero, the current reply buffer is used; it is at least 4056 bytes.
! VMCPUSE Not inspected or modified by CMS Pipelines. May be used for
! transaction codes and reasons, as desired by the protocol built on top of the
! VMCF messages.
! VMCPMID, VMCPVADA, VMCPLENA, and VMCPVADB are set by vmclient as appropriate; the
! contents of the input record are ignored.
! The message ID, user ID, and length fields must remain unchanged from vmclisten.
! error:
! say 'vmcserv ended. rc='rc
! exit RC
! The intent of this arrangement is that you can use an immediate command to terminate the
! server gracefully by shutting the gate. Any failure in VMCSERV REXX will pass a record to
! the gate too; it will terminate and thus cause immcmd to terminate as well. vmclisten can
! be used in a server that also processes TCP/IP requests. This would typically be done by
! adding a pipeline to listen on a port and invoke a server:
! The workings of the server stage shown in Figure 254 on page 156 are rather simple. If
! the function code does not indicate send/receive or the user data does not contain “Permit”,
! the record is passed to the label id: where it is turned into a reject parameter list.
! Otherwise the record is first turned into the input required to perform the desired
! ADRSPACE PERMIT having the ASIT in columns 1 to 8 and the user ID in columns 9 to 16.
! When the permission has been granted, the record is passed to vmcdata to complete the
! transaction.
! This arrangement ensures that the client waits until the permission has been granted. Were
! an identify function used instead, there would be a race between the two virtual machines;
! the client may well have tried to create an ALET before the server has permitted it.
! First, we send the request to get permission. We then create an ALET (see Chapter 18,
! “Using VM Data Spaces with CMS Pipelines” on page 207) to be able to access the other
! virtual machine’s storage.
! Finally, we display the CMS command history information for JOHN3 and also that of
! our own virtual machine, to show that we do access an address space different from our
! own.
Though typically running disconnected, a service machine should also be able to process
commands when it is connected to a terminal. The person at the console might be an
authorised user or the programmer debugging the service machine program. CMS supports
immediate commands that can interrupt the running program: you have no doubt issued HI
to halt a runaway REXX program. Use immcmd to set up an immediate command
processor; the argument specifies the name of the command processor to set up. immcmd
writes a line to the pipeline whenever the user issues the immediate command; the line is a
blank character or any string the user types after the immediate command verb.
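As a sketch, this pipeline writes a console line each time you type the (arbitrarily named) immediate command WAKE:

   pipe immcmd WAKE | spec /WAKE received:/ 1 1-* nextword | console

Whatever you type after the verb WAKE is appended to the message.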
This chapter may be useful to you even if you have no service machines; you may at times
wish to leave your own virtual machine unattended and, for instance, forward a notification
when a particular file arrives in your reader.
The pipeline in Figure 256 generates an infinite number of records with '+60' (but only
one at a time).
literal makes one record;
duplicate * copies this record to the output until it gets return code 12 because its
output is no longer connected.
delay reads a record, waits for 60 seconds, and writes the record.
spec turns the delay interval into a CP command on each line it reads from delay.
Put a literal stage (with any argument string) between delay and spec to generate a
command immediately when the pipeline starts.
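A sketch of such a pipeline; the CP command shown is invented, and Figure 256 may use a different one:

   pipe literal +60 | duplicate * | delay | spec /QUERY TIME/ 1 | cp | console

Because spec overwrites the entire record, the contents of the record from delay do not matter; it serves only to trigger the command once a minute.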
The spec stage shows how to write any string without worrying about the stage separator:
convert it to hexadecimal.
The time interval begins when the stage after delay has processed the output line.
Commands are issued less frequently than once a minute if it takes an appreciable time to
process the response. You might adjust the delay if the processing always takes the same
time. Write a REXX program to take the processing time into consideration; it waits in
OUTPUT while delay processes a request.
Remember the plus when using an interval. This is also a valid delay:
“literal 60|delay”. However, if you issue this command at any time on a Monday, it
wakes up at noon on the following Wednesday. (This is why EVERY REXX adds the plus.)
The first blank-delimited parameter on an input line specifies the time in hours, minutes,
and seconds with colons to separate the parts. When there are one or two parts, delay
assumes zero hours (and minutes) for a relative delay; it assumes zero seconds (and
minutes) for a time of day. Specify all three components of the time to be sure. delay is not fussy
about, for instance, the number of minutes in an hour; +1:67 is the same as +2:7 (or
+2:007).
But if nothing will make the output terminate, it will take a while to exhaust the supply of
records. duplicate * does terminate, at least in principle, after it has written 2147483647
output records. At the rate of one a minute, this will take well over four thousand years
and you may not be that patient.
Issue the immediate command PIPMOD STOP from your terminal to terminate delay while it
waits. You can also force the waiting stages to terminate by passing a record to pipestop;
it has the same effect. Note, however, that this only stops waiting stages; you cannot
terminate a running pipeline this way.
Let us also make an immediate command to issue CMS commands with full command
resolution while the pipeline waits for work. The two commands seem to be useful
together; put them in a subroutine pipeline (subroutine pipelines need not be connected to
the caller’s streams):
Armed with asyncms, write this pipeline instead of the one in Figure 258.
Processing Messages
Use starmsg to get your hands on terminal responses that cp and cms cannot trap.
You might also consider starmsg when CMS’s programmable operator (PROP) does not
satisfy your requirements. Maybe you need to preserve state information across calls to
the action routine: with PROP you must store state information outside the REXX program,
for instance in GLOBALV. In contrast, your action routines run concurrently with CMS/TSO
Pipelines.
Select the type(s) of message to process with the CP command SET. The easiest service
machine to set up processes commands from users on the same system, sent with the CP
command SMSG. More sophisticated servers can service requests forwarded as RSCS
messages. Here is an example of the first kind:
Figure 261.
'CP SET SMSG IUCV'
'PIPE starmsg | spec 9-* 1 | validate | spec 9-* 1 | subcom cms'
Lines from starmsg have eight bytes with the type of message followed by eight bytes
with the origin user ID followed by the message data, if any. In this case the message
type prefix is the same on all lines: discard it. validate (which is shown in Figure 264 on
page 162) ensures that only those we trust get service. Use a decoding network to process
requests from users in particular ways. starmsg sets up the immediate command HMSG to
make it stop. Issue HMSG (or PIPMOD STOP) from the terminal to terminate starmsg.
Warning: Setting CPCONIO IUCV means that all console output generated by CP is
presented to you. This includes the echo of commands you type on the terminal; they are
indistinguishable from CP responses. You also receive messages and warnings when the
corresponding setting is ON.
The double quote characters represent line end characters (X'15'). We entered four
commands while starmsg was intercepting console output:
CP translates the echo of CP commands to upper case, but not the echo of CMS commands.
Try to match the four commands to the responses in Figure 263. Note the change in the
message class prefix after MSG was set to IUCV; after this, the message is no longer treated
as CP-generated console output; also note that there is no time stamp.
Validating a User ID
If the file VALID USERS contains the user IDs of the users who are allowed to access a
server, you can use lookup to filter out those that are not authorised:
spec has evolved from a simple filter to a complex programming language, but the
language can be subset: You can choose a subset you wish to learn; if you do not use a
particular feature, you need not learn how to avoid it.
As you progress through this tutorial, you will realise that some of the statements made in
the early sections might be in need of the odd qualifying footnote. However, if you
choose a subset that does not include the finer points, you do not need to know these finer
points and a sprinkling of footnotes becomes a nuisance rather than a help.
You will find a concise reference for spec in Chapter 24, “spec Reference” on page 692.
Refer to that right now if you prefer to read a complete authoritative reference rather than
a tutorial.
The examples in this chapter are formatted with the spec stage across the entire column
and the input records below to the left and output records below to the right. To make
reading easier, each specification item is on a separate line. For reasons of typography, it
is not possible to put meaningful headings into this layout; you will have to remember that
the left hand side contains the input records and that the result is shown on the right. The
good news is that the examples are run when the book is formatted for printing. What you
see is indeed what it does, even when the examples contain mistakes. This printing
applies to:
Basic Mechanics
spec reads an input record; it then interprets its argument string and produces an output
record when it reaches the end of the argument string. It then repeats this cycle with each
new input record until it reaches end-of-file.
The argument list to spec is called a specification list, because it is interpreted as a list of
specification items. Some specification items are keywords that control how spec operates;
others define the contents of fields in the output record.
spec processes the specification list from left to right, but the output record need not be
built from left to right; a specification item can modify a part of the output record that has
already been filled by a previous specification item.
Input Ranges
The basic specification item copies part of the input record to the output record. It is
specified as an inputRange followed by a number.
The specification list in Figure 266 contains a single specification item. This item contains
an input range (1-*) and an output column number (1).
An asterisk in an input range is interpreted as the beginning of the record when it is first
and as the end of the record when it is after the hyphen; thus, both *-* and 1-* specify
the entire record.
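A minimal sketch of this specification list, copying each input record unchanged:

   pipe literal Hello, Pipe! | spec 1-* 1 | console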
To select a subset of the input record and indent it in the output record:
Figure 267 shows that an inputRange can select things other than just columns. WORDS
(which can be abbreviated to W) specifies that the range refers to blank-delimited words.
In this example the first word of each input record is inserted in the output record
beginning in column 5. The first four columns are filled with blanks.
When you specify a word range, spec interprets that as the range of columns from the
beginning of the first word to the end of the last one. It does not squish out multiple
blanks within such a range:
In this example you can remove the excess blanks easily; just do the three words one at a
time:
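A sketch of this technique (the data are invented):

   pipe literal alpha   beta   gamma | spec word 1 1 word 2 nextword word 3 nextword | console

The output record contains the three words separated by single blanks.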
NEXTWORD (which can be abbreviated to NEXTW and even to NW) specifies that the field is
appended to the contents of the output record after a blank is added as a separator. The
blank is omitted when the output record is empty.
Note that you must specify WORD for each specification item that refers to a word range;
this will allow you to refer to words in some specification items and to columns in others.
spec also supports tab-delimited fields. Just as words are separated by blanks, fields are
separated by horizontal tabulate characters (X'05'). But whereas words can be separated
by more than one consecutive blank, two adjacent tabulate characters have a null field
between them (that is, a field of length zero).
You can specify a different tabulate character with the FIELDSEPARATOR keyword (or its
synonym FS). To move the contents of fields that are delimited by equal signs to specific
columns:
Notice that the first two records contain only two fields; the third record contains four
fields; the second and third records contain null fields.
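As a sketch with an equal sign as the field separator (the data are invented):

   pipe literal a=bb==dddd | spec fs = f1 1 f2 5 f4 10 | console

The third field of this record is null; because a reference to it would carry no explicit output length, such a specification item would simply be ignored.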
An inputRange can contain a negative number; this specifies that the count is from the
end of the record rather than from the beginning:
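For example, a sketch that moves the last word of each record to the front:

   pipe literal one two three | spec word -1 1 word 1 nextword word 2 nextword | console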
The general form of a range consists of two numbers separated by a semicolon. Thus, there
is a third idiom to refer to the entire record: 1;-1. When both numbers are positive,
there is no difference between using a semicolon and using a hyphen to delimit the numbers.
When the two numbers have the same sign, the first number must be less than or equal to
the second one; it is an error to specify an ending column that is before the beginning one.
(Recall that -2 is less than -1.) When the numbers have different signs, a null input field
is used when the beginning position is after the end position:
The second output record is a null record (it contains no data), because the field to be
written started at the beginning of the second word (the first “r”) and extended to the end
of the second last word (the first “d”). Since the input field ends before it begins, the
output field is null. (Null records are not written to CMS files, because CMS does not
support null records in files that have variable record format. But spec produces a null
record all the same.)
When the first number in a range is positive, you can specify a count rather than the last
number. The count is specified after a period and it must be positive:
CMS/TSO Pipelines processes the input range from right to left. It starts with the complete
record. It then processes word 3; this string becomes the input record for the substring
expression. You can mix fields, words, and column ranges within a substring expression;
you can even have different field separators and word separators for different parts of the
expression.
! Structured data: Rather than referring to the absolute column, word, or tab-delimited
! field, you can declare structures that contain members, as described in Chapter 6, “Proc-
! essing Structured Data” on page 91 and in the description of structure.
! Such structure definitions can be created manually or possibly by a utility from an already
! existing machine readable record layout, or even dynamically.
! In this chapter we shall use the structure in Figure 276 to show examples of the use of
! structured data.
! Read the literal as: Structure str contains a member named mem, which is four bytes
! long, has no type associated, and begins in the first column. The next four columns contain
! member char, which is of character type as indicated by the single character. The next
! four columns contain member bin, which is a binary number in two’s complement
! notation as indicated by the type D. The next eight columns contain member float, which is
! a System/360 hexadecimal floating point number. The final member of the structure is
! pack, which is a packed decimal number. The length four accommodates seven digits and
! a sign. The scale is two (there are two decimals in the number); this is the data type
! known as computational-3 to COBOL programmers. Structure and member names are case
! sensitive.
! Because binary data are cumbersome to construct, and so as not to obscure the examples
! by creating such numbers, we have prepared a two-record file, which is dumped in Figure 277.
! The important point is that, except for data typing and scaling of packed decimal data, a
! member of a structure is simply a symbolic way to specify a particular substring of the
! input record; thus most of our examples will show column numbers as that keeps the
! example compact, but you should use structures and members for production.
! The utility of structures comes, of course, when the record layout changes; you no longer
! need to track down the various EXECs that are affected by a change.
! This reads the sample file, selects member char, and prints it.
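! In a sketch, such a pipeline could be written like this (the file name is invented):
!
!    pipe < sample file a | spec qualify str m char 1 | console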
Literals
To add a literal string to each record:
In Figure 279, the first word is inserted in column 1 and the literal string is appended after
a blank. WORD can be abbreviated down to W and you can elide the blank between the
keyword and the word number.
NEXT specifies that the field should be abutted to the contents of the output record so far.
(X'5C' is the hexadecimal representation of the asterisk.)
! Manifest Constants
! A manifest constant is also literal data, but it is a four-byte binary value. Typically, a manifest
! constant is used to insert a particular value in a control block or parameter list being built.
! pipe struct list vmcparm | ...
! ... pick from w1 == /vmcpfunc/ to after substr 1.8 of w1 == /vmcpsend/ |
! ... console
! vmcpfunc D 3.02
! vmcpauth=0
! vmcpuaut=1
! vmcpsend=2
! Ready;
! pipe spec qualify vmcparm eof m vmcpsend m vmcpfunc | spec 1-* c2x 1 | console
! 40400002
! Ready;
! The manifest constant is a binary constant of four bytes. It is by default entered into the
! output field aligned to the right. (Using EOF is a handy way to force spec to generate a
! null input record internally.)
The keyword NUMBER refers to a field maintained by spec. The field is ten characters
wide; the number is aligned to the right with leading zeros suppressed.
Output Placement
So far, the output field has contained precisely the characters in the input field.
You can specify the size of the output field to make it shorter or longer than the input
field. The input field will be padded with blanks or truncated, as required to fill the width
you have specified:
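A sketch (the data are invented):

   pipe literal first second | spec word 1 1.10 word 2 12 | console

The first word is placed in columns 1 through 10, padded with blanks to the specified length; the second word begins in column 12.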
With a placement option, you can control how the field is inserted into the output record.
You can align the field to the left or to the right; or you can centre it. When you use a
placement option, the input field is stripped of blanks before it is placed. To put the
sequence number into columns one through five aligned on the left:
Figure 284 also highlights the fact that you need not copy any input fields to the output
record; you still get as many output records as there are input records.
An input field that does not exist in a particular input record is considered to be null; that
is, it contains no characters. When a null input field is referenced in a specification item
that does not specify an explicit length, the specification item is ignored. In particular, the
output record is not padded to the position of the output field:
Padding
When an output field is placed beyond the current end of the output record, the gap is
filled with the pad character, which is also used when output fields are placed with a
particular length.
You can use PAD to change the pad character to use in subsequent specification items:
The first word is inserted after four blanks, because the blank is the default pad character
and it has not yet been overridden; the second word is inserted after five asterisks in the
first record and after four asterisks in the second record.
You can resort to a subterfuge to put the record number in the first five columns and insert
leading zeros:
Rather than supply an operand to specify no leading-zero suppression (there is no such
operand), you can use PAD to specify the pad character to be used when the stripped
number is inserted into the output record; thus, the net effect is the one desired. Note that
NEXTWORD inserts a blank irrespective of the setting for the pad character.
Conversion
| This section discusses explicit conversion. When you define structures with typed
| members, their contents are converted automatically to the desired form; you may not
| specify explicit conversion too.
Conversion can be used to make binary data visible as well as to turn printable data into
the internal representation, which is the form numbers have inside the computer.
Conversion to printable form is often used with sql SELECT. Here, however, is a spec
stage that formats the first eight characters of a line into a form often used by
programmers:
The conversion used in the example in Figure 288 unpacks a byte of data into two bytes
in hexadecimal notation. The eight bytes of input data are split into two fields, which are
printed with a blank between them. You can see that a short input field is not padded
before conversion.
The input data are placed in the output record a second time, this time without conversion.
An asterisk is inserted in column 19 followed by eight bytes of input data and a closing
asterisk. It would be normal to translate any unprintable characters in the original record
to blanks in a subsequent xlate stage.
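Putting the items together, a sketch of the stage just described (the input data are invented):

   pipe literal ABCDEFGH | spec 1.4 c2x 1 5.4 c2x 10 /*/ 19 1.8 next /*/ next | console

On an EBCDIC system this writes: C1C2C3C4 C5C6C7C8 *ABCDEFGH*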
READ assumes a null record when it gets end-of-file. READSTOP, in contrast, terminates the
specification list:
The multistream support in spec follows the same pattern as the multistream support in a
REXX pipeline filter. That is, you first select the stream you wish to read from. Then you
can use the normal read operation to read from the stream.
spec is different in one respect, however. It synchronises its input streams before it starts
processing a set of input records; and it consumes the set when it comes to the end of the
specification list.
The synchronisation operation ensures that all records are available; and the
synchronisation operation is easy to understand. But if two input streams originate in a
common stage, for example chop, the pipeline is likely to stall unless you take precautions.
Refer to “Ensure the Pipeline Does not Stall” on page 88.
Note that the record number is incremented at the beginning of the cycle; it is constant
during a cycle. Writing a record during the cycle does not increment the counter; nor does
reading a record during the cycle.
The pipeline in Figure 293 produces two output files, both containing the same number of
records as in the file INPUT FILE. The file FIRST WORDS contains the first word of each
input line; SECOND WORDS contains the second word of each input line.
Unlike SELECT, OUTSTREAM takes effect when the record is written, not when data are
placed in the output record. You can build only one output record at a time.
Expressions
spec performs decimal arithmetic with thirty-one digits precision. You can save the result
of a calculation in a counter, where it can be stored for use in a subsequent record.
You can format the contents of a counter for printing under control of a picture, which is a
pictorial representation of the formatting you require.
You can suppress the automatic writing of an output record so that a record is written only
at end-of-file.
Counter Expressions
A counter is identified by a number that is zero or positive. The syntax to specify a
counter consists of a number sign (#) followed by the number of the counter; for example,
#17.
The code point for the number sign is X'7B', which displays the number sign on an
English terminal; however, not all terminals display this code point the same way. (The
character is also called a hash or a pound sign, but this must not be confused with the
currency symbol for pound sterling.) If there is a number sign9 on your keyboard, you can
go ahead and use it. If your keyboard does not have a number sign, some other character
must be used. On a French keyboard, the pound sterling symbol would probably work. It
is easy to find out with this pipeline:
9 Unless you have a Danish keyboard, where you must use the upper case Æ.
The number of the counter is specified after the number sign. Counters are numbered from
zero and upwards. spec can store values in as many counters as you need; there is no
arbitrary limit to the number of counters.
! The values stored in counters may be numbers or strings, so the name counter is slightly
! misleading; maybe register would be more appropriate, but the old name sticks.
The Arithmetic/logic Unit (ALU) implemented by spec can reference data in input records
in three ways:
! Using MEMBER, as we have seen already.
Indirectly through a field identifier that specifies the field that contains the number to
| use. Syntactically, a field identifier is a single letter followed by a colon; it is placed
| in front of the inputRange. Case is respected in field identifiers; thus, there are
fifty-two possible field identifiers.
! Through the record function, as we shall see when we get to expressions.
The following example of using the spec ALU sums the values of the first word of each
input record and prints a running total in each line:
In the example in Figure 295, the first specification item (1-* 1) simply copies the input
record to the output record. The second item (a: word 1 .) associates the field identifier
a with the first word in each record, but it does not place that word in the output record,
because the placement is specified as a period, which means “ignore”.
The third specification item (set #0:=#0+a) can be read as set counter zero to the sum of
its current contents and the contents of field a. Thus, this item accumulates the running
total in counter 0. := is the assignment operator. Note the colon, which distinguishes
this operator from the = relational operator, which tests for numeric equality.
The fourth specification item (print #0 10) “prints” the contents of counter 0. That is, it
places the contents of the counter into eleven characters starting in column 10 of the
output record. By default, PRINT formats the value with leading zeros “suppressed”
| (converted to blanks); you can control the formatting with a picture, as we shall see later.
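The figure itself is not reproduced here, but putting the four items just described together, the stage would read approximately as follows (the input file name is illustrative):

```
pipe < numbers data | spec 1-* 1 a: word 1 . set #0:=#0+a print #0 10 | console
```

Each input record is copied to the output, the first word is captured as field a, the running total is accumulated in counter 0, and the counter is printed starting in column 10.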
! You can also refer directly to members of structures. Members that have a numeric type
! are converted automatically when assigned to a counter. This applies whether you refer-
! ence the member directly or you use a field identifier to reference indirectly a range that is
! defined by a member.
! Rather than supplying the fully qualified member name each time you refer to a member
! of a particular structure, you can declare a qualifier for the current stream:
! You can also specify in which column the structure should start; here we have specified
! the default explicitly (this is a good habit to get into).
! You can specify a separate qualifier for each input and each output stream and you can
! also specify that a qualifier should apply to all input streams, all output streams, or all
! streams.
The example in Figure 295 on page 176 was cast for the reader who is familiar with
REXX. It can be written more compactly by using operators borrowed from C. For
example, the SET item is redundant; the counter can be updated as it is printed:
This example uses the increment operator (+=) to add the contents of the identified field to
the contents of the counter. This is the preferred way to increment a counter before it is
“printed”.
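Sketched in full, with the same input as before, the compact form becomes:

```
pipe < numbers data | spec 1-* 1 a: word 1 . print #0+=a 10 | console
```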
! String Processing
! The example in Figure 298 on page 177 can be made even more compact using some of
! the string functions:
! record() returns the entire input record; and word selects the first blank-delimited word;
! finally, the assignment with add forces conversion of the string "1" to a number, which is
! added to the contents of the counter.
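Based on that description, the item might be written with the string functions like this (a sketch, not the original figure):

```
pipe < numbers data | spec 1-* 1 print #0 += word(record(), 1) 10 | console
```

word(record(), 1) extracts the first blank-delimited word of the input record as a string; the += assignment forces its conversion to a number before it is added to the counter.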
! You can store input string data in a counter and you can concatenate strings and apply
! most of the REXX functions to strings.
! The output position in this example is a computed output position. The wordindex func-
! tion very conveniently provides the position of the third word, at least when there is one.
! Finally, max guards against the case where there are two or fewer words in the record,
! because the word index is zero in that case.
! You should also note that you can print a string, but you must not supply a picture; doing
! so would force conversion to a number.
! Here is an example of concatenating strings (the OR bars are doubled to escape them):
! You can make an expression more readable by putting it in parentheses, because you can
! then sprinkle blanks into it. The previous print item could be written as:
! print ( #0 |||| " and " |||| #1 ) 1
! But it could not have been written like this, because spec has not implemented the blank
! operator that REXX uses to concatenate with a blank:
! print ( #0 "and" #1 ) 1
! The parser is trying to tell you that it does not like two abutted terms. The state number
! (102) has meaning only to the programmer who built the parser (because he can refer to a
! listing that defines the state, which is assigned by the compiler compiler). The parser then
! informs you that it is expecting to see a right parenthesis, a semicolon, or the end of the
! expression (at least, so the programmer would tell you—you might not be quite that
! clairvoyant). You might also wonder why it does not tell you that you should use the
! concatenate operator when it just accepted such a construct in the previous example, but
! such are the ways of LALR(1) parsers (for that is what it is). The good news is that the
! programmer has added many rules for erroneous syntax to the grammar to issue mean-
! ingful error messages, but eventually you will arrive at “crunch point” where the LALR(1)
! parser rears its head.
Note that there are more output records than input records, even though there is no WRITE
item. The reason is that spec performs an additional final cycle when it reaches end-of-
file. It takes this runout cycle to process the specification items after EOF.
You can suppress output for all detail records and print only the summary record at end-
of-file:
PRINTONLY EOF specifies that no output records are to be written until the runout cycle.
During each cycle, the contents of field a are added to counter 0 and the constant 1 is
added to counter 1. Thus, at end-of-file, counter 0 contains the total, as before, and
counter 1 contains the record count. The specification items following EOF display the
contents of these counters.
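A sketch of such a summary-only stage, following the description above (the output columns are illustrative):

```
spec printonly eof a: word 1 . set #0+=a set #1+=1 eof print #0 1 print #1 15
```

No output record is written for the detail cycles; only the runout cycle produces a record, containing the total and the record count.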
You can use STRIP to strip all types of input fields of leading and trailing blanks. In
Figure 305, it is used to strip the leading blanks from the counter being printed.
You can even combine the two SET specification items into one by using the discard oper-
ator:
The SET specification item contains two expressions that are separated by the semicolon
operator. In the example in Figure 306, it works like the REXX clause delimiter, because
the result of the expression is discarded.
The expression is enclosed in parentheses to make it more readable. This allows the use
of blanks to separate the terms of the expression; without the parentheses it must be
written as #0+=a;#1+=1.
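That is, the combined item would read:

```
set ( #0 += a ; #1 += 1 )
```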
Pictures
Try twiddling the input data to explore the numeric range supported by spec:
As you can see, spec does not complain about decimal fractions. A counter can hold
floating point numbers with up to thirty-one decimal digits of precision. The exponent
range is in the thousand millions, which should be quite sufficient for most needs.
If you study the numbers and the results in Figure 307 carefully, you will see that the
computation has been performed without loss of precision, but printing has truncated the
number to an integer. You can specify a picture to control the way the contents of a
counter are formatted. A picture is a string of characters that specifies the desired format;
this string contains one character for each column of the formatted field. The picture is
specified after the keyword PICTURE. Case is ignored in pictures.
999 is a simple picture, which specifies that the number is to be formatted as three digits,
with no sign and no decimal point and no suppression of leading zeros. If, for example,
counter 4 contains the value 16 and the specification item is print #4 picture 999 1,
the output field will be 016. To get suppression of leading zeros, use z, rather than 9, in
your picture. In this case, if the picture is changed to zz9, the output becomes 16. To
allow for negative numbers, use one or more minus signs in the picture. For example, if
the counter contains the value -16 and the picture is ---9, the output will be -16. The
minus sign is said to drift; it is replaced by blanks until just before the first nonzero digit
in the output.
If you omit the PICTURE keyword, spec uses a default picture that has a drifting minus sign
with ten digits and no decimal fraction. Hence the truncation in Figure 307.
The second last specification item computes the average by dividing the total by the count
of observations. The result is increased by five thousandths to ensure correct rounding
when the number is truncated for formatting with the picture.
This particular picture specifies eight hyphens, which represent a drifting sign; a nine,
which represents a digits position; a period, which represents the units position as well as
the character to insert for the decimal point; and two more nines to represent the first two
digits of the decimal fraction. For negative results, a hyphen is inserted into the last posi-
tion that contains a blank.
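Putting the description together, the averaging item would be something like this (the output column is illustrative):

```
print #0/#1+0.005 picture --------9.99 1
```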
Conceptually, the picture is processed by first converting the result of the expression to a
number that has eight digits before the decimal point and two digits after the decimal
point. That is, the number has two digits fewer than the number of characters in the
picture, because the drifting sign and the period each require a position. The digits in this
string are then inserted into the output record under control of the picture. For the
hyphens making up the drifting sign, leading zeros are suppressed and replaced by blanks.
The character 9 indicates that the digit is to be inserted unconditionally. Thus a number
numerically less than one will have a zero digit just in front of the decimal period.
Using counters and pictures, the record numbering shown in Figure 287 on page 171 can
be accomplished in a much simpler way:
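A sketch of such a numbering stage (the picture and placement are illustrative):

```
pipe < input file | spec print #0+=1 picture zzzz9 1 1-* nextword | console
```

The counter is incremented and printed with leading zeros suppressed, and the input record follows after a blank.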
The first word of the output record contains the input field.
The second word shows the number printed with cheque protection where asterisks rather
than blanks are used to suppress leading zeros. The commas in the picture are displayed
as commas if the number has started. The decimal fraction is displayed with two decimal
places. The s character specifies that the sign should be inserted after the number. Zero is
considered positive.
The third word in the output shows the running total printed with a drifting plus sign.
This example is shown here to warn you that a drifting plus results in a negative value
being formatted with no sign. Use a drifting s to prefix a number with either a plus or a
minus.
The last number is too large to print using the picture specified; but the number is well
within the range you can store in a counter.
You can use scientific notation for expressions that have a very large range of potential
values:
The e character specifies the beginning of the exponent field. Even though case is ignored
syntactically within a picture, it is respected in the character to be inserted to signify the
beginning of the exponent. The digits of the exponent follow simplified rules for format-
ting because the exponent is an integer.
In this picture, the number is printed with a leading sign, one digit before the decimal
| point, five digits decimal fraction, the exponent sign, and three digits exponent.
The last number contains an exponent that is too large to print using the picture specified;
but the number is well within the range you can store in a counter.
This picture contains both periods and commas. Thus, the v is used to specify the units
position explicitly, because the periods are not marking the units position; they mark
millions and thousands, respectively. You can also see the way the punctuation characters
are suppressed just like the drifting sign. Notice that zero is considered positive.
Boolean Operators
Every journeyman plumber knows how to write a multistream pipeline that puts an indi-
cation of the equality of words 1 and 2 into column 1. But with spec, this can be done
much more simply:
The result of a relational operator is a number, which is zero for failure and one for
success. This result is inserted into column one of the output record using a picture that
contains a single digit (picture 9). Thus, if the result of the comparison is true, a single
digit 1 is placed in column 1 of the output record; otherwise, a single digit 0 is placed in
column 1.
You can see that the two equal signs mean that the comparison is strict, as defined for
REXX.
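A sketch of such a stage, per the description above (column positions are illustrative):

```
spec a: word 1 . b: word 2 . print a==b picture 9 1 1-* 3
```

The two words are captured as fields a and b; the result of the strict comparison (0 or 1) is formatted with picture 9 into column 1, and the input record follows from column 3.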
Note the processing of the second and third lines. They contain numbers that have “expo-
nents”; that is, scaling by a power of ten. (This confuses REXX programmers too.)
Now you see that using a single equal sign makes the comparison numeric. But unlike in
REXX, a numeric field must contain a number; spec does not revert to strict comparison
when it cannot convert a field to the internal representation of a number. It issues an error
message and terminates instead.
! But there is a datatype function you can use to test the operands:
! datatype returns NUM precisely when the conversion to a numeric value will succeed.
There is even a conditional operator. To find the maximum of two fields in the style of
the C programming language:
The conditional operator first evaluates the expression before the question mark. When
the result of this expression is not zero, the expression between the question mark and the
colon is evaluated and the expression after the colon is ignored. Likewise, when the result
is zero, the expression between the question mark and the colon is ignored and the
expression after the colon is evaluated. Thus, in this example, field a is tested for being
greater than field b. If the result of that test is true, the result of the conditional expression
is the value of field a; otherwise, the result is the value of field b.
The expression above is enclosed in parentheses; this allows the use of blanks to make it
more readable.
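The conditional item described above might read like this, assuming fields a and b have been identified earlier in the specification list:

```
print ( a > b ? a : b ) 1
```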
Conditional Processing
You can test the value in a counter or an input field and issue or ignore specification items,
depending on the outcome. To mark with an equal sign in column 1 all records where the
first two words are equal:
The first word is copied to the beginning of the output record; the relation that holds is
then inserted; and finally, the second word is inserted into the output record. Rather than
specifying the input range once again, id is used to refer back to the item that defined
the field.
! You can even iterate. To reverse every second word of the input line:
! Apart from showing how to write a while group, the example shows an important concept.
! It makes sure that the counter controlling the iteration is always incremented. It does so by
! starting with a value that is one tick less than the first index wanted and then incrementing
! it in the expression that determines when to stop.
! If you do it in other ways, you might forget to iterate and then spec will go on until the
! cows come home (this is a technical term meaning forever) and you will be forced to use
! to stop the show, which is a rather hamfisted way of doing so.
! But then it would loop forever, because += is so low in the precedence hierarchy of
! operators that the increment is 1<=#0, which is always one when there is at least one word
! in the input line; not what you want.
! In general, it is a good idea to parenthesise the assignment operation when the result is
! used further; so good an idea that spec issues nuisance warnings in some cases when it
! detects an operator to the left of the counter being updated.
! As shown above, you can nest IF and WHILE constructs; and you can nest any combination.
! The depth is limited to 16.
You can treat this buffer as an additional input stream, which is selected by SELECT
SECOND.
Using this, you can combine fields from two adjacent input records without using the READ
or READSTOP specification items. Thus, you can construct the output record by intermixing
fields from the two records:
SELECT FIRST is a convenience for SELECT 0; it selects the primary input stream as the
source for the following specification item.
The second reading station contains a null record while the first record is being processed.
This cycle is called the runin cycle.
Likewise, spec runs an additional cycle when it reaches end-of-file. This cycle is called
the runout cycle. The input streams are assumed to contain null records during the runout
cycle, but the second reading station still contains the last record.
The ALU supports built-in functions that return true while spec is taking a runin or a
runout cycle. first() is true during the runin cycle; eof() is true during the runout
cycle.
The runin cycle is skipped if the second reading is the only input stream used. The first
record is then loaded directly into the second reading. The runout cycle is skipped when
the second reading is not used and no EOF specification item is issued.
Not all specification items in the list are issued during runin and runout cycles. The rules
are somewhat arcane; refer to the reference if you are mixing SELECT FIRST and SELECT
SECOND.
| The example in Figure 325 on page 193 processes data from the first reading only. This
| is appropriate for titles or similar that come before the run of records that has a particular
| key. Use the second reading station when you wish to compute subtotals.
Control Breaks
Field identifiers have other uses than to supply numeric data for computations.
spec can compare a field in two adjacent records and issue specification items only when
the fields do not contain the same data. A field identifier defines a field to be compared
between adjacent records.
The input file is usually sorted on a key field before being passed to the spec stage that
generates a report. A control break means that the key has changed between two adjacent
records.
Suppressing Repetitions
For example, you can suppress the contents of the first five columns in the output record
when they are the same as in the previous record:
The first specification item has a field identifier (a) associated with it. The field itself
covers the first five columns of the input record. The placement is a period, which means
that the item has no effect on the output record. Because the field identifier is used in a
subsequent BREAK item, the contents of the field are compared with the contents of the
same field in the previous input record, which has quietly been squirrelled away in a buffer
for this purpose (the second reading). A break on level a is established when the two
fields are not identical. Because there is no previous record when the first record is proc-
essed (the previous record is considered null), a break is established on the first cycle.
The second specification item copies the remainder of the input record to the output record,
inserting blanks in columns 1-6.
The third item tests if a break is established for field a. If no break is established, the
remainder of the specification list is ignored.
The fourth item inserts the key (the contents of field a) into the first five positions of the
record. Because this specification item is issued only when a break is established, subse-
quent output records for this key contain blanks in the first five columns.
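Collecting the four items just described gives approximately this specification list:

```
spec a: 1-5 .
     6-* 7
     break a
        1-5 1
```

The key is compared between adjacent records via field a; only when it changes is the last item issued, re-inserting the key into columns 1-5.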
The built-in function break() returns true if a break is established for the field specified.
This is used to generate the title line. Note the use of WRITE; without it, the title would be
prefixed to the output record that is written at the end of the cycle.
Printing Subtotals
Let columns one and two contain the part number and columns three through five contain
the number shipped:
SELECT SECOND is issued first to cause the second reading station to be used as the source
of the data. Thus, a control break is active while the last record having a particular key is
being processed.
The second specification item identifies the part number with a and copies it to the output
record. Likewise, the third item identifies the number shipped with b and prints that as
well.
A control break is established when the part number of the following record is different
from the one in the current record (strictly, when the content of the field identified by a
changes).
The last three specification items are issued only when the break is established. The first
of them writes the detail record in the output buffer so that the subtotal can be written as a
separate record; the second one prints the subtotal (the contents of the counter); and the
last one resets the counter to 0.
In Figure 328, the discard operator (the semicolon) is used to reset the counter after its
contents have been fetched for printing. The semicolon operator first evaluates its left
hand operand, which is simply the contents of the counter; this becomes the result of the
discard operator. It then evaluates the right hand operand and discards that result. Thus,
conceptually at least, the contents of counter 0 are moved to the output record before the
counter is reset to zero.
As you would expect, the semicolon operator has the lowest precedence of all operators.
After each subtotal is printed, it is added into counter 1 to accumulate the grand total.
Printing the grand total is just like printing the subtotal, except that it is done only once.
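A sketch of the subtotal stage along those lines (column positions are illustrative; the grand-total items of the original figure are omitted):

```
spec select second
     a: 1-2 1
     b: 3-5 4
     set #0 += b
     break a
        write
        print ( #0 ; #0 := 0 ) 4
```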
You can also print the subtotals before you load the data from the first reading station, but
now you must suppress printing during the very first break. You must also print the
subtotal for the last batch during the runout cycle:
The second specification item tests for a break on level a, except for the break during the
runin cycle. If such a break occurs, the subtotal is written in a separate record and the
counter reset.
The second IF tests unconditionally for a break on level a, which ensures that the first
record contains the part number of the first part.
Because the second reading station has not been selected, the runout cycle starts at the EOF
specification item. Thus, to print the last subtotal, the specification item to print it is
repeated here.
You can see that it was much easier to control totals by processing data from the second
reading station, as was done in Figure 329 on page 196.
Break Hierarchies
Control breaks are often hierarchical. When you are generating an invoice, you might
wish to group detail records for individual part numbers together and compute subtotals for
each. Of course, you would also want to print an invoice total for each customer, and no
doubt some grand total at the end.
To do this, you will need to define several types of control breaks; in this case, at least one
for part numbers and one for customer numbers.
Note also that when the customer number changes, a control break should be generated for
the part number first, even if the part number is unchanged. To support this, control
breaks are ordered in a hierarchy, which has a at the lowest level and Z at the highest
level.
SELECT SECOND selects the data source as being the record at the second reading station.
Thus, when testing for a break, the previous record is compared to the current one. It is
easier if you readjust your focus to the record from which the data come; then a control
break means that the current record is the last one for that particular key.
Break level a is associated with the first column and break level b is associated with the
second one. The results of the break() function for the two identified fields are printed
after the fields have been moved to the output record. The value of the eof() function is
also printed.
The function results are printed in the order of the break hierarchy; you can see that the
break on b when the first record is at the second reading station forces a break for a as
well, even though the first column is unchanged.
But a break is not established until the specification item that defines the associated
identifier is issued. Printing the function results after the specification item loading the a
field has been issued uncovers some possibly surprising behaviour:
It is an error to refer to a break level that has not been identified with an input field.
Therefore, when the first set of function results is printed, the column for break level b is
left blank.
Look at the output for the first record. No break level is established when the specification
item identified by a is issued, because the second record also contains a in the first
column.
But when the specification item identified by b is issued, a break is established at that level
and this forces the break at all lower levels to be established as well.
On the runout cycle, in contrast, the maximum break level is established at the beginning
of the cycle.
When you have two separate fields and you wish to issue specification items when either
of the fields break, you might be tempted to test only the break level of lower rank, having
the expectation that a break of higher rank will force the break on the lower level:
But this is a mistake, because unless you test the break level, it is not treated as a break
level. To issue some specification items when either a or b breaks:
Thus, even though the first BREAK item looks redundant, it is not. The order is important;
you must test the higher rank(s) first.
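That is, the break items would be ordered like this, with the items to issue placed after the lower-rank test (the print item is illustrative):

```
break b
break a
   print "*" 1
```

The first BREAK registers b as a break level, so that a break on b forces a break on a; the items after BREAK a are then issued when either field changes.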
! This example shows the use with EOF. The first word of each input record specifies the
! output column of the remaining words in the record.
! Had KEEP been omitted, you would see the last record only:
In this example, there are no input records; thus, both counters contain zero at end-of-file.
spec does not treat zero divided by zero in any special way; it reports the error rather than
risking the potential divide exception. You can prevent an error in such a case:
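Using the conditional operator, the guarded division might read:

```
print ( #1 = 0 ? 0 : #0/#1 ) 1
```

When the record count in counter 1 is zero, the division is never evaluated and zero is printed instead.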
Examples
Page Formatter
When printing a file, you might wish to add headings and page breaks in the same way
accounting machines used to do. You can write a REXX filter for this or use specs.
The example in Figure 340 on page 203 generates an output file that contains ASA
carriage control in the first column. The number 1 means that the record should be printed
at the top of the next page; a blank means that the record should be printed on the next
line.
parse arg pl pw
If pl=''
Then pl=60
If pw=''
Then pw=80
Counter 0 is used to keep track of the current line number on the output page; counter 1 is
used for the current page number. When the line number is 0, a page header is written
and the page number is incremented. Then a blank line is written, and the line number is
set to 3. Note that counter 0 is initialised to zero, which means that a heading is included
in the first page.
After each input record has been written to the output, the line number is tested for being
greater than or equal to the page size. If it is, it is reset to zero so that a heading is
generated in the next cycle; otherwise it is incremented by 1.
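One way such a stage might be structured, assuming the default page length of 60 (this is a sketch of the logic described above, not the original Figure 340; the heading layout is illustrative):

```
spec if #0 = 0 then
        print "1" 1
        print #1 += 1 picture zzz9 75
        write
        print " " 1
        write
        set #0 := 3
     endif
     print " " 1
     1-* 2
     set #0 := ( #0 >= 60 ? 0 : #0+1 )
```

The "1" in column 1 is the ASA control character for a page eject; the blank in column 1 of each detail record means “print on the next line”.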
(Sample output from the page formatter: detail lines such as “Line 4” and “Line 5”, with a
page heading line “114 Jul 2010 Page 3” inserted at the page break.)
! And Finally
! Before leaving this tutorial we remove the definition of the structures we defined in
! Figure 276 on page 167.
: Lennon/McCartney
: Rita reports on the CPU usage of a pipeline set by stage and pipeline specification. Rita
: also reports the largest amount of virtual storage used by each stage for work areas and
: buffers.
: Rita comes with CMS on the examples disk, usually MAINT 193.
| To invoke Rita, change your PIPE command to RITA. RITA invokes the PIPE command to
| run the pipeline with options to capture timing information and a stage to reduce this infor-
| mation.
: Rita displays CPU usage in milliseconds for each stage and pipeline, both inclusive of and
: exclusive of the time used by subroutine pipelines invoked by the stage or pipeline.
: Rita writes a summary on the console and detailed information to a disk file. The file
: name of Rita’s output file is the first eight characters of the option NAME specified on the
: RITA command. The file type is of the form RITAnnn, where nnn is the first unused
: sequence number.
: For a more detailed discussion and many examples of using Rita to tune pipelines, see
: Melinda Varian’s Streamlining Your Pipelines.
: If you are too busy to read this extremely informative paper, beware of this:
: If a stage (it would have been written by a user) goes into a wait state outside of the
: control of CMS Pipelines, the wait time is counted as CPU time.
: Pipelines added with ADDPIPE are not represented in the inclusive number for any
: stage.
: The numbers displayed by Rita do not include the CPU usage of other pipeline sets
: started with PIPE commands or runpipe stages from within the pipeline set Rita is
: measuring.
: Rita produces the best results when all CALLPIPEs and ADDPIPEs have the option NAME
: specified and when any specifications that have the same name are also identical in the
: stages invoked.
: NUCXLOAD RITA before running a pipeline that contains a ldrtbls stage. This avoids
: Rita interfering with the loader tables.
: Rita is likely to add less overhead when RITA REXX is compiled.
: While the results from Rita will be indicative of the relative performance of various stages
: and subroutine pipelines, Rita comes at a price:
: Rita enables the message level to gather accounting data. This, in turn, causes the
: pipeline dispatcher to take a longer path than it would take otherwise.
: Rita issues the pipeline specification through runpipe and processes the output to
: extract pipeline accounting messages. This may add an overhead of 5% or more.
: Example
: Invocation and summary followed by detailed output truncated on the right:
: rita literal abc|append literal def|hole
: CPU Utilization by Pipeline Specification 28 Nov 2006 16:56:46
:
: 0.003 ( 0.003) ms total in "Append/Preface" (1 invocation)
: 0.013 ( 0.010) ms total in "NoName001" (1 invocation)
:
: 3.078 ms total.
:
: Detailed output from Rita in UNNAMED RITA001.
: Ready; T=0.21/0.25 16:56:46
: pipe < unnamed rita001|cons
: CPU Utilization by Pipeline Specification from: 28 Nov 2006 16:56:46
: to: 28 Nov 2006 16:56:46
:
: CPU utilization of pipeline specification "Append/Preface":
: 0.003 ( 0.003) ms ( <1K) in stage 1 of pipeline 1: literal de
: 0.003 ( 0.003) ms total in "Append/Preface" (1 invocation) <=====
:
: CPU utilization of pipeline specification "NoName001":
: 0.003 ( 0.003) ms ( <1K) in stage 1 of pipeline 1: literal ab
: 0.006 ( 0.003) ms ( <1K) in stage 2 of pipeline 1: append lit
: 0.004 ( 0.004) ms ( <1K) in stage 3 of pipeline 1: hole
: 0.013 ( 0.010) ms total in "NoName001" (1 invocation) <=====
:
: 0.013 ms attributed to stages; no virtual I/O.
: 2 pipeline specifications used (4 stages).
: 2 pipeline specifications issued.
:
: 0.005 ms in general overhead.
: 0.018 ms in scanner.
: 0.006 ms in commands.
: 1.024 ms in dispatcher.
: 0.000 ms in hunt.
: 2.012 ms in accounting overhead.
:
: 3.078 ms total.
: Ready; T=0.01/0.08 16:58:57
! This chapter describes how to combine CMS Pipelines built-in programs to manage data
! spaces, address list element tokens, and memory mapped minidisks. It contains a complete
! terminal session as an example, including supporting commands that are not directly related
! to data spaces. If you choose to try the session yourself, you should perform the steps in
! the same sequence as shown here and in one virtual machine, and pay attention to the
! contents written to files where an ASIT is saved to disk for later reference.
! CMS Pipelines does not expose the entire repertoire of CP macros that are available and it
! also makes a few simplifying assumptions.
! Your virtual machine must be in XC mode to use the data space support and you must
! have been given privileges in the user directory entry for your virtual machine if you wish
! to create data spaces or share them, because by creating a data space you increase your
! virtual machine’s footprint. This line will allow you to create up to ten address spaces of
! a maximum aggregate size of one gigabyte; further, you are allowed to share the data
! spaces you create:
! xconfig addrspace maxnumber 10 totsize 1g share
! The VM help library contains information about VM data spaces. Issue the CMS command
! HELP VMDS MENU to display a menu of the CP macros in support of data spaces.
! CMS Pipelines provides interfaces to most of the CP macros with adrspace, alserv, and
! mapmdisk. In addition, diskid supports reserved minidisks.
! Terminology
! Your virtual machine’s real storage is formally called the host-primary address space of
! your virtual machine. With appropriate privileges, you can also create data spaces, which
! are separate sets of pages that can contain data, but from where no instruction can execute.
! The virtual machine real storage and data spaces are collectively referred to as address
! spaces.
! The name of your real storage is the reserved word “BASE”. You give a data space a
! name when you create it; the name is up to twenty-four characters made up from the
! twenty-six English letters, digits, or any of the special characters # $ @ _ - (number sign,
! dollar sign, at sign, underscore, and hyphen). Note that the first three special characters
! are national use; your terminal and keyboard may display these differently. Data space
! names are upper case, but CMS Pipelines translates them automatically, so you can specify
! them in whatever case you like. The combination of user name and data space name must
! be unique; that is, a virtual machine can have only one data space by a particular name at
! any time.
! An ASIT for the real storage of a virtual machine is called a virtual configuration
! identification token (VCIT); it identifies the address space uniquely within the IPL of the VM
! system; that is, ASITs are not reused until the system has been shut down. The VCIT
! identifies the virtual machine uniquely in standard CMS.
! You can discover the ASIT by creating an address space or by querying it.
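For example, the ASIT of a data space could be captured in a file for later use. This is a sketch; it assumes a data space named DS1 already exists, and it writes the file ds1 asit that is read later in this session:

pipe adrspace query ds1 | > ds1 asit a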
! To use a data space you must obtain an access list entry token (ALET) for the address
! space. This value is loaded into an access register by CMS/TSO Pipelines to identify the
! address space you wish to reference. If you are writing Assembler programs, you would
! then switch to access register mode to access the data space and switch back to primary
! space mode when you are done.
! The contents of a data space are either something you put there or the data space is
! mapped to a minidisk. The contents last until the data space is destroyed or you IPL the virtual machine.
! As CMS does not know of data spaces, end of command has no effect on a data space.
! The contents of the ASIT is not specified; it is just a handle, but you are assured that the
! value is unique for the duration of the IPL of VM.
! However, if you watch the ASIT of your real storage you will soon note that the lower
! word is incremented as you IPL your virtual machine and as you create data spaces.
! This attracts a warning, but does not terminate adrspace. In fact, adrspace acts like a
! selection stage; it passes the name of the unknown data space to its secondary output
! stream, when it is defined.
! Finding one’s VCIT is done so often that you might write MYASIT REXX as shown in
! Figure 346.
! Displaying data in hex is also done a lot; C2X REXX hides the complexity of formatting
! four bytes at a time (this is not a trivial piece of 407 plugboard wiring).
! parse arg as
! If as\=''
!    Then 'issuemsg 112 FPLC2X x'c2x(as)
!    else
!       'callpipe (end \ name C2X.REXX:6)',
!          '\*:',
!          '|spec set #0:=length(record());#1:=-3',
!          ' while (#1+=4)<=#0 do',
!          ' if (#1>1 & #1//16=1)',
!          ' then / / n endif',
!          ' print c2x(record(#1, 4)) nw',
!          ' done',
!          '|*:'
! exit RC
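Figure 346 is not reproduced here, but a minimal sketch of such a MYASIT REXX might look like this (an assumption based on the adrspace QUERY description in this chapter; the real figure may differ):

/* MYASIT REXX: write the VCIT of this virtual machine (a sketch) */
'callpipe adrspace query base | *:'
exit RC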
! This example is contrived because you could obtain the same information directly without
! specifying ALET, but it still shows the mechanics.
! You can also access someone else’s data spaces, if that virtual machine has granted you
! permission and you know the ASIT. Either the owner of the data space has left the ASIT in
! a prearranged place, for example in SFS, or the address space name is well known, in which
! case you can use adrspace QUERY to discover it for yourself.
! The output is the ASIT of the data space. Note that the size has been rounded up to the
! nearest megabyte segment. It is a good idea to save the ASIT either in a disk file or a
! REXX variable. As there is no REXX environment active, we store it in a file here. (But
! you can always discover it by adrspace QUERY as long as you remember the name of the
! data space.)
! pipe < ds1 asit | alserv add write | > ds1 alet a | c2x | console
! 01000002
! Ready;
! This creates a file that will be needed in Figure 357 on page 214.
! Note that we now have a megabyte of shiny new zero bits available by using ALET 2.
! Contrast it with the contents of real storage as shown in the last command. (ALET 0 is
! reserved and always refers to the primary space of the virtual machine; ALET 1 is not valid
! on CMS; it is used to reference the secondary address space on z/OS.)
! You can omit the leading X'01' of an ALET on CMS; CMS Pipelines supplies it for you as
! that is the only format that CP supports.
! The ALET is a number between 2 and the maximum number of ALETs allowed for your
! virtual machine, up to 1023, which is the limit in the hardware architecture. ALET
! numbers are predictable: the smallest unassigned number is always used.
! Let us put something into the data space and even in the first two pages for good measure:
! pipe literal Killroy was also here | pad 32 | storage alet 2 1000 32 e0
! Ready;
! And you are undoubtedly not surprised to see the data staying in the data space.
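Reading the data back with storage as a first stage confirms this (a sketch, using the same ALET, address, and length as the storage command above):

pipe storage alet 2 1000 32 | console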
! Pass the ASIT of an address space that you own to adrspace PERMIT.
! Here the user OPERATOR is given read only access to the real storage of your virtual
! machine.
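Such a command might look like this (a sketch; the operand order and the read-only default are assumptions, so check the adrspace reference for the exact syntax):

pipe adrspace query base | adrspace permit operator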
! Use adrspace ISOLATE to stop sharing an address space. This is all or nothing at all: all
! permissions on the data space are lost; you cannot fall out with just one of your friends.
! That said, sharing of address spaces is not as easily done as it might seem:
! The grantee must discover the ASIT, which can become rather complicated, and add an
! ALET to its access list.
! VM has no facility to grant public access to a data space; permissions must be granted
! individually.
! All permissions to a data space are dropped when it is isolated; there is no facility to
! drop a particular permission.
! The virtual machine that is granted permission must be logged on.
! IPL or reset of a virtual machine drops all permissions granted to it previously.
! IPL or reset of the permitting virtual machine deletes all data spaces and clearly all
! permissions granted on them.
! An ALET that you have obtained for a data space in another virtual machine may thus
! go stale at any time. This is reflected by an addressing capability exception, program
! interrupt code X'136', from which CMS Pipelines cannot recover. Stages that use an
! ALET are, however, able to determine its validity while validating operands.
! Thus, to set up a service virtual machine to maintain a database in a shared data space,
! you will also need to implement some kind of protocol to enable the server to authorize
! clients. Refer to “Example Server Application” on page 156 for an example.
! Mapped minidisks are used by DB/2 for VM to access the database directly as virtual
! storage. While you can map any disk that is formatted with 4K blocks, maintaining the
! file system structures or even just accessing the contents of files is not trivial, but a
! mapped minidisk would be appropriate for a disk repair kit.
! Let us get a temporary disk to play with and format it. It must be formatted with 4K
! blocks, but that is the default for 3390s, so we need not specify that option.
! The output from FORMAT is truncated in formatting. The somewhat strange way of
! providing responses to the prompts from CMS command FORMAT allows the sample to be
! run automatically while this book is formatted.
! pipe diskid 102 | spec 1.2 c2x 1 3.2 c2d nw 5.4 c2d nw | console
! 0102 4096 8
! Ready;
! (RESERVE does not like a trailing blank in the response to the prompt.)
! Strictly speaking, we could omit reserving the minidisk and use all of it, but reserving the
! disk prevents trouble if it should ever be accessed. There is no need to access the
! minidisk; you could read the contents of the disk with trackread rather than mdiskblk, which
! does require the minidisk to be accessed.
! We now have 172 blocks to play with at offset 8 from the beginning of disk 102. To store
! the data space into this file we must first define a minidisk pool; in this case the pool will
! contain one extent only, the temporary disk.
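The pool definition might be issued like this (a sketch; the extent record layout of device number, starting block, and block count is an assumption based on the diskid output shown earlier):

pipe literal 102 8 172 | mapmdisk identify | console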
! This assigns the reserved portion of the minidisk to blocks 0 through 171 of the minidisk
! pool. The null record indicates that the pool has been defined without error. (Had you for
! some reason not passed any extent definitions to mapmdisk IDENTIFY, it would not have
! produced an output record.)
! A virtual machine can have only one minidisk pool defined at any time; any existing pool
! is quietly replaced by the new one.
! We are now ready to map the minidisk into the data space that was created in Figure 350
! on page 211. We map just the first page of the data space onto the first block of the
! reserved file. RETAIN instructs mapmdisk to leave the contents of the data space intact, the
! default being to use the data on the minidisk.
! It would appear that the minidisk pool is not referenced once pages have been mapped and
! that the pool could be redefined while pages are mapped, but this is not documented to be
! the case.
! Having mapped the data space, we save the contents to the minidisk. mapmdisk SAVE
! waits while CP performs whatever page out operations are required for changed pages. It
! writes parameters from the interrupt that marks the completion of the operation.
! The output from mapmdisk SAVE is almost all zeros when the data space has been
! hardened onto the minidisk. X'01' in column 9 means that we have received a confirmation
! interrupt for the save operation; no other values are possible. The leftmost bit of the
! following column indicates the validity of the first eight bytes; the contents are valid when
! this bit is zero (which is a bit unconventional); the rightmost seven bits of this byte
! contain the completion status, which should be X'00'. Error codes are described in z/VM: CP
! Programming Services.
! You can run multiple mapmdisk SAVE stages concurrently, for example one for each data
! space.
! So the first page was hardened, but the second one was not, as we should expect.
! Then let us destroy the mapping of the data space, but keep the data space:
! Unmapping a mapped page also discards its contents; unlike when mapping, CP offers no
! choice this time. Of course, the page that was not mapped retains its contents.
! The first page went away, but we can have it back by redefining the mapping.
! As data spaces consume CP resources, a good citizen destroys unneeded data spaces. If
! you do not, it all goes away in a small puff of white smoke next time you IPL your virtual
! machine, even permissions you have granted on your base machine storage.
! For the examples, we use a single data space, which is created as shown in Figure 363.
! Several examples in Chapter 18, “Using VM Data Spaces with CMS Pipelines” on
! page 207 show how to display and store data using STORAGE.
! While the format of the output of instore is unspecified, it contains the ALET into which
! the file is stored. The point to note is that outstore determines the ALET from the
! descriptor record it reads; you need not specify it (you cannot specify it unless outstore is
! first in the pipeline).
! When the file is in a data space, outstore copies each record into a buffer in the primary
! space before writing it to its output; thus, other parts of CMS/TSO Pipelines are not aware
! of address spaces.
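A simple round trip shows the descriptor record flow (a sketch; instore is shown without data space operands, so the file is buffered in the primary address space):

pipe < profile exec a | instore | outstore | count lines | console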
Syntax defines valid argument strings for a command. Semantics define what the command
does when it is issued. For instance, when a program accepts a list of items of some kind
as the argument, syntax does not prescribe a limit on the count of items; semantics might
require a maximum of 10 entries in such a list.
Keywords are shown in a Gothic font with the minimum abbreviation in upper case.
When writing the keyword, you must provide at least the minimum abbreviation. Write
the keyword in upper case or lower case; write it mIxEd if you like.
──COMMAND──KEYword──
Syntactic variables are shown in lower case slanted type. Provide a number, address, or
the name of an object where there is a syntactic variable. Figure 366 on page 222 defines
the syntax variables used by CMS/TSO Pipelines.
──COMMAND──number──
A reference to a fragment of a syntax definition breaks the main path with vertical bars.
The fragment is defined later in the diagram.
──COMMAND──┤ snumber ├──
snumber:
├──┬─number──┬──┤
   └─-number─┘
When you must choose between two or more items, they are stacked with the first one on
the main path.
──COMMAND──┬─argument─┬──
           └─KEYWORD──┘
When you can select an item or take none, the choices are stacked below the main path.
──COMMAND──┬──────────┬──
           ├─argument─┤
           └─KEYWORD──┘
An item may be repeated when an arrow returns to the left in front of it. The item is on
the main path when you must write it at least once.
         ┌────────────┐
──COMMAND────argument─┴──
The item is below the main path when you may omit it altogether.
         ┌──────────────┐
──COMMAND───┬──────────┬┴──
            └─argument─┘
Syntactic Variables
Words in slanted Gothic type beginning with a lower case letter are syntactic variables.
Substitute something for the syntactic variable; its name is intended as a mnemonic for the
type of information you must substitute.
Input Range
! An input range is specified as a column range, a word range, a field range, or a member of
! a structure.
A single column is specified by a signed number. Negative numbers are relative to the
end of the record; thus, -1 is the last column of the record. A column range is specified as
two signed numbers separated by a semicolon or as a range. When a semicolon is used,
the first number specifies the beginning column and the second number specifies the
ending column. When the beginning and end of a field are relative to the opposite ends of
the record, the input field is treated as a null field if the ending column is left of the
beginning column.
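For example (a sketch using spec to copy an input range to column 1), the range 2;-2 selects from column 2 through the next-to-last column:

pipe literal abcdef | spec 2;-2 1 | console
bcde
Ready;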
A word range is specified by the keyword WORDS, which can be abbreviated down to W.
Words are separated by one or more blanks. The default blank character is X'40'.
Specify the keyword WORDSEPARATOR to specify a different word separator character.
WORDSEPARATOR can be abbreviated down to WORDSEP; WS is a synonym.
A field range is specified by the keyword FIELDS, which can be abbreviated down to F.
Fields are separated by tabulate characters. Two adjacent tabulate characters enclose a null
field. (Note the difference from words.) The default horizontal tab character is X'05'.
Specify the keyword FIELDSEPARATOR to specify a different field separator character.
FIELDSEPARATOR can be abbreviated down to FIELDSEP; FS is a synonym.
| The default separator characters are in effect at the beginning of a stage’s operands; once a
| separator character is changed, the change remains in effect for subsequent input ranges.
! Members of Structures
! A structure contains data items or embedded structures, or both. In general, a member is
! designated by the keyword MEMBER followed by the fully qualified member name, for
! example:
! member s1.s2.s3.field
! Any of the qualifiers, except the top qualifier, must be specified with a subscript if the
! corresponding member is an array, for example:
! member s1.s2(4).s3.field
! You may specify a subscript for the final member name as well, if it is an array, for
! example:
! member s1.s2.s3.field(1)
! The entire array is selected if you omit the subscript for a member that is an array.
! The member name may be fully qualified, as shown above, or part of the structure qualifier
! may be specified by prefixing the MEMBER keyword with the keyword QUALIFY followed
! by the possibly qualified identifier for the structure, for example:
! qualify s1.s2 member s3.field
! QUALIFY and the qualifier may optionally be followed by a positive number, which
! specifies the column where the specified structure begins; the default being the beginning
! of the record.
! Once specified, the qualifier remains in effect until a new one is specified. Use a period or
! a hyphen instead of the qualifier name to disable the qualifier.
! You may specify two leading periods to indicate a fully qualified member name; any
! active qualifier is then ignored, for example:
! member ..s1.s2.s3.field
! You may specify a single leading period to indicate that a qualifier must be applied, for
! example:
! qualify s1.s2 member .s3.field
! In general, the qualifier applies to all input streams, though spec supports associating
! different qualifiers with each input or output stream.
! The highest level structure name is resolved first in the current pipeline set; then in the
! containing pipeline set, and so on. Finally thread scope structures are inspected.
! Contained structures are resolved by structure ADD when the containing structure is
! defined.
Substrings
You can select a substring of an input range, and you can do so iteratively.
Syntax
inputRange:
 ┌──────────────────────────────┐  ┌────────────────────────────────┐
├───┬──────────────────────────┬┴───┬──────────────────────────────┬┴──
    └─┬─FIELDSEParator─┬──xorc─┘    └─SUBSTRing──┤ rangePart ├──OF─┘
      └─WORDSEParator──┘
──┬─┤ rangePart ├───────┬──┤
! └─┤ memberReference ├─┘
rangePart:
├──┬────────────┬──(1)──┬─range───────────┬──┤
   ├─┤ wrdSep ├─┤       ├─snumber─────────┤
   └─┤ fldSep ├─┘       └─snumber;snumber─┘
memberReference:
! ├──┬─────────────────────────────────────┬──Member──┤ idList ├──┤
!    └─Qualify──┬─┤ idList ├──┬────────┬─┬─┘
!               │             └─number─┘ │
!               ├─-──────────────────────┤
!               └─.──────────────────────┘
idList:
!    ┌─.─────────────────────────┐
! ├────identifier──┬───────────┬─┴──(2)──┤
!                  └─subscript─┘
wrdSep:
├──┬─────────────────────┬──Words──┤
   └─WORDSEParator──xorc─┘
fldSep:
├──┬──────────────────────┬──Fields──┤
   └─FIELDSEParator──xorc─┘
Notes:
1 Blanks are optional after the keywords WORDS and FIELDS.
! 2 Blanks are not allowed in a qualified identifier or its subscript.
Examples
1-*
word 5
1;-1
-18;28
field 4
| substr fs . f3 of word 7
! member struct.member
! member struct.member(8)
! qualify struct 5 member member
The file name and file type are translated to upper case if a file does not exist with the
name as written in mixed case.
As an example, assume that these three files are stored on the minidisks or directories
shown:
MIXED CASES A
mIxEd cAsEs B
mixed cases C
Figure 368 on page 230 shows how the mode letter is resolved for particular operands.
File Mode *
An asterisk file mode (specified or defaulted) exposes the FSSTATE search order. It
searches the table of open files before accessed minidisks and directories. This can cause
unexpected results as shown in Figure 369.
In this experiment, the file SYSTEM LANGUAGE is created on mode A by the first PIPE
command. As expected, state resolves this file in the second command; this is the first
line of output. The EXECIO command reads (and discards) one line from the file by the
same name on the system disk. The third PIPE command now resolves this open file rather
than the file on mode A.
followed by a mode letter to start from the parent directory of the directory that is
accessed as the specified mode.
When the SFS interface is used, the SFS rules for sharing and updating files apply.
Though the device drivers are described as being SFS device drivers, a more correct
notation would have been CSL drivers, because they use callable services, such as DMSVALDT,
DMSOPEN, and DMSOPDBK; all of which support a mode letter as well as a directory path.
This is not advertised (other than here) because there are subtle differences in the way CMS
treats files that are open through DMSOPEN and similar, and the way CMS treats files that
are accessed through the FSxxxx macro interface (which is used by the minidisk device
drivers).
When a directory is accessed as a mode, it makes no difference to the SFS device drivers
(for example, <sfs), whether you use the mode or the directory as the third word, but it
does make a difference to the router device driver (which would be <).
A physical sequential data set is read by < and written by > or >>. A sequential data set
can be either
a physical sequential data set or
a member of a partitioned data set (cannot be appended with >>).
TSO Pipelines supports both generation data groups and member names in DSNAME
specifications. The generation is specified by a signed number in parentheses or by a zero
in parentheses. When both are specified, the generation is specified before a member
name:
gdg.po(+1)(member)
When specifying a DSNAME, the prefix is applied (if set) unless the
DSNAME is enclosed in single quotes. The trailing quote is optional.
A relative member of a generation data group is specified by
parentheses containing a signed number or zero.
The data set specification is translated to upper case.
pods:
├──┬─dsname───────────────┬──┤
   ├─dsname(generation)───┤
   ├─'dsname'─────────────┤
   ├─'dsname(generation)'─┤
   └─DDname=word──────────┘
Enclose a path that contains a blank in quotes. If the path also contains one of the quotes
in which you are enclosing the path, the inner quotes must be doubled up. This example
shows two ways to read a particular file on CMS:
pipe < "/the green man's directory/file one" | ...
pipe < '/the green man''s directory/file one' | ...
On z/OS, the use of single quotes in the second line means that the operand is to be
interpreted as a reference to a fully qualified data set name; albeit not to a valid one.
The pipeline specification is one pipeline unless an end character is declared and used to
separate pipelines. You may declare an end character without using it.
pipeSpec:
                                        ┌─endChar──────┐
├──┬───────────────────┬──┬──────────┬────┤ pipeline ├─┴──┤
   └─┤ globalOptions ├─┘  ├─stageSep─┤
                          └─endChar──┘
In its simplest form, a pipeline specification has stages (invocations of programs) separated
by stage separator characters (solid vertical bars).
Options
globalOptions:
       ┌──────────────────────────┐
├──(───┬─LISTCMD─────────────────┬┴──)──┤
       ├─LISTERR─────────────────┤
       ├─LISTRC──────────────────┤
       ├─MSGlevel──snumber───────┤
       ├─STOP────────────────────┤
¡      ├─STOPERROr───────────────┤
       ├─TRACE───────────────────┤
       ├─NO(1)─MSGlevel──snumber─┤
       ├─NAME──word──────────────┤
       ├─SEParator──xorc─────────┤
       ├─ENDchar──xorc───────────┤
       ├─ESCape──xorc────────────┤
¡      ├─PROPAGATeeof────────────┤
!      └─QUALIFY──qualifier──────┘
Note:
1 Blanks are optional between NO and MSGLEVEL.
Pipeline options that apply to the complete pipeline specification are often referred to as
global options in contrast to local options, which apply to a single stage. Pipeline options
that are specified at the beginning of a pipeline specification modify the way the pipeline
specification is parsed; they specify options that apply to all stages of the pipeline
specification. Write global options in parentheses immediately after the command verb.
: The following options are valid only as global options. They may be specified with all
: three command verbs.
NAME word The word is stored as a name to be used in messages. Names need
not be unique. The file name of the EXEC or REXX program is recom-
mended so that you can see the name of the program with a broken
pipe when CMS/TSO Pipelines issues error messages.
SEPARATOR xorc The character or hex value is the stage separator character for the
pipeline specification. The default stage separator is the solid vertical
bar. STAGESEP is a synonym for SEPARATOR.
ENDCHAR xorc The character or hex value is the end character for the pipeline
specification. There is no default end character.
ESCAPE xorc The character or hex value is the escape character for the pipeline
specification. There is no default escape character. The escape
character takes effect after the parenthesis closing the global options.
Local options are processed without regard for the escape
character. When an escape character is used in a pipeline specification, it
is deleted; the following character is not inspected for any special
meaning to the scanner.
¡ The following global option is valid with CALLPIPE only.
¡¡ PROPAGATE End-of-file propagates out through connectors, as it does for ADDPIPE.
¡ The streams are not restored after the subroutine ends; thus the
¡ subroutine should process the entire file.
When specified as global options, the following keywords apply to all stages of the
pipeline specification. Once enabled with a global option, a keyword is disabled at a
particular stage with the NO prefix.
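For instance (a sketch), LISTRC can be enabled globally and then disabled for one stage with a local option:

pipe (listrc) literal abc | (nolistrc) xlate upper | console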
LISTCMD Trace pipeline commands except BEGOUTPUT, ISSUEMSG, NOCOMMIT,
OUTPUT, PEEKTO, READTO, and REXX.
LISTERR Trace when a stage returns with a nonzero return code.
LISTRC Trace when a stage begins and ends.
MSGLEVEL snumber Specify additional bits for the message level. Bits are removed from
the message level when NO is prefixed to the keyword. The message
level is a fullword of switches for the pipeline dispatcher to enable
checks and determine if additional messages should be issued. When
specified as a global option, the new message level takes effect after
the right parenthesis is scanned to close the list of global options;
errors in the global options are reported as determined by the message
level in effect when the command is issued.
!! QUALIFY qualifier Specify the default qualifier for the pipeline specification. The default
!! qualifier is inherited by encoded pipeline specifications for CALLPIPE,
! but not by pipeline specifications issued by other means, such as the
! pipeline command CALLPIPE.
STOP Trace when a stage is started. On CMS, the virtual machine is put in
CP console function mode when the pipeline dispatcher calls the
syntax exit and when it calls the main entry point. Be sure to have
RUN OFF.
Chapter 21. Syntax of a Pipeline Specification Used with PIPE, runpipe, ADDPIPE, and CALLPIPE 235
Pipeline Specification
¡¡ STOPERROR Terminate the pipeline specification when any stage for which the
¡ option applies returns with a nonzero return code. When a stage with
¡ STOPERROR on terminates with a nonzero return code, all running
¡ stages in the pipeline specification receive return code -4094.
¡ You can specify NOSTOPERROR for those stages you expect to
¡ terminate with a nonzero return code, such as aggrc. You can also specify
¡ STOPERROR on selected stages and not use the global option (which
¡ just sets the default for all stages in the pipeline specification).
¡ Note that the STOPERROR option specifies which stages cause the
¡ pipeline specification to be terminated, not which stages are terminated as
¡ a result.
TRACE Trace calls to the pipeline dispatcher and trace how control passes
between stages. Because this option is likely to generate large
amounts of data, it is recommended that a pipeline being traced be
issued with the runpipe built-in program.
See “The Message Level” on page 835 for a complete description of the bits that make up
the message level.
Pipeline
pipeline:
                                ┌─stageSep────┐
├──┬─────────────────────────┬───┬─┤ stage ├─┬┴──
   └─┤ connector ├──stageSep─┘   └─┤ label ├─┘
──┬─────────────────────────┬──┤
  └─stageSep──┤ connector ├─┘
shortThrough:
├──┤ connector ├──stageSep──┤ connector ├──┤
A pipeline contains stages and label references separated by stage separator characters.
Connectors are optional at one or both sides of a pipeline issued with ADDPIPE or
CALLPIPE. Connectors are delimited with stage separators.
A short-through connection is a pipeline with two connectors and no stages. Other
pipeline configurations must have at least one stage or a label reference.
A pipeline is scanned for stage separator characters. There are several interpretations for
the string between stage separators (and from the beginning or end to the nearest stage
separator):
It is a connector if it is first or last in the pipeline, begins with an asterisk, and ends
with a colon.
It is a label reference if it is one word that ends with a colon and does not start with
an asterisk.
It is the specification of a stage when it is not one of the two above.
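For example (a sketch), this specification declares an end character and uses a label; the first occurrence of a: declares the label on fanin, and the second is a label reference that connects the output of the second pipeline to the next input stream of fanin:

pipe (end ?) literal one | a: fanin | console ? literal two | a: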
Stage
stage:
├──┬───────────┬──┬──────────────────┬──word──┬────────┬──┤
   └─┤ label ├─┘  └─┤ localOptions ├─┘        └─string─┘
localOptions:
         ┌──────────────────────────────────┐
! ├──(───┬─QUALIFY──qualifier──────────────┬┴──)──┤
!        ├─NOQUALIFY(1)────────────────────┤
         └─┬──────┬──┬─LISTCMD───────────┬─┘
           └─NO(1)┘  ├─LISTERR───────────┤
                     ├─LISTRC────────────┤
                     ├─MSGlevel──snumber─┤
                     ├─STOP──────────────┤
¡                    ├─STOPERROr─────────┤
                     └─TRACE─────────────┘
Note:
1 Blanks are optional after NO.
A label is declared for a stage when the first word has a colon before the first blank or
parenthesis. A label declaration beginning with a period defines the stream identifier for
the primary streams; you cannot use a label reference to refer to such a label placeholder
later in the pipeline. You can request a program that has a colon in its name in two ways.
Define an escape character with the option ESCAPE and use this character in front of the
colon, or write a dummy label, for instance, |.:am:pm|. The first colon marks the end of
the label placeholder; the period separates a null word from a null stream identifier.
Write local options in parentheses after the label, if one is present. Refer to “Options” on
! page 234 for a description of the keywords you can specify that are also global options.
! In addition, option NOQUALIFY may be specified to disable the default qualifier for the
! stage. The options apply to the stage being defined.
The first word (after the label and local options, if any are present) is the name of the
program to call. The string beginning one blank after the program name is passed to the
program as the argument string. An argument string is optional; it extends to the next
stage separator; it can have leading or trailing blanks, or both.
To find the entry point for the program to run, the scanner searches several entry point
tables (see Appendix E, “Generating and Using Filter Packages with CMS/TSO Pipelines”
on page 894 and CMS Pipelines Installation and Maintenance Reference, SL26-0019):
The entry point table in the PIPPTFF filter package, if it is available (that is, the filter
package has been attached to the pipeline module).
The entry point table in the main pipeline module.
Entry point tables in filter packages that have attached their entry point tables.
The entry point table for programs that your installation has added to the main
pipeline module.
The scanner looks for a REXX program with the file name specified if the program is not
resolved from any of the entry point tables. On CMS, it looks for the file type REXX; the
rexx program is called to run the program if one exists. On z/OS, it searches the
partitioned data set allocated to the DDNAME FPLREXX, if any.
Connectors
connector:
├──*─┬──────────────────────────────────┬─:──(1)──┤
     └─.─┬────────┬──┬───────────────┬──┘
         ├─INput──┤  └─.─┬────────┬──┘
         └─OUTput─┘      ├─*──────┤
                         └─stream─┘
Note:
1 There are no blanks in connectors.
You can put connectors at the beginning or the end of a pipeline (or both) when the
command is issued with ADDPIPE or CALLPIPE. Connectors refer to streams in the stage
that issues the pipeline command; they specify where the streams of the stage are
connected to stages in the new pipeline specification. PIPE and runpipe do not accept
connectors because they start a new set of pipelines; there is nothing to connect to.
Syntactically, the connector is a word that begins with an asterisk ('*') and ends with a
colon (':'). Two optional components, each introduced by a period ('.'), define the type of
connector. The first component is a keyword (INPUT or OUTPUT) that specifies the side of the
stage; the default is INPUT at the beginning of a pipeline and OUTPUT at the end. The
second component specifies the stream; it can be a number, a stream identifier, or an
asterisk. The default is the currently selected stream, which can also be written
explicitly as an asterisk.
There must be a stage separator character between the connector and the rest of the pipeline.
There are two types of connectors, redefine and prefix. They can be applied to the input
and output side of a pipeline, giving four combinations.
In a redefine connector, the second component names the side the connector is on.
Though valid in an ADDPIPE pipeline command, a redefine connector is usually used in
CALLPIPE pipeline commands.
*.input: or *: at the beginning of a pipeline specifies that the currently selected input
stream is to be connected to the stage at the right of the stage separator ending the
connector. Likewise, *.output: or *: at the end of a pipeline specifies that the currently
selected output stream is to be connected to the stage at the left of the stage separator
before the connector.
In a prefix connector, the new pipeline is connected to the stage issuing the ADDPIPE
pipeline command. The current connection is saved on a stack, from which it is restored
with the SEVER pipeline command. *.input: at the end of a pipeline specifies that the output
from the new pipeline is to be connected to the currently selected input stream. Likewise,
*.output: at the beginning of a pipeline specifies that the currently selected output stream
is to be connected to the new pipeline.
A short-through connection has no stages between the connectors. The first one must refer
to the input side; the second one must refer to the output side.
A pipeline is inserted in front of (or after) the currently selected input (output) stream
when it has input (output) connectors at both ends.
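For example, a REXX stage can route its own data through a subroutine pipeline by using
default connectors at both ends (a sketch; the xlate stage is illustrative):

   /* Pass the caller's data through a subroutine pipeline */
   'callpipe *: | xlate upper | *:'

Here *: at the beginning refers to the stage's currently selected input stream, and *: at
the end refers to its currently selected output stream.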
Labels
label:
├──┬─word:──────────┬──┤
├─word.:─────────┤
└─word.streamID:─┘
label place-holder:
├──┬─.streamID:─┬──┤
└─.:─────────┘
Where a label is declared, write it followed by local options, the name of a program to
run, and its argument string. This defines the primary streams for the stage. The
secondary and subsequent streams for a stage with a label are defined when you reference
the label later in the pipeline specification. In a label reference, write the label without
options, program name, or arguments; they have already been specified.
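For example (a sketch; the file names are illustrative), a label declared on a selection
stage can be referenced in a second pipeline to use its secondary streams:

   /* Selected records to one file; the remainder to another */
   'PIPE (end ?) < input file | l: locate /error/ | > errors file a',
      '? l: | > other file a'

The label reference l: in the second pipeline connects the secondary output stream of
locate to the second > stage.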
A label declaration or reference may specify a stream identifier. Write a period followed
by up to four characters between the label name and the ending colon. Case is respected
in stream identifiers. The scope of a stream identifier is the particular stage that the label
refers to. The period ending the label is optional when the stream identifier is not
specified.
Example
Figure 371 shows a pipeline specification with three pipelines. It has two stages with
labels, search and join. The primary output stream of drop is connected to the primary
input stream of lookup. The primary output stream of lookup is connected to the primary
input stream of faninany.
disk starts a new pipeline because it has an end character in front of it. Its primary output
stream is connected to the secondary input stream of lookup. The secondary output stream
of lookup goes through the primary stream of the second > into the secondary input stream
of faninany. The tertiary output stream from lookup goes to the primary input stream of
the third > stage. lookup has a tertiary input stream that is not connected.
When a pipeline specification is longer than 500 characters, circumvent the REXX
limitation by assigning parts of the pipeline to variables. Issue the PIPE command in an
expression that references these variables:
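For example (a sketch; the stages shown are illustrative):

   /* Build a long pipeline specification from variables */
   first = '< input file | locate /abc/'
   rest = '| xlate upper | > output file a'
   'PIPE' first rest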
Pipeline Scanner
The pipeline specification parser first scans global options, if any are present. It stops as
soon as it finds an error in the global options; the rest of the pipeline specification is not
processed. When the global options have been scanned without error, the scanner
performs three passes over the rest of the pipeline specification. Each pass is performed to
the end, reporting all errors found; if a pass finds errors, the scanner terminates at the end
of that pass.
1. Determine the overall structure of the pipeline specification. The scanner counts
connectors, stages, and pipelines. Errors detected include null stages and null pipe-
lines. If no errors were found on the first pass, the scanner then allocates storage for a
control block to represent the pipeline specification and all its stages, streams, and
connectors.
2. Resolve labels and entry points. In this pass, the control blocks are filled with
information from the argument string. Errors detected include unresolved entry points,
undefined labels, and labels that are defined more than once.
3. Check the placement and argument syntax for entry points that are resolved to a
program descriptor (the expansion of the FPEP macro). This applies to all built-in
programs. If it is requested in the program descriptor, the scanner calls the stage’s
syntax exit to process the argument string. The entire pipeline specification is
suppressed if the scanner detects an error in the syntax of any one stage or if any
syntax exit returns a return code that is not zero.
When the scanner has completed the third pass without finding errors, it hands the pipeline
specification over to the pipeline dispatcher to perform the work to be done.
Pipeline Dispatcher
The main function of the dispatcher is to run a stage and regain control when the stage
requests a pipeline service or terminates.
Each stage runs independently of other stages, because a stage calls the dispatcher to read
or write, rather than calling the neighbour stages to obtain or deliver records. This divi-
sion of labour has many advantages, the most obvious one being that all stages use a
standard interface to the dispatcher. A more subtle advantage is that each stage’s call
stack is usually quite shallow: the stage often calls a few internal subroutines and the
dispatcher, but it is not entered recursively.
States of a Stage
A stage goes through these states during its lifetime (see Figure 374 on page 244 for a
diagram):
1. Committed to start. When the scanner hands the pipeline specification over to the
dispatcher, all stages are waiting to start. The dispatcher commits to the lowest level
where a stage is committed to start and makes the stages on this commit level ready to
run. (See “Commit Level” on page 244.)
2. Ready, not started. The first time the stage is run, it is started. The stage’s environ-
ment is set up and the main entry point is called.
3. Running. Only one stage runs at a time. Once the stage is given control, it runs until
it gives up control voluntarily by issuing a service request to the dispatcher to transport
data, change the pipeline topology, or wait for an external event. If the stage cannot
continue immediately (or the dispatcher decides that it should not), the stage is made not
dispatchable until the condition that blocks its progress has been cleared.
4. Waiting for pipeline I/O. A stage waits for I/O, for example, when it performs a read
operation and there is no data available to read on the currently selected input stream.
5. Waiting for a subroutine pipeline to complete. A stage that has issued a CALLPIPE
pipeline command waits until all stages in the subroutine pipeline have ended and all
connections are restored.
6. Waiting to commit. A stage that issues the COMMIT pipeline command to commit to a
level that is higher than the current commit level must wait. The stage is made
dispatchable when the pipeline specification has committed to this level.
7. Waiting for an external event. The stage issues the PIPWECB macro. For example,
delay waits for a timer interrupt. This interface is not available to REXX filters; no
pipeline command can cause a stage to wait for an external event.
8. Ready. The stage can run, but it is not currently running. A ready stage is said to be
on the run list.
9. Terminated. That is, the stage has returned on the original call from the dispatcher.
This means that the stage has completed the task it was set to perform, that the stage
has determined that it can perform no more useful work, or that the stage has failed.
In the latter case, the stage sets a nonzero return code.
The dispatcher will at some future time resume a stage that is waiting (a stage in one of
the states 4 through 7); that is, it will return control to the stage at the next instruction
after the call to the dispatcher service. The stage sees a dispatcher service as
synchronous; no activity takes place in the stage during the call to the dispatcher service.
Chapter 22. Scanning a Pipeline Specification and Running Pipeline Programs 243
Commit Level
The commit level provides a general mechanism to allow unrelated programs to coordinate
their progress. One use of the commit level mechanism is to allow all stages to validate
their argument strings before any stage takes an action that might potentially destroy data,
such as erasing a file or writing on a tape. Thus, the pipeline is abandoned if a built-in
program detects an error in its arguments or if a REXX program returns with a nonzero
return code before reading or writing.
The commit level is a number between -2147483647 and +2147483647 inclusive. Each
stage is at a particular commit level at any time. It increases its commit level with the
pipeline command COMMIT. It cannot decrease its commit level. The pipeline
specification parser performs an implicit commit when a stage is defined. The program
descriptor for a built-in program includes the commit level at which the program begins;
selection stages begin at commit level -2; REXX stages begin at commit level -1; by
default, other stages begin at commit level 0.
The pipeline dispatcher initiates the stage with the lowest commit level first. When more
than one stage begins at a particular commit level, it is unspecified which one runs first.
The stages at the lowest commit level run until they complete (exit or return) or issue a
COMMIT pipeline command.
An aggregate return code is associated with a pipeline specification. Initially, the aggre-
gate return code is zero. The aggregate return code for the pipeline specification is
updated with the return code as each stage returns. If either number is negative, the aggre-
gate return code is the minimum of the two numbers; otherwise, it is the maximum.
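In REXX terms (an illustration of the rule, not the internal implementation), the aggregate
return code agg is updated with a stage's return code rc as follows; thus 4 and 8 combine
to 8, while -7 and 8 combine to -7:

   /* Combine a stage's return code into the aggregate */
   if agg < 0 | rc < 0
      then agg = min(agg, rc)
      else agg = max(agg, rc)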
When all stages at the lowest commit level have ended or committed to a higher level, the
stages at the next commit level are examined. Stages that would begin at the new commit
level are abandoned if the aggregate return code is not zero. For stages that are waiting to
commit to the new commit level, the return code for the COMMIT pipeline command is set
to the aggregate return code; those stages are then made ready to run. The aggregate
return code is sampled at the time the pipeline specification is raised to the new commit
level. All stages committing to a particular level see the same return code, even if one of
them subsequently returns with a nonzero return code before another stage has begun to
run at the new level. A stage can inspect the COMMIT return code and perform whatever
action is required; built-in programs deallocate resources they have allocated and return
with return code zero when the COMMIT return code is not zero, thus quitting when they
determine that another stage has failed.
By convention, all built-in programs process data on commit level 0. Stages must be at
the same commit level for data to pass between them, except when data flow on a
connection that has been set up with ADDPIPE. The pipeline stalls if a stage at one commit
level reads or writes a record after the stage at the other side of the connection has issued
a COMMIT pipeline command to commit to a higher level.
The scope of the commit level is a pipeline specification. Pipelines added with ADDPIPE
commit without coordinating their commit level with the pipeline that added them. Pipe-
line specifications that are issued with CALLPIPE and contain no connectors (an uncon-
nected pipeline specification) also commit without coordination with the caller.
When a pipeline specification that is issued with CALLPIPE (and is connected to its caller)
increases its commit level, the pipeline dispatcher checks that the commit level for the
stage that issued the CALLPIPE is at or above the new level requested. When the subroutine
would go to a commit level that is higher than the caller’s current commit level, the pipe-
line dispatcher performs an implicit commit for the stage that issued the CALLPIPE. The
subroutine pipeline proceeds only after the caller’s commit has completed (that is, only
after the commit level of the calling pipeline has been raised to the new level). If the
caller is itself in a subroutine pipeline, the new commit level propagates upwards.
A REXX pipeline stage begins at commit level -1. The commit level for a REXX stage is
automatically raised to level 0 when it first issues an OUTPUT, PEEKTO, READTO, or SELECT
ANYINPUT pipeline command. Because the pipeline dispatcher raises the commit level
automatically, most REXX programs need not be concerned with commit levels. In the
usual case, a REXX program validates its arguments before it begins reading and writing
data. If it finds an error in its arguments and exits with an error return code before it has
used any of the four commands that cause an automatic commit, the pipeline specification
will in effect terminate at commit level -1, before data have begun flowing and before
other stages have taken any irreversible actions (assuming they adhere to the convention of
doing such on commit level 0). On the other hand, if a REXX program finds no error in its
arguments and begins to process data by using one of these four commands, the automatic
commit is done, suspending that stage until all other stages are ready for data to flow.
In some cases the automatic setting of the commit level for REXX programs may not be
suitable. If your REXX program erases files or performs some other irreversible function
before it reads or writes, it should first use the COMMIT pipeline command to do an explicit
commit to level 0 to wait until all other stages have validated their arguments. If the
return code on COMMIT is not zero, the program should undo any changes it may have
made and exit with return code 0.
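Such a stage might be sketched like this (the ERASE command and its arguments are
illustrative):

   /* Commit explicitly before an irreversible action */
   parse arg fn ft fm
   'commit 0'            /* wait until all stages have validated arguments */
   if rc<>0 then exit 0  /* another stage failed; take no action */
   address command 'ERASE' fn ft fm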
If your REXX program needs to use any of the commands that cause an automatic commit
before it is ready to commit to level 0, it must issue the NOCOMMIT pipeline command to
disable the automatic commit and then later issue an explicit COMMIT. To perform read or
write operations on commit level -1 (to read a parameter file, for example), use ADDPIPE to
connect the input or output stream (or both) to your REXX stage. (You cannot use
CALLPIPE for this, because it would force a commit to level 0 before data could flow.)
Having defined the new streams with ADDPIPE, use READTO and OUTPUT to read and write.
When you are finished, issue SEVER to restore the original connection. Then issue COMMIT
to perform an explicit commit. Check the return code on the COMMIT before reading or
writing the original stream.
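The sequence might be sketched like this in a REXX stage (the parameter file name is
illustrative):

   /* Read a parameter file on commit level -1 */
   'nocommit'                /* disable the automatic commit */
   'addpipe < parms file a | *.input:'
   'readto line'             /* read the parameters */
   'sever input'             /* restore the original connection */
   'commit 0'
   if rc<>0 then exit 0      /* another stage found an error */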
The pipeline dispatcher runs stages if all syntax checks complete without reporting any
errors. The order of dispatching at any commit level is unspecified. The pipeline
dispatcher does not preempt stages; once a stage is running, the pipeline dispatcher regains
control in one of two ways:
The program calls a pipeline dispatcher entry point, for instance to read a record.
The program completes and returns from the initial call.
Reading, Writing
CMS/TSO Pipelines transports records without buffering from an output stream of one
stage to an input stream of another stage.
To write a record, a program (the producer stage) calls the pipeline dispatcher with the
address and length of a buffer that contains the record to be written; the equivalent pipe-
line command is OUTPUT. The stage is then blocked (cannot run) until the neighbour to
the right (the consumer stage) performs an action that releases the producer:
It reads (consumes) the record by calling the pipeline dispatcher. This sets return code
0 on the producer’s write and makes the producer able to run.
It severs the input stream that is connected to the producer’s output stream. The
dispatcher, in turn, sets return code 12 on the producer’s write to indicate end-of-file
and makes the producer able to run.
It returns on the initial invocation from the dispatcher, because processing is complete
or abandoned. The dispatcher then severs all the terminating stage’s streams and sets
end-of-file on all reads and writes that are waiting for the terminating stage to produce
or consume a record.
A stage waits for a record to become available if there is none at the time it reads. There
are two ways to read records. The simplest is to call the pipeline dispatcher, passing the
address and length of a buffer where the next record is placed; this is done by READTO in a
REXX filter. If the neighbour on the left is blocked waiting for a record to be read, the
record is copied and both stages are made ready to run. This type of read is called a
consuming read, because the read has consumed the record. It is also called a move mode
read, because the record is moved into the reading stage’s buffer.
Move mode reads are not well suited to programs that must process records of any size.
| Instead, such a stage first performs a locate mode read to determine the length of the
| record; the address and length of the producer's buffer are returned, and the producer
remains blocked waiting for the record to be consumed. (Move mode and locate mode are terms
from OS data management).
| The pipeline command PEEKTO in a REXX filter performs a locate mode read. The program
| then issues the pipeline command READTO when it has processed the record; this releases
| the neighbour on the left. READTO is normally issued without specifying a variable name,
| which corresponds to a move mode read with buffer length zero. Unlike OS data
management, CMS/TSO Pipelines returns the same record on multiple locate mode read calls
with no intervening consuming read to release the record.
By using a locate mode read, a stage can peek at the first record of a file and choose a
suitable subroutine pipeline to process the file, for example, to unpack the file if it is
packed. The subroutine pipeline also sees the record that determined the strategy, because
the first record is not consumed by the peeking stage and is thus available to the subrou-
tine.
As you have seen several times, the order of dispatching (the sequence in which the
dispatcher runs stages) is unspecified. To make the order predictable, you must ensure that
the dispatcher has no choice: if it has only one stage it can run, the dispatcher must run
this stage, however unpredictable it tries to be.
The term record delay specifies the degree of control that a program can exert over the
pipeline dispatcher.
A program that does not delay the record processes the file in this way:
1. It obtains an input record with a locate mode read. The PEEKTO pipeline command is
used in a REXX program for this purpose. This blocks the stage that produced the
record.
2. It processes the record. For example, a device driver copies the record to the host
interface or into a buffer; a filter, perhaps, selects a substring of the record or it copies
the record into a buffer to be modified; and a selection stage determines which output
stream to use.
3. It writes one output record. The record can be in a buffer that the stage has obtained,
or it can be in the buffer provided by the producer stage (if the contents of the record
have not been modified).
4. It consumes the input record. The READTO pipeline command is used in a REXX
program for this purpose. The producer stage can resume and run in parallel with the
consuming stage, but not for long; as soon as the consuming stage performs a locate
mode read, it will be blocked until the producer writes the next record.
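The four steps might be sketched as a REXX filter that does not delay the record (the
translation to upper case is an illustrative transformation):

   /* A filter that does not delay the record */
   signal on error
   do forever
      'peekto record'             /* 1. locate mode read */
      record = translate(record)  /* 2. process the record */
      'output' record             /* 3. write one output record */
      'readto'                    /* 4. consume the input record */
   end
   error: exit rc*(rc<>12)       /* return code 12 is normal end-of-file */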
Because the producer stage is blocked while the record is written in step 3, a program that
processes a record in this way does not allow the producer to produce one more record
until the consumer’s output record has been consumed. You can prove by induction that a
cascade of stages that do not delay the record behaves in the same way as a single stage
that does not delay the record. You can also prove that, for each input record, a decoding
network (see “Decoding Trees” on page 82) composed entirely of stages that do not delay
the record produces a record on one stream, and on one stream only, when the secondary
output streams are connected in all selection stages.
When records take different paths from a common stage (for example fanout or a selection
stage) through a multistream network consisting entirely of stages that do not delay the
record, the records will arrive at the end of this network in the same order as they entered.
This is clearly a desirable property.
A program that consumes the input record before producing the output record (steps 3 and 4
on page 247 performed in reverse order) has the potential to delay a
record, because it allows the dispatcher to resume the producer stage. Whether the record
is, in fact, delayed will depend on the dispatching strategy, which is unspecified. When a
producer stage produces records on several streams that eventually are connected to the
inputs of a stage that synchronises its input streams (that is, the program performs a locate
mode read on all its input streams before processing the records), a record delay is
required on all but the highest-numbered stream to avoid a stall. The dispatcher will even-
tually run the producer stage to produce one more record before the consumer’s record is
consumed (by its consumer, in turn).
A program that reads a record into a buffer, consumes it, and then performs a locate mode
read before it produces an output record unconditionally delays one record. But when such
a program is used on a subset of a file (because other records take a different path that
shunts the delaying stage), a delay of one record in the program will, in general, lead to an
indeterminable delay in the file as a whole.
The strict definition above of a stage that does not delay the record stipulates that a
program must produce exactly one output record for each input record.
Though the strict definition is required when one reasons about multistream networks
where the contents of a record are written to more than one output stream (chop or fanout)
and gathered with a program that synchronises its input streams, a slightly relaxed
behaviour may be sufficient to reason about topologies where records are gathered with
faninany. In step 3 on page 247, it may be acceptable that no record is produced (thus,
the stage will delete or discard an input record); or it may be acceptable that several output
records are produced as long as these records are produced before the corresponding input
record is consumed.
It is noted in the descriptions of the built-in programs which ones strictly do not delay the
record and which programs produce all output derived from an input record before the
record is consumed.
The description of a built-in program can also specify that the program has the potential to
delay one record.
A few stages, however, wait for external events; CMS/TSO Pipelines is able to run other
stages while these programs wait for external events: console ASYNCHRONOUSLY (but not
the other two ways to read from the console), delay, fullscr (on CMS and under certain
conditions), immcmd, starmsg, tcpclient, tcpdata, tcplisten, and udp.
Return Codes
When one or more error messages are issued by the pipeline specification parser, the return
code from PIPE is the “worst” of the ones found. If any return code is negative, the worst
return code is the most negative return code received; otherwise the return code is the
maximum of the return codes received.
When a stage terminates because of an error in arguments or data, the return code is, in
general, equal to the number of the error message issued.
Return code -7 from the environment processing a pipeline command means that the argu-
ment is not recognised as a pipeline command. Refer to “Return Codes -3 and -7” on
page 116.
Return code -9 on the PIPE command means that storage was not available for the work
area and save area. No explanatory message is issued because of the lack of storage.
Return code -4095 is reflected to the stages by the pipeline dispatcher when the pipelines
are stalled. Messages list the status of each stage.
Built-in Programs
The format to which input records must adhere, if input data are structured in some
way.
The format of output records produced, if they are structured.
A summary of streams used, if the program references more than the primary input
and output streams.
The record delay, if applicable. It is specified under which conditions the program
does or does not delay the record. When no such clause is present, it is unspecified
whether the program delays the record; the program may delay some records but not
others.
The commit level at which the program starts (if it starts before level 0). This part
also describes the actions performed before the program commits to level 0.
The conditions under which the program will terminate prematurely; that is, without
processing all available input records or without producing all possible output records.
A program terminates normally when all its input streams are at end-of-file, or (in the
case of a device driver that is first in a pipeline) the host interface signals end-of-file
or a similar condition. When a program is described as not terminating normally, it
means that the program accesses a host interface that does not signal end-of-file; if not
terminated prematurely the program will run forever.
A reference to the converse operation that reverses the effect of the program, where
one exists.
References to programs that perform a related function, if any are provided.
Examples of usage. Examples that show a PIPE command followed by output lines
marked with an arrowhead were run as this book was formatted; you may have some
confidence that they run with the CMS/TSO Pipelines level described by this book
(1.1.12/06). Examples with a leading comment line are fragments of REXX programs.
Other examples show a few stages of a pipeline; they are usually a single line which
begins and ends with an ellipsis (...) to indicate the remaining part(s) of the pipeline.
Notes, where applicable.
Return codes issued where they do not represent CMS/TSO Pipelines messages. Most
of these are return codes from CMS.
Configuration variables that apply to the built-in program. The main description of the
built-in program will assume the PIPE style; any differences in other styles are noted in
this section. See also Chapter 28, “Configuring CMS/TSO Pipelines” on page 839.
The built-in programs are fussy about their arguments. Quietly ignoring excess parameters
can be disastrous. An unexpected parameter could be the beginning of what should have
been a following stage, where the stage separator is missing.
Overview by Category
The following tables list the built-in programs by task or function. New built-in programs
are not marked with a change bar in this section; refer to the index for an overview of new
programs.
append Put Output from a Device Driver after Data on the Primary Input Stream.
casei Run Selection Stage in Case Insensitive Manner.
eofback Run an Output Device Driver and Propagate End-of-file Backwards.
frtarget Select Records from the First One Selected by Argument Stage.
not Run Stage with Output Streams Inverted.
pipcmd Issue Pipeline Commands.
pipestop Terminate Stages Waiting for an External Event.
preface Put Output from a Device Driver before Data on the Primary Input Stream.
runpipe Issue Pipelines, Intercepting Messages.
totarget Select Records to the First One Selected by Argument Stage.
zone Run Selection Stage on Subset of Input Record.
filterpack Manage Filter Packages.
< — Read a File
< is the generic name for a device driver that reads files into the pipeline.
Depending on the operating system and the actual syntax of the parameters, < selects the
appropriate device driver to perform the actual I/O to the file.
──<──string──
Operation: The actual device driver to be used is selected based on the argument string:
Notes:
1. Use <sfs to access a file using a directory name that would be scanned by < as a
mode letter or a mode letter followed by a digit.
CMS
──<MDSK──fn──ft──┬────┬──
└─fm─┘
Syntax Description: Specify as blank-delimited words the file name and the file type of
the file to be read. A file mode or an asterisk is optional; the default is to search all
modes. STATE is used to locate the file when three words are specified; EXECSTAT is used
when two words are specified. If the file does not exist with the file name and file type as
entered, the file name and the file type are translated to upper case and the search is
retried.
Operation: If EXECSTAT is used to locate the file and the return code is 0 (indicating that
the file is storage resident), the use count and recursion count are incremented before the
file is accessed through the file block provided by EXECSTAT.
Reading begins at the first record in the file and continues to end-of-file.
When a file is read from disk, the file is closed before <mdsk terminates. When a storage-
resident file is read, the recursion count is decremented before <mdsk terminates.
Premature Termination: <mdsk terminates when it discovers that its output stream is not
connected.
Examples:
/* Read a file and count the number of words */
'pipe < input file | count words | console'
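A further sketch, reading a range of records near the beginning of a large file with drop
and take (the file name is illustrative):

   /* Read records 100 through 150 of a large file */
   'pipe < big file a | drop 99 | take 51 | console'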
Notes:
1. Use diskslow if <mdsk fails to operate.
2. Use diskslow to begin to read from a particular record. Use diskrandom to read a
particular range of records or to read records that are not sequential. (To read many
records from near the beginning of a large file it may, however, be more efficient to
use drop and take with <mdsk to select the range of records desired.)
3. Use disk or diskslow to treat a file that does not exist as one with no records, rather
than issue a message about a missing file.
4. Use an asterisk as the third word of the argument string to bypass the search for
EXECLOADed files.
5. <mdsk may obtain several records from CMS at a time. It is unspecified how many
records <mdsk buffers and under which conditions it does so.
6. EXECSTAT resolves a file as follows:
a. It searches the directory of files that are loaded with EXECLOAD or are in a logical
segment (and are identified by an EXEC record) that has been attached with
SEGMENT LOAD.
b. It searches minidisks and SFS directories, if any, ahead of the installation segment.
c. It searches the installation segment, if attached to the virtual machine.
d. It searches remaining accessed mode letters, if any.
7. The fast interface to the file system is bypassed if the bit X'10' is on in offset X'3D'
of the FST that is exposed by the FSSTATE macro. Products that compress files on the
fly or in other ways intercept the file system macros should turn on this bit to ensure
that CMS/TSO Pipelines uses documented interfaces only.
Return Codes: In addition to the return codes associated with CMS/TSO Pipelines error
messages, <mdsk is transparent to return codes from CMS. Refer to the return codes for the
FSREAD macro in VM/ESA CMS Application Development Reference for Assembler,
SC24-5453, for a complete list of return codes. You are most likely to encounter these:
20 The file name or file type contains an invalid character.
24 The file mode is not valid.
25 Insufficient storage for CMS to allocate buffers.
z/OS
──<MVS──┤ psds ├──
psds:
├──┬─dsname───────────────────────┬──┤
   ├─dsname(generation)───────────┤
   ├─dsname(member)───────────────┤
   ├─dsname(generation)(member)───┤
   ├─'dsname'─────────────────────┤
   ├─'dsname(generation)'─────────┤
   ├─'dsname(member)'─────────────┤
   ├─'dsname(generation)(member)'─┤
   ├─DDname=ddname────────────────┤
   ├─DDname=ddname(member)────────┤
   └─member──ddname───────────────┘
Syntax Description: The data set may be specified by DSNAME, by DDNAME, or by two
words specifying member name and DDNAME, respectively.
Enclose a fully qualified data set name in single quotes; the trailing quote is optional.
Specify the DSNAME without quotes to have the prefix, if any, applied. Append paren-
theses containing a signed number to specify a relative generation of a data set that is a
member of a generation data group.
To read from an already allocated data set, specify the keyword DDNAME= followed by the
DDNAME already allocated. The minimum abbreviation is DD=.
A member is specified in parentheses after the DSNAME or DDNAME. The closing right
parenthesis is optional.
The third form (two blank-delimited words) can be used to read a member of an already
allocated data set. The first word specifies the member name. The second word specifies
the DDNAME; a leading DDNAME= keyword is optional for the second word.
The DSNAME, DDNAME, and member names are translated to upper case.
Commit Level: <mvs starts on commit level -2000000000. It then opens the DCB and
commits to level 0. The DCB is closed without reading if the commit return code is
nonzero.
Premature Termination: <mvs terminates when it discovers that its output stream is not
connected.
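Examples: For instance (assuming the TSO prefix is DPJOHN), either of these commands
counts the lines in a data set:
pipe <mvs 'dpjohn.tso.filters' | count lines | console
pipe <mvs tso.filters | count lines | console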
The second command above could read from the data set DPJOHN.TSO.FILTERS.
──<OE──┬─word─────────┬──
       └─quotedString─┘
Operation: <oe uses a subroutine pipeline that contains hfs to read the file and deblock
TEXTFILE to perform the deblocking.
Commit Level: <oe starts on commit level -2000000000. It issues the subroutine pipe-
line, which will commit to 0.
Premature Termination: <oe terminates when it discovers that its output stream is not
connected.
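Examples: For instance, to count the lines in an OpenExtensions file (the path is an
assumption for illustration):
pipe <oe /etc/profile | count lines | console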
CMS
                                ┌───────────────────────────┐
──<SFS──fn──ft──dirid──┬───────┬───┬───────────────────────┬┴──
                       └─digit─┘   ├─ASIS──────────────────┤
                                   ├─ESM──delimitedString──┤
                                   ├─OLDDATERef────────────┤
                                   └─WORKUNIT──┬─number──┬─┘
                                               ├─DEFAULT─┤
                                               └─PRIVATE─┘
Syntax Description:
dirid Specify the mode, the directory, or a NAMEDEF for the directory for the
file.
digit Specify the file mode number for the file.
ASIS Use the file name and file type exactly as specified. The default is to
translate the file name and file type to upper case when the file does not
exist as specified.
ESM Provide a character string for an external security manager. The char-
acter string can be up to eighty characters and it may contain blanks.
OLDDATEREF Pass the keyword to the open routine. CMS will not update the date of
last reference for the file.
WORKUNIT Specify the work unit to be used. You can specify the number of a
work unit you have allocated by the callable service; you can specify
DEFAULT, which uses the default unit of work; or you can specify
PRIVATE, which gets and returns a work unit for the stage’s exclusive
use. The default is DEFAULT.
Operation: When the directory is omitted, <sfs looks for the file on all accessed modes.
Reading begins at the first record in the file and continues to end-of-file. The file is closed
before <sfs terminates.
Commit Level: <sfs starts on commit level -2000000000. It creates a private unit of work
if WORKUNIT PRIVATE is specified, opens the file, allocates a buffer if required, and then
commits to level 0.
Premature Termination: <sfs terminates when it discovers that its output stream is not
connected.
See Also: disk, diskback, diskrandom, diskslow, filetoken, members, and pdsdirect.
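Examples: A sketch of a command consistent with the explanation that follows:
pipe < profile exec . | console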
This reads your profile from your root directory in the current file pool. < selects <sfs to
process the file, because the third word is present, but does not specify a mode.
Notes:
1. <sfs uses the DMSVALDT callable service to resolve the actual file mode or directory
name.
<sfsslow reads one record at a time from the host interface; it does not attempt to block
reads.
CMS
──<SFSSLOW──fn──ft──dirid──┬───────┬──
                           └─digit─┘
┌───────────────────────────┐
───┬───────────────────────┬┴──
   ├─ASIS──────────────────┤
   ├─ESM──delimitedString──┤
   ├─FROM──number──────────┤
   ├─OLDDATERef────────────┤
   ├─OPENRECOVER───────────┤
   └─WORKUNIT──┬─number──┬─┘
               ├─DEFAULT─┤
               └─PRIVATE─┘
Syntax Description:
Operation: Unless FROM is specified, reading begins at the first record in the file. The
file is closed before <sfsslow terminates.
Commit Level: <sfsslow starts on commit level -2000000000. It creates a private unit of
work if WORKUNIT PRIVATE is specified, opens the file, allocates a buffer if required, and
then commits to level 0.
Premature Termination: <sfsslow terminates when it discovers that its output stream is
not connected.
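Examples: A sketch of such a command:
pipe <sfsslow profile exec . | console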
This reads your profile from your top directory in the current file pool.
Depending on the operating system and the actual syntax of the parameters, > selects the
appropriate device driver to perform the actual I/O to the file.
──>──string──
Operation: The actual device driver to be used is selected based on the argument string:
Notes:
1. Use >sfs to access a file using a dirid that would be scanned by > as a mode letter or a
mode letter followed by a digit.
2. >sfs maintains authorisations and other attributes for the file when it is replaced,
whereas >mdsk creates a work file and as a result loses such information.
CMS
                        ┌─Variable──────────┐
──>MDSK──fn──ft──fmode──┼───────────────────┼──
                        └─Fixed──┬────────┬─┘
                                 └─number─┘
Syntax Description: Specify as blank-delimited words the name, type, and mode of the
file to be created. If the file does not exist with a file name and a file type as entered, the
file name and the file type are translated to upper case and the search is retried. Append a
mode number to the mode letter to create a file with this particular mode number. The
mode number of an existing file is retained when you specify a mode letter without a
number; the default is 1 if the file does not exist. The optional arguments designate the
file format of the file that is created. VARIABLE specifies a file that has record format
variable. FIXED creates a file that has record format fixed. An additional number (the
record length) may be specified for such a file. When the record length is specified, it is
ensured that all input records have that particular length. When the number is omitted, the
first record that is not null determines the record length of the file. The default record
format is VARIABLE.
Operation: If a file already exists with the name specified, a utility file is written. When
this file has been written successfully, the original file is erased and the utility file renamed
to the specified name. An existing file is erased if there are no records containing data on
any input stream. The file is closed before >mdsk terminates.
Streams Used: >mdsk first creates the file from records on the primary input stream that
are not null; all input records are also copied to the primary output stream. The primary
output stream is severed at end-of-file on the primary input stream. The first records of
the file are then overwritten with any records from the secondary input stream that are not
null. All records from the secondary input stream are copied to the secondary output
stream after they are written to the file.
Warning: When the secondary input stream is connected, records read from it must have
the same length as the records they replace in the file, but this is not enforced by CMS for
variable record format files; CMS truncates a variable record format file without indication
of error if a record is replaced with one of different length, be that shorter or longer.
Examples:
/* Create a file with a single line in it */
'pipe literal This is a single line.| > one liner a'
Notes:
1. Use diskslow if >mdsk fails to operate.
2. Null input records are copied to the output (if connected), but not to the file; CMS files
cannot contain null records.
3. An asterisk (*) cannot be specified as the file mode.
4. If the existing file is large and not needed to create the new one, it should be erased
prior to running the pipeline so that the disk space is available to create the new file.
5. The record format of an existing fixed format file is not retained by default. Use state
to determine the record format of a file and supply the fourth word of the result as the
file format option.
6. When it is processing records from the primary input stream, >mdsk may deliver
several records at a time to CMS to improve performance. The file may not be in its
eventual format while it is being created; it should not be accessed (by any means)
before >mdsk terminates. It is unspecified how many records >mdsk buffers, as well
as the conditions under which it does so.
7. When a file is replaced, the new contents will not be visible before >mdsk terminates.
8. On CMS9 and later releases, use >sfs to replace files in SFS. You can accomplish this
either by specifying >sfs explicitly or by specifying a directory with >.
9. Connect the secondary input stream when creating CMS libraries or packed files where
the first record has a pointer to the directory or contains the unpacked record length of
a packed file. The stage that generates the file (for instance, maclib) can write a
placeholder first record on the primary output stream initially; it then writes the real
first record to a stream connected to the secondary input stream of >mdsk when the
complete file has been processed and the location and size of the directory are known.
10. The fast interface to the file system is bypassed if the bit X'10' is on in offset X'3D'
of the FST that is exposed by the FSSTATE macro. Products that compress files on the
fly or in other ways intercept the file system macros should turn on this bit to ensure
that CMS/TSO Pipelines uses documented interfaces only.
11. An existing file is not rewritten in its place, even when mode number 6 is specified or
| the existing file has mode number 6. (Use diskupdate instead.)
12. Use >sfs with ASIS to create a minidisk file that has a mixed case file name or file
type, or both.
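Note 5 above can be sketched as follows; the file names are assumptions for illustration:
/* Preserve the record format of OLD FILE A when replacing it */
'pipe state old file a | spec w4 1 | var fmt'
'pipe < input data a | > old file a' fmt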
Return Codes: In addition to the return codes associated with CMS/TSO Pipelines error
messages, >mdsk is transparent to return codes from CMS. Refer to the return codes for the
FSWRITE macro in VM/ESA CMS Application Development Reference for Assembler,
SC24-5453, for a complete list of return codes. You are most likely to encounter these:
1 You do not have write authority to the file.
13 The disk is full.
16 Conflict when writing a buffer; this indicates that a file with the same name has
been created by another invocation of disk.
20 The file name or file type contains an invalid character.
24 The file mode is not valid.
25 Insufficient storage for CMS to allocate buffers.
The configuration variable DISKTEMPFILETYPE governs how >mdsk creates the file name
and the file type for the temporary file when it replaces an existing file that resides in an
SFS directory.
TOD The file name and file type are the unpacked hexadecimal value of the
time-of-day clock at the time the temporary file is created. This creates
| a unique temporary file across a processor complex, so long as all partic-
| ipants observe the protocol.
CMSUT1 The file name is made unique within the virtual machine. The file type
is CMSUT1. CMSUT1 is the default in all styles.
USERID The file name is made unique within the virtual machine. The file type
is the user ID as reported by diagnose 0. This creates a unique tempo-
rary file across a system.
The configuration variable DISKREPLACE governs how >mdsk replaces an existing file that
resides in an SFS directory.
COPY >mdsk performs a copy operation in the SFS server to replace the
contents of the file with the contents of the temporary file it first created.
File authorisations and creation date will remain unchanged. COPY is the
default in the DMS style.
REPLACE >mdsk first creates a temporary file. It then erases the existing file and
renames the temporary file to the file name. This changes the creation
date for the file and drops all authorisations. REPLACE is the default in
the PIP and FPL styles.
z/OS
──>──┤ psds ├──┬────────┬──┬───────────┬──
               ├─COERCE─┤  └─PAD──xorc─┘
               └─CHOP───┘
──┬───────────────────────────┬──┬─────┬──
  ├─ISPFSTATS─────────────────┤  └─SHR─┘
  └─USERDATA──delimitedString─┘
psds:
├──┬─dsname───────────────────────┬──┤
   ├─dsname(generation)───────────┤
   ├─dsname(member)───────────────┤
   ├─dsname(generation)(member)───┤
   ├─'dsname'─────────────────────┤
   ├─'dsname(generation)'─────────┤
   ├─'dsname(member)'─────────────┤
   ├─'dsname(generation)(member)'─┤
   ├─DDname=ddname────────────────┤
   ├─DDname=ddname(member)────────┤
   └─member──ddname───────────────┘
Enclose a fully qualified data set name in single quotes; the trailing quote is optional.
Specify the DSNAME without quotes to have the prefix, if any, applied. Append paren-
theses containing a signed number to specify a relative generation of a data set that is a
member of a generation data group.
To rewrite an already allocated data set, specify the keyword DDNAME= followed by the
DDNAME already allocated. The minimum abbreviation is DD=.
A member is specified in parentheses after the DSNAME or DDNAME. The closing right
parenthesis is optional.
The third form (two blank-delimited words) can be used to write a member of an already
allocated data set. The first word specifies the member name. The second word specifies
the DDNAME; a leading DDNAME= keyword is optional for the second word.
The DSNAME, DDNAME, and member names are translated to upper case.
The options COERCE, CHOP, and PAD are used with fixed record format data sets. COERCE
specifies that the input records should be padded with blanks or truncated to the record
length of the data set. CHOP specifies that long records are truncated; input records must
be at least as long as the record length for the data set. PAD specifies the pad character to
use when padding the record. Input records must not be longer than the record length of
the data set when PAD is specified alone.
ISPFSTATS or USERDATA may be specified for a partitioned data set. Specify USERDATA to
insert literal user data in the directory entry. Specify ISPFSTATS to make >mvs update or
create user data in the format maintained by ISPF.
Specify SHR to allocate the data set shared. The default is DISP=OLD.
Commit Level: >mvs starts on commit level -2000000000. It allocates the data set (if
required) and then commits to 0. The data set is not opened if the commit return code is
nonzero. The data set is opened on commit level 0.
Notes:
1. Do not replace two members in a partitioned data set concurrently; z/OS does not
support this.
2. Use readpds to read members whose names are not upper case alphanumerics.
3. The installation can set ISPFSTATS as the default. It can also select one of the coercing
options as the default. Refer to the installation instructions.
4. Specifying SHR only affects the allocation; the user must ensure the integrity of the
data set. In particular, specifying SHR does not imply support for concurrent update of
members in a partitioned data set.
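Examples: A sketch (the data set name is an assumption, and the data set is assumed to
exist with fixed record format); the input record is padded or truncated to the record
length of the data set:
pipe literal A single line | > test.data coerce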
>oe appends a line end character (X'15') to each input record before it writes the record
to the file.
──>OE──┬─word─────────┬──
       └─quotedString─┘
Operation: >oe uses a subroutine pipeline that contains block TEXTFILE to append the line
end and hfsreplace to replace the file. When the file exists, >oe buffers the new contents
of the file and thus replaces the specified file when it reaches end-of-file.
Commit Level: >oe starts on commit level -2000000000. It issues the subroutine pipe-
line, which will commit to 0.
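Examples: For instance (the path is an assumption), to create a file with one line:
pipe literal hello world | >oe /tmp/greeting.txt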
CMS
                                ┌───────────────────────────┐
──>SFS──fn──ft──dirid──┬───────┬───┬───────────────────────┬┴──
                       └─digit─┘   ├─ALLOWEMPTY────────────┤
                                   ├─ASIS──────────────────┤
                                   ├─CDATE──number─────────┤
                                   ├─CHOP──────────────────┤
                                   ├─COERCE────────────────┤
                                   ├─ESM──delimitedString──┤
                                   ├─Fixed──┬────────┬─────┤
                                   │        └─number─┘     │
                                   ├─INPLACE───────────────┤
                                   ├─KEEP──────────────────┤
                                   ├─MDATE──number─────────┤
                                   ├─NOCHOP────────────────┤
                                   ├─NOPAD─────────────────┤
                                   ├─NORECOVER─────────────┤
                                   ├─PAD──┬──────┬─────────┤
                                   │      └─xorc─┘         │
                                   ├─SAFE──────────────────┤
                                   ├─Variable──────────────┤
                                   └─WORKUNIT──┬─number──┬─┘
                                               ├─DEFAULT─┤
                                               └─PRIVATE─┘
Syntax Description:
ESM Provide a character string for an external security manager. The char-
acter string can be up to eighty characters and it may contain blanks.
FIXED The record length may be specified after FIXED. Create a new file with
fixed record format; verify that an existing file has fixed record format.
If the record length is specified and the file exists, it is verified that the
file is of the specified record length.
INPLACE Pass the keyword to the open routine. The file will be updated in place.
KEEP Ignored unless WORKUNIT PRIVATE is specified or defaulted.
When KEEP is specified, changes are committed to the file even when an
error has occurred. The default is to roll back the unit of work. KEEP is
mutually exclusive with SAFE.
MDATE Specify the file modification date and time. The timestamp contains
eight to fourteen digits. The first eight digits specify the year (four
digits), the month (two digits), and the day (two digits). The remaining
digits are padded on the right with zeros to form a six-digit time
consisting of the hour, the minute, and the second. A twenty-four hour
clock is used.
NOCHOP Do not truncate long records. Issue a message instead.
| NOPAD Do not pad short records. For files that have fixed record format, issue a
| message and terminate when an input record is shorter than the record
| length; ignore null records when writing files that have variable record
| format (but pass the null record to the primary output stream).
NORECOVER Pass the keyword to the open routine. Changes to the file may persist
after the unit of work is rolled back.
PAD Pad short records with the character specified. The blank is used as the
pad character if the following word does not scan as an xorc. Pad short
records on the right to the file’s record length in a file that has fixed
record format. Write a single pad character for a null input record in a
| file that has variable record format. In both cases, pass the unmodified
| input record to the primary output stream.
SAFE Rejected if WORKUNIT PRIVATE is neither specified nor
defaulted. When SAFE is specified, >sfs performs a pipeline commit to
level 1 before it returns the unit of work. It rolls back the unit of work
if the commit does not complete with return code 0. SAFE is mutually
exclusive with KEEP.
VARIABLE The record length may be specified after VARIABLE. Create a new vari-
able record format file; verify that an existing file has variable record
format.
WORKUNIT Specify the work unit to be used. You can specify the number of a
work unit you have allocated by the callable service; you can specify
DEFAULT, which uses the default unit of work; or you can specify
PRIVATE, which gets and returns a work unit for the stage’s exclusive
use. The default is PRIVATE.
Streams Used: >sfs first creates the file from records on the primary input stream that are
not null; all input records are also copied to the primary output stream. The primary
output stream is severed at end-of-file on the primary input stream. The first records of
the file are then overwritten with any records from the secondary input stream that are not
null. All records from the secondary input stream are copied to the secondary output
stream after they are written to the file. >sfs terminates with an error message if a record
is replaced with one of a different length.
Commit Level: >sfs starts on commit level -2000000000. It creates a private unit of work
if WORKUNIT PRIVATE is specified or defaulted, opens the file, allocates a buffer if required,
and then commits to level 0.
Examples: To create a file that contains a single line in the root directory:
pipe literal one line | > one liner .
Depending on the operating system and the actual syntax of the parameters, >> selects the
appropriate device driver to perform the actual I/O to the file.
──>>──string──
Operation: The actual device driver to be used is selected based on the argument string:
Notes:
1. Use >>sfs to access a file using a dirid that would be scanned by >> as a mode letter or a
mode letter followed by a digit.
CMS
──>>──fn──ft──┬───────────────────────────┬──
              │     ┌─Variable──────────┐ │
              └─fm──┼───────────────────┼─┘
                    └─Fixed──┬────────┬─┘
                             └─number─┘
Syntax Description: Specify as blank-delimited words the file name and the file type of
the file to be appended to. A file mode or an asterisk is optional; the default is to search
all modes. If the file does not exist with a file name and a file type as entered, the file
name and the file type are translated to upper case and the search is retried. The file is
created as A1 if no file mode (or an asterisk) is specified and no file is found with the
name and type given. The record format and (for fixed format files) the record length are
optional arguments. The default is the characteristics of an existing file when appending,
VARIABLE when a file is being created. When the file exists, the specified record format
must match the characteristics of the file.
Operation: Records are appended to an existing CMS file; a new file is created (with an
upper case file name and type) if no file is found to append to. The file is closed before
>>mdsk terminates.
Streams Used: >>mdsk first appends or creates the file from records on the primary input
stream that are not null; all input records are also copied to the primary output stream.
The primary output stream is severed at end-of-file on the primary input stream. The first
records of the file are then overwritten with any records from the secondary input stream
that are not null. All records from the secondary input stream are copied to the secondary
output stream after they are written to the file.
Warning: When the secondary input stream is connected, records read from it must have
the same length as the records they replace in the file, but this is not enforced by CMS for
variable record format files; CMS truncates a variable record format file without indication
of error if a record is replaced with one of different length, be that shorter or longer.
Examples:
/* Append a line to a file */
'pipe literal this is a single line|>> many liner'
Notes:
1. Use diskslow if >>mdsk fails to operate.
2. Null input records are copied to the output (if connected), but not to the file; CMS files
cannot contain null records.
3. Use diskslow to begin to write at a particular record. Use diskupdate to replace
random records.
4. When it is processing records from the primary input stream, >>mdsk may deliver
several records at a time to CMS to improve performance. The file may not be in its
eventual format while it is being created; it should not be accessed (by any means)
before >>mdsk terminates. It is unspecified how many records >>mdsk buffers, as well
as the conditions under which it does so.
5. Connect the secondary input stream when creating CMS libraries or packed files where
the first record has a pointer to the directory or contains the unpacked record length of
a packed file. The stage that generates the file (for instance, maclib) can write a
placeholder first record on the primary output stream initially; it then writes the real
first record to a stream connected to the secondary input stream of >>mdsk when the
complete file has been processed and the location and size of the directory are known.
6. The fast interface to the file system is bypassed if the bit X'10' is on in offset X'3D'
of the FST that is exposed by the FSSTATE macro. Products that compress files on the
fly or in other ways intercept the file system macros should turn on this bit to ensure
that CMS/TSO Pipelines uses documented interfaces only.
Return Codes: In addition to the return codes associated with CMS/TSO Pipelines error
messages, >>mdsk is transparent to return codes from CMS. Refer to the return codes for
the FSWRITE macro in VM/ESA CMS Application Development Reference for Assembler,
SC24-5453, for a complete list of return codes. You are most likely to encounter these:
1 You do not have write authority to the file.
13 The disk is full.
16 Conflict when writing a buffer; this indicates that a file with the same name has
been created by another invocation of disk.
20 The file name or file type contains an invalid character.
24 The file mode is not valid.
25 Insufficient storage for CMS to allocate buffers.
z/OS
──>>──┬─word───────────────┬──┬────────┬──┬───────────┬──
      ├─word(generation)───┤  ├─COERCE─┤  └─PAD──xorc─┘
      ├─'word'─────────────┤  └─CHOP───┘
      ├─'word(generation)'─┤
      └─DDname=word────────┘
Enclose a fully qualified data set name in single quotes; the trailing quote is optional.
Specify the DSNAME without quotes to have the prefix, if any, applied. Append paren-
theses containing a signed number to specify a relative generation of a data set that is a
member of a generation data group.
To append to an already allocated data set, specify the keyword DDNAME= followed by the
DDNAME already allocated. The minimum abbreviation is DD=.
The options COERCE, CHOP, and PAD are used with fixed record format data sets. COERCE
specifies that the input records should be padded with blanks or truncated to the record
length of the data set. CHOP specifies that long records are truncated; input records must
be at least as long as the record length for the data set. PAD specifies the pad character to
use when padding the record. Input records must not be longer than the record length of
the data set when PAD is specified alone.
Commit Level: >>mvs starts on commit level -2000000000. It allocates the data set (if
required), opens the DCB, and commits to level 0.
Notes:
1. >>mvs cannot append to a member of a partitioned data set. Use < to read the
member and > to replace it.
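Examples: A sketch (the data set name is an assumption, and the data set is assumed to
exist), appending one record to a sequential data set:
pipe literal One more line | >> test.log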
>>oe appends a line end character (X'15') to each input record before it writes the record
to the file.
──>>OE──┬─word─────────┬──
└─quotedString─┘
Operation: >>oe uses a subroutine pipeline that contains block TEXTFILE to append the
line end and hfs to append to the file.
Commit Level: >>oe starts on commit level -2000000000. It issues the subroutine pipe-
line, which will commit to 0.
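Examples: For instance (the path is an assumption), to append one line:
pipe literal appended line | >>oe /tmp/greeting.txt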
CMS
                                 ┌───────────────────────────┐
──>>SFS──fn──ft──dirid──┬───────┬───┬───────────────────────┬┴──
                        └─digit─┘   ├─ALLOWEMPTY────────────┤
                                    ├─ASIS──────────────────┤
                                    ├─CHOP──────────────────┤
                                    ├─COERCE────────────────┤
                                    ├─ESM──delimitedString──┤
                                    ├─Fixed──┬────────┬─────┤
                                    │        └─number─┘     │
                                    ├─KEEP──────────────────┤
                                    ├─MDATE──number─────────┤
                                    ├─NOCHOP────────────────┤
                                    ├─NOPAD─────────────────┤
                                    ├─PAD──┬──────┬─────────┤
                                    │      └─xorc─┘         │
                                    ├─SAFE──────────────────┤
                                    ├─Variable──────────────┤
                                    └─WORKUNIT──┬─number──┬─┘
                                                ├─DEFAULT─┤
                                                └─PRIVATE─┘
Syntax Description:
FIXED The record length may be specified after FIXED. Create a new file with
fixed record format; verify that an existing file has fixed record format.
If the record length is specified and the file exists, it is verified that the
file is of the specified record length.
KEEP Ignored unless WORKUNIT PRIVATE is specified or defaulted.
When KEEP is specified, changes are committed to the file even when an
error has occurred. The default is to roll back the unit of work. KEEP is
mutually exclusive with SAFE.
MDATE Specify the file modification date and time. The timestamp contains
eight to fourteen digits. The first eight digits specify the year (four
digits), the month (two digits), and the day (two digits). The remaining
digits are padded on the right with zeros to form a six-digit time
consisting of the hour, the minute, and the second. A twenty-four hour
clock is used.
NOCHOP Do not truncate long records. Issue a message instead.
| NOPAD Do not pad short records. For files that have fixed record format, issue a
| message and terminate when an input record is shorter than the record
| length; ignore null records when writing files that have variable record
| format (but pass the null record to the primary output stream).
PAD Pad short records with the character specified. The blank is used as the
pad character if the following word does not scan as an xorc. Pad short
records on the right to the file’s record length in a file that has fixed
record format. Write a single pad character for a null input record in a
| file that has variable record format. In both cases, pass the unmodified
| input record to the primary output stream.
SAFE Rejected if WORKUNIT PRIVATE is neither specified nor
defaulted. When SAFE is specified, >>sfs performs a pipeline commit to
level 1 before it returns the unit of work. It rolls back the unit of work
if the commit does not complete with return code 0. SAFE is mutually
exclusive with KEEP.
VARIABLE The record length may be specified after VARIABLE. Create a new vari-
able record format file; verify that an existing file has variable record
format.
WORKUNIT Specify the work unit to be used. You can specify the number of a
work unit you have allocated by the callable service; you can specify
DEFAULT, which uses the default unit of work; or you can specify
PRIVATE, which gets and returns a work unit for the stage’s exclusive
use. The default is PRIVATE.
Streams Used: >>sfs first appends or creates the file from records on the primary input
stream that are not null; all input records are also copied to the primary output stream.
The primary output stream is severed at end-of-file on the primary input stream. The first
records of the file are then overwritten with any records from the secondary input stream
that are not null. All records from the secondary input stream are copied to the secondary
output stream after they are written to the file. An error message is issued if a record is
replaced with one of a different length.
Commit Level: >>sfs starts on commit level -2000000000. It creates a private unit of
work if WORKUNIT PRIVATE is specified or defaulted, opens the file, allocates a buffer if
required, and then commits to level 0.
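Examples: A sketch (the file name and directory are assumptions), appending a line and
committing the unit of work safely:
pipe literal new entry | >>sfs log file . safe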
>>sfsslow writes one record at a time; it does not attempt to block writes.
CMS
──>>SFSSLOW──fn──ft──dirid──┬───────┬──
                            └─digit─┘
┌───────────────────────────┐
───┬───────────────────────┬┴──
   ├─ALLOWEMPTY────────────┤
   ├─ASIS──────────────────┤
   ├─CHOP──────────────────┤
   ├─COERCE────────────────┤
   ├─ESM──delimitedString──┤
   ├─Fixed──┬────────┬─────┤
   │        └─number─┘     │
   ├─FROM──number──────────┤
   ├─HARDEN──┬────────┬────┤
   │         └─number─┘    │
   ├─KEEP──────────────────┤
   ├─MDATE──number─────────┤
   ├─NOCHOP────────────────┤
   ├─NOPAD─────────────────┤
   ├─OPENRECOVER───────────┤
   ├─PAD──┬──────┬─────────┤
   │      └─xorc─┘         │
   ├─SAFE──────────────────┤
   ├─Variable──────────────┤
   └─WORKUNIT──┬─number──┬─┘
               ├─DEFAULT─┤
               └─PRIVATE─┘
Syntax Description:
Streams Used: >>sfsslow first appends or creates the file from records on the primary
input stream that are not null; all input records are also copied to the primary output
stream. The primary output stream is severed at end-of-file on the primary input stream.
The first records of the file are then overwritten with any records from the secondary input
stream that are not null. All records from the secondary input stream are copied to the
secondary output stream after they are written to the file. An error message is issued if a
record is replaced with one of a different length.
Commit Level: >>sfsslow starts on commit level -2000000000. It creates a private unit of
work if WORKUNIT PRIVATE is specified or defaulted, opens the file, allocates a buffer if
required, and then commits to level 0.
To append to a log file and make sure the lines are immediately added to the file:
pipe ... | >>sfsslow log file production.logs harden 1 inplace
──ABBREV──┬───────────────────────────────┬──
└─word──┬─────────────────────┬─┘
└─number──┬─────────┬─┘
└─ANYCASE─┘
The word specifies the characters to compare against the beginning of input records. The
default is a null word. The number specifies the minimum count of characters that must
be present to select the record. The default is zero, which means that any abbreviation
down to a null record or a leading blank will be selected. Specify ANYCASE to make the
comparison case insensitive.
Operation: abbrev compares the leading columns of each record against the specified
word until a blank or the end of the record is met. The record is passed to the primary
output stream if a minimum abbreviation of the specified word is present. Otherwise, the
record is discarded (or passed to the secondary output stream if the secondary output
stream is connected).
Streams Used: Records are read from the primary input stream. Secondary streams may
be defined, but the secondary input stream must not be connected.
Record Delay: An input record is written to exactly one output stream when both output
streams are connected. abbrev strictly does not delay the record.
Commit Level: abbrev starts on commit level -2. It verifies that the secondary input
stream is not connected and then commits to level 0.
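Examples: To select records that are abbreviations of the word 'query' of at least
one character, regardless of case (an illustration based on the description above):
pipe literal Q | abbrev query 1 anycase | console
Q
Ready;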
Notes:
1. abbrev is similar to the REXX built-in function abbrev().
2. Using abbrev without arguments is a somewhat contorted way to select null records
and records that contain a leading blank.
! 3. CASEANY, CASEIGNORE, CASELESS, and IGNORECASE are all synonyms for ANYCASE.
¡¡ CMS
¡ ──ACIGROUP──
¡ Operation: For each word in the input, acigroup makes it upper case and obtains the ACI
¡ group from CP. When the user ID exists, a 16-byte record is written to the primary output
¡ stream. The record contains the 8-character user ID followed by the group. When the
¡ user ID does not exist and the secondary output stream is defined, an 8-byte record
¡ containing the user ID is written to this stream.
¡ Streams Used: Secondary streams may be defined. Records are read from the primary
¡ input stream; no other input stream may be connected. Null and blank input records are
¡ discarded.
¡ Commit Level: acigroup starts on commit level -2. It verifies that the secondary input
¡ stream is not connected and then commits to level 0.
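Examples: To obtain the ACI group for the user MAINT (an illustrative sketch; the
response depends on the CP directory of the system, so none is shown):
pipe literal MAINT | acigroup | console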
──ADDRDW──┬─Variable─┬──
├─CMS──────┤
├─SF───────┤
├─CMS4─────┤
└─SF4──────┘
Type: Filter.
Syntax Description:
VARIABLE Prefix a record descriptor word as used by z/OS (four bytes). The
record descriptor word contains the total length of the output record in
the first two bytes; the next two bytes contain binary zeros.
CMS Prefix a record descriptor word as used by CMS (two bytes). The first
two bytes contain the length of the input record (unsigned).
SF Prefix a structured field length specifier (two bytes). The first two bytes
contain the length of the output record; this is two more than the length
of the input record.
CMS4 Prefix a record descriptor word of four bytes. The first four bytes
contain the length of the input record.
SF4 Prefix a record descriptor word of four bytes. The first four bytes
contain the length of the output record; this is four more than the length
of the input record.
Premature Termination: addrdw terminates when it discovers that its output stream is
not connected.
Examples:
pipe literal abc | addrdw cms | spec 1.2 c2x 1 3-* nw | console
0004 abc
Ready;
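With SF, the length specifier includes its own two bytes, so the same input yields a
count two larger (a sketch following the pattern of the example above):
pipe literal abc | addrdw sf | spec 1.2 c2x 1 3-* nw | console
0005 abc
Ready;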
!! CMS
! ──ADRSPACE──┬─CREATE──┬──────────┬──┬────────────┬─┬──
!             │         └─ALET─(1)─┘  └─INITialise─┘ │
! ├─DESTROY────────────────────────────┤
! ├─ISOLATE────────────────────────────┤
! ├─PERMIT──┬────────────┬──┬───────┬──┤
! │ ├─USER──word─┤ └─WRITE─┘ │
! │ └─VCIT──hex──┘ │
! └─QUERY──┬────────────────┬──────────┘
! └─┬──────┬──word─┘
! └─USER─┘
! Note:
! 1 ALET is implied when INITIALISE is specified.
! Syntax Description:
!! CREATE Data spaces are created based on the information supplied on the
! primary input stream. Optionally, an ALET is assigned. If INITIALISE is
! further specified, the first 80 bytes of the data space are set as follows
! (addresses in hexadecimal):
! 0-7 The eye-catcher 'fplasit1'.
! 8-F The ASIT.
! 10-13 The count of pages.
! 14-17 Zeros. May be used as a lock.
! 18-1F The user ID that created the data space.
! 20-37 The data space name.
! 38-3B The first available byte in the data space (=X'50').
! 3C-3F The last available byte in the data space (4096 times the
! number of pages minus 1).
! 40-4F Reserved. Zeros.
! The definition of this structure is built in as fplasit.
!! DESTROY The data spaces specified by the input are destroyed.
!! ISOLATE The data spaces specified by the input are isolated, that is, all permis-
! sions granted for them are revoked.
!! PERMIT The specified user is given permission to access the data spaces specified
! by the input. The user is identified either by the user ID or by the
! address space identification token (ASIT) of the virtual machine’s primary
! address space, also known as the virtual configuration identification
! token (VCIT).
!! QUERY Obtain the address space identification token of address spaces owned by
! the specified user or yourself.
! Input Record Format:
!! DESTROY If USER or VCIT is specified with PERMIT, the record may be eight,
! ISOLATE  twelve, or sixteen bytes; the first eight bytes are the address space
! PERMIT   identification token (ASIT) of the data space for which the user is
!          granted access. Otherwise, the input record must be sixteen bytes and
!          contain eight bytes ASIT followed by eight bytes user ID.
!! QUERY The name of the address space to query.
! Output Record Format: The table below shows the information written to the primary
! output stream. When the secondary output stream is defined, the input record is passed to
! that stream if the referenced object does not exist.
!! CREATE Eight bytes address space identification token (ASIT) followed by a four
! bytes binary count of the number of pages available in the data space,
! rounded up to the next multiple of 256 to indicate the actual capacity
! available. When ALET is specified four bytes ALET is appended to the
! record making it sixteen bytes in all.
!! DESTROY The input record is passed unchanged.
! ISOLATE
! PERMIT
!! QUERY Eight bytes address space identification token (ASIT) followed by four
! bytes binary count of the number of pages available in the data space.
! This page count is a multiple of 256.
! Streams Used: Secondary streams may be defined. Records are read from the primary
! input stream; no other input stream may be connected. Null input records are discarded.
! Commit Level: adrspace starts on commit level -2. It verifies that the secondary input
! stream is not connected and then commits to level 0.
! Examples: See Chapter 18, “Using VM Data Spaces with CMS Pipelines” on page 207.
! Notes:
! 1. addrspace is a synonym for adrspace.
! 2. The virtual machine must be in XC mode (this excludes z/CMS).
! 3. All address spaces created are destroyed by an IPL of the virtual machine.
! 4. Permissions granted are revoked by IPL of either of the two virtual machines.
! 5. Use key X'E0' on adrspace CREATE to interoperate with CMS programs running in
! user key, such as CMS Pipelines.
CMS
──AFTFST──┬─────────────────────────┬──
├─NOFORMAT────────────────┤
├─SHOrtdate───────────────┤
├─ISOdate─────────────────┤
├─FULldate────────────────┤
├─STAndard────────────────┤
! └─STRing──delimitedString─┘
NOFORMAT The file status information is not formatted. The output record is sixty-
four bytes.
FULLDATE The file’s timestamp is formatted in the American format, with the
century: 3/09/1946 23:59:59.
ISODATE The file’s timestamp is formatted with the century in one of the formats
approved by the International Standardisation Organisation:
1946-03-09 23:59:59.
SHORTDATE The file’s timestamp is formatted in the American format, without the
century: 3/09/46 23:59:59.
STANDARD The file’s timestamp is formatted as a single word in a form that can be
used for comparisons: 19460309235959.
!! STRING Specify custom timestamp formatting, similar to the POSIX strftime()
! function. The delimited string specifies formatting as literal text and
! substitutions are indicated by a percentage symbol (%) followed by a
! character that defines the substitution. These substitution strings are
! recognised by aftfst:
! %% A single %.
! %Y Four digits year including century (0000-9999).
! %y Two-digit year of century (00-99).
! %m Two-digit month (01-12).
! %n Two-digit month with initial zero changed to blank ( 1-12).
! %d Two-digit day of month (01-31).
! %e Two-digit day of month with initial zero changed to blank ( 1-31).
! %H Hour, 24-hour clock (00-23).
! %k Hour, 24-hour clock first leading zero blank ( 0-23).
! %M Minute (00-59).
! %S Second (00-60).
! %F Equivalent to %Y-%m-%d (the ISO 8601 date format).
! %T Short for %H:%M:%S.
! %t Tens and hundredth of a second (00-99).
Operation: A line is written for each file in the Active File Table (AFT).
Output Record Format: When NOFORMAT is specified, the output record contains 64
bytes in the format defined by the FSTD data area.
Otherwise, selected fields of the file status are formatted and written as a record: the file
name, type, and mode; the record format and logical record length; the number of records
and the number of disk blocks in the file; the date and time of last change to the file.
Premature Termination: aftfst terminates when it discovers that its output stream is not
connected.
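Examples: To list the open files with the timestamp formatted as an ISO 8601 date
and time using STRING substitutions (a sketch; the response depends on which files
are currently in the AFT, so none is shown):
pipe aftfst string /%Y-%m-%dT%H:%M:%S/ | console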
Notes:
1. CMS adds an entry to the list of open files if the output from aftfst is written to disk
later in the pipeline. Buffer the output from aftfst (for instance with buffer or sort ) to
ensure that you get a consistent snapshot of the file status.
2. Running this device driver may alert authors of shells that they have forgotten to close
open files.
3. A file is in the AFT when it has been opened by an explicit FSOPEN or by an implicit
open on the first I/O operation to the file. FSCLOSE (or the CMS command FINIS) closes
the file and removes information about the file from the AFT.
4. aftfst does not write information about files in the Shared File System (SFS) that are
opened by call to the Callable Services Library (CSL) routine DMSOPEN (or similar).
5. Be sure to set numeric digits 14 when performing comparisons on STANDARD
timestamps; if you forget, REXX will use just nine digits precision. This means that
the first digit of the hour will be the least significant one and the remainder of the
precision will be lost.
6. SORTED is a synonym for STANDARD.
──AGGRC──
Type: Filter.
Output Record Format: A number. Zero and positive numbers have no sign.
Examples:
pipe literal 0 99 3 6 | split | aggrc | console
99
Ready;
Notes:
1. aggrc can aggregate the return codes written to the secondary output stream by host
command processors.
2. No output is produced if there is no input.
Return Codes: When the output record cannot be written (the stage is last in a pipeline),
the return code is set to the aggregate of the numbers read.
factor:
├──┬───┬──┤ string ├──┤
└─¬─┘
string:
├──┬─delimitedString──────┬──┤
└─(──┤ expression ├──)─┘
Operation: all constructs a suitable multistream subroutine pipeline containing locate and
nlocate filters, and runs it.
Streams Used: Records are read from the primary input stream. Secondary streams may
be defined, but the secondary input stream must not be connected.
Record Delay: An input record is written to exactly one output stream when both output
streams are connected. all strictly does not delay the record.
Commit Level: all starts on commit level -2. It verifies that the secondary input stream is
not connected, parses the expression, and then issues a subroutine pipeline to work on data.
This subroutine commits to level 0 in due course.
Examples: To select records that contain a string or a vertical bar (4F in hexadecimal), or
both:
all /abc/ ! x4f
To select records that contain an exclamation mark and either (or both) of two strings:
...| all (/abc/ ! /def/) & /!/ |...
Note that XEDIT does not support parentheses for grouping.
Notes:
1. Performance will improve if the equivalent multistream pipeline network is specified
instead of using all.
2. Specify a percent sign followed by a keyword (in lower case) at the beginning of the
argument string to inspect the subroutine pipeline that is constructed. Specify %debug
to write it to the file ALL DEBUG. Specify %dump to write the subroutine pipeline to the
primary output stream before invoking it.
!! CMS
! ──ALSERV──┬─ADD──┬───────┬─┬──
! │ └─WRITE─┘ │
! ├─REMOVE─────────┤
! └─TEST───────────┘
! Syntax Description:
!! ADD Create an ALET in the access list. The ALET has write permission when
! WRITE is specified and the owner has granted your virtual machine write
! access.
!! REMOVE Remove an ALET from the access list.
!! TEST Verify the correctness of an ALET. Correct ALETs are passed to the
! primary output stream. Incorrect ones are passed to the secondary
! output stream, if it is connected; otherwise the stage terminates with an
! error message.
! Input Record Format:
!! ADD The address space identification token (ASIT) in the first eight bytes. The
! record may be eight, twelve, or sixteen bytes long.
!! REMOVE Four bytes access list entry token (ALET).
! TEST
! Streams Used: Secondary streams may be defined. Records are read from the primary
! input stream; no other input stream may be connected.
! Commit Level: alserv starts on commit level -2. It verifies that the secondary input
! stream is not connected and then commits to level 0.
! Examples: See Chapter 18, “Using VM Data Spaces with CMS Pipelines” on page 207.
! Notes:
! 1. The virtual machine must be in XC mode (this excludes z/CMS).
┌─3279─┐ ┌─TEXT─┐
──APLDECode──┼──────┼──┼──────┼──
├─3278─┤ └─APL──┘
├─3277─┤
├─1────┤
└─2────┘
Syntax Description: Two keyword operands are optional. The first operand specifies the
type of device for which data are decoded. The default is to decode with the X'08'
graphic escape sequence used by devices such as 3278 and 3279 (and subsequent termi-
nals). 3277 specifies the older style of decoding that uses pseudo start of field orders. The
first operand can also be specified as 1 (for 3278) or 2 (for 3277); these correspond to the
third word of the output from fullscrs. The second keyword specifies whether the TEXT
(default) or APL mapping should be used.
Operation: Two translate tables are set up containing the defaults that correspond to the
CP translation for TEXT ON (or APL ON if the keyword APL is specified) for the specified
type of terminal. If the secondary input stream is defined, these defaults are modified by
overlaying one record from it.
Input records on the primary input stream are scanned for escape characters, which are
deleted. The character after an escape character is translated using the first table; other
positions are translated using the second table. The escape character is X'08' when no
operands are specified and when the first operand is 3279 (or 3278 or 1); the escape char-
acter is X'1D' when the first operand is 3277 (or 2).
Input Record Format: Inbound 3270 data without orders. In particular, SBA (set buffer
address) orders should have been processed before the record is passed to apldecode.
If the secondary input stream is connected, a single record is read from it before the file on
the primary stream is processed. (End-of-file is treated as if a null record were read.)
This record can be any length, but only the first 512 bytes are used. The record is
assumed to contain two translate tables that are to be overlaid on the two default translate
tables, starting at the beginning of the first table. The ending part of the tables is left
unchanged if the record is shorter than 512 bytes.
Streams Used: If the secondary input stream is defined, one record is read and consumed
from it. The secondary input stream is severed before the primary input stream is proc-
essed. The secondary output stream must not be connected.
Commit Level: apldecode starts on commit level -2. It verifies that the secondary output
stream is not connected and then commits to level 0.
Examples:
The first stage of this pipeline generates the data to be displayed by fullscr. The first byte
contains X'C0', which is a flag byte that specifies the erase/write alternate function. The
second byte contains a write control character that restores the keyboard to allow user
input (it also resets any modified data tags, which is not relevant in this example). The
third stage discards the attention ID and the cursor address from the input record; because
the screen is unformatted, the remaining data contain no 3270 device orders to worry
about. The fourth stage decodes the graphic escape sequences and delivers a record that
contains one byte per character to the final stage, which stores this into a variable.
┌─3279─┐ ┌─TEXT─┐
──APLENCode──┼──────┼──┼──────┼──
├─3278─┤ └─APL──┘
├─3277─┤
├─1────┤
└─2────┘
Syntax Description: Two keyword operands are optional. The first operand specifies the
type of device for which data are encoded. The default is to encode with the X'08'
graphic escape sequence used by devices such as 3278 and 3279 (and subsequent termi-
nals). 3277 specifies the older style of encoding that uses pseudo start of field orders. The
first operand can also be specified as 1 (for 3278) or 2 (for 3277); these correspond to the
third word of the output from fullscrs. The second keyword specifies whether the TEXT
(default) or APL mapping should be used.
Operation: Two translate tables are set up containing the defaults that correspond to the
CP translation for TEXT ON (or APL ON if the keyword APL is specified) for the specified
type of terminal. If the secondary input stream is defined, these defaults are modified by
overlaying one record from it.
Characters in the input record for which the corresponding position in the first translate
table is nonzero are replaced with an escape character and the value from the first translate
table. Characters that are not to be escaped are translated according to the second translate
table. The escape character is X'08' when no operands are specified and when the first
operand is 3279 (or 3278 or 1); the escape character is X'1D' when the first operand is
3277 (or 2).
If the secondary input stream is connected, a single record is read from it before the file on
the primary stream is processed. (End-of-file is treated as if a null record were read.)
This record can be any length, but only the first 512 bytes are used. The record is
assumed to contain two translate tables that are to be overlaid on the two default translate
tables, starting at the beginning of the first table. The ending part of the tables is left
unchanged if the record is shorter than 512 bytes.
Streams Used: If the secondary input stream is defined, one record is read and consumed
from it. The secondary input stream is severed before the primary input stream is proc-
essed. The secondary output stream must not be connected.
Commit Level: aplencode starts on commit level -2. It verifies that the secondary output
stream is not connected and then commits to level 0.
The message is encoded for a 3278 TEXT in the second stage. The third stage adds a flag
byte (erase/write alternate) and a write control character (keyboard restore, reset modified
data tags). The last stage displays the message on the terminal without waiting for oper-
ator action. Thus, the program that issued the pipeline continues immediately.
append—Put Output from a Device Driver after Data on the Primary Input
Stream
append passes all input records to the output and then runs a device driver to generate
additional output.
──APPEND──string──
Type: Control.
Syntax Description: The argument string is normally a single stage, but any pipeline
specification that can be suffixed by a connector (|*:) is acceptable (see usage note 2).
Operation: All records on the primary input stream are copied to the primary output
stream. Then the string is issued as a subroutine pipeline with CALLPIPE, using the default
stage separator (|), double quotes as the escape character ("), and the backward slash as
the end character (\). The beginning of the pipeline is unconnected. The end of the
pipeline is connected to append’s primary output stream. (Do not write an explicit
connector.)
In the subroutine pipeline, device drivers that reference REXX variables (rexxvars, stem,
var, and varload) reach the EXECCOMM environments in effect for append.
Streams Used: append shorts the primary input stream to the primary output stream
(callpipe *:|*:); this does not delay the record. The specified string can refer to all
defined streams except for the primary output stream (which is connected to the end of the
subroutine pipeline by append); the primary input stream will be at end-of-file.
Record Delay: append strictly does not delay the record. The records that are appended
are delayed until the end of the input file.
Premature Termination: append terminates if it is unable to copy all input to the output
before it issues the subroutine pipeline to run the specified string.
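Examples: A minimal illustration of the basic operation; the input record is copied
to the output and then the output from the second literal device driver is appended:
pipe literal First record | append literal Appended record | console
First record
Appended record
Ready;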
Notes:
1. append is useful to add literals after a file has been reformatted, for instance with
spec.
2. The argument string may contain stage separators and other special characters. Be
sure that these are processed in the right place. The argument string is passed through
the pipeline specification parser twice, first when the pipeline containing the append
stage is set up, and secondly when the argument string is issued as a subroutine pipe-
line. The two example pipelines below show ways to append a subroutine pipeline
consisting of more than one stage. In both cases, the split stage is part of the subrou-
tine pipeline and, thus, splits only the record produced by the second literal stage:
pipe literal c d e| append literal a||||b|| split |||| | console
c d e
a
b
Ready;
pipe (stagesep ?) literal c d e? append literal a||b| split || ? console
c d e
a
b
Ready;
In the first example, the stage separator that should be recognised in the subroutine
pipeline is self-escaped; to get the parameter (a single |) through the pipeline
specification parser twice, it must be doubly self-escaped; that is, the four vertical bars
become one when the argument is presented to split. In the second example, the main
pipeline uses the question mark as its stage separator and thus no escape is required to
pass the vertical bar to the subroutine pipeline; and a single self-escape suffices to get
the vertical bar to split.
3. Because a subroutine pipeline is used to pass the input to the output, it will terminate
prematurely if the output is not connected or if end-of-file propagates backwards in the
pipeline. Ensure that the output is connected when using a cascade of hole and
append to issue a command after the input stream reaches end-of-file:
... | hole | append command ... | hole
Without the second hole, the command stage might start before you intended it to.
4. Remember that REXX continuation functionally replaces a trailing comma with a blank.
Also recall that when two strings are separated by one or more blanks, REXX concat-
enates them with a single blank. Use the concatenation operator (||) before the
comma at the end of the line if your portrait style has the stage separators at the left
side of the stage and the trailing blank is significant to your application.
Return Codes: The return code is the return code from the CALLPIPE pipeline command.
It may reflect errors in the argument string or trouble with the stage(s) in the pipeline.
──ASATOMC──
Type: Filter.
Operation: Records that have X'03' (no operation) in column one at the beginning of
the file are passed to the output unchanged. If the first input record has a valid machine
carriage control character, the input is passed unmodified to the output and each record is
verified to have a valid machine carriage control character.
Input Record Format: The first column of the record is an ASA carriage control char-
acter:
Output Record Format: The first column of the record is a machine carriage control
character:
xxxx x001 Write the data part of the record and then perform the carriage operation
specified by the five leftmost bits.
xxxx x011 Perform the carriage operation defined by the five leftmost bits imme-
diately (the data part of the record is ignored).
000n n0x1 Space the number of lines (0 through 3) specified by bits 3 and 4.
1nnn n0x1 Skip to the channel specified by bits 1 through 4. The number must be
in the range 1 to 12 inclusive.
Streams Used: Records are read from the primary input stream and written to the primary
output stream. Null input records are discarded.
Record Delay: When the carriage control character is converted (as opposed to being
passed through unmodified), the carriage control character is not delayed; the data part of a
record is delayed to the following record.
Premature Termination: asatomc terminates when it discovers that its output stream is
not connected.
Examples:
pipe literal 1Head line | asatomc | spec 1 c2x 1 2-* next | console
8B
01Head line
Ready;
Notes:
1. The last output record has a write no space command code (X'01').
──ASMCONT──┬────────────────┬──
¡ └─OFFSET──number─┘
Type: Filter.
Syntax Description:
OFFSET The assembler statements are not at the beginning of the record. For
example, a listing file often has the Assembler statement at offset 41.
number Specify an offset that is zero or positive. The default offset is zero.
¡ Operation: When OFFSET is specified, the column numbers in the following description
¡ should be increased by the number specified. The contents of the offset columns are
¡ deleted from continuation records, but kept on the first record of a statement.
A record shorter than 72 characters or with a blank character in column 72 is copied to the
output.
When column 72 of a record is a non-blank character, columns 1-71 are loaded into a
buffer. Records are read up to one that is shorter than 72 characters or has a blank char-
acter in column 72. The contents of columns 16-71 (or the end of the record) are
appended to the buffer; the contents of columns 1 through 15 are discarded; they are not
inspected to verify that they are blank. The contents of the buffer are written when the
end of the statement is reached.
Input Record Format: An Assembler statement consists of one or more lines. Lines
before the last one have a non-blank character in column 72. The last line of a statement
is blank in column 72, or shorter than 72 characters.
Record Delay: When column 72 is blank, asmcont strictly does not delay the record.
Records that are not blank in column 72 are delayed until the next record that is blank in
column 72.
Premature Termination: asmcont terminates when it discovers that its output stream is
not connected.
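Examples: To search an Assembler source file for a string that may be hidden on a
continuation line (the file name PROG ASSEMBLE is assumed for the illustration):
pipe < prog assemble | asmcont | locate /WORKAREA/ | console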
Notes:
1. asmcont does not support changes to the statement format by the ICTL Assembler
instruction.
──┬─ASMFIND──┬────────┬──────────────────────┬──
│ └─string─┘ │
└─STRASMFInd──┬─────────┬──delimitedString─┘
└─ANYcase─┘
Syntax Description: A string is optional for asmfind. The string starts after exactly one
blank character. Leading and trailing blanks are significant. The maximum string length
is 71 characters.
Operation: Input records are matched the same way XEDIT matches text in a FIND
command (tabs 1, image off, case mixed respect):
A null string matches any record.
Blank characters in the string represent positions that must be present in the input
record, but can have any value.
An underscore in the string represents a position where there must be a blank char-
acter in the input record.
All other characters in the string must be equal to the contents of the corresponding
position in the input record. Case is ignored if ANYCASE is specified.
When the first line of a statement is matched, asmfind copies all lines of the statement
without further inspection to the primary output stream (or discards them if the primary
output stream is not connected). When the first line of a statement is not matched, asmfind
discards all lines of the statement without further inspection (or copies them to the
secondary output stream if it is connected).
Input Record Format: An Assembler statement consists of one or more lines. Lines
before the last one have a non-blank character in column 72. The last line of a statement
is blank in column 72, or shorter than 72 characters.
Streams Used: Records are read from the primary input stream. Secondary streams may
be defined, but the secondary input stream must not be connected.
Record Delay: An input record is written to exactly one output stream when both output
streams are connected. asmfind strictly does not delay the record.
Commit Level: asmfind starts on commit level -2. It verifies that the secondary input
stream is not connected and then commits to level 0.
Examples: To select all statements in an Assembler program that have a label beginning
with 'LAB':
... | asmfind LAB|...
To select all statements in an Assembler program that have the label 'LAB':
... | asmfind LAB_|...
The underscore indicates that column 4 must be blank; thus the label is three characters.
To select all statements of an Assembler program, except comments and those having a
label:
...| asmfind _|...
Notes:
1. asmfind does not support changes to the statement format by the ICTL Assembler
instruction.
! 2. CASEANY, CASEIGNORE, CASELESS, and IGNORECASE are all synonyms for ANYCASE.
3. Remember that REXX continuation functionally replaces a trailing comma with a blank.
Also recall that when two strings are separated by one or more blanks, REXX concat-
enates them with a single blank. Use the concatenation operator (||) before the
comma at the end of the line if your portrait style has the stage separators at the left
side of the stage and the trailing blank is significant to your application.
──┬─ASMNFIND──┬────────┬──────────────────────┬──
│ └─string─┘ │
└─STRASMNFind──┬─────────┬──delimitedString─┘
└─ANYcase─┘
Syntax Description: A string is optional for asmnfind. The string starts after exactly one
blank character. Leading and trailing blanks are significant. The maximum string length
is 71 characters.
Operation: Input records are matched the same way XEDIT matches text in an NFIND
command (tabs 1, image off, case mixed respect):
A null string matches any record.
Blank characters in the string represent positions that must be present in the input
record, but can have any value.
An underscore in the string represents a position where there must be a blank char-
acter in the input record.
All other characters in the string must be equal to the contents of the corresponding
position in the input record. Case is ignored if ANYCASE is specified.
When the first line of a statement is not matched, asmnfind copies all lines of the statement
without further inspection to the primary output stream (or discards them if the primary
output stream is not connected). When the first line of a statement is matched, asmnfind
discards all lines of the statement without further inspection (or copies them to the
secondary output stream if it is connected).
Input Record Format: An Assembler statement consists of one or more lines. Lines
before the last one have a non-blank character in column 72. The last line of a statement
is blank in column 72, or shorter than 72 characters.
Streams Used: Records are read from the primary input stream. Secondary streams may
be defined, but the secondary input stream must not be connected.
Record Delay: An input record is written to exactly one output stream when both output
streams are connected. asmnfind strictly does not delay the record.
Commit Level: asmnfind starts on commit level -2. It verifies that the secondary input
stream is not connected and then commits to level 0.
Notes:
1. asmnfind does not support changes to the statement format by the ICTL Assembler
instruction.
! 2. CASEANY, CASEIGNORE, CASELESS, and IGNORECASE are all synonyms for ANYCASE.
3. Remember that REXX continuation functionally replaces a trailing comma with a blank.
Also recall that when two strings are separated by one or more blanks, REXX concat-
enates them with a single blank. Use the concatenation operator (||) before the
comma at the end of the line if your portrait style has the stage separators at the left
side of the stage and the trailing blank is significant to your application.
──ASMXPND──
Type: Filter.
Operation: For each input record, one or more 80-byte records are created with continua-
tion characters, as required, in column 72. Columns 1 through 71 of the input record are
written as the first line of a statement, padded with blanks if required. The statement is
not continued when the input record is 71 bytes or shorter: column 72 is made blank.
When the input record is longer than 71 characters, continuation is indicated with an
asterisk in column 72 of the first record for the statement. The remaining characters are
written to continuation records with blanks in columns 1 through 15, 56 characters for each
record. Continuation is indicated in each output record until the input record is exhausted.
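The expansion rule can be sketched as follows. This Python fragment is an illustration only (not part of CMS/TSO Pipelines): 71 bytes go into the first line, then 56 bytes into each continuation line, with an asterisk in column 72 of every line that is continued.

```python
def asmxpnd(record: str) -> list[str]:
    """Split one input record into 80-byte Assembler lines, adding
    continuation asterisks in column 72 as described above."""
    first, rest = record[:71], record[71:]
    cont = "*" if rest else " "
    out = [first.ljust(71) + cont + " " * 8]
    while rest:
        seg, rest = rest[:56], rest[56:]
        cont = "*" if rest else " "
        # continuation lines are blank in columns 1 through 15
        out.append(" " * 15 + seg.ljust(56) + cont + " " * 8)
    return out
```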
Record Delay: asmxpnd does not delay the last record written for an input record.
Premature Termination: asmxpnd terminates when it discovers that its output stream is
not connected.
Examples: To generate an Assembler DC instruction for each input line with the contents
of the line as a character constant:
/* MAKEDC REXX */
'callpipe (name MAKEDC)',
'|*:',
"|change /'/''/", /* double quotes */
"|change /&/&&/", /* ... and ampersands */
"|spec /DC/ 10 /C'/ 16 1-* next /'/ next", /* Make big DC */
'/ / next', /* ensure S&D area available */
'|asmxpnd', /* Continuation if needed */
'|*:'
exit RC
Notes:
1. asmxpnd does not support changes to the statement format by the ICTL Assembler
instruction.
──BEAT──┬──────┬──┬─number────────┬──┬─────────────────┬──
¡       └─ONCE─┘  └─number.number─┘  └─delimitedString─┘
Syntax Description: The numeric operand specifies the interval in seconds. Up to six
digits may be specified after the period, allowing for a microsecond interval.
Operation: The delimited string is written to the secondary output stream when the
¡ interval expires without an input record arriving. When ONCE is omitted, beat restarts
¡ another timeout immediately after the output record on the secondary output stream is
¡ consumed; when ONCE is specified, beat waits for the next input record without generating
¡ further output records.
Streams Used: Two streams must be defined. Records are read from the primary input
stream; no other input stream may be connected.
Record Delay: beat strictly does not delay the record it passes to the primary output
stream.
Commit Level: beat starts on commit level -2. It verifies that the secondary input stream
is not connected and then commits to level 0.
Premature Termination: beat terminates when it discovers that either of its output
streams is not connected.
Notes:
1. For beat to work as described, it is assumed that the input records are read into the
pipeline by a device driver that obtains its input through an asynchronous host
interface, as tcpclient does.
2. beat cannot detect timeouts during CMS commands.
──BETWEEN──┬─────────┬──delimitedString──┬─number──────────┬──
           └─ANYcase─┘                   └─delimitedString─┘
Syntax Description: A keyword is optional. Two arguments are required. The first one
is a delimited string. The second argument is a number or a delimited string. A delimited
string can be a string of characters delimited in the normal XEDIT fashion (for instance,
/abc/) or it can be a literal, which in turn can be hexadecimal or binary (for instance,
xf1f2f3). The number must be 2 or larger.
Operation: between copies the groups of records that are selected to the primary output
stream (or discards them if the primary output stream is not connected). Each group
begins with a record that matches the first specified string. When the second argument is a
number, the group has as many records as specified (or it extends to end-of-file). When
the second argument is a string, the group ends with the next record that matches the
second specified string (or at end-of-file).
When ANYCASE is specified, between compares fields without regard to case. By default,
case is respected.
between discards records before, between, and after the selected groups (or copies them to
the secondary output stream if it is connected).
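The group selection described above can be sketched as a generator. This Python fragment is an illustration only (not part of CMS/TSO Pipelines); it yields only the selected records (the primary-stream output) and omits the secondary stream, and it assumes the strings are matched at the beginning of the record, as in the Script example below.

```python
def between(records, start, end, anycase=False):
    """Yield the records of the selected groups. A group begins at a
    record whose leading characters match `start`; an integer `end`
    gives the group size, a string `end` names the record that closes
    the group (the opening record itself cannot close it)."""
    fold = (lambda s: s.lower()) if anycase else (lambda s: s)
    in_group = False
    count = 0
    for rec in records:
        if not in_group:
            if not fold(rec).startswith(fold(start)):
                continue      # outside a group: not selected
            in_group, count = True, 0
        yield rec
        count += 1
        if isinstance(end, int):
            if count >= end:
                in_group = False
        elif count > 1 and fold(rec).startswith(fold(end)):
            in_group = False
```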
Streams Used: Records are read from the primary input stream. Secondary streams may
be defined, but the secondary input stream must not be connected.
Record Delay: An input record is written to exactly one output stream when both output
streams are connected. between strictly does not delay the record.
Commit Level: between starts on commit level -2. It verifies that the secondary input
stream is not connected and then commits to level 0.
Examples: To select examples in a Script file, assuming the tags are at the beginning of
the record:
...| between /:xmp./ /:exmp./ |...
Notes:
1. between selects records from multiple groups, whereas frlabel followed by tolabel
selects only one group.
! 2. CASEANY, CASEIGNORE, CASELESS, and IGNORECASE are all synonyms for ANYCASE.
! 3. pick can do what between does and then quite some more.
Some blocking formats fill all output blocks (possibly except for the last) to capacity. As
a consequence data from an input record may be spanned over two or more output blocks.
                   ┌─Fixed──────────────────────┐
──BLOCK────number──┼────────────────────────────┼────
                   ├─Fixed──number──────────────┤
                   ├─SF─────────────────────────┤
                   ├─SF4────────────────────────┤
                   ├─ADMSF──────────────────────┤
                   ├─CMS────────────────────────┤
                   ├─CMS4───────────────────────┤
                   ├─Variable───────────────────┤
                   ├─VB─────────────────────────┤
                   ├─VS─────────────────────────┤
                   ├─VBS────────────────────────┤
                   ├─AWSTAPE────────────────────┤
                   ├─NETdata────────────────────┤
                   ├─┤ PC-style ├───────────────┤
                   └─TEXTfile──┬──────────────┬─┘
                               └─┤ PC-style ├─┘
PC-style:
├──┬─C───────────────────────┬──┬───────────┬──┬───────────────┬──┤
   │         ┌─15───┐        │  └─TERMinate─┘  │      ┌─3F───┐ │
   ├─LINEND──┼──────┼────────┤                 └─EOF──┼──────┼─┘
   │         └─xorc─┘        │                        └─xorc─┘
   ├─CRLF────────────────────┤
   └─STRING──delimitedString─┘
Type: Filter.
Syntax Description: The first word of the argument string must be a number specifying
the maximum block size. The blocking format is an optional keyword; the default is
FIXED. The record length may be specified after the keyword FIXED. A line end character
is optional with LINEND; the default is 15 (representing the character X'15').
The minimum acceptable value for the block size is 1, except for ADMSF, where it is 2;
¡ except for AWSTAPE, where it is 7; and except for V, VB, VS, and VBS, where it is 9. There
¡ is no limit to the block size other than the availability of virtual storage. Even though no
maximum block size is enforced for AWSTAPE, a maximum of 4096 should be observed;
files created with a block size larger than 4096 may not function with the real AWSTAPE
device driver.
The formats C, LINEND, CRLF, and STRING support two additional keywords, TERMINATE
and EOF. Use TERMINATE to specify that the last line of the file should have a terminating
line end sequence; the default is to insert line end sequences between lines and leave the
last line without one. Use EOF to specify that an end-of-file character should be appended
to the contents of the file. X'3F' (substitute) is the default end-of-file character. block
inserts an end-of-file character only when the keyword is specified.
The format TEXTFILE supports two additional options: A line end character specified as C,
LINEND, CRLF, or STRING; and the EOF option. The TERMINATE option can be specified, but
it cannot be suppressed. The default is LINEND 15 TERMINATE.
Operation: Input records are blocked or spanned to blocks of the specified size or less.
Descriptor words are generated for formats other than FIXED, C, CRLF, STRING, and LINEND.
FIXED Juxtapose records with no control information in between. Only the last
output block can be short. Null input records are discarded. When a
record length is specified (as the second number), it is ensured that all
input records that are not null are of this length. When a record length
is not specified, the length of the first record that is not null is used as
the record length; it is ensured that all input records have the same
length and that the block size is a multiple of the record length. block
stops with an error message if this is not the case. The corresponding
z/OS record format is Fixed Block Standard (RECFM=FBS).
SF Block records to the structured field format. Each record is prefixed by
a halfword length field; the length includes the length of the halfword
(thus, the minimum length is 2). Except for possibly the last block, each
block is filled completely; a logical record (and the halfword length
field) can span blocks.
SF4 Block records to a format similar to the structured field format. Each
record is prefixed by a fullword length field; the length includes the
length of the fullword (thus, the minimum length is 4). Except for
possibly the last block, each block is filled completely; a logical record
(and the fullword length field) can span blocks.
ADMSF Block records to the structured field format used in GDDM objects. Each
record is prefixed by a halfword length field; the length includes the
length of the halfword (thus, the minimum length is 2). Except for
possibly the last block, each block is filled completely; a logical record
can span blocks. The length field does not span blocks. When the last
byte of a logical record is stored in the second last position of the output
record, the last position is padded with X'00'; the following record is
stored at the beginning of the next block.
CMS Build blocks in the format used internally by the CMS file system for
files with variable length records. This format is exposed in data
unloaded by, for instance, the CMS commands TAPE DUMP and DISK
DUMP. The record descriptor word prefixed to the record is a halfword
(two bytes). It contains the length of the record (not including the
halfword). Logical records (and record descriptors) can span output
blocks. The last block is not padded. (The TAPE DUMP command does
not pad blocks; use pad to pad the last block with zeros as it is in the
file system.) Null input records are discarded. block CMS terminates
with an error message if an input record is 64K or longer.
CMS4 Prefix each record with a fullword length field which contains the count
of characters that follow the length field. That is, the length of the
length field is not included in the count. Null input records are
discarded. Logical records (and record descriptors) can span output
blocks. The last block is not padded.
VARIABLE Block to the OS unblocked variable format (RECFM=V). Each record is
prefixed with a block descriptor word and a record descriptor word,
increasing its length by eight bytes. block V terminates with a message
¡ if a record is longer than 32759 bytes or is too long to fit in the output
buffer (that is, if the length of the input record is greater than the
specified block size minus eight).
VB Block to the OS variable blocked format (RECFM=VB). The output block
contains as many logical records as will fit within the specified block
size. The overhead for descriptor words is four bytes per block plus
four bytes per record. A logical record does not span output blocks.
¡ block VB terminates with a message if a record is longer than 32759
¡ bytes or is too long to fit in the output buffer (that is, if the length of the
input record is greater than the specified block size minus eight).
VS Generate blocks in the OS variable spanned format (RECFM=VS). Records
are segmented to fit within the output buffer. A block has one segment.
A logical record starts a new block.
VBS Block to the OS variable blocked spanned format (RECFM=VBS). Records
are segmented to fit within the output buffer. A logical record can span
blocks; there is a segment in each block that the record is spanned over.
Segment descriptors are prefixed to the segments; segment descriptors do
not span blocks. The output block length (except possibly for the last
block) is between n-5 and n, where n is the specified block size.
AWSTAPE Segment and span records according to the format used by the AWSTAPE
device driver for p370, rs370, and p390 (PC Server System/390). If the
input record plus the six bytes segment descriptor is not longer than the
specified block size, a single output record is produced. For longer input
records, as many segments as required are produced.
NETDATA Segment and span records according to the NETDATA format. Null input
records are discarded. The first byte of each input record that is not null
is a flag byte indicating whether the record is a control record (the bit
for X'20' is on) or a data record (the bit for X'20' is off). Bits 3 to 7
of the flag byte are copied to the corresponding bits of the flag bytes in
all output segments produced from an input record. The NETDATA
control records must have been built previously, most likely injected by
¡ one or more literal or preface stages. The output buffer is flushed after
¡ an INMR06 record has been written. This ensures that any stacked file
¡ begins in a separate record.
C Block records with an end of line character (line feed, X'25') between
logical records. Line feed control characters in input records are copied
unchanged to the output. Except for possibly the last block, each block
is filled completely; a logical record can span blocks.
LINEND Block records with an end of line character (by default new line, X'15')
between logical records. End of line characters in input records are
copied unchanged to the output. Except for possibly the last block, each
block is filled completely; a logical record can span blocks.
CRLF Block records with carriage return and line feed (X'0D25') between
logical records. The values are EBCDIC; blocking should be done before
the records are translated to ASCII. Except for possibly the last block,
each block is filled completely; a logical record can span blocks, as can
a line end sequence.
STRING Block records with the specified string between logical records. Except
for possibly the last block, each block is filled completely; a logical
record can span blocks, as can a delimiter string.
TEXTFILE Append a line end sequence to each record and join as many records as
will fit in the buffer. Records are not spanned across block boundaries.
If the record and the line end sequence cannot fit in the buffer, the
record and the line end sequence are written as separate records. The
default line end character is X'15'. If EOF is specified, the end-of-file
character is appended to the last record (after the line end sequence).
TEXTFILE is designed for use with a byte stream file system.
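As an illustration of the structured-field (SF) blocking described above, the following Python fragment (not part of CMS/TSO Pipelines) prefixes each record with a big-endian halfword length that includes the two length bytes, then cuts the resulting byte stream into full blocks, so records and length fields span blocks.

```python
def block_sf(records: list[bytes], blksize: int) -> list[bytes]:
    """Block records in the SF format sketched above: a self-inclusive
    halfword length prefixes each record; every block except possibly
    the last is filled to exactly `blksize` bytes."""
    stream = b"".join(
        (len(r) + 2).to_bytes(2, "big") + r for r in records
    )
    return [stream[i:i + blksize] for i in range(0, len(stream), blksize)]
```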
Output Record Format: When V, VB, VS, or VBS is specified, the output block is prefixed
by a four-byte block descriptor word. In a basic block descriptor word, the length of the
block (including the block descriptor word) is stored in the first two bytes; the first bit
¡ and the last two bytes are zero. An extended block descriptor word is
¡ used when the output block is longer than 32K. Its leftmost bit is one to distinguish the
¡ extended block descriptor from the basic format block descriptor; the length of the block
¡ including the block descriptor is stored in the following 31 bits.
When SF, SF4, ADMSF, V, VB, CMS, or CMS4 is specified, a record descriptor word is
prefixed to each logical record. (That is, to each input record.) For V and VB, the record
descriptor word is four bytes. The length of the record (including the length of the record
descriptor word) is stored in the first two bytes; the next two bytes contain zeros. For SF
and ADMSF, the record descriptor word is two bytes; it contains the length of the record
(including the record descriptor). For SF4, the record descriptor word is four bytes; it
contains the length of the record (including the record descriptor). For CMS, the record
descriptor word is two bytes; it contains the length of the record (excluding the record
descriptor). For CMS4, the record descriptor word is four bytes; it contains the length of
the record (excluding the record descriptor).
When VS, VBS, AWSTAPE, or NETDATA is specified, a segment descriptor word is prefixed
to each segment of a record. For VS and VBS, the segment descriptor word is four
bytes; the length of the segment (including the segment descriptor word) is stored in the
first two bytes of the segment descriptor word; the third byte has segmentation flags
(X'02' means not first segment, X'01' means not last segment); the last byte is zero. VS
and VBS segments do not span blocks. For AWSTAPE, the segment descriptor is six bytes,
consisting of two halfword length fields which have the least significant byte leftmost, a
flag byte, and a byte of zeros. The first length field contains the number of data bytes that
follow the segment descriptor; the second length field contains the number of data bytes in
the previous segment (thus allowing for read backwards). In the flag byte, X'80' means
the first segment of a physical block; X'40' means an end-of-file record; and X'20'
means the last segment of a physical block. For NETDATA, the segment descriptor is a
byte with the length of the segment (including the descriptor) followed by a flag byte
(X'80' means first segment; X'40' means last segment; X'20' means the record is a
control record); segments can span blocks.
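The V and VB descriptor layout described above can be sketched as follows. This Python fragment is an illustration only (not part of CMS/TSO Pipelines): each record gets a four-byte record descriptor word and each block a four-byte block descriptor word, both with a self-inclusive big-endian halfword length followed by two zero bytes, and records do not span blocks.

```python
def block_vb(records: list[bytes], blksize: int) -> list[bytes]:
    """Build RECFM=VB blocks per the descriptor layout above.
    (The real stage rejects records longer than blksize minus eight;
    that check is omitted here.)"""
    def bdw(body: bytes) -> bytes:
        return (len(body) + 4).to_bytes(2, "big") + b"\x00\x00" + body
    blocks, current = [], b""
    for rec in records:
        rdw = (len(rec) + 4).to_bytes(2, "big") + b"\x00\x00"
        # start a new block if BDW + current + RDW + record overflows
        if current and len(current) + 4 + len(rdw) + len(rec) > blksize:
            blocks.append(bdw(current))
            current = b""
        current += rdw + rec
    if current:
        blocks.append(bdw(current))
    return blocks
```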
Streams Used: Records are read from the primary input stream and written to the primary
output stream.
Record Delay: block delays input records as required to build an output record. The
delay is unspecified.
Commit Level: block starts on commit level -2000000000. It allocates the buffer of the
specified size and then commits to level 0.
Premature Termination: block terminates when it discovers that its output stream is not
connected.
Examples: To write a CMS file to an unlabelled tape in variable blocked format suitable
for z/OS:
pipe < input file | block 16000 vb | tape
To write a fixed record format CMS file that has record length 80 in a format suitable for
z/OS:
pipe < input file | block 16000 | tape
These examples show the effect of the TERMINATE and EOF options:
pipe literal abc|block 80 linend * | console
abc
Ready;
Notes:
1. Refer to “Netdata Format” on page 65 for usage information about block NETDATA.
Refer to CMS Pipelines Toolsmith’s Guide and Filter Programming Reference,
SL26-0020, for an example of how to build control records. The file INMR123 REXX is
¡ shipped on the CMS system disk (190).
The Netdata format is documented in z/VM: CMS Macros and Functions Reference,
SC24-6075.
2. block C is a convenience for block LINEND 25. block CRLF is a convenience for block
STRING X0D25.
3. Though input records are called logical records and output records are called blocks,
these are still records (or lines) as perceived by the pipeline dispatcher.
──┬─BROWSE─┬──┬─────────────┬──┬──────────────┬──
  └─BRW────┘  └─┤ options ├─┘  └─(──┤ attrs ├─┘
options:
├──┬─────────┬──┬────┬──┬──────────────────────┬──
   └─devaddr─┘  └─CC─┘  └─DATACODEPAGE──number─┘
──┬──────────────────────┬──┤
  └─TERMCODEPAGE──number─┘
Syntax Description: The arguments consist of options, a left parenthesis, and attribute
characters as defined for buildscr. The options specify the terminal to use (the log on
terminal is the default), a keyword to specify that the data to be displayed contain machine
carriage control characters, and keywords to specify the code pages of the data to display
and of the terminal.
A left parenthesis separates the options for browse from an option list passed as the argu-
ments to buildscr. In this option list, the first four words may each be specified as an
asterisk, three characters, or six hexadecimal characters. The characteristic of the terminal
is provided for any remaining unspecified options; an asterisk can be used as a placeholder
when an option is specified and an earlier option is to be defaulted.
Operation: The geometry and features of the device are determined. The input file is
shown in panels.
When CC is specified, the first column of the input record contains a carriage control char-
acter which may be an ASA or machine carriage control character. Each page begins with
a page eject carriage control; a page is displayed in as many panels (the size of the
display) as are required. When CC is omitted, input lines are displayed single spaced; a
page is the size of the display.
Program function keys 13 through 24 perform the same function as keys 1 through 12,
respectively.
On CMS, program access key 2 drops into the INTM terminal monitor program.
Searching: The pop up panel displayed in response to program function key 1 contains an
input area. A search is performed when characters are entered in this input area (case and
blanks are respected in this string). The search is towards the end of the file for the
particular string entered, in the case entered. When the specified string is found in the
3270 data streams that represent the panels, the corresponding panel is displayed; there is
no indication of where on the panel the matching string was found. The string can consist
partially or entirely of 3270 control sequences. The last panel of the file is shown when
the search fails; there is no audible indication.
Streams Used: Records are read from the primary input stream; no other input stream
may be connected.
Premature Termination: browse terminates when it runs out of storage. This is likely to
be accompanied by several REXX error messages.
Notes:
1. The part of the input that has been read by browse is kept in virtual storage; the file is
read only as required to build panels for display or search. Thus, you can see the first
few panels of even an infinitely large file.
2. When the output stream is connected, any unprocessed input records are passed to the
output when you exit from browse.
3. browse does not support horizontal scrolling.
buffer—Buffer Records
buffer reads input records and accumulates them in memory. buffer writes the buffered
records to its output when it reaches end-of-file and (if arguments are specified) each time
it reads a null record. Optionally, buffer writes multiple copies of the buffered files.
──BUFFER──┬─────────────────────────────┬──
          └─number──┬─────────────────┬─┘
                    └─delimitedString─┘
Type: Filter.
Syntax Description: Specify a number to write multiple copies of a buffered file; a delim-
ited string after the number specifies a record to be written between multiple copies of a
buffered file.
Operation: When no arguments are specified, records (including null ones) are stored in a
buffer until end-of-file. The buffered records are then written to the primary output stream
in the order they were read.
When a number is specified (it can be 1), the input stream is considered to consist of one
or more files separated by null records. Each file is stored in the buffer and written to the
output (in the order it was read) when a null record is read, or at end-of-file. The set of
records in a file is written as many times as specified by the first argument. If the number
is 2 or more, the copies of a file are delimited by records containing the string specified by
the second argument, or by null records if no second argument is specified. When the file
has been written to the output as many times as requested, the null input record is copied
to the output; the buffer is reset to be empty; and reading continues.
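The behaviour just described can be sketched as a generator. This Python fragment is an illustration only (not part of CMS/TSO Pipelines); null records are represented as empty strings.

```python
def buffer(records, number=None, delimiter=None):
    """Sketch of buffer: with no arguments, hold everything until
    end-of-file; with `number`, treat null records as file separators
    and emit each file `number` times, copies separated by `delimiter`
    (a null record when no delimiter is given)."""
    def copies(held):
        sep = "" if delimiter is None else delimiter
        for i in range(number):
            if i:
                yield sep
            yield from held
    held = []
    for rec in records:
        if number is not None and rec == "":
            yield from copies(held)
            yield ""      # the terminating null record is copied through
            held = []
        else:
            held.append(rec)
    if number is None:
        yield from held   # no arguments: everything waits for end-of-file
    elif held:
        yield from copies(held)
```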
Streams Used: Records are read from the primary input stream and written to the primary
output stream.
Record Delay: When no argument is specified, buffer delays all records until end-of-file.
When an argument is specified, the null record (which terminates the part of the file being
buffered) is consumed before writing the first line of the partial file.
Premature Termination: buffer terminates when it discovers that its output stream is not
connected.
Examples: To read lines from the terminal and put them into the stack after the user
signals end-of-file with a null input line:
pipe console | buffer | stack
If there were no buffer stage, the pipeline would loop as soon as a line was put into the
stack, because the console stage would immediately read that line and write it to the stack
stage, which would read it and put it back into the stack, and then the console stage would
read it again.
With the buffer stage, the pipeline works as follows: console reads lines from CMS and
writes them into the pipeline. It reads all lines from the console stack and the terminal
input queue before it begins reading from the terminal. It terminates when it reads a null
line (which indicates end-of-file). buffer stores all its input lines in its buffer until it
receives end-of-file on its input; it then writes the contents of its buffer to the output; and
stack copies the records from its input to the CMS console stack. Thus, by the time buffer
begins to write records, the console stage has terminated and it is safe to put the records
onto the stack.
A direct read (see console) may be more appropriate for reading from the terminal to the
stack.
A buffer stage is required to buffer the output from rexxvars when data derived from its
output are stored back into the variable pool with a var, stem, varload, or varset stage:
pipe rexxvars | find v_ARRAY.| spec 3-* 1 | buffer | stem vars.
As shown in this example, it may be more efficient to buffer the variables that are set
rather than the output from rexxvars.
Notes:
1. A buffer stage may be needed to prevent stalls in intersecting multistream pipelines.
2. dup makes one or more copies of each input record; the copies are contiguous. buffer
with a number makes copies of complete files.
3. With the same input, these two invocations of buffer produce identical output:
...| buffer |...
...| buffer 1 |...
But the timing of the output records is different if there are null records. When used
without arguments, buffer reads all input records before it generates any output. When
an argument is specified, buffer produces its first output record as soon as it reads the
first null input record.
4. Use buffer without arguments to be sure that an arbitrary sequence of records is
buffered in its entirety.
5. To generate two copies of an entire file that contains null lines:
...| instore | dup | outstore |...
The dup stage duplicates the single file descriptor record created by instore; outstore
makes two copies of the file as a result.
──BUILDSCR──┬───────────┬──
└─┤ attrs ├─┘
attrs:
├──┤ attr ├──┬─────────────────────────────────────────────────────┬──┤
             └─┤ attr ├──┬───────────────────────────────────────┬─┘
                         └─┤ attr ├──┬─────────────────────────┬─┘
                                     └─┤ attr ├──┬───────────┬─┘
                                                 └─┤ sizes ├─┘
attr:
├──┬─*──────────────────┬──┤
   ├─char char char─(1)─┤
   └─XhexString──(2)────┘
sizes:
├──number──┬──────────────────────────┬──┤
           └─number──┬──────────────┬─┘
                     └─┤ termtype ├─┘
termtype:
   ┌─0────┐
├──┼─3278─┼──┬─────────────────────┬──┤
   ├─3279─┤  │ ┌─1─┐               │
   ├─1────┤  └─┴─0─┴──┬──────────┬─┘
   ├─3277─┤           │ ┌─TEXT─┐ │
   └─2────┘           └─┼──────┼─┘
                        └─APL──┘
Notes:
1 There are no blanks between the three characters.
2 The hexadecimal string contains six digits.
Syntax Description: The first four blank-delimited words specify extended attributes to be
used for the four possible combinations of underscoring and highlighting. The order of the
attribute items is: neither highlighted nor underscored, underscored, highlighted, and both
highlighted and underscored. An attribute item may be an asterisk (*) to take the default
or three characters that specify extended attributes. The three characters can be specified
as such or as an X followed by six hexadecimal digits. The three attribute characters are
in the order highlighting, colour, and program symbols. For instance, '27e' selects the
programmed symbol set that has ID 'e' (=X'85') and makes it white (7) reverse video
(2). Refer to 3274 Description and Programmer’s Guide, GA23-0061, for a description of
3270 extended attribute characters. Use X'00' to select the default attribute value
depending on the terminal.
Words five and six contain numbers that specify the screen size in character cells (lines
followed by columns). The default is 32 lines of 80 columns. The minimum size is 1920
(24 by 80); the maximum size is 16K.
The seventh word is a switch to indicate whether the display supports the APL/TEXT
feature; it is assumed not to have the feature when the argument string contains six words
or fewer. Specify 0 when the device does not support APL/TEXT. Specify 3278 (or 3279)
for modern 3270 terminals; specify 3277 for the original 3277 terminals and some TELNET
servers. You can also specify this operand in the form used for the third word of the
output from fullscrs: The number 1 specifies 3278-style APL/TEXT; the number 2 specifies
3277-style APL/TEXT.
The eighth word specifies whether the terminal supports extended highlighting and char-
acter attributes; the default is 1. Specify 0 for a device that does not support extended
features (3277 and some TELNET servers).
The ninth word specifies the type of APL/TEXT translation you wish to enable. The
default is TEXT; specify APL to use the APL mapping instead.
Operation: Two translate tables are set up containing the defaults that correspond to the
CP translation for TEXT ON (or APL ON if the keyword APL is specified) for the specified
type of terminal. If the secondary input stream is defined, these defaults are modified by
overlaying one record from it. Output records are 3270 data streams that can be used to
display the contents of the input file, formatted as on the page. Character attribute
sequences are inserted into the data to switch attributes as determined by the contents of
the corresponding positions of the descriptor record. An input page that has more lines
than the display is written as several output records. Input lines that are too long for the
screen width are truncated at the right hand side. Blank lines are generated for input lines
that have skip carriage control (carriage control characters X'11', X'19', X'1B', X'0B',
and X'13'). Skips to channels other than channel 1 are treated as requests to skip one
line.
Input Record Format: X'00' in the first column indicates a descriptor record in the
format produced by overstr. Each column of the descriptor record specifies the high-
lighting and underscoring of the corresponding column in the data record that follows the
descriptor record. These descriptor values are used:
00 The position is blank.
01 The position contains an underscore. (An underscored blank.)
02 The position contains a character that is neither blank nor underscore.
03 The position contains an underscored character.
04 The position contains a highlighted blank.
05 The position contains a highlighted underscore.
06 The position contains a highlighted (overprinted) character.
07 The position contains a highlighted and underscored character.
Records without X'00' in column 1 must begin with a machine carriage control character;
data are from column 2 onwards. Records that are not preceded by a descriptor record are
neither underscored nor highlighted (though they can contain underscore characters). The
end of a page is indicated by a skip to channel 1 (X'89' or X'8B'). Data in a record that
has X'89' carriage control are on the last line of a page.
If the secondary input stream is connected, a single record is read from it before the file on
the primary stream is processed. (End-of-file is treated as if a null record were read.)
This record can be any length, but only the first 512 bytes are used. The record is
assumed to contain two translate tables that are to be overlaid on the two default translate
tables, starting at the beginning of the first table. The ending part of the tables is left
unchanged if the record is shorter than 512 bytes.
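The overlay of the secondary-stream record on the default tables can be pictured as follows (Python sketch; the default table contents here are placeholders, not the real defaults):

```python
DEFAULT_TABLES = bytes(512)  # two 256-byte default translate tables (placeholder contents)

def overlay_tables(record: bytes) -> bytes:
    """Overlay a secondary-stream record on the default tables.
    Only the first 512 bytes of the record are used; when the record
    is shorter, the tail of the tables keeps its default values."""
    chunk = record[:512]
    return chunk + DEFAULT_TABLES[len(chunk):]
```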
Output Record Format: The first position is a flag byte to indicate whether the screen
image is the first from a page of the document (X'01') or a subsequent one (X'00').
Column 2 has the constant 'B' (X'C2'), which is the write control character for keyboard
restore; 3270 orders and data follow. The screen is formatted to a single protected field
with the attribute character in the lower right hand corner of the screen. The cursor is
inserted on this attribute byte.
Streams Used: Records are read from the primary input stream and written to the primary
output stream. Null input records are discarded. If the secondary input stream is defined,
one record is read and consumed from it. The secondary input stream is severed before
the primary input stream is processed. The secondary output stream must not be
connected.
Record Delay: buildscr delays records until it has filled a panel. It is unspecified whether
it reads one more record before writing the output record.
Premature Termination: buildscr terminates when it discovers that its output stream is
not connected.
Examples: To reformat Script output for a 1403 to be displayed on a 32-line 3270 that
supports APL/TEXT:
pipe < $doc script | overstr | buildscr * * * * 32 80 1 |...
Notes:
1. buildscr is intended to process the output from overstr (possibly with an intervening
xpndhi stage).
2. Use spec to prefix X'09' (write no space) to each line of a file without carriage
control.
...| spec x09 1 1-* 2 | buildscr |...
3. Use asatomc to convert from ASA carriage control to machine carriage control.
4. Use fullscrs, fullscrq, or diagnose 8C to determine the size of the terminal screen.
5. When specifying an attribute item in the operands for buildscr, use X'00' to select
the default behaviour for a particular attribute. This is most conveniently done by
specifying a hexadecimal string. Remember that all three characters of the word must
be specified in hexadecimal; for example, the character 4 is specified as F4.
...| buildscr * xf40000 x00f700 xf4f700 |...
In this example, characters that are neither highlighted nor underscored are shown with
the default highlighting, colour, and programmed symbol set. Underscored characters
are shown using extended highlighting for underscore with the default colour and
symbol set. Highlighted characters are shown in white.
6. The output from buildscr cannot in general be fed directly to fullscr to be displayed.
A control mechanism is required for an interactive display, for instance to browse the
data (see SCRCTL REXX in PIPGDSCR PACKAGE). At the least, the flag byte must be
replaced by a control byte, as described for fullscr.
7. buildscr cannot ensure that the 3270 data stream is displayed on a device that has the
geometry specified or defaulted. Lines may wrap when the actual screen has a
different line length. The display may give unit check if the actual screen is smaller
than the size specified (the product of lines and columns).
The argument is assumed to be a selection stage; that is, it should specify a program that
reads only from its primary input stream and passes these records unmodified to its
primary output stream or its secondary output stream without delaying them.
──CASEI──┬──────────────────────┬──┬─────────┬──word──
         └─ZONE──┤ inputRange ├─┘  └─REVERSE─┘
──┬────────┬──
  └─string─┘
Type: Control.
Syntax Description: A word (the name of the program to run) is required; further argu-
ments are optional as far as casei is concerned, but the specified program may require
arguments.
Streams Used: Records are read from the primary input stream; no other input stream
may be connected.
Commit Level: casei starts on commit level -2. It does not perform an explicit commit;
the specified program must do so.
Examples: To select records that contain a GML tag at the beginning of the record,
ignoring case:
pipe ... | casei find :figref | ...
Notes:
1. All built-in selection stages that operate on strings now support the ANYCASE option,
as of CMS/TSO Pipelines level 1.1.10/0015; thus casei is obsolete as far as built-in
programs are concerned. For example, a case insensitive find is:
pipe ... | strfind anycase /:figref / | ...
2. The argument string to casei is passed through the pipeline specification parser only
once (when the scanner processes the casei stage), unlike the argument strings for
append and preface.
¡ 3. End-of-file is propagated from the streams of casei to the corresponding stream of the
¡ specified selection stage.
Return Codes: If casei finds no errors, the return code is the one received from the
selection stage.
──CHANGE──┬─────────┬──┬────────────────────┬──
          └─ANYcase─┘  ├─inputRange─────────┤
                       │  ┌────────┐        │
                       └─(───range─┴───)────┘
──┬─┤ changeString ├─────────────────┬──┬───────────┬──
  └─delimitedString──delimitedString─┘  └─numorstar─┘
changeString:
├──delimiter──string──delimiter──string──delimiter──┤
Type: Filter.
Operation: Within each column range (or the complete record if no range is specified),
occurrences of the first string are replaced with the second string; they are deleted if the
second string is null. Data between substitutions are copied to the output record
unchanged. The substituted string is not scanned for occurrences of the string to be
changed.
When the first string is null, the second string is inserted in front of the first range in
: records that extend to the beginning of that range. If specified, the numorstar must be
: one (1). That is, the string is inserted before the contents of the first column of the first
range if the record extends into that range; the string is appended to records that end in the
position before the first range; short input records are copied unmodified to the output.
Thus, null records are replaced with the string when the first range begins with column 1.
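These insertion rules can be modelled in a few lines (Python sketch, illustrative only; range_start names the 1-based first column of the first range):

```python
def insert_string(record: str, string: str, range_start: int) -> str:
    """Model of change with a null first string: insert before the first
    column of the first range when the record extends into that range;
    append when the record ends just before it; otherwise copy unchanged."""
    if len(record) >= range_start:           # record extends into the range
        i = range_start - 1
        return record[:i] + string + record[i:]
    if len(record) == range_start - 1:       # record ends just before the range
        return record + string
    return record                            # shorter record: unmodified
```

With range_start 1, a null record satisfies the second branch, so it is replaced with the string, as the paragraph above states.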
When the keyword ANYCASE is specified, the characters in the first string and the input
record are compared in upper case to determine whether the specified string is present.
When the first string contains one or more upper case characters or contains no letters, the
second string is inserted in the output record without change of case; otherwise, an attempt
is made to preserve the case of the string being replaced. When the first string contains no
upper case letters and begins with one or more (lower case) letters, the following rules
determine the case of the replacement string:
When the first two characters of the replaced string in the input record are both lower
case, the replacement string is used without change.
When the first character of the replaced string in the input record is upper case and the
second one is lower case (or not a letter or the string is one character), the first letter
of the replacement string is upper cased.
When the first two characters of the replaced string in the input record are upper case,
the complete replacement string is upper cased.
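The three rules can be modelled as follows (Python sketch; matched is the string as found in the input record, replacement is the second string, and the first string is assumed all lower case and beginning with a letter):

```python
def replace_case(matched: str, replacement: str) -> str:
    """Case of the replacement under the ANYCASE preservation rules."""
    if not matched:
        return replacement
    if len(matched) >= 2 and matched[0].isupper() and matched[1].isupper():
        return replacement.upper()                         # both upper: upper case all
    if matched[0].isupper():                               # first upper, second lower/absent
        return replacement[:1].upper() + replacement[1:]   # capitalise first letter
    return replacement                                     # both lower: unchanged
```

The examples shown later (pipe to line, Pipe to Line, PiPe to Line, PIpe to LINE) follow directly from these branches.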
Streams Used: Records are read from the primary input stream; no other input stream
may be connected. Both changed and unchanged records are written to the primary output
stream when no secondary output stream is defined. When the secondary output stream is
defined, changed records are written to the primary output stream; unchanged records are
written to the secondary output stream.
Commit Level: change starts on commit level -2. It verifies that the secondary input
stream is not connected and then commits to level 0.
The first string must be entirely within a column range; adjacent ranges are not merged:
Caseless replacement:
pipe literal pipe | change anycase /pipe/line/ | console
line
Ready;
pipe literal Pipe | change anycase /pipe/line/ | console
Line
Ready;
pipe literal PiPe | change anycase /pipe/line/ | console
Line
Ready;
pipe literal PIpe | change anycase /pipe/line/ | console
LINE
Ready;
pipe literal PiPe | change anycase /Pipe/line/ | console
line
Ready;
To upper case all occurrences of the string “user”, irrespective of its case (for instance,
“User”):
... | change anycase /user/USER/ | ...
Notes:
1. The change string must be specified completely. XEDIT assumes a null second string
when the ending delimiters are omitted; CMS/TSO Pipelines does not.
2. change is similar to the CHANGE XEDIT subcommand, with the extension of multiple
ranges (XEDIT supports one range only).
: 3. insert and spec can be used to insert a string in all records; change with a null first
string inserts the second string only in records that contain the column before the first
column of the first range.
4. The default is to change all occurrences in the record; the XEDIT default is to change
one occurrence only. XEDIT change is case sensitive (even when XEDIT command
“case mixed ignore” has been issued); thus XEDIT does not support the function
provided by ANYCASE.
5. When the secondary output stream is defined, change works as a selection stage,
equivalent to locate for the first string. In this configuration, change discards output
records that are written to an unconnected output stream as long as the other output
stream is still connected; it terminates prematurely when both output streams are not
connected. Unlike a selection stage, change writes “unmatched” records to the
primary output stream when the secondary output stream is not defined.
! 6. CASEANY, CASEIGNORE, CASELESS, and IGNORECASE are all synonyms for ANYCASE.
7. Only one inputRange can be specified. When specifying a list of ranges in paren-
theses, the ranges must be old-fashioned column ranges, even when there is only one
range in the parentheses.
8. change cannot be cajoled into appending a string at the end of the record in general.
Use insert instead.
                ┌─80──────┐
──┬─CHOP─────┬──┼─────────┼──
! └─TRUNCate─┘  └─snumber─┘
──CHOP──┬─────────┬──┬─────────────────────────┬──┬─────┬──
        └─ANYCase─┘  │              ┌─BEFORE─┐ │  └─NOT─┘
                     └─┬─────────┬──┼────────┼─┘
                       └─snumber─┘  └─AFTER──┘
──┤ target ├──
target:
├──┬─xrange──────────────────────┬──┤
   └─┬─STRing─┬──delimitedString─┘
     └─ANYof──┘
Type: Filter.
Syntax Description: With no arguments, input records are truncated after column 80.
! With a single operand that is zero or positive, chop truncates the record after this column;
! the result of truncating after column zero is a null record. With a single negative operand,
! chop truncates the record after the column that is the sum of the record length and the
! negative operand; the result of truncating before column zero is a null record (that is, the
! record is shorter than the negative of the operand).
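A Python model of the numeric forms just described (1-based columns; illustrative only, chop itself is a built-in stage):

```python
def chop_at(record: str, n: int = 80) -> str:
    """chop with a numeric operand: truncate after column n;
    a negative n truncates after column len(record) + n."""
    col = n if n >= 0 else len(record) + n
    return record[:col] if col > 0 else ""   # at or before column zero: null record
```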
Use a hex range or a delimited string to truncate the record depending on its contents. A
hex range matches any character within the range. The keyword STRING followed by a
delimited string matches the string. The keyword ANYOF followed by a delimited string
matches any one character in the string. (The keyword is optional before a one character
string, because the effect is the same in either case.) The default is to truncate the record
before the first character matching the target.
The truncation column may be specified with an offset relative to the beginning or end of
the matching string or character. The offset may be negative.
Operation: A chop position is established in the input record. Data, if any, before the
chop position are written to the primary output stream; the remainder of the input record is
written to the secondary output stream (if it is defined and connected).
Streams Used: Records are read from the primary input stream; no other input stream
may be connected. Output is written to the primary output stream and the secondary
output stream.
Commit Level: chop starts on commit level -2. It verifies that the secondary input stream
is not connected and then commits to level 0.
Notes:
1. “chop not z | locate 1” discards all records without leading z(s) because chop
truncates before the first character that is not a z. This yields a null record, which is
discarded by locate.
2. Records are not padded.
3. The minimum abbreviation of ANYCASE is four characters because ANYOF takes
precedence (ANYOF can be abbreviated to three characters).
! 4. CASEANY, CASEIGNORE, CASELESS, and IGNORECASE are all synonyms for ANYCASE.
¡ Electronic Code Book (ECB) and Cipher Block Chaining (CBC) modes are supported.
¡¡ z/OS
¡ ──CIPHER──┬─AES──────┬──┬─ENCRYPT─┬──
¡           ├─BLOWFISH─┤  └─DECRYPT─┘
¡           ├─DES──────┤
¡           └─3DES─────┘
¡ ──┬────────────────────────────────────┬──
¡   └─CBC──┬───────────────────────────┬─┘
¡          │   (1)                     │
¡          ├─IV──────delimitedString───┤
¡          │   (2)                     │
¡          └─TOD───────────────────────┘
¡ Notes:
¡ 1 Not valid with Blowfish decrypt.
¡ Type: Filter.
¡ Syntax Description: The first word specifies the algorithm to be used. The second word
¡ specifies the “direction”.
¡ Operation:
¡ For Blowfish, the initial vector is encrypted and written to the output; the decrypting stage
¡ decrypts this initial record and writes the initialisation vector in plain text to the output.
¡ This first record would normally be discarded when decrypting.
¡ Key Handling: When no secondary streams are defined, the first record on the primary
¡ input stream is used as the key; there is then no provision for dynamic key change.
¡ When the secondary input stream is defined, the first record is read unconditionally from
¡ that stream and used as the initial key. The pipeline will stall if a record is presented on
¡ the primary input stream instead.
¡ The key is changed dynamically when a record that is not null is read from the secondary
¡ input.
¡ Input Record Format: Input records must have a length that is a multiple of the block
¡ size.
¡ Streams Used: Secondary streams may be defined. Records are written to the primary
¡ output stream; no other output stream may be connected. Null input records are discarded.
¡ Commit Level: cipher starts on commit level -2. It verifies that the primary output
¡ stream is the only connected output stream and then commits to level 0.
¡ Premature Termination: cipher terminates when it discovers that its primary output
¡ stream is not connected.
¡ cphrsamp
¡ Entry point CIPHER not found.
¡ ... Issued from stage 4 of pipeline 1 name "CPHRSAMP".
¡ ... Running "cipher des encrypt".
¡ 4 *-* 'PIPE (end \ name CPHRSAMP)',
¡         '\literal Now is the time for all', /* Trailing blank */
¡         '|xlate e2a',
¡         '|strliteral x0123456789abcdef', /* Key */
¡         '|cipher des encrypt',
¡         '|deblock 16',
¡         '|spec 1.4 c2x 1 5.4 c2x nw 9.4 c2x nw 13.4 c2x nw',
¡         '|cons'
¡ +++ RC(-27) +++
¡ Ready(-0027);
¡ Notes:
¡ 1. “Hardware instructions” should be taken to mean “Message-security Assist” and
¡ “Message-security Assist Extension 1” facilities. cipher specifically does not support
¡ Cryptographic coprocessors (“Integrated cryptographic facility”).
¡ 2. The CP assist feature must be installed and enabled for the underlying instructions to
¡ be available in a virtual machine.
¡ 3. The user should ensure that there is never a record present at the same time on both
¡ input streams, as this would lead to an indeterminate time of key change, which, in
¡ general, would make the enciphered text indecipherable.
¡ 4. When DES is specified and a 16-byte key is used, the first eight bytes should be
¡ different from the last eight bytes (if not, we have single DES); however, the triple DES
¡ standard specifies to use such a key to interoperate with single DES; thus, this is not
¡ enforced. The performance increase by downgrading to single DES is not exploited.
¡ 5. When BLOWFISH is specified, the initial S and P boxes cannot be changed from the
¡ default; see http://www.schneier.com/code/constants.txt
¡ 6. Use pad MODULO to pad records out to full cipher blocks.
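The pad MODULO operation of note 6 amounts to rounding each record length up to the next multiple of the block size (Python sketch; the zero pad byte is an arbitrary choice for this illustration):

```python
def pad_modulo(record: bytes, block: int, pad: bytes = b'\x00') -> bytes:
    """Extend a record so that its length is a multiple of the block size.
    -len(record) % block is the shortfall to the next multiple."""
    return record + pad * (-len(record) % block)
```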
¡ Key area. A null record is written when the key length (K) is zero in the count area.
¡ Data area. A null record is written when the data length (DD) is zero in the count
¡ area.
¡
¡ ──CKDDEBLOCK──
¡ Premature Termination: ckddeblock terminates when it discovers that its output stream is
¡ not connected.
¡ Notes:
¡ 1. The converse operation is join 2.
CMS
──CMS──┬────────┬──
       └─string─┘
Operation: The argument string (if present) and input lines are issued to CMS through the
CMS subcommand environment, as REXX does for the Address CMS instruction.
The response from the CMS commands is not written to the terminal. The response from
each command is buffered until the command ends and is then written to the primary
output stream. cms does not intercept CP-generated terminal output.
Each invocation of cms maintains a private CMSTYPE flag; this flag is initially set as it is
by SET CMSTYPE RT. If a command that is issued through a particular invocation of cms
issues SET CMSTYPE HT, subsequent command response lines that apply to the stage are
discarded until a SET CMSTYPE RT command is issued while the stage is running. The
HT/RT setting is preserved between commands.
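The private CMSTYPE flag behaves like a per-stage switch over the response stream. A simplified Python model (illustrative; the real stage intercepts the SET CMSTYPE command itself rather than seeing it as a response line):

```python
def filter_response(items):
    """Discard response lines between SET CMSTYPE HT and SET CMSTYPE RT.
    The flag starts as after SET CMSTYPE RT and persists across commands."""
    out, typing = [], True
    for item in items:
        if item == 'SET CMSTYPE HT':
            typing = False
        elif item == 'SET CMSTYPE RT':
            typing = True
        elif typing:
            out.append(item)
    return out
```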
When the secondary output stream is defined, the return code is written to this stream after
each command has been issued and the response has been written to the primary output
stream.
Streams Used: Records are read from the primary input stream; no other input stream
may be connected. Null and blank input records are discarded.
Record Delay: cms writes all output for an input record before consuming the input
record. When the secondary output stream is defined, the record containing the return
code is written to the secondary output stream with no delay.
Commit Level: cms starts on commit level -2. It verifies that the secondary input stream
is not connected and then commits to level 0.
Premature Termination: When the secondary output stream is not defined and cms
receives a negative return code on a command, it terminates. The corresponding input
record is not consumed. When the secondary output stream is defined, cms terminates as
soon as it discovers that this stream is not connected. If this is discovered while a record
is being written, the corresponding input record is not consumed.
Examples: To discard the service level information in the CMS version message:
pipe cms query cmslevel | chop , | console
CMS Level 22
Ready;
Notes:
1. Use subcom CMS to issue CMS commands without intercepting line mode output to the
terminal.
2. cms is not recommended to invoke applications that run in full screen mode, for
instance, XEDIT, because line mode console output is intercepted. Any line mode
output during the session (for instance, REXX error messages) is delayed until the
application completes.
3. cms does not capture some console output on releases prior to CMS 8. Suspend
FULLSCREEN to obtain the output. Try variations of the following command to see
whether output is captured in your system:
pipe cms nucxmap | count lines | console
If the response is a number, command output was intercepted; if the response is a list
of nucleus extensions, command output was not intercepted.
4. Do not issue the immediate commands HT and RT while a cms stage is dispatched; this
action cannot be distinguished from the SET CMSTYPE command.
5. Remember that REXX continuation functionally replaces a trailing comma with a blank.
Also recall that when two strings are separated by one or more blanks, REXX concat-
enates them with a single blank. Use the concatenation operator (||) before the
comma at the end of the line if your portrait style has the stage separators at the left
side of the stage and the trailing blank is significant to your application.
Return Codes: When a secondary output stream is not defined and a negative return
code is received on a command, the return code from cms is that negative return code.
When a secondary output stream is not defined and the return code is zero or positive, all
input records have been processed; the return code is the maximum of the return codes
received. When the secondary output stream is defined, the return code is zero unless an
error is detected by cms.
collate—Collate Streams
collate compares two input streams containing master records and detail records.
Depending on the contents of a key field in the records, input records are passed to one of
three output streams (if connected) or discarded.
                             ┌─NOPAD─────┐
──COLLATE──┬──────────────┬──┼───────────┼──┬─────────┬──
           └─STOP──ANYEOF─┘  └─PAD──xorc─┘  └─ANYcase─┘
                                  ┌─MASTER──DETAIL─────┐
──┬────────────────────────────┬──┼────────────────────┼──
  └─inputRange──┬────────────┬─┘  ├─MASTER─────────────┤
                └─inputRange─┘    └─DETAIL──┬────────┬─┘
                                            └─MASTER─┘
Type: Sorter.
Syntax Description: The keyword NOPAD specifies that key fields that are partially
present must have the same length to be considered equal; this is the default. The
keyword PAD specifies a pad character that is used to extend the shorter of two key fields.
The keyword ANYCASE specifies that case is to be ignored when comparing fields; the
default is to respect case.
Two input ranges are optional; the default is the complete record. The first input range
defines the key on the primary input stream; the second input range defines the key on the
secondary input stream. When both ranges are specified, any WORDSEPARATOR or
FIELDSEPARATOR specified for the first input range applies to the second input range as
well, unless specified again. A single input range applies to both input streams.
Two keywords are optional to define the sequence of records on the primary output stream.
Operation: The master file is read from the primary input stream; detail records are read
from the secondary input stream. Both files should be ordered ascending by their keys.
The master file should only have one record for each key.
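A simplified Python model of this routing (it builds lists rather than streaming, and for matched keys it emits the master before each detail, assuming the default MASTER DETAIL order; the real stage processes both inputs incrementally):

```python
def collate(masters, details, key=lambda r: r.split()[0]):
    """Classify records the way collate routes them to its three
    output streams.  Both inputs are assumed ordered by key, with
    at most one master record per key."""
    by_key = {key(m): m for m in masters}
    matched, unmatched_details, seen = [], [], set()
    for d in details:
        k = key(d)
        if k in by_key:
            matched.extend([by_key[k], d])   # master, then its detail
            seen.add(k)
        else:
            unmatched_details.append(d)
    unmatched_masters = [m for m in masters if key(m) not in seen]
    return matched, unmatched_masters, unmatched_details
```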
Streams Used: Two or three streams may be defined. If it is defined, the tertiary input
stream must not be connected. Records are read from the primary input stream and
secondary input stream. Records are written to all defined output streams.
¡ Unless STOP ANYEOF is specified, the primary input stream is shorted to the secondary
output stream when the secondary input stream reaches end-of-file and the secondary input
stream is shorted to the tertiary output stream when the primary input stream reaches end-
¡ of-file. When STOP ANYEOF is specified collate terminates as soon as it senses end-of-file
¡ on either input stream. The other stream is left unconsumed.
                    ┌──────────┐
Master records ─────┤          ├───── Matched records
                    │          │
Detail records ─────┤          ├───── Unmatched masters
                    │          │
                    │          ├───── Unmatched details
                    └──────────┘
Commit Level: collate starts on commit level -2. It verifies that the tertiary input stream
is not connected and then commits to level 0.
Examples: Assuming that the file STOP WORDS contains an ordered list of words to
suppress, this subroutine pipeline suppresses such words from the caller’s input stream:
/* Do stop words */
'callpipe (end ?) :* | split | sort unique | c: collate',
'?< stop words | c: | *:'
To include master records that have no corresponding detail records in the primary output
stream:
/* Allow unreferenced master records */
'callpipe (end ?)',
'*.input.0: | c: collate' arg(1) '| i: faninany | *.output.0:',
'?*.input.1: | c: | i:',
'? c: | *.output.1:'
Exit RC
        c:             i:
     ┌────────┐     ┌────────┐
─────┤collate ├─────┤faninany├───────────
     │        │     │        │
     │        │     │        │
─────┤        ├─────┤        │      ┌─────
     │        │     └────────┘      │
     │        │                     │
     │        ├─────────────────────┘
     └────────┘
To prefix a field from the corresponding master record to each detail record and discard
unmatched master records, writing unmatched detail records to the secondary output
stream:
Notes:
1. Unless ANYCASE is specified, key fields are compared as character data using the IBM
System/360 collating sequence. Use spec (or a REXX program) to put a sort key first
in the record if you wish, for instance, to use a numeric field that is not aligned to the
right within a column range. Use xlate to change the collating sequence of the file.
! 2. CASEANY, CASEIGNORE, CASELESS, and IGNORECASE are all synonyms for ANYCASE.
3. collate supports only one key field (unlike sort). Use spec to gather several key fields
into one in front of the original record.
When only one input stream is defined, combine combines runs of records on this stream.
When two input streams are defined, combine combines pairs of records, one from each
stream.
──COMBINE──┬───────────────────────┬──┬─Or──────────┬──
           │ ┌─1─────────────────┐ │  ├─aNd─────────┤
           ├─┼───────────────────┼─┤  ├─eXclusiveor─┤
           │ ├─number────────────┤ │  ├─FIRST───────┤
           │ ├─*─────────────────┤ │  └─LAST────────┘
           │ └─KEYLENgth──number─┘ │
           │ ┌─STOP──ALLEOF─────┐  │
           └─┼──────────────────┼──┘
             └─STOP──┬─ANYEOF─┬─┘
                     └─number─┘
Type: Filter.
Syntax Description: When only one input stream is defined, a number is optional; it
specifies how many records to combine with the first one in a range. (That is, one less
than the number of records in a range.) The default is to combine two input records (as if
the number 1 were specified). Specify an asterisk to combine all input records into one
output record.
KEYLENGTH specifies that runs of records that contain the same key in the first n columns
are combined. (n is the number specified after the keyword.)
When two input streams are defined, a number or KEYLENGTH is rejected. The keyword
STOP specifies when combine should terminate. ALLEOF, the default, specifies that combine
should continue as long as at least one input stream is connected. ANYEOF specifies that
combine should stop as soon as it determines that an input stream is no longer connected.
A number specifies the number of unconnected streams that will cause combine to termi-
nate. The number 1 is equivalent to ANYEOF.
LAST The contents of an output column are taken from the last record in the range
that contains the particular column.
Operation: combine supports a single input stream and two input streams.
When a single input stream is defined, runs of records are combined. When two input
streams are defined, a record from the primary input stream is combined with a record
from the secondary input stream.
Records are combined as follows: A buffer to build the output record is made empty.
Each input record has two (possibly null) parts, the part that corresponds to buffer posi-
tions that have been filled by previous records in the range, and the part by which the
record exceeds the contents of the buffer so far. The latter part is appended to the contents
of the buffer; the first part, if any, is processed according to the particular function
requested. The contents of the buffer are written to the output when all records in the
range have been processed or when end-of-file is met.
When KEYLENGTH is specified, the key is left unchanged; only positions beyond the key
are combined. When there is only one record with a particular key, it is copied unchanged
to the output.
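The buffer algorithm can be sketched in Python (an illustrative model, not the built-in implementation; the operator names mirror the keywords):

```python
def combine(records, op):
    """combine's buffer algorithm: the part of each record that overlaps
    the buffer is combined byte by byte with op; the part by which either
    exceeds the other is copied unchanged."""
    buf = b""
    for rec in records:
        n = min(len(buf), len(rec))
        head = bytes(op(a, b) for a, b in zip(buf, rec))
        buf = head + buf[n:] + rec[n:]    # at most one of the tails is non-empty
    return buf

OR = lambda a, b: a | b
XOR = lambda a, b: a ^ b
FIRST = lambda a, b: a      # keep what is already in the buffer
LAST = lambda a, b: b       # take the later record's byte
```

The LAST test below matches the description above: each output column is taken from the last record in the run that contains that column.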
Record Delay: When combine combines records from two input streams, it strictly does
not delay the record.
When it combines records from the primary input stream, combine writes the output record
before it consumes the last record in a run of combined records. Thus, the last record of a
run is not delayed; records before the last one are discarded.
Premature Termination: combine terminates when it discovers that its primary output
stream is not connected.
Examples: Given two variables, before and after, determine how many columns contain
the same character in the two records (assuming they do not contain X'00'):
/* Now see how many */
'callpipe (name COMBINE)',
'|var before',
'|append var after',
'|combine x',
'|xlate *-* 01-ff 1 00 0', /* byte-map of inequality */
'|deblock 1',
'|sort count',
'|...
Assuming that the two input records are the same length, each byte of the output record
from combine contains the exclusive OR of the bytes in the corresponding positions of the
input records. A byte that contains X'00' indicates that the two input bytes were equal; a
nonzero value shows the bit differences. The xlate stage maps a byte of zeros to the
character “0” and the 255 other possible values to the character “1”. Thus, at the output
from xlate, each position of the record contains a zero if the input records were equal and
a one if they were not. deblock writes a record for each character in its input record and
sort COUNT computes the distribution.
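The same comparison expressed in Python terms (an illustrative re-expression of the pipeline, with Counter standing in for sort COUNT):

```python
from collections import Counter

def equality_map(before: str, after: str) -> Counter:
    """Map each column to '0' (bytes equal, XOR gives zero) or '1'
    (bytes differ), then tally, as the xlate, deblock 1, and
    sort count stages do in the pipeline."""
    assert len(before) == len(after)       # records assumed the same length
    bits = ['0' if b == a else '1' for b, a in zip(before, after)]
    return Counter(bits)
```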
Notes:
1. The option O can be written in full: OR. AND is a synonym for N. EXCLUSIVEOR is a
synonym for X; it can be abbreviated down to three characters.
2. Input records of different lengths are not padded; rather, the last part of the longer
record is copied to the output record without modification.
CMS
──┬─COMMAND─┬──┬────────┬──
  └─CMD─────┘  └─string─┘
Operation: The argument string (if present) and input lines are issued as CMS commands
using program call with an extended parameter list, as REXX does for the Address
command instruction.
The command is passed to CMS using an SVC 202 (CMSCALL on VM/XA* System Product
and Virtual Machine/Enterprise Systems Architecture) with a call flag byte of 1 indicating
that an extended parameter list is present (but not command call). The argument string
and input lines should be in upper case unless you wish to manipulate objects with mixed
case names.
The response from the CMS commands is not written to the terminal. The response from
each command is buffered until the command ends and is then written to the primary
output stream. command does not intercept CP-generated terminal output.
Each invocation of command maintains a private CMSTYPE flag; this flag is initially set as
it is by SET CMSTYPE RT. If a command that is issued through a particular invocation of
command issues SET CMSTYPE HT, subsequent command response lines that apply to the
stage are discarded until a SET CMSTYPE RT command is issued while the stage is running.
The HT/RT setting is preserved between commands.
When the secondary output stream is defined, the return code is written to this stream after
each command has been issued and the response has been written to the primary output
stream.
Streams Used: Records are read from the primary input stream; no other input stream
may be connected. Null and blank input records are discarded.
Record Delay: command writes all output for an input record before consuming the input
record. When the secondary output stream is defined, the record containing the return
code is written to the secondary output stream with no delay.
Commit Level: command starts on commit level -2. It verifies that the secondary input
stream is not connected and then commits to level 0.
Premature Termination: When the secondary output stream is not defined and command
receives a negative return code on a command, it terminates. The corresponding input
record is not consumed. When the secondary output stream is defined, command termi-
nates as soon as it discovers that this stream is not connected. If this is discovered while a
record is being written, the corresponding input record is not consumed.
Examples: To discard the service level information in the CMS version message:
/* Show CMS version without service level */
pipe command QUERY CMSLEVEL | chop , | console
CMS Level 22
Ready;
command is useful to manipulate file objects with names in mixed case. To erase the file
“mIxEd CaSe”:
pipe command ERASE mIxEd CaSe A | console
Notes:
1. Use subcom CMS to issue CMS commands without intercepting line mode output to the
terminal. Use cms to issue CMS commands with full command resolution.
2. command is not recommended to invoke applications that run in full screen mode, for
instance, XEDIT, because line mode console output is intercepted. Any line mode
output during the session (for instance, REXX error messages) is delayed until the
application completes.
3. command does not capture some console output on releases prior to CMS 8. Suspend
FULLSCREEN to obtain the output. Try variations of the following command to see
whether output is captured in your system:
pipe command NUCXMAP | count lines | console
If the response is a number, command output was intercepted; if the response is a list
of nucleus extensions, command output was not intercepted.
4. Do not issue the immediate commands HT and RT while a command stage is
dispatched; this action cannot be distinguished from the SET CMSTYPE command.
5. Remember that REXX continuation functionally replaces a trailing comma with a blank.
Also recall that when two strings are separated by one or more blanks, REXX concat-
enates them with a single blank. Use the concatenation operator (||) before the
comma at the end of the line if your portrait style has the stage separators at the left
side of the stage and the trailing blank is significant to your application.
Return Codes: When a secondary output stream is not defined and a negative return
code is received on a command, the return code from command is that negative return
code. When a secondary output stream is not defined and the return code is zero or posi-
tive, all input records have been processed; the return code is the maximum of the return
codes received. When the secondary output stream is defined, the return code is zero
unless an error is detected by command.
z/OS
──┬─COMMAND─┬──┬────────┬──
  └─CMD─────┘  └─string─┘
Operation: The argument string (if present) and input lines are passed to the TSO service
routine to be issued as commands. A return code from this service routine indicating that
the command does not exist is recoded as return code -3 from the command; other errors
from the service routine cause processing to terminate with an error message.
When the secondary output stream is defined, the return code is written to this stream after
each command has been issued.
Streams Used: Records are read from the primary input stream; no other input stream
may be connected. Null and blank input records are discarded. command does not write
to the primary output stream. If the secondary output stream is defined, the return code is
written to it.
Record Delay: When the secondary output stream is defined, the record containing the
return code is written to the secondary output stream with no delay.
Commit Level: command starts on commit level -2. It verifies that the secondary input
stream is not connected and then commits to level 0.
Premature Termination: When the secondary output stream is not defined and command
receives a negative return code on a command, it terminates. The corresponding input
record is not consumed. When the secondary output stream is defined, command termi-
nates as soon as it discovers that this stream is not connected. If this is discovered while a
record is being written, the corresponding input record is not consumed.
Examples: To issue a command from a REXX filter (which is not merged with the TSO
environment and therefore has no ability to Address TSO):
/* Now do the command */
'callpipe var command | command'
Notes:
1. Use tso to issue TSO commands and write the response to the pipeline for further proc-
essing.
2. command issues GCS commands on GCS.
3. Remember that REXX continuation functionally replaces a trailing comma with a blank.
Also recall that when two strings are separated by one or more blanks, REXX concat-
enates them with a single blank. Use the concatenation operator (||) before the
comma at the end of the line if your portrait style has the stage separators at the left
side of the stage and the trailing blank is significant to your application.
Return Codes: When a secondary output stream is not defined and a negative return
code is received on a command, the return code from command is that negative return
code. When a secondary output stream is not defined and the return code is zero or posi-
tive, all input records have been processed; the return code is the maximum of the return
codes received. When the secondary output stream is defined, the return code is zero
unless an error is detected by command.
When configure is first in a pipeline, it writes to the output stream the value of all
configuration variables that have been set.
When configure is not first in a pipeline, its input records contain the name and optionally
the new value for configuration variables. configure updates the variable (if a second word
is specified) and then writes the value of the specified variable to the output stream.
──CONFIGURE──
Input Record Format: Each record may contain one or two words. The first word is the
name of the configuration variable. Case is ignored in variable names.
If the second word is present, it contains a new value for the specified configuration vari-
able. Case is ignored in keywords; values are made upper case.
Streams Used: Records are read from the primary input stream and written to the primary
output stream. Null and blank input records are discarded.
Examples:
pipe literal style | configure | console
STYLE PIP
Ready;
Notes:
1. Refer to Chapter 28, “Configuring CMS/TSO Pipelines” on page 839 for a list of
configuration variables.
──┬─CONSole──┬──┬──────────────────────┬──┬────────────────┬──
└─TERMinal─┘ ├─EOF──delimitedString─┤ ├─DIRECT─────────┤
└─NOEOF────────────────┘ ├─ASYNchronously─┤
└─DARK───────────┘
EOF specifies a delimited string; end-of-file is signalled when this string is entered (with
leading or trailing blanks, or both). NOEOF specifies that input data are not inspected for
an end-of-file indication; console stops only when it finds that its output stream is not
connected. The null string signals end-of-file if neither of these keywords is specified.
A second type of keyword is supported only on CMS. It specifies the interface to be used
when reading from the terminal. If the keyword is omitted, a normal CMS terminal read is
performed; the program stack and the console queue are emptied before CMS reads from
the terminal, at which time a VM READ is put up on the virtual machine console.
When one of the second type of keywords is specified, console performs a direct read; that
is, console reads directly from the terminal. The program stack and the console queue are
bypassed.
Operation: When console is first in a pipeline, lines are read from the terminal until a
line is read that is equal to the delimited string specified with EOF (by default the null
string).
When console is not first in a pipeline, lines from the primary input stream are written to
the terminal of the virtual machine, and copied to the primary output stream, if connected.
When console is in a pipeline set that has been issued under control of runpipe EVENTS,
console signals console read or write events, as appropriate, instead of accessing host inter-
faces.
Streams Used: When console is first in a pipeline (it is reading from the terminal),
records are written to the primary output stream. When console is not first in a pipeline (it
is writing to the terminal), records are read from the primary input stream and copied to
the primary output stream, if connected.
Examples: To read lines directly from the terminal into the stack without going into a
loop:
pipe console direct | stack
Notes:
1. Only one console stage should be used to read from the console at any time.
2. console ASYNCHRONOUSLY should be used with caution; it is not possible to enter
immediate CMS commands while it is waiting for terminal input.
3. One console ASYNCHRONOUSLY stage can read from the terminal at a time; a subse-
quent one stacks the current reading stage, which resumes control of the terminal
when the new stage terminates.
4. On z/OS, GETLINE and PUTLINE macros are used to read and write the TSO terminal.
TGET and TPUT are used when the pipeline is started with the CALL command or refer-
enced with PGM= in an EXEC job control statement and a TSO environment is active.
Write to programmer is used as a last resort.
5. Lines written to the terminal are truncated to fit the particular interface on z/OS. No
truncation is required for CMS. (But CP may truncate the lines written to the console
SPOOL.)
6. Use the CMS command PIPMOD STOP or send a record into the pipestop stage to termi-
nate console ASYNCHRONOUSLY while it is waiting for an attention interrupt. Note that
you cannot enter immediate commands from the terminal while console ASYNCHRO-
NOUSLY is running; the command must be generated in the pipeline.
7. On input, CMS has performed the SET INPUT translation on the lines typed at the
console before console reads them and writes them to the pipeline. Likewise, CMS
does SET OUTPUT translation after console writes the line to the terminal.
8. terminal is a synonym for console. INVISIBLE is a synonym for DARK.
9. On CMS 5.5 and later releases, input is truncated after 1024 characters. Note that the
stack still truncates at 255. Thus, longer lines must be typed at the terminal (and the
terminal must have a suitably long input area).
──COPY──
Operation: Each input record is read into a buffer. The input record is consumed before
the contents of the buffer are written to the output.
Premature Termination: copy terminates when it discovers that its output stream is not
connected.
Examples: To hold the output on the primary output stream from chop while the balance
of the input record is being written to the secondary output stream, and the two pieces are
to be reunited in a later stage:
/* Uppercase label: */
'PIPE (end ? name COPY)',
... ,
'|c: chop before blank',
'| xlate upper',
'| copy',
'|s: spec 1-* 1 select 1 1-* next',
... ,
'?c:',
'|s:'
Records that have a leading blank (that is, assembler statements that have no label field)
are passed in their entirety to the secondary output stream after a null record has been
written to the primary output stream.
          ┌─────────────┐
──COUNT───┬─CHARACTErs─┬┴──
          ├─WORDS──────┤
          ├─LINES──────┤
          ├─MINline────┤
          └─MAXline────┘
Type: Filter.
Syntax Description: Specify what to count; you can specify up to five keywords in any
order.
Operation: Input records are read and counters are updated; the counters are 56 bits
ignoring overflow. Bytes and lines are counted without reference to the contents of the
input record; count WORDS references the storage area holding the input record.
When there are no input records, the shortest record is reported as infinity (2G-1, because
this is the longest possible record); the longest record is reported as null.
Output Record Format: A record is built with the result when the primary input stream
reaches end-of-file. Irrespective of the order of the options, this record has a number for
each specified counting option in the order characters, words, lines, minimum record
length, and maximum record length. There is one blank between numbers.
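The fixed result order can be illustrated with a small Python sketch (illustrative only; the function name and the Python rendering are not part of CMS/TSO Pipelines, and rendering the empty-file maximum as 0 is this sketch's reading of "null"):

```python
def count(records, options=("CHARACTERS", "WORDS", "LINES")):
    """Result record: one number per requested option, always in the
    order characters, words, lines, minline, maxline, blank-separated."""
    chars = sum(len(r) for r in records)
    words = sum(len(r.split()) for r in records)
    lines = len(records)
    # With no input, MINLINE reports 2G-1 (the longest possible record);
    # MAXLINE is rendered here as 0.
    minline = min((len(r) for r in records), default=2**31 - 1)
    maxline = max((len(r) for r in records), default=0)
    values = {"CHARACTERS": chars, "WORDS": words, "LINES": lines,
              "MINLINE": minline, "MAXLINE": maxline}
    order = ("CHARACTERS", "WORDS", "LINES", "MINLINE", "MAXLINE")
    return " ".join(str(values[o]) for o in order if o in options)
```

Note that the options given in a different order still produce the result numbers in the fixed order.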
Streams Used: Records are read from the primary input stream; no other input stream
may be connected. When the secondary output stream is not defined, input records are
discarded; the record containing the counts is written to the primary output stream. When
the secondary output stream is defined, the input records are copied to the primary output
stream; the primary output stream is severed at end-of-file on the primary input stream;
and the record containing the counts is then written to the secondary output stream.
Record Delay: When the secondary output stream is defined, count does not delay the
records that are copied to the primary output stream.
Premature Termination: If the secondary output stream is defined, count terminates when
the primary output stream becomes not connected; the counts are written to the secondary
output stream; the record that receives end-of-file on the primary output stream is not
included in the counts. Thus, the counts on the secondary output stream reflect the amount
of data consumed by the stage connected to the primary output stream.
Examples: To count both the number of words and the number of unique words:
/* Special counting */
'PIPE (end ?)|',
'< profile exec|',
'c: count words|',
'split|',
'sort unique|',
'count lines|',
'spec 1-* 1 , unique words., next|',
'console',
'?c: |',
'spec 1-* 1 , words total., next|',
'console'
Notes:
1. RECORDS is a synonym for LINES. CHARS and BYTES are synonyms for CHARACTERS.
CMS
      ┌─8192───┐
──CP──┼────────┼──┬────────┬──
      └─number─┘  └─string─┘
Syntax Description: A number and a string are optional. If the first word of the argu-
ment string is a number, it specifies the required size of the response buffer; the default is
8192. Only a number is allowed when the secondary output stream is defined.
Operation: The argument string (after the number, if present) and input lines are issued
as CP commands through the extended diagnose 8 interface. cp terminates with an error
message if a command is longer than the 240 bytes supported by CP.
The first blank-delimited word of each command is inspected. If it is different from its
upper case translation, the entire command is translated to upper case before it is issued to
CP.
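The case rule above can be sketched in Python (the function name is hypothetical; only the rule itself comes from this description):

```python
def prepare_cp_command(cmd: str) -> str:
    """Sketch of cp's case rule: inspect the first blank-delimited
    word; if it is not already in upper case, translate the whole
    command to upper case before issuing it to CP."""
    words = cmd.split(None, 1)
    first = words[0] if words else ""
    return cmd.upper() if first != first.upper() else cmd
```

This is why writing the verb in upper case (see note 1 below) preserves mixed-case arguments.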
When the length of the response buffer is not specified and the command issued is QUERY
(or an abbreviation thereof, down to one character), the buffer is extended dynamically to
accept the complete command response. In other cases, there is no indication of error
when CP truncates the response because the buffer is too small, unless there is a secondary
output stream defined.
When the secondary output stream is defined, the return code is written to this stream after
each command has been issued and the response has been written to the primary output
stream. The return code has a leading plus sign if the command response was truncated.
Streams Used: Records are read from the primary input stream; no other input stream
may be connected. Null and blank input records are discarded. Secondary streams may be
defined.
Record Delay: cp writes all output for an input record before consuming the input record.
When the secondary output stream is defined, the record containing the return code is
written before the corresponding input record is consumed.
Commit Level: cp starts on commit level -2. It verifies that the secondary input stream is
not connected and then commits to level 0.
Examples: cp allocates a sufficiently large buffer to accommodate whatever reply you
will receive to a query.
To transfer all reader files to another user, making certain that all 9999 possible SPOOL
files can be transferred and the full response still be captured:
pipe cp 999900 transfer rdr all to someuser | ...
Notes:
1. Write the CP command verb in upper case to avoid translation of the arguments to
upper case when using cp to issue commands with mixed case arguments. For
example:
pipe cp MSG OSCAR Hi, there...
2. Use a buffer size operand to issue queries that have side effects (if any exist). This
ensures that the command is issued once only.
3. Remember that REXX continuation functionally replaces a trailing comma with a blank.
Also recall that when two strings are separated by one or more blanks, REXX concat-
enates them with a single blank. Use the concatenation operator (||) before the
comma at the end of the line if your portrait style has the stage separators at the left
side of the stage and the trailing blank is significant to your application.
Return Codes: When no secondary output stream is defined and cp terminates because CP
does not recognise a command, the return code is 1 (irrespective of return codes from
other commands). When no secondary output stream is defined and the return code is not
1, all input records have been processed; the return code is the maximum of the return
codes received from CP. When the secondary output stream is defined, the return code is
zero unless cp detects an error.
!          ┌─────────────────────┐   ┌─CRC-32─────┐
! ──CRC────┬────────────────────┬┴──┼────────────┼──
!          ├─APPEND─────────────┤   ├─CRC-16─────┤
!          ├─EACH──┬──────────┬─┤   ├─CRC-16I────┤
!          │       └─CRCFIRST─┘ │   ├─CCITT-16───┤
!          └─ADDLENgth──────────┘   ├─CKSUM──────┤
!                                   └─┤ Custom ├─┘
!
! Custom:
!                             ┌─────────────────────┐
! ├──┬─16-BIT─┬──hexString────┬────────────────────┬┴──┤
!    └─32-BIT─┘               ├─ADDLENgth──────────┤
!                             ├─COMPLEMENT─────────┤
!                             ├─PRELOAD──hexString─┤
!                             ├─REFLIN─────────────┤
!                             ├─REFLOUT────────────┤
!                             └─XOROUT──hexString──┘
! Type: Filter.
! Syntax Description: The operands to crc are of two types: general flags, and operands
! that specify the algorithm.
!! APPEND Pass the input to the primary output. Write the CRC as a separate final
! record. Secondary streams are not allowed.
!! EACH Write the CRC for each record and reset to initial conditions.
!! CRCFIRST Reverse the order of writing the message and the CRC
!! ADDLENGTH Logically append the number of bytes in the record or file to the data
! being checksummed. Only the significant bytes of this count are
! included. The length is processed with the rightmost byte first.
! Algorithmic operands are specified with 16-BIT or 32-BIT, one of which is required,
! followed by a word in hexadecimal that specifies the polynomial, followed by the
! remaining operands, which are all optional.
! A number of “canned” CRC algorithms may be selected, but they cannot be modified by
! further options. Their parameters are listed in Figure 387 below.
! AUTODIN2 is a synonym for CRC-32 that hints at the heritage of the polynomial.
! Operation: When a single output stream is defined, the default is to read the entire file,
! compute the CRC, and write a single record at end-of-file. The record contains two, four,
! or twelve bytes of binary data, depending on the parameters specified. When the
! secondary output stream is defined, the input is passed to the primary output without being
! delayed; the CRC is written to the secondary at end-of-file unless EACH is specified, in
! which case a CRC is computed for each input record. When EACH is specified without
! CRCFIRST, the primary output stream is written before the secondary output stream; the
! order is reversed when CRCFIRST is specified.
! A CRC record of twelve bytes is produced by the canned crc CKSUM, but only when using
! the canned version; specifying ADDLENGTH does not add this field. This record contains
! four bytes CRC followed by eight bytes binary count of the number of bytes included in the
! CRC.
! Streams Used: Secondary streams may be defined. Records are read from the primary
! input stream; no other input stream may be connected.
! Commit Level: crc starts on commit level -2. It verifies that the primary input stream is
! the only connected input stream and then commits to level 0.
! Premature Termination: crc terminates when it discovers that any of its output streams is
! not connected.
! Examples:
! pipe literal abc|crc cksum|spec 1-* c2x 1 | console
! 97B7E3490000000000000003
! Ready;
! pipe literal abc|xlate e2a|crc cksum|spec 1-* c2x 1 | console
! 48AA78A20000000000000003
! Ready;
! pipe literal abc|xlate e2a|crc cksum|spec 1.4 c2d 1 | console
! 1219131554
! Ready;
! On an ASCII Linux system:
! j /home/john: echo -e -n abc|cksum
! 1219131554 3
! With
! CCJOHN:/home/ccjohn: >echo -e -n abc|cksum
! 1073619496 10
! The length 10 indicates that the flags are not respected and that a line end is appended (as
! it should be when -n is omitted).
! Notes:
! 1. crc CKSUM interoperates with the POSIX cksum command, which produces the equiv-
! alent of
! crc 32-bit 04c11db7 complement addlength
! Note, however, that the UNIX command, when issued on an ASCII system, produces a
! different CRC than does CMS/TSO Pipelines, because the input data are not the same
! (X'414243' versus X'818283'). Also note that the POSIX cksum produces two
! unsigned decimal numbers (the second is the byte count) whereas crc in general
! produces a 2-byte or 4-byte binary number; crc CKSUM produces a 12-byte binary
! number.
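Note 1's equivalence can be checked with a short Python rendering of the POSIX cksum algorithm (an illustrative sketch, not part of CMS/TSO Pipelines); on ASCII input it reproduces the 1219131554 shown in the examples above:

```python
def cksum(data: bytes) -> int:
    """POSIX cksum: CRC-32 with polynomial 0x04C11DB7, not reflected,
    initial value 0, the significant bytes of the length appended
    rightmost byte first, and the result complemented -- the manual's
    crc 32-bit 04c11db7 complement addlength."""
    poly = 0x04C11DB7
    crc = 0

    def feed(byte):
        nonlocal crc
        crc ^= byte << 24
        for _ in range(8):
            crc = ((crc << 1) ^ poly if crc & 0x80000000
                   else crc << 1) & 0xFFFFFFFF

    for b in data:
        feed(b)
    n = len(data)
    while n:                    # only the significant bytes of the count
        feed(n & 0xFF)
        n >>= 8
    return crc ^ 0xFFFFFFFF     # ones-complement the register
```

The value for the ASCII bytes of "abc" matches both the Linux cksum output and the c2d conversion shown in the examples.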
! Publications:
! Ross Williams: A PAINLESS GUIDE TO CRC ERROR DETECTION ALGORITHMS.
! CRC32: Ethernet specifications (Xerox, DEC, Intel), September 30 1980.
! CRC16: IBM form number GA22-6844-4 (IBM 2701 Data Adapter Unit OEM).
! CKSUM:
! http://www.opengroup.org/onlinepubs/009695399/utilities/cksum.html
──C14TO38──┬──────────────────────┬──
           │ ┌──────────────────┐ │
           └─┴─xorc──xorc──xorc─┴─┘
Syntax Description: The argument string can have up to 255 conversion triplets. Each
triplet consists of three blank-delimited words, each of which can be a single character or a
two-character hexadecimal representation of a character. The default is to convert the
1403 box characters, generated by Document Composition Facility (SCRIPT/VS), to 3800
code points.
Operation: Input records are copied to the output until a record is met with a write no
space operation code (X'01').
c14to38 tries to merge a record having write no space carriage control with the following
record. Each position in the two records is inspected; if the characters are the first two of
a triplet (in either order), a blank is stored in the first record and the third character of the
triplet is stored in the second record. If the character in the first record is not blank and
the character in the second record is blank, the two characters are swapped. The first
record is discarded when it is blank from column 2 to the end; if not, it is written to the
output. If the second record has X'01' carriage control, it is then merged with the next
one, and so on.
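The per-column merge rule can be sketched in Python (an illustrative model that ignores the carriage control byte; the triplet used below is hypothetical, not one of the default 1403 box characters):

```python
def merge(first: str, second: str, triplets):
    """Merge a 'write no space' record with the following record.
    triplets: iterable of (c1, c2, replacement) conversion triplets."""
    pairs = {}
    for c1, c2, rep in triplets:
        pairs[(c1, c2)] = rep    # the first two characters of a triplet
        pairs[(c2, c1)] = rep    # match in either order
    width = max(len(first), len(second))
    f = list(first.ljust(width))
    s = list(second.ljust(width))
    for i in range(width):
        a, b = f[i], s[i]
        if (a, b) in pairs:
            f[i], s[i] = " ", pairs[(a, b)]   # blank first, store third
        elif a != " " and b == " ":
            f[i], s[i] = " ", a               # swap into surviving record
    return "".join(f), "".join(s)
```

In the stage itself, the first record is then discarded when it is blank from column 2 onwards.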
Input Record Format: The first position of the record is a machine carriage control char-
acter.
Streams Used: Records are read from the primary input stream and written to the primary
output stream. Null input records are discarded.
Record Delay: c14to38 delays or discards records that contain X'01' in column 1. It
does not delay other records.
Premature Termination: c14to38 terminates when it discovers that its output stream is
not connected.
Examples: To print a document formatted for an IBM 1403 on an IBM 3800 printer or
an all points addressable (APA) printer under control of Print Services Facility (PSF):
cp spool 00e fcb s8 char it12 ib12
cp tag dev 00e mvs system 0 OPTCD=J
< $doc script | c14to38 | overstr | optcdj | printmc
cp close 00e
Notes:
1. The output is a 1403 type data stream as far as carriage control is concerned; no TRC
is added.
──DAM──
Type: Gateway.
Operation: dam waits for the first record to arrive on its primary input stream.
¡ When the first record arrives on the primary input stream, dam shorts all input streams to
the corresponding output stream (the dam bursts). The streams are shorted in numerical
order. dam then terminates.
When end-of-file arrives on the primary input stream (that is, the primary input stream is
empty), dam terminates without copying any records and without consuming any.
¡ Streams Used: All streams are shorted if a record arrives on the primary input stream.
¡ No records are passed otherwise.
dam ignores end-of-file on its primary output stream; it propagates end-of-file between the
two sides of streams 1 and higher.
Commit Level: dam starts on commit level -2. It allocates the resources it needs and
then commits to level 0.
Examples:
Notes:
1. Though dam does not delay the record, it does not produce output until it has sensed a
record on its primary input stream. You are likely to need something upstream on the
secondary input stream to hold the records while the decision is being made about
processing the records.
This article is a synopsis; refer to z/VM: CMS Pipelines Reference, SC24-6076, for further
information.
CMS
──DATECONVERT──┬────────────┬──
└─inputRange─┘
──┬────────────────────────────────────┬──┬──────────────────┬──
  └─┤ dateformat ├──┬────────────────┬─┘  ├─WINDOW──snumber──┤
                    └─┤ dateformat ├─┘    └─BASEYEAR──number─┘
Type: Filter.
SHORTDATE mm/dd/yy
FULLDATE mm/dd/yyyy
ISODATE yyyy-mm-dd
PIPE yyyymmdd
EUR dd.mm.yyyy
WINDOW A sliding window is used for date conversion, based on the year in
which the conversion is performed.
BASEYEAR A fixed window is used for date conversion.
The various formats supported by the REXX date() built-in function are specified by
REXXletter, where the letter represents the first character of first argument to the REXX
function. For example, REXXJ specifies the Julian date (yyddd).
The default input range is the entire record. The default input date format is SHORTDATE;
the default output date format is ISODATE. The default sliding window begins fifty years in
the past and extends forty-nine years into the future.
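The window arithmetic can be sketched in Python (illustrative; expand_year is a hypothetical name). With WINDOW, the window starts at the current year plus the signed number; with BASEYEAR, it starts at the given year. With the conversion performed in 2010, the sketch agrees with the examples in this article:

```python
def expand_year(yy: int, window_start: int) -> int:
    """Map a two-digit year into the 100-year window beginning at
    window_start (WINDOW: this year + snumber; BASEYEAR: number)."""
    year = window_start - window_start % 100 + yy  # window's century
    if year < window_start:
        year += 100            # fell before the window: next century
    return year
```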
Operation: If the specified input range is not present in the record, the record is passed
unchanged to the primary output stream and no further action is taken.
If the input range is present in the input record, the contents of the specified range are
converted as requested.
When the conversion succeeds, the conversion result replaces the specified range in the
record and the updated record is written to the primary output stream. Note that the output
record can have a different length than the input record.
When the conversion fails and the secondary output stream is defined, the input record is
passed to the secondary output stream.
dateconvert terminates with an error message when the conversion fails and the secondary
output stream is not defined.
Streams Used: Secondary streams may be defined. Records are read from the primary
input stream; no other input stream may be connected.
Commit Level: dateconvert starts on commit level -2. It verifies its arguments and then
commits to level 0.
Premature Termination: dateconvert terminates when it discovers that any of its output
streams is not connected.
Examples:
pipe cp query time | take 1 | console
TIME IS 10:58:58 DST WEDNESDAY 07/14/10
Ready;
pipe literal 03/09/46 | dateconvert | console
2046-03-09
Ready;
pipe literal 03/09/46 | dateconvert window -99 | console
1946-03-09
Ready;
pipe literal 03/09/46 | dateconvert baseyear 1925 | console
1946-03-09
Ready;
pipe literal 03/09/46 | dateconvert baseyear 1130 | console
1146-03-09
Ready;
pipe literal 02/29/46 | dateconvert | console
Date cannot be converted; input date 02/29/46 is not valid.
... Issued from stage 2 of pipeline 1.
... Running "dateconvert".
Ready(01183);
Notes:
1. Leap seconds are ignored in the calculations.
2. dateconvert assumes that the first day of the Gregorian calendar is September 14,
1752. The day before that is September 2, 1752; it is in the Julian calendar.
By default, records are written to the output streams round robin, that is, the first record to
the primary output stream, the second record to the secondary output stream, and so on.
Alternatively, you can supply the stream identifier for the stream to receive a record or you
can specify that a run of records containing the same key are written to the same output
stream.
┌─STOP──ALLEOF────────────────────┐
──DEAL──┼─────────────────────────────────┼──
├─STOP──┬─ALLEOF─┬────────────────┤
│ ├─ANYEOF─┤ │
│ └─number─┘ │
├─SECONDARY──┬─────────┬──────────┤
│ ├─RELEASE─┤ │
! │ └─LATCH───┘ │
├─KEY──inputRange──┬───────┬──────┤
│ └─STRIP─┘ │
└─STREAMid──inputRange──┬───────┬─┘
└─STRIP─┘
Type: Gateway.
Syntax Description: Arguments are optional; the default is STOP ALLEOF. The four
options are exclusive; specify at most one of them.
STOP Specify the condition under which deal should terminate prematurely.
ALLEOF, the default, specifies that deal should continue as long as at
least one output stream is connected. ANYEOF specifies that deal should
stop as soon as it determines that an output stream is no longer
connected. A number specifies the number of unconnected streams that
will cause deal to terminate. The number 1 is equivalent to ANYEOF.
SECONDARY The secondary input stream contains the stream identifiers of the streams
that are to receive the records on the primary input stream.
RELEASE Consume the record on the secondary input stream before reading the
record from the primary input stream.
!! LATCH deal waits for a record to arrive on its input streams. When a record
! arrives at the secondary input stream, the specified output stream is
! selected and the record is discarded; that is, LATCH implies RELEASE.
! When a record arrives at the primary input, it is copied to the currently
! selected output. The primary output stream is selected initially.
KEY Specify the input range that contains the key of each record. Runs of
records that contain the same key are written to the same output stream.
STREAMID Specify the input range within the record on the primary input stream
that contains the stream identifier to receive the record. Each record is
routed individually.
STRIP Delete the key or stream identifier from the output record. The
inputRange must be either at the beginning or at the end of the record.
When KEY, SECONDARY, and STREAMID are omitted, records are passed to the output
streams round robin. The first input record is passed to the primary output stream, the
second input record is passed to the secondary output stream, and so on until it wraps after
the highest-numbered output stream. The next record is then passed to the primary output
stream and the cycle is repeated. deal reacts to end-of-file on an output stream by trying
the next output stream until as many streams are at end-of-file as specified in the STOP
option.
When KEY is specified, the first record is passed to the primary output stream and the
contents of its key field are then stored in a buffer. A run of records that contain the
stored key is passed to the same output stream. When the key of the input record is not
equal to the stored key, the next output stream is selected and the run of records is passed
to this stream.
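The round-robin and KEY routing described above can be modelled with a Python sketch (illustrative only; deal itself routes records between connected pipeline streams and reacts to end-of-file, which this model omits):

```python
def deal(records, nstreams, key=None):
    """Route records to output streams: round robin by default, or
    runs of equal keys to the same stream when a key function is given."""
    outputs = [[] for _ in range(nstreams)]
    current = 0
    last_key = None
    have_key = False
    for i, rec in enumerate(records):
        if key is None:
            current = i % nstreams          # wraps after the last stream
        else:
            k = key(rec)
            if have_key and k != last_key:  # key changed: next stream
                current = (current + 1) % nstreams
            last_key, have_key = k, True
        outputs[current].append(rec)
    return outputs
```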
! When SECONDARY is specified and LATCH is omitted, a pair of records from each input
stream is processed together. deal first peeks at the record on the secondary input stream
to determine where to write the record on the primary input stream. The record from the
secondary input stream specifies the number or identifier of the stream to receive the corre-
sponding record from the primary input stream. The record from the primary input stream
is then passed to the specified stream. When RELEASE is specified, the record on the
secondary input stream is consumed before the primary input stream is read; otherwise, the
two records are consumed, the one on the primary input stream first, after the output
record has been written.
! When SECONDARY LATCH is specified, records on the secondary input stream specify the
! output stream to be selected; such records are discarded immediately. Subsequent records
! on the primary input stream are passed to the stream last selected by a record on the
! secondary input stream.
When STREAMID is specified, the record is routed based on its contents. The contents of
the specified range determine the output stream to receive the record.
Input Record Format: When SECONDARY is specified, the secondary input stream
contains one word per record. This word is the stream identifier for the output stream that
should receive the corresponding record from the primary input stream. The stream
identifier can be a number, in which case it is the number of the stream (from 0); or it can
be an alphanumeric stream identifier.
Streams Used: Records are read from the primary input stream. If SECONDARY is
specified, records are also read from the secondary input stream; the two input streams are
synchronised unless RELEASE is specified. Records are, in general, written to all connected
output streams.
Commit Level: deal starts on commit level -2. It verifies that only the primary input
stream is connected unless SECONDARY is specified, in which case the secondary input
stream must be connected; and then commits to level 0.
Premature Termination: When STOP is specified or defaulted, it specifies how many
output streams can go to end-of-file before deal terminates. The corresponding input
records are not consumed.
Examples: To pass an input record to one of three virtual machines, as a special message;
that is, to spread the load among multiple servers:
'callpipe (end ? name DEAL.STAGE:120)',
'?*:',
'|d:deal',
'|change //SMSG BEE1 /',
'|cp',
'?d:',
'|change //SMSG BEE2 /',
'|cp',
'?d:',
'|change //SMSG BEE3 /',
'|cp'
Note that the records are discarded by hole. Leaving an output stream unconnected will
not cause records to be dropped; deal will try to write the record until it succeeds or until
it has received sufficient end-of-file indications to terminate.
/*********************************************************************/
/* This subroutine distributes work amongst server pipelines, which */
/* are connected from output 0 to input 1, from output 1 to input 2, */
/* and so on (this makes for easy coding of the main pipeline). */
/* */
/* A server writes a record (a "ready" token) when it is ready to */
/* process a request. An input record will then be made available. */
/* The server should consume this input record "quickly" (to avoid */
/* blockage of the DEAL stage that doles out work to the performing */
/* pipelines). */
/* */
/* Each server pipeline should contain a pipeline that produces */
/* ready tokens as appropriate. For example, you can use a sipping */
/* pipeline along these lines: */
/* */
/* output ready! */
/* callpipe *:|take 1|<server> */
/* */
/* Each "ready" token is then turned into the stream's number and */
/* fed into the buffer of available streams. */
/*********************************************************************/
'maxstream output'
maxstream=RC-1
If RC=0
Then exit 999
pipe=''
first='tod:|'
do j=0 to maxstream
pipe=pipe,
'\*..'j+1':',
'|spec /'j'/ 1',
'|i:',
'\'first'd:',
'|*..'j+1':'
first=''
end
exit RC
Notes:
| 1. It is unlikely to be efficient to distribute work round robin amongst identical proc-
| essing stages that do not in some way connect to a different task or another virtual
machine.
! 2. You should ensure that no two records arrive concurrently when SECONDARY LATCH is
! specified; the order of processing is unspecified and in general random if they do.
┌─Fixed──80────────────────────────┐
──DEBLOCK──┼──────────────────────────────────┼──
│ ┌─Fixed─┐ │
├─┴───────┴──number──┬───────────┬─┤
│ └─FROMRIGHT─┘ │
├─Variable─────────────────────────┤
├─CMS──────────────────────────────┤
├─CMS4─────────────────────────────┤
├─SF───────────────────────────────┤
├─SF4──────────────────────────────┤
├─RDW──range──number──┬───────┬────┤
│ └─STRIP─┘ │
├─RFC959───────────────────────────┤
├─ONEBYTE──────────────────────────┤
├─ADMSF────────────────────────────┤
├─GDForders────────────────────────┤
├─AWSTAPE──────────────────────────┤
├─DECIMAL──number──┬─INCLUSIVE─┬───┤
│ └─EXCLUSIVE─┘ │
├─NETdata──────────────────────────┤
├─TEXTUNIT─────────────────────────┤
├─MONITOR──────────────────────────┤
├─TEXTfile─────────────────────────┤
└─┤ PC-style ├─────────────────────┘
PC-style:
├──┬─C───────────────────────┬──┬───────────┬──┬───────────────┬──┤
│ ┌─15───┐ │ └─TERMinate─┘ │ ┌─3F───┐ │
├─LINEND──┼──────┼────────┤ └─EOF──┼──────┼─┘
│ └─xorc─┘ │ └─xorc─┘
├─CRLF────────────────────┤
└─STRING──delimitedString─┘
Type: Filter.
Operation: In general, deblock performs the inverse operation of block. Input record
boundaries are ignored in some input formats, because the file itself contains the informa-
tion required to reconstruct the logical record structure; such input files are called byte
streams.
FIXED Produce as many records of fixed length as required for each input
block. Output records do not span input blocks; a short record is written
for the last part of the block if a block is not an integral multiple of the
record length. Thus, deblock FIXED accepts input records that cannot be
created with block FIXED.
Normally, deblock FIXED processes the input block from left to right, but
the order is reversed when FROMRIGHT is specified. The first part of the
block is then written last; it may be a short record.
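The FIXED splitting rule, including FROMRIGHT, can be pictured with this sketch in Python (an illustration of the semantics described above, not Pipelines syntax; the function name is invented):

```python
def deblock_fixed(block, length, fromright=False):
    """Illustrative model of deblock FIXED (not Pipelines syntax).

    One input block is split into records of the given length; records
    never span blocks, and a short record is produced for a leftover.
    With fromright, splitting proceeds from the right, so the short
    record is the first part of the block, written last.
    """
    if not fromright:
        return [block[i:i + length] for i in range(0, len(block), length)]
    records = []
    end = len(block)
    while end > length:                 # full records, right to left
        records.append(block[end - length:end])
        end -= length
    if end > 0:                         # leftover first part, written last
        records.append(block[:end])
    return records
```

For example, a seven-byte block split into three-byte records yields two full records and a short one; with fromright, the short record is the first part of the block and appears last.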
VARIABLE Deblock OS variable length records. deblock V supports all four OS vari-
able formats: V, VB, VS, and VBS. Null segments are discarded.
¡ Extended block descriptor words are supported.
CMS Reconstruct a file from the format used to store variable record format
files internally in the CMS file system. Internally, each logical record has
a halfword prefix containing the length of the record. Length zero
means end-of-file. The length field is not present in the output record.
The input is considered a byte stream.
CMS4 Reconstruct a file where each logical record is prefixed with a four-byte length
field that specifies the number of data characters that follow. Length zero
means end-of-file. The length field is not present in the output record.
The input is considered a byte stream.
SF Deblock structured fields. A structured field consists of a halfword
(sixteen bits) length field and a variable length data field. The contents
of the length field include those two bytes. Thus, a null structured field
consists of the data X'0002'. The halfword length field is not present
in the output record. The input is considered a byte stream; the struc-
tured fields can be spanned over input records.
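The SF format can be modelled with this sketch in Python (an illustration only, not Pipelines syntax; the function name is invented, and the caller is assumed to have concatenated the input records into one byte string, which is how structured fields come to span input records):

```python
def deblock_sf(byte_stream):
    """Illustrative model of deblock SF (not Pipelines syntax).

    The input is treated as a byte stream of structured fields: a
    halfword length that includes its own two bytes, then data.  A
    null structured field is X'0002'.  The length halfword is not
    copied to the output record.
    """
    records, pos = [], 0
    while pos + 2 <= len(byte_stream):
        length = int.from_bytes(byte_stream[pos:pos + 2], "big")
        if length < 2:                  # malformed field; stop
            break
        records.append(byte_stream[pos + 2:pos + length])
        pos += length
    return records
```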
SF4 Logical records are prefixed with a four-byte length field. The length field contains
four plus the count of data characters that follow. That is, the length
field specifies the length of the logical record inclusive of the record
descriptor. Length zero means end-of-file. The length field is not
present in the output record. The input is considered a byte stream.
RDW Logical records contain a descriptor word in the positions defined by
| range. The contents are treated as a binary unsigned number. The
number specifies an overhead to be added to the contents of this record
descriptor word to determine the length of the logical record, inclusive
of the descriptor word. STRIP specifies that the record descriptor word is
| not written as part of the output record; the range must then begin in
| column 1.
RFC959 Deblock according to the format defined in Request For Comments 959
(“File Transfer Protocol”). A convenience for rdw 2.2 3.
ONEBYTE Deblock logical records that consist of a one byte length field and a
variable length data field. The contents of the length field include this one
byte. Thus, a null one byte record consists of the data X'01'. The
length field is not present in the output record. The input is considered a
byte stream; the logical record can be spanned over input records.
ADMSF Deblock GDDM structured fields. When a logical record ends one char-
acter before the end of a block and the remaining character is X'00',
this pad character is ignored. The halfword length field is not present in
the output record.
The formats C, LINEND, CRLF, and STRING support two keywords, TERMINATE and EOF.
Specify TERMINATE to suppress a trailing null line when the file ends in a line end
sequence. Use EOF to specify an end-of-file character. deblock discards the end-of-file
character and all remaining input. That is, deblock consumes the first record that contains
an end-of-file character and then terminates. X'3F' (substitute) is the default end-of-file
character. deblock scans for an end-of-file character only when the keyword is specified.
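The interaction of a line-end character with TERMINATE and EOF can be sketched in Python (an illustration of the rules above, not Pipelines syntax; the function name is invented, and the input is assumed to be one concatenated byte string):

```python
def deblock_linend(data, linend=b"\x15", terminate=False, eof=None):
    """Illustrative model of the PC-style LINEND format (not Pipelines
    syntax).  The byte stream is split at the line-end character,
    which defaults to X'15'.

    With eof given, the first end-of-file character and everything
    after it are discarded.  With terminate, a trailing null line that
    results from a final line-end sequence is suppressed.
    """
    if eof is not None:
        cut = data.find(eof)
        if cut >= 0:
            data = data[:cut]
    records = data.split(linend)
    if terminate and records and records[-1] == b"":
        records.pop()
    return records
```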
Streams Used: Records are read from the primary input stream and written to the primary
output stream. Null input records are discarded.
Record Delay: deblock delays input records as required to build an output record. The
delay is unspecified.
Premature Termination: deblock terminates when it discovers that its output stream is
not connected. deblock TEXTUNIT terminates when it has consumed the input record that
contains the INMR06 control record that terminates the file.
To extract the text units describing the data set from a reader file in the NETDATA format:
/* Get text units from reader file */
'PIPE',
' reader',
'|find' '41'x,
'|spec 2-* 1.80',
'|deblock net',
'|find' 'e0'x,
'|deblock textunit',
'|...
/* Deblock ADMGDF */
'PIPE (name DEBLOCK)',
'|< x admgdf', /* Read gdf file */
'|drop 1', /* Drop descriptor */
'|spec 21-*', /* Drop record identifier */
'|deblock admsf', /* Unravel GDDM structured fields */
'|deblock gdf', /* Now unblock the orders */
'|...
To restore the record format of a LIST3820 (or similar) file that has lost its record bounda-
ries during file transfer:
... | deblock rdw 2.2 1 | ...
The records contain a carriage control character (one byte), which is followed by a
structured field. The first two bytes of the structured field contain the length of the field,
inclusive of these two bytes, but this length does not include the carriage control
character; therefore, the
adjustment factor is one.
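The RDW length computation can be modelled with this sketch in Python (an illustration only, not Pipelines syntax; the function name is invented). The LIST3820 case above corresponds to start column 2, width 2, adjustment 1:

```python
def deblock_rdw(data, start, width, adjust, strip=False):
    """Illustrative model of deblock RDW (not Pipelines syntax).

    Each logical record carries a descriptor word at 1-based column
    'start' for 'width' bytes; its unsigned binary contents plus
    'adjust' give the record length, inclusive of the descriptor
    word.  With strip, the descriptor is removed; the range must then
    begin in column 1.
    """
    records, pos = [], 0
    while pos < len(data):
        field = data[pos + start - 1:pos + start - 1 + width]
        length = int.from_bytes(field, "big") + adjust
        if length <= 0:                 # malformed record; stop
            break
        record = data[pos:pos + length]
        records.append(record[width:] if strip else record)
        pos += length
    return records
```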
Notes:
1. VB, VS, and VBS are synonyms for VARIABLE.
2. Though input records are called blocks and output records are called logical records,
these are still records (or lines) as perceived by the pipeline dispatcher.
3. Netdata and PC deblocking may produce output records that contain data from several
input records; in this respect, deblock can be considered to be blocking input records
rather than deblocking them.
delay—Suspend Stream
delay copies an input record to the output at a particular time of day or after a specified
interval has elapsed. The first word of each input record specifies the time at which it is to
be passed to the output.
──DELAY──
Operation: The input record is copied to the primary output stream when the delay
expires (the specified time is reached or the specified interval elapses). A record that
specifies a time of day (that is, without a leading plus) is copied to the output immediately
when it is read after the time specified.
Input Record Format: The first word of each input record specifies when the record is to
be copied to the output; the remainder of the record is not inspected.
The delay is a word that contains up to three numbers separated by colons. A leading plus
indicates that the time is relative to the time of day when the record is read; with no
leading plus, the time is local time relative to the previous midnight.
The numbers represent hours, minutes, and seconds. The seconds field may contain a
decimal point and up to six fractional digits, giving microsecond resolution. The numbers
must be zero or positive, but are not restricted to the normal conventions for seconds per
minute, minutes per hour, and hours per day. You can wait until 1:17:64, which is
equivalent to 1:18:04. You can wait until any time in the future, as long as the
time-of-day clock has not changed sign. (If your system is using the standard epoch, the
sign will change in 2041.) A delay of 8760 waits until midnight on the 365th day
following the current day. (Assuming the system stays up that long and assuming no drift
of the time-of-day clock.)
When the first word has one or two numbers, the interpretation depends on the presence of
a leading plus. With the plus (indicating a relative interval), the rightmost number is taken
to be seconds; a further number to the left of it represents minutes. When there is no
leading plus (a time of day is specified), the leftmost number represents the hour; minutes
and seconds are assumed to be zero when not specified.
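The interpretation of the first word can be sketched in Python (an illustration of the rules above, not Pipelines syntax; the function name is invented):

```python
def parse_delay_word(word):
    """Illustrative model of delay's first-word parsing (not Pipelines
    syntax).  Returns (relative, seconds).

    A leading plus means an interval relative to the time the record
    is read; otherwise the value is a local time of day relative to
    the previous midnight.  With fewer than three numbers, a relative
    value is read as [minutes:]seconds and a time of day as
    hours[:minutes[:seconds]].  The numbers are not restricted to the
    usual limits, so 1:17:64 equals 1:18:04.
    """
    relative = word.startswith("+")
    if relative:
        word = word[1:]
    parts = [float(p) for p in word.split(":")]
    if any(p < 0 for p in parts):
        raise ValueError("numbers must be zero or positive")
    if relative:
        parts = [0.0] * (3 - len(parts)) + parts    # pad hours, minutes
    else:
        parts = parts + [0.0] * (3 - len(parts))    # pad minutes, seconds
    hours, minutes, seconds = parts
    return relative, hours * 3600 + minutes * 60 + seconds
```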
Record Delay: delay strictly does not delay the record. That is, delay consumes the input
record after it has copied it to the primary output stream; records are delayed in time, but
the relative order of records that originate in a particular filter is unchanged.
Premature Termination: delay terminates when it discovers that its output stream is not
connected; delay also stops if the immediate command PIPMOD STOP is issued or if a record
is passed to pipestop.
Examples: To perform an action at 3 am every day. Note the code to determine whether
the subroutine pipeline was called before or after 3 am:
/* 3AM REXX */
If time('Hours') > 2 /* 3am or later? */
Then addl=''
Else addl='literal 3|' /* No, wait till then */
'PIPE',
'literal 27|', /* 3am tomorrow */
'dup *|', /* Forever */
addl, /* Maybe 3am today? */
'delay|', /* Wait */
'*:'
Notes:
1. On CMS, delay uses the clock comparator. The virtual machine must have ECMODE set
on.
2. On CMS, delay issues diagnose 0 to determine the time zone offset.
3. literal 0 | delay never waits; use 24 to wait until the next midnight.
4. delay does not depend on the date of the epoch (the date corresponding to a zero value
of the time-of-day clock). The epoch must begin at midnight GMT when waiting to a
particular time of day.
5. No more than 16 delay stages can be active concurrently on z/OS.
¡¡ CMS
¡ ──DEVINFO──devaddr──┬────────┬──
¡ └─number─┘
¡ Syntax Description:
¡ Output Record Format: For each device, the output line contains as a minimum the
¡ device number and its generic type.
¡ For devices that respond to E4 sense, the next two words contain device type and control
¡ unit type.
¡ For a CKD device, the rest of the line contains the number of primary tracks, number of
¡ tracks per cylinder, and usable space per track.
¡ For FBA, the rest of the line contains block size, blocks per track, blocks per cylinder,
¡ blocks under movable heads, and blocks under fixed heads.
¡ Premature Termination: devinfo terminates when it discovers that its output stream is not
¡ connected.
¡ Examples:
¡ pipe devinfo 9|cons
¡ 0009 TERM
¡ R;
¡ pipe devinfo 180|cons
¡ 0180 TAPE 3480 3480
¡ R;
¡ pipe devinfo 190|cons
¡ 0190 DASD 3390 3990 107 15 58786
¡ R;
¡ pipe devinfo 100|cons
¡ 0100 FBA 9336 6310 512 111 777 4000 0
¡ R;
dfsort—Interface to DFSORT/CMS
dfsort builds a parameter list to call DFSORT/CMS, inserts the input file on the E15 exit,
extracts the sorted file from the E35 exit, and writes it into the pipeline.
CMS
──DFSORT──string──
Type: Sorter.
¡ Syntax Description: Specify sort control statements in the parameter string. The string
¡ passed to the sort is made upper case. dfsort adds this statement to the end of the
specified string:
RECORD TYPE=V,LENGTH=32760
Input records are always variable record format as far as DFSORT/CMS is concerned; dfsort
adds and removes record descriptor words. Because the record descriptor word occupies
the first four positions of the record as seen by DFSORT/CMS, the sort fields must specify a
field position that is four larger than the position in the record that is passed to dfsort.
Premature Termination: dfsort terminates when it discovers that its output stream is not
connected.
Notes:
1. Refer to DFSORT/CMS User’s Guide, SC26-4361.
2. dfsort saves the GLOBAL TXTLIB setting, if any, and sets up a GLOBAL TXTLIB DFSRTLIB
¡ (unless the TXTLIB is already in the list of global TXTLIBs) before invoking
DFSORT/CMS. The original TXTLIB setting is restored when DFSORT/CMS returns to
dfsort. User programs must not interfere with this library.
3. Use sort to sort records that are longer than 32K. Be sure to have enough virtual
storage to hold the entire file.
¡ 4. syncsort is a variant of dfsort that uses SYNCSORT TXTLIB instead of DFSRTLIB TXTLIB.
¡ It appears that SyncSort is not reentrant; running two syncsort stages concurrently is
¡ likely to have unpredictable results.
¡ 5. vmsort is a variant of dfsort that uses VMSLIB TXTLIB instead of DFSRTLIB TXTLIB.
¡¡ CMS
¡ ──DIAGE4──
¡ Input Record Format: Input records contain a command verb that identifies the desired
¡ variety of diagnose E4.
¡¡ CMS
¡ ──┬─QLINK──┬──word──devaddr──
¡ └─QMDISK─┘
¡¡ CMS
¡ ──FULLPACK──word──devaddr──devaddr──word──
¡¡ CMS
¡ ──FULLVOL──devaddr──┬────────┬──devaddr──word──
¡ └─number─┘
¡ devaddr The device number of the real device on which to place an overlay.
¡ number Cylinder/block number to be verified for conflicts. The default is 0.
¡ devaddr The virtual device number of the created minidisk overlay.
¡ word The link mode desired.
¡ Output Record Format: The first four bytes contain the return code in binary. The next
¡ four bytes contain binary zeros. The remainder of the record contains the contents of the
¡ parameter list after the diagnose has been issued.
¡ Notes:
¡ 1. FULLPACK and FULLVOL require CP directory OPTION DEVMAINT or equivalent ESM
¡ privilege. An unprivileged user may use QLINK and QMDISK only to enquire the char-
¡ acteristics of her own virtual machine or its directory entry.
¡ Return Codes: diage4 sets a nonzero return code only when it discovers syntactic errors
¡ in the input records. In particular, the return code does not reflect success or failure of the
¡ diagnose instructions issued.
¡ ──DIGEST──┬─SHA1───┬──┬─────────────────┬──
¡ ├─SHA256─┤ ├─APPEND──────────┤
¡ ├─SHA384─┤ └─VERIFY──┬─────┬─┘
! ├─SHA512─┤ └─NOT─┘
¡ └─MD5────┘
¡ Type: Filter.
¡ Operation: The following applies when APPEND and VERIFY are omitted.
¡ With only the primary streams defined, digest reads all its input and then produces a single
¡ digest on the primary output stream.
¡ With secondary streams defined, the message on the primary input stream is passed to the
¡ secondary output stream. At end-of-file on the primary input stream or whenever a record
| arrives on the secondary input stream, the current digest is written to the secondary output
| stream and the process is restarted. This mode is useful for batch signing of messages or
¡ files, possibly for aggregation or transmission.
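This batch mode can be pictured with a sketch in Python's hashlib (an illustration only, not Pipelines syntax; the function name is invented, and a None entry stands for a record arriving on the secondary input stream):

```python
import hashlib

def batch_digest(items, algorithm="sha256"):
    """Illustrative model of digest with secondary streams defined
    (not Pipelines syntax).  Data records pass through; a None entry
    stands for a record on the secondary input stream, at which point
    the digest accumulated so far is emitted and the computation is
    restarted.  A final digest is emitted at end-of-file.
    Returns (passed_records, digests).
    """
    passed, digests = [], []
    state = hashlib.new(algorithm)
    for item in items:
        if item is None:                # record on the secondary input
            digests.append(state.hexdigest())
            state = hashlib.new(algorithm)
        else:
            state.update(item)
            passed.append(item)
    digests.append(state.hexdigest())   # end-of-file on primary input
    return passed, digests
```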
¡ Record Delay: digest does not delay the record when APPEND is specified. When
¡ secondary streams are defined, it does not delay the record on the primary input stream
¡ being written to the secondary output stream or writing the record on the primary output
¡ stream relative to the record on the secondary input stream. Otherwise, it delays the
¡ record until end-of-file.
¡ Commit Level: digest starts on commit level -2. When APPEND is specified digest verifies
¡ that no secondary streams are defined. When VERIFY is specified, digest verifies that the
¡ secondary input stream is not connected. digest then commits to 0.
¡ Premature Termination: digest terminates when it discovers that any of its output
¡ streams is not connected.
¡ Notes:
| 1. When the input data are in EBCDIC and you wish to send a signed ASCII message, you
| should translate appropriately and join all lines with carriage return and line feed
| before passing the record to digest APPEND.
¡ 2. digest discards records on the secondary input stream.
¡ 3. digest SHA1 and digest SHA256 use hardware instructions when the message security
¡ assist feature is installed.
¡ 4. “Hardware instructions” should be taken to mean “Message-security Assist” and
¡ “Message-security Assist Extension 1” facilities. digest specifically does not support
¡ Cryptographic coprocessors (“Integrated cryptographic facility”).
¡ 5. As NIST does not publish a reference test for SHA384, no test case is available. SHA384
¡ is supposedly a simple variation of SHA512, but it will generate a different hash than
¡ the first 48 bytes of the SHA512 hash of a particular stream of bytes.
¡ 6. The MD5 function is “derived from the RSA Data Security, Inc. MD5 Message-Digest
¡ Algorithm”.
Depending on the CMS level and the actual syntax of the parameters, diskback selects the
appropriate device driver to perform the actual I/O to the file.
CMS
──DISKBACK──string──
Operation: The actual device driver to be used is selected based on the argument string:
Notes:
1. fileback is a synonym for diskback.
Depending on the operating system and the actual syntax of the parameters, diskfast selects
the appropriate device driver to perform the actual I/O to the file.
──DISKFAST──string──
Warning: diskfast behaves differently when it is a first stage and when it is not a first
stage. Existing data can be overlaid when diskfast is unintentionally run other than as a
first stage. To use diskfast to read data into the pipeline at a position that is not a first
stage, specify diskfast as the argument of an append or preface control. For example,
|append diskfast ...| appends the data produced by diskfast to the data on the primary
input stream.
Operation: The actual device driver to be used is selected based on the argument string:
See Also: >, >>, <, diskback, diskrandom, diskslow, diskupdate, members, and pdsdirect.
Notes:
1. filefast is a synonym for diskfast.
2. Use <sfsfast or >>sfsfast to access a file using a directory name that would be scanned by diskfast as a
mode letter or a mode letter followed by a digit.
Depending on the CMS level and the actual syntax of the parameters, diskrandom selects
the appropriate device driver to perform the actual I/O to the file.
CMS
──DISKRANDOM──string──
Operation: The actual device driver to be used is selected based on the argument string:
Notes:
1. filerandom is a synonym for diskrandom.
Depending on the operating system and the actual syntax of the parameters, diskslow
selects the appropriate device driver to perform the actual I/O to the file.
──DISKSLOW──string──
Warning: diskslow behaves differently when it is a first stage and when it is not a first
stage. Existing data can be overlaid when diskslow is unintentionally run other than as a
first stage. To use diskslow to read data into the pipeline at a position that is not a first
stage, specify diskslow as the argument of an append or preface control. For example,
|append diskslow ...| appends the data produced by diskslow to the data on the
primary input stream.
Operation: The actual device driver to be used is selected based on the argument string:
See Also: >, >>, <, disk, diskback, diskrandom, diskupdate, members, and pdsdirect.
Notes:
1. fileslow is a synonym for diskslow.
2. Use <sfsslow or >>sfsslow to access a file using a directory name that would be scanned by diskslow
as a mode letter or a mode letter followed by a digit.
Depending on the CMS level and the actual syntax of the parameters, diskupdate selects the
appropriate device driver to perform the actual I/O to the file.
CMS
──DISKUPDATE──string──
Operation: The actual device driver to be used is selected based on the argument string:
Notes:
1. fileupdate is a synonym for diskupdate.
┌─FIRST─┐ ┌─1──────┐
──DROP──┼───────┼──┼────────┼──┬───────┬──
└─LAST──┘ ├─number─┤ └─BYTES─┘
└─*──────┘
FIRST Records are discarded from the beginning of the file. This is the default.
LAST Records are discarded from the end of the file.
number Specify the count of records or bytes to discard. The count may be
zero, in which case nothing is discarded.
* All records are discarded.
BYTES The count is bytes rather than records.
Operation: When BYTES is omitted, drop FIRST copies the specified number of records to
the secondary output stream (or discards them if the secondary output stream is not
connected). It then passes the remaining input records to the primary output stream.
drop LAST stores the specified number of records in a buffer. For each subsequent input
record (if any), drop LAST writes the record that has been longest in the buffer to the
primary output stream and then stores the input record in the buffer. At end-of-file, drop
LAST flushes the records from the buffer into the secondary output stream (or discards
them if the secondary output stream is not connected).
When BYTES is specified, operation proceeds as described above, but rather than counting
records, bytes are counted. Record boundaries are considered to be zero bytes wide. In
general, the specified number of bytes will have been dropped in the middle of a record,
which is then split after the last byte. When FIRST is specified the first part of the split
record is discarded and the remainder is selected. When LAST is specified, the first part of
the split record is selected and the second part is discarded.
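The byte-counting behaviour can be modelled with this sketch in Python (an illustration only, not Pipelines syntax; the function name is invented). In the stage, "dropped" records go to the secondary output stream and "kept" records to the primary output stream:

```python
def drop_bytes(records, count, last=False):
    """Illustrative model of drop with BYTES (not Pipelines syntax).
    Returns (dropped, kept).

    Record boundaries are zero bytes wide; when the count runs out in
    the middle of a record, the record is split at that byte.
    """
    if last:                            # drop from the end: mirror both
        d, k = drop_bytes([r[::-1] for r in reversed(records)], count)
        return ([r[::-1] for r in reversed(d)],
                [r[::-1] for r in reversed(k)])
    dropped, kept, remaining = [], [], count
    for record in records:
        if remaining >= len(record) and remaining > 0:
            dropped.append(record)      # record entirely within count
            remaining -= len(record)
        elif remaining > 0:
            dropped.append(record[:remaining])   # split the record
            kept.append(record[remaining:])
            remaining = 0
        else:
            kept.append(record)
    return dropped, kept
```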
Streams Used: Records are read from the primary input stream. Secondary streams may
be defined, but the secondary input stream must not be connected. drop FIRST severs the
secondary output stream before it shorts the primary input stream to the primary output
stream. drop LAST severs the primary output stream before it flushes the buffer into the
secondary output stream.
Record Delay: An input record is written to exactly one output stream when both output
streams are connected. drop FIRST does not delay the record. drop LAST delays the
specified number of records.
Commit Level: drop starts on commit level -2. It verifies that the secondary input stream
is not connected and then commits to level 0.
duplicate—Copy Records
duplicate writes each input record into the pipeline one more time than the specified
number.
┌─1───────┐
──DUPlicate──┼─────────┼──
├─snumber─┤
└─*───────┘
Type: Filter.
Record Delay: An input record is consumed after all corresponding output records have
been written.
Premature Termination: duplicate terminates when it discovers that its output stream is
not connected.
CMS/TSO Pipelines does not buffer the contents of the pipeline. Thus, though an infinite
supply of records is at hand, records are not produced faster than they are consumed. That
is, duplicate * provides an infinite supply of records, but it produces only one at a time.
Notes:
1. duplicate -1 consumes all input and produces no output.
When elastic has two input streams, the secondary input stream is assumed to be a feed-
back from the stages connected to the primary output stream.
──ELASTIC──
Type: Gateway.
Operation: When the secondary input stream is not defined, elastic reads records as they
arrive and writes them as they are consumed. It tries to minimise the number of records
buffered inside.
When the secondary input stream is defined, elastic first passes the primary input stream to
the primary output stream, buffering any records it receives on the secondary input stream.
When the primary input stream is at end-of-file, elastic enters a listening mode on the
secondary input stream. As long as it has records buffered, it writes to the primary output
stream and reads what arrives at the secondary input stream and stores it in the buffer.
elastic flushes its buffer and terminates when the secondary input stream reaches end-of-
file. elastic also terminates when the buffer is empty and there is no input record available
after it has suspended itself to let all other ready stages run. At this point there should be
no further records in the feedback loop; elastic terminates, because reading a further record
would be likely to cause a stall.
Put another way: When elastic has a secondary input stream, it maintains a “to do” list.
It adds items to do when records arrive on the secondary input stream and it deletes an
item from the list when the corresponding output record is consumed. It terminates when
the list is empty.
Streams Used: All records are passed from the primary input stream to the primary
output stream before any records are passed from the secondary input stream to the
primary output stream.
Record Delay: When the secondary input stream is defined, the records on the primary
input stream are not delayed. elastic delays the records that it buffers; it may consume a
record before writing it, even if the record can be written immediately.
Commit Level: elastic starts on commit level -2. It verifies that the secondary output
stream is not connected, sets up a buffering stage, and then commits to level 0.
Premature Termination: elastic terminates when it discovers that its primary output
stream is not connected.
Notes:
1. Use copy when a delay of one record is sufficient. Use buffer when you know that
the complete file must be buffered; for example, when another branch of the pipeline
topology contains a sort stage.
2. It is expected that a case can be constructed that makes elastic with two input streams
terminate before all data are processed.
3. It is unspecified how many records elastic buffers at a particular time. It may buffer
more records than are required to avoid a stall.
4. elastic cannot cause a stall.
emsg—Issue Messages
emsg issues input lines as CMS/TSO Pipelines error messages under control of the message
level setting and the standard VM message editing facility (the CP command SET EMSG).
The message level controls how the message is delivered (the terminal, the console stack,
or the output from runpipe) and what additional messages are issued to pinpoint the stage
that issued the message. CP message editing can remove the message prefix or the
message text, or it can suppress the message altogether.
──EMSG──
Operation: Each line is issued with the MESSAGE pipeline command, which in turn passes
the message on to CMS unless the pipeline set is under control of runpipe or the message
level includes the bit for 256 which causes the message to be stacked instead. The line
must have a 10- or 11-character prefix with module, message number (3 or 4 digits), and
severity code.
Streams Used: Records are read from the primary input stream and written to the primary
output stream. The input record is released after the message is issued.
Examples:
The second example shows how to disable the automatic messages; when one uses emsg it
is seldom interesting which stage issued the message.
──EOFBACK──word──┬────────┬──
└─string─┘
Type: Control.
Premature Termination: eofback terminates when it discovers that its output stream is
not connected.
Examples: To write the file being processed to the terminal, without the console stage
consuming all input:
... | eofback console | tolabel ...
Had console been used by itself, it would have written the entire input to the terminal.
When used with eofback, it shows only those records that are consumed by the following
stages, which typically will contain a partitioning selection stage.
──ESCAPE──┬─────────────────┬──
└─delimitedString─┘
Type: Filter.
Syntax Description: A delimited string is optional. The first character in the string is the
escape character. Additional characters specify further characters to be escaped. A string
consisting of double quotes, backward slash, and a vertical bar (/"\|/) is used by default.
Operation: In the input record, occurrences of characters in the argument string are
prefixed with the first character of the argument string.
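The operation can be sketched in Python (an illustration only, not Pipelines syntax; the function name is invented):

```python
def escape(record, chars='"\\|'):
    """Illustrative model of escape (not Pipelines syntax).

    Each occurrence in the record of a character from chars is
    prefixed with the first character of chars, the escape character
    (which therefore escapes itself).  The default set is double
    quote, backward slash, and vertical bar.
    """
    esc = chars[0]
    return "".join(esc + c if c in chars else c for c in record)
```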
Premature Termination: escape terminates when it discovers that its output stream is not
connected.
Examples:
pipe literal This: "a beautiful day" | escape | console
This: ""a beautiful day""
Ready;
Notes:
1. escape is a convenient substitute for a cascade of change filters.
fanin—Concatenate Streams
fanin passes all records on the primary input stream to the primary output stream, then all
records on the secondary input stream to the primary output stream, and so on.
──FANIN──┬────────────┬──
         │ ┌────────┐ │
         └──stream──┴─┘
Type: Gateway.
Streams Used: Records are passed from all defined input streams or all specified ones;
records are written to the primary output stream only.
Commit Level: fanin starts on commit level -2. It verifies that the primary output stream
is the only connected output stream and then commits to level 0.
Premature Termination: fanin terminates when it discovers that its primary output stream
is not connected.
Examples: To write two files into the pipeline, one being upper cased and the other being
lower cased:
/* CATTWO REXX */
parse arg fn1 ft1 fn2 ft2 .
'callpipe (end ?)',
'|<' fn1 ft1, /* Read first file */
'|xlate upper', /* Uppercase it */
'|f:fanin', /* Join inputs */
'|*:', /* Pass on to next */
'?<' fn2 ft2 , /* Read second file */
'|xlate lower', /* Lowercase it */
'|f:' /* Append to first */
Notes:
1. fanin can cause a pipeline network to stall if two or more input streams originate in
the same device driver; faninany cannot cause such a stall.
2. An elastic or a stage that buffers its file applied to all input streams except the primary
will prevent a stall at the expense of storage.
──FANINANY──┬────────┬──
└─STRICT─┘
Type: Gateway.
Operation: When STRICT is specified, faninany ensures that it passes the record from the
lowest-numbered stream that has a record available.
Streams Used: Records are read from all input streams; they are written to the primary
output stream only.
Commit Level: faninany starts on commit level -2. It verifies that the primary output
stream is the only connected output stream and then commits to level 0.
Premature Termination: faninany terminates when it discovers that its primary output
stream is not connected.
/* UPXMP REXX */
'callpipe (end ?)',
'|*:', /* Input stream */
'|i:inside /:xmp./ /:exmp./', /* Take contents of examples */
'|xlate upper', /* Do something on them */
'|f:faninany', /* Re-merge the streams */
'|*:', /* Pass them on */
'?i:', /* Short-circuit rest of file */
'|f:'
Notes:
1. faninany cannot cause a stall.
2. Records from any one input stream appear in the output stream in the order in which
they were read from that input stream, but they may be interspersed with records from
other input streams. When multiple streams are being read, the relative order of the
records from any two input streams is unspecified unless the input streams originate in
a common selection stage or fanout (or similar) and the meshes in the pipeline
topology consist entirely of stages that do not delay the record.
3. Depending on the number of input streams, STRICT may add significant overhead. Use
it only when you can prove from the topology that it really is needed.
──FANINTWO──
fanintwo passes the record from the primary input stream to the output only when there is
no record available on the secondary input stream. When fanintwo has written a record
that originated on the primary input stream, it passes (and consumes) as many records from
the secondary input stream as it can before it consumes the record on the primary input
stream.
Streams Used: Records are passed from the primary input stream and the secondary input
stream to the primary output stream. A record on the secondary input stream is passed in
preference to one from the primary input stream.
Record Delay: fanintwo strictly does not delay the record it passes from the secondary
input stream. It delays records it passes from the primary input stream by the number of
records arriving on the secondary input stream before the record is consumed on the
primary input stream.
Commit Level: fanintwo starts on commit level -2. It verifies that the primary output
stream is the only connected output stream and then commits to level 0.
Premature Termination: fanintwo terminates when it discovers that its primary output
stream is not connected.
Examples: To generate an input to clear a TSO screen that is displaying the three
asterisks:
'callpipe (end ? name FANINTWO.STAGE:56)',
'?*.input.0:', /* Transactions */
'|clr: fanintwo', /* Merge with automatic enters */
'|dvmusi', /* Send to TSO */
'|ldsfcfy', /* Figure out the 3270 data */
'|f: strfind x1e', /* MORE... */
"|spec /DL0 ' /", /* Generate ENTER automatically */
'|elastic', /* No stall, please */
'|clr:', /* Pass to TSO */
'?f:', /* Data records */
'|*.output.0:' /* To output */
The point of using fanintwo here is that an inbound record containing the attention
identifier for the Enter key should be injected whenever TSO has written a line of three
asterisks to indicate that more output is waiting. Were an input command issued instead, it
would be ignored by TSO.
The workings of dvmusi and ldsfcfy are “unspecified”; they are not supplied with
CMS/TSO Pipelines. dvmusi interfaces to the Logical Device Support Facility; ldsfcfy
classifies an inbound data stream to determine the state of the simulated terminal.
Notes:
1. fanintwo is useful to close an inner feedback loop where the feedback should have
priority over the input from outside the loop.
fanout—Copy Records from the Primary Input Stream to All Output Streams
For each input record, fanout writes a copy to the primary output stream, the secondary
output stream, and so on.
          ┌─STOP──ALLEOF─────┐
──FANOUT──┼──────────────────┼──
          └─STOP──┬─ANYEOF─┬─┘
                  └─number─┘
Type: Gateway.
Syntax Description: An optional keyword with its attendant option specifies the
conditions under which fanout should terminate. ALLEOF, the default, specifies that fanout
should continue as long as at least one output stream is connected. ANYEOF specifies that
fanout should stop as soon as it determines that an output stream is no longer connected.
A number specifies the number of unconnected streams that will cause fanout to terminate.
The number 1 is equivalent to ANYEOF.
Streams Used: Records are read from the primary input stream; no other input stream
may be connected. Records are written to all connected output streams.
Commit Level: fanout starts on commit level -2. It verifies that the primary input stream
is the only connected input stream and then commits to level 0.
Examples: To generate a copy of the input record when the record is tested destructively:
'PIPE (end ? name FANOUT)',
'|... ',
'|two: fanout',
'|p: predselect',
'|...',
'?two:',
'| xlate upper',
'|l: locate /ANYCASE/',
'|p:',
'|...',
'?l:',
'|p:'
Notes:
1. faninany is normally used to gather records from a network of pipelines that is fed by
fanout. Strictly, it is not the converse operation, because a cascade of fanout and
faninany would generate as many copies of a particular record as there are streams
between the two stages.
──FANOUTWO──
Type: Gateway.
Operation: fanoutwo passes the input record first to the primary output stream and then to
the secondary output stream. It terminates when it receives end-of-file on the primary
output stream; it shorts the primary input stream to the primary output stream when it
receives end-of-file on the secondary output stream.
Streams Used: Records are read from the primary input stream; no other input stream
may be connected. Two streams must be defined.
: Commit Level: fanoutwo starts on commit level -2. It verifies that the secondary input
: stream is not connected and then commits to level 0.
Examples: To display the first part of the file on the terminal, up to a line containing
“stop”.
'PIPE (end ?)',
'?< input file',
'|o:fanoutwo',
'|totarget locate /stop/',
'?o:',
'|console'
This example is contrived, because the same function could be performed by having
totarget in front of console.
! ──FBAREAD──devaddr──┬─────────────┬──
!                     └─┤ Extents ├─┘
! Extents:
!   ┌─────────────────────┐
!   ▼                     │
! ├───number──┬─number─┬──┴──┤
!             └─*──────┘
! Syntax Description:
! devaddr Specify the device number of the disk to read. It must refer to an FBA
! device.
! Extent An extent is specified as the block number of the first block (decimal)
! followed by the number of blocks (decimal) or an asterisk, which
! indicates to the end of the device. A single block is read when the count
! is omitted in the last extent. The first block on a device has number 0.
! Output Record Format: Each record contains a sixteen-byte prefix followed by one or
! more blocks of 512 bytes.
! The blocks in an extent are written sequentially. Blocks from separate extents are never
! written to the same output record.
! Commit Level: fbaread starts on commit level -10. It allocates an I/O buffer and then
! commits to level 0.
! Premature Termination: fbaread terminates when it discovers that its output stream is
! not connected.
! The label is at offset 4 in block number 1 on the device. The range above also includes
! the 16-byte header.
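For example, the volume label of a CMS-formatted FBA disk could be examined along these lines (the device number 0200 is hypothetical; column 21 accounts for the 16-byte prefix plus offset 4 in the block):

```rexx
/* Sketch: display the volume label of the FBA disk at 0200.    */
/* fbaread 200 1 1 reads block number 1 as one record carrying  */
/* a 16-byte prefix; the 6-byte label starts at offset 4 of the */
/* block, that is, at column 21 of the record.                  */
'pipe fbaread 200 1 1 | spec 21.6 1 | console'
```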
! ──FBAWRITE──devaddr──┬─word────────────────────┬──number──number──
!                      ├─STRing──delimitedString─┤
!                      └─*───────────────────────┘
! Syntax Description: Specify the device number, the current label on the device, and the
! first and last block in the writable extent.
! The first and last blocks specify the extent into which blocks are written; the actual block
! address is obtained from the input record.
! Operation: fbawrite verifies the device number and label as part of the syntax check.
! Input Record Format: Each input record supplies a contiguous range of blocks to be
! written.
! Each record contains a sixteen-byte prefix followed by one or more blocks of 512 bytes.
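Assuming the record format above, a pipeline to copy an extent between two FBA minidisks might look like this sketch (the device numbers and the label SCRATCH are hypothetical):

```rexx
/* Sketch: copy blocks 0-99 from device 200 to device 300.      */
/* fbaread produces records with the 16-byte prefix fbawrite    */
/* expects; device 300 must carry the label SCRATCH, and the    */
/* writable extent is blocks 0 through 99.                      */
'pipe fbaread 200 0 100 | fbawrite 300 SCRATCH 0 99'
```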
──FBLOCK──number──┬──────┬──
                  └─xorc─┘
Type: Filter.
Operation: Conceptually, all records on the primary input stream are concatenated to a
single logical record, which is then written as a number of records that have fixed length.
An input record is, in general, spanned over output records (block FIXED does not allow
this). The last output record is short when the length of the concatenated input is not an
integral multiple of number and the pad character is omitted; the last record is padded to
the record length with the pad character, if present.
Streams Used: Records are read from the primary input stream and written to the primary
output stream. Null input records are discarded.
Record Delay: fblock delays input records as required to build an output record. The
delay is unspecified.
Premature Termination: fblock terminates when it discovers that its output stream is not
connected.
Examples: To count occurrences of characters in a file, first turn it into records of one
character that sort COUNT can tally:
...| fblock 1 | sort count |...
To ensure that the length of the input records to vchar is a multiple of three:
...| fblock 3000 | vchar 12 16 |...
This ensures that characters are not spanned across record boundaries.
When it is first in a pipeline, filedescriptor reads from the file until it receives zero bytes.
When it is not first in a pipeline, filedescriptor appends the contents of its input records to
the file.
The file must have been opened by the application before it issues the PIPE command; and
the file descriptor must be closed by the application after the pipeline has completed. A
program can use the callable services interface to open a file and to close a file descriptor.
──FILEDEScriptor──number──┬─────┬──
                          └─hex─┘
The number specifies the file descriptor for the file to be read or written. File descriptors
are integers from zero and up. Standard input is usually associated with file descriptor 0;
| standard output with 1; and standard error with 2. When a file descriptor is assigned, it
| gets the lowest unused number.
If it is present, the second operand specifies the storage address of an area into which
filedescriptor stores additional information in the event of an error being reflected from
OpenExtensions. The application should reserve thirty-two bytes for this area. Sixteen
bytes are currently stored: The return code and reason code (each four bytes binary); and
the name of the routine that returned the error (eight bytes, character). This operand
should be specified only when the pipeline is issued from a program; results are
unpredictable if this operand is specified from the command line or from a REXX program.
Streams Used: When filedescriptor is first in the pipeline, it writes records to the primary
output stream. When filedescriptor is not first in a pipeline, it passes the input record to
the output (if it is connected) after the record is written to the file.
Commit Level: filedescriptor starts on commit level -2000000000. It verifies that the
system does contain OpenExtensions and then commits to level 0.
Notes:
1. When a return value of -1 is received from OpenExtensions and the second operand is
omitted, filedescriptor issues error messages to identify the error before it terminates.
¡ 2. stdin is a convenience for filedescriptor 0.
¡ 3. stdout is a convenience for filedescriptor 1.
¡ 4. stderr is a convenience for filedescriptor 2.
Return Codes: When the second operand is specified, the return code is the error number
associated with the error. Otherwise the return code is the number of the error message
issued.
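As a sketch, a program that has standard input open on file descriptor 0 could copy it to a CMS file (the output file name is hypothetical):

```rexx
/* Sketch: copy standard input to a CMS file. The caller must   */
/* have opened file descriptor 0 before issuing the PIPE        */
/* command, and must close it afterwards.                       */
'PIPE filedescriptor 0 | > STDIN COPY A'
```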
When it is first in a pipeline, filetoken reads from the file. When it is not first in a pipeline
and RANDOM is omitted, it writes records to the file.
The file must have been opened by the application before it issues the PIPE command; and
the file must be closed by the application after the pipeline has completed. You can use
the CMS callable services to open the file.
CMS
──FILETOKEN──hex──┬─RANDOM──┬────────┬──┬─────────┬──┬───────────┬─┬──
                  │         └─NUMBER─┘  └─BLOCKed─┘  │ ┌───────┐ │ │
                  │                                  └─▼─range─┴─┘ │
                  ├─BACKwards───────────────────────────────────────┤
                  ├─UPDATE──────────────────────────────────────────┤
                  │   ┌──────────────────┐                          │
                  └───▼─┬─CHOP──────────┬┴──────────────────────────┘
                        ├─COERCE────────┤
                        ├─PAD──┬──────┬─┤
                        │      └─xorc─┘ │
                        ├─NOCHOP────────┤
                        └─NOPAD─────────┘
Warning: filetoken behaves differently when it is a first stage and when it is not a first
stage. Existing data can be overlaid when filetoken is unintentionally run other than as a
first stage. To use filetoken to read data into the pipeline at a position that is not a first
stage, specify filetoken as the argument of an append or preface control. For example,
|append filetoken ...| appends the data produced by filetoken to the data on the
primary input stream.
Syntax Description:
BACKWARDS Read the file backwards. BACKWARDS is recognised only when filetoken is
first in a pipeline.
BLOCKED Write a range of records from the file as a single output record; the file
must have fixed record format. BLOCKED is recognised only when filetoken
is first in a pipeline.
COERCE A convenience for PAD CHOP. COERCE is recognised only when filetoken
is not first in a pipeline.
CHOP Truncate long input records to the logical record length of the file. The
logical record length of a variable record format file is 65535 bytes.
CHOP is recognised only when filetoken is not first in a pipeline.
NOCHOP Do not truncate long records. Issue a message instead. NOCHOP is
recognised only when filetoken is not first in a pipeline.
NOPAD Do not pad short records. Issue a message on short records in fixed
format files; ignore null records in variable record format files. NOPAD
is recognised only when filetoken is not first in a pipeline.
NUMBER Prefix the record number to the output record. The field is ten characters
wide; it contains the number with leading zeros suppressed. NUMBER is
valid only after RANDOM is specified.
PAD Pad short records with the character specified. The blank is used as the
pad character if the following word does not scan as an xorc. In a fixed
format file, short records are padded on the right to the file’s record
length; in a variable record format file, a single pad character is written
for a null record. PAD is recognised only when filetoken is not first in
a pipeline.
: Input Record Format: When RANDOM is specified, input records contain a blank-
: delimited list where each word is a range.
When UPDATE is specified, the first 10 columns of an input record contain the number of
the record to replace in the file (the first record has number 1). Leading and trailing
blanks are acceptable; the number need not be aligned in the field. It is an error if an
input record is shorter than 11 bytes.
The valid values for the record number depend on the record format of the file:
Fixed For fixed record format files, any number can be specified for the record
number (CMS creates a sparse file if required). An input record can contain
any number of consecutive logical records as a block. The block has a single
10-byte prefix containing the record number of the first logical record in the
block.
Variable When the file has variable record format, the record number must be at most
one larger than the number of records in the file at the time the record is
written to it. The data part of input records must have the same length as the
records they replace in the file.
Record Delay: filetoken strictly does not delay the record. When RANDOM is specified
and filetoken is not a first stage, an input record that contains a single number is not
delayed. Nor is an input record that contains a single range, when BLOCKED is specified.
Commit Level: filetoken starts on commit level -2000000000. It allocates a buffer and
then commits to level 0.
Examples: To read records from a file for random update (error checking is omitted to
make the example shorter):
/* Get private work unit */
call csl 'dmsgetwu sfsrc sfsreason workunit'
/* Open the file */
file='MY MASTER .INVENTORY'
intent='WRITE NOCACHE'
call csl 'dmsopen sfsrc sfsreason file' length(file),
'intent' length(intent) 'filetoken workunit'
xtoken=c2x(filetoken) /* Make printable */
'PIPE',
'|filetoken' xtoken 'random number' ranges,
'| ... ',
'|filetoken' xtoken 'update'
pipeRC=RC
/* Return unit of work, which closes the file implicitly */
call csl 'dmsretwu sfsrc sfsreason workunit'
Notes:
1. Note that the file token is specified as an unpacked hexadecimal number. If you
opened the file in a REXX program, you must use the C2X built-in conversion function
to make the file token printable.
2. You can use one filetoken stage to read records from a file and another one to replace
or append to the same file, because the two stages use the same file token, as seen by
SFS.
¡ ──FILLUP──
¡ Type: Gateway.
¡ Operation: Initially, records are passed to the primary output stream. When the primary
¡ output stream is severed by its consumer (end-of-file propagates backwards), fillup switches
¡ to the secondary output stream, and so on until all records are copied or there is only one
¡ stream left. In the latter case, fillup shorts the primary input to the last remaining output
¡ stream.
¡ Streams Used: Records are read from the primary input stream; no other input stream
¡ may be connected.
¡ Commit Level: fillup starts on commit level -2. It verifies that the primary input stream
¡ is the only connected input stream and then commits to level 0.
¡ Premature Termination: fillup terminates when it discovers that its primary output stream
¡ is not connected.
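A sketch of the switching behaviour: take terminates after passing ten records (severing its input when its secondary output stream is not connected), so fillup switches to the secondary output stream for the remainder:

```rexx
/* FILLUP sketch: first ten records to output 0, the rest to 1. */
'callpipe (end ?)',
   '|*:',               /* Input records                  */
   '|f: fillup',
   '|take 10',          /* Severs its input after ten     */
   '|*.output.0:',      /* First ten records              */
   '?f:',
   '|*.output.1:'       /* Remaining records              */
```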
¡ ──FILTERPACK──┬─DROP──┬───────────┬──┬─────────────┬─┬──
¡               │       └─┤ Scope ├─┘  ├─ALL─────────┤ │
¡               │                      └─┤ Modules ├─┘ │
¡               │       ┌─ALL────┐                     │
¡               ├─LIST──┼────────┼──┬─────────┬────────┤
¡               │       ├─GLOBAL─┤  └─HEADING─┘        │
¡               │       └─THREAD─┘                     │
¡               ├─LOAD──┬───────────┬──┬─────────────┬─┤
¡               │       └─┤ Scope ├─┘  └─┤ Modules ├─┘ │
¡               │          ┌─ALL────┐                  │
¡               ├─MODLIST──┼────────┼──word────────────┤
¡               │          ├─GLOBAL─┤                  │
¡               │          └─THREAD─┘                  │
¡               │          ┌─────────┐                 │
¡               └─RESOLVE──▼─┬──────┬┴─────────────────┘
¡                            └─word─┘
¡ Scope:
¡ ├──┬─GLOBAL─┬──┤
¡    └─THREAD─┘
¡ Modules:
¡    ┌──────┐
¡    ▼      │
¡ ├───word──┴──┤
¡ Placement: filterpack LIST and filterpack MODLIST must be first stages. For
¡ other variants, you may supply a list of modules or entry points on the primary input
¡ stream in addition to those specified as arguments.
¡ Syntax Description:
¡¡ DROP Terminate use of filter packages. You can drop only filter packages that
¡ have been loaded by filterpack LOAD.
¡¡ LIST Write a line for each filter package currently loaded.
¡¡ LOAD Use filter packages. Specify the names of the modules to be loaded.
¡¡ MODLIST List contents of a filter package.
¡¡ RESOLVE Resolve an entry point and write the name of the containing filter
¡ package.
¡¡ GLOBAL Specify the global scope. This is the default for CMS and for the z/OS
¡ job step task. When used with filterpack LIST or filterpack MODLIST, the
¡ search for a filter package is restricted to the specified scope.
¡¡ THREAD Specify the thread-local scope. This is the default for z/OS tasks other
¡ than the job step task. When used with filterpack LIST or filterpack
¡ MODLIST, the search for a filter package is restricted to the specified
¡ scope.
¡ 3. The use count. The number of stages currently active in the filter package.
¡ 4. The address of the entry point table.
¡ 5. The address of the keyword look up table.
¡ 6. The address of the message text table.
¡ 7. The address of the user function table for spec.
¡ filterpack MODLIST writes detailed information about the filter package specified including
¡ its entry points.
¡ filterpack RESOLVE writes one or two words for each word in the arguments and input:
¡ the name of the stage and, if the name is resolved, the name of the filter package that
¡ contains it. Built-in programs are considered to be in the filter package “ builtin”,
¡ which is in lower case and has a leading blank.
¡ Premature Termination: filterpack terminates when it discovers that its output stream is
¡ not connected.
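For instance, the filter packages currently loaded could be listed at the terminal (a sketch; the output format is as described above):

```rexx
/* Sketch: list loaded filter packages with a heading line.     */
/* filterpack LIST must be a first stage.                       */
'PIPE filterpack LIST HEADING | console'
```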
──┬─FIND──┬────────┬──────────────────────┬──
  │       └─string─┘                      │
  └─STRFIND──┬─────────┬──delimitedString─┘
             └─ANYcase─┘
Syntax Description: A string is optional for find. The string starts after exactly one blank
character. Leading and trailing blanks are significant.
Operation: Input records are matched the same way XEDIT matches text in a FIND
command (tabs 1, image off, case mixed respect):
A null string matches any record.
Blank characters in the string represent positions that must be present in the input
record, but can have any value.
An underscore in the string represents a position where there must be a blank char-
acter in the input record.
All other characters in the string must be equal to the contents of the corresponding
position in the input record. Case is ignored if ANYCASE is specified.
find copies records that match to the primary output stream (or discards them if the
primary output stream is not connected). It discards records that do not match (or copies
them to the secondary output stream if it is connected).
Streams Used: Records are read from the primary input stream. Secondary streams may
be defined, but the secondary input stream must not be connected.
Record Delay: An input record is written to exactly one output stream when both output
streams are connected. find strictly does not delay the record.
Commit Level: find starts on commit level -2. It verifies that the secondary input stream
is not connected and then commits to level 0.
The first pipeline has two literal records that are both selected (the blank in the argument
to find means “don’t care”). The argument string to find is four bytes in the second pipe-
line; thus, the record created by the second literal stage is not selected because it is only
three bytes long.
There are two blank characters after find. This means a record must have at least one
character to be selected (but it does not matter what the character is).
Notes:
1. All matching records are selected, not just the first one.
! 2. CASEANY, CASEIGNORE, CASELESS, and IGNORECASE are all synonyms for ANYCASE.
3. Remember that REXX continuation functionally replaces a trailing comma with a blank.
Also recall that when two strings are separated by one or more blanks, REXX
concatenates them with a single blank. Use the concatenation operator (||) before the
comma at the end of the line if your portrait style has the stage separators at the left
side of the stage and the trailing blank is significant to your application.
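To illustrate the matching rules, this sketch selects records that have a blank in column 1 and the letter x in column 3; the blank in column 2 of the argument is a “don't care” position (the input file name is hypothetical):

```rexx
/* Sketch: in find's argument an underscore requires a blank    */
/* and a blank matches any character present, so the argument   */
/* '_ x' selects records at least three bytes long with a blank */
/* in column 1 and 'x' in column 3.                             */
'PIPE < input file | find _ x| console'
```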
──FITTING──word──
Type: Gateway.
Syntax Description:
Operation: When fitting is first in a pipeline, it accepts data from the copipe’s fitting
request parameter list and injects these data into the pipeline.
When fitting is not first in a pipeline, it makes its input records available in the copipe’s
fitting request parameter list. When the copipe has consumed the record, it is passed to the
output stream (if it is connected).
Notes:
1. fitting can be used only when the pipeline is invoked with a FITG parameter token.
2. Refer to PIPE Command Programming Interface for further information. It is avail-
able for download from:
http://pucc.princeton.edu/~pipeline
            ┌─EDF─┐
          ┌─┼─────┼──┤ DateFormats ├─┐
          │ └─CDF─┘                  │
──FMTFST──┼──────────────────────────┼──
          ├─EDF──CDF─────────────────┤
          └─CDF──EDF─────────────────┘
DateFormats:
├──┬─SHOrtdate───────────────┬──┤
   ├─ISOdate─────────────────┤
   ├─FULldate────────────────┤
   ├─STAndard────────────────┤
!  └─STRing──delimitedString─┘
Syntax Description: Two optional keywords specify the format of the FST on input or
output, or both. EDF indicates that the FST is in the 64-byte format used by VM/System
Product and later versions; CDF specifies the 40-byte block used with the file system in
VM/370. Selected information from the FST is formatted unless two keywords for FST
format are specified.
When the FST is formatted, you can select one of these keywords to specify how the file’s
timestamp should be formatted:
FULLDATE The file’s timestamp is formatted in the American format, with the
century: 3/09/1946 23:59:59.
ISODATE The file’s timestamp is formatted with the century in one of the formats
approved by the International Standardisation Organisation:
1946-03-09 23:59:59.
SHORTDATE The file’s timestamp is formatted in the American format, without the
century: 3/09/46 23:59:59.
STANDARD The file’s timestamp is formatted as a single word in a form that can be
used for comparisons: 19460309235959.
!! STRING Specify custom timestamp formatting, similar to the POSIX strftime()
! function. The delimited string specifies formatting as literal text and
! substitutions are indicated by a percentage symbol (%) followed by a
! character that defines the substitution. These substitution strings are
! recognised by fmtfst:
! %% A single %.
! %Y Four digits year including century (0000-9999).
! %y Two-digit year of century (00-99).
! %m Two-digit month (01-12).
! %n Two-digit month with initial zero changed to blank ( 1-12).
! %d Two-digit day of month (01-31).
! %e Two-digit day of month with initial zero changed to blank ( 1-31).
! %H Hour, 24-hour clock (00-23).
! %k Hour, 24-hour clock first leading zero blank ( 0-23).
! %M Minute (00-59).
! %S Second (00-60).
! %F Equivalent to %Y-%m-%d (the ISO 8601 date format).
! %T Short for %H:%M:%S.
! %t Tenths and hundredths of a second (00-99).
Operation: If it is present, the second keyword specifies the format to which the FST
entry should be converted. When converting to CDF, the seconds part of the timestamp is
lost; some counts are converted from fullword to halfword; a value of 64K or more is
stored as 64K-1 in the output file status table.
Input Record Format: When the arguments are omitted or the first keyword is EDF, the
input record must be 64 bytes. When the first keyword is CDF, the input record must be
40 bytes. The record is assumed to be in the format of a file status table entry as defined
for the FSSTATE macro.
Output Record Format: When two keywords are specified, the output record is 40 or 64
bytes, as appropriate to the second keyword. Otherwise, selected fields of the file status
are formatted and written as a record: the file name, type, and mode; the record format
and logical record length; the number of records and the number of disk blocks in the file;
the date and time of last change to the file.
Streams Used: Records are read from the primary input stream and written to the primary
output stream. Null input records are discarded.
Premature Termination: fmtfst terminates when it discovers that its output stream is not
connected.
Notes:
1. fmtfst is designed to process the output from aftfst NOFORMAT, state NOFORMAT, and
statew NOFORMAT.
2. SORTED is a synonym for STANDARD.
: 3. When converting between FST formats, the number of disk blocks is copied
: unchanged, except for capping. While this does not reflect what would have happened
: when copying a file between the two disk formats, the computation is in general
: intractable and therefore not attempted.
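Per note 1, fmtfst is typically fed by a state variant; a sketch that lists selected FST fields with ISO-format timestamps, assuming statew accepts NOFORMAT after the file identifier:

```rexx
/* Sketch: format the FSTs of all files on the A-disk, showing  */
/* the last-change timestamp in ISO format.                     */
'PIPE statew * * a NOFORMAT | fmtfst ISOdate | console'
```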
  ──┬─FRLABEL──┬────────┬─────────────────────────────────────┬──
    │          └─string─┘                                     │
|   │                          ┌─EXCLUSIVe─┐                  │
    └─STRFRLABEL──┬─────────┬──┼───────────┼──delimitedString─┘
¡                 └─ANYcase─┘  └─INCLUSIVe─┘
Syntax Description: A string is optional for frlabel. The string starts after exactly one
blank character. Leading and trailing blanks are significant.
Operation: Characters at the beginning of each input record are compared with the argu-
ment string. When ANYCASE is specified, case is ignored in this comparison. Any record
matches a null argument string. A record that is shorter than the argument string does not
match.
frlabel copies records up to (but not including) the matching one to the secondary output
stream (or discards them if the secondary output stream is not connected). It then passes
the remaining input records to the primary output stream.
Streams Used: Records are read from the primary input stream. Secondary streams may
be defined, but the secondary input stream must not be connected. frlabel severs the
secondary output stream before it shorts the primary input stream to the primary output
stream.
Record Delay: An input record is written to exactly one output stream when both output
streams are connected. frlabel strictly does not delay the record.
Commit Level: frlabel starts on commit level -2. It verifies that the secondary input
stream is not connected and then commits to level 0.
Examples: To discard records on the primary input stream up to the first one beginning
with the characters 'abc':
/* Skip to first record with label */
'callpipe *: | frlabel abc'
Because this invocation of frlabel has no secondary output stream, records before the first
one beginning with the string are discarded. The CALLPIPE pipeline command ends when
frlabel shorts the primary input stream to the unconnected primary output stream; the
matching record stays in the pipeline.
Notes:
1. fromlabel is a synonym for frlabel.
! 2. CASEANY, CASEIGNORE, CASELESS, and IGNORECASE are all synonyms for ANYCASE.
3. Remember that REXX continuation functionally replaces a trailing comma with a blank.
Also recall that when two strings are separated by one or more blanks, REXX
concatenates them with a single blank. Use the concatenation operator (||) before the
comma at the end of the line if your portrait style has the stage separators at the left
side of the stage and the trailing blank is significant to your application.
──FRTARGET──word──┬────────┬──
                  └─string─┘
Type: Control.
Syntax Description: The argument string is the specification of a selection stage. The
stage must support a connected secondary output stream. If the secondary input stream to
frtarget is connected, the argument stage must also support a connected secondary input
stream.
Commit Level: frtarget starts on commit level -2. It issues a subroutine pipeline that
contains the argument stage. This subroutine must commit to level 0 in due course.
Examples: To pass to the secondary output stream all records up to the first one that
contains a string and to pass the remaining records to the primary output stream:
/* Frtarget example */
'callpipe (end ? name FRTARGET)',
'|*:', /* Connect to input */
'|f: frtarget locate /abc/', /* Look for it */
'|*.output.0:', /* target and following */
'?f:',
'|*.output.1:' /* Records before target */
exit RC
Notes:
1. fromtarget is a synonym for frtarget.
2. It is assumed that the argument stage behaves like a selection stage: the stage should
produce without delay exactly one output record for each input record; it should
terminate without consuming the current record when it discovers that its output streams are
no longer connected. However, for each input record the stage can produce as many
records as it pleases on its secondary output stream; it can delete records. The stage
should not write a record first to its secondary output stream and then to its primary
output stream; this would cause the trigger record to be written to both output streams.
If the argument stage has delayed record(s) (presumably by storing them in an internal
buffer) at the time it writes a record to its primary output stream, it will not be able to
write these records to any output stream; the streams that are connected to the two
output streams are severed when the argument stage writes a record to its primary
output stream. End-of-file is reflected on this write. The records held internally in the
argument stage will of necessity be lost when the stage terminates.
3. The argument string to frtarget is passed through the pipeline specification parser only
once (when the scanner processes the frtarget stage), unlike the argument strings for
append and preface.
¡ 4. frtarget is implemented using fillup and fanoutwo. The stage under test has only
¡ primary streams defined. The primary output stream is connected to a stage that reads
¡ a record without consuming it and then terminates. This means that any usage that
¡ depends on the secondary stream in the stage under test will fail.
Return Codes: If frtarget finds no errors, the return code is the one received from the
selection stage.
──FULLSCReen──┬─────────┬──┬────────────────┬──┬──────────┬──
              └─devaddr─┘  └─ASYNchronously─┘  ├─NOREAD───┤
                                               └─CONDREAD─┘
Syntax Description: Options can be in any order. On z/OS, only the options NOREAD,
CONDREAD, and READFULL are accepted.
A hexadecimal word specifies the virtual device address of a 3270 terminal attached or
dialled to the virtual machine. The virtual machine console is used if no device address is
specified. Do not specify the virtual device address of the console for the virtual machine.
The option NOREAD suppresses reads from the terminal for all input lines. CONDREAD
suppresses reads from the terminal when the control byte in column one of the input record
includes the bit for X'01'.
READFULL specifies that the terminal is to be read with a read buffer operation; the default
is to use a read modified operation.
The CMS CONSOLE macro interface is used in an XA-mode virtual machine. The options
CONSOLE and DIAG58 can be used in a 370-mode virtual machine to specify the interface to
be used to access the terminal. CONSOLE selects the CMS CONSOLE interface; DIAG58
selects the CP diagnose 58 interface. An installation can tailor the default for this option.
CMS/TSO Pipelines is shipped with DIAG58 as the default; your installation may have
selected CONSOLE as the default.
WAIT can be specified when the diagnose 58 interface is used; it causes an enabled wait to
be entered after an error has caused a unit check on a locally attached non-SNA terminal.
When the CONSOLE interface is used, a path may be specified with PATH; the default path
is a unique name generated by fullscr. The path is closed at end-of-file unless NOCLOSE is
specified. Use PATH and NOCLOSE with a dialled terminal on releases prior to CMS 6
unless you wish CMS to drop the dialled connection after fullscr closes the path.
The keyword ASYNCHRONOUSLY specifies that input records are written to the display as
they arrive and that data are read from the terminal in response to an attention interrupt.
This allows additional input records to be written to the display without waiting for the
user to cause an attention interrupt. ASYNCHRONOUSLY is incompatible with NOCLOSE.
Operation: When diagnose 58 is used to write and read the virtual machine console, the
control byte at the beginning of each record is inserted into byte 5 of the X'29' CCW;
appropriate CCWs are generated for a dialled or attached terminal. As far as CMS/TSO
Pipelines is concerned, the control byte can specify any bit pattern; if the X'A0' bits are
on (indicating clear and structured field), then an erase/write alternate CCW is prefixed and
command chained to the write CCW, and the leftmost two bits are cleared in the control
byte of the write CCW for the structured field.
On an attached device or a dialled-in screen not accessed through the CONSOLE interface, a
suitable CCW string is built based on the contents of the control byte.
A write operation is performed after fullscr has read an input record unless you use
CONDREAD and the input record is the single character X'02' or X'06'.
There are several ways to control the conditions under which fullscr waits for a response
or reads the device:
Use the option NOREAD to specify that you do not wish to wait for the terminal oper-
ator to enter a transaction. This is useful for an application that requires no operator
intervention, for instance to update a status display or to write to a printer.
Use CONDREAD to defer the decision to each individual data record. With this option,
the device is read only if the rightmost bit (X'01') of the control byte is zero. With
the CONDREAD option, the rightmost bit of the control byte set in the X'29' CCW is
always zero.
Specify ASYNCHRONOUSLY to read only in response to an attention interrupt from the
terminal.
Specify none of these options to wait for an attention interrupt after each write, and
then read the terminal.
A record containing the single byte X'00' is written to the output when CP signals that the
terminal is in line mode at the time a full screen write is attempted (the write receives
X'8E' status). Assuming that the screen is written without error, that ASYNCHRONOUSLY
is omitted, and that fullscr does not wait for an attention interrupt, an output record
containing the single byte X'02' is written as soon as the write completes.
A solicited read operation is performed without writing to the terminal if you specify
CONDREAD and the input record contains a single X'02' or X'06'. The former causes a
read buffer operation; the latter causes a read modified. The option READFULL is ignored
for a solicited read.
When a solicited read is not performed and input is not suppressed, fullscr waits for an
attention interrupt from the terminal, reads the inbound 3270 data stream, and writes it to
the pipeline. Read modified is the default way of reading; read buffer is requested by the
option READFULL. Write for positioning is not supported. The first byte of the output
record is the attention ID (AID) character. The rest of the data depends on 3270 idiosyn-
crasies. Refer to the 3274 Description and Programmer’s Guide, GA23-0061, for details.
Input Record Format: The first position is a control byte that specifies how the record is
to be processed. When the control byte includes the bit for X'20', the remainder of the
input record consists of structured fields. When the control byte does not include the bit
for X'20', additional data are an outbound 3270 data stream; the second byte of the record
is the write control character (WCC); the remainder of the record is 3270 orders and data.
Some of the bits in this control byte are defined by CP (the control field for CCWs with
operation code X'29'); others are defined by CMS/TSO Pipelines:
100x xxxx Erase. The screen is cleared and set to the default size (24 by 80). When
no device address is specified (the virtual machine console is being used),
CP sets full screen mode.
110x xxxx Erase and write alternate. The screen is cleared and set to the alternate
size. (The alternate size depends on the terminal; it is usually larger than
24 by 80.) Real 3277s do not support the alternate mode; the command
will be rejected by CP. When no device address is specified (the virtual
machine console is being used), CP sets full screen mode.
1x10 0000 Erase and write structured field. The screen is cleared. A write struc-
tured field operation is performed. When no device address is specified
(the virtual machine console is being used), CP sets full screen mode.
0010 0000 Write structured field without erasing. A write structured field operation
is performed. When no device address is specified (the virtual machine
console is being used), the screen must be in full screen mode.
1x01 xxxx Reflect the terminal break key to CMS/TSO Pipelines. This is available
only when using the diagnose interface (DIAG58). The terminal break key
function (normally Program Access key 1) is disabled. When the bit for
X'10' is zero, the terminal break key can cause a CP break-in (that is, CP
takes over the terminal in line mode), unless break-in is set to guest
control (CP TERMINAL BREAKIN GUESTCTL).
xxxx xxx1 Perform no read. When used with CONDREAD, this bit specifies that the
write operation should be performed without waiting for an attention and
without reading from the terminal.
0000 0010 Perform a read buffer. When no device address is specified (the virtual
machine console is being used), the screen must be in full screen mode.
CONDREAD must be specified to enable this; the input record must be one
byte.
0000 0110 Perform a read modified. When no device address is specified (the virtual
machine console is being used), the screen must be in full screen mode.
CONDREAD must be specified to enable this; the input record must be one
byte.
0000 000x Perform a write without erase. When no device address is specified (the
virtual machine console is being used), the screen must be in full screen
mode.
A null input record is processed as X'0040', which indicates a write with a Write Control
Character specifying no operation and no data.
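A record for a simple erase/write might be built like this in REXX (a sketch; the WCC X'C3' — reset, keyboard restore, reset MDT — and the message text are illustrative):

/* Control byte X'80' (erase/write), WCC X'C3', then 3270 orders and data */
screen = '80C3'x || 'Status: job complete'
'PIPE var screen | fullscr noread'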
Output Record Format: The output record consists of an inbound data stream from the
terminal. The first character is an attention identifier (AID) or a pseudo-AID generated by
CMS/TSO Pipelines. fullscr writes a one-byte output record (a pseudo-AID) when it cannot
or should not read from the terminal. If CP has put the terminal into line mode, fullscr
does not attempt to recover. More than the present 3270 data stream may be needed to
reformat the screen or set alternate screen size, or both.
The following values (in hexadecimal) are defined for the first byte of the output record:
00 CP has put the screen into line mode, as indicated by X'8E' unit status. The write
was rejected; nothing was written to the screen.
01 CP has put the screen into line mode, as indicated by X'8E' unit status. Data were
written to the screen, but the subsequent read failed because of a CP break-in.
02 Data have been written to the terminal; the read was suppressed as requested by
NOREAD, or by CONDREAD with a control byte indicating that the read should be
suppressed.
60 The screen has been read in response to a solicited read or a CP-generated attention
due to a pending CP warning. Positions 2 and 3 contain the cursor position. For a
solicited read, the data are in the format requested (read buffer or read modified).
For a CP-generated attention, a read buffer has been performed if READFULL is
specified; otherwise a read modified has been performed.
88 A structured reply has been read from the terminal. The first structured field begins
in column 2.
xx Data have been received from the terminal in response to an operator action. The
AID identifies the key that generated the inbound transmission. Positions 2 and 3
contain the cursor position. When READFULL is in effect, the remaining data contain
the complete screen buffer; when READFULL is not specified, the remaining data are
modified fields.
Streams Used: Records are read from the primary input stream and written to the primary
output stream. The input record is consumed after the screen is written and before the
output record is written. This prevents a stall when both the input stream and the output
stream from fullscr are connected to a REXX program that controls the panels shown on the
display.
Commit Level: fullscr starts on commit level -2000000000. It ensures that the device is
not already in use by another stage, allocates a buffer, and then commits to level 0.
Premature Termination: fullscr terminates when it discovers that its output stream is not
connected. Use hole to consume output from fullscr that is not to be processed further.
Examples: PIPDSCR EXEC shipped in PIPDSCR PACKAGE uses fullscr; see also RPQRY EXEC
and 3270LOAD EXEC. SCRCTL REXX shipped in PIPGDSCR PACKAGE shows how to manage a
full screen display; the subroutine POPUP generates a panel.
To sound the alarm (assuming the terminal is in full screen mode and diagnose 58 is
used):
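A minimal sketch, assuming the X'04' bit in the WCC requests the audible alarm (the control byte X'00' specifies a write without erase, and NOREAD suppresses the read):

/* Sound the terminal alarm without writing data or reading a response */
'PIPE strliteral x0004 | fullscr noread'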
To poll the display to allow user input from a status display: send a record containing
X'02' (read buffer) or X'06' (read modified) to fullscr to perform a solicited read.
fullscr will then write an output record containing data read from the screen. The resulting
attention ID is X'60' (a hyphen) if the user has not caused an attention interrupt. Note
that such a read may show user input that is being entered, for which the user has not yet
pressed an attention key.
/* Poll the user */
'PIPE strliteral x02 | fullscr condread path demo noclose | var response'
If left(response,1)¬='-' /* Action key pressed? */
Then call user_input /* Input or CP break in */
Notes:
1. On a 3274 control unit supporting structured fields, you can issue any command as a
3270 data stream (3270DS) structured field. This lets you issue, for instance, an erase
all unprotected command.
2. Improper data stream programming (lack of keyboard restore (X'02') in the Write
Control Character) can get a 3270 terminal into a state where the keyboard is locked
while fullscr is waiting for input from the terminal. Use the reset key on a locally
attached terminal to enable keyboard entry. Use the ATTN key (in the upper left hand
corner of the keyboard) to gain access to CP from a terminal that is attached to an SNA
control unit.
3. CP does not reflect errors in a 3270 data stream on terminals attached via PVM logical
devices, VM/VCNA, or VM/VTAM. Thus, message 160 cannot be issued for such a
terminal. VM/VCNA disconnects the terminal in this case; CMS/TSO Pipelines hangs
until the terminal is reconnected. It may be necessary to enter CMS DEBUG and issue
HX or to IPL CMS to recover.
4. fullscr supports an attached IBM 4224 printer using the Intelligent Printer Data Stream
(IPDS). Request acknowledgement in the last structured field of a transmission so that
an attention interrupt is always generated, be that for a NACK or as a solicited
acknowledgement. This ensures that there is an inbound transmission after each write.
5. When CONSOLE is used, the macro interface requires that the first write to a path must
erase the screen unless it is a write structured field. When the control byte of the first
input record to fullscr indicates neither erase nor write structured field, and the control
unit supports structured fields, fullscr generates a dummy write structured field order
with a null 3270DS structured field to allow read or write without an initial erase.
6. When CONSOLE is used, the bit for X'10' in the control byte cannot be used to
suppress the break-in function. Instead, issue the command TERMINAL BRKKEY NONE
(or TERMINAL BRKKEY PF24 if the terminal has 12 program function keys) to suppress
the break function.
7. CMS/TSO Pipelines performs I/O operations directly to the terminal when using the
diagnose interface. Specify the WAIT option to make fullscr enter an enabled wait
state when a unit check occurs. The screen is not reset after the error condition; you
can enter test mode on the terminal and display the hardware control blocks with
information about where your 3270 data stream is in error. Go back to the normal
mode and log on to VM again. Press reset and clear the screen if you do not go into
test mode. Enter the immediate command HW to get out of the enabled wait.
9. Though IBM 3270 terminals observe the protocols and orders specified in IBM 3270
Information Display System, Data Stream Programmer’s Reference, GA23-0059, terminal
emulators and protocol converters in general have been observed to deviate from this
specification, in particular amongst the ones found in university environments.
Though this may be regrettable, it is a fact of life, with which a pipeline programmer
must cope.
──FULLSCRQ──┬─────────┬──
└─devaddr─┘
Syntax Description: A device address is optional on CMS. The logon terminal is queried
when no address is specified. No arguments are allowed on z/OS.
Operation: One record containing the terminal characteristics is written to the primary
output stream.
Output Record Format: On CMS, the output record consists of sixteen bytes of informa-
tion from diagnose 24 followed by the information returned by diagnose 8C. On z/OS,
this information is synthesised from information provided by TSO, possibly augmented with
the response to a device query.
Commit Level: fullscrq starts on commit level -2000000000. It obtains the information
required, in some cases by doing I/O to the terminal, and then commits to 0.
Notes:
1. CP caches the query reply; the contents of the output record reflect the status at the
time the user logged on or last reconnected. Applications that need to know which
symbol sets are currently loaded should perform their own query.
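For instance, to display the query information in hexadecimal (a sketch):

/* Show the diagnose 24 data and the cached query reply in hex */
'PIPE fullscrq | spec 1-* c2x 1 | console'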
When fullscrs is not first in a pipeline it is assumed that the input record is in the format
produced by fullscrq; when fullscrs is first in a pipeline it prefixes fullscrq to produce a
record describing the terminal or the specified display (which must have been dialled in).
──FULLSCRS──┬─────────┬──
└─devaddr─┘
Syntax Description: A device address is optional on CMS. The device (or the logon
terminal) may be queried by fullscrs when there is insufficient information in the input
record. The device address is verified only if fullscrs is first in a pipeline or if it needs to
perform an I/O operation to the device.
3. APL/TEXT flag:
0 APL/TEXT not present.
1 3278 APL/TEXT is present; use X'08' graphics escape orders.
2 3277 APL/TEXT is present; use X'1D' escape sequences.
Commit Level: fullscrs starts on commit level -1. It verifies its arguments and then
commits to 0.
Examples: To format the device information so that the character set information is on a
line by itself:
pipe fullscrs | spec word 1.10 1 write word 11 1 write word 12-* 1 | console
No real device attached for CONSOLE.
... Issued from stage 1 of pipeline 1.
... Running "fullscrq".
Ready(00166);
──GATE──┬────────┬──
└─STRICT─┘
Type: Gateway.
Operation: gate issues the SELECT ANYINPUT pipeline command to wait for a record to
¡ arrive on any of its input streams. When a record arrives on the primary input stream, the
¡ primary input stream is shorted to the primary output stream; gate then terminates.
When a record arrives on a stream other than the primary input stream and the option
STRICT is specified, gate checks the primary input stream to see if a record is available
before passing the record on other input streams to the corresponding output stream; gate
¡ then terminates without consuming the record.
When a record arrives on a stream other than the primary input stream and STRICT is
omitted, gate may continue passing records while the primary input stream has a record
ready; how many records depends on the pipeline dispatcher’s strategy, which is
unspecified. However, when gate is used to gate output from a selection stage, it may be
known that there can be only one record available on all inputs at any one time; in this
case you can avoid the overhead of the STRICT option.
If no record arrives on the primary input stream, gate terminates normally when all input
streams are at end-of-file.
¡ Streams Used: The primary input stream is shorted to the primary output stream.
Records are passed from other input streams to the corresponding output stream.
gate ignores end-of-file on its primary output stream; it propagates end-of-file between the
two sides of streams 1 and higher.
Commit Level: gate starts on commit level -2. It allocates the resources it needs and
then commits to level 0.
Examples: To terminate a starmsg and a udp stage when the immediate command STOP
is issued:
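A sketch of the idea with starmsg only (the stage wiring is illustrative; a udp stage could occupy a further gate stream). The record from immcmd arrives on gate's primary input, which makes gate terminate and thus sever starmsg's output:

/* STOP typed as an immediate command ends the pipeline */
'PIPE (end ? name STOPGATE)',
   'immcmd STOP',    /* One record when STOP is entered */
   '|g: gate',
   '|hole',          /* Discard the trigger record */
   '?starmsg',       /* Messages arrive on gate's secondary input */
   '|g:',
   '|console'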
A subroutine pipeline that terminates when it meets a record containing the string abc (in
essence the function performed by totarget):
/* To target */
'callpipe (end ? name GATE)',
'|*:', /* Read input */
'|l: locate /abc/', /* Select the trigger */
'|g: gate', /* Force it to terminate */
'?l:', /* Records that don't contain string */
'|g:', /* Until one that does */
'|*:' /* Pass output */
exit RC
This subroutine pipeline passes records until a record is met that causes locate to write to
its primary output. When this happens, gate terminates. This, in turn, severs both output
streams from locate, and locate terminates without consuming the trigger record.
Notes:
¡ 1. Use a cascade of gate stages instead of a single stage having more than two streams
¡ defined to avoid stalls when gate is blocked in writing a record. This also avoids the
¡ overhead of specifying STRICT.
In the round robin mode, gather passes a record from the primary input stream to the
primary output stream, then a record from the secondary input stream to the primary output
stream, and so on. It returns to the primary input stream when it has passed a record from
all defined streams.
In the stream identifier mode, gather reads a record from the primary input stream and
extracts a stream identifier from a specified input range. If the stream identifier is null,
blank, decimal zero, or an identifier that resolves to stream zero, the record is passed to the
primary output stream. Otherwise a record is passed from the designated stream to the
primary output stream.
┌─STOP──ALLEOF─────────┐
──GATHER──┼──────────────────────┼──
├─STOP──┬─ALLEOF─┬─────┤
│ ├─ANYEOF─┤ │
│ └─number─┘ │
└─STREAMid──inputRange─┘
Type: Gateway.
STOP ALLEOF, the default, specifies that gather should continue as long as at
least one input stream is connected. ANYEOF specifies that gather should
stop as soon as it determines that an input stream is no longer connected.
A number specifies the number of unconnected streams that will cause
gather to terminate. The number 1 is equivalent to ANYEOF.
STREAMID Specify the input range that contains the stream identifier to be used to
select the input stream to read from.
When STREAMID is specified and the record is passed from a stream that is not the primary
input stream, the record on the primary input stream is consumed after the record that is
passed.
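A sketch of the stream identifier mode (all data are illustrative). The identifier 1 in column one of the primary input record selects the record waiting on stream 1:

/* The record "hello from stream one" is passed to the console */
'PIPE (end ? name GATHERID)',
   'literal 1',                       /* Identifier record on the primary input */
   '|g: gather streamid 1.1',         /* Column 1, length 1, holds the identifier */
   '|console',
   '?literal hello from stream one',  /* Candidate record on stream 1 */
   '|g:'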
Commit Level: gather starts on commit level -2. It verifies that the primary output
stream is the only connected output stream and then commits to level 0.
Premature Termination: gather terminates when it discovers that its primary output
stream is not connected. gather STREAMID terminates as soon as it reaches end-of-file on
any of its input streams.
Examples: gather is often used to gather the records that deal spread out to a set of
parallel streams. Such a pipeline specification is usually built in a loop:
pipe='(end ?)'
beg='*:|D:deal'
end='gather|*:'
do i=1 to streams
   pipe=pipe beg '|parallel process|G:' end
   beg='?D:' /* Just the stream next time */
   end='' /* Only first time */
end
'callpipe' pipe
gather STREAMID is designed for the application that needs to process some records
through a particular pipeline segment and others in some other way, where the processing
involves elastics. Assume that records containing 1 in the first column must be sent to a
server for processing; other records contain 0 in column one. To allow for some overlap,
the server should run in parallel; there might even be several server threads (using deal):
When building this type of pipeline network, you should be careful not to flood the server.
In this example this is ensured by the keyword ONERESPONSE, which makes tcpclient not
delay the record; thus, fanout will be blocked if a second request arrives before the earlier
one is processed.
Notes:
1. Input records must arrive in the order that gather reads them. Use faninany when the
order cannot be predicted.
getfiles—Read Files
getfiles reads the contents of files into the pipeline. The files to read (as defined for <) are
specified in input records. On CMS, this consists of the file name, the file type, and
optionally the file mode. On z/OS, a single word or two words are acceptable.
──GETfiles──
Operation: getfiles transforms each input record into a subroutine pipeline that is issued
with pipcmd to read the file. A file is unpacked if it is packed.
Input Record Format: Input lines list files to be read into the pipeline. If present in the
first seven columns, the string ' &1 &2 ' is ignored. (It is generated by the CMS
command LISTFILE with the EXEC option.) After this string is removed, the first three
words of the input line are passed as the arguments to a < stage, which reads the contents
of the file into the pipeline.
Streams Used: Records are read from the primary input stream and written to the primary
output stream. Null input records are discarded.
Record Delay: getfiles writes all output for an input record before consuming the input
record.
Examples: To read files whose names match a pattern and count the number of lines in
them all:
pipe cms listfile ug* script h | getfiles | count lines | console
Notes:
1. getfiles terminates prematurely only if the pipeline stalls.
Return Codes: The return code from getfiles is the one from pipcmd, which in turn is the
aggregate of the return codes from the CALLPIPE pipeline commands that issue the subrou-
tine pipelines. A negative return code causes pipcmd to terminate; a positive return code
indicates that all input records have been processed, though one or more were processed
in error.
¡ ──GREG2SEC──┬────────────────────┬──
¡ └─OFFSET──┬─number─┬─┘
¡ └─*──────┘
¡ Type: Filter.
¡ Syntax Description:
¡ Streams Used: Records are read from the primary input stream and written to the primary
¡ output stream. Null and blank input records are discarded.
¡ Premature Termination: greg2sec terminates when it discovers that its output stream is
¡ not connected.
¡ Notes:
¡ 1. The epoch started at 00:00:00 GMT on January first, 1970. This is the epoch used in
¡ UNIX systems.
¡ 2. LOCAL may also be specified to apply the local time zone offset.
¡ 3. A time zone offset of 86399 is not the same as one of -1.
¡ 4. For dates before year 1970, greg2sec ignores all issues as to whether the day actually
¡ occurred or the year existed at all.
¡ 5. The largest valid input timestamp is 99991231235959.
¡ 6. Leap seconds are not accounted for, as most UNIX systems also ignore this issue.
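For example, assuming input timestamps have the form yyyymmddhhmmss (consistent with note 5), one day after the start of the epoch converts to 86400 seconds:

/* Should write 86400 (one day; leap seconds are ignored) */
'PIPE literal 19700102000000 | greg2sec | console'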
Help about CMS Pipelines is stored either in the file PIPELINE HELPLIB or as standard CMS
help files. Help about TSO Pipelines is stored in a partitioned data set which should be
allocated to FPLHELP.
On CMS, help requests for CMS Pipelines topics are forwarded to standard CMS HELP in
either of these cases:
help is issued and the STYLE configuration variable is set to DMS.
help or ahelp cannot find the file PIPELINE HELPLIB or the requested information is not
in the library.
The remainder of this article applies to the case where PIPELINE HELPLIB is used or where a
DB2 Server for VM topic is displayed.
When help finds the information to display, help is displayed on your terminal (using a
normal XEDIT session on CMS) unless the primary output stream is connected; when the
output stream is connected, the information is written to the pipeline rather than to the
terminal.
──┬─HELP──┬──┬─────────────────────┬──
└─AHELP─┘ │ ┌─BUILTINS─┐ │
├─MENU──┼──────────┼──┤
│ ├─COMMANDS─┤ │
│ ├─HOST─────┤ │
│ ├─MESSAGES─┤ │
│ ├─OTHER────┤ │
│ └─SYNTAX───┘ │
├─MSG──number─────────┤
├─number──────────────┤
├─SQL──string─────────┤
├─SQLCODE──┬────────┬─┤
│ └─number─┘ │
└─word────────────────┘
Operation: When MENU is specified, you can select other menus or members to be
displayed with the cursor and press the Enter key or Program function key 1 (or 13).
When CMS HELP is used, help calls standard CMS HELP for PIPE MENU; you are offered one
menu for all built-in programs and all pipeline commands.
Tailoring help: The help library has character graphics and syntax diagrams using code
points that display correctly on a 3270 with TEXT ON.
When help information is displayed in an XEDIT session (as opposed to using the system
HELP), you can control how the XEDIT session is set up and you can change the help file as
it is loaded into the XEDIT session.
When help information is displayed in an XEDIT session, the contents of the global variable
PIPELINE_HELP_XEDIT_OPTIONS are automatically presented as options on the XEDIT
command after a left parenthesis (which you should not add to the variable). This allows
you to specify which profile to run or to suppress running a profile. A left parenthesis on
the invocation of help sets the global variable permanently to the text following the paren-
thesis.
After the options are scanned for a left parenthesis, help looks for a REXX filter by the
name xithlp03. If the file exists, it is called as a subroutine with the arguments help
received and with its primary output stream connected to help’s primary output stream.
The exit can inspect and set the default, if it so desires. A “good” default could be
NOPROFILE NOMSG.
Two REXX filters are called, if they are present, to process lines before they are sent to
XEDIT; the filters normally call a subroutine pipeline with an xlate filter. You can use
other filters to change the help text to suit the character set in your terminal.
XITHLP02 REXX is used for lines in a menu; XITHLP01 REXX is called when sending help text
to XEDIT. This example shows how to use only plus and hyphen for character graphics.
/* XITHLP01 REXX -- exit to translate lines for no TEXT feature */
'callpipe (name XITHLP01)',
'|*:',
'|xlate *-* ea-eb + ee-ef + ab-ac + bb-bc + bf - 8f + fa 4f',
'|*:'
exit RC
Streams Used: If the primary output stream is connected, help is written to the primary
output stream rather than being displayed. This applies even when CMS HELP is used.
Premature Termination: When the primary output stream is connected initially (and help
writes the information to the output rather than displaying it), help terminates when the
primary output stream becomes not connected.
Examples: Use help connected to xedit to display help information from SQL or from
PIPELINE HELPLIB in the current XEDIT ring.
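A sketch (the stage wiring and the file identifier are illustrative):

/* Load the help text for locate into a new file in the current XEDIT ring */
'PIPE help locate | xedit LOCATE HELP A'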
Notes:
1. When operation reverts to standard CMS HELP, severity codes must be specified with
message numbers; built-in program names and pipeline commands must be spelt out to
the minimum abbreviation.
2. Use help with a connected output stream to obtain help information when you wish to
display it without additional XEDIT commands being issued.
3. As of CMS/TSO Pipelines level 1.1.10, the help library carries an index member,
which relates the various names to the actual member names. As a result, more than
eight characters can be used when requesting help for a topic.
Configuration Variables: For help, the configuration variable STYLE determines whether
the PIPELINE HELPLIB is used or not. help goes directly to CMS HELP when the style is
DMS. In the PIP and FPL styles help and ahelp are synonymous.
When ahelp is used in the DMS style and the file PIPELINE HELPLIB is not on an accessed
mode, ahelp issues a dummy CMS HELP to make CMS access its help disk (where PIPELINE
HELPLIB is stored) and then looks for the library one more time.
Syntax Description: The argument specifies the path to a file. When the first non-blank
character is neither a single quote (') nor a double quote ("), the path is a blank-delimited
word. Otherwise the path is enclosed in quotes in the REXX fashion; two adjacent quotes
of the type that encloses the path represent a single occurrence of that quote. Only path
names that contain blanks must be enclosed in quotes. A word that is not enclosed in
quotes can contain quotes in the second and subsequent position; such quotes should not
be “doubled up”.
Operation: When hfs is first in the pipeline, it reads bytes from the file into the pipeline.
The number of bytes read at a time is unspecified.
When hfs is not first in the pipeline, it appends the contents of its input records to the file.
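Reading a byte-stream file into logical records might look like this (a sketch; the path is illustrative, and deblock TEXTFILE is assumed to be the converse of block TEXTFILE):

/* Split a text file at newline characters and count the lines */
'PIPE hfs /u/john/.profile | deblock textfile | count lines | console'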
Streams Used: When hfs is first in the pipeline, it writes records to the primary output
stream. When hfs is not first in a pipeline, it passes the input record to the output (if it is
connected) after the record is written to the file.
Commit Level: hfs starts on commit level -2000000000. It verifies that the system does
contain OpenExtensions, opens the file, and then commits to level 0.
To append to a file:
pipe literal startx | block 100 textfile | hfs /u/john/.profile
Notes:
1. Shell variables are not expanded; hfs does not run in the OpenExtensions environment.
2. When the first character of the path is not a forward slash (/), OpenExtensions prefixes
the current working directory to the path.
3. OpenExtensions files are byte stream files. That is, they contain a number of bytes,
but are not structured into records. Use block TEXTFILE to append newline characters
to logical records that contain textual data.
Syntax Description: The argument specifies the path to a directory, which must exist.
When the first non-blank character is neither a single quote (') nor a double quote ("), the
path is a blank-delimited word. Otherwise the path is enclosed in quotes in the REXX
fashion; two adjacent quotes of the type that encloses the path represent a single
occurrence of that quote. Only path names that contain blanks must be enclosed in quotes.
A word that is not enclosed in quotes can contain quotes in the second and subsequent
position; such quotes should not be “doubled up”.
Commit Level: hfsdirectory starts on commit level 0. It verifies that the system does
contain OpenExtensions, allocates a buffer, opens the file, and then commits to level 0.
Notes:
1. Shell variables are not expanded; hfsdirectory does not run in the OpenExtensions
environment.
2. When the first character of the path is not a forward slash (/), OpenExtensions prefixes
the current working directory to the path.
3. OpenExtensions files are byte stream files. That is, they contain a number of bytes,
but are not structured into records. Use block TEXTFILE to append newline characters
to logical records that contain textual data.
4. Pass the output from hfsdirectory to hfsstate to obtain information about the file.
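For example, to list a directory and obtain information about each file in it, pass the
output to hfsstate as suggested in note 4 (a sketch; the path /u/john is a placeholder):
pipe hfsdirectory /u/john | hfsstate | console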
──┬─HFSQuery─┬──
  └─BFSQuery─┘
Input Record Format: A request code which may be followed by additional parameters.
The request codes are:
pwd Query the path to the present working directory. This can be set with
the cd request of hfsxecute.
symlink Query the contents of the symbolic link. The path to the symbolic link
is specified as the second word of the line. It can be enclosed in quotes.
path:
├──┬─word─────┬──┤
   ├─'string'─┤
   └─"string"─┘
uname Query the name of the current operating system. The output record
contains information from the BPXYUSTN data structure.
Output Record Format: The output record for the pwd request contains the path to the
current working directory.
The output record for the symlink request contains the contents of the symbolic link. That
is, the name of the file being pointed to.
The output record for the uname request contains five tab-delimited fields (there are four
tab characters in the record):
1. Name of implementation of operating system.
2. Name of this node within a communications network.
3. Release level.
4. Version level.
5. Name of the hardware type.
Streams Used: Records are read from the primary input stream and written to the primary
output stream. Null and blank input records are discarded.
Commit Level: hfsquery starts on commit level -2000000000. It verifies that the system
does contain OpenExtensions, allocates a buffer, and then commits to level 0.
Premature Termination: hfsquery terminates when it discovers that its output stream is
not connected.
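For example, to display the present working directory and the name of the operating
system (a sketch using the pwd and uname request codes described above):
pipe literal pwd | hfsquery | console
pipe literal uname | hfsquery | console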
Syntax Description: The argument specifies the path to a file. When the first non-blank
character is neither a single quote (') nor a double quote ("), the path is a blank-delimited
word. Otherwise the path is enclosed in quotes in the REXX fashion; two adjacent quotes
of the type that encloses the path represent a single occurrence of that quote. Only path
names that contain blanks must be enclosed in quotes. A word that is not enclosed in
quotes can contain quotes in the second and subsequent position; such quotes should not
be “doubled up”.
Operation: hfsreplace opens the file with the O_TRUNC flag, which causes the file to be
truncated to a null file.
Streams Used: Records are read from the primary input stream and written to the primary
output stream. Null input records are discarded.
Commit Level: hfsreplace starts on commit level -2000000000. It verifies that the system
does contain OpenExtensions, opens the file, and then commits to level 0.
Notes:
1. Shell variables are not expanded; hfsreplace does not run in the OpenExtensions
environment.
2. When the first character of the path is not a forward slash (/), OpenExtensions prefixes
the current working directory to the path.
3. OpenExtensions files are byte stream files. That is, they contain a number of bytes,
but are not structured into records. Use block TEXTFILE to append newline characters
to logical records that contain textual data.
4. Warning: The file is opened on commit level -2000000000. Opening the file causes
it to be truncated to a null file. This will destroy any existing data in the file, even if
the pipeline is abandoned before reaching commit level 0.
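For example, to replace the contents of an OpenExtensions file with the records of a CMS
file (the file names are placeholders; block TEXTFILE supplies the newline characters, as
note 3 suggests):
pipe < profile exec a | block 100 textfile | hfsreplace /u/john/.profile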
──┬─HFSSTATe─┬──┬──────────┬──┬───────┬──
  └─BFSSTATe─┘  └─NOFORMAT─┘  └─QUIET─┘
NOFORMAT Write the unformatted BPXYSTAT data area to the output stream.
QUIET Set return code zero, even when one or more files do not exist.
Streams Used: Records are read from the primary input stream; no other input stream
may be connected. Null input records are discarded. When a file is found, information
about it is written to the primary output stream (if it is connected). When a file is not
found, the input record is passed to the secondary output stream (if it is connected).
Commit Level: hfsstate starts on commit level -2000000000. It verifies that the system
does contain OpenExtensions and that the secondary input stream is not connected and
then commits to level 0.
Notes:
1. Shell variables are not expanded; hfsstate does not run in the OpenExtensions
environment.
2. When the first character of the path is not a forward slash (/), OpenExtensions prefixes
the current working directory to the path.
3. OpenExtensions files are byte stream files. That is, they contain a number of bytes,
but are not structured into records. Use block TEXTFILE to append newline characters
to logical records that contain textual data.
4. The time of last modification is reported in a format that is suitable for sorting. The
time is UTC; no time zone offset is applied.
5. On CMS, OpenExtensions directories do not contain the “dot” (.) and “dot-dot” (..)
files; the example above was run on z/OS.
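For example, to display information about a file (a sketch; the path is a placeholder, and
the input record would go to the secondary output stream instead if the file did not exist):
pipe literal /u/john/.profile | hfsstate | console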
──┬─HFSXecute─┬──
  └─BFSXecute─┘
Input Record Format: Each input record contains a request. Case is ignored in the first
word; it is respected in path names.
path:
├──┬─word─────┬──┤
   ├─'string'─┤
   └─"string"─┘
mode:
                  (1)
├──octalDigit──┬────────────────────────────────────────────┬──┤
               └─octalDigit──┬────────────────────────────┬─┘
                             └─octalDigit──┬────────────┬─┘
                                           └─octalDigit─┘
Note:
1 There are no blanks between the digits of a mode.
Streams Used: Records are read from the primary input stream and written to the primary
output stream. hfsxecute passes the input record to the output (if it is connected) after it
has passed the request to OpenExtensions.
Commit Level: hfsxecute starts on commit level -2000000000. It verifies that the system
does contain OpenExtensions and then commits to level 0.
Notes:
1. Shell variables are not expanded; hfsxecute does not run in the OpenExtensions
environment.
2. When the first character of the path is not a forward slash (/), OpenExtensions prefixes
the current working directory to the path.
3. OpenExtensions files are byte stream files. That is, they contain a number of bytes,
but are not structured into records. Use block TEXTFILE to append newline characters
to logical records that contain textual data.
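For example, to set the present working directory (a sketch; cd is the request mentioned
under hfsquery, and the path is a placeholder):
pipe literal cd /u/john | hfsxecute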
──HLASM──┬──────────────────┬──
         └─word──┬────────┬─┘
                 └─string─┘
Type: Filter.
Syntax Description:
word Specify the file name of the input file. The default is “$temp$”.
string Specify the parameter string for the High Level Assembler. This string
must not contain exit specifications for any of the exits that are being
used by hlasm.
Whenever an exit is driven, hlasm supplies a record to the High Level Assembler or
disposes of an output record. The High Level Assembler does not read from the input data
set and it does not write to the output data sets for which an exit is declared.
Output Record Format: The primary output stream contains 80-byte card images. The
secondary output stream contains the listing with whatever carriage control is specified and
at whatever length the High Level Assembler supplies. The tertiary output stream contains
the SYSADATA records; refer to the Programmer’s Guide or the data area (DSECT) for the
format.
Streams Used: One to three streams may be defined. Records are read from the primary
input stream; no other input stream may be connected.
Record Delay: The record delay is unspecified. In general, it is not possible to relate an
output record to any particular input record.
Commit Level: hlasm starts on commit level -1. It builds the parameter list and loads the
High Level Assembler into storage and then commits to level 0.
Premature Termination: Once the High Level Assembler has been called, it is not in
general possible to make it terminate prematurely. hlasm must wait until the Assembler
returns before it can terminate.
Examples: To assemble the current file in the XEDIT ring and bring the listing into the
ring:
/* Assemble file */
'extract /fname'
':1'
'xedit' fname.1 'listing'
':1'
'delete *'
address command,
'PIPE (end ? name HLASM.STAGE:94)',
'?xedit' fname.1 'assemble',
'|pad 80',
'|h: hlasm',
'|hole',
'?h:',
'|xedit' fname.1 'listing'
':1'
This example ignores issues such as setting up macro libraries, setting the option to
generate a listing file, and suppression of spurious XEDIT messages when deleting all lines
of a file.
Notes:
1. In addition to connecting the output streams, you must also enable the production of
the corresponding file. The OBJECT option is specified by default; you can specify
NOOBJECT to override it, but this will attract an assembler diagnostic message. LIST
must be specified (or defaulted) to obtain any output on the secondary output stream.
2. The exits all use the same module name,
Publications: High Level Assembler for MVS & VM & VSE: Programmer’s Guide MVS
& VM Edition, SC26-4941. High Level Assembler for MVS & VM & VSE: Language
Reference MVS and VM, SC26-4940. High Level Assembler for MVS & VM & VSE:
Installation and Customization Guide MVS & VM Edition, SC26-3494.
Return Codes: Unless messages are issued, the return code is the one received from the
High Level Assembler.
──HLASMERR──
Type: Filter.
Premature Termination: hlasmerr terminates when it discovers that its output stream is
not connected.
Note that the ADATA option must be specified for hlasm to generate the ADATA records;
thus, a placeholder file name must also be specified.
hole—Destroy Data
hole reads and discards records without writing any. It can be used to consume output
from stages that would terminate prematurely if their output stream were not connected.
──HOLE──
Streams Used: hole reads from all defined input streams; it does not write output. The
output streams remain connected until hole reaches end-of-file on all its input streams.
Examples: To write two 3270 data streams that generate no response to the terminal:
To issue the CMS command “zonq”, discarding terminal output and processing the lines
stacked by the command after the command has ended:
pipe cms zonq | hole | append stack | ...
This works because the primary output stream from hole remains connected until end-of-
file on its input; thus, append starts stack only after the ZONQ command has ended.
To discard records up to the next record that contains USER in the first four columns and a
blank in column five:
/* Skip rest of the cards for the user */
'callpipe *: | tolabel USER | hole'
The hole stage consumes all output from tolabel. If it were omitted, the subroutine
pipeline would terminate immediately without consuming any records.
Notes:
1. Connect append stages to multiple output streams from hole to start more than one
device driver stage after a stream reaches end-of-file.
z/OS CMS
──HOSTBYADDR──
Output Record Format: Records on the secondary output stream contain three or more
words:
1. The input IP address.
2. The numeric ERRNO associated with the failure to resolve the address. (For example,
“2053”.)
3. The symbolic ERRNO associated with the failure to resolve the address. (For example,
“EUNKNOWNHOST”.)
4. The explanation of the ERRNO. (For example, “Unknown host”.)
Streams Used: Secondary streams may be defined. Records are read from the primary
input stream; no other input stream may be connected. Null and blank input records are
discarded.
Commit Level: hostbyaddr starts on commit level -2. It verifies that the secondary input
stream is not connected and then commits to level 0.
Examples:
pipe literal 9.12.14.1 | hostbyaddr | console
wtscpok.itso.ibm.com
R;
pipe literal 9.55.5.13 | hostbyaddr | console
R;
Notes:
| 1. On CMS, hostbyaddr uses RXSOCKET Version 2 or later for name resolution. As a
consequence, the name is resolved using RXSOCKET rules. This implies that the file
TCPIP DATA must be available and must point to the name server. RXSOCKET (unlike
CMS Pipelines) uses the server virtual machine specified in TCPIP DATA.
2. RXSOCKET uses the file TCPIP DATA to determine the name of the TCP/IP service
machine, the IP address of the name server, and so on.
3. RXSOCKET does not support hexadecimal components of a dotted-decimal number. It
discards leading zeros in the components, and thus it treats an octal specification as a
decimal one.
z/OS CMS
──HOSTBYNAME──
Syntax Description:
Output Record Format: Records on the secondary output stream contain three or more
words:
1. The input domain name.
2. The numeric ERRNO associated with the failure to resolve the name. (For example,
“2053”.)
3. The symbolic ERRNO associated with the failure to resolve the name. (For example,
“EUNKNOWNHOST”.)
4. The explanation of the ERRNO. (For example, “Unknown host”.)
Streams Used: Secondary streams may be defined. Records are read from the primary
input stream; no other input stream may be connected. Null and blank input records are
discarded.
Commit Level: hostbyname starts on commit level -2. It verifies that the secondary input
stream is not connected and then commits to level 0.
Examples:
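A sketch, resolving the host name from the hostbyaddr example (the actual output
depends on the name server):
pipe literal wtscpok.itso.ibm.com | hostbyname | console
9.12.14.1
R;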
Notes:
| 1. On CMS, hostbyname uses RXSOCKET Version 2 or later for name resolution. As a
consequence, the name is resolved using RXSOCKET rules. This implies that the file
TCPIP DATA must be available and must point to the name server. RXSOCKET (unlike
CMS Pipelines) uses the server virtual machine specified in TCPIP DATA.
2. RXSOCKET uses the file TCPIP DATA to determine the name of the TCP/IP service
machine, the IP address of the name server, and so on.
3. There may be more than one IP address associated with a domain name. In this case,
the output line contains blank-delimited IP addresses.
──HOSTID──┬──────────────┬──
          └─USERid──word─┘
Syntax Description:
USERID Specify the user ID of the virtual machine or started task where TCP/IP
runs. The default is TCPIP.
Commit Level: hostid starts on commit level -10. It connects to the TCP/IP address space
and then commits to level 0.
Premature Termination: hostid terminates when it discovers that its output stream is not
connected; hostid also stops if the immediate command PIPMOD STOP is issued or if a
record is passed to pipestop.
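For example, to display the IP address of the local host (a sketch; the output depends on
the TCP/IP configuration):
pipe hostid | console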
──HOSTNAME──┬──────────────┬──
            └─USERid──word─┘
Syntax Description:
USERID Specify the user ID of the virtual machine or started task where TCP/IP
runs. The default is TCPIP.
Commit Level: hostname starts on commit level -10. It connects to the TCP/IP address
space and then commits to level 0.
Premature Termination: hostname terminates when it discovers that its output stream is
not connected.
¡ ──HTTPSPLIT──┬────────┬──
¡              └─EBCDIC─┘
¡ Type: Gateway.
¡¡ EBCDIC Parse the header records in the EBCDIC domain. By default, header
¡ records are considered to be encoded in ASCII.
¡ Operation: httpsplit processes its primary input stream in one of two modes, header or
¡ data. It starts in header mode.
¡ In header mode, the input is considered a byte stream. A leading CRLF is discarded
¡ because an empty set of headers makes no sense. (This sequence is often sent erroneously
¡ by the client as a trailing CRLF on the previous data part.) Input is deblocked for carriage
¡ return and line feed until a null line is met. The deblocked records are written to the
¡ primary output stream.
¡ The data part is processed in one of two ways. When the headers do not contain a line
¡ specifying content-length=, the primary output stream is severed and all further input
¡ data are passed to the secondary output stream. Otherwise as many bytes as specified are
¡ passed to the secondary output. Processing then reverts to header mode.
¡ Streams Used: Secondary streams may be defined. Records are read from the primary
¡ input stream and written to the primary output stream.
¡ Record Delay: httpsplit has the potential to delay one record on the primary output
¡ stream, because it can span input records. It strictly does not delay the record on the
¡ secondary output stream.
¡ Commit Level: httpsplit starts on commit level -2. It verifies that the secondary input
¡ stream is not connected and then commits to level 0.
¡ Notes:
¡ 1. The data part is not joined or deblocked; raw input records are produced, possibly
¡ parts thereof for the first and last.
¡ 2. httpsplit does its processing for the primary output stream in the ASCII domain unless
¡ EBCDIC is specified.
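For example, a sketch of a subroutine pipeline that writes the header records to the
console and discards the data part (the input byte stream is supplied by the caller):
'callpipe (end ?)',
'|*:',             /* Byte stream from caller   */
'|h: httpsplit',   /* Split headers from data   */
'|console',        /* Display header records    */
'?h:',             /* Data part on secondary    */
'|hole'            /* Discard it                */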
──IEBCOPY──┬──────────┬──
           ├─STRIPKEY─┤
           └─OSPDSDIR─┘
Syntax Description: A keyword is optional. STRIPKEY specifies that the key portion of
each block should be discarded. OSPDSDIR specifies that the key portion should be
discarded and each block should be processed as a directory block of a partitioned data set.
Operation: iebcopy processes input records until it meets a disk block with key length
zero and data length zero (an end-of-file block). An unloaded PDS contains several logical
files: the first one is the directory of the PDS; the members follow in the order they are on
disk (rather than the order of the directory, which is alphabetical).
Input Record Format: Input records contain one or more disk blocks. Each disk block
consists of a 12-byte prefix followed by the key field (if any) and the data field. The
prefix has the form FMBBCCHHRKDD:
F A flag byte. The three leftmost bits must be zero.
M Ignored; should be zero (X'00').
BB Ignored; should be zero (X'0000').
CC Cylinder number.
HH Head number.
R Record number.
K Length of key part or zero.
DD Length of data part or zero for an end-of-file record.
Output Record Format: With no keyword specified, each output record contains the key
and data parts of a disk block. With STRIPKEY, each output record contains the data part
of a disk block. With OSPDSDIR, each output record contains a directory entry. The end-
of-file record is not written to the output.
Record Delay: iebcopy does not delay the last record written for an input record.
Premature Termination: iebcopy terminates when it discovers that its output stream is
not connected.
Notes:
1. iebcopy returns when an end-of-file record is read. A control stage is needed to load
all members of a PDS; refer to the sample file.
2. The first two logical records of the input data set must be discarded prior to iebcopy.
The data set is usually written variable blocked spanned (RECFM=VBS); use deblock to
obtain records in a format suitable for iebcopy:
pipe tape | deblock v | drop 2 | iebcopy ospdsdir | > pds drctry a
Records that are selected by the selection stage are processed by the stages between the if
stage and the first label reference to the stage.
When there is only one label reference to the if stage, rejected records are merged with the
secondary input stream and passed to the secondary output stream.
When there is a second label reference to the if stage, records that are rejected by the
selection stage are processed by the stages between the two label references. Records on
the secondary input stream are merged with the records on the tertiary input stream and
passed to the tertiary output stream.
──IF──word──┬────────┬──
            └─string─┘
Type: Gateway.
Syntax Description:
Operation: if adds a pipeline specification that contains the selection stage and a faninany
stage to the running pipeline set.
            ┌──────────┐
*.input.1:──┤ Faninany ├──*.output.2:
            │          │
*.input.2:──┤          │
            └──────────┘
In the if/then configuration, the secondary output stream from the selection stage is
connected directly to the faninany stage.
Streams Used: Two streams must be defined; up to three streams may be defined. if
reads from and writes to all defined streams.
Record Delay: if strictly does not delay the record. If the stages between the label
references also do not delay the record, the output records will be in the same order as the
input records.
Commit Level: if starts on commit level -2. if does not commit; the selection stage will
cause it to commit.
Examples: To upper case records that contain “up” in the first two columns and delete
those columns from the upper cased records:
'...',
'|if1: if strfind /up/',
'|not chop 2',
'|xlate',
'|if1:',
'|...'
To mark records that contain the string “abc” and shift the balance of the file three
columns to the right:
'...',
'|if1: if locate /abc/',
'|insert /-> /',
'|if1:',
'|insert / /',
'|if1:',
'|...'
Notes:
1. if does not enforce any particular topology and any particular type of selection stage.
2. Do not use end characters with if. In particular, records must be able to flow both
into and out of its secondary streams and its tertiary streams.
Return Codes: Unless a message is issued by if, the return code is the one from the
selection stage.
──IMMCMD──word──┬──────────┬──
!               └─INTERNAL─┘
Commit Level: immcmd starts on commit level -1. It sets up an immediate command
handler for the specified name and then commits to level 0.
Premature Termination: immcmd terminates when it discovers that its output stream is
not connected. immcmd does not complete normally. immcmd also stops if the immediate
command PIPMOD STOP is issued or if a record is passed to pipestop.
Examples: To process (in a service virtual machine) commands issued from the terminal
as well as commands sent via SMSG from other users:
/* GETCMD REXX */
address command 'CP SET SMSG IUCV'
'callpipe (end ?)',
'|immcmd cmd', /* Immediate commands */
'|spec ,00000004*, 1.16 1-* next', /* As if SMSG from self */
'|f:faninany', /* Join all */
'|*:', /* Pass to output */
'?starmsg', /* Listen for SMSGs */
'|f:', /* Merge with commands */
'?immcmd stop', /* Stop command */
'|pipestop' /* Force stop */
Two immediate commands are set up in this pipeline specification. One is for CMD; the
output from this immcmd stage is transformed into the format for special messages from
“*” and merged with special messages received from other users. The output from the
second invocation of immcmd is passed to pipestop, which signals the immcmd and
starmsg stages to terminate.
This example also shows a pipeline (the last one) that is not connected to the other
pipelines in a set.
Notes:
1. Multiple immcmd stages can be used, but the command name should be unique.
When more than one stage uses a particular immediate command, it is unspecified
which one is started last and receives the commands from CMS.
2. Use pad 1 to turn null lines into lines with a single blank.
3. immcmd may be useful in pipeline specifications containing delay or starmsg stages, or
both.
4. TSO keeps the keyboard locked while a command runs. Hence the need for the
attention and the prompt.
! 5. immcmd ... INTERNAL cannot obscure a stall.
! 6. Use INTERNAL only when you do not wish to wait for the user to issue the immediate
! command, but wish to allow the user to issue the command while the pipeline runs.
! You need to take precautions to terminate immcmd, for example with gate, or avoid a
! stall when specifying INTERNAL.
! 7. You can find an example server infrastructure at
! http://vm.marist.edu/˜pipeline/servus.rexx
                           ┌─BEFORE─┐
──INSERT──delimitedString──┼────────┼──┬────────────┬──
                           └─AFTER──┘  └─inputRange─┘
Type: Filter.
Syntax Description:
Operation: When the input range is not present in the record, the range used is the first
or the last column of the record, depending on whether the first part of the inputRange is
relative to the beginning or the end of the record.
Notes:
1. insert is a convenience; specs can perform all the functions that insert can perform.
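For example, to put a quotation prefix in front of each input record (a sketch; the file
name is a placeholder, and BEFORE is the default):
pipe < notes script a | insert /> / | console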
──INSIDE──┬─────────┬──delimitedString──┬─number──────────┬──
          └─ANYcase─┘                   └─delimitedString─┘
Syntax Description: A keyword is optional. Two arguments are required. The first one
is a delimited string. The second argument is a number or a delimited string. A delimited
string can be a string of characters delimited in the normal XEDIT fashion (for instance,
/abc/) or it can be a literal, which in turn can be hexadecimal or binary (for instance,
xf1f2f3). The number must be zero or positive.
Operation: inside copies the groups of records that are selected to the primary output
stream (or discards them if the primary output stream is not connected). Each group
begins with the record after the one that matches the first specified string. When the
second argument is a number, the group has as many records as specified (or it extends to
end-of-file). When the second argument is a string, the group ends with the record before
the next record that matches the second specified string (or at end-of-file).
When ANYCASE is specified, inside compares fields without regard to case. By default,
case is respected.
inside discards records before, between, and after the selected groups (or copies them to
the secondary output stream if it is connected).
Streams Used: Records are read from the primary input stream. Secondary streams may
be defined, but the secondary input stream must not be connected.
Record Delay: An input record is written to exactly one output stream when both output
streams are connected. inside strictly does not delay the record.
Commit Level: inside starts on commit level -2. It verifies that the secondary input
stream is not connected and then commits to level 0.
Examples: To process the examples in a Script file, discarding the example begin and end
tags (assuming these tags are in separate records):
...| inside /:xmp./ /:exmp./ |...
Notes:
1. With identical string arguments, inside differs from between in that inside does not
select the records that match the strings.
! 2. CASEANY, CASEIGNORE, CASELESS, and IGNORECASE are all synonyms for ANYCASE.
! 3. pick can do what inside does and then quite some more.
Red Neon!
No built-in program can process the output from instore PGMLIST. For all practical
purposes outstore is the only way to process the output from instore without the
PGMLIST option. In either case, do not attempt to process the file token with filter
stages.
            ┌─────────────┐
──INSTORE───┬───────────┬─┴──
            ├─REVERSE───┤
            ├─PGMLIST───┤
!           └─ALET──hex─┘
Syntax Description:
! Operation: The input is read into a buffer in virtual storage or the specified data space.
| At end-of-file, an output record is produced describing the file.
When the keyword PGMLIST is specified, the descriptor list is built as specified for REXX
programs in storage; the descriptor list is written to the output.
When the keyword PGMLIST is omitted, the file is stored in a chained list of records. Each
record has an eight byte prefix consisting of a pointer to the next record and a fullword
length.
| The output record is valid only until it is consumed; the buffers are released after that.
! When ALET is specified, the data space must have been created by ADRSPACE CREATE
! INITIALISE or equivalent. It must have the storage key under which CMS/TSO Pipelines
! executes, which is X'E0' on CMS and X'80' on z/OS. instore locks the data space with a
! key identifying itself to ensure exclusive access; the lock is released when instore
! terminates, thus making the data space available for other use.
Output Record Format: When the keyword PGMLIST is omitted, the format of the record
! written is defined in the STORBUF member of FPLGPI MACLIB and in the built-in structure
! fplstorbuf.
Commit Level: instore starts on commit level -2. When ALET is specified, it verifies the
integrity of the data space and then commits to level 0.
When processing the output from instore in a REXX program, use the PEEKTO pipeline
command to read the line describing the file. Issue the READTO pipeline command without
operands to consume the descriptor record after the file has been processed.
Notes:
1. The output record describes the file. The output record must be obtained with a locate
mode call and completely processed before it is consumed; the buffer is returned to
the operating system by instore immediately after the record is consumed.
! 2. The file may be extracted by passing the record to OUTSTORE or, when ALET is
! specified, it may be extracted in a different virtual machine that has obtained access to
! the data space, as long as the output record is not consumed in the creating virtual
! machine. The extracting virtual machine will typically use a different ALET from the
! creating virtual machine.
! 3. Using the ALET operand may offer virtual storage constraint release, as well as data
! transfer between virtual machines.
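For example, to copy a file through virtual storage (a sketch; the file names are
placeholders, and outstore consumes the descriptor record that instore produces):
pipe < profile exec a | instore | outstore | > copy exec a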
──IP2SOCKA──
Type: Filter.
Output Record Format: A structure of sixteen bytes. Binary numbers are stored in the
network byte order, that is, with the most significant byte first.
Streams Used: Records are read from the primary input stream and written to the primary
output stream. Null and blank input records are discarded.
Premature Termination: ip2socka terminates when it discovers that its output stream is
not connected.
Examples: To convert an address to internal format and convert this structure to printable
form:
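A sketch, assuming the companion stage socka2ip to convert the structure back to
printable form:
pipe literal 9.12.14.1 | ip2socka | socka2ip | console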
Notes:
1. On CMS, you can specify a host name or a host name followed by a domain. CMS
Pipelines calls RXSOCKET to do the actual name resolution. As a consequence, the
name is resolved using RXSOCKET rules. This implies that the file TCPIP DATA must be
available and must point to the name server. RXSOCKET (unlike CMS Pipelines) uses
the server virtual machine specified in TCPIP DATA.
ispf VCOPY reads the contents of ISPF variables into the pipeline. ispf VREPLACE stores the
contents of input records into ISPF variables.
See Chapter 12, “Using CMS/TSO Pipelines with Interactive System Productivity Facility”
on page 143 for task-oriented information.
                                           ┌───────────────┐
──ISPF──┬─┬─┬─TBADD─┬──word──┬───────┬─┬───┬─────────────┬─┴─┬──
        │ │ ├─TBMOD─┤        └─ORDER─┘ │   └─┤ inField ├─┘   │
        │ │ └─TBPUT─┘                  │                     │
        │ └─VREPLACE───────────────────┘                     │
        │                  ┌────────────────┐                │
        └─┬─TBSKIP──word─┬─┬──────────────┬─┴────────────────┘
          └─VCOPY────────┘ └─┤ outField ├─┘
inField:
├──word──inputRange──┬─────────┬──┤
└─NOBSCAN─┘
outField:
├──word──range──┤
Placement: ispf TBSKIP must be a first stage. With other operands, ispf must not be a
first stage.
Syntax Description: The name of an ISPF service for table operation (TBADD, TBMOD,
TBPUT, or TBSKIP) is followed by an optional list of field definitions. These specify the
location of variables in the input or output record. The table name and the variable names
are translated to upper case. The keyword ORDER with ispf TBADD and TBMOD indicates
that rows are inserted in some predefined order. Refer to TBADD ORDER in Dialog
Management Services and Examples, SC34-4010.
The operands VCOPY and VREPLACE support access to ISPF function pool variables without
issuing a table request. At least one field must be specified with these operands.
The field definitions specify the positions of fields in the input and output records. For
input records, specify the field name and an input range; use the optional keyword
NOBSCAN to retain trailing blanks. For output records, specify the name and a column
range. For compatibility with the past, the keyword CHAR is ignored if it is specified after
the name of the field.
Operation: For TBADD, TBMOD, and TBPUT, variables in the function pool are set (using
the VREPLACE service) to the contents of the fields in the input record, stripped of trailing
blanks unless NOBSCAN is specified. The requested ISPF table service is then invoked to
copy the values from the function pool into the table. The input record is then copied to
the output (if connected).
For TBSKIP, the ISPF service TBSKIP is called and an output record is written containing the
contents of the specified variables (obtained by the VCOPY service); a null record is written
when no variables are specified. This process is repeated until ISPF sets return code 8 (end
of table).
ispf VCOPY discards input records. For each input record, it copies the specified variables
from the function pool and stores them in an output record. The output record is written
before the input record is consumed.
ispf VREPLACE stores the contents of the specified fields into function pool variables. It
copies the input record to the output (if connected) before consuming it.
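As a minimal sketch of ispf VREPLACE (the file, field, and variable names are illustrative, and the pipeline must run under ISPF):

```rexx
/* Store columns 1-8 and 10-17 of each record into the    */
/* function pool variables USER and NODE (names invented) */
'PIPE (name SETVARS)',
   '< userlist data a',                   /* hypothetical input */
   '| ispf vreplace USER 1.8 NODE 10.8',  /* set pool variables */
   '| hole'                               /* discard the copies */
```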
Output Record Format: When ispf is first in a pipeline (ispf TBSKIP), the output record
contains data from variables, as defined by field definitions in the argument list. Positions
between fields are blank.
Streams Used: Records are read from the primary input stream and written to the primary
output stream. When ispf is not first in the pipeline and VCOPY is not specified, the input
record is copied to the output after the ISPF service has been performed.
Commit Level: ispf starts on commit level -2000000000. It processes the argument
string, allocates a buffer to hold the data in the fields, and then commits to 0.
Premature Termination: ispf terminates on most nonzero ISPF return codes. ispf with the
option TBSKIP or VCOPY terminates when it discovers that its output stream is not
connected.
Examples: To store the names of the files on the CMS system disk in a table:
address ispexec,
'TBCREATE LISTFILE NAMES(FN FT FM REST) WRITE REPLACE'
address command
'PIPE (end \ name INSPIPE )',
'cms listfile * * s',
'|ispf tbadd LISTFILE',
'FN 1.8 FT 10.8 FM 19.2 REST 21-80',
'|count lines',
'|var lines'
address ispexec,
'TBCLOSE LISTFILE LIBRARY($$)'
error: exit RC
Notes:
1. To read all rows of a table, position the cursor for the table before the first row prior
to using ispf TBSKIP (for example with subcom ISPEXEC TBTOP).
2. Never address a command to ISPLINK; results are unpredictable.
3. Return code 20 is reflected by ISPEXEC in a REXX program when ISPF does not
recognise the service requested. This can happen, for example, when a PIPE command
is addressed to ISPEXEC rather than to CMS. (This is not a return code from the ispf
stage itself.)
4. When more than one field for ispf TBSKIP refers to the same range, it is unspecified
which field is stored in the output record.
5. Extension variables are not supported directly. Use ispf VREPLACE to set variables to
the contents of a record and then issue TBADD to the ISPEXEC subcommand environ-
ment, specifying the extension variables.
jeremy is useful when debugging a complex multistream pipeline in which data have
ceased to flow in some pipeline segments.
──JEREMY──
Operation: If the secondary output stream is defined, jeremy passes the input line to the
primary output stream after it has written the pipeline status to the secondary output
stream; if there is no secondary output stream defined, jeremy discards the input record
after it has written the pipeline status to the primary output stream.
Streams Used: Secondary streams may be defined. Records are read from the primary
input stream; no other input stream may be connected.
Record Delay: jeremy does not delay the last record written for an input record.
Commit Level: jeremy starts on commit level -2. It verifies that the primary input stream
is the only connected input stream and then commits to level 0.
Premature Termination: jeremy terminates when it discovers that any of its output
streams is not connected.
Examples:
pipe literal | jeremy | console
5785-RAC CMS Pipelines level 1.1.12 sublevel 6 ("0006"x)
From j.
Pipeline specification 1 commit 0
Pipeline 1
literal wait.out.
record on output 0: ""
jeremy ready.
console wait.locate.
Ready;
Notes:
1. When the configuration variable STALLACTION is set to JEREMY, CMS/TSO Pipelines
invokes jeremy when the pipeline is stalled, to note the state of the pipeline set.
| 2. Results are unpredictable, but likely undesirable, when stages that process the output
| from jeremy change the pipeline topology, as this can lead to dangling pointers.
join—Join Records
join puts input records together into one output record, inserting a specified string between
joined records. All input lines are joined into one output record when an asterisk is
specified. The maximum length of an output record can also be specified.
┌─1─────────────────┐
──JOIN──┬───────┬──┼───────────────────┼──
¡ └─COUNT─┘ ├─number────────────┤
├─*─────────────────┤
└─KEYLENgth──number─┘
──┬────────────────────────────────┬──┬────────┬──
└─delimitedString──┬───────────┬─┘ └─number─┘
¡ └─TERMinate─┘
Type: Filter.
¡¡ COUNT The number of contributing input records is prefixed to the output record as
¡ ten characters aligned to the right.
number Unless KEYLENGTH is specified, the first number specifies how many
lines are appended to the first one in a set; it can be zero or more. The
¡ default is to join pairs of lines unless the secondary input stream is
¡ defined, in which case the default is infinity.
KEYLENGTH Join lines that contain the same leading string. The number specifies the
length of key string.
delimitedString Insert the contents of the specified string between joined records.
¡¡ TERMINATE Append the string also after the last record in a set.
¡ number The second number specifies the maximum output record length exclu-
¡ sive of the count prefix, if any. To be recognised as the maximum
record length, the number cannot be the first word of the arguments
(because it would then specify the number of lines to append rather than
the maximum record length).
Operation: When a maximum record length is specified, records are joined as defined by
the other arguments until an input record would cause more data than specified to be
loaded into the output buffer. The contents of the output buffer (if any) are flushed to the
output and a new set of records is processed. An input record with a length that is equal
to or greater than the maximum output record length is written unchanged (that is, it is not
truncated).
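For instance, a sketch that packs lines into records of at most 72 bytes, joining with a single blank (the file names are illustrative):

```rexx
/* Join all lines (*) with a blank between them; start a  */
/* new output record when 72 bytes would be exceeded:     */
'PIPE < notes data a | join * / / 72 | > packed data a'
```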
When KEYLENGTH is specified, records are joined as long as the keys are equal (and the
maximum length has not been exceeded). The key is discarded from records 2 to n in a
set of joined lines.
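A sketch of KEYLENGTH, assuming records with equal keys have been made adjacent (the file name is illustrative):

```rexx
/* Records such as "abc one" and "abc two" are joined into  */
/* "abc one, two"; the 3-byte key of records 2-n is dropped */
'PIPE < parts data a',
   '| sort 1.3',              /* make equal keys adjacent */
   '| join keylength 3 /, /', /* join while keys are equal */
   '| console'
```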
¡ When the secondary input stream is defined, the buffer is flushed whenever a record
¡ arrives on it. The record on the secondary input stream is then discarded. Thus, join does
¡ not delay the output from a record on the secondary input stream.
Streams Used: Secondary streams may be defined. Records are written to the primary
output stream; no other output stream may be connected. The primary input stream is
shorted to the primary output stream if the number is zero.
Record Delay: When both KEYLENGTH and a maximum record length are omitted, join
does not delay the last record written for an input record.
When KEYLENGTH or a maximum record length (or both) is specified, the output record is
delayed to the record following the last one joined.
Premature Termination: join terminates when it discovers that its primary output stream
¡ is not connected. End-of-file on the primary input stream is ignored when the secondary
¡ input stream is defined.
¡ Examples: To capitalise the first letter of all words while retaining the original record structure:
¡ '(end ?) ... ',
¡ '| o: fanout ',
¡ '| split after blank ', /* Retain multiple blanks */
¡ '| xlate 1 ',
¡ '| j: join ', /* Rebuild record */
¡ '| ... ',
¡ '? o: ',
¡ '| j:'
Notes:
¡ 1. The secondary input stream is intended for solutions where an input record is chopped
¡ up into bits that need to be processed separately and then joined. You would fanout a
¡ copy of the original record and cause that to flush join’s buffer. Used in this way,
¡ join does not delay the record relative to the stage that produces the record on the
¡ secondary input stream to join.
┌─TRAILING──────────┐
──JOINCONT──┬─────────┬──┬─────┬──┼───────────────────┼──
└─ANYCase─┘ └─NOT─┘ ├─RANGE──inputRange─┤
└─LEADING───────────┘
──┬───────┬──┬───────┬──delimitedString──┬──────┬──
¡ └─DELAY─┘ └─ANYof─┘ └─KEEP─┘
──┬─────────────────┬──
└─delimitedString─┘
Type: Filter.
Syntax Description:
TRAILING The continuation string is at the end of the record being continued.
When the continuation string is present, the following record will be
appended to the record that contains the string. This is the default.
RANGE Specify an input range to examine. When the continuation string is
present, the following record will be appended to the record that contains
the string. RANGE implies KEEP.
LEADING The continuation string is at the beginning of the record following the
one being continued. That is, when the continuation string is present,
the record will be appended to the previous record.
¡¡ DELAY DELAY is used with TRAILING or RANGE. DELAY is ignored if it is
¡ specified with LEADING. When DELAY is specified, the last input record
¡ for a particular output record is consumed before the record is written.
¡ This may save a copy stage.
ANYOF The following delimited string enumerates characters to be tested as
continuation. To determine if continuation exists, a single character in
the input record is compared against the characters in the string. A
continuation exists when any character of the string matches the char-
acter. The default compares the string against a leading or trailing string
of the same length.
delimitedString The first delimited string specifies the continuation string. When ANYOF
is specified, the string enumerates a set of characters; when ANYOF is
omitted, the delimited string represents a normal string. This string is
deleted from the output record unless NOT or KEEP is specified.
KEEP The continuation string is retained. The default is to delete the continua-
tion string unless RANGE is specified. RANGE implies KEEP.
delimitedString The second delimited string specifies the string to be inserted between
the two records in the output record.
Operation: When ANYOF is specified, the continuation criterion is whether the last char-
acter of a record (or the first character of the following one; or the contents of the
specified input range) is present in the string representing an enumerated set of characters.
That is, the character must compare equal to at least one of the set. When ANYOF is
omitted, the string is compared character for character with the end or beginning of a
record; all character positions must compare equal. When NOT is specified, the criterion is
inverted; the absence of the string or character defines continuation.
When the keyword RANGE is specified, the input range is inspected for continuation; when
the keyword TRAILING is specified or defaulted, the trailing part of each input record is
inspected for continuation. When continuation is not indicated, the record is passed
unmodified to the output. When continuation is indicated, the record is loaded into a
buffer; the string or character is deleted if all of RANGE, NOT, and KEEP are omitted. The
second delimited string is appended to the contents of the buffer; the next input record is
read and appended to the buffer. The new record is then inspected for continuation. This
process continues as long as the record appended to the buffer triggers continuation. The
contents of the buffer are written to the output at the end of a run of continued records.
When LEADING is specified, an input record is read into a buffer and the leading string or
character of the next input record is inspected for continuation. When no continuation is
indicated, the contents of the buffer are written to the output and the process repeats by
loading the second record into the buffer. If continuation is indicated, the second string is
appended to the contents of the buffer followed by the remainder of the input record. This
process continues until a record is read that does not contain the specified leading string.
When this happens, the contents of the buffer are written to the output, the input record is
loaded into the buffer and the process is repeated.
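The default TRAILING operation can be sketched like this (the continuation character and file name are illustrative):

```rexx
/* Join lines that end with "+"; the "+" is deleted and a */
/* single blank is inserted at the splice:                */
'PIPE < input data a | joincont /+/ / / | console'
```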
Record Delay: joincont TRAILING and joincont RANGE do not delay records that are not
¡ continued. When DELAY is omitted they do not delay the last record of a set of continua-
tion records. joincont LEADING delays records that are not continued by one record; it
delays the last record of a run of continued records by one record.
Premature Termination: joincont terminates when it discovers that its output stream is
not connected.
Examples: To join records that have been split in an RFC 822 header:
/* Join headers: */
'PIPE (end ? name JOINCONT)',
'|... ',
'| xlate *-* 05 blank', /* Tabs to blanks */
'| joincont leading / / keep',
'| ...'
This example would be simplistic for splicing paragraphs of text because it keeps multiple
leading blanks, which will appear in the spliced record. Also, it does not show how to
terminate processing at the blank line that ends the header part of the message.
Notes:
1. Note that ANYOF has precedence as far as abbreviations are concerned. Specify at
least four characters of ANYCASE.
! 2. CASEANY, CASEIGNORE, CASELESS, and IGNORECASE are all synonyms for ANYCASE.
: 3. joincont not range 1 /x/ joins records until and including the next one where
: column 1 contains “x”.
──JUXTAPOSE──┬───────┬──
└─COUNT─┘
Type: Gateway.
Syntax Description:
COUNT Prefix a ten-digit count to each record written to the secondary output
! stream. The count tracks the number of records from the secondary or
! higher-numbered input streams that are matched with a particular record
from the primary input stream.
Operation: Each record from the primary input stream is stored in a buffer, replacing the
previous contents of this buffer. The input record is then consumed.
When a record is available on the primary input stream and no record was read from the
! secondary or higher-numbered input streams while the record resided in the buffer, the
contents of the buffer are written to the secondary output stream before the next record
from the primary input stream is read into the buffer. The keyword COUNT has no effect
on this operation; only the contents of the original record are written.
! Streams Used: juxtapose reads all input streams; it writes only to the primary output
! stream and the secondary output stream.
Record Delay: juxtapose does not delay output records on the primary output stream.
! These records are derived from the records read on the secondary or higher-numbered
! input streams. Records on the secondary output stream are delayed by one record relative
to the primary input stream.
! Commit Level: juxtapose starts on commit level -2. It verifies that the tertiary output
! stream and higher-numbered output streams are unconnected and then commits to level 0.
Examples: To prefix the name, type, and mode of files being read by getfiles:
'PIPE (end ? name JUXTAPOS.STAGE:51)',
'?cms listfile * *', /* Get list of files */
'|o:fanout', /* Make two copies */
'|pad 25', /* Some leading space */
'|j:juxtapose', /* Prefix to contents of file */
'|...', /* Do whatever */
'?o:', /* The file ids */
'|getfiles', /* Read contents */
'|j:' /* Go merge with name */
This example deserves scrutiny. Several conditions must be satisfied for it to work
correctly; that is, reliably to prefix the name of the file to each record of the file:
The two input streams to juxtapose are derived from a common source; in this case,
fanout.
pad does not delay the record.
getfiles writes the contents of the file to its output before it consumes the corre-
sponding input record.
Thus, even though the order of dispatching is undefined, the dispatcher is not given any
leeway; it can hem and haw, but sooner or later it must produce the records to juxtapose.
And at any one time, the dispatcher can produce a record on only one of the input streams.
It might produce two records containing abcdef and abcghi or it might produce only def
and ghi; it depends on which of the two literal stages produces a record first.
The following examples show that in the current implementation cp seems to produce a
record after literal does; but this could change in future.
pipe (end ?) literal abc|j:juxtapose|cons?cp query time|j:
abcTIME IS 10:58:59 DST WEDNESDAY 07/14/10
abcCONNECT= 00:20:01 VIRTCPU= 000:04.98 TOTCPU= 000:06.17
Ready;
pipe (end ?) cp query time|j:juxtapose|cons?literal abc def|split|j:
abc
def
Ready;
To prefix the first byte (the operation code) of the input of a fullscr stage to the corre-
sponding fullscr output records:
/* Retain opcode */
'PIPE (end ?)',
'| ...',
'|o: fanout ', /* copy input to fullscr */
'| chop 1 ', /* keep only the opcode */
'|j:juxtapose ', /* preface output with opcode */
'| ...', /* process response here */
'?o:',
'| fullscr', /* write input to screen */
'|j:' /* fullscr output to juxtapose */
To ensure that the correct prefix is available at the time it is needed, we connect the
primary output stream from fanout to the primary input stream of juxtapose. Thus, the
record will be available to juxtapose before it is available to fullscr, and therefore before
fullscr generates an output record.
ldrtbls is often used to test a compiled REXX program or an Assembler program before it is
generated into a filter package.
CMS
──LDRTBLS──word──┬────────┬──
└─string─┘
Syntax Description: Leading blanks are ignored; trailing blanks are significant. A word
is required; additional arguments are allowed. The entry point specified by the first word
is looked up in the CMS loader tables. If no entry point is found with the name as
specified, it is translated to upper case and the loader tables are searched again.
Operation: The optional string is passed to the program as the argument string.
Record Delay: ldrtbls does not read or write records. The delay depends on the program
being run.
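A minimal sketch (the entry point MYFILT and its argument string are hypothetical):

```rexx
/* Run the program loaded at entry point MYFILT as a stage, */
/* passing "trace" to it as the argument string:            */
'PIPE literal test record | ldrtbls MYFILT trace | console'
```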
Notes:
1. ldrtbls is useful to test a new version of a filter loaded in the user area while still
retaining the production version for normal use.
2. Remember that REXX continuation functionally replaces a trailing comma with a blank.
Also recall that when two strings are separated by one or more blanks, REXX concat-
enates them with a single blank. Use the concatenation operator (||) before the
comma at the end of the line if your portrait style has the stage separators at the left
side of the stage and the trailing blank is significant to your application.
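The note above can be sketched as follows; without the concatenation operator, the comma continuation would add a second blank before the stage separator, changing the argument string:

```rexx
/* The string passed to literal is "abc " with exactly one  */
/* trailing blank, because || concatenates without a blank: */
'PIPE literal abc '||,
'| console'
```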
z/OS
──LISTCAT──┬──────┬──┬─────┬──┬─────────────────────────┬──
           └─ASIS─┘  └─ALL─┘  │            ┌──────┐     │
                              └─┬────────┬───word─┴─────┘
                                └─DSname─┘
Syntax Description:
ASIS Use data set names as written; do not translate to upper case. The
default is to translate data set names to upper case.
ALL Write all entries supplied to the pipeline, prefixed by a one character
code. By default, only data set names and VSAM cluster names are
written.
DSNAME A list of data set qualifiers follows. DSNAME is assumed in front of an
option that is not recognised.
Operation: Names are listed for each word in the DSNAME list and each word in the input
records. When the word does not begin with a quote and a prefix is set, the prefix and a
period are added to the front of the word specified. When the word begins with a quote it
is used as it is.
Output Record Format: When ALL is specified, each output record contains a character
code in the first column:
A Non-VSAM data set.
B Generation data group.
C Cluster.
G Alternate index.
H Generation data set.
L Tape volume catalog library entry.
R VSAM path.
U User catalog connector entry.
W Tape volume catalog volume entry.
X Alias.
Streams Used: Records are read from the primary input stream and written to the primary
output stream. Null and blank input records are discarded.
Record Delay: listcat does not delay the last record written for an input record.
Examples: To list all data sets that begin with the letter T (the prefix is DPJOHN):
z/OS
──LISTDSI──┬────────┬──
└─string─┘
Operation: The argument string (if it is not blank) and each non-blank input record are
passed to the listdsi() function without inspection or modification. When the return
code is less than 16, the variables shown below are obtained from the REXX environment
and written to the output stream if they are defined. Directory information (ADIRBLK,
UDIRBLK, and MEMBERS) is written only if the return code is 0. The variables are written in
the order shown, column by column. When the return code is 16, only the last three
variables are obtained.
DSNAME RECFM ALLOC UNITS EXDATE TRKSCYL MEMBERS
VOLUME LRECL USED EXTENTS PASSWORD BLKSTRK REASON
UNIT BLKSIZE PRIMARY CREATE RACFA ADIRBLK MSGLV1
DSORG KEYLEN SECONDS REFDATE UPDATED UDIRBLK MSGLV2
Output Record Format: The lines that are written to the primary output stream contain a
variable name and its value in a format that is compatible with varset.
Streams Used: Records are read from the primary input stream; no other input stream
may be connected. Null and blank input records are discarded.
Record Delay: listdsi writes all output for an input record before consuming the input
record.
Commit Level: listdsi starts on commit level -2. It verifies that the secondary input
stream is not connected and then commits to level 0.
Premature Termination: When the secondary output stream is not defined, listdsi termi-
nates when it discovers that its output stream is not connected. When the secondary output
stream is defined, listdsi terminates when it discovers that its secondary output stream is
not connected; it ignores end-of-file on the primary output stream.
Examples:
pipe listdsi sys1.proclib | take 3 | terminal
=SYSREASON=0005
READY
pipe listdsi 'sys1.proclib' | take 3 | terminal
=SYSDSNAME=SYS1.PROCLIB
=SYSVOLUME=CCAR02
=SYSUNIT=3380
READY
pipe (end ?) l: listdsi tso.exec | take 3 | terminal ? l: | terminal
=SYSDSNAME=DPJOHN.TSO.EXEC
=SYSVOLUME=FS8E70
=SYSUNIT=3380
0
READY
Notes:
1. Data set names follow the TSO conventions. Enclose a name that is fully qualified in
single quotes. The prefix is applied to data set names that are not enclosed in quotes.
Return Codes: The return code is 0, irrespective of the return codes from LISTDSI.
z/OS
                   ┌───────┐
──LISTISPF──pods───┬──────┬┴──
                   └─word─┘
pods:
├──┬─dsname───────────────┬──┤
├─dsname(generation)───┤
├─'dsname'─────────────┤
├─'dsname(generation)'─┤
└─DDname=word──────────┘
Syntax Description: Enclose a fully qualified data set name in single quotes; the trailing
quote is optional. Specify the DSNAME without quotes to have the prefix, if any, applied.
Append parentheses containing a signed number to specify a relative generation of a data
set that is a member of a generation data group. To read the directory of an already
allocated data set, specify the keyword DDNAME= followed by the DDNAME already allo-
cated. The minimum abbreviation is DD=.
Specify a list of one or more member names to restrict the information written to these
members; the default is to write a line for each member of the data set.
Operation: If no member names are specified, listispf writes a record (in alphabetical
order) for each member of the data set. If one or more members are specified, listispf
writes a record for each of those members in the order they are specified.
Commit Level: listispf starts on commit level -2000000000. It opens the DCB and then
commits to level 0.
Premature Termination: listispf terminates when it discovers that its output stream is not
connected.
Examples: To list the directory of the first data set allocated to SYSEXEC:
pipe listispf dd=sysexec | console
ALLOCFPL
EPOP
RT
SC 01.00 19921218 22:08 136 136 0 DPJOHN
TFT
TISP 01.03 19921215 14:57 14 10 0 PIPER
TISPF
Notes:
1. pdslisti is a synonym for listispf.
2. Refer to the usage notes for listpds for information on how to list the members of all
data sets that are allocated to a particular DDNAME.
3. The flag for the member being stowed by the Software Configuration and Library
Manager is not processed.
4. Members written by TSO Pipelines 1.1.9 sublevel 40 (X'0028') and previous versions
are stored with the seconds as 01.
Syntax Description: CMS: Specify as blank-delimited words the file name and the file
type of the file to be read. A file mode or an asterisk is optional; the default is to search
all modes. If the file does not exist with a file name and a file type as entered, the file
name and the file type are translated to upper case and the search is retried.
z/OS: Enclose a fully qualified data set name in single quotes; the trailing quote is
optional. Specify the DSNAME without quotes to have the prefix, if any, applied. Append
parentheses containing a signed number to specify a relative generation of a data set that is
a member of a generation data group.
To read the directory of an already allocated data set, specify the keyword DDNAME=
followed by the DDNAME already allocated. The minimum abbreviation is DD=.
Specify a list of one or more member names to restrict the information written to these
members; the default is to write a line for each member of the data set.
Operation: On CMS, a record is written for each member of the library. Note that
libraries can have more than one member of a particular name.
On z/OS, listpds can write information about selected members. If no member names are
specified, listpds writes a record (in alphabetical order) for each member for the data set.
If one or more members are specified, listpds writes a record for each of those members in
the order they are specified.
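For example, on z/OS, directory information for selected members might be listed like this (the data set and member names are illustrative):

```rexx
/* Write one directory record each for members DCBD and */
/* IHADCB, in the order specified:                      */
"pipe listpds 'sys1.maclib' dcbd ihadcb | console"
```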
Commit Level: On z/OS, listpds starts on commit level -2000000000. It opens the DCB
and then commits to level 0.
Premature Termination: listpds terminates when it discovers that its output stream is not
connected.
Notes:
1. On z/OS, pdsdirect and pdslist are synonyms for listpds.
2. On CMS, listpds supports only simulated libraries that have fixed 80-byte records. It
does not support a partitioned data set on an OS volume.
3. On z/OS, listpds reads the directory of the first data set in a concatenation when the
operand specifies a DDNAME. To read the directories of all data sets in a concat-
enation:
/* Read all directories from a concatenation. */
Signal on novalue
parse upper arg ddname .
exit RC
strliteral can also write its argument string after it has passed the input to the output.
──┬─LITERAL──┬────────┬───────────────────────────────┬──
│ └─string─┘ │
└─STRLITERAL──┬──────────────┬──┬─────────────────┬─┘
└─┤ Keywords ├─┘ └─delimitedString─┘
Keywords:
┌─PREFACE─┐
├──┬─┼─────────┼──┬─────────────┬─┬──┤
¡ │ └─APPEND──┘ └─CONDitional─┘ │
¡ └─IFEMPTY──────────────────────┘
Syntax Description: The string starts after exactly one blank character. Leading and
trailing blanks are significant.
PREFACE Write the output record before passing the input to the output.
APPEND Write the output record after passing the input to the output.
¡¡ CONDITIONAL Write the output record only when there is input. That is, if literal
¡ CONDITIONAL cannot read an input record, it terminates without writing
¡ anything.
¡¡ IFEMPTY Write the literal only when there are no input records.
Operation: literal writes a null record when the parameter string is omitted.
Streams Used: Records are read from the primary input stream and written to the primary
output stream. literal shorts the input to the output after it has written the argument string
to the pipeline.
Record Delay: The first output record is produced before any input is read. Thus, literal
has the potential to delay one record.
Premature Termination: literal terminates when it discovers that its output stream is not
connected.
Examples: A literal 3270 data stream is written twice to the console in full screen mode.
Hit enter twice to continue. (The left brace represents X'C0'):
PIPE literal {BHit Enter or any PF key | dup | fullscr | hole
Notes:
1. Records from a cascade of literal stages appear in the reverse order of their appear-
: ance in the pipeline specification; see Figure 62 on page 34. (This is not always the
: case for strliteral)
2. Use var to write data that contain stage separators, end characters, and other characters
that have a special meaning to the pipeline specification parser.
3. literal may be used to inject a record in front of the file somewhere downstream in a
pipeline, but it can also be a first stage. Note that if you wish to insert a record in
front of a file that comes from disk, you must retain the disk stage as the first in the
pipeline. If not, disk appends the single record to the file instead of reading from the
file.
4. Be careful when literal is used where the contents of a stemmed array are being
updated or in similar situations where the output overwrites the original data source.
Because literal writes the first record before it reads input, this record may be
produced before the input has been read; thus, the first record of the updated object
may be written before it is read, leading to a “destructive overlap”.
¡ 5. literal IFEMPTY may be useful in front of a var stage, to supply a default.
¡ 6. It does not make sense to cascade literal IFEMPTY as the second one will see the first
¡ one’s record and thus never write its literal.
7. Remember that REXX continuation functionally replaces a trailing comma with a blank.
Also recall that when two strings are separated by one or more blanks, REXX concat-
enates them with a single blank. Use the concatenation operator (||) before the
comma at the end of the line if your portrait style has the stage separators at the left
side of the stage and the trailing blank is significant to your application.
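The destructive overlap described in note 4 can be avoided by holding all records in a buffer stage, sketched here for a stemmed array (the array and header are illustrative):

```rexx
/* Prefix a header record to the array LINES.; buffer      */
/* absorbs all records, so the input stem is completely    */
/* read before the output stem stage stores anything back: */
'PIPE (name ADDHDR)',
   'stem lines.',               /* read the array          */
   '| literal *** header ***',  /* record written up front */
   '| buffer',                  /* hold everything to EOF  */
   '| stem lines.'              /* rewrite the same array  */
```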
──LOCATE──┬─────────┬──┬───────┬──┬─────────────┬──┬───────┬──
¡ └─ANYcase─┘ ├─MIXED─┤ └─inputRanges─┘ └─ANYof─┘
¡ ├─ONEs──┤
¡ └─ZEROs─┘
──┬─────────────────┬──
└─delimitedString─┘
Syntax Description:
No input range, a single input range, or one to ten input ranges in parentheses can be
specified. The default is to search the complete input record.
The characters to search for are specified as a delimited string. A null string is assumed
when the delimited string is omitted.
Operation: locate copies records in which the specified string occurs within any of the
specified input ranges to the primary output stream (or discards them if the primary output
stream is not connected). It discards records that do not contain the string within any of
the input ranges or that do not include any positions in any of the specified column ranges
(or copies them to the secondary output stream if it is connected). Thus, it discards null
records.
A null string matches any record. In this case, records selected are long enough to include
the first position of the input range closest to the beginning of the record. This is used to
select records of a given length or longer. Records of a particular length can be selected
by a cascade of locate and nlocate.
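The cascade mentioned above can be sketched like this (the input file is illustrative):

```rexx
/* Select records that are exactly 10 bytes long: locate 10 */
/* keeps length >= 10 (the null string matches any record   */
/* reaching column 10); nlocate 11 discards length >= 11.   */
'PIPE < some file a | locate 10 | nlocate 11 | console'
```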
Streams Used: Records are read from the primary input stream. Secondary streams may
be defined, but the secondary input stream must not be connected.
Record Delay: An input record is written to exactly one output stream when both output
streams are connected. locate strictly does not delay the record.
Commit Level: locate starts on commit level -2. It verifies that the secondary input
stream is not connected and then commits to level 0.
Notes:
1. Use a cascade of locate filters when looking for records containing two or more
strings that may occur in any order.
! 2. CASEANY, CASEIGNORE, CASELESS, and IGNORECASE are all synonyms for ANYCASE.
¡ 3. Specifying MIXED and a mask that contains less than two one bits in any one byte will
¡ cause all records to be rejected.
The reference initially contains records from the secondary input stream; this stream is
read to end-of-file before processing detail records. When ALLMASTER is specified, the
reference contains all records from the secondary input stream, including those that have
duplicate keys; when ALLMASTER is omitted, lookup stores only the first record that has a
particular key in the reference.
The reference can be updated dynamically in several ways while lookup is processing
detail records from the primary input stream:
When AUTOADD is specified, detail records that are not matched are added to the refer-
ence automatically.
Records on the tertiary input stream are added to the reference as they are read.
Records on the quarternary input stream cause the corresponding reference record(s) to
be deleted from the reference.
! Records on the senary input stream replace the corresponding reference record(s).
| A count is maintained in each master record irrespective of the COUNT option. By default,
| one is added for each detail record that matches a particular master record.
──LOOKUP──┬───────┬──┬──────────────────┬──┬───────────┬──
! └─COUNT─┘ └─MAXcount──number─┘ └─INCREMENt─┘
┌─NOPAD─────┐
──┬──────────┬──┬────────────┬──┼───────────┼──┬─────────┬──
└─SETCOUNT─┘ └─TRACKCOUnt─┘ └─PAD──xorc─┘ └─ANYcase─┘
──┬─────────────────────┬──┬─────────┬──┬────────┬──
├─AUTOADD──┬────────┬─┤ └─KEYONLY─┘ └─STRICT─┘
│ └─BEFORE─┘ │
¡ ├─CEILING─────────────┤
¡ └─FLOOR───────────────┘
──┬────────────────────────────┬──
└─inputRange──┬────────────┬─┘
└─inputRange─┘
┌─DETAIL──MASTER──────────────────────┐
──┼─────────────────────────────────────┼──
├─DETAIL──────────────────────────────┤
├─DETAIL──ALLMASTER──┬──────────┬─────┤
│ └─PAIRWISE─┘ │
├─MASTER──┬────────┬──────────────────┤
│ └─DETAIL─┘ │
└─ALLMASTER──┬──────────────────────┬─┘
└─DETAIL──┬──────────┬─┘
└─PAIRWISE─┘
Type: Sorter.
Syntax Description: Arguments are optional. The arguments are in three groups:
Keywords that specify variations on processing.
COUNT A count of matching details is kept with the master record. The count is
prefixed to the master record before it is written to the tertiary output
stream and the quarternary output stream. When COUNT is omitted, only
master records that have a count of zero are written to these two output
streams.
!! MAXCOUNT A master record is deleted after it has been matched by a detail record
! when its match count is equal to or exceeds the specified number. The
! number must be positive.
INCREMENT Records on the primary input stream contain the increment in the first
ten columns. The number may be negative; it may have leading or
trailing blanks. A blank field represents the default increment, one.
This number is added to the master’s count when the record is matched.
The prefix is deleted before the record is matched and written to an
output stream; it should be ignored when specifying the range for the
key field in the detail record.
SETCOUNT Records on the secondary input stream and the tertiary input stream
contain the initial count in the first ten columns. The number must be
zero or positive; it may have leading or trailing blanks. A blank field
represents the default starting count, zero. The prefix is deleted before
the record is matched and entered into the reference; it should be
ignored when specifying the range for the key field in the master record.
TRACKCOUNT The current count after it has been incremented is prefixed to the master
record before it is written to the primary output stream, the tertiary
output stream, and the quarternary output stream.
NOPAD Key fields that are partially present in a record must have the same
length to be considered equal; this is the default.
PAD Specify a pad character that is used to extend the shorter of two key
fields.
ANYCASE Ignore case when comparing fields; the default is to respect case.
AUTOADD Unmatched details are added to the reference after they have been
written to the secondary output stream. If two input ranges are
specified, they must be identical.
BEFORE When AUTOADD BEFORE is specified, the detail record is added to the
reference before it is tested, so that it will always be found. Thus, the
count will be one when the record is added to the reference; it will be
zero when BEFORE is omitted.
¡¡ CEILING A detail matches a master record or the master record that has the next
¡ higher key. The input record is unmatched only when its key is
¡ larger than the highest master key.
¡¡ FLOOR A detail matches a master record or the master record that has the next
¡ lower key. The input record is unmatched only when its key is
¡ lower than the lowest master key.
KEYONLY Only the key field is stored in the reference file. Thus, only the key is
available to be written to the primary output stream, the tertiary output
stream, and the quarternary output stream.
STRICT When records are available simultaneously on more than one input
stream, process records from the tertiary input stream before records
from the quarternary input stream before records from the primary input
stream.
Input ranges, which specify the location of the key fields in the detail and master
records.
The first input range specifies the location of the key in records from the primary input
stream (the detail records); the complete input record is the default. The second input
range specifies the location of the key in records that are read from other input streams
(the master records). If the second range is omitted, the first range is used for the
master records as well. When AUTOADD is specified, the second range must be
omitted or must represent the same range as the first one.
Keywords that specify how records are written to the primary output stream when a
detail record contains a key that is also in the reference.
DETAIL ALLMASTER PAIRWISE
             Duplicate master records are kept. For each master record having the
             selected key, write a copy of the detail record followed by the master
             record to the primary output stream.
MASTER       Duplicate master records are discarded. Write the matching reference to
             the primary output stream. The matching detail record is discarded.
MASTER DETAIL
             Duplicate master records are discarded. Write the matching reference
             followed by the detail record to the primary output stream.
ALLMASTER    Duplicate master records are kept. Write all matching master records to
             the primary output stream. The matching detail record is discarded.
ALLMASTER DETAIL
             Duplicate master records are kept. Write all matching master records
             followed by the detail record to the primary output stream.
ALLMASTER DETAIL PAIRWISE
             Duplicate master records are kept. For each master record having the
             selected key, write the master record followed by a copy of the detail
             record to the primary output stream.
Operation: The secondary input stream is read and stored as the initial reference before
the other streams are read. When ALLMASTER is specified, all master records are stored.
When ALLMASTER is omitted, records on the secondary input stream that have duplicate
keys are passed to the quinary output stream (if it is defined and connected) or discarded;
the first record that has a particular key is retained.
The other input streams are then read as records arrive and processed in this way:
Primary Input Stream: When a record is read on the primary input stream, the contents of
the first input range are used as the key. The key field of this detail record is looked up in
the reference. When there is no matching master record, the detail record is passed to the
secondary output stream (if it is connected). When there is a matching master record, one
or more records are written to the primary output stream in the order specified by the
keywords DETAIL, MASTER, ALLMASTER, or PAIRWISE. The default is to write the detail
! record followed by the master record. When MAXCOUNT is specified, an automatic delete
! is triggered when the count of matches reaches or exceeds the specified number.
Tertiary Input Stream: When a record is read on the tertiary input stream, the contents of
the second input range are used as the key. The record is added to the reference if there is
not already a record in the reference for the key. The record is also added if ALLMASTER
is specified. Otherwise the record is a duplicate and it is passed to the quinary output
stream (if it is defined and connected).
Quarternary Input Stream: When a record is read on the quarternary input stream, the
contents of the second input range are used as the key. The corresponding records are
deleted from the reference. If there is no matching master record, the input record is
passed to the senary output stream (if it is defined and connected). If the quarternary
output stream is connected, the corresponding reference record(s) are written to this stream
according to the rules stipulated for COUNT; the input record is then discarded. Once the
records are deleted from the reference, the count is lost; a subsequent record on the tertiary
input stream will start with a reference count of zero.
! Quinary Input Stream: When a record is read on the quinary input stream, the master
! file is reset. Before resetting it, lookup writes records describing the contents of the
! master file to the quarternary output stream, as it does to the tertiary output stream prior
! to termination. All streams remain connected. The record is then discarded.
! Senary Input Stream: When a record is read on the senary input stream, it is treated as if
! the record were passed first to the quarternary input stream and then to the tertiary input
! stream.
At end-of-file on all input streams, all streams other than the tertiary output stream are
severed. The contents of the reference (originally from the secondary input stream and
tertiary input stream) are then written to the tertiary output stream (if it is connected) in
ascending order by their keys. Without the COUNT option, only unreferenced master
records are written (those not matched by at least one detail record). When COUNT is
specified, all master records are written to the tertiary output stream; they have a 10-byte
prefix containing the count of primary input records that matched the key of the master
record. Unreferenced records have a count of zero.
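For example, with COUNT and AUTOADD a lookup stage can tally how often each word
occurs in a file (a sketch; the input file name is hypothetical):
/* Count word occurrences */
'PIPE (end ?)',
'?< input text',                   /* Read the text */
'|split',                          /* One word per record */
'|l: lookup count autoadd',        /* New words become masters */
'|hole',                           /* Matched details not needed */
'?l:',                             /* Unmatched details (first occurrences) */
'|hole',                           /* Not needed either */
'?l:',                             /* All masters at end-of-file */
'|console'                         /* 10-byte count, then the word */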
Streams Used: Two to six streams can be defined; with AUTOADD, the secondary streams
! need not be defined. If it is defined, the senary input stream must not be connected.
In Figure 388, the inside of the box shows how records on output streams are derived
from input streams, except for the master record being written to the primary output
stream; it also shows how end-of-file propagates forward.
All records are read from the secondary input stream before lookup reads from other input
streams.
When the tertiary output stream is connected, lookup severs all other streams at end-of-file
on all input streams. It then writes the unreferenced master records to the tertiary output
stream (or all master records if COUNT is specified). When both ALLMASTER and COUNT
are specified, the count of matching detail records is prefixed to all master records; thus,
the sum of the counts will in general be larger than the count of detail records.
lookup propagates end-of-file from the primary input stream to the primary output stream
and the secondary output stream; it propagates end-of-file on the first three output streams
to the primary input stream; it propagates end-of-file from the tertiary input stream to the
quinary output stream; it propagates end-of-file from the quarternary input stream to the
! quarternary and senary output streams; it ignores end-of-file on the quinary input stream.
Record Delay: lookup features a chaotic delay structure. It does not delay records from
the primary input stream (that is, detail records written to the primary output stream or the
secondary output stream); nor does it delay records written to the quinary and the senary
output stream. Records are written to the tertiary output stream after end-of-file on all
input streams; thus, these records are delayed to end-of-file. Records are written to the
quarternary output stream before the corresponding input record is consumed from the
quarternary input stream.
Commit Level: lookup starts on commit level -2. It verifies that the quinary and senary
input streams are not connected and then commits to 0.
Examples: The generic EXEC that uses lookup with two input streams and three output
streams:
/* Dictionary lookup */
'PIPE (end ?)',
'?< detail records', /* Read details */
'|l: lookup 1.10', /* Look up first ten columns */
'|> matching records a', /* Details followed by masters */
'?< master records', /* Read master file */
'|l:', /* Secondary streams */
'|> unmatched details a', /* Details that didn't match */
'?l:', /* Tertiary streams */
'|> unreferenced masters a' /* Masters that were not referenced */
To find all words in the primary input stream that are not in the file WORD LIST:
/* CKWB REXX */
'callpipe (end ?)',
'|*:', /* Input stream */
'|split', /* Make words */
'|l:lookup', /* Look them up */
'?< word list', /* Word list */
'|split', /* One word per line */
'|l:', /* Into master */
'|*:' /* Words not in list to output */
Note that the primary output stream from lookup is not connected, but that both secondary
streams are connected. Also note that there is only one end character in this pipeline
specification. The master file is passed into the label reference; the unmatched details
come out of the label reference.
To select the first occurrence of each key within the file and not delay the record:
/* Now find uniques */
'callpipe (end ?) *: | l:lookup 1.5 autoadd keyonly ? l: | *:'
The only connected streams are the primary input stream and the secondary output stream.
Thus, the first time a particular key occurs, the detail record will not be matched; it is
written to the secondary output stream. It is also added to the reference so that subsequent
occurrences of that particular key will match; those records are thus discarded.
lookup does not reorder the records. That is, the output is in the same order as the
input (except, of course, that some records are discarded).
lookup does not delay the record.
lookup propagates end-of-file backwards, whereas sort cannot produce output until it
has read the entire file.
lookup stores only the key field, whereas sort must store the entire record. For long
records with short keys, this may mean that lookup can process larger files than sort
can.
However, when the entire file is processed (and the key field is not significantly shorter
than the record), lookup requires as much storage as sort UNIQUE and performance will be
similar.
In a service machine that maintains privileges for its clients, the immediate commands ADD
and DELETE add and delete authorisations dynamically:
/* Simplistic server */
'CP SET SMSG IUCV' /* Enable commands */
'PIPE (end ? name LOOKUP.STAGE:489)',
'?starmsg:', /* Requests here */
'|not chop 8', /* Drop message class */
'|l:lookup count 1.8 master detail', /* See if allowed */
'|...', /* Do it! */
'?< auth file', /* Current authorisations */
'|l:',
'|timestamp 16', /* Let's remember when */
'|>> unauth attempts', /* Log hacking attempts */
'?immcmd add', /* Immediate commands */
'|spec w1 1 w2-* 9', /* Build key */
'|xlate 1.8 upper', /* Uppercase user ID */
'|l:', /* Add to reference */
'|not chop 10', /* Delete count */
'|> auth file a', /* Save updated master */
'?immcmd delete', /* Immediate command */
'|spec w1 1.8', /* Just the user id */
'|xlate', /* Uppercase it */
'|l:', /* And remove from ref */
'|insert /Deleted: /', /* Add some text */
'|console'
In this example, the first four input streams to lookup are connected. Requests from the
users arrive on the primary input stream; the existing authorisations are read into the
secondary input stream; new users are authorised by records on the tertiary input stream;
and authorisations are dropped by records on the quarternary input stream.
COUNT is specified to have all master records written when the server terminates; but the
count is discarded by the chop stage that is connected to the secondary output stream.
Thus, the current master file can be saved when the lookup stage terminates.
The example should not be taken as an example in writing a robust server since it ignores
issues such as terminating the server and recovery in the event of a system crash (any
added or deleted authorisations would be lost if the virtual machine were reset).
Notes:
1. The keyword COUNTDBG is intended to test lookup. Output records have four charac-
ters prefixed to the count field.
2. For compatibility with the past, BOTH specifies the default of writing the detail record
followed by the master record to the primary output stream. MATCHING has the same
effect as DETAIL; only records from the primary input stream are written to the
primary output stream.
3. When the keyword NOPAD is used, key fields must be of the same length to match.
Use PAD to specify a character to extend the shorter of two fields when comparing
them.
4. Unless ANYCASE is specified, key fields are compared as character data using the IBM
System/360 collating sequence. Use spec (or a REXX program) to put a sort key first
in the record if you wish, for instance, to use a numeric field that is not aligned to the
right within a column range. Use xlate to change the collating sequence of the file.
! 5. CASEANY, CASEIGNORE, CASELESS, and IGNORECASE are all synonyms for ANYCASE.
6. lookup supports only one key field (unlike sort). Use spec to gather several key fields
into one in front of the original record.
7. Counters are “sticky” at zero and the maximum value for a fullword integer. That is,
when a counter is decremented below zero, its value is forced to zero; a counter is not
incremented beyond 2147483647 (2**31-1).
! 8. The five counting options are independent to allow you complete control. In partic-
! ular, COUNT is not implied by any of the other four.
Thus INCREMENT without COUNT or TRACKCOUNT causes a counter field to be vali-
dated as a number and then be deleted from the input detail record. SETCOUNT works
similarly for the input master record.
9. When COUNT is omitted, a master record is written to the tertiary output stream or to
the quarternary output stream only when its count is zero. When INCREMENT is
specified and the increment is zero, the reference count will remain zero and thus the
master record will be considered to be unreferenced. Similarly, if SETCOUNT is
specified and a master record is added with a nonzero reference count, that record is
not written to the tertiary output stream or to the quarternary output stream, even when
there are no matching details.
10. When the primary input stream and the secondary input stream are derived from the
same source, for example, a selection stage or even another lookup, you must buffer
the primary input stream to avoid a stall:
/* strange lookup */
'PIPE (end ?)',
'?... ',
'| x: locate /oscar/ ',
'| buffer ',
'| l: lookup ',
'| ... ',
'? x: ',
'| l:'
Be sure that AUTOADD cannot perform the task.
! 11. You can replace a record in the reference by passing the replacement on the senary
! input stream, or you can pass the new master record first to the quarternary input
stream and then to the tertiary input stream. This destroys the count; to keep the
count, you must use the COUNT and SETCOUNT options and prefix the first ten charac-
ters of the output record to the new master before it is passed to the tertiary input
stream (or somehow guess what the count should be and pass that value to the senary
input stream).
¡ 12. AUTOADD, CEILING, and FLOOR are mutually exclusive.
¡ 13. FLOOR and CEILING are useful, for example, to find the control section that contains a
¡ particular address to relate trace data to a load map. It might be easiest to convert the
¡ keys to binary, but other transforms are possible.
! 14. Storage for master records that are deleted by passing a record on the quarternary
! input stream is not reclaimed until lookup terminates.
! When the master record is replaced by passing a record to the senary input stream, the
! existing storage is reused if the two records are the same length, rounded to the next
! multiple of four. Any additional master records for the particular key are not
! reclaimed.
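Note 13 might be sketched like this (file names are hypothetical; both files are assumed to
carry addresses as fixed-width hexadecimal with leading zeros, so that character compar-
ison agrees with numeric order):
/* Relate trace addresses to control sections */
'PIPE (end ?)',
'?< trace addrs',                        /* Hex address in columns 1-8 */
'|l: lookup floor 1.8 master detail',    /* CSECT with next lower origin */
'|console',                              /* CSECT record, then trace record */
'?< load map',                           /* Hex origin in columns 1-8 */
'|l:'                                    /* Into the reference */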
──MACLIB──┬──────┬──
└─word─┘
Syntax Description: A word is optional. It specifies the delimiter word that separates
members in the input stream. '*COPY' is the default.
Operation: maclib first writes an 80-byte placeholder record indicating a null library to
the primary output stream. It then writes the members of the MACLIB with a delimiter
record (X'61FFFF61') after each member. 80-byte directory records (having 16-byte
entries) are written to the secondary output stream (if it is connected), one for every five
members.
At end-of-file on input, the final directory record is written to the secondary output stream
and the correct record 1 for the library is written to the tertiary output stream (if it is
connected).
Input Record Format: The format is similar to the format of a file with file type COPY,
as used by the CMS command MACLIB. The input stream has one or more members, each
preceded by a delimiter record with the delimiter word in column 1. The member name is
the second word of a line beginning with the delimiter word; the remainder of the line is
ignored.
Streams Used: One to three streams may be defined. Records are read from the primary
input stream; no other input stream may be connected. maclib writes output to all
connected output streams. It severs the primary output stream at end-of-file on input
before it writes to the secondary output stream. It severs the secondary output stream
before it writes to the tertiary output stream.
Record Delay: The first output record is produced before any input is read. Thus, maclib
has the potential to delay one record. Records are written to the primary output stream
before they are consumed from the primary input stream.
Commit Level: maclib starts on commit level -2. maclib verifies that the primary input
stream is the only connected input stream and then commits to 0.
Examples: Refer to
Notes:
1. To create a CMS macro library, the secondary output stream (which contains the direc-
tory) should be buffered and appended to the contents of the primary output stream;
this aggregate stream should be connected to the primary input stream of the > stage
writing the library; maclib’s tertiary output stream should be connected to the
secondary input stream of >.
2. maclib generates the data records to be put into a library. It accesses no host inter-
face; in particular, it does not write the library to disk.
3. maclib cannot generate the original (CDF) VM/370 library format.
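The arrangement described in note 1 might be sketched like this (file names are hypo-
thetical):
/* Build MYLIB MACLIB from MYLIB COPY */
'PIPE (end ?)',
'?< mylib copy',          /* Members with *COPY delimiters */
'|m: maclib',             /* Generate library records */
'|f: fanin',              /* Data records first... */
'|w: > mylib maclib a',   /* Write the library */
'?m:',                    /* Directory records */
'|buffer',                /* Hold until members are done */
'|f:',                    /* ...then the directory */
'?m:',                    /* Correct record 1 */
'|w:'                     /* Overwrite the placeholder */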
!! CMS
! ┌─FETCH──┐
! ──MAPMDISK──┬─DEFINE──┼────────┼─┬──
! │ ├─RETAIN─┤ │
! │ └─ZERO───┘ │
! ├─IDENTIFY───────────┤
! ├─REMOVE─────────────┤
! └─SAVE───────────────┘
! Syntax Description:
! DEFINE
! ──hexString──number──number──number──
! Each record specifies the mapping of a single data space that must have
! been created by your virtual machine.
! 1. The ASIT for the data space that blocks are mapped into.
! 2. The number of the first page to map in decimal (zero or positive).
! The number is multiplied by 4096 before it is stored in the param-
! eter list.
! 3. The count of pages to map in decimal (positive).
! 4. The number of the first pool block to map in decimal (zero or posi-
! tive).
!
! IDENTIFY
! ──devaddr──number──number──
! Each record defines one minidisk extent in the pool. The pool block
! numbers are assigned sequentially from zero as the input is read.
! 1. The device number.
! 2. The block offset (decimal, zero or positive), which can be obtained
! by diskid.
! 3. The count of blocks (positive decimal), which can be obtained by
! state at the time the minidisk is reserved.
!
! REMOVE
! ──hexString──number──number──
! Streams Used: Records are read from the primary input stream and written to the primary
! output stream. Null and blank input records are discarded.
! Commit Level: mapmdisk starts on commit level 0 or -2. In general, it starts on commit
! level 0, but mapmdisk SAVE starts on commit level -2, establishes an external interrupt
! handler, and then commits to level 0.
! Premature Termination: mapmdisk terminates when it discovers that its output stream is
! not connected. mapmdisk SAVE also stops if the immediate command PIPMOD STOP is
! issued or a record is passed to pipestop.
! Examples: See Chapter 18, “Using VM Data Spaces with CMS Pipelines” on page 207.
! Notes:
! 1. The virtual machine must be in XC mode (this excludes z/CMS).
! 2. All minidisk mappings are destroyed by an IPL of the virtual machine.
──MCTOASA──
Type: Filter.
Input Record Format: The first column of the record is a machine carriage control char-
acter:
xxxx x001 Write the data part of the record and then perform the carriage operation
specified by the five leftmost bits.
xxxx x011 Perform the carriage operation defined by the five leftmost bits imme-
diately (the data part of the record is ignored).
000n n0x1 Space the number of lines (0 through 3) specified by bits 3 and 4.
1nnn n0x1 Skip to the channel specified by bits 1 through 4. The number must be
in the range 1 to 12 inclusive.
Other bit combinations are not valid. In particular, bit 5 (X'04') must be zero.
Output Record Format: The first column of the record is an ASA carriage control char-
acter:
Streams Used: Records are read from the primary input stream and written to the primary
output stream. Null input records are discarded.
Record Delay: When the file has machine carriage control characters, the carriage control
is delayed to the following record; the data part of the record is not delayed.
Premature Termination: mctoasa terminates when it discovers that its output stream is
not connected.
Examples: To discard the two heading lines from an assembler listing file:
...| mctoasa | outside /1/ 2 |...
The System Assembler generates the listing file with ASA carriage control; assembler H
uses machine carriage control. mctoasa ensures that the listing file has ASA carriage
control characters in either case.
Red Neon!
Improper use of mdiskblk WRITE may result in an unreadable minidisk.
CMS
                          ┌─READ─┐          ┌──────────┐
──MDISKBLK──┬─┬────────┬──┴──────┴──letter──┴─┬───────┬┴─┬──
            │ └─NUMBER─┘                      └─range─┘  │
            └─WRITE──letter──────────────────────────────┘
Operation: When mdiskblk is reading blocks from a minidisk, the blocks specified in the
argument string (if any) are read into the pipeline. The blocks specified in input records
are then read. If NUMBER is specified, each output record is prefixed with a 10-byte field
containing the block number of the record.
Input Record Format: When mdiskblk is reading blocks from a minidisk, input records
must contain blank-delimited ranges that specify the blocks to read from the minidisk. A
range that ends with an asterisk (for example, 1-*) extends to the end of the minidisk (or
to wherever it has been recomputed with FORMAT RECOMP); a range that specifies a block
number beyond the end of the minidisk attracts an error message and causes mdiskblk to
terminate.
When mdiskblk is writing blocks to the minidisk, input records must contain the blocks to
write prefixed by a 10-byte record number (in printable decimal). The input record length
must be ten more than the disk block size.
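Because NUMBER produces records in exactly the format that mdiskblk WRITE expects, a
block can be read, edited, and rewritten (a sketch; remember that improper use of
mdiskblk WRITE may make the minidisk unreadable):
/* Read block 3, change it, write it back */
'PIPE',
' mdiskblk number a 3',      /* Block with 10-byte number prefix */
'|change 11-* /old/new/',    /* Same-length change keeps the block size */
'|mdiskblk write a'          /* Rewrite the block */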
Streams Used: Records are read from the primary input stream and written to the primary
output stream. Null and blank input records are discarded.
Record Delay: mdiskblk writes all output for an input record before consuming the input
record.
Premature Termination: mdiskblk terminates when it discovers that its output stream is
not connected.
Examples: To read the label record of a minidisk on a count key data device:
pipe mdiskblk a 3 | spec 5.6 1 | console
EOF *
Ready;
To read the top pointer block from a file (or the only data block, when the file contains
only one block):
/* Read top pointer */
'PIPE',
' literal PROFILE EXEC A ',
'|state noformat ',
'|spec 41.4 c2d 1',
'|mdiskblk a',
'|> prof pointer a'
This example fails if the file consists entirely of binary zeros; CMS may elect not to write
any disk blocks for such a file.
Notes:
1. The minidisk must be accessed even though the CMS file system is bypassed.
2. Specify the actual ending block number to read a complete minidisk, including the
part of the disk that is reserved for a nucleus.
CMS
| ──MDSKfast──fn──ft──┬───────────────────────────┬──
│ ┌─Variable──────────┐ │
└─fm──┼───────────────────┼─┘
└─Fixed──┬────────┬─┘
└─number─┘
Warning: mdsk behaves differently when it is a first stage and when it is not a first stage.
Existing data can be overlaid when mdsk is unintentionally run other than as a first stage.
To use mdsk to read data into the pipeline at a position that is not a first stage, specify
mdsk as the argument of an append or preface control. For example, |append mdsk ...|
appends the data produced by mdsk to the data on the primary input stream.
Syntax Description: Specify as blank-delimited words the file name and the file type of
the file to be read or appended to. A file mode or an asterisk is optional; the default is to
search all modes. If the file does not exist with a file name and a file type as entered, the
file name and the file type are translated to upper case and the search is retried. No further
arguments may be specified when mdsk is first in a pipeline.
When mdsk is not first in a pipeline, the file is created as A1 if no file mode (or an
asterisk) is specified and no file is found with the name and type given. The record format
and (for fixed format files) the record length are optional arguments. The default is the
characteristics of an existing file when appending, VARIABLE when a file is being created.
When the file exists, the specified record format must match the characteristics of the file.
Operation: When mdsk is first in a pipeline, reading starts at the beginning of the file.
When mdsk is not first in a pipeline, mdsk appends records from the primary input stream
to an existing file. The file is closed before mdsk terminates.
Streams Used: When mdsk is first in a pipeline, it writes records to the primary output
stream.
When mdsk is not first in a pipeline, it first appends to or creates the file from records on
the primary input stream that are not null; all input records are also copied to the primary
output stream. The primary output stream is severed at end-of-file on the primary input
stream. The first records of the file are then overwritten with any records from the
secondary input stream that are not null. All records from the secondary input stream are
copied to the secondary output stream after they are written to the file.
Warning: When the secondary input stream is connected, records read from it must have
the same length as the records they replace in the file, but this is not enforced by CMS for
variable record format files; CMS truncates a variable record format file without indication
of error if a record is replaced with one of different length, be that shorter or longer.
See Also: >, >>, <, diskslow, diskback, diskrandom, diskupdate, members, and pdsdirect.
Examples: To count the number of words in a file:
pipe mdsk input file | count words | console
Notes:
1. Use diskslow if mdsk fails to operate.
2. Use diskslow to begin to read or write from a particular record; use diskrandom to
read records that are not sequential; use diskupdate to replace records in random order.
3. Null input records are copied to the output (if connected), but not to the file; CMS files
cannot contain null records.
4. mdsk can read or append to a file with a name in mixed case (if you enter the exact
file name and file type), but it creates only files with file names and file types in upper
case. Use command RENAME to change a file’s name or type to mixed case.
5. When it is first in a pipeline, mdsk may obtain several records from CMS at a time.
When it is not first in a pipeline and it is processing records from the primary
input stream, mdsk may deliver several records at a time to CMS to improve perform-
ance. The file may not be in its eventual format while it is being created; it should
not be accessed (by any means) before mdsk terminates. It is unspecified how many
records mdsk buffers, as well as the conditions under which it does so.
6. Connect the secondary input stream when creating CMS libraries or packed files where
the first record has a pointer to the directory or contains the unpacked record length of
a packed file. The stage that generates the file (for instance, maclib) can write a
placeholder first record on the primary output stream initially; it then writes the real
first record to a stream connected to the secondary input stream of mdsk when the
complete file has been processed and the location and size of the directory are known.
7. The fast interface to the file system is bypassed if the bit X'10' is on in offset X'3D'
of the FST that is exposed by the FSSTATE macro. Products that compress files on the
fly or in other ways intercept the file system macros should turn on this bit to ensure
that CMS/TSO Pipelines uses documented interfaces only.
Return Codes: In addition to the return codes associated with CMS/TSO Pipelines error
messages, mdsk is transparent to return codes from CMS. Refer to the return codes for the
FSREAD macro and the FSWRITE macro in VM/ESA CMS Application Development Refer-
ence for Assembler, SC24-5453, for a complete list of return codes. You are most likely to
encounter these:
1 You do not have write authority to the file.
13 The disk is full.
16 Conflict when writing a buffer; this indicates that a file with the same name has
been created by another invocation of disk.
20 The file name or file type contains an invalid character.
24 The file mode is not valid.
25 Insufficient storage for CMS to allocate buffers.
CMS
──DISKBACK──fn──ft──┬────┬──
                    └─fm─┘
Syntax Description: Specify as blank-delimited words the file name and the file type of
the file to be read. A file mode or an asterisk is optional; the default is to search all
modes. If the file does not exist with a file name and a file type as entered, the file name
and the file type are translated to upper case and the search is retried.
Premature Termination: mdskback terminates when it discovers that its output stream is
not connected.
Examples: To read the last message from a notebook file and append it to the file being
edited:
/* GETLAST XEDIT */
arg fn .
'bottom'
'pipe diskback' word(fn 'all', 1) 'notebook a |',
'tolabel ==========|',
'instore reverse |',
'outstore |',
'xedit'
Notes:
1. For short files it can be more efficient to read the file with < and use instore REVERSE
followed by outstore to reverse the order of the records in a file.
2. mdskback may obtain several records from CMS at a time. It is unspecified how many
records mdskback buffers, as well as the conditions under which it does so.
Return Codes: In addition to the return codes associated with CMS/TSO Pipelines error
messages, mdskback is transparent to return codes from CMS. Refer to the return codes for
the FSREAD macro in VM/ESA CMS Application Development Reference for Assembler,
SC24-5453, for a complete list of return codes. You are most likely to encounter these:
20 The file name or file type contains an invalid character.
24 The file mode is not valid.
25 Insufficient storage for CMS to allocate buffers.
CMS
──DISKRANDom──fn──ft──┬──────────────────────────────────────────────┬──
                      │                                  ┌─────────┐ │
                      └─fm──┬─────────┬──┬────────┬──────┬───────┬─┴─┘
                            └─BLOCKed─┘  └─NUMBER─┘      └─range─┘
Syntax Description: Specify as blank-delimited words the file name and the file type of
the file to be read. A file mode or an asterisk is optional; the default is to search all
modes. Use an asterisk as a placeholder for the file mode when you wish to specify
further arguments and search all accessed modes. If the file does not exist with a file
name and a file type as entered, the file name and the file type are translated to upper case
and the search is retried.
BLOCKED Write a range of records from the file as a single output record; the file
must have fixed record format.
NUMBER Prefix the record number to the output record. The field is ten characters
wide; it contains the number with leading zeros suppressed.
range Further arguments are ranges of records to be read. Use an asterisk as
the end of a range to read to the end of the file.
Operation: The records whose numbers are specified in the argument are read into the
pipeline. Lines are then read from the input stream (if it is connected). Input records
contain blank-delimited words that specify ranges of records to read from the file. Output
records are written in the order specified in the argument string and input records. The file
is closed before mdskrandom terminates.
Streams Used: Records are read from the primary input stream and written to the primary
output stream. Null and blank input records are discarded.
Record Delay: mdskrandom does not delay the last record written for an input record.
An input record that contains a single number is not delayed. Nor is an input record that
contains a single range, when BLOCKED is specified.
Premature Termination: mdskrandom terminates when it discovers that its output stream
is not connected.
See Also: >, >>, <, disk, diskback, diskslow, members, and pdsdirect.
Examples: Both of these commands read records 7, 8, 3, and 1 from a file and write
them to the pipeline in that order:
pipe diskrand profile exec * 7.2 3 1 |...
pipe literal 3 1 | diskrand profile exec * 7.2 |...
Notes:
1. RECNO is a synonym for NUMBER.
2. mdskrandom performs at least one read operation for the records in the arguments, if
specified, and one read operation for each input record. When BLOCKED is specified,
all records in a range are read in a single operation. It is unspecified how many
additional read operations it performs for records specified in the arguments or a
particular input record. This may be significant when the file is updated with
diskupdate. Ensure that no stage delays the record between stages reading and writing
a file being updated.
Return Codes: In addition to the return codes associated with CMS/TSO Pipelines error
messages, mdskrandom is transparent to return codes from CMS. Refer to the return codes
for the FSREAD macro in VM/ESA CMS Application Development Reference for Assembler,
SC24-5453, for a complete list of return codes. You are most likely to encounter these:
20 The file name or file type contains an invalid character.
24 The file mode is not valid.
25 Insufficient storage for CMS to allocate buffers.
When it is first in a pipeline, mdskslow reads a file from disk; it treats a file that does not
exist as one with no records (a null file). When it is not first in a pipeline, mdskslow
appends records to an existing file; a file is created if one does not exist.
Use mdskslow rather than disk:
-  If a file is to be written unblocked. This may help to identify which record causes an
   error (for example, a program check) in a previous or subsequent stage.
-  When writing a file from several stages concurrently. (It may be a better idea,
   however, to use faninany to gather the streams and write with a single disk stage.)
-  To begin reading or writing from a particular record number.
CMS
──DISKSLOW──fn──ft──┬─────────────────────────────────────────────┬──
                    │                      ┌─Variable──────────┐  │
                    └─fm──┬──────────────┬─┼───────────────────┼──┘
                          └─FROM──number─┘ └─Fixed──┬────────┬─┘
                                                    └─number─┘
Warning: mdskslow behaves differently when it is a first stage and when it is not a first
stage. Existing data can be overlaid when mdskslow is unintentionally run other than as a
first stage. To use mdskslow to read data into the pipeline at a position that is not a first
stage, specify mdskslow as the argument of an append or preface control. For example,
|append mdskslow ...| appends the data produced by mdskslow to the data on the
primary input stream.
Syntax Description: Specify as blank-delimited words the file name and the file type of
the file to be read or appended to. A file mode or an asterisk is optional; the default is to
search all modes. If the file does not exist with a file name and a file type as entered, the
file name and the file type are translated to upper case and the search is retried. The
keyword FROM is optional after the file mode; the following word specifies the number of
the first record to read or write; the defaults are to read from the beginning of the file and
to append after the last record of the file. No further arguments may be specified when
mdskslow is first in a pipeline.
When mdskslow is not first in a pipeline, the file is created as A1 if no file mode (or an
asterisk) is specified and no file is found with the name and type given. The record format
and (for fixed format files) the record length are optional arguments. The default is the
characteristics of an existing file when appending, VARIABLE when a file is being created.
When the file exists, the specified record format must match the characteristics of the file.
Operation: mdskslow is similar to disk, but it uses the FSREAD and FSWRITE interface to
the file system to read and write records, issuing a call for each record. The file is closed
before mdskslow terminates.
Streams Used: When mdskslow is first in a pipeline, it writes records to the primary
output stream.
When mdskslow is not first in a pipeline, it first appends to or creates the file from records
on the primary input stream that are not null; all input records are also copied to the
primary output stream. The primary output stream is severed at end-of-file on the primary
input stream. The first records of the file are then overwritten with any records from the
secondary input stream that are not null. All records from the secondary input stream are
copied to the secondary output stream after they are written to the file.
Warning: When the secondary input stream is connected, records read from it must have
the same length as the records they replace in the file, but this is not enforced by CMS for
variable record format files; CMS truncates a variable record format file without indication
of error if a record is replaced with one of different length, be that shorter or longer.
See Also: >, >>, <, disk, diskback, diskrandom, diskupdate, members, and pdsdirect.
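Examples: To read a file beginning with a particular record (the file name is
illustrative; this sketch assumes the FROM keyword behaves as described above):
pipe diskslow profile exec a from 5 | console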
Notes:
1. Null input records are copied to the output (if connected), but not to the file; CMS files
cannot contain null records.
2. mdskslow does not buffer or block reads or writes to CMS files.
3. Connect the secondary input stream when creating CMS libraries or packed files where
the first record has a pointer to the directory or contains the unpacked record length of
a packed file. The stage that generates the file (for instance, maclib) can write a
placeholder first record on the primary output stream initially; it then writes the real
first record to a stream connected to the secondary input stream of mdskslow when the
complete file has been processed and the location and size of the directory are known.
Return Codes: In addition to the return codes associated with CMS/TSO Pipelines error
messages, mdskslow is transparent to return codes from CMS. Refer to the return codes for
the FSREAD macro and the FSWRITE macro in VM/ESA CMS Application Development
Reference for Assembler, SC24-5453, for a complete list of return codes. You are most
likely to encounter these:
1 You do not have write authority to the file.
13 The disk is full.
16 Conflict when writing a buffer; this indicates that a file with the same name has
been created by another invocation of disk.
20 The file name or file type contains an invalid character.
24 The file mode is not valid.
25 Insufficient storage for CMS to allocate buffers.
CMS
──DISKUPDAte──fn──ft──┬─────────────────────────────┬──
                      │     ┌─Variable──────────┐   │
                      └─fm──┼───────────────────┼───┘
                            └─Fixed──┬────────┬─┘
                                     └─number─┘
Syntax Description: Specify as blank-delimited words the file name and the file type of
the file to be updated. A file mode or an asterisk is optional; the default is to search all
modes. If the file does not exist with a file name and a file type as entered, the file name
and the file type are translated to upper case and the search is retried. The record format
is optional after the file mode; a record length is optional for fixed record format files.
Warning: Replacement records must have the same length as the records they replace in
the file, but this is not enforced by CMS for variable record format files; CMS truncates
a variable record format file without indication of error if a record is replaced with one
of different length, be that shorter or longer.
Operation: Columns 11 through the end of the input record replace the contents of the
record in the file. The file is closed before mdskupdate terminates.
Input Record Format: The first 10 columns of an input record contain the number of the
record to replace in the file (the first record has number 1). Leading and trailing blanks
are acceptable; the number need not be aligned in the field. It is an error if an input
record is shorter than 11 bytes.
The valid values for the record number depend on the record format of the file:
Fixed For fixed record format files, any number can be specified for the record
number (CMS creates a sparse file if required). An input record can contain
any number of consecutive logical records as a block. The block has a single
10-byte prefix containing the record number of the first logical record in the
block.
Variable When the file has variable record format, the record number must at most be
one larger than the number of records in the file at the time the record is
written to it. The data part of input records must have the same length as the
records they replace in the file.
Streams Used: mdskupdate copies the input record (including the record number) to the
output after the file is updated with the record.
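Examples: A sketch of replacing record 3 of a file (the file name and data are
illustrative; spec builds the 10-column record number prefix and places the data in
column 11; for a variable record format file the new data must have the same length as
the record it replaces):
pipe literal John Doe | spec /3/ 1.10 right 1-* 11 | diskupdate names file a | hole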
Return Codes: In addition to the return codes associated with CMS/TSO Pipelines error
messages, mdskupdate is transparent to return codes from CMS. Refer to the return codes
for the FSWRITE macro in VM/ESA CMS Application Development Reference for Assembler,
SC24-5453, for a complete list of return codes. You are most likely to encounter these:
1 You do not have write authority to the file.
13 The disk is full.
16 Conflict when writing a buffer; this indicates that a file with the same name has
been created by another invocation of disk.
20 The file name or file type contains an invalid character.
24 The file mode is not valid.
25 Insufficient storage for CMS to allocate buffers.
On CMS, members supports only a MACLIB, TXTLIB, or a similar file that has fixed record
format and record length 80. The library must be on a minidisk or in a Shared File
System (SFS) directory that has been accessed with a mode letter. The file must exist.
                                      ┌────────┐
──MEMBERs──┬─┤ CMS-file ├────┬────────┬──────┬─┴──
           └─┤ MVS-dataset ├─┘        └─word─┘

CMS-file:
├──fn──ft──┬─────────┬──┤
           └─fm───(1)┘

MVS-dataset:
├──┬─word───────────────┬──┤
   ├─word(generation)───┤
   ├─'word'─────────────┤
   ├─'word(generation)'─┤
   └─DDname=word────────┘
Note:
1 The file mode is not optional when additional arguments are specified.
Syntax Description:
CMS Specify as blank-delimited words the file name and the file type of the file to be
read. A file mode or an asterisk is optional; the default is to search all modes. If
the file does not exist with a file name and a file type as entered, the file name and
the file type are translated to upper case and the search is retried. The file must be
fixed record format and record length 80.
MVS Enclose a fully qualified data set name in single quotes; the trailing quote is optional.
Specify the DSNAME without quotes to have the prefix, if any, applied. Append
parentheses containing a signed number to specify a relative generation of a data set
that is a member of a generation data group.
To read members of an already allocated data set, specify the keyword DDNAME=
followed by the DDNAME already allocated. The minimum abbreviation is DD=.
A blank-delimited list of member names is optional after the data set identifier.
Operation: members first reads the contents of members (if any) specified in the argu-
ment string; it then continues with the members specified in input records.
Each member is looked up in the library directory. If the member does not exist as
written, the search is retried with the member name translated to upper case.
A null record is written after each member; on CMS, trailing LDT and end of member
records are discarded.
In a CMS TXTLIB, members finds only the “main” name of a member (the first CSECT);
additional entry points are not found.
Diagnostic messages are issued for members that are not present in the library; the argu-
ment and all input records are processed before returning with return code 150 when one
or more members are not found.
Input Record Format: Blank-delimited lists of members to read from the library.
Streams Used: Records are read from the primary input stream and written to the primary
output stream. Null and blank input records are discarded.
Record Delay: members writes all output for an input record before consuming the input
record.
Commit Level: members starts on commit level -1 on CMS. It reads the directory of the
library and then commits to level 0. members starts on commit level -2000000000 on
z/OS. It opens the DCB and then commits to level 0.
Premature Termination: members terminates when it discovers that its output stream is
not connected.
Examples: To extract a member of the system macro library and remove comment lines:
pipe member dmsgpi maclib * lt | nfind .*| chop 72 | console
MACRO
&LBL LT &A,&B
&LBL ICM &A,B'1111',&B
MEND
Ready;
Notes:
1. On CMS, xtract (with synonym extract) performs the same operation as members on a
TXTLIB; it is retained for compatibility with the past.
2. On CMS, members supports libraries created on all releases of VM, including VM/370,
both CDF and EDF format.
3. On CMS a library can contain members that have the same name. When the library
has more than one member by a particular name, it is unspecified which one members
reads.
4. On z/OS, members is a synonym for readpds.
merge—Merge Streams
merge merges multiple input streams down to a single output stream, interleaving the
records according to the contents of their key fields. The input streams should already be
in the specified order; this is not verified.
         ┌─NOPAD─────┐
──MERGE──┼───────────┼──┬─────────┬──
         └─PAD──xorc─┘  └─ANYcase─┘

  ┌─Ascending─────────────────────────────────────────┐
──┼───────────────────────────────────────────────────┼──
  ├─Descending────────────────────────────────────────┤
  │ ┌───────────────────────────────────────────────┐ │
  │ │             ┌─Ascending──┐                    │ │
  └─┴─inputRange──┼────────────┼──┬───────────┬─────┴─┘
                  └─Descending─┘  ├─NOPAD─────┤
                                  └─PAD──xorc─┘
Type: Sorter.
Syntax Description: Write the keywords PAD or NOPAD in front of the sort fields to
specify the default for all fields; the default is NOPAD. The keyword NOPAD specifies that
key fields that are partially present must have the same length to be considered equal; this
is the default. The keyword PAD specifies a pad character that is used to extend the shorter
of two key fields.
The keyword ANYCASE specifies that case is to be ignored when comparing fields; the
default is to respect case. Up to 10 sort ranges can be specified. The default is to merge
ascending on the complete record. The ordering can be specified for each field; it is
ascending by default. Specify padding after the ordering to treat a field differently than
other fields.
Operation: Records with identical keys on two or more streams are written with the
record from the lowest numbered stream first. This ensures that a sort/merge can be made
stable so that multiple sorts of a file give the same result.
Record Delay: merge consumes an input record after it has been copied to the output. In
this sense it does not delay the record, but it clearly allows a record from one input stream
to overtake the record on another one.
Commit Level: merge starts on commit level -2. It verifies that the only connected output
stream is the primary one and then commits to 0.
Premature Termination: merge terminates when it discovers that its primary output
stream is not connected.
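Examples: A sketch of merging two files that are already sorted on a key in columns 1
through 8 (the file names are illustrative):
pipe (end ?) < part1 work a | m: merge 1.8 | > all work a ? < part2 work a | m: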
Notes:
1. Large files can be sorted in parts to disk work files and subsequently merged with
merge.
2. Unless ANYCASE is specified, key fields are compared as character data using the IBM
System/360 collating sequence. Use spec (or a REXX program) to put a sort key first
in the record if you wish, for instance, to use a numeric field that is not aligned to the
right within a column range. Use xlate to change the collating sequence of the file.
! 3. CASEANY, CASEIGNORE, CASELESS, and IGNORECASE are all synonyms for ANYCASE.
¡¡ z/OS
¡ ──MQSC──word──┬────────┬──
¡               └─SQUISH─┘
¡ Syntax Description:
¡ word Specify the subsystem ID of the queue manager to access (four charac-
¡ ters). Case is respected in the first operand; most z/OS subsystems have
¡ upper case IDs.
¡¡ SQUISH Replace multiple blanks in the response with a single one.
¡ Input Record Format: The input should contain MQ Command Script commands, as
¡ defined in the MQ Commands Reference manual.
¡ Output Record Format: The output contains the response from the command processor.
¡ In general, there will be one line for each object processed.
¡ Streams Used: Records are read from the primary input stream and written to the primary
¡ output stream. Null and blank input records are discarded.
¡ Commit Level: mqsc starts on commit level -20. It connects to the queue manager if
¡ TSO Pipelines is not already connected to a queue manager. It then opens the queues it
¡ needs and then commits to level 0.
¡ Premature Termination: mqsc terminates when it discovers that its output stream is not
¡ connected.
¡ Examples:
¡ pipe literal display qmgr|mqsc MQA1|cons
¡ CSQM409I +MQA1 QMNAME(MQA1 )
¡ CSQ9022I +MQA1 CSQMDRTS ' DISPLAY QMGR' NORMAL COMPLETION
¡ READY
¡ Notes:
¡ 1. Multiple instances of mqsc may run concurrently as long as they all refer to the same
¡ queue manager.
¡ 2. Some commands, such as PING CHANNEL, are asynchronous; their response indicates
¡ that the operation has been started, but the result of the operation is unknown.
──┬─NFIND──┬────────┬───────────────────────┬──
  │        └─string─┘                       │
  └─STRNFIND──┬─────────┬──delimitedString──┘
              └─ANYcase─┘
Syntax Description: A string is optional for nfind. The string starts after exactly one
blank character. Leading and trailing blanks are significant.
Operation: Input records are matched the same way XEDIT matches text in an NFIND
command (tabs 1, image off, case mixed respect):
-  A null string matches any record.
-  Blank characters in the string represent positions that must be present in the input
   record, but can have any value.
-  An underscore in the string represents a position where there must be a blank
   character in the input record.
-  All other characters in the string must be equal to the contents of the corresponding
   position in the input record. Case is ignored if ANYCASE is specified.
nfind copies records that do not match to the primary output stream (or discards them if
the primary output stream is not connected). It discards records that match (or copies them
to the secondary output stream if it is connected).
Streams Used: Records are read from the primary input stream. Secondary streams may
be defined, but the secondary input stream must not be connected.
Record Delay: An input record is written to exactly one output stream when both output
streams are connected. nfind strictly does not delay the record.
Commit Level: nfind starts on commit level -2. It verifies that the secondary input stream
is not connected and then commits to level 0.
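Examples: To discard records that have an asterisk in column 1 and keep all others (the
file name is illustrative):
pipe < profile exec a | nfind *| console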
Notes:
1. notfind is a synonym for nfind.
! 2. CASEANY, CASEIGNORE, CASELESS, and IGNORECASE are all synonyms for ANYCASE.
3. Remember that REXX continuation functionally replaces a trailing comma with a blank.
Also recall that when two strings are separated by one or more blanks, REXX concat-
enates them with a single blank. Use the concatenation operator (||) before the
comma at the end of the line if your portrait style has the stage separators at the left
side of the stage and the trailing blank is significant to your application.
  ──NLOCATE──┬─────────┬──┬───────┬──┬─────────────┬──┬───────┬──
¡            └─ANYcase─┘  ├─MIXED─┤  └─inputRanges─┘  └─ANYof─┘
¡                         ├─ONEs──┤
¡                         └─ZEROs─┘
  ──┬─────────────────┬──
    └─delimitedString─┘
Syntax Description:
No input range, a single input range, or one to ten input ranges in parentheses can be
specified. The default is to search the complete input record.
The characters to search for are specified as a delimited string. A null string is assumed
when the delimited string is omitted.
Operation: nlocate copies records that have no occurrence of the specified string within
any specified input range (or that are shorter than the beginning of the input range that is
first in the record) to the primary output stream (or discards them if the primary output
stream is not connected). Thus, nlocate always selects null records. nlocate discards
records in which the specified string occurs in a specified input range (or copies them to
the secondary output stream if it is connected).
A null string matches any record. In this case, records that are selected are shorter than
the first position of the input range closest to the beginning of the record. This is used to
select records shorter than a given length; “nlocate 4|” selects records of length 3 or
less. Records of a particular length can be selected with a cascade of nlocate and locate.
Streams Used: Records are read from the primary input stream. Secondary streams may
be defined, but the secondary input stream must not be connected.
Record Delay: An input record is written to exactly one output stream when both output
streams are connected. nlocate strictly does not delay the record.
Commit Level: nlocate starts on commit level -2. It verifies that the secondary input
stream is not connected and then commits to level 0.
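Examples: To select records that are exactly 10 characters long with a cascade of
locate and nlocate, as described above:
... | locate 10 | nlocate 11 | ...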
Notes:
1. Use a cascade of nlocate filters when looking for records not containing several
strings.
2. notlocate is a synonym for nlocate.
! 3. CASEANY, CASEIGNORE, CASELESS, and IGNORECASE are all synonyms for ANYCASE.
¡ 4. Specifying MIXED and a mask that contains fewer than two one bits in any one byte will
¡ cause all records to be selected.
──NOEOFBACK──
Type: Filter.
If noeofback were omitted, end-of-file would propagate backwards through the gate stage
and the subroutine would terminate before the first record containing the target. This, in
turn, could cause the caller to malfunction.
Notes:
1. noteofback is a synonym for noeofback.
──NOT──word──┬────────┬──
             └─string─┘
Type: Control.
Syntax Description: Specify the name of the selection stage to run and its argument
string.
Streams Used: Records are read from the primary input stream; no other input stream
may be connected. The primary input stream is connected to the primary input stream of
the stage. The primary output stream from the stage is connected to not’s secondary
output stream if one is defined. The secondary output stream from the stage is connected
to not’s primary output stream.
Commit Level: not starts on commit level -2. It verifies that the secondary input stream
is not connected. not does not commit to 0; the stage called must do so.
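Examples: not in front of a selection stage reverses the sense of the selection; for
instance, this command (the file name is illustrative) selects the records that locate
would discard, as nlocate does:
pipe < profile exec a | not locate /say/ | console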
Notes:
1. When both output streams are connected and the stage that is subject to not produces a
record on both streams for each input record, the output records are produced in the
reverse order of how they would be produced without the not qualifier.
For example, not synchronise and not chop both write a record to the secondary output
stream before they write one to the primary output stream.
In general, dropping not into an existing pipeline in front of not, synchronise, or
similar can lead to a stall or to unexpected output when using stages that are sensitive
to timing, such as juxtapose.
2. The argument string to not is passed through the pipeline specification parser only
once (when the scanner processes the not stage), unlike the argument strings for
append and preface.
¡ 3. End-of-file is propagated from the streams of not to the corresponding stream of the
¡ specified selection stage.
Return Codes: If not finds no errors, the return code is the one received from the
selection stage.
──NOTINSIDe──┬─────────┬──delimitedString──┬─number──────────┬──
             └─ANYcase─┘                   └─delimitedString─┘
Syntax Description: A keyword is optional. Two arguments are required. The first one
is a delimited string. The second argument is a number or a delimited string. A delimited
string can be a string of characters delimited in the normal XEDIT fashion (for instance,
/abc/) or it can be a literal, which in turn can be hexadecimal or binary (for instance,
xf1f2f3). The number must be zero or positive.
Operation: notinside discards groups of records (or copies them to the secondary output
stream if it is connected). Each group begins with the record after one that matches the
first specified string. When the second argument is a number, the group has as many
records as specified (or it extends to end-of-file). When the second argument is a string,
the group ends with the record before the next record that matches the second specified
string (or at end-of-file).
When ANYCASE is specified, notinside compares fields without regard to case. By default,
case is respected. notinside copies records before, between, and after the selected groups
to the primary output stream (or discards them if the primary output stream is not
connected).
Streams Used: Records are read from the primary input stream. Secondary streams may
be defined, but the secondary input stream must not be connected.
Record Delay: An input record is written to exactly one output stream when both output
streams are connected. notinside strictly does not delay the record.
Commit Level: notinside starts on commit level -2. It verifies that the secondary input
stream is not connected and then commits to level 0.
Examples: To remove lines inside example tags, while retaining the tags:
... | notinside /:xmp./ /:exmp./ | ...
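To discard the two records that follow each record that matches the string (a sketch of
the number form; the string is illustrative):
... | notinside /HEADER/ 2 | ...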
Notes:
1. ninside is a synonym for notinside.
! 2. CASEANY, CASEIGNORE, CASELESS, and IGNORECASE are all synonyms for ANYCASE.
! 3. pick can do what notinside does and then quite some more.
nucext is often used to test a compiled REXX program or an Assembler program before it is
generated into a filter package.
CMS
──NUCEXT──word──┬────────┬──
                └─string─┘
Syntax Description: Leading blanks are ignored; trailing blanks are significant. A word
is required; additional arguments are allowed. The first word is the name of a nucleus
extension to invoke. The name is translated to upper case if no extension is found with
the name specified.
Operation: The optional string is passed to the program as the argument string.
Record Delay: nucext does not read or write records. The delay depends on the program
being run.
Notes:
1. The nucleus extension is invoked as a filter with BAL. When the entry point is an
executable instruction (not X'00'), general register 2 points to the SCBLOCK describing
the nucleus extension. The nucleus extension is invoked enabled and in user key irre-
spective of the flags in the SCBLOCK; this is likely to cause a protection exception for
a SYSTEM nucleus extension.
| 2. The program must be capable of being invoked in 31-bit addressing mode.
3. The nucleus extension must not use the CMSRET macro to return.
4. The nucleus extension must be able to distinguish between invocations from CMS/TSO
Pipelines and invocations as a CMS command.
5. Remember that REXX continuation functionally replaces a trailing comma with a blank.
Also recall that when two strings are separated by one or more blanks, REXX concat-
enates them with a single blank. Use the concatenation operator (||) before the
comma at the end of the line if your portrait style has the stage separators at the left
side of the stage and the trailing blank is significant to your application.
──OPTCDJ──┬──────┬──
└─word─┘
Type: Filter.
Operation: Input records with the two rightmost bits on in the carriage control character
(B'xxxx xx11') represent immediate carriage movement; the carriage control character is
passed to the output; additional data in the record are discarded. Records with data (that
is, carriage control of the form B'xxxx xx01') that are not preceded by a descriptor record
are assumed to contain plain characters; the third byte of the argument string (or the
default '0') is inserted after the carriage control character, and the record is copied to the
output (that is, a descriptor of X'02' is assumed for each non-blank column of an input
record that is not preceded by a descriptor record).
Descriptor records and their accompanying data records are processed to assign a Table
Reference Character to each position, using the descriptor value as an index into the argu-
ment string. Records are written for each Table Reference Character required. Line(s)
with underscores are written for positions with underscored characters. The last record
written for an input record has the carriage control character from the input record; other
record(s) have X'01' carriage control (write no space).
Input Record Format: X'00' in the first column indicates a descriptor record in the
format produced by overstr. Each column of the descriptor record specifies the high-
lighting and underscoring of the corresponding column in the data record that follows the
descriptor record. These descriptor values are used:
00 The position is blank.
Output Record Format: Descriptor records are not written to the output. Multiple output
records are written for an input record that is preceded by a descriptor record. The first
position is a machine carriage control character (it is never zero). The second position is a
Table Reference Character. Data to print begin in column 3; the record extends to the last
position requiring that particular TRC. Blanks (X'40') indicate columns that are not
subject to the TRC for the record.
Streams Used: Records are read from the primary input stream and written to the primary
output stream. Null input records are discarded.
Record Delay: optcdj delays records that contain X'00' in column 1. It is unspecified
whether it delays other records. Applications should be written to tolerate optcdj not
delaying records that do not contain X'00' in column 1.
Premature Termination: optcdj terminates when it discovers that its output stream is not
connected.
Examples: To print a document formatted for an IBM 1403 on an IBM 3800 printer or
an all points addressable (APA) printer under control of Print Services Facility (PSF):
cp spool 00e fcb s8 char it12 ib12
cp tag dev 00e mvs system 0 OPTCD=J
< $doc script | c14to38 | overstr | optcdj | printmc
cp close 00e
Notes:
1. Input records are truncated after 256 bytes without indication of error.
2. optcdj owes its name to the z/OS JCL option to indicate that a file contains Table
Reference Characters: OPTCD=J.
──OUTSIDE──┬─────────┬──delimitedString──┬─number──────────┬──
└─ANYcase─┘ └─delimitedString─┘
Syntax Description: A keyword is optional. Two arguments are required. The first one
is a delimited string. The second argument is a number or a delimited string. A delimited
string can be a string of characters delimited in the normal XEDIT fashion (for instance,
/abc/) or it can be a literal, which in turn can be hexadecimal or binary (for instance,
xf1f2f3). The number must be 2 or larger.
Operation: outside discards groups of records (or copies them to the secondary output
stream if it is connected). Each group begins with a record that matches the first
specified string. When the second argument is a number, the group has as many records
as specified (or it extends to end-of-file). When the second argument is a string, the group
ends with the next record that matches the second specified string (or at end-of-file).
When ANYCASE is specified, outside compares fields without regard to case. By default,
case is respected. outside copies records before, between, and after the selected groups to
the primary output stream (or discards them if the primary output stream is not connected).
Streams Used: Records are read from the primary input stream. Secondary streams may
be defined, but the secondary input stream must not be connected.
Record Delay: An input record is written to exactly one output stream when both output
streams are connected. outside strictly does not delay the record.
Commit Level: outside starts on commit level -2. It verifies that the secondary input
stream is not connected and then commits to level 0.
Examples: To remove two lines of heading on each page from a report file with ASA
carriage control in the first column.
...| mctoasa | outside /1/ 2 |...
Notes:
! 1. CASEANY, CASEIGNORE, CASELESS, and IGNORECASE are all synonyms for ANYCASE.
! 2. pick can do what outside does and quite a bit more.
──OUTSTORE──┬───────────┬──
! └─ALET──hex─┘
Syntax Description:
! ALET must be specified when outstore is first in the pipeline; no operands are allowed
! when outstore is not first in a pipeline.
! Operation: When ALET is specified, the file is extracted from the specified data space.
! Otherwise input records are read and the file is extracted from those descriptors. Such
! descriptors may or may not indicate that the file is in a data space.
Input Record Format: Records describe a file which is written to the output a record at a
time. The input must be in the format produced by instore.
Streams Used: Records are read from the primary input stream and written to the primary
output stream.
! Record Delay: outstore does not delay the last record written for an input record.
! Commit Level: outstore starts on commit level -2. It verifies the contents of the specified
! data space and then commits to level 0.
Premature Termination: outstore terminates when it discovers that its output stream is
not connected.
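Examples: A sketch of a round trip; the file is captured in descriptor records by instore
and then written back a record at a time by outstore:
pipe < some file | instore | outstore | > copy file a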
┌─BLANK─┐
──OVERlay──┼───────┼──
└─xorc──┘
Type: Gateway.
Syntax Description: The pad character is optional; it may be specified as a single
character or a two-character hex code. The default is the blank character.
Streams Used: Records are read from all defined and connected input streams beginning
with stream 0; output is written to the primary output stream only.
Record Delay: An input record is consumed as soon as it has been loaded into the output
buffer; no input record is held while the output record is being written. Thus, overlay has
the potential to delay one record on all input streams.
Commit Level: overlay starts on commit level -2. It verifies that the primary output
stream is the only connected output stream and then commits to level 0.
Premature Termination: overlay terminates when it discovers that its primary output
stream is not connected.
Examples: To flush the contents of lines left and right, with a forward slash marking the
point to insert blanks:
pipe (end ?) literal the left/the right | c: chop before / | ...
... o: overlay | console ? c: | spec 2-* 1.50 right | o:
the left the right
Ready;
Note that the pipeline does not stall, because overlay can delay the record on the primary
input stream.
pipe (end ?) literal the left/the right | c: chop before / | ...
... o: overlay | console ? c: | spec 2-* 4 | o:
thethefright
Ready;
──OVERSTR──
Operation: A set of overstruck lines is merged into a single data record preceded by a
descriptor record. In the merged data record, each position contains the character from the
last line in the set of overprinted lines where the corresponding position is neither blank
nor an underscore. A character position is considered overprinted when the position is
neither blank nor underscore in two or more records of the set.
Lines not part of a set of overstruck lines are copied to the output without inspection. It is
verified that all records have valid machine carriage control characters.
Input Record Format: The input records have machine carriage control characters in the
first column. The input can contain sets of records that would be overstruck (printed on
the same line) when sent to a line printer. A set of overstruck lines consists of one or
more lines with X'01' carriage control (write no space) followed by a line with some
other carriage control character.
Output Record Format: X'00' in the first column indicates a descriptor record. Each
column of the descriptor record specifies the highlighting and underscoring of the corre-
sponding column in the data record that follows the descriptor record. These descriptor
values are used:
00 The position is blank.
Each descriptor record is followed by a data record. The first character in a data record is
the machine carriage control character from the last record of the corresponding set of
overstruck input records. Data are from column 2 onwards.
Data records that are not preceded by a descriptor record contain no underscored or high-
lighted data (though they can contain underscore characters).
Streams Used: Records are read from the primary input stream and written to the primary
output stream. Null input records are discarded.
Record Delay: overstr delays records that contain X'01' in column 1. It is unspecified
whether it delays other records. Applications should be written to tolerate overstr not
delaying records that do not contain X'01' in column 1.
Premature Termination: overstr terminates when it discovers that its output stream is not
connected.
Examples: To print a document formatted for an IBM 1403 on an IBM 3800 printer or
an all points addressable (APA) printer under control of Print Services Facility (PSF):
cp spool 00e fcb s8 char it12 ib12
cp tag dev 00e mvs system 0 OPTCD=J
< $doc script | c14to38 | overstr | optcdj | printmc
cp close 00e
Notes:
1. Use asatomc to convert from ASA to machine carriage control.
2. Use c14to38 prior to overstr to change overstrikes of different characters to a single
character.
3. overstr is designed to process the output from c14to38 and deliver its output (possibly
via xpndhi) to buildscr and optcdj.
4. For compatibility with the past, delover invokes this subroutine pipeline:
'callpipe *: | overstr | nfind' '00'x || '| *:'
The resulting records have more data than the (rather naive) original delover.
5. No output record has X'01' carriage control.
──PACK──┬──────────────────────────┬──
│ ┌─Variable─┐ │
└─┴─Fixed────┴──┬────────┬─┘
└─number─┘
Type: Filter.
Syntax Description: Two arguments are optional, a keyword and a number. They specify
the file format and the maximum record length for the input. The default is variable
record format with infinite record length. When FIXED is specified, the default record
length is the length of the first input record.
Operation: The file is packed in 1024-byte records in the format used by COPYFILE and
XEDIT. The last record is padded with binary zeros. When packing fixed record format,
all input records must have the same length, which must be equal to the second argument,
if it is specified.
The first record of a packed file contains an indication of the record length of the file; this
can be set based on the first input record when packing a fixed file. When a record length
is specified for a variable length file, no input record may be longer than this length.
When VARIABLE is specified without a maximum record length (or no arguments are
specified) and the packed file contains more than one record, pack is not able to build a
correct first record at the time it must be written to the output stream. Instead, pack writes
a record that indicates an infinite record length (2G-1). The correct first record for the file
is written to the secondary output stream after the entire file has been processed. The
secondary output stream should be connected to the secondary input stream of disk to write
the packed file to disk properly.
Warning: Ensure that the first record is written correctly. When a variable format file is
packed, the packed file is larger than 1K, and the first record is not rewritten, then the
resulting file can be unpacked with unpack, but not with XEDIT (which gives a message to
the effect that the file is too big) or COPYFILE (which ABENDs CMS on releases prior to CMS
8).
Streams Used: Records are read from the primary input stream; no other input stream
may be connected. Null input records are discarded. Output is written to the primary
output stream. One record is written to the secondary output stream when a variable
format file is packed, if an explicit record length is not specified and more than one record
is written to the primary output stream. The primary output stream is severed before pack
writes to the secondary output stream.
Record Delay: pack delays input records as required to build an output record. The delay
is unspecified.
Commit Level: pack starts on commit level -2. It verifies that the secondary input stream
is not connected and then commits to level 0.
Premature Termination: pack terminates when it discovers that its primary output stream
is not connected.
Examples:
To pack a file that does not have records longer than 256:
pipe < some file|pack v 256|> packed file a Fixed
If you do not know an upper limit on the record length, more care is required if the
resultant file is larger than one disk block and you wish to read it with the CMS COPYFILE
command or load it into XEDIT:
pipe (end ?)< some file|p: pack|d: > packed file a Fixed?p:|d:
You must connect the secondary output stream from pack to the secondary input stream of
> to write the correct first record at end-of-file, when the length of the longest record is
known.
Notes:
1. When writing a packed file to disk, always specify fixed record format to the disk
driver writing the file, even when the unpacked file is variable record format. XEDIT
does not recognise a packed file when the record format is V.
2. pack can produce more bytes of output than it reads input. This is likely to happen
when the data has random characters, for example in an encrypted file or a module
file.
┌─Right─┐ ┌─BLANK─┐
──PAD──┼───────┼──┬────────────────────┬──number──┼───────┼──
¡ └─Left──┘ └─┬────────┬──MODULO─┘ └─xorc──┘
¡ └─number─┘
Type: Filter.
¡ Syntax Description: Two keywords are optional. At least one number is required.
LEFT Pad the record on the left. Thus, the text in the output record is aligned
to the right.
RIGHT Pad the record on the right. That is, add padding to the end of the
record.
¡ number If it is specified, the first number must be smaller than the second. This
¡ number is called the offset. It must be followed by the keyword
¡ MODULO. The default offset is zero.
¡¡ MODULO The following number specifies not the absolute record length, but the
¡ modulo. The output record length is padded to the smallest multiple of
¡ the modulo plus the offset that is equal to or larger than the length of
¡ the input record.
number Specify the minimum record length when MODULO is omitted. The
number must be zero or positive. When MODULO is specified, the
number must be positive.
xorc The pad character is optional after the number; it may be specified as a
single character or a two-character hex code. The default pad character
is the blank.
Premature Termination: pad terminates when it discovers that its output stream is not
connected.
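Examples: Sketches based on the syntax description above. To extend records to a length
of at least 20, padding on the right with blanks:
... | pad 20 | ...
To right-align the contents of records in eight columns, padding on the left with zeros:
... | pad left 8 0 | ...
To pad records with binary zeros to a multiple of four bytes (for instance, to align
fullword data):
... | pad modulo 4 00 | ...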
──PARCEL──
Type: Filter.
| Operation: The byte stream on the primary input stream is reformatted into records of
| lengths as specified by the contents of the secondary input stream. If the secondary output
! stream is connected, parcel also acts as a take n BYTES, where the number is the sum of the
! numbers read from the secondary input stream. That is, at end-of-file of the secondary
! input stream, any remaining record from the primary input stream is passed to the
! secondary output stream and the primary input stream is then shorted to the secondary
! output stream.
! Streams Used: The secondary input stream must be defined and connected. The
! secondary output stream is optional.
Record Delay: parcel has the potential to delay one record. Output records that contain
data from a single input record are not delayed, but are written as a burst. If an output
record contains precisely the data in an input record, it is strictly not delayed. When both
the records on the two input streams are emptied at the same time (that is, the last number
in a record from the secondary input stream results in an output record that ends with the
last byte of the record on the primary input stream), the record on the primary input stream
is consumed before the one on the secondary input stream.
Commit Level: parcel starts on commit level -2. It verifies that the secondary output
stream is not connected and then commits to level 0.
Premature Termination: parcel terminates when it discovers that its primary output
! stream is not connected. It also stops when the primary input stream reaches end-of-file or
! the secondary input stream reaches end-of-file and the secondary output stream is not
! connected. Message 72 is issued when the primary input stream reaches end-of-file before
| the secondary input stream if there is an unfinished record for the primary output stream;
the last record read from the secondary input stream is not consumed. When the
! secondary input stream reaches end-of-file and the secondary output stream is not
! connected, parcel terminates immediately; the last record read from the primary input
stream is not consumed when any data from it remain to be passed to the output.
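Examples: A sketch; the blank-delimited numbers on the secondary input stream specify
the lengths of the records built from the byte stream on the primary input stream:
pipe (end ?) literal abcdefgh | p: parcel | console ? literal 3 5 | p:
This would write the records “abc” and “defgh” to the console.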
Notes:
| 1. Use fblock to format a byte stream into records of equal length.
──PAUSE──
Operation: A pause event is signalled for each input record when the pipeline was issued
by runpipe EVENTS. The record is then passed unmodified to the output.
pause shorts the input to the output when it is running in a pipeline set that was not issued
by runpipe EVENTS.
Premature Termination: pause terminates when it discovers that its output stream is not
connected.
CMS
──PDSdirect──fn──ft──┬────┬──
└─fm─┘
Syntax Description: Specify as blank-delimited words the file name and the file type of
the file to be read. A file mode or an asterisk is optional; the default is to search all
modes. If the file does not exist with a file name and a file type as entered, the file name
and the file type are translated to upper case and the search is retried.
Output Record Format: The first record written is record 1 of the file; it contains infor-
mation about the type, position, and size of the PDS directory. The following records are
the directory of the simulated PDS. The format of the directory records depends on the
particular type of library.
Premature Termination: pdsdirect terminates when it discovers that its output stream is
not connected.
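Examples: A sketch; to display the directory of a (hypothetical) macro library MYLIB
MACLIB on the terminal:
pipe pdsdirect mylib maclib | console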
Notes:
1. Use listpds rather than pdsdirect to obtain a list of members in a PDS, one member to a
record.
2. Use members to obtain members of simulated partitioned data sets that are fixed
record format and record length 80, for instance TXTLIB and MACLIB.
3. On CMS, pdsdirect supports libraries created on all releases of VM, including VM/370,
both CDF and EDF format.
4. pdsdirect reads CMS files; it does not read the directory of a PDS on an OS volume.
! pick can also partition the file, selecting records from one that satisfies particular relations
! to one that satisfies other particular relations, or for a number of records.
┌─NOPAD─────┐
──PICK──┼───────────┼──┬─────────┬──
└─PAD──xorc─┘ └─ANYcase─┘
! ──┬─┬─────────────────────┬──┤ List ├─┬──
! │ ├─┬─FROM─┬──┬───────┬─┤ │
! │ │ └─TO───┘ └─AFTER─┘ │ │
! │ └─WHILE───────────────┘ │
! └─┤ Fromto ├────────────────────────┘
Fromto:
! ├──FROM──┬───────┬──┤ List ├──┬─TO──┬───────┬──┤ List ├─┬──┤
! └─AFTER─┘ │ └─AFTER─┘ │
! └─COUNT──number───────────┘
List:
! ├──┬───────────────────┬──┤ Test ├──┤
! └─┤ List ├──┬─AND─┬─┘
! └─OR──┘
Test:
! ├──┤ RangeString ├──┬─┤ NonEqualOp ├───(1) ─┤ RangeString ├─┬──┤
Syntax Description:
!! AFTER Optional immediately after FROM or TO. When specified, the selection
! action is performed after the matching record; when omitted it is
! performed before the matching record.
!! AND Match only when both comparisons match. AND groups closer than OR.
! Specify parentheses to group OR comparisons. You may use an amper-
! sand (&) or even two (&&) instead of the keyword.
ANYCASE Case is to be ignored when comparing strings; the default is to respect
case.
!! COUNT Specified in conjunction with FROM. When a record is matched by the
! FROM clause, the specified number of records are passed to the primary
! output stream.
!! FROM Select from the matching record. Records before the matching record
! are rejected. When specified without TO or COUNT, the balance of the
! file is selected. Otherwise records are selected for the specified count or
! up to one that is selected by the TO clause.
NOPAD Do not pad the shorter operand when comparing strings. When PAD is
omitted, the unpaired positions in the longer string are considered to
compare high. (Thus, the shorter string is logically extended with a
value that compares low against X'00'.)
!! OR Match when either comparison matches. You may use a vertical bar (|)
! or even two (||) instead of the keyword; be sure to escape it or use a
! different stage separator.
PAD Specify the padding character to use when comparing strings. The
shorter of the two strings is extended on the right with the specified pad
character for purposes of comparison.
!! WHILE Select the first part of the file up to, but not including, the first record
! that does not match. Reject the balance of the file.
A comparison is specified by an infix operator and two operands. The left side can contain
one operand only. The right hand side may contain a list of operands separated by
commas for the two equal operators; other operators accept one operand only. Each
operand may be:
! An inputRange. This can designate a manifest constant, in which case the compar-
! ison must be numeric.
A delimitedString, which specifies a constant, be it a number for numeric
comparison or a true string.
! A number followed immediately by a plus (+) without blanks. (Automatic field
! length.) The number specifies the beginning column. The length used will be the
! smaller of the length of the other operand and the length of the rest of the record from
! the specified column. You can specify only one of the operands as an auto field
! length. The left operand cannot have auto field length when the right hand side is a
! list of operands separated by commas.
: The relational operators for strings are adopted from the REXX “strict” relational operators.
! pick ignores the member type in character comparison.
== Strictly equal. The two strings must be equal byte for byte except for case
! folding and padding. The right hand side may be specified as a list of oper-
! ands separated by commas; the relation holds when at least one of the oper-
! ands compares equal with the first operand. The inverse of ¬==.
¬== Not strictly equal. The inverse of ==.
<< Strictly less than. After a (possibly null) run of bytes that are the same in the
two strings, the left string must contain a character that is lower in the
collating sequence than the corresponding character in the right hand string.
The inverse of >>=.
<<= Strictly less than or equal. The inverse of >>.
>> Strictly greater than. After a (possibly null) run of bytes that are the same in
the two strings, the left string must contain a character that is higher in the
collating sequence than the corresponding character in the right hand string.
The inverse of <<=.
>>= Strictly greater than or equal. The inverse of <<.
!! IN Match when all characters of the first operand are present in the second one or
! the first operand is null.
! When using relational operators for numeric comparisons of data that have no type associ-
! ated, the fields or constants must conform to the syntax described in “Floating point
! Numbers” on page 714. For typed members of structures the input members are converted
! automatically for types D, F, P, R, and U. Note that literal numbers must also be specified
¡ as a delimitedString to distinguish them from columns. The numeric relational opera-
¡ tors are:
!¡ = Equal. The two numbers must have the same sign and be exactly equal. The
! right hand side may be specified as a list of operands separated by commas;
! the relation holds when at least one of the operands compares equal with the
! first operand. The inverse of ¬=.
¡ ¬= Not equal. The inverse of =.
¡ < Less than. The inverse of >=.
¡ <= Less than or equal. The inverse of >.
¡ > Greater than. The inverse of <=.
¡ >= Greater than or equal. The inverse of <.
Operation: pick copies records that satisfy the specified relation to the primary output
stream (or discards them if the primary output stream is not connected). It discards
records that do not satisfy the relation (or copies them to the secondary output stream if it
is connected).
! When neither FROM, TO, nor WHILE is specified, pick selects lines that match the list of
! comparisons and discards those that do not.
! When FROM, TO, or WHILE is specified pick partitions the file at the first line that
! matches (or not):
! – FROM rejects the part of the file up to the first matching record. When AFTER is
! specified, the matching record is rejected; otherwise it is selected. The remainder
! of the file is selected.
! – TO selects the part of the file up to the first matching record. When AFTER is
! specified, the matching record is selected; otherwise it is rejected. The remainder
! of the file is rejected.
! – WHILE selects records until, but not including, the first one that does not match; it
! then rejects the remainder of the file.
! When COUNT or TO is specified with FROM, pick discards records up to the first record
! that is matched by the FROM list. It then selects either the number of records specified
! by COUNT or up to the next record that matches the TO list.
! When TO is specified without AFTER, the matching record is not written immediately;
! instead the FROM clause is retested against the record to see whether it starts a new
! range to be selected. This has effect when the TO clause is a subset of the FROM
! clause.
! Having selected the specified records, pick then goes back to rejecting records until
! another one is matched by the FROM list.
Streams Used: Records are read from the primary input stream. Secondary streams may
be defined, but the secondary input stream must not be connected.
Record Delay: An input record is written to exactly one output stream when both output
streams are connected. pick strictly does not delay the record.
! Commit Level: pick starts on commit level -2. It parses the argument list, then verifies
that the secondary input stream is not connected, and then commits to level 0.
Examples: Assuming that input records contain a timestamp in the first four columns,
select the records processed earlier than 8 a.m.:
... | pick 1.4 << "0800" | ...
! To select records within an interval; for example, to select the records timestamped from 8
a.m. to, but not including, 4 p.m. (15:59 on a 24-hour clock):
! ... | pick 1.4 >>= "0800" and 1.4 <<= "1559" | ...
! This could also have been achieved with a cascade of two pick stages.
! To select the records outside this interval instead:
! ... | pick 1.4 << "0800" or 1.4 >> "1559" | ...
! This cannot be accomplished with a cascade of pick stages; a multistream topology would
! be required.
pick can compare two fields in a record; for example, to select records that represent files
that need updating. Assuming that input records contain one ISO-format timestamp
(YYYYMMDDHHMMSS) in columns 23 to 36 and another one in columns 57 to 70, select
those records where the first timestamp is later than the second:
... | pick 23.14 >> 57.14 | ...
To select records where the second word is equal to the fourth word. This also selects
records that contain one word only because both operands would then be null.
... | pick word 2 == word 4 | ...
¡ To select records where the second word contains a number that is greater than the number
¡ in columns 37 to 41:
¡ ... | pick word 2 > 37.5 | ...
¡ To select records where the second word is greater than the constant 37.5:
¡ ... | pick word 2 > /37.5/ | ...
! Consider a stacked COPY file where members are separated by a *COPY record. To select
! all members beginning with A:
! ...|pick from 1+ == /*COPY / and substr 1 of w2 == /A/
! to 1+ == /*COPY /|...
Notes:
¡ 1. \== and /== are synonyms for ¬==. \= and /= are synonyms for ¬=. The not sign
¡ is often mapped by terminal emulators to the caret (^).
2. You can specify a literal as both the first and the second string. All records are then
either selected or rejected, depending on the static relation between the two constants.
! 3. CASEANY, CASEIGNORE, CASELESS, and IGNORECASE are all synonyms for ANYCASE.
¡ 4. Input ranges for numeric compare are converted in the same way done by spec for a
¡ field that has a field identifier specified.
! 5. The typical use of IN would be to test whether a column contains one of a set of
! characters.
! 6. The only sensible use of parentheses is to enclose OR items in conjunction with AND,
! but redundant parentheses are accepted.
! 7. The following two invocations of pick select the same set of records:
! ...|pick w6 == /take/ & ( 1 == /a/ or 1 == /b/ or 1 == /c/ )|...
! ...|pick w6 == /take/ and 1 == /a/, /b/, /c/|...
! The motivation for this construct is laziness, but with a complex left hand input range
! there may be some measurable performance increase with large files.
! 8. As an example, these are equivalent:
! ...|between /abc/ /def/|...
! ...|pick from 1+ == /abc/ to after 1+ == /def/|...
! As are these:
! ...|inside /abc/ /def/|...
! ...|pick from after 1+ == /abc/ to 1+ == /def/|...
! Other variations of AFTER are available that cannot be implemented with any of the
! between family of selection stages.
! 9. The between family of stages is made redundant by the enhancements introduced in
! CMS/TSO Pipelines 1.1.11/1D, but they are scanned faster because their syntax is
! simpler. Runtime performance should be equivalent.
! 10. An inputRange must be terminated with a blank (unlike a delimitedString).
! ...|pick 1 == 2, 3|...
! will give a rather misleading diagnostic. The following is syntactically valid, but
! unlikely to do what you want, because it will not select anything: the second
! operand is a six-byte literal delimited by the two “2” digits and no padding is specified:
! ...|pick 1 == 2, 3, 12|...
──PIPCMD──┬──────────────────┬──
└─STOP──┬─ALLEOF─┬─┘
├─ANYEOF─┤
└─number─┘
Type: Control.
Syntax Description:
STOP Inspect the status of the output streams after each input line has been
processed.
ANYEOF Terminate as soon as any stream is at end-of-file.
ALLEOF Terminate as soon as all streams are at end-of-file.
number Terminate when the specified number of streams are at end-of-file.
Note that an input line is not consumed before the corresponding command is complete. A
record is discarded by the READTO if the command consumes records from the primary
input stream.
Input Record Format: Input lines may contain any pipeline command described in
Chapter 25, “Pipeline Commands” on page 723 except for BEGOUTPUT, GETRANGE,
NOCOMMIT, PEEKTO, READTO, SCANRANGE, SCANSTRING, and STREAMSTATE ALL. The most
useful one is no doubt CALLPIPE.
Streams Used: pipcmd reads commands from the primary input stream; it does not write
output, but the commands may connect to any defined stream. The input record is
consumed after the command completes.
A pipeline command should not refer to the primary input stream. If it does, the first line
it sees is the one issued as the command; a line is discarded from the primary input stream
when the command completes (this line is the line containing the command when the
primary input stream has not been read by the command).
Examples: To process the contents of the files whose names match the pattern specified
by the argument string, in this case looking for the string “Dana”:
/*---------------------------*/
/* Get the contents of files */
/*---------------------------*/
parse arg fn ft fm .
'PIPE (name PIPCMD)',
' command LISTFILE' fn ft fm,
'|spec', /* Generate subroutine */
'/callpipe (stagesep ?) / 1', /* Command */
'/?< / next 1-* next', /* Prefix read of file */
'/?spec ,/ next w1 next /, 1.8 1-* 10/ next', /* Add fn */
'/?*:/ next', /* Connect to caller's stream */
'|pipcmd',
'|locate 10-* /Dana/',
'|cons'
For each file found by LISTFILE, the first spec stage builds a subroutine pipeline with a
question mark as the stage separator. The subroutine pipeline reads the file (<) and puts
the file name in columns one through eight (the second spec). Thus, the first spec stage
generates another one of the form spec ,fn, 1.8 1-* 10. The CALLPIPE pipeline
commands are issued by pipcmd to read the files and prefix the file name to each record of
a file. The locate stage selects records containing the string. The result is displayed on
the terminal.
Return Codes: When a negative return code is received on a pipeline command, the
return code from pipcmd is that negative return code. When the return code is zero or
positive, all input records have been processed; the return code is the maximum of the
return codes received.
──PIPESTOP──
Operation: When an input record arrives, pipestop posts all ECBs that are being waited
upon by other stages. The completion code indicates that the stages should terminate.
pipestop then passes the input record to the output (if it is connected).
pipestop also sets an indication in the pipeline set that will prevent further waiting for
external events. This action is irreversible.
Either pass a record to gate when you wish to turn off the delay, or use hole to make gate
wait for end-of-file:
pipe (end ?) ... | hole | g: gate ? literal +60 | delay | g: | pipestop
! ──POLISH──┬─HEXadecimal─┬──
! └─ASSEMBLEr───┘
! Type: Filter.
! Operation: polish parses each input line according to the specified grammar. It then
! writes to the primary output stream a list of actions to perform the statement. The list is
! ended by a null line. Output lines contain a word specifying the action type and other data
! that an evaluator would need to evaluate the expression.
! The expression result will be the single item on the evaluation stack when the evaluator
! reaches a null input record (assuming, of course, that a correct evaluator is supplied).
! Blanks are ignored between terms, but you cannot have blanks in constants, identifiers, or
! the composite operators (//). In particular, blanks are allowed between an identifier and the
! left parenthesis that opens the argument list, and even in conjunction with periods that
! qualify an identifier.
! Dyadic Operators: The following table lists the dyadic operators in order of increasing
! precedence; that is, the last one (binary AND) binds closer than any of the others. All
! operators in a row have the same precedence. All dyadic operators are left associative,
! that is, a+b+c is equivalent to (a+b)+c
! Prefix Operators: The prefix operators are plus (+) and minus (-). They bind tighter than
! dyadic operators, but not as tightly as suffix operators.
! Suffix Operators: The suffix operator is the question mark (?). Nothing binds closer than
! the question mark (unless you consider the period in a qualified identifier to be an oper-
! ator; it binds even closer).
! The input record can contain blanks only in character self-defining terms. The parser is
! caseless, that is, upper case and lower case are equivalent as far as the parser is concerned,
! but the output respects the case of the input.
! Dyadic Operators: The following table lists the dyadic operators in order of increasing
! precedence; that is, the last ones (multiply and divide) bind closer than any of the others.
! All operators in a row have the same precedence. All dyadic operators are left associative,
! that is, a+b+c is equivalent to (a+b)+c
! Prefix Operators: The prefix operators are plus (+) and minus (-). They bind tighter than
! dyadic operators.
! Output Record Format: The output records contain actions to perform. The first word
! contains a keyword that specifies the action; additional data is present depending on the
! particular keyword.
! Premature Termination: polish terminates when it discovers that its output stream is not
! connected.
! Examples:
! Using qualifiers:
! pipe literal . v - z+x.y?|polish hex | console
! qualifier v
! identifier z
! binary -
! qualifier y x
! monadic ?
! binary +
!
! Ready;
! The expression below might compute the address of the field pointed to by the fullword
! twelve bytes after the address pointed to by fullword addressed by the contents of register
! four (depending on the evaluator, of course):
! (r4?+c)?
! If register 4 contains X'10' and storage location X'10' contains X'20', the result could
! be the contents of location X'2C'.
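A minimal evaluator for output of this shape can be sketched in Python (an illustration only, not the sample evaluator mentioned in the notes; the action names follow the example output above, and the environment mapping is invented):

```python
def evaluate(lines, env):
    """Evaluate a polish-style action list against an environment."""
    stack = []
    for line in lines:
        if line == "":                       # a null line ends the list
            break
        action, _, data = line.partition(" ")
        if action == "identifier":
            stack.append(env[data])          # look the identifier up
        elif action == "binary":
            b, a = stack.pop(), stack.pop()  # right operand is on top
            stack.append({"+": a + b, "-": a - b,
                          "*": a * b, "/": a // b}[data])
        elif action == "monadic" and data == "-":
            stack.append(-stack.pop())
    return stack[0]                          # a single item remains
```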
! Notes:
! 1. A sample evaluator for the hexadecimal parser can be found at
! http://vm.marist.edu/%7epipeline/evalx.exec
! 2. You cannot modify the underlying grammar. In particular, the precedence of the oper-
! ators cannot be changed. You may implement a subset of the grammar by, for
! example, rejecting particular operators at evaluation time.
! 3. ASM is a synonym for ASSEMBLER.
──PREDSELect──
Type: Gateway.
Operation: predselect reads records from whichever input stream has one available. It
stores the last record read from the primary input stream in a buffer (replacing any
previous content) and then consumes the record to release the producer (which is typically
fanout). Records read from the secondary input stream and the tertiary input stream are
discarded; they merely control which stream should receive the stored record read from the
primary input stream. A record on the secondary input stream causes the buffered record
to be written to the primary output stream; a record on the tertiary input stream causes the
buffered record to be written to the secondary output stream. Once the buffered record is
written, subsequent input records on the secondary input stream or the tertiary input stream
are discarded until a record is read from the primary input stream.
When a record arrives on the primary input stream without intervening records on the two
other input streams, the previous record is in effect discarded.
Streams Used: Two streams must be defined; up to three streams may be defined.
predselect propagates end-of-file between the secondary input stream and the primary
output stream; and it propagates end-of-file between the tertiary input stream and the
secondary output stream.
Record Delay: predselect has the potential to delay one record. predselect delays the
record by any delay the record may have incurred before it reaches the secondary input
stream or the tertiary input stream; predselect does not add delay as long as all input
streams are connected and a record on the primary input stream is followed by a record on
the secondary input stream or the tertiary input stream.
Commit Level: predselect starts on commit level -2. It verifies that the tertiary output
stream is not connected and then commits to level 0.
Note that the output from the example would be the same if the last pipeline (feeding the
secondary output stream from the selection stage to the tertiary input stream to predselect)
were omitted. Including this pipeline ensures that the subroutine as a whole never
delays the record; without this connection, discarded records would be delayed until the
next record became available on the primary input stream.
| This example is contrived because strfind ANYCASE performs the same function more cheaply.
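The routing just described can be modelled in a few lines of Python (a behavioural sketch only; the class and method names are ours, and a trigger stands for a record arriving on the secondary or tertiary input stream):

```python
class PredSelect:
    """Model of predselect's one-record buffer and routing."""
    def __init__(self):
        self.buffer = None
        self.routed = True           # nothing buffered yet

    def primary(self, record):
        self.buffer = record         # replace any previous content
        self.routed = False

    def trigger(self):
        """A record arrives on the secondary or tertiary input."""
        if self.routed:
            return None              # no fresh record: trigger discarded
        self.routed = True
        return self.buffer           # route the buffered record
```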
preface—Put Output from a Device Driver before Data on the Primary Input
Stream
preface runs a device driver to generate output which is passed to preface’s output; preface
then passes all its input records to the output.
──PREFACE──string──
Type: Control.
Syntax Description: The argument string is normally a single stage, but any pipeline
specification that can be suffixed by a connector (|*:) is acceptable (see usage note 1).
Operation: The string is issued as a subroutine pipeline with CALLPIPE, using the default
stage separator (|), double quotes as the escape character ("), and the backward slash as
the end character (\). The beginning of the pipeline is unconnected. The end of the
pipeline is connected to preface’s primary output stream. (Do not write an explicit
connector.) The input records are passed to the output after the CALLPIPE pipeline
command has completed.
In the subroutine pipeline, device drivers that reference REXX variables (rexxvars, stem,
var, and varload) reach the EXECCOMM environments in effect for preface.
Streams Used: The string that specifies the subroutine pipeline can refer to all defined
streams except for the primary output stream (which will be connected to the end of the
subroutine pipeline by preface). The primary input stream is shorted to the primary output
stream when the subroutine pipeline ends.
Record Delay: preface delays the input file by the number of records that are prefaced.
These records are written before the input file is read.
Commit Level: preface starts on commit level -1. The subroutine pipeline must commit
to 0 if it generates output.
Examples: To put the contents of a variable before the stream being built:
...| preface var firstline |...
Notes:
1. The argument string may contain stage separators and other special characters. Be
sure that these are processed in the right place. The argument string is passed through
the pipeline specification parser twice, first when the pipeline containing the preface
stage is set up, and secondly when the argument string is issued as a subroutine pipe-
line. The two example pipelines below show ways to preface a subroutine pipeline
consisting of more than one stage. In both cases, the split stage is part of the subrou-
tine pipeline and, thus, splits only the record produced by the second literal stage:
pipe literal c d e| preface literal a b || split | console
a
b
c d e
Ready;
Return Codes: The return code is the return code from the CALLPIPE pipeline command.
It may reflect errors in the argument string or trouble with the stage(s) in the pipeline.
printmc—Print Lines
printmc copies lines from the pipeline to a virtual printer. The lines must have machine
carriage control characters.
CMS
┌─00E─────┐
──printmc──┼─────────┼──┬──────┬──
└─devaddr─┘ └─STOP─┘
Syntax Description: Arguments are optional. Specify the device address of the virtual
printer to write to if it is not the default 00E. The virtual device must be a unit record
output printer device. The keyword STOP allows you to inspect the channel programs built
by printmc.
Operation: The first byte of each record designates the CCW command code (machine
carriage control character); it is inserted as the CCW command code. The remaining char-
acters are identified for transport to SPOOL by the address and length fields of the CCW. A
single blank character is written if the input record has only the command code. Control
and no operation CCWs can specify data; the data are written to the SPOOL file. X'5A'
operation codes are supported, but other read commands are rejected with an error
message; command codes are not otherwise inspected.
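The way printmc interprets each record can be sketched as follows (an illustration only; the helper name is ours): the first byte becomes the CCW command code, the remainder the data, and a record carrying only the command code is written as a single blank.

```python
def split_record(record: bytes):
    """Split a printmc input record into (command code, data)."""
    code, data = record[:1], record[1:]
    return code, data if data else b" "   # lone code: one blank is written
```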
Records may be buffered by printmc to improve performance by writing more than one
record with a single call to the host interface. A null input record causes printmc to flush
the contents of the buffer into SPOOL, but the null record itself is not written to SPOOL.
After the producing stage has written a null record it is assured that printmc can close the
unit record device without loss of data. Input lines are copied to the primary output
stream, if it is connected.
The virtual Forms Control Buffer (FCB) for a virtual printer (the virtual carriage control
tape) can be loaded by a CCW or the CP command LOADVFCB. The channel program is
restarted after a channel 9 or 12 hole causes it to terminate; even so, such holes in the
carriage tape should be avoided, because they serve no useful purpose and they generate
additional overhead.
Commit Level: printmc starts on commit level -2000000000. It ensures that the device is
not already in use by another stage, allocates a buffer, and then commits to level 0.
The trick is to pass a null record to printmc to force it to flush the contents of its buffer
into CP SPOOL before the device is closed.
Notes:
1. printmc has not been tested with a dedicated printer; its error recovery is unlikely to
be adequate for a dedicated printer.
2. Any output data can be written, including 3800 CCWs, but be aware that CP support
depends on the virtual device type. For example, the maximum record length
(including CCW operation code prefix) is 133 bytes on a virtual 1403.
3. STOP causes CP console function mode to be entered after each channel program has
been given to CP. In a virtual machine on a CP supporting Diagnose A8, general
register 2 points to the HCPSGIOP data area, from which information about the channel
program can be extracted.
In virtual machines that do not support Diagnose A8 (VM/System Product with or
without VM/High Performance Option), general register 6 points to the byte following
the last CCW built; the beginning of the channel program is found by the CP command
“display caw”. The last CCW executed is inferred from the CSW, which is obtained
by “display csw”. Make sure you SET RUN OFF when using this option. This func-
tion was written to help debug printmc, but it may also be useful to discover errors in
input data.
punch—Punch Cards
punch copies lines from the pipeline to punched cards.
CMS
┌─00D─────┐
──punch──┼─────────┼──┬──────┬──
└─devaddr─┘ └─STOP─┘
Syntax Description: Arguments are optional. Specify the device address of the virtual
punch to write to if it is not the default 00D. The virtual device must be a unit record
output punch device. The keyword STOP allows you to inspect the channel programs built
by punch.
Operation: Each input record that is not null is written to the punch with the command
write, feed, select stacker 2 (X'41').
Records may be buffered by punch to improve performance by writing more than one
record with a single call to the host interface. A null input record causes punch to flush
the contents of the buffer into SPOOL, but the null record itself is not written to SPOOL.
After the producing stage has written a null record it is assured that punch can close the
unit record device without loss of data. Input lines are copied to the primary output
stream, if it is connected. Any output data can be written, but CP truncates cards after 80
bytes without error indication.
Commit Level: punch starts on commit level -2000000000. It ensures that the device is
not already in use by another stage, allocates a buffer, and then commits to level 0.
The trick is to pass a null record to punch to force it to flush the contents of its buffer into
CP SPOOL before the device is closed.
Notes:
1. Use uro to create punch files that contain records having command code X'03' (no
operation).
2. punch has not been tested with a dedicated card punch; its error recovery is unlikely to
be adequate for a dedicated card punch.
3. STOP causes CP console function mode to be entered after each channel program has
been given to CP. In a virtual machine on a CP supporting Diagnose A8, general
register 2 points to the HCPSGIOP data area, from which information about the channel
program can be extracted.
In virtual machines that do not support Diagnose A8 (VM/System Product with or
without VM/High Performance Option), general register 6 points to the byte following
the last CCW built; the beginning of the channel program is found by the CP command
“display caw”. The last CCW executed is inferred from the CSW, which is obtained
by “display csw”. Make sure you SET RUN OFF when using this option. This func-
tion was written to help debug punch, but it may also be useful to discover errors in
input data.
¡ ──QPDECODE──
¡ Type: Filter.
¡ Syntax Description:
¡ Operation: Escape sequences of the form X'3dxxxx' are converted to the single char-
¡ acter represented by the two encoded characters. Other characters are passed unchanged to
¡ the output record.
¡ X'3d' at the end of the line means to splice with the following line.
¡ The escape sequences are validated for being complete and correct ASCII (case is ignored,
¡ however). When this validation fails and no secondary output stream is defined, a message
¡ is issued and qpdecode exits. When the secondary output stream is defined, any partial
¡ record stored as a result of line splicing with a trailing equal sign is written to the primary
¡ output, the entire erroneous input record is passed to the secondary output and processing
¡ continues.
¡ Streams Used: Secondary streams may be defined. Records are read from the primary
¡ input stream; no other input stream may be connected.
¡ Record Delay: qpdecode does not delay the last record written for an input record.
¡ Commit Level: qpdecode starts on commit level -2. It verifies that the secondary input
¡ stream is not connected and then commits to level 0.
¡ Premature Termination: qpdecode terminates when it discovers that any of its output
¡ streams is not connected.
¡ Notes:
¡ 1. qpdecode is concerned solely with decoding. More work is required to build a
¡ complete receiver for MIME encoded files.
¡ 2. If you are decoding mail, it is probable that the receiving mailer has “converted” the
¡ file to EBCDIC. You must convert it back to ASCII before it is passed to qpdecode.
¡ (And then back to EBCDIC.)
¡ 3. qpdecode validates only escape sequences. Thus, it passes characters that are not valid
¡ in an encoded file.
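For comparison, Python's standard quopri module performs the same decoding in the ASCII domain (note 2 above explains why the data must be ASCII before decoding; the sample data is ours):

```python
import quopri

# "=2C" decodes to a comma, "=21" to an exclamation mark, and a
# trailing "=" splices the line with the following one.
encoded = b"Hello=2C=\n World=21"
decoded = quopri.decodestring(encoded)
print(decoded)   # b'Hello, World!'
```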
¡ ──QPENCODE──
¡ Type: Filter.
¡ Operation: Encode records according to the MIME quoted-printable encoding. For each
¡ input record, as many 76-byte output records are produced as required; the last record from
¡ a particular input record is in general shorter than 76 bytes.
! In addition to the encoding mandated by the standard, qpencode escapes those special
! characters that are not codepage invariant in the EBCDIC domain. These are X'21222324',
! X'40', X'5b5c5d5e', X'60', and X'7b7c7d7e'.
¡ Record Delay: qpencode does not delay the last record written for an input record.
¡ Notes:
¡ 1. qpencode is concerned solely with encoding. More work is required to build a
¡ complete MIME encoded file.
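Python's quopri module implements the base quoted-printable encoding for ASCII-domain data (it does not add the extra EBCDIC-invariance escapes described above); the sample bytes are ours:

```python
import quopri

data = b"caf\xc3\xa9\n"                 # UTF-8 bytes for "café"
encoded = quopri.encodestring(data)     # bytes above 127 become =XX
print(encoded)
assert quopri.decodestring(encoded) == data   # the encoding round-trips
```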
CMS GCS
──QSAM──word──
Warning: qsam behaves differently when it is a first stage and when it is not a first stage.
Existing data can be overlaid when qsam is unintentionally run other than as a first stage.
To use qsam to read data into the pipeline at a position that is not a first stage, specify
qsam as the argument of an append or preface control. For example, |append qsam ...|
appends the data produced by qsam to the data on the primary input stream.
Operation: The data set is read when qsam is first in a pipeline; it is written when qsam
is not first in a pipeline.
qsam generates record descriptor words and block descriptor words when it writes a data
set in variable format (V, VB, VS, or VBS). Such record descriptor words are removed when
qsam reads a data set. On CMS and GCS, qsam does not support spanned records; it can
write undefined record format data sets, but it cannot read them.
Commit Level: qsam starts on commit level -2000000000. It opens the data set and then
commits to 0.
See Also: disk, <, >, >>, members, pdsdirect, readpds, and writepds.
Notes:
1. qsam can read sequential data sets from OS disks, but CMS does not support writing on
OS disks. Use disk or one of its “almost synonyms” (<, >, or >>) to read and write
CMS files.
2. qsam reads and writes a DCB, not necessarily a disk file; it does not investigate where
the data set is allocated.
3. On CMS, a data definition must have been established with a FILEDEF for the specified
DDNAME before the pipeline specification is issued.
4. Use the LABELDEF command in conjunction with FILEDEF to process standard labelled
tapes.
5. To be compatible with the past, qsam is also shipped for TSO. It may work, but it is
not supported.
┌─VERSION──┐
──Query──┼──────────┼──
├─MSGLEVEL─┤
├─MSGLIST──┤
└─LEVEL────┘
Operation: A message is issued with the information requested when the primary output
stream is not connected. Message 86 is issued to display the pipeline version; message
186 displays the message level; message 189 displays the list of messages issued; message
560 displays the pipeline level.
A line is written (no message is issued) when the primary output stream is connected.
Output Record Format: When the primary output stream is connected, a record is written
in this format:
VERSION The text for message 86, including ten characters prefix.
MSGLEVEL Four bytes of binary data.
MSGLIST 44 bytes containing 11 items of four bytes each. The message number is
in the first three bytes; the severity code is in the last one. When the
message number is 999 or less, it is stored as three characters; when
larger than 999 it is stored as a packed decimal number without sign.
The last item corresponds to the last message issued; the first item corre-
sponds to the message issued the least recently. Leftmost items are
binary zeros when fewer than 11 messages have been issued.
LEVEL Four bytes of binary data. The version (B'0001') is stored in the first
four bits. The release (B'0001') is stored in the next four bits. The
modification level (B'00000111') is stored in the next eight bits. The
last sixteen bits are a serial number for the particular build of the PIPELINE
MODULE.
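The LEVEL layout above can be unpacked as follows (a sketch assuming the big-endian bit layout just described; the function name and sample value are ours):

```python
def decode_level(word: bytes):
    """Unpack the 4-byte LEVEL word: version, release, mod, serial."""
    n = int.from_bytes(word, "big")
    return (n >> 28,            # first four bits: version
            (n >> 24) & 0xF,    # next four bits: release
            (n >> 16) & 0xFF,   # next eight bits: modification level
            n & 0xFFFF)         # last sixteen bits: build serial number

# decode_level(bytes([0x11, 0x07, 0x12, 0x34])) -> (1, 1, 7, 0x1234)
```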
Commit Level: query starts on commit level -4. It commits to level 0 when the argument
keyword is validated.
Notes:
1. The message number is added to the message list when CMS/TSO Pipelines issues a
message, except for messages 1, 3, 4, 189, 192, 260, 278, and 836. These messages
are informational to describe the conditions under which the previous message was
issued. There is one list for all stages, pipelines, pipeline specifications, and pipeline
sets.
──RANDOM──┬─────────────────────────┬──
│ ┌─*──────┐ ┌─*───────┐ │
└─┴─number─┴──┼─────────┼─┘
└─snumber─┘
Syntax Description:
number Specify the modulus if you wish to restrict the values of the output
numbers. The modulus must be positive; the output number is the
remainder of the pseudorandom number after division by the modulus;
the output numbers are positive. Specify an asterisk as a placeholder.
snumber Specify a number to be used as a seed for the sequence of numbers. If
the seed is omitted or an asterisk is specified, a seed is obtained from the
time-of-day clock.
Premature Termination: random terminates when it discovers that its output stream is
not connected. random does not terminate normally.
Examples:
pipe random | take 3 | spec 1-* c2x | console
AF6353E5
53E6CCDC
CCDBDA16
Ready;
pipe random 7| take 3 | spec 1-* c2x | console
00000003
00000006
00000001
Ready;
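The modulus behaviour can be mimicked in Python (an illustration only; CMS/TSO Pipelines uses its own generator, so the values differ from the examples above):

```python
import random

rng = random.Random(1234)            # the seed plays the role of snumber
for _ in range(3):
    value = rng.getrandbits(32) % 7  # remainder after division by 7
    print(f"{value:08X}")            # eight hexadecimal digits, as above
```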
reader does not change the SPOOL settings of the virtual reader it uses. Be sure to
issue the CP command “spool reader hold” or similar if you wish to retain a reader
file after it has been read.
CMS
¡         ┌─00C─────┐ ┌──────────┐
──READER──┼─────────┼─┬─────────┬┴─┬──────────────┬──
          └─devaddr─┘ ├─4KBLOCK─┤  └─FILE──number─┘
¡                     ├─PURGE───┤
¡                     ├─KEEP────┤
¡                     ├─NOKEEP──┤
¡                     ├─HOLD────┤
¡                     └─NOHOLD──┘
Syntax Description:
Specify the device address of the virtual reader to read from, if it is not the default 00C.
The virtual device must be a unit record reader device.
4KBLOCK specifies that each complete 4K buffer should be written to the pipeline without
deblocking.
¡ If any of the options PURGE KEEP NOKEEP HOLD NOHOLD are specified, they are added to
¡ the CLOSE command that closes the reader after the file has been read.
The number after the keyword FILE designates a particular reader file to be processed.
Operation: When FILE is specified, the file is selected for the reader before the first block
is read. The file must not be in hold status; it must have a class that can be read by the
reader in question. When FILE is omitted, CP selects the next available SPOOL file that is
not held and is compatible with the reader.
Unless the 4KBLOCK option is specified or implied, the reader file is deblocked into records
that contain the command code in the first position followed by the data. Trailing blanks
are added if the SPOOL file contains the original length of the record. All CCWs are written
including control and no operation. Data chained sequences (which often span input
blocks) are joined into one logical record.
| The reader is closed with the CP command “close” when processing completes without
| error; the reader is left open when reader terminates due to an error.
Commit Level: reader starts on commit level -2000000000. It determines that the device
is not already in use by another stage, selects the file (if FILE is specified), and then
commits to level 0.
Premature Termination: reader terminates when it discovers that its output stream is not
connected.
Examples: A subroutine pipeline that deblocks a reader file that has been sent as a note
or with the SENDFILE command:
/* Now get the file */
'callpipe (name READER)',
'| reader ', /* Read cards */
'| strfind x41 ', /* Take only cards */
'| spec 2-* 1.80 ', /* Remove CCW code */
'| deblock netdata ', /* Get logical records */
'| strfind xc0 ', /* Take only data */
'| spec 2-* 1 ', /* Remove control byte */
'| *: ' /* Pass on */
Notes:
1. reader does not support an attached card reader.
2. 4KBLOCK must be specified to read a VMDUMP file; results are unpredictable if the
option is omitted when reading a VMDUMP file.
: 3. reader cannot read CP dump files on releases prior to Virtual Machine/Enterprise
: Systems Architecture Version 1 Release 2 Modification 2. From that release, all refer-
: ences to VMDUMP above include CP dumps.
4. There are at least two no operation CCWs at the beginning of a file that has arrived
through RSCS/Networking: the current tag and the original tag.
5. To retain the trailing blanks in the records of a SPOOL file that has been transmitted
through a network, all nodes traversed in the network must store the original length of
the record in the SPOOL file; once this information is lost, it cannot be regenerated.
6. Use the CP command “spool reader keep” or “change rdr nnnn keep” to put a
reader file in user hold status.
¡ 7. Specifying KEEP puts the file in user hold status, whereas HOLD leaves the file so that
¡ it can be read again immediately. Using NOKEEP with a virtual reader that is spooled
¡ KEEP appears to leave the file in the reader.
| 8. reader terminates and leaves the SPOOL file open when it receives an error from CP.
| It closes the file when it receives end-of-file while writing a record.
z/OS
──READPDS──pods──┬──────┬──┬────────────────────────────┬──
└─ASIS─┘ └─DELIMiter──delimitedString─┘
                           ┌────────────┐
──┬──────────┬──┬─────────┬───┬────────┬┴──
  └─USERDATA─┘  └─MEMBERs─┘   └─member─┘
pods:
├──┬─dsname───────────────┬──┤
├─dsname(generation)───┤
├─'dsname'─────────────┤
├─'dsname(generation)'─┤
└─DDname=word──────────┘
Syntax Description:
pods Enclose a fully qualified data set name in single quotes; the trailing
quote is optional. Specify the DSNAME without quotes to have the
prefix, if any, applied. Append parentheses containing a signed number
to specify a relative generation of a data set that is a member of a gener-
ation data group. To read members of an already allocated data set,
specify the keyword DDNAME= followed by the DDNAME already allo-
cated. The minimum abbreviation is DD=.
ASIS Use member names strictly as written. By default, a member name is
translated to upper case if it is not found in the mixed case spelling.
DELIMITER Specify the beginning of the delimiter record, which is written between
members. The member name is appended to this string.
USERDATA Append the user data field from the directory record to the delimiter
record. The user data is unpacked to printable hexadecimal.
MEMBERS The remaining words are names of members to be read. MEMBERS is
assumed when a word is scanned that is not a recognised keyword.
Operation: readpds first reads the contents of members (if any) specified in the argument
string; it then continues with the members specified in input records.
Each member is looked up in the library directory. If the member does not exist as written
and ASIS is omitted, the search is retried with the member name translated to upper case.
Diagnostic messages are issued for members that are not present in the library; the argu-
ment and all input records are processed before returning with return code 150 when one
or more members is not found.
Input Record Format: Blank-delimited lists of members to read from the library.
Streams Used: Records are read from the primary input stream and written to the primary
output stream. Null and blank input records are discarded.
Record Delay: readpds writes all output for an input record before consuming the input
record.
Commit Level: readpds starts on commit level -2000000000. It opens the DCB and
then commits to level 0.
Premature Termination: readpds terminates when it discovers that its output stream is
not connected.
Notes:
1. members and pdsread are synonyms for readpds.
         ┌─ -3────────┐
──RETAB──┼────────────┼──
         ├─ -number───┤
         │ ┌────────┐ │
         └───number─┴─┘
Type: Filter.
A list of positive numbers enumerates the tab stops; the numbers may be in any order.
The smallest number specifies where the left margin is; use 1 to put the left margin at the
beginning of the record.
A negative number specifies a tab stop in column 1, and for each n columns.
Operation: When a list of tab stops is used and the smallest number is not 1, the first
columns of the record are discarded up to the column specified as the left margin.
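The two forms of the argument can be normalised to an explicit list of stop columns (a sketch; the function name and the width limit are ours):

```python
def tab_stops(spec, width=80):
    """Return the stop columns: a list given in any order, or -n for
    a stop in column 1 and every n columns thereafter."""
    if isinstance(spec, int) and spec < 0:
        return list(range(1, width + 1, -spec))
    return sorted(spec)              # the smallest stop is the left margin

# tab_stops(-8, 25) -> [1, 9, 17, 25]
```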
Premature Termination: retab terminates when it discovers that its output stream is not
connected.
──REVERSE──
Type: Filter.
Premature Termination: reverse terminates when it discovers that its output stream is not
connected.
Examples:
pipe literal Hello, World! | reverse | console
!dlroW ,olleH
Ready;
Note that the argument to find is reversed because the contents of the record are reversed
at the point where find is applied.
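The transformation is the same as Python's slice reversal, shown here for comparison:

```python
record = "Hello, World!"
print(record[::-1])   # -> !dlroW ,olleH
```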
──REXX──┬─fn─────────────────────────────────────┬──┬────────┬──
├─(──┤ fileName ├──)─────────────────────┤ └─string─┘
├─┤ streamSpec ├─────────────────────────┤
└─(──┤ streamSpec ├──┬──────────────┬──)─┘
└─┤ fileName ├─┘
fileName:
├──fn──┬────────────┬──┤
└─ft──┬────┬─┘
└─fm─┘
streamSpec:
├──┬─*:────────┬──┤
└─*.stream:─┘
Syntax Description: Leading blanks are ignored; trailing blanks are significant. A word
is required; it may be followed by a string. If the first non-blank character is a left paren-
thesis, up to three words for file name, type, and mode can be specified in parentheses.
The default file type is REXX. On z/OS, the file type specifies the DDNAME of the library
that contains the program. When the first word begins with an asterisk and ends with a
colon, it specifies the input stream from where the REXX program is read; to set the file
name to be used in the REXX source string, the stream specification followed by the file
name (and optionally file type and file mode) must be specified in parentheses.
Operation: A REXX program runs as a pipeline filter. The string is passed to the program
as its argument string. The default command environment processes pipeline commands,
described in Chapter 25, “Pipeline Commands” on page 723. For a task-oriented guide,
see Chapter 7, “Writing a REXX Program to Run in a Pipeline” on page 97 and “Using
CALLPIPE to Run a Subroutine Pipeline” on page 103.
An EXEC can invoke itself as a filter. The seventh word of the source string is a question
mark when the program runs as a filter on CMS; it is PIPE on z/OS.
If, on CMS, the program is not already loaded in storage by an explicit EXECLOAD, it is
loaded with the EXECLOAD command before it is invoked. Concurrent invocations of a
program use the same copy as long as the file is accessible and has the same timestamp;
the program is removed from storage when the last concurrent invocation terminates.
Compiled REXX programs (with option CEXEC) are invoked by direct branch to the
compiler runtime environment. If the runtime environment is not installed as a nucleus
extension, it is invoked with CMSCALL to make it initialise itself.
Streams Used: The REXX program can select any defined stream using the SELECT
pipeline command; it can define additional streams with the ADDSTREAM pipeline command; it
can determine the number of defined streams from the return code from the MAXSTREAM
pipeline command.
Record Delay: rexx does not read or write records. The delay depends on the program
being run.
Commit Level: rexx starts on commit level -1. When the program is read from an input
stream, rexx commits to zero before reading the program. When the program is not read
from an input stream, rexx commits to level 0 (unless the command NOCOMMIT has been
issued) when the first I/O operation (OUTPUT, PEEKTO, or READTO pipeline commands) is
requested or the pipeline command SELECT ANYINPUT is issued. The program can issue
COMMIT before doing I/O to test whether any other stage has returned with a nonzero return
code on commit level -1.
hello2
Hello, world! (From a dual-path REXX.)
Ready;
TESTALT EXEC reads the program from the secondary input stream:
/* TESTALT EXEC */
'PIPE (end ? name REXX)',
'|literal me Tarzan',
'|r: rexx *.1:',
'|console',
"?literal /* */ 'output Hello, World!'; 'short'",
'|r:'
testalt
Hello, World!
me Tarzan
Ready;
Notes:
1. You need not use an explicit rexx to invoke a REXX program with file type REXX
unless there is a built-in program with the same name; CMS/TSO Pipelines looks for a
REXX program when it cannot resolve a filter in the built-in directories and attached
filter packages.
2. Compiled REXX programs are supported on CMS. Programs can be compiled with the
OBJECT option or the CEXEC option. The former programs are included in filter
packages; the latter are run from disk or EXECLOADed.
3. A program that is used often should be EXECLOADed to improve performance.
4. CMS/TSO Pipelines installs an alternate EXEC interpreter in a slightly different way
than CMS does: When there is no nucleus extension installed for the processor, it is
called to install itself. This is done by CMSCALL with a call type program (flag byte is
zero) and register zero cleared to zeros. The interpreter should install itself (or its
runtime routine) as a nucleus extension and return. CMS/TSO Pipelines then looks
again for the runtime environment and branches to it. CMS/TSO Pipelines supports
only EXECs requiring an alternate processor that installs itself as a nucleus extension.
5. On z/OS, REXX filters run in dedicated reentrant environments. Such environments
cannot be merged with the TSO environment. Issue TSO commands with command,
tso, or subcom TSO instead.
/* Issue a TSO command */
'callpipe command time'
6. The rexx verb cannot be defaulted when the program is to be read from an input
stream.
| 7. Remember that REXX on CMS resolves an external function call using the type of the
program (for example, REXX when the function call is from a REXX filter). If you
have a filter with the same name as an external function, the filter will be invoked
rather than the corresponding EXEC. REXX allows for no way to suppress this
“feature”.
8. Remember that REXX continuation functionally replaces a trailing comma with a blank.
Also recall that when two strings are separated by one or more blanks, REXX concat-
enates them with a single blank. Use the concatenation operator (||) before the
comma at the end of the line if your portrait style has the stage separators at the left
side of the stage and the trailing blank is significant to your application.
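For instance, to keep the trailing blank of the literal string significant when the stage
separator begins the next line (a sketch):

/* Without ||, an extra blank would appear before the separator */
'callpipe literal xyz '||,
'| console'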
Return Codes: Unless it is a return code associated with trouble finding the REXX
program to run, the return code is the one received from the REXX program.
Red Neon!
The output from rexxvars must be buffered if other stages access the variable pool
concurrently, for example to return results with stem.
──REXXVARS──┬──────────┬──┬────────┬──┬──────────┬──
¡ ├─PRODUCER─┤ └─number─┘ └─NOMSG233─┘
└─MAIN─────┘
──┬───────────────────────────────────────┬──┬───────────────┬──
└─TOLOAD──┬───────────────────────────┬─┘ └─WIDTH──number─┘
├─NOCOMMENTS────────────────┤
└─COMMENTS──delimitedString─┘
Syntax Description: It is possible to access a REXX variable pool other than the current
one.
The keyword PRODUCER may be used when the pipeline specification is issued with
CALLPIPE. It specifies that the variable pool to be accessed is the one for the stage that
produces the input to the stage that issues the subroutine pipeline that contains rexxvars,
rather than the current stage. (This is a somewhat esoteric option.) To ensure that the
variable pool persists as long as this invocation of rexxvars, the stage that is connected to
the currently selected input stream must be blocked in an OUTPUT pipeline command while
the subroutine pipeline is running.
The keyword MAIN specifies that the REXX variable pool to be accessed is the one in effect
at the time the pipeline set was created (either by the PIPE command or by the runpipe
stage). MAIN is implied for pipelines that are issued with ADDPIPE.
A number that is zero or positive is optional. It specifies the number of REXX variable
pools to go back. That is, rexxvars can operate on variables in the program that issued the
pipeline specification to invoke rexxvars or in one of its ancestors. (When the number is
prefixed by either PRODUCER or MAIN, the variable pool to be accessed is the producer’s or
the main one, or one of their ancestors.) On CMS, if the number is larger than the number
of REXX environments created on the call path from the PIPE command, rexxvars continues
on the SUBCOM chain starting with the environment active when PIPE was issued.
¡ Specify the option NOMSG233 to suppress message 233 when the REXX environment does
¡ not exist. Either way, rexxvars terminates with return code 233 on commit level -1 when
¡ the environment does not exist.
Specify TOLOAD to write output records in the format required as input to varset (and to
varload): each record contains the variable’s name as a delimited string followed by the
variable’s value. The delimiter is selected from the set of characters that do not occur in
the name of the variable; it is unspecified how this delimiter is selected. The keyword
COMMENTS is followed by a delimited string that enumerates the characters that should not
be used as delimiter characters. The keyword NOCOMMENTS specifies that the delimiter
character can be any character that is not in the variable’s name. By default, neither
asterisk nor blank is used as a delimiter, because these are the default comment characters
used by varload.
The keyword WIDTH specifies the minimum size in bytes of the buffer into which REXX
returns the value of a variable. The number specified is a minimum buffer size; rexxvars
may allocate more. The default width is 512.
Output Record Format: When TOLOAD is specified, one line is written for each variable
in the variable pool. The line contains these items (without additional separators):
1. A delimiter character (in column 1).
2. A variable’s name (beginning in column 2).
3. A delimiter character (a copy of the character in column 1).
| 4. The variable’s value.
When TOLOAD is omitted, the first line contains the source string for the REXX
environment. Subsequent pairs of records describe variables; a record that contains the name of a
variable is followed by a record that contains the variable’s value. Each line is prefixed
with a character describing the item; three prefix characters are used:
s The source string. This is the first line written.
n The name of a variable. The value is on the following line.
v The data contained in the variable whose name is defined in the preceding record.
Only as much data as will fit within the width specified (or 512, the default) are
written to the pipeline.
There is one blank between the prefix character and the data.
Commit Level: rexxvars starts on commit level -1. It verifies that the REXX environment
exists (if it did not do so while processing its parameters). It fetches a dummy variable
from the pool to ensure that it starts fetching variables at the beginning of the pool and
then commits to level 0.
Premature Termination: rexxvars terminates when it discovers that its output stream is
not connected.
Examples: To dump the current REXX variables to a file for later analysis:
/* Sample Syntax error routine */
Syntax:
Say 'Syntax error' RC':' errortext(RC)
parse source . . $$$fn$$$ $$$ft$$$ .
Say 'Error occurred on line' sigl 'of' $$$fn$$$ $$$ft$$$
Say sourceline(sigl)
address command 'PIPE rexxvars | >' $$$fn$$$ 'variables a'
The instruction Signal on Syntax causes the routine to be invoked whenever there is a
syntax error.
The first record written by rexxvars (in this case the only output record that is used)
contains the source string, from which the name of the program can be inferred.
A buffer stage is required to buffer the output from rexxvars when data derived from its
output are stored back into the variable pool with a var, stem, varload, or varset stage:
pipe rexxvars | find v_ARRAY.| spec 3-* | buffer | stem vars.
As shown in this example, it may be more efficient to buffer the variables that are set
rather than the output from rexxvars.
Notes:
1. rexxvars uses the EXECCOMM “get next” interface when it processes a REXX variable
pool. REXX maintains, with the variable pool, a “cursor” to the next variable to
provide. The cursor is reset to the beginning when a variable is dropped, fetched, or
set, either by the interpreter itself or through the EXECCOMM interface. Thus, if some
other stage repeatedly causes the cursor to be reset to the beginning while rexxvars is
extracting the variables of a pool, an infinite number of records may be written by
rexxvars.
Clearly, the variable pool will be accessed if the pipeline writes its result back into
several variables in the same variable pool (stem, varset), but there are many other and
more subtle variations. For example, no other stage may access the variable pool to
fetch variables or drop variables (var, vardrop, and varfetch), lest the cursor be reset.
Thus, the output from a rexxvars stage must be buffered (for example, in a buffer
stage) if anything else in the CMS session could cause REXX to access the variable pool
before an unbuffered rexxvars stage would terminate. (This includes, but is not
limited to, current and future stages in the pipeline set or any pipeline set created by
the pipeline.) By inserting a buffer stage after rexxvars, you allow rexxvars to run to
completion before subsequent stages can possibly begin manipulating the variable
pool.
Special care is needed if the pipeline specification contains stages that access the
variable pool, if these stages cannot be proved to be synchronised with the buffered output
(that is, if they might access the variable pool before the buffer stage produces output),
and if there are any stages between rexxvars and buffer. No stage in this cascade may
suspend itself, nor may any stage have secondary streams defined. In this restricted
environment the otherwise unspecified pipeline dispatcher will be guaranteed not to
run stages outside the pipeline segment up to the buffer stage, once it has dispatched
rexxvars on commit level 0.
2. The output from rexxvars is unspecified when more than one rexxvars stage accesses a
particular variable pool concurrently. As far as REXX is concerned, these stages will
be sharing the read cursor and will fetch the variables in the variable pool between
them. When REXX comes to the end of the variable pool, it will signal this condition
to one of the rexxvars stages, which will then terminate. The remaining rexxvars
stages will then read the variable pool from the beginning. Thus, the rexxvars stages
will eventually all terminate, but the result is unlikely to be what you were looking
for.
3. rexxvars obtains variables exposed at the time the pipeline specification is issued.
Any variables hidden by a Procedure instruction are not returned by the underlying
interface.
4. The underlying interface does not provide the default value assigned to a stem,
because it cannot be distinguished from the compound variable with a null index.
Note the difference in these assignments:
array.=0 /* All existing compounds are reset */
ix=''
array.ix=1 /* Only one variable is set */
5. PIPE var stem.|... reads the default value of a stem.
6. When a pipeline is issued as a TSO command, IKJCT441 is called to access the variable pool.
When the command is issued with Address Link or Address Attach, rexxvars accesses
the REXX environment from where the command is issued.
7. When IKJCT441 is used, the first line written has two words, TSO CLIST, to identify the
environment.
8. CMS/TSO Pipelines maintains a reference to the current variable environment for each
stage. Initially this is the environment in effect for the PIPE command with which the
original pipeline was started.
When a REXX program is invoked (as a stage or with the REXX pipeline command), its
environment becomes the current one, with a pointer to the previous one.
When a pipeline specification is issued with the runpipe built-in program or the
CALLPIPE pipeline command, the current environment is the one in effect for the stage
issuing runpipe or CALLPIPE; it is known to persist while the subroutine pipeline runs.
On the other hand, when a pipeline specification is issued with the ADDPIPE pipeline
command, the stage that issues ADDPIPE runs in parallel with the added pipeline
specification; it can terminate at any time (indeed, even before the new pipeline
specification starts running). Therefore, for ADDPIPE, the current environment is set to
the one for the last runpipe or the one at initial entry on the PIPE command. Thus, the
MAIN option has effect only for pipeline specifications that are issued by the CALLPIPE
pipeline command.
: 9. rexxvars cannot handle truncation of the value of a variable when the buffer is too
: small, because it cannot retry the call to the underlying interface. Use varfetch to
: ensure you get the complete value:
: pipe rexxvars|find n|substr 3-*|buffer|varfetch toload|...
When the option EVENTS is specified, runpipe writes detailed information about the
pipeline specification and the progress of its execution. This information is designed to be
processed by a program.
──RUNPIPE──┬──────────────────────┬──┬───────────────────────┬──
└─MSGLevel──┬─number─┬─┘ ├─TRACE─────────────────┤
└─Xhex───┘ └─EVENTS──┬───────────┬─┘
└─MASK──hex─┘
Type: Control.
MSGLEVEL Specify the message level setting for the pipeline sets that are created by
runpipe. The value after the keyword can be a decimal number or the
letter “x” followed by a hexadecimal string. There must be no blank
between the letter and the hexadecimal string. All sixteen rightmost
bits can be set. If MSGLEVEL is omitted, the pipeline sets inherit the
message level established by PIPMOD rather than the one active for
runpipe.
TRACE Force the trace option for all pipeline specifications in the pipeline set.
This form of trace cannot be disabled in the individual pipeline
specification.
EVENTS Produce event records on the output stream.
| MASK Specify a mask for event records to be suppressed. By default, the mask
| is zero, which enables all event records. Mask bit numbering follows
| standard IBM System/360 conventions; the mask for event type 0 is bit
| number 0, which is the leftmost one (X'80000000'). A record is
| suppressed when the corresponding mask bit is one. Beware that the
| hexadecimal number is scanned as a binary number; specify all eight
| hexadecimal digits. If six digits are specified, event records 0 through 7
| cannot be suppressed, because their mask will of necessity be zero. It
| makes no sense to specify four or fewer hexadecimal digits for the mask.
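For instance, to suppress event records of type 3 while keeping all others, set mask bit 3,
which is X'10000000'; note that all eight hexadecimal digits are written (a sketch; the
variable and file names are hypothetical):

'PIPE var pipe | runpipe events mask 10000000 | > pipeline events a'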
Operation: Input records are issued as pipeline specifications. A new pipeline set is
created for each record.
When EVENTS is omitted, the new pipeline runs until it completes or issues a message.
When a message is issued, the new pipeline waits while the message is written to the
primary output stream of runpipe; the new pipeline is resumed when the write completes.
When the keyword TRACE is specified, all pipelines in the new pipeline set are forced to be
run with the trace option; this cannot be disabled by options in the individual pipelines.
As TRACE produces messages, the trace of the subject pipeline set is also written to the
output of runpipe.
When EVENTS is specified, records are written by the pipeline specification parser and by
the pipeline dispatcher in addition to records for messages issued. The EVENTS option and
the MASK option apply to all pipeline specifications in the pipeline set.
The REXX environment for the new pipeline is the one in effect for runpipe.
A stall in a pipeline that is issued with runpipe does not affect the pipeline that contains
the runpipe stage. runpipe ignores errors when it writes output records, even errors that
indicate a stall in the pipeline that contains the runpipe stage. That is, error conditions
cannot leak between the two pipeline sets. This ensures that the pipeline issued by runpipe
can terminate in an orderly way, even in the event of severe errors in the controlling
pipeline set.
Input Record Format: Each line contains a pipeline specification; the syntax of the line
is the same as the syntax of the argument string to PIPE. Specifically, global options are
specified in parentheses at the beginning of the line.
Output Record Format: When EVENTS is omitted, the output records contain CMS/TSO
Pipelines messages issued in response to the pipeline specifications and messages issued
with the MESSAGE pipeline command. The complete message is written irrespective of the
EMSG setting. The first word (10 or 11 characters) is the message identifier. Programs that
process these messages should be able to handle message numbers that have both three
and four digits.
When EVENTS is specified, output records are written in the format described in
Appendix G, “Format of Output Records from runpipe EVENTS” on page 908.
Streams Used: Records are read from the primary input stream and written to the primary
output stream. Null and blank input records are discarded.
Record Delay: runpipe writes all output for an input record before consuming the input
record.
Examples: To present error messages as XEDIT messages when a pipeline is issued during
an XEDIT session:
/* PIPE XEDIT */
parse arg pipe
address command 'PIPE var pipe | runpipe | xmsg'
r=RC
if r¬=0
then 'emsg Return code:' r'.'
exit r
runpipe is useful when tracing the pipeline dispatcher, which often generates large amounts
of data:
/* debug pipe */
pipe='(trace)' arg(1)
address command
'PIPE var pipe | runpipe | > pipeline trace a'
Notes:
1. Do not mask off console input events (type X'0F') as this causes an input console
stage to produce an infinite number of input lines.
Return Codes: The return code is the aggregate of the return codes from the pipelines
that have been run: The aggregate return code is the minimum return code if any return
code is negative; otherwise it is the maximum of the return codes received.
──SCM──┬────────────────────────────────────────┬──
└─number──┬────────────────────────────┬─┘
└─number──┬────────────────┬─┘
└─word──┬──────┬─┘
└─word─┘
Type: Filter.
Syntax Description: The first number specifies the column where a comment should
begin when the REXX instruction is short enough to leave room for alignment. The default
is 39. The second number specifies the ending column for the comment. The default is
71. The last two words specify the beginning and ending comment strings, respectively.
If both words are omitted, the defaults are /* and */. When the first word is specified,
the default for the second word is null (ADA-style comments).
When a line has only one comment begin string and this is the first non-blank string on
that line, the comment end string is aligned with the ending column (assuming the line
does not extend beyond this column).
When the line contains non-blank characters before the last comment string, the string is
aligned within the specified columns if the instruction part and the comment are both
shorter than the width of their respective column ranges.
When the instruction or the comment is longer than the column range allocated, but
together they are shorter than the area allocated, the comment is aligned to the right.
Commit Level: scm starts on commit level -1. It verifies its arguments and then commits
to 0.
Premature Termination: scm terminates when it discovers that its output stream is not
connected.
Examples:
pipe literal /* Now we check a and b| scm | console
/* Now we check a and b */
Ready;
Notes:
1. Multiline comments (comments that are spanned over several lines) are not supported,
because the comment on the first line will be terminated. If such a program is
processed by scm, it is likely to encounter errors when it is run.
¡ ──SEC2GREG──┬────────────────────┬──
¡ └─OFFSET──┬─number─┬─┘
¡ └─*──────┘
¡ Type: Filter.
¡ Syntax Description:
¡ Streams Used: Records are read from the primary input stream and written to the primary
¡ output stream. Null and blank input records are discarded.
¡ Premature Termination: sec2greg terminates when it discovers that its output stream is
¡ not connected.
¡ Notes:
¡ 1. The epoch started at 00:00:00 GMT on January first, 1970. This is the epoch used in
¡ UNIX systems.
¡ 2. LOCAL may also be specified to apply the local time zone offset.
¡ 3. A time zone offset of 86399 is not the same as one of -1.
¡ 4. For dates before year 1970, sec2greg ignores all issues as to whether the day actually
¡ occurred or the year existed at all.
¡ 5. The largest valid input number of seconds is 253402300799, which corresponds to the
¡ end of year 9999.
¡ 6. Leap seconds are not accounted for, as most UNIX systems also ignore this issue.
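For instance, one day after the start of the epoch is 86400 seconds (a sketch; the format of
the resulting timestamp depends on the options described in the syntax description):

pipe literal 86400 | sec2greg | console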
CMS
──SFSBACK──fn──ft──dirid──┬───────┬──
└─digit─┘
  ┌─────────────────────────┐
──┬───────────────────────┬┴──
├─ASIS──────────────────┤
├─ESM──delimitedString──┤
├─OLDDATERef────────────┤
├─OPENRECOVER───────────┤
└─WORKUNIT──┬─number──┬─┘
├─DEFAULT─┤
└─PRIVATE─┘
Syntax Description:
Operation: Reading begins at the last record in the file and proceeds backwards to the
first record. The file is closed before sfsback terminates.
Commit Level: sfsback starts on commit level -2000000000. It creates a private unit of
work if WORKUNIT PRIVATE is specified, opens the file, allocates a buffer if required, and
then commits to level 0.
Premature Termination: sfsback terminates when it discovers that its output stream is not
connected.
This reads your profile from your root directory in the current file pool. diskback selects
sfsback to process the file, because the third word is present, but does not specify a mode.
!! CMS
! ──SFSDIRectory──word──┬───────────┬──
! └─RECURSIVe─┘
! ┌─FORMAT──────────────────┐
! ──┼─────────────────────────┼──
! ├─NOFORMAT────────────────┤
! ├─SHOrtdate───────────────┤
! ├─ISOdate─────────────────┤
! ├─FULldate────────────────┤
! ├─STAndard────────────────┤
! └─STRing──delimitedString─┘
! Syntax Description:
! %% A single %.
! %Y Four-digit year including century (0000-9999).
! %y Two-digit year of century (00-99).
! %m Two-digit month (01-12).
! %n Two-digit month with initial zero changed to blank ( 1-12).
! %d Two-digit day of month (01-31).
! %e Two-digit day of month with initial zero changed to blank ( 1-31).
! %H Hour, 24-hour clock (00-23).
! %k Hour, 24-hour clock, with leading zero changed to blank ( 0-23).
! %M Minute (00-59).
! %S Second (00-60).
! %F Equivalent to %Y-%m-%d (the ISO 8601 date format).
! %T Short for %H:%M:%S.
! %t Tenths and hundredths of a second (00-99).
! Operation: A private unit of work is obtained to ensure a consistent view of the file
! space. A line is written for each file or directory in the specified root directory. When
! RECURSIVE is specified, the contents of subdirectories are also written.
! Output Record Format: For NOFORMAT, the output record is 112 bytes. Refer to the
! macro DIRBUFF in DMSGPI MACLIB for intent FILE and the description of DMSGETDI.
! Premature Termination: sfsdirectory terminates when it discovers that its output stream
! is not connected.
! Examples: To list the root directory of the server that runs the samples, which contains
! a single file:
! pipe sfsdir . str /%F/ | console
! PROFILE EXEC -1 V 19 4 1 2001-12-09 SFS:JOHN3.
! Ready;
CMS
──SFSRANDOM──fn──ft──dirid──┬───────┬──
└─digit─┘
  ┌─────────────────────────┐   ┌─────────┐
──┬───────────────────────┬┴───┬───────┬┴──
├─ASIS──────────────────┤ └─range─┘
├─BLOCKed───────────────┤
├─ESM──delimitedString──┤
├─NUMBER────────────────┤
├─OLDDATERef────────────┤
├─OPENRECOVER───────────┤
└─WORKUNIT──┬─number──┬─┘
├─DEFAULT─┤
└─PRIVATE─┘
Syntax Description:
Further arguments are ranges of records to be read. Use an asterisk as the end of a range
to read to the end of the file.
Commit Level: sfsrandom starts on commit level -2000000000. It creates a private unit
of work if WORKUNIT PRIVATE is specified, opens the file, allocates a buffer if required,
and then commits to level 0.
Premature Termination: sfsrandom terminates when it discovers that its output stream is
not connected.
Examples: Both of these commands read records 7, 8, 3, and 1 from a file and write
them to the pipeline in that order:
pipe diskrand profile exec . 7.2 3 1 |...
pipe literal 3 1 | diskrand profile exec . 7.2 |...
This reads your profile from your root directory in the current file pool. diskrand selects
sfsrandom to process the file, because the third word is present, but it does not specify a
file mode.
Notes:
1. RECNO is a synonym for NUMBER.
2. sfsrandom performs at least one read operation for the records in the arguments, if
specified, and one read operation for each input record. When BLOCKED is specified,
all records in a range are read in a single operation. It is unspecified how many
additional read operations it performs for records specified in the arguments or a
particular input record. This may be significant when the file is updated with
diskupdate. Ensure that no stage delays the record between stages reading and writing
a file being updated.
CMS
──SFSUPDATE──fn──ft──dirid──┬───────┬──
└─digit─┘
  ┌─────────────────────────┐
──┬───────────────────────┬┴──
├─ALLOWEMPTY────────────┤
├─ASIS──────────────────┤
├─ESM──delimitedString──┤
├─Fixed──┬────────┬─────┤
│ └─number─┘ │
├─HARDEN──┬────────┬────┤
│ └─number─┘ │
├─KEEP──────────────────┤
├─MDATE──number─────────┤
├─OPENRECOVER───────────┤
├─SAFE──────────────────┤
├─Variable──────────────┤
└─WORKUNIT──┬─number──┬─┘
├─DEFAULT─┤
└─PRIVATE─┘
Syntax Description:
Operation: Columns 11 through the end of the input record replace the contents of the
record in the file. The file is closed before sfsupdate terminates.
Input Record Format: The first 10 columns of an input record contain the number of the
record to replace in the file (the first record has number 1). Leading and trailing blanks
are acceptable; the number need not be aligned in the field. It is an error if an input
record is shorter than 11 bytes.
The valid values for the record number depend on the record format of the file:
Fixed For fixed record format files, any number can be specified for the record
number (CMS creates a sparse file if required). An input record can contain
any number of consecutive logical records as a block. The block has a single
10-byte prefix containing the record number of the first logical record in the
block.
Variable When the file has variable record format, the record number must be at most
one larger than the number of records in the file at the time the record is
written to it. The data part of input records must have the same length as the
records they replace in the file.
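The 10-byte record number prefix can be built with spec; for instance, to replace record 3
of a file with new contents (a sketch; the file name and the new text are hypothetical, and
for a variable record format file the new text must match the length of the existing record):

pipe literal New text | spec /3/ 1 1-* 11 | sfsupdate notes script .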
Streams Used: sfsupdate copies the input record (including the record number) to the
output after the file is updated with the record.
Commit Level: sfsupdate starts on commit level -2000000000. It creates a private unit of
work if WORKUNIT PRIVATE is specified or defaulted, opens the file, allocates a buffer if
required, and then commits to level 0.
Note that sfsupdate runs on a private unit of work, whereas < runs on the default unit of
work. Defining the secondary output stream of change makes change write only changed
records to its primary output stream.
──SNAKE──number──┬────────┬──
└─number─┘
Type: Filter.
Syntax Description: The first number specifies the number of columns to be made. The
second number specifies the number of lines on a “page”. If the second number is
omitted, snake reads the file and determines the minimum number of rows required to fill
all columns; when the number of input records is not evenly divisible by the number of
columns, the last column will not be filled completely.
Operation: When the second number is omitted, snake reads the entire file to determine
the number of records and sets the page depth accordingly.
Assuming the number of lines on a page is n, the first output line contains records 1, n+1,
2*n+1, and so on. Thus, if the input records are sorted, the columns on the page will be
sorted downwards.
Input Record Format: Input records should be of fixed length; snake neither pads nor
truncates to fit records into columns.
Record Delay: snake can delay all records that make up a “page”.
Commit Level: snake starts on commit level -1. It verifies its arguments and then
commits to 0.
Premature Termination: snake terminates when it discovers that its output stream is not
connected.
To transpose a matrix:
pipe literal c d | literal a b | cons | split | snake 2 | console
a b
c d
ac
bd
Ready;
The first two lines of output show the input matrix; the last two show the resulting matrix
without padding.
──SOCKA2IP──
Type: Filter.
Input Record Format: When the input line is four bytes long, the input record contains
the unsigned long IP address to be converted.
Otherwise the input record contains a structure of sixteen bytes. Binary numbers are
stored in the network byte order, that is, with the most significant bit leftmost.
Output Record Format: When the input line is four bytes long, the output record
contains a single word, which is the IP address in dotted-decimal notation.
Streams Used: Records are read from the primary input stream and written to the primary
output stream. Null input records are discarded.
Premature Termination: socka2ip terminates when it discovers that its output stream is
not connected.
Examples:
sort—Order Records
sort reads all input records and then writes them in a specified order.
┌─NOPAD─────┐
──SORT──┬────────┬──┼───────────┼──┬─────────┬──
├─COUNT──┤ └─PAD──xorc─┘ └─ANYcase─┘
└─UNIQue─┘
┌─Ascending─────────────────────────────────────┐
──┼───────────────────────────────────────────────┼──
├─Descending────────────────────────────────────┤
│ ┌───────────────────────────────────────────┐ │
│ │             ┌─Ascending──┐                 │ │
└─┴─inputRange──┼────────────┼──┬───────────┬──┴─┘
                └─Descending─┘  ├─NOPAD─────┤
                                └─PAD──xorc─┘
Type: Sorter.
Syntax Description: Arguments are optional. If present, the keywords COUNT or UNIQUE
must be first. Write the keywords PAD or NOPAD in front of the sort fields to specify the
default for all fields; the default is NOPAD. The keyword NOPAD specifies that key fields
that are partially present must have the same length to be considered equal; this is the
default. The keyword PAD specifies a pad character that is used to extend the shorter of
two key fields.
The keyword ANYCASE specifies that case is to be ignored when comparing fields; the
default is to respect case. Up to 10 sort ranges can be specified. The default is to sort
ascending on the complete record. The ordering can be specified for each field; it is
ascending by default.
Operation: Records with identical sort keys remain in the order they appear on input
unless one of the keywords COUNT or UNIQUE is used.
The first record with a given key is retained when COUNT or UNIQUE is used; subsequent
records with duplicate keys are discarded. A 10-character count of the number of occur-
rences of the key is prefixed to the output record when COUNT is specified.
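The effect of COUNT and UNIQUE can be sketched in Python. This is an analogue, not CMS/TSO Pipelines code; it assumes a stable sort (which Python's sorted guarantees) and shows the 10-character count prefix.

```python
def sort_unique(records, key=lambda r: r, count=False):
    """Stable sort; keep the first record per key, as sort UNIQUE does.
    With count=True, prefix a 10-character occurrence count, as sort
    COUNT does."""
    ordered = sorted(records, key=key)  # Python's sort is stable
    out, seen = [], {}
    for rec in ordered:
        k = key(rec)
        if k in seen:
            seen[k] += 1        # duplicate: count it, do not keep it
        else:
            seen[k] = 1
            out.append(rec)     # first record with this key is retained
    if count:
        out = ["%10d%s" % (seen[key(rec)], rec) for rec in out]
    return out
```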
Streams Used: Records are read from the primary input stream and written to the primary
output stream. sort reads all input records before it writes output. When COUNT or
UNIQUE is specified, records that have duplicate keys are written to the secondary output
stream, if it is defined.
Commit Level: sort starts on commit level -2. It verifies that the secondary input stream
is not connected and then commits to level 0.
Examples: To sort hexadecimal data correctly, use xlate to change the collating sequence
so that A through F sort after the numbers. Use another xlate to change the sequence back
after the sort. It is assumed that the sort field contains only blanks, numbers, and A
through F.
/* HEXSORT REXX: Sort in HEX */
'callpipe',
'*: |',
'xlate *-* A-F fa-ff fa-ff A-F |',
'sort' arg(1) '|',
'xlate *-* A-F fa-ff fa-ff A-F |',
'*:'
exit RC
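The translate, sort, translate-back pattern that HEXSORT uses can be sketched in Python. Note that the swap of A-F with X'FA'-X'FF' is needed because EBCDIC letters collate before digits; in ASCII, digits already collate before A-F, so the sketch below instead makes digits collate after letters to demonstrate the same technique. The names are chosen for the sketch.

```python
# Map the digits to code points above the letters for the comparison,
# and map back afterwards.  The translation table is its own inverse,
# so applying it twice restores the original data.
high = "".join(chr(0x80 + d) for d in range(10))
swap = str.maketrans("0123456789" + high, high + "0123456789")

def collate_sort(records):
    """Sort under a modified collating sequence: translate, sort,
    translate back (the pattern HEXSORT plays with xlate)."""
    return [r.translate(swap) for r in sorted(r.translate(swap) for r in records)]
```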
sort sorts binary data, even when the data may look like numbers, which you might expect
to be sorted numerically rather than by the collating sequence:
pipe literal 11 5 2 1 | split | sort | console
1
11
2
5
Ready;
Notes:
1. sort is stable. That is, records that have the same contents of the key field(s) are in
the same order on output as they were on input.
2. The keyword COUNTDBG is intended to test sort. Output records have four characters
prefixed to the count field.
3. When assembled with test code enabled, sort looks for streams with identifiers 'stat',
'trc', and 'xtrc'. The information written to these streams is unspecified.
4. Use DFSORT/CMS, IBM Program Number 5664-325, to sort files that are too large for
sort. dfsort can be used to interface CMS Pipelines to this sort program.
5. Unless ANYCASE is specified, key fields are compared as character data using the IBM
System/360 collating sequence. Use spec (or a REXX program) to put a sort key first
in the record if you wish, for instance, to use a numeric field that is not aligned to the
right within a column range. Use xlate to change the collating sequence of the file.
! 6. CASEANY, CASEIGNORE, CASELESS, and IGNORECASE are all synonyms for ANYCASE.
7. Note in particular that sort performs a binary comparison of key fields from left to
right. Thus, numeric fields will be sorted “correctly” only when the data to be
| compared are aligned to the right within sort fields of equal size (since padding is
applied on the right-hand side only). Thus, a numeric sort is unlikely to “work” when
the sort field is defined as, for example, a word. See also “Numeric Sorting” on
page 128.
8. sort UNIQUE orders the file and discards records with duplicate keys. Refer to lookup
for an example of extracting all unique records from a file without altering their order.
¡ ──SPACE──┬────────┬──┬─────────────────────────────┬──
¡          └─number─┘  ├─xorc────────────────────────┤
¡                      └─┬────────┬──delimitedString─┘
¡                        └─STRing─┘
¡
¡ ──┬────────────────────────────┬──
¡   ├─xorc───────────────────────┤
¡   └─┬───────┬──delimitedString─┘
¡     └─ANYof─┘
¡ Type: Filter.
¡ Syntax Description:
¡ number Specify the number of occurrences of the pad string to insert for each
¡ internal string of delimiters. The default is 1.
¡¡ STRING The second operand specifies the string to replace sequences of delimiter
¡ characters. The default is a single blank.
¡¡ ANYOF The third operand specifies the delimiter character(s). This string
¡ contains an enumeration of characters that are all considered to be
¡ delimiters. The default is a single blank.
¡ Operation: Leading and trailing delimiters are removed from the record. Internal
¡ sequences of delimiter characters are replaced by the specified number of the replacement
¡ string.
¡ Premature Termination: space terminates when it discovers that its output stream is not
¡ connected.
¡ Examples:
¡ pipe literal a b b |space|insert /*/ after | console
¡ a b b*
¡ Ready;
¡ Notes:
¡ 1. When specifying a numeric replacement string of one or two characters (which parses
¡ as an xorc), you must specify a count even when you wish the default of 1.
¡ 2. When just one string argument is specified, it is taken to be the replacement string
¡ unless ANYOF is specified or the count is zero.
¡ 3. Unlike the REXX built-in function, space supports replacement strings longer than one
¡ character, as well as delimiter characters other than the blank.
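The operation of space can be sketched in Python. This is an analogue, not CMS/TSO Pipelines code; the parameter names mirror the operands described above.

```python
def space(record, count=1, string=" ", delims=" "):
    """Analogue of the space stage: strip leading and trailing delimiter
    characters and replace each internal run of delimiters with 'count'
    copies of the replacement string."""
    chars = set(delims)
    words, i, n = [], 0, len(record)
    while i < n and record[i] in chars:     # skip leading delimiters
        i += 1
    while i < n:
        j = i
        while j < n and record[j] not in chars:
            j += 1
        words.append(record[i:j])           # one delimiter-free word
        i = j
        while i < n and record[i] in chars: # skip the run of delimiters
            i += 1
    return (string * count).join(words)
```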
spec—Rearrange Contents of Records
spec can convert the contents of fields in several ways. It can generate an output record
containing data from several input records, and it can generate several output records from
a single input record.
This article contains only an overview of spec. For details, refer to Chapter 16, “spec
Tutorial” on page 163 and Chapter 24, “spec Reference” on page 692.
           ┌─STOP──ALLEOF─────┐    ┌──────────────────────┐
──SPECs──┼──────────────────┼────┴──┬─┤ field ├─────────┬─┴──
           └─STOP──┬─ANYEOF─┬─┘      ├─READ────────────┤
                   └─number─┘        ├─READSTOP────────┤
                                     ├─WRITE───────────┤
                                     ├─SELECT──stream──┤
                                     └─PAD──xorc───────┘
field:
├──┬─inputRange─────────────────────────────────┬──┬────────┬──
   ├─NUMBER──┬───────────────┬──┬─────────────┬─┤  └─STRIP─┘
   │         └─FROM──snumber─┘  └─BY──snumber─┘ │
   ├─TODclock───────────────────────────────────┤
   └─delimitedString────────────────────────────┘

                                   (1)
──┬────────────────┬──┬─┬─Next─────┬──┬───────────┬─┬──┬────────┬──┤
  └─┤ conversion ├─┘  │ └─NEXTWord─┘  └─.──number─┘ │  ├─Left───┤
                      ├─number──────────────────────┤  ├─Centre─┤
                      └─range───────────────────────┘  └─Right──┘
conversion:
                  (2)
├──┬─f2t──────────┬──┤
   ├─P2t(snumber)─┤
   └─f2P(snumber)─┘
Notes:
1 There is no blank between the keyword, the period, and the number.
2 The conversion routines are B2C D2C F2C I2C P2C V2C X2C C2B C2D C2F C2I C2P C2V
Type: Filter.
Syntax Description: Specify STOP to terminate spec when it discovers that a specified
number of input streams are at end-of-file. The default is to process all input streams to
end-of-file.
Specify a list of one or more items. Each item defines a field in the output record or
contains a keyword to control processing.
These keywords specify functions of spec that are not related to formatting output fields:
READ Read another record from the currently active stream (after discarding the
current record). A stream at end-of-file is considered to contain a null record.
READSTOP Perform like READ, but terminate the pass over the specification list if the
stream is at end-of-file.
WRITE Write an output record containing the data from the specification items proc-
essed so far in the list.
SELECT Switch to another input stream and process the record there. The word
following SELECT should be a stream number or a stream identifier of the input
stream to which the following specification items refer.
PAD Use the specified character to pad short fields when storing subsequent items
in the output record. The word following PAD specifies the character to be
used as the pad character; it can be specified as a single character, a two-
character hexadecimal code, or as one of the keywords BLANK or SPACE.
Operation: Output records are built from literal data and fields in input records, which
can come from multiple streams. The output record is built in the order the items are
specified. The length of a literal field is given by the length of the literal itself; TODCLOCK
is eight bytes long; NUMBER is 10 bytes long. A copied input field extends to the end of
the input record or the ending column of the input field, whichever occurs first.
The output record is at least long enough to contain all literal fields and all output fields
defined with a range. It is longer if there is an input data field beyond the last literal field,
and the input record does contain at least part of the input field.
Padding: Pad characters (blank by default) are filled in positions not loaded with charac-
ters from a literal or an input field. The keyword PAD sets the pad character to be used
when processing subsequent specification items.
The beginning and end of an input range are, in general, defined by a pair of numbers
separated by a semicolon (for example, 5;8). An unsigned number is relative to the
beginning of the record; a negative number is relative to the end of the record. None, any
one, or both of the numbers may be negative. When the two numbers have the same sign,
the first number must be less than or equal to the second number.
When both numbers in the range are unsigned, a hyphen may be used as the separator
rather than a semicolon. A range relative to the beginning of a record may also be
specified as two numbers separated by a period, denoting the beginning of the range and
its length, respectively.
An input range with no further qualification denotes a range of columns. WORDS may be
prefixed to indicate a word range; FIELDS may be prefixed to indicate a field range.
The record number: You can put the number of each record into the record, for instance,
to generate a sequence field. The keyword NUMBER (with synonym RECNO) describes a
10-byte input field generated internally; it contains the number of the current record, right
aligned with leading blanks (no leading zeros). Records are numbered from 1 (the
numeral, one) with the increment 1 (the numeral, one) when no further keywords are
specified. The word after the keyword FROM specifies the number for the first record; it
can be negative. The word after the keyword BY specifies the increment; it too can be
negative. The keywords apply to a particular instance of NUMBER. When the record
number is negative, a leading minus sign is inserted in front of the most significant digit in
the record number, unless the number has ten significant digits (in which case there is no
room for the sign).
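The NUMBER field can be sketched in Python. This is an analogue, not CMS/TSO Pipelines code; the function name and parameters mirror the FROM and BY keywords described above.

```python
def recno(n, first=1, by=1):
    """Analogue of spec's NUMBER (RECNO) field: the number of record n
    (records counted from 1) as a 10-character field, right aligned
    with leading blanks.  A leading minus sign is inserted in front of
    the most significant digit, unless the number has ten significant
    digits, in which case there is no room for the sign."""
    value = first + (n - 1) * by
    digits = str(abs(value))
    if value < 0 and len(digits) < 10:
        digits = "-" + digits
    return digits.rjust(10)
```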
The Time-of-day Clock: The contents of the time-of-day clock are stored when a set of
input records is ready to be processed. The field is a 64-bit binary counter. It is constant
while output record(s) are built. Refer to Enterprise Systems Architecture/390 Principles of
Operation, SA22-7201, for a description of the time-of-day clock.
Literal field: This is a constant that appears in all output records. A literal character
string is written as:
A delimited string (delimitedString) consisting of a character string between two
occurrences of a delimiter character, which cannot occur in the string. The delimiter
character cannot be blank. It is suggested that a special character be used for the
delimiter, but this is not enforced. However, when an alphanumeric character is used
as the delimiter, there is a possibility that today’s delimited string might become
tomorrow’s keyword.
A hexadecimal literal consisting of a leading “x” or “h” (in lower case or upper case)
followed by an even number of hex characters.
A binary literal consisting of a leading “b” (in lower case or upper case) followed by
zero and one characters in multiples of eight.
Stripping: The keyword STRIP specifies that the field (input field, sequence number, time
of day, or literal) is to be stripped of leading and trailing blanks before conversion (if any)
and before the default output field size is determined.
Conversion: A field (input or literal) is put in the output record as it is when no conver-
sion is requested for the item. Put the name of a conversion routine between the input and
output specifications when you wish to change the format of a field. The functions also
defined for REXX work in a similar way. They are C2D, D2C, C2X, and X2C. The functions
not available in REXX convert bit strings, floating point, dates, packed decimal, and varying
length strings. Note that the REXX name for a conversion function can be misleading: For
instance, C2D is described in the REXX manual as converting from character to decimal;
what it does, however, is convert from the internal IBM System/390* two’s complement
notation of a binary number to the external representation in base-10, zoned decimal.
Some conversions are supported directly between printable formats, for example X2B. This
table summarises the supported combinations. A plus indicates that the combination is
supported. A blank indicates that the combination is not supported.
            To:  CDXB FVPI
    From C:       +++ ++++
    From D:      + ++
    From X:      ++ + ++++
    From B:      +++  ++++
    From F:      + ++
    From V:      + ++
    From P:      + ++
    From I:      + ++
Composite conversion (x2y) is performed strictly via the C format; that is, x2C followed by
C2y.
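Composite conversion via the C format can be sketched in Python with two of the simpler routines. This is an analogue, not CMS/TSO Pipelines code; C2D renders a two's complement binary number as a plain signed decimal string here, rather than zoned decimal.

```python
def x2c(s: str) -> bytes:
    """X2C: hexadecimal characters to the bytes they denote."""
    return bytes.fromhex(s)

def c2d(b: bytes) -> str:
    """C2D: two's complement binary number to its base-10 external
    representation (plain decimal in this sketch)."""
    return str(int.from_bytes(b, "big", signed=True))

def x2d(s: str) -> str:
    """Composite X2D, performed strictly via the C format:
    X2C followed by C2D."""
    return c2d(x2c(s))
```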
Output field position: The output specification can consist of the keywords NEXT or
NEXTWORD, a column number, or a column range. NEXT indicates that the item is put
immediately after the rightmost item that has been put in the output buffer so far.
NEXTWORD appends a blank to a buffer that is not empty before appending the item. (A
field placed with NEXT or NEXTWORD can be overlaid by a subsequent specification indi-
cating a specific output column.) Append a period and a number to specify an explicit
field length with the keywords NEXT and NEXTWORD.
Output field length: Fields for which an explicit length is specified are always present in
the output record. Input fields that are not present in the input record or have become null
after stripping caused by the STRIP keyword are not stored in the output record. A null
literal field is stored in the output record. The default length of the output field is the
length of the input field after conversion (but before placement).
Placement of data in the output field: When an output range is specified without a place-
ment option, the input field after conversion is aligned on the left (possibly with leading
blank characters), truncated or padded on the right with pad characters.
A placement keyword (LEFT, CENTRE, CENTER, or RIGHT) is optional in the output field
definition. If a placement option is specified, the input field after conversion (and thus
after the length of the output field is determined) is stripped of leading and trailing blank
characters unless the conversion is D2C, F2C, I2C, P2C, or V2C.
LEFT The field is aligned on the left of the output field truncated or padded on the
right with pad characters.
CENTRE The field is loaded centred in the output field truncated or padded on both
sides with pad characters. If the field is not padded equally on both sides, the
right side gets one more pad character than the left side. If the field is not
truncated equally on both sides, the left side loses one more character than the
right side.
RIGHT The field is aligned to the right in the output field, truncated on the left or
padded on the left with pad characters.
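The placement rules, including the asymmetry of CENTRE, can be sketched in Python. This is an analogue, not CMS/TSO Pipelines code; stripping before placement is omitted for brevity.

```python
def place(field, width, how="LEFT", pad=" "):
    """Sketch of output-field placement.  CENTRE gives the right side
    one more pad character when padding is uneven and takes one more
    character from the left when truncating unevenly; RIGHT truncates
    or pads on the left."""
    if how == "LEFT":
        return field[:width].ljust(width, pad)
    if how == "RIGHT":
        return field[-width:].rjust(width, pad)
    # CENTRE
    if len(field) >= width:
        drop = len(field) - width
        left = (drop + 1) // 2          # the left side loses the extra character
        return field[left:left + width]
    fill = width - len(field)
    left = fill // 2                    # the right side gets the extra pad
    return pad * left + field + pad * (fill - left)
```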
Multiple records: You can write input fields from consecutive input records to an output
record. Use the keywords READ and READSTOP to consume a record and peek (read
without consuming) the next record on the stream specified by the most recent SELECT (or
the primary stream if there is no prior SELECT). When READ is used, a null record is
assumed if the stream is at end-of-file. When READSTOP is used, end-of-file causes spec to
write the output record built so far and terminate processing of that set of input records.
READ is convenient, for example, to process the primary stream from lookup when it has
both master and detail records. (Do not use the READ keyword if you wish to write one
output record for each input record; a read on all used streams is implied at the end of the
specification list.)
You can write multiple output records based on the contents of an input record (or a set of
input records). The keyword WRITE writes the output record built so far to the primary
output stream, leaving the current output record empty.
Streams Used: The keyword SELECT specifies that subsequent input fields refer to the
specified input stream, which is specified by number or stream identifier. SELECT 0 is
implied at the beginning of the specification list unless an explicit selection occurs before
the first input specification referring to a field in an input record. When more than one
input stream is selected, a record is peeked (read in locate mode) from all specified input
streams before the list is processed. An input stream at end-of-file is considered to hold a
null record. Unless READ, READSTOP, or WRITE are used to read or write during the cycle,
a set of input records is consumed (released with a move mode read) after the output
record is written at the end of the cycle, before further input is obtained.
Input streams defined, but not referenced, are not read when SELECT is used. Only the
primary input stream is used when the specification list has no SELECT keyword (all other
streams are ignored). When STOP ALLEOF is specified (this is the default), spec processes
input records until all input streams being used are at end-of-file. When STOP ANYEOF is
specified, spec terminates when it encounters the first stream at end-of-file. When a
number is specified, spec terminates when that number of streams are at end-of-file, or
when all used streams are at end-of-file. The test for termination is performed only when
spec is reading input records at the beginning of the specification list. End-of-file on a
READ item does not terminate spec immediately; end-of-file on a READSTOP causes spec to
write the output record and terminate processing of the current set of input records.
Record Delay: spec synchronises the referenced input streams. It does not delay the
record, unless READ or READSTOP is used.
Commit Level: spec starts on commit level -2. It verifies that the primary output stream
is the only connected output stream, processes the arguments, and then commits to level 0.
Premature Termination: spec terminates when it discovers that any of its output streams
is not connected.
To generate SEQ8 sequence fields (the record number in columns 73-76 and zeros in
columns 77-80):
...| spec 1.72 1.72 pad 0 number 73.4 right ?0000? 77 |...
Columns 1 to 72 are copied across in the first specification item; the output field size
ensures that short records are padded with blanks up to 72 characters. The pad character is
then set to 0 so that the leading blanks in the record number are stored as leading zeros;
the four rightmost characters of the record number are put in columns 73 to 76 and four
zeros appended to make the output record 80 bytes.
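The effect of this specification can be sketched in Python. This is an analogue, not CMS/TSO Pipelines code; the zero fill models the PAD 0 setting, and the modulus keeps the four rightmost digits of the record number.

```python
def seq8(record, number):
    """Analogue of the SEQ8 spec example: data in columns 1-72 padded
    with blanks, a four-digit zero-filled sequence number in 73-76,
    and '0000' in 77-80, for an 80-byte output record."""
    # %-72.72s pads short records with blanks and truncates long ones.
    return "%-72.72s%04d0000" % (record, number % 10000)
```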
To prefix each record (assuming it is shorter than 64K) with a fullword that contains the
length of the data part of the record:
...| spec x0000 1 1-* v2c next |...
A literal with two bytes of binary zeros is put in front of the halfword length generated by
the conversion.
To prefix a record with a length field that is two plus the length of the record, as done in
structured fields:
...| spec 1-* 1 /xx/ next | spec 1-* v2c 1 | spec 1;-3 1 |...
This example uses three spec stages. The first one appends two characters to the record (it
does not matter what these two characters are); the second generates a halfword length
field counting these two characters; and the last one removes the two characters, leaving
the original record with the required length field in front.
To obtain the contents of the first structured field in the record (the converse of the
previous example):
... | spec 1-* 1 /xx/ next | spec 1-* c2v | spec 1;-3 | ...
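The length field built and removed by these pipelines can be sketched in Python. This is an analogue, not CMS/TSO Pipelines code; struct.pack(">H") models the halfword in network byte order.

```python
import struct

def prefix_length(record: bytes) -> bytes:
    """Prefix a record with a halfword that is two plus the length of
    the record, so the length field counts itself, as in structured
    fields."""
    return struct.pack(">H", len(record) + 2) + record

def unwrap(field: bytes) -> bytes:
    """The converse: return the contents of the first structured field
    in the record."""
    (length,) = struct.unpack(">H", field[:2])
    return field[2:length]
```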
Specify a scaling of zero to append a decimal point to the number being unpacked:
pipe strliteral x123c | spec 1-* c2p | console
+123
Ready;
pipe strliteral x123c | spec 1-* c2p(0) | console
+123.
Ready;
pipe strliteral x123c | spec 1-* c2p(1) | console
+12.3
Ready;
Notes:
1. Floating point conversion (F2C and C2F) requires extended precision floating point
hardware.
2. Conversion to floating (F2C) is in most cases accurate within rounding of the least
significant bit.
3. C2F conversion can show the effect of rounding errors in the least significant digit
when the exponent is close to the limits of the representation.
4. When the specification is a single field with no output column and without PAD,
SELECT, READ, or WRITE, the output placement is assumed to be column 1.
5. The keywords FIELDSEPARATOR, PAD, and SELECT apply to the remainder of the
item list; in contrast, FIELDS and WORDS apply to only one input field.
6. An asterisk is rejected for the ending column in an output specification.
7. The time-of-day clock is stored by the machine instruction STCK. In general, this is
the time at the primary meridian. A local time zone offset is not applied.
8. RECNO is a synonym for NUMBER. CENTER is a synonym for CENTRE. The keyword
NWORD is a synonym for NEXTWORD; it can be abbreviated to two characters. The
keyword FS is a synonym for FIELDSEPARATOR. The keyword WS is a synonym for
WORDSEPARATOR.
spill—Spill Long Lines at Word Boundaries
spill splits records that are longer than a specified length, preferably at a word boundary.
                  ┌─/ /───────────────────────────────────┐
──SPILL──number──┼───────────────────────────────────────┼──
                  │  ┌─STRing───────────┐                 │
                  └──┼──────────────────┼──delimitedString─┘
                     └─┬───────┬──ANYof─┘
                       │   (1) │
                       └─NOT───┘

   ┌─────────────────────────────────┐
──┬───────────────────────────────┬─┴──
   ├─ANYcase─────────────────────┤
   ├─KEEP────────────────────────┤
   └─OFFSET──┬─number──────────┬─┘
             └─delimitedString─┘
Note:
1 Blanks are optional between NOT and ANYOF.
Type: Filter.
Syntax Description: A positive number is required as the first operand. The second posi-
tional operand specifies the word separator; it is optional. Remaining operands are
optional and may be specified in any order.
ANYOF Any one of the characters enumerated in the delimited string is a word
separator.
NOT ANYOF Any one of the characters not enumerated in the delimited string is a
word separator. (This is the complement set.)
ANYCASE Ignore case when comparing for the word separator. The default is to
respect case.
KEEP Retain the word separator in the output record. The default is to strip
word separators at the split point.
OFFSET Specify the indent on the second and subsequent output lines for an
input record. A number specifies the number of blanks to insert; a
delimited string specifies the actual string to insert. Output records are
not offset when the number is zero or the string is null; this is the
default.
Operation: Input records that are shorter than the specified length are passed unchanged
to the output.
A leading string is split off long input records until the remainder is not longer than the
| specified length. The remainder is then passed unmodified to the output with offset
| applied.
For the first output record for a long input record, the split position is at the specified
length or before; for subsequent records, the split point is at the specified length less the
length of the offset. The split point is established at the rightmost occurrence of the word
separator within the specified range, or abutting the range on the right.
If no word separator can be found within the required range, further processing depends on
whether the secondary output stream is defined or not. When the secondary output stream
is not defined, the split point is then established within a word. When the secondary
output stream is defined, no further attempts are made to split the record. Instead, the
remainder of the input record is written to the secondary output stream. It is prefixed with
the offset if one or more records were written to the primary output stream before the long
word was encountered.
Unless KEEP is specified, word separators are discarded when records are split; this can
lead to complete record segments being discarded on the output.
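The wrapping behaviour without a secondary output stream can be sketched in Python. This is an analogue, not CMS/TSO Pipelines code; it assumes blank is the only word separator and reproduces the example shown below.

```python
def spill(record, width, offset=""):
    """Sketch of spill with no secondary stream: split at the rightmost
    blank within the allowed width, or abutting it on the right; split
    inside a word when no blank is found.  Second and subsequent lines
    carry the offset, and the blank at the split point is discarded."""
    out, rest, limit = [], record, width
    while len(rest) > limit:
        take = rest[:limit]
        if rest[limit] == " ":           # separator abutting the range
            cut, skip = limit, 1
        elif " " in take:
            cut = take.rindex(" ")       # rightmost separator in range
            skip = 1                     # the separator is discarded
        else:
            cut, skip = limit, 0         # no separator: split the word
        out.append((offset if out else "") + rest[:cut])
        rest = rest[cut + skip:]
        limit = width - len(offset)      # later splits allow for offset
    out.append((offset if out else "") + rest)
    return out
```

With width 4 and no offset, "abcdefghi klmnopq" yields the five records shown in the example below.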
Streams Used: Secondary streams may be defined. Records are read from the primary
input stream; no other input stream may be connected.
Record Delay: spill does not delay the last record written for an input record.
Commit Level: spill starts on commit level -2. It verifies that the secondary input stream
is not connected and then commits to level 0.
Premature Termination: spill terminates when it discovers that any of its output streams
is not connected.
When there is no secondary output stream, very long words are split, but not discarded:
pipe literal abcdefghi klmnopq| spill 4 | console
abcd
efgh
i
klmn
opq
Ready;
When the secondary output stream is defined, a very long word results in the rest of the
record being written to that stream:
pipe (end ?) literal abc defghi klmn opq| s: spill 4 | hole ? s: | console
defghi klmn opq
Ready;
In the example above, the primary output stream from spill is discarded. When the second
word is processed, it is determined that the word cannot be spilt within the columns
allowed and the remainder of the record is written to the secondary output stream.
Notes:
1. spill is designed to perform a function similar to XEDIT’s; though it has several
enhancements, it is not suitable as a word processor.
! 2. CASEANY, CASEIGNORE, CASELESS, and IGNORECASE are all synonyms for ANYCASE.
split—Split Records Relative to a Target
split divides an input record into multiple output records at occurrences of a target.
──SPLIT──┬─────────┬──┬─────────────────┬──
         └─ANYCase─┘  └─MINimum──number─┘

   ┌─AT──────────────────────┐
──┼─────────────────────────┼──┬─────┬──
   └─┬─────────┬──┬─BEFORE─┬─┘  └─NOT─┘
     └─snumber─┘  └─AFTER──┘

   ┌─BLANK──────────────────┐
──┼────────────────────────┼──
   └─┤ target ├──┬────────┬─┘
                 └─number─┘

target:
├──┬─xrange──────────────────────┬──┤
   └─┬─STRing─┬──delimitedString─┘
     └─ANYof──┘
Type: Filter.
Syntax Description: A relative position and the keyword NOT are optional in front of the
target.
A relative position consists of the keyword AT or one of the keywords BEFORE or AFTER; a
signed number is optional before the latter two keywords.
The target can be a range of characters or a delimited string. A number is optional after
the target. A hex range matches any character within the range. The keyword STRING
followed by a delimited string matches the string. The keyword ANYOF followed by a
delimited string matches any one character in the string. (The keyword is optional before
a one character string, because the effect is the same in either case.)
The matching character or string is discarded when records are split at a target. AT is the
default qualifier. No parameters means split at blank characters.
Operation: split scans the record matching the pattern. When MINIMUM is specified, split
skips the number of characters specified before it starts looking for the pattern.
Use a number after the pattern to make split stop after the pattern has been matched that
number of times and write any remaining input data to the output stream; the default is to
continue to the end of the record. split writes at most n+1 records when a number is
specified.
A split position is established when the pattern has been matched. With no modifiers, it is
before the first character matching the pattern; with the options BEFORE and AFTER, it can
be offset any number of characters to the left or right by coding snumber. n AFTER a
target is equivalent to m BEFORE a target, where m is -n-length(target). When a split posi-
tion is established within or after the record, a record is written with data from the
previous split position (initially before the first character in the record) to the newly estab-
lished split position or the end of the record, whichever occurs first. When splitting AT,
the split position is updated with the length of the target after a record is written, so that
the target is discarded; thus, records that consist entirely of the target are discarded.
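The default AT behaviour can be sketched in Python. This is an analogue, not CMS/TSO Pipelines code; the limit parameter models the optional number after the target.

```python
def split_at(record, target=" ", limit=None):
    """Sketch of split's default AT behaviour: split at each occurrence
    of the target and discard the target.  split does not generate null
    records, so empty segments are dropped; a record consisting only of
    targets yields nothing.  A null input record is copied through.
    'limit' caps the number of splits, so at most limit+1 records
    result."""
    if record == "":
        return [""]                      # null input is copied through
    parts = (record.split(target) if limit is None
             else record.split(target, limit))
    return [p for p in parts if p != ""]
```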
Record Delay: split does not delay the last record written for an input record.
Premature Termination: split terminates when it discovers that its output stream is not
connected.
The set buffer address order (X'11') marks the beginning of a field in an inbound 3270
data stream from a read modified command. If the 3270 data stream uses twelve-bit
addressing, you can split each inbound transmission into individual fields by | split
before 11 |. This is too simplistic if the 3270 data stream uses fourteen or sixteen bit
addressing: the two-byte buffer address that follows the order code could itself contain
X'11', which would trigger a split too early. To be sure:
... | split minimum 3 before 11 | ...
Notes:
1. split copies null input records to the output; it does not generate null records.
! Caveat emptor! What this means is that a record that contains only the target string,
! no matter how many instances, is dropped. The pipeline below causes the variable
! ROB to be dropped.
! rob=' '
! 'PIPE var rob | split | var rob'
! One way to retain the variable is to strip before splitting; this creates a null record,
! which is passed by split.
! rob=' '
! 'PIPE var rob | strip | split | var rob'
2. Use deblock FIXED to split an input record into records of the specified length, where
only the last part can be shorter than the record length.
3. The minimum abbreviation of ANYCASE is four characters because ANYOF takes
precedence (ANYOF can be abbreviated to three characters).
! 4. CASEANY, CASEIGNORE, CASELESS, and IGNORECASE are all synonyms for ANYCASE.
sql—Interface to SQL
sql queries DB2 tables, inserts rows into DB2 tables, and issues SQL statements in general.
         ┌──────────────────────┐
         │ ┌─COMMIT───────────┐ │
──SQL──┴─┼─RELEASE──────────┼─┴──┬─┤ select-statement ├───┬──
           ├─NOCOMMIT─────────┤    ├─┤ insert-statement ├───┤
           ├─REPEATable───────┤    ├─┤ connect-statement ├──┤
           ├─INDicators───────┤    └─EXECUTE──┬────────┬────┘
           ├─NOINDicators─────┤               └─string─┘
           │              (1) │
           ├─PGMOWNER──word───┤
           │              (2) │
           ├─SUBSYSid──word───┤
           │              (2) │
           └─PLAN──word───────┘
select-statement:
├──┬─────────┬──┬──────────┬──SELECT──string──┤
   └─EXECUTE─┘  └─DESCRIBE─┘
insert-statement:
├──┬─────────┬──INSERT──INTO──word──┬─────────────────┬──┤
   └─EXECUTE─┘                      │    ┌─,────┐     │
                                    └─(──┴─word─┴──)──┘
connect-statement:
├──CONNECT──┬─────────────────────────────┬──┬────────────┬──┤
            │                         (1) │  │        (3) │
            └─word──IDENTIFIED BY──word───┘  └─TO──word───┘
Notes:
1 Available on CMS only.
2 Available on z/OS only.
General options:
COMMIT Commit the unit of work without releasing the connection to the DB2
service machine at completion, or roll back without releasing in the
event of an error. This is the default.
RELEASE Commit work with release at the completion of the stage, or roll back
work with release when a negative return code is received from DB2.
Use this option when you do not expect to use sql again in the near
future; being connected ties up resources in the DB2 virtual machine, but
on the other hand there is a certain overhead in reestablishing the
connection.
NOCOMMIT Do not commit the unit of work when processing is complete without
errors. Roll back without release in the event of an error. Use this
option when processing with multiple cursors or if you wish to issue SQL
statements from multiple invocations of sql as a single unit of work.
The connection to the DB2 server is retained.
PLAN The following word specifies the plan to use. The default is FPLSQI;
your installation may have specified a different default. (z/OS only.)
Operation: Tables are loaded using sql INSERT, queried with sql SELECT, and maintained
with sql EXECUTE.
Select: Perform a query. The argument specifies a complete SELECT statement. One
record is written for each row of the result of the query. By default, each column is
preceded by a 2-byte indicator word specifying whether the column has a null value or
contains data. Use NOINDICATORS to suppress this field in the output record.
In an indicator word, binary zero indicates that the column has a value; a negative
indicator word indicates that the column is null. A positive value in the indicator word means
that the column is truncated; this should not occur, as each column has as many positions
reserved as sql DESCRIBE reports for the table. Blanks or zeros, as appropriate to the field
format, are stored in the unfilled positions of columns that contain a null value and
columns that have variable length. When the last field has variable length, the record is
truncated to the end of the data present.
/* Query a table */
'pipe sql select * from test where name=''Oscar'' |...'
When sql SELECT or sql DESCRIBE SELECT is issued with EXECUTE, output from the first
query is written to the primary output stream, the result of the second query goes to the
secondary output stream, and so on, until there are no further output streams; the result of
the remaining queries is written to the highest numbered stream defined. The streams must
be connected.
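As an untested sketch of this fan-out (table names are illustrative), two queries can be
routed to two files, the first query's result on the primary output stream and the
second query's result on the secondary:
/* Route two query results to separate files; names are illustrative */
'PIPE (end ? name TWOQUERY)',
'|literal select * from dept',        /* second query, read from input */
'|s: sql execute select * from emp',  /* first query issued directly */
'|> emp data a',                      /* result of the first query */
'?s:',
'|> dept data a'                      /* result of the second query */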
Describe select: The argument is a query to be described. One record is written for each
field of the query. Refer to the description of the SQLDA in DB2 Server for VSE & VM
Application Programming. Each record has five blank-delimited fields of fixed length:
3 The decimal number defining the field type.
16 The field type decoded, or “Unknown” if the field type is not recognised by
CMS/TSO Pipelines. The first four positions have the word LONG if the field is a
long character or graphics field.
5 The field length as reported by DB2. This is a single number except for decimal
fields where the precision and scale are reported with a comma between them.
5 The maximum length of the field in characters, including a halfword length field if
required, computed from the length and field type. This is the number of bytes sql
SELECT reserves for the field in the output record from a query, and the number of
bytes required in the input record to sql INSERT. The length does not include the
indicator word.
30 The field name. The record is truncated at the end of the name; the name field is
from 1 to 30 bytes.
The remainder of this section describes how sql processes an insert statement that has no
values() clause and no subquery. This is supported only on CMS because DB2 does not
provide the underlying interface to insert on a cursor. When no values() clause is
specified, sql supplies one that references fields in input records. A row is inserted into
the table for each input record. You can specify a list of fields to insert in parentheses;
this list is used by sql to build a data area describing the input record.
When there is no list of columns after the name of the table, the input record contains data
for all columns in the table in the order returned by sql DESCRIBE SELECT * FROM. When a
list of columns is specified, the input record has the columns in the order specified and in
the format returned by the describe function. You cannot insert a literal value into a
column; use spec to put a literal into all input records. The format of an input record is
the same as the output record from sql SELECT. In particular, a variable length column
must be padded to its maximum length.
Use the option INDICATORS when you wish to load null values into selected columns. All
columns must have a 2-byte prefix, which is binary zero when the field has a value; it is
negative to indicate a null value. The default is not to use indicator words.
Input records are read from the primary input stream when EXECUTE is omitted. When sql
INSERT statements with no values() clause are issued with the EXECUTE option, the first
statement reads records from the secondary input stream, the second statement reads from
stream number 2, and so on. It is an error if a stream is not defined.
/* Insert data into a table */
'pipe < to insert | sql insert into test' /* CMS ONLY */
Connect: Connect to a database. On VM, you can specify the user ID and password under
which you wish to communicate with DB2. The first word represents the user ID; the
second one represents the password. The keyword TO specifies that you wish to connect to
a particular database. The database name can be up to 18 characters long.
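For instance (the user ID, password, and database name are illustrative):
/* Connect to another database under a different authorisation */
'PIPE sql connect sqluser identified by secret to testdb'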
Execute: Perform SQL statements. A statement after EXECUTE is issued first; the primary
input stream is then read and each record is performed. All SQL statements are performed
as a single unit of work. Most SQL statements are supported; refer to the description of the
PREPARE statement in DB2 Server for VSE & VM Application Programming, for a list of
unsupported statements. sql processes COMMIT, CONNECT, and ROLLBACK directly; thus,
they are also supported. Unsupported statements are rejected by DB2 with SQLCODE -515.
Processing stops as soon as an error is reported by DB2.
/* Drop a program */
'pipe literal drop program pipsqi | sql execute'
Using multiple concurrent sql stages: Up to ten sql stages can run concurrently in all
active pipelines. It is paramount that the option NOCOMMIT be used. DB2 considers all sql
stages to be part of one unit of work; an implied commit by a stage causes errors when
other stages resume using their cursors. Explicit commit or rollback is done with sql
COMMIT and sql ROLLBACK.
If RC¬=0
Then commit='rollback'
Else commit='commit'
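The variable set by this fragment can then drive the explicit end of the unit of work,
since sql processes COMMIT and ROLLBACK directly:
'PIPE sql' commit    /* issue sql COMMIT or sql ROLLBACK */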
Streams Used: SQL statements are read from the primary input stream when EXECUTE is
used. On CMS, rows to insert are read from the input streams. The result of queries is
written to the output streams.
Record Delay: sql produces all output records from a query before it consumes the
corresponding input record.
Commit Level: sql starts on commit level -4. It connects to the database engine, allocates
a cursor, and then commits to level 0.
Premature Termination: sql terminates when it discovers that any of its output streams is
not connected. It also terminates if a negative return code (indicating an error) is received
from DB2; the unit of work is rolled back when a negative return code is received, unless
DB2 indicates it has already done so.
Notes:
1. spec is often used to insert indicator words for columns that are always present.
2. On CMS, use SQLINIT to specify the database to access, before using sql to access it.
3. The result of a query can be a four-byte binary integer; use spec to convert it to
decimal, if desired.
/* Determine query size */
'PIPE',
' sql select count(*) from test where name=''Oscar'' ',
'| spec 1-* c2d 1',
'| var rows'
if RC=0 then if rows>0 then call process
4. The access module must be generated before you can access DB2 tables with CMS
Pipelines. DMSPQI ASMSQL is the input to the preparation process. Your database
administrator must give connect privileges to the user DMSPIPE.
Return Codes: Error codes from DB2 are reflected in the return code; such return codes
are negative. Positive return codes represent errors detected by CMS/TSO Pipelines.
When DB2 returns a positive number that is not 100 (which means “no more data”),
CMS/TSO Pipelines generates an error message and terminates.
Configuration Variables: On CMS, two configuration variables supply the default program
owner and the default program name to be used by sql.
SQLPGMOWNER specifies the program owner; the default is 5785RAC in the PIP style; it is
DMSPIPE in other styles.
SQLPGMNAME specifies the program name; the default is PIPSQI in the PIP style; it is
DMSPQI in other styles.
──SQLCODES──
Output Record Format: 44 bytes are written to the primary output stream: 11 fullwords,
each holding a nonzero return code (other than 100) expressed as a binary number in
two’s complement notation. The return code received most recently is in the last four
bytes; the oldest is in the first four bytes. The leftmost slots contain zero when
fewer than 11 nonzero return codes have been received from SQL.
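Given this layout, a sketch that extracts the most recent return code (the last
fullword) and displays it in decimal:
/* Show the most recent nonzero SQL code, if any */
'PIPE sqlcodes | spec 41.4 c2d 1 | console'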
Syntax Description: If the first non-blank character is a left parenthesis, the string up to
the first right parenthesis is processed as options; refer to the description of the sql built-in
program. The remainder of the argument is processed as an SQL Select statement.
Operation: sqlselect obtains a description of the query from SQL, computes a pipeline that
will convert the query result to printable text, writes a heading line showing the names of
the columns in the output file, and then performs the query with formatting.
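For instance (the table name is illustrative):
/* Display a query result in printable form with a heading */
'PIPE sqlselect select * from test | console'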
Output Record Format: It issues a subroutine pipeline to describe the query. This will
cause a commit to level 0 if the query can be described.
Premature Termination: sqlselect terminates when it discovers that its output stream is
not connected.
Notes:
1. Timestamps cannot be formatted, because their encoding is not published. Such fields
will display as apparently random alphanumeric characters.
2. sqlselect uses sql under the covers; the SQL configuration variables apply to sqlselect
as well.
         ┌─FIFO─┐
──STACK──┼──────┼──
         └─LIFO─┘
Operation: When stack is first in a pipeline, it issues the command SENTRIES on CMS to
obtain the number of lines on the console stack; it then reads as many lines from the stack
as indicated by the return code and writes these lines to the output stream. The intent is to
be able to drain the stack into the pipeline, including null lines that would make console
stop. A terminal read may result if another stage (possibly another invocation of stack)
reads from the stack concurrently with stack.
When stack is not first in a pipeline, records on the input stream are stacked and then
copied to the output stream. By default the lines are queued FIFO in the CMS console
stack. Beware of loops if lines are being read by console at the beginning of the pipeline;
such loops are best prevented with a buffer stage.
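For instance, to push a line onto the stack ahead of any lines already there (a
sketch; the command stacked is illustrative):
/* Push a command to be read before existing stacked lines */
'PIPE literal query disk a | stack lifo'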
Examples: The contents of the stack may be saved in REXX variables while running a
REXX program and a new stack created at the end of the program (see the following
example), but the effect of any MAKEBUF is lost.
/* Save the stack */
'PIPE',
'stack|',
'stem save_stack.'
/* Process */
...
/* Recreate the stack at the end of the program */
'PIPE',
'stem save_stack.|',
'stack'
Notes:
1. Due to the limited width of the CMS stack, stacked data are truncated after 255
characters.
Before invoking starmon, you should attach the monitor segment to the virtual machine
using the CMS command “segment load” and also enable the monitor domains you wish
to process using the CP command “monitor”.
CMS
┌─SHAREd────┐
──STARMONitor──word──┼───────────┼──┬───────────────┬──
├─EXCLUSIVe─┤ └─SUPPRESS──hex─┘
├─SAMPLEs───┤
└─EVENTs────┘
Syntax Description:
word Specify the name of the monitor shared segment to be used. The
segment must have been attached to the virtual machine before starmon
is invoked.
The first keyword specifies the type of interface used to the system service.
SHARED SHARED is the default. It implies both EVENTS and SAMPLES.
EXCLUSIVE Connect to the monitor service with exclusive use of the monitor
segment.
SAMPLES Connect to the monitor service in shared mode. Only sample data are to
be stored in the monitor segment.
EVENTS Connect to the monitor service in shared mode. Only event data are to
be stored in the monitor segment.
SUPPRESS Specify a bit map of the monitor domains you wish to suppress. The
next word is converted from hexadecimal to binary. The sixteen
rightmost bits are used as a mask. Thus, SUPPRESS 8000 specifies that
starmon should not write records from the system domain. Performance
will improve if you enable monitor domains selectively using the
MONITOR command rather than using the SUPPRESS option to ignore data
from enabled domains.
Operation: starmon connects to the CP *MONITOR system service using the Inter User
Communication Vehicle (IUCV). The message limit is zero, which selects the default for
the service requested. The user parameters (IPUSER) are set according to the options
specified.
starmon sets up the immediate command HMONITOR. Issue this command to halt the
starmon stage.
Output Record Format: Monitor records as defined in the MONITOR LIST1403 sample file.
In contrast to the MONWRITE command, starmon writes each monitor record as a separate
logical record. The logical record begins with the MRHDR structure.
Commit Level: starmon starts on commit level -2000000000. It verifies that no other
stage has requested a connection to the *MONITOR service, sets up an immediate command
(HMONITOR), connects to the system service, and then commits to level 0.
Premature Termination: starmon terminates when it discovers that its output stream is
not connected. starmon terminates when CP signals that it has not processed the data in
time. This is accompanied by error messages indicating a nonzero IPAUDIT field. This
indicates that CP has changed the monitor data under starmon; the integrity of the output
from the pipeline is questionable.
starmon terminates when the immediate command HMONITOR is issued while it is waiting
for CP to provide a new batch of monitor records. starmon also stops if the immediate
command PIPMOD STOP is issued or if a record is passed to pipestop.
Examples:
/* Monitor it */
address command
/* Enable domains */
'CP MONITOR SAMPLE ENABLE I/O ALL'
'CP MONITOR SAMPLE ENABLE USER ALL'
'CP MONITOR SAMPLE ENABLE PROCESSOR'
'CP MONITOR SAMPLE ENABLE STORAGE'
'CP MONITOR SAMPLE RATE 1 SEC'
/* Attach the monitor segment */
'SEGMENT LOAD MONDCSS'
/* Start the monitor */
'CP MONITOR START'
'PIPE |',
'starmon mondcss sample |', /* Obtain data */
'monred |', /* Reduce it on the fly */
'>> monitor' date('s') 'a' /* Write the output file */
r=RC
CMS
──STARMSG──┬───────┬──┬────────┬──
           └─*word─┘  └─string─┘
To reach a service other than *MSG, specify as the first operand the name of the CP service
required, beginning with an asterisk (for example, *MSGALL). You can connect to any
system service that sends messages and does not require a reply. *MSG is the default.
Operation: starmsg connects to a CP system service using the Inter User Communication
Vehicle (IUCV). The message limit is zero, which selects the default for the service
requested. The user parameters (IPUSER) are set to binary zeros. Message data in the
¡ parameter list are not supported.
starmsg sets up an immediate command that may be used to halt the starmsg stage. The
name of the immediate command is the name of the service prefixed by an 'H' (for
instance, HMSG).
If it is present, the command string is sent to the CMS subcommand environment after
starmsg is connected to the system service.
When starmsg is not first in a pipeline, it issues each command to CMS. When the
command is complete, starmsg loops writing any responses trapped to the output and
suspending itself to let these responses be processed. When no more responses arrive,
starmsg tries to read the next input record. It disconnects from the system service and
terminates normally when it receives end-of-file on the input. When starmsg is first in a
pipeline, it does not terminate normally.
Output Record Format: Columns 1-8 contain the message class (IPTRGCLS) converted to
hexadecimal (eight bytes). The message class is the only field from the interrupt
parameters that is present in the output record. (Refer to CP Programming Services, SC24-5520,
for the authoritative meaning of the message class; some common message classes are
shown below.) The message follows (as received with IUCV RECEIVE). For message
classes 1, 2, 4, and 8, the first eight bytes of the message (columns 9-16 of the output
record) contain the user ID of the sending virtual machine, padded with blanks.
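Based on this record format, a sketch that selects messages (class 1) and shows only
the sending user ID and the message text:
/* Select class 1 (messages); show sender and text */
'PIPE (end ? name SHOWMSG)',
'|starmsg',
'|locate 1.8 /00000001/',    /* message class 1 only */
'|spec 9.8 1 17-* nextword', /* user ID, then the text */
'|console'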
Streams Used: Records are read from the primary input stream and written to the primary
output stream.
Record Delay: The delay is unspecified when starmsg is not a first stage.
Commit Level: starmsg starts on commit level -2000000000. It verifies that no other
stage has requested a connection to the CP service, sets up an immediate command
environment, establishes the connection to the CP service, and then commits to level 0.
Premature Termination: starmsg terminates when it discovers that its output stream is
not connected. A particular invocation of starmsg is terminated when the immediate
command exit is driven that has the name of the service with an 'H' substituted for the
leading asterisk; for instance, HMSGALL. starmsg also stops if the immediate command
PIPMOD STOP is issued or if a record is passed to pipestop.
Then, when you issue the CP command “message * stop”, starmsg writes an output line
that causes the tolabel stage to terminate. This severs starmsg’s output stream which in
turn causes starmsg to terminate. You must have issued the CP command “set msg iucv”
for the message to be trapped by starmsg; the tolabel stage is all in vain if the message
does not get trapped.
Still, the probability is low that RSCS will respond before starmsg discovers that it has no
more commands to issue. To terminate starmsg after five seconds:
/* Wait for a short while */
'PIPE (end ? name STARMSG)',
'|literal +5', /* Five seconds */
'|delay', /* Wait a bit */
'|g: gate', /* Shut the gate */
'?starmsg CP SMSG RSCS Q SYS A', /* Trap responses */
'|g:', /* Until the interval expires */
'|...
When delay writes the record five seconds after it reads it, the output stream from
starmsg is severed. This will cause starmsg to terminate.
Notes:
1. CMSIUCV is used to connect to the message service.
2. Use CP SET to select which responses you wish to process; for instance, “cp set
cpconio iucv”.
3. When CPCONIO is set to IUCV, all CP console output is presented through the *MSG
interface. Enable IUCV for other settings to make it easier to distinguish different
forms of CP console output. For instance, messages are presented as CP console output
(message class 3) when the MSG setting is ON, but as messages (message class 1)
when the MSG setting is IUCV.
4. Any CP system service can be selected; results are unpredictable when the service is
not a message service or a similar one-way service.
5. You cannot connect to *MSGALL when CMS FULLSCREEN is ON because CMS is already
connected to the service; CP rejects further attempts to connect.
6. Only one starmsg or starsys stage is allowed at a time for a particular system service.
7. Though it is possible to use starmsg to connect to the *ACCOUNT, *LOGREC, and
*SYMPTOM system services, the recommended device driver is starsys. starmsg may
require large amounts of buffer space to hold all pending messages, and once a
message is read by starmsg, it is purged by CP. starsys, on the other hand, accepts
only one message from CP at a time and does not signal to CP that the message has
been received until after the corresponding output record has been consumed.
8. Remember that REXX continuation functionally replaces a trailing comma with a blank.
Also recall that when two strings are separated by one or more blanks, REXX
concatenates them with a single blank. Use the concatenation operator (||) before the
comma at the end of the line if your portrait style has the stage separators at the left
side of the stage and the trailing blank is significant to your application.
When starsys is not first in a pipeline, it reads replies to CP from its input.
CMS
¡            ┌──────────────────────────┐
──STARSYS────┬─────────────────────────┬┴──*word──┬────────┬──
¡            ├─ASYNchronous────────────┤          └─string─┘
¡            ├─IPUSER──delimitedString─┤
¡            ├─LOCAL───────────────────┤
¡            ├─PARMLIST────────────────┤
¡            ├─PRMDATA─────────────────┤
¡            └─TARGET──word────────────┘
¡ Syntax Description: Options may be specified when starsys is not first in a pipeline.
¡ After these, a word beginning with an asterisk may be followed by a command string.
*word Specify the name of the CP service required, beginning with an asterisk
(for example, *LOGREC). When starsys is first in the pipeline, you can
connect to any CP service that sends messages and expects a reply:
*ACCOUNT, *LOGREC, *SYMPTOM.
Operation: starsys connects to a CP system service using the Inter User Communication
Vehicle (IUCV). The message limit is zero, which selects the default for the service
requested. The user parameters (IPUSER) are set to binary zeros except the byte at offset 8
which is set to X'02', indicating that the two-way protocol is desired. Message data in
¡ the parameter list are not supported unless starsys is not a first stage.
starsys sets up an immediate command that may be used to halt the starsys stage. The
name of the immediate command is the name of the service prefixed by an 'H' (for
instance, HACCOUNT).
If it is present, the command string is sent to the CMS subcommand environment after
starsys is connected to the system service.
: When starsys is first in the pipeline, it sends a reply to the message as soon as the output
: record is consumed.
¡ When starsys is not first in a pipeline, it sends a reply when a record is available on the
¡ input. The record is discarded.
Commit Level: starsys starts on commit level -2000000000. It verifies that no other stage
has requested a connection to the CP service, sets up an immediate command environment,
establishes the connection to the CP service, and then commits to level 0.
Premature Termination: starsys terminates when it discovers that its output stream is not
connected. A particular invocation of starsys is terminated when the immediate command
exit is driven that has the name of the service with an 'H' substituted for the leading
asterisk, for instance HACCOUNT. starsys also stops if the immediate command PIPMOD
STOP is issued or if a record is passed to pipestop.
Examples:
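A sketch that collects accounting records into a file (the CP directory must
authorise the virtual machine to connect to *ACCOUNT):
/* Collect accounting records; stop with the immediate command HACCOUNT */
'PIPE starsys *account | > account records a'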
Notes:
1. CMSIUCV is used to connect to the system service.
2. Any CP system service can be selected; results are unpredictable when the service is
not a two-way service.
3. Only one starmsg or starsys stage is allowed at a time for a particular system service.
¡ 4. starsys is still unsuitable for services to which the program must send, such as *SPL.
5. Remember that REXX continuation functionally replaces a trailing comma with a blank.
Also recall that when two strings are separated by one or more blanks, REXX
concatenates them with a single blank. Use the concatenation operator (||) before the
comma at the end of the line if your portrait style has the stage separators at the left
side of the stage and the trailing blank is significant to your application.
CMS
──STATE──┬──────────────────────────────────────────────────┬──
│ ┌─FORMAT──────────────────┐ │
├─┼─────────────────────────┼──┬───────┬──┬──────┬─┤
│ ├─NODETAILS───────────────┤ └─QUIET─┘ └─ASIS─┘ │
│ ├─NOFORMAT────────────────┤ │
│ ├─SHOrtdate───────────────┤ │
│ ├─ISOdate─────────────────┤ │
│ ├─FULldate────────────────┤ │
│ ├─STAndard────────────────┤ │
! │ └─STRing──delimitedString─┘ │
│ ┌─*──┐ │
├─┬─*──┬──┬─*──┬──┼────┼───────────────────────────┤
│ └─fn─┘ └─ft─┘ └─fm─┘ │
└─fn──ft──dirid────────────────────────────────────┘
Syntax Description: Arguments are optional. The argument string can consist of
keywords or the name of a file.
FORMAT Information about files that are found is written in a printable format
using the short date format.
NOFORMAT The raw control block describing a file is written.
NODETAILS The file name as specified is written.
QUIET Set return code 0 even when one or more files are not found; the default
is to set return code 28 or 36 when files are not found.
FULLDATE The file’s timestamp is formatted in the American format, with the
century: 3/09/1946 23:59:59.
ISODATE The file’s timestamp is formatted with the century in one of the formats
approved by the International Standardisation Organisation:
1946-03-09 23:59:59.
SHORTDATE The file’s timestamp is formatted in the American format, without the
century: 3/09/46 23:59:59.
STANDARD The file’s timestamp is formatted as a single word in a form that can be
used for comparisons: 19460309235959.
!! STRING Specify custom timestamp formatting, similar to the POSIX strftime()
! function. The delimited string specifies formatting as literal text and
! substitutions are indicated by a percentage symbol (%) followed by a
! character that defines the substitution. These substitution strings are
! recognised by state:
! %% A single %.
! %Y Four digits year including century (0000-9999).
! %y Two-digit year of century (00-99).
! %m Two-digit month (01-12).
! %n Two-digit month with initial zero changed to blank ( 1-12).
! %d Two-digit day of month (01-31).
! %e Two-digit day of month with initial zero changed to blank ( 1-31).
! %H Hour, 24-hour clock (00-23).
! %k Hour, 24-hour clock first leading zero blank ( 0-23).
! %M Minute (00-59).
! %S Second (00-60).
! %F Equivalent to %Y-%m-%d (the ISO 8601 date format).
! %T Short for %H:%M:%S.
! %t Tenths and hundredths of a second (00-99).
ASIS Do not look for files with the name and type in upper case after it is
determined that the file does not exist with the name and type as written.
Alternatively, you can specify a file in the same format as an input record.
Operation: When a file argument is specified, the first output record contains information
about the specified file. Then a line is generated for the file specified on each input line
read.
Each file is processed as follows: The third word (file mode, name definition, or directory)
is translated to upper case. All accessed minidisks and directories are searched if the third
word is omitted or is an asterisk. state first looks for a file that has the file name and file
type as written. If the file does not exist with a file name and a file type as entered and
ASIS is omitted, the file name and the file type are translated to upper case and the search
is retried. If the file is still not found, the file name, as written originally, is written to the
secondary output stream (if it is connected).
Input Record Format: Two or three words, specifying the file name, file type, and
optionally the file mode, name definition, or directory. When a mode is specified, the file
name or the file type, or both, can be specified as a single asterisk, which means that it
matches any file; other forms of “wildcards” are not supported by the underlying CMS
interface. The underlying interface to look in a directory does not support asterisks.
Output Record Format: The primary output stream: When NOFORMAT is specified, the
output record contains 64 bytes in the format defined by the FSTD data area. When
NODETAILS is specified, the output record contains the input record (if the file exists).
Otherwise, selected fields of the file status are formatted and written as a record: the file
name, type, and mode; the record format and logical record length; the number of records
and the number of disk blocks in the file; the date and time of last change to the file.
When the file is in an SFS directory that is not accessed, the file mode is shown as a
hyphen (-). When the file is on an accessed mode, the real file mode is shown. Thus, the
mode shown may not be the mode specified. When a name definition or a directory is
specified and the file resides in SFS, the fully qualified path to the directory that contains
the file is appended after the timestamp. (The file can reside on a minidisk that is
accessed as an extension to a mode on which the directory is accessed.)
Streams Used: Secondary streams may be defined. Records are read from the primary
input stream; no other input stream may be connected. Null and blank input records are
discarded. When a file is found, information about it is written to the primary output
stream (if it is connected). When a file is not found, the input record (or the argument
string) is passed to the secondary output stream (if it is connected).
Commit Level: state starts on commit level -2. It verifies that the secondary input stream
is not connected and then commits to level 0.
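A sketch consistent with the description that follows, writing the names of files
that are not found (the file name is illustrative):
/* Write the names of files that do not exist */
'PIPE (end ?)',
'|literal profile exec a',
'|s: state',
'?s:',
'|console'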
The primary output stream is not connected; the secondary output stream is connected to
the console stage.
Notes:
1. When looking for a file on a mode, state exposes the way the CMS command STATE
works. Though it is not documented, the CMS command searches the active file table
before it looks for files on the file modes.
2. When testing whether several files exist, and you are interested only in the return
code, be sure to specify hole to avoid premature termination:
'PIPE ... | state | hole'
if RC=0 then return /* All fine */
3. When looking for a file in a directory, state exposes the underlying callable
services interfaces. These interfaces do not distinguish between a file not being
present in a directory and a missing directory in the path. Thus, return code 36 is
not set for such a file.
4. Be sure to set numeric digits 14 when performing comparisons on STANDARD
timestamps; if you forget, REXX will use just nine digits precision. This means that
the first digit of the hour will be the least significant one and the remainder of the
precision will be lost.
5. SORTED is a synonym for STANDARD.
6. It may be easier to use the CMS STATE command directly if the file is on an accessed
mode.
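To illustrate note 4, a comparison of two STANDARD timestamps in REXX (ts1 and ts2
are assumed to hold timestamps obtained from state output):
/* Compare 14-digit STANDARD timestamps with full precision */
numeric digits 14
If ts1 > ts2 Then say 'first file changed more recently'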
Return Codes:
0 All input lines have been processed. All files exist, the keyword QUIET was
specified, or state terminated prematurely.
20 Invalid character in the file name or file type. Processing stops as soon as CMS
sets this return code.
24 Invalid file mode. Processing stops as soon as CMS sets this return code.
28 The keyword QUIET was omitted. One or more files were not found; all input
lines have been processed.
36 The keyword QUIET was omitted. One or more files referred to a mode that is
not accessed; all input lines have been processed.
other A return code other than 0, 28, or 36 is received from CMS. Processing is
terminated with this return code.
z/OS
──STATE──┬────────────────────────┬──
├─QUIET──────────────────┤
├─┬─word───────────────┬─┤
│ ├─word(generation)───┤ │
│ ├─'word'─────────────┤ │
│ ├─'word(generation)'─┤ │
│ └─DDname=word────────┘ │
└─word──word─────────────┘
Syntax Description: Arguments are optional. The keyword QUIET specifies that return
code 0 is set also when one or more files are not found; the default is to set return code 28
when files are not found.
Alternatively you can specify the first (or only) file to search for.
Operation: When a DDNAME is specified, SVC 99 is issued to query the allocation. When
a DSNAME is specified, the master catalog is searched for the data set.
Input Record Format: There can be one file name per input record. The name is either a
DDNAME (prefixed by the keyword DDNAME=) or a data set name (DSNAME). The current
prefix (if any) is prefixed to the DSNAME unless it is enclosed in single quotes.
Output Record Format: The fully qualified DSNAME is written to the primary output
stream when the file is found. The input record is copied to the secondary output stream
when the file is not found.
Streams Used: Secondary streams may be defined. Records are read from the primary
input stream; no other input stream may be connected. Null and blank input records are
discarded. When the file is found, information about it is written to the primary output
stream (if it is connected). When the file is not found, the input record (or the argument
string) is copied to the secondary output stream (if it is connected).
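For instance (the data set name is illustrative; the quotes suppress prefixing):
/* Check that a data set exists; write its fully qualified name */
'PIPE literal ''sys1.maclib'' | state | console'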
Notes:
1. The fact that a data set exists does not imply that it is readable. You may not have
RACF authority; the data set could have been migrated.
2. state also supports a member name and a DDNAME for which TSO Pipelines maintains
an open DCB. It is unspecified which DCBs TSO Pipelines uses.
CMS
──STATEW──┬──────────────────────────────────────────────────┬──
│ ┌─FORMAT──────────────────┐ │
├─┼─────────────────────────┼──┬───────┬──┬──────┬─┤
│ ├─NODETAILS───────────────┤ └─QUIET─┘ └─ASIS─┘ │
│ ├─NOFORMAT────────────────┤ │
│ ├─SHOrtdate───────────────┤ │
│ ├─ISOdate─────────────────┤ │
│ ├─FULldate────────────────┤ │
│ ├─STAndard────────────────┤ │
! │ └─STRing──delimitedString─┘ │
│ ┌─*──┐ │
├─┬─*──┬──┬─*──┬──┼────┼───────────────────────────┤
│ └─fn─┘ └─ft─┘ └─fm─┘ │
└─fn──ft──dirid────────────────────────────────────┘
Syntax Description: Arguments are optional. The argument string can consist of
keywords or the name of a file.
FORMAT Information about files that are found is written in a printable format
using the short date format.
NOFORMAT The raw control block describing a file is written.
NODETAILS The file name as specified is written.
QUIET Set return code 0 even when one or more files are not found; the default
is to set return code 28 or 36 when files are not found.
FULLDATE The file’s timestamp is formatted in the American format, with the
century: 3/09/1946 23:59:59.
ISODATE The file’s timestamp is formatted with the century in one of the formats
approved by the International Organization for Standardization (ISO):
1946-03-09 23:59:59.
SHORTDATE The file’s timestamp is formatted in the American format, without the
century: 3/09/46 23:59:59.
STANDARD The file’s timestamp is formatted as a single word in a form that can be
used for comparisons: 19460309235959.
Alternatively, you can specify a file in the same format as an input record.
Operation: When a file argument is specified, the first output record contains information
about the specified file. Then a line is generated for the file specified on each input line
read.
Each file is processed as follows: The third word (file mode, name definition, or directory)
is translated to upper case. All accessed minidisks and directories are searched if the third
word is omitted or is an asterisk. statew first looks for a writable file that has the file
name and file type as written. If the file does not exist with a file name and a file type as
entered and ASIS is omitted, the file name and the file type are translated to upper case and
the search is retried. If the file is still not found, the file name, as written originally, is
written to the secondary output stream (if it is connected).
Input Record Format: Two or three words, specifying the file name, file type, and
optionally the file mode, name definition, or directory. When a mode is specified, the file
name or the file type, or both, can be specified as a single asterisk, which means that it
matches any file; other forms of “wildcards” are not supported by the underlying CMS
interface. The underlying interface to look in a directory does not support asterisks.
Output Record Format: The primary output stream: When NOFORMAT is specified, the
output record contains 64 bytes in the format defined by the FSTD data area. When
NODETAILS is specified, the output record contains the input record (if the file exists).
Otherwise, selected fields of the file status are formatted and written as a record: the file
name, type, and mode; the record format and logical record length; the number of records
and the number of disk blocks in the file; the date and time of last change to the file.
When the file is in an SFS directory that is not accessed, the file mode is shown as a
hyphen (-). When the file is on an accessed mode, the real file mode is shown. Thus, the
mode shown may not be the mode specified. When a name definition or a directory is
specified and the file resides in SFS, the fully qualified path to the directory that contains
the file is appended after the timestamp.
Streams Used: Secondary streams may be defined. Records are read from the primary
input stream; no other input stream may be connected. Null and blank input records are
discarded. When a file is found, information about it is written to the primary output
stream (if it is connected). When a file is not found, the input record (or the argument
string) is passed to the secondary output stream (if it is connected).
Commit Level: statew starts on commit level -2. It verifies that the secondary input
stream is not connected and then commits to level 0.
The primary output stream is not connected; the secondary output stream is connected to
the console stage.
Notes:
1. When looking for a file on a mode, statew exposes the way the CMS command
STATEW works. Though it is not documented, the CMS command searches the active
file table before it looks for files on the file modes.
2. When testing whether several files exist, and you are interested only in the return
code, be sure to specify hole to avoid premature termination:
'PIPE ... | statew | hole'
if RC=0 then return /* All fine */
3. When looking for a file in a directory, statew exposes the behaviour of the callable services. These
interfaces do not distinguish between a file not being present in a directory and a
missing directory in the path. Thus, return code 36 is not set for such a file.
4. Be sure to set numeric digits 14 when performing comparisons on STANDARD
timestamps; if you forget, REXX will use just nine digits precision. This means that
the first digit of the hour will be the least significant one and the remainder of the
precision will be lost.
5. SORTED is a synonym for STANDARD.
6. It may be easier to use the CMS STATEW command directly if the file is on an accessed
mode.
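The comparison described in note 4 can be sketched in REXX (the variables are assumed to contain STANDARD timestamps):

   /* Compare two STANDARD timestamps */
   numeric digits 14       /* a timestamp has 14 significant digits */
   if ts1 > ts2
      then say 'first file is newer'
      else say 'first file is not newer'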
Return Codes:
0 All input lines have been processed. All files exist, the keyword QUIET was
specified, or statew terminated prematurely.
20 Invalid character in the file name or file type. Processing stops as soon as CMS
sets this return code.
24 Invalid file mode. Processing stops as soon as CMS sets this return code.
28 The keyword QUIET was omitted. One or more files were not found; all input
lines have been processed.
36 The keyword QUIET was omitted. One or more files referred to a mode that is
not accessed; all input lines have been processed.
other A return code other than 0, 28, or 36 is received from CMS. Processing is
terminated with this return code.
A stemmed array consists of variables that have names ending in an integer that is zero or
positive (the index). The variable that has index 0 contains the count of “data” variables,
which are numbered from 1 onwards.
──STEM──word──┬──────────┬──┬────────┬──┬──────────┬──
¡ ├─PRODUCER─┤ └─number─┘ └─NOMSG233─┘
└─MAIN─────┘
┌─SYMBOLIC─┐
──┼──────────┼──┬──────────────┬──
└─DIRECT───┘ ├─APPEND───────┤
└─FROM──number─┘
Warning: stem behaves differently when it is a first stage and when it is not a first stage.
Existing data can be overlaid when stem is unintentionally run other than as a first stage.
To use stem to read data into the pipeline at a position that is not a first stage, specify stem
as the argument of an append or preface control. For example, |append stem ...|
appends the data produced by stem to the data on the primary input stream.
The keyword PRODUCER may be used when the pipeline specification is issued with
CALLPIPE. It specifies that the variable pool to be accessed is the one for the stage that
produces the input to the stage that issues the subroutine pipeline that contains stem, rather
than the current stage. (This is a somewhat esoteric option.) To ensure that the variable
pool persists as long as this invocation of stem, the stage that is connected to the currently
selected input stream must be blocked in an OUTPUT pipeline command while the subrou-
tine pipeline is running.
The keyword MAIN specifies that the REXX variable pool to be accessed is the one in effect
at the time the pipeline set was created (either by the PIPE command or by the runpipe
stage). MAIN is implied for pipelines that are issued with ADDPIPE.
A number that is zero or positive is optional. It specifies the number of REXX variable
pools to go back. That is, stem can operate on variables in the program that issued the
pipeline specification to invoke stem or in one of its ancestors. (When the number is
prefixed by either PRODUCER or MAIN, the variable pool to be accessed is the producer’s or
the main one, or one of their ancestors.) On CMS, if the number is larger than the number
of REXX environments created on the call path from the PIPE command, stem continues on
the SUBCOM chain starting with the environment active when PIPE was issued.
¡ Specify the option NOMSG233 to suppress message 233 when the REXX environment does
¡ not exist. Either way, stem terminates with return code 233 on commit level -1 when the
¡ environment does not exist.
The keyword SYMBOLIC specifies that REXX should treat the variable names generated as it
would a variable that is written in a program. DIRECT specifies that REXX should use the
variable name exactly as written.
The keyword APPEND is optional when stem is not a first stage. The keyword FROM
followed by a number is optional.
When stem is first in a pipeline or the APPEND keyword is specified, the variable <stem>0
is read from the variable pool; it must be an integer that is zero or positive.
Operation: When stem is first in a pipeline the value of the variable <stem>0 specifies
the number of the last record to write to the pipeline; unless FROM is specified to set the
starting index number, the first output record contains the value of <stem>1, the second
record contains the value of <stem>2, and so on to the number specified. No record is
written if <stem>0 is zero or less than the value specified after FROM.
When stem is not first in a pipeline and the keywords APPEND and FROM are omitted,
variables <stem>1, <stem>2, and so on, are set to the contents of each successive input
record. Records are copied to the primary output stream (if it is connected) after the vari-
able is set. When APPEND is specified, writing starts with <stem>n where n is one more
than the value returned for <stem>0. The index of the last variable set is stored in the
variable <stem>0 at end-of-file. When there are no input records, <stem>0 is left
unchanged if APPEND is specified; <stem>0 is set to zero if APPEND is not specified.
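As a minimal sketch (issued from a REXX program; the translation to upper case is incidental), a stemmed array can make a round trip through a pipeline:

   /* Load a stemmed array into the pipeline; store the result */
   a.0=2; a.1='alpha'; a.2='beta'
   'PIPE stem a. | xlate upper | stem b.'
   say b.0 b.1 b.2         /* 2 ALPHA BETA */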
Commit Level: stem starts on commit level -1. It verifies that the REXX environment
exists (if it did not do so while processing its parameters) and then commits to level 0.
The inverse pipeline can be used to transfer the contents of the array back to the caller:
/* Return parameters to caller */
address command,
'PIPE stem parms. | stem parms. 1'
Notes:
1. The APPEND keyword is not the same as the append control.
2. When a pipeline is issued as a TSO command, the TSO variable access service is called to access the variable pool.
When the command is issued with Address Link or Address Attach, stem accesses the
REXX environment from where the command is issued.
3. CMS/TSO Pipelines maintains a reference to the current variable environment for each
stage. Initially this is the environment in effect for the PIPE command with which the
original pipeline was started.
When a REXX program is invoked (as a stage or with the REXX pipeline command), its
environment becomes the current one, with a pointer to the previous one.
When a pipeline specification is issued with the runpipe built-in program or the
CALLPIPE pipeline command, the current environment is the one in effect for the stage
issuing runpipe or CALLPIPE; it is known to persist while the subroutine pipeline runs.
On the other hand, when a pipeline specification is issued with the ADDPIPE pipeline
command, the stage that issues ADDPIPE runs in parallel with the added pipeline
specification; it can terminate at any time (indeed, even before the new pipeline
specification starts running). Therefore, for ADDPIPE, the current environment is set to
the one for the last runpipe or the one at initial entry on the PIPE command. Thus, the
MAIN option has effect only for pipeline specifications that are issued by the CALLPIPE
pipeline command.
4. Unless DIRECT is specified, stem uses the symbolic interface to access REXX variables.
This means that you should write the variable name the same way you would write it
in an assignment statement. Consider this program fragment:
/* Process an array */
x='fred'
'PIPE literal a | stem z.x.'
The variable Z.fred.1 is set to 'a '. On the other hand, this would set the variable
Z.x.1:
/* Process directly */
'PIPE literal a | stem Z.x. direct'
Note that the stem must be in upper case when DIRECT is used.
5. An unset variable (that is, a variable that has been dropped or has never been assigned
a value) is treated differently by the three variable repositories: REXX returns the
name of the variable in upper case; EXEC2 and CLIST return the null string.
6. It is unspecified how many variables stem obtains at a time from the variable pool.
Applications that update a stemmed array to add items to it should buffer the file
before it is written back to the array:
'pipe stem x. | dup | buffer | stem x.'
Without the buffering, variable x.2 could be created (containing a copy of the contents
of variable x.1) by the second stem stage before the first stage has read it.
Applications should not rely on this behaviour of stem.
7. For REXX stems, it is normal to specify a period as the last character of the stem (the
first word of the argument string). To allow access to EXEC2 variable pools, stem does
not append a period to the word specified. This means that you can use stem var to
set simple variables, such as VAR1, VAR2, and so on. VAR0 will be set to the count of
variables set.
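For example (a sketch), a single record read into simple variables:

   /* stem without a trailing period sets simple variables */
   'PIPE literal one | stem var'
   say var0 var1           /* 1 one */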
Warning: storage behaves differently when it is a first stage and when it is not a first
stage. Existing data can be overlaid when storage is unintentionally run other than as a
first stage. To use storage to read data into the pipeline at a position that is not a first
stage, specify storage as the argument of an append or preface control. For example,
|append storage ...| appends the data produced by storage to the data on the primary
input stream.
Syntax Description: Two arguments are required, a hexadecimal string and a decimal
! number. When storage is not first in a pipeline and READ is omitted, a third word is
required to specify the protect key of the storage area. The key is in the leftmost four bits
of a character; the rightmost four bits must be zero. Key zero is rejected. The third
operand has no effect on z/OS; specify 80 to be consistent with the CMS implementation.
! When storage is first in a pipeline or READ is specified with an address and length, it
ensures that the first and last byte of the storage area are addressable. (On CMS, it only
! performs this check if the storage area ends beyond the size of the virtual machine unless
! an ALET is specified.) When it is not first in a pipeline and READ is omitted, storage
verifies that it can modify the first and last byte of the storage area. Areas outside the
virtual machine can be specified, but a subsequent stage referencing the contents of the
record sent in the pipeline fails with an addressing exception if part of the storage area is
not attached to the virtual machine.
Operation: The arguments are converted to binary and used as the address and length of
an area of virtual storage.
! When storage is first in the pipeline or READ is specified, the address and length are used
! in an output call, in effect writing virtual machine storage into the pipeline. The area is
! first copied into your primary space when ALET is specified for a read request.
! When READ is specified, storage first writes the storage area specified by the address and
! length, if any, and then a record for each non-blank input record. The record specifies the
! address (hexadecimal) and length (decimal); the addressability of this area is not verified
! by storage (but it will be by whatever processes the record).
Otherwise, when storage is not first in the pipeline, input records are copied into the area
in storage. The last part of the record is not copied if the input record is longer than the
length of the storage area. The input record is copied to the output, if it is connected.
! Streams Used: When storage is first in a pipeline and READ is omitted, it writes a record
! to the primary output stream. When it is not first in the pipeline and READ is omitted, it
copies the input record to the output after its contents have been copied into storage.
Examples: To display some possibly not randomly chosen bytes from storage:
pipe storage 200 32 | console
z/VM V5.2.0 2009-05-12 12:10
Ready;
Notes:
1. storage can cause message 530 to be issued (destructive overlap) if the storage area
overlaps a buffer used by a filter later in the pipeline.
2. Writing to storage has deliberately been made different from reading from storage; this
insures against accidental misplacement of a storage stage in a pipeline.
! 3. On CMS, the virtual machine must be in XC mode to use ALET.
! 4. The ALET operand is supported on both CMS and z/OS, but on z/OS you must create
! and discover the ALET yourself. As CP uses only the primary list, this flag is added on
! CMS, that is, ALETs 2 and 01000002 are equivalent. On z/OS the ALET must be
! specified exactly.
! 5. Specifying ALET 0 has no effect. You cannot specify ALET 1 on CMS.
┌─BOTH─────┐
──STRIP──┬─────────┬──┼──────────┼──┬─────┬──
└─ANYCase─┘ ├─LEADING──┤ ├─TO──┤
└─TRAILING─┘ └─NOT─┘
┌─BLANK──────────────────┐
──┼────────────────────────┼──
└─┤ target ├──┬────────┬─┘
└─number─┘
target:
├──┬─xrange──────────────────────┬──┤
└─┬─STRing─┬──delimitedString─┘
└─ANYof──┘
Type: Filter.
The target can be a range of characters or a delimited string. A number is optional after
the target. A hex range matches any character within the range. The keyword STRING
followed by a delimited string matches the string. The keyword ANYOF followed by a
delimited string matches any one character in the string. (The keyword is optional before
a one character string, because the effect is the same in either case.) A number after the
target limits the number of characters stripped; this can cause part of a string to remain in
the record. This number applies independently to each side when stripping BOTH. The
default target is a blank; thus, the default is to strip leading and trailing blank characters.
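For instance (a sketch), stripping a specified character from both ends of the record:

   pipe literal ...xyzzy... | strip both string /./ | console
   xyzzy
   Ready;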
Premature Termination: strip terminates when it discovers that its output stream is not
connected.
The specs stage appends an asterisk to the record to show where it ends.
Notes:
1. The minimum abbreviation of ANYCASE is four characters because ANYOF takes
precedence (ANYOF can be abbreviated to three characters).
! 2. CASEANY, CASEIGNORE, CASELESS, and IGNORECASE are all synonyms for ANYCASE.
! ┌─SET────┐
! ──STRUCTure──┬─ADD──┬─────────┬──┼────────┼────────┬──
! │ └─ANYcase─┘ ├─CALLER─┤ │
! │ └─THREAD─┘ │
! │ ┌─SET────┐ │
! ├─DELete──┼────────┼──┬───────────┬───┤
! │ ├─CALLER─┤ └─┤ Names ├─┘ │
! │ └─THREAD─┘ │
! ├─LIST──┬───────────┬─────────────────┤
! │ └─┤ Names ├─┘ │
!              │                        ┌───────────┐  │
! └─LISTALL──┬─────────┬───┬─────────┬┴─┘
! └─MEMBERs─┘ ├─CALLER──┤
! ├─SET─────┤
! ├─THREAD──┤
! └─BUILTIN─┘
! Names:
!      ┌────────────┐
!   ├───identifier─┴──┤
! Type: Gateway.
! Syntax Description:
! Operation: For structure DELETE, the structures specified in the operand list and each set
! of structures in an input record are processed as a unit. You may delete a structure that
! embeds other structures in the same scope if all are deleted in the same record. Otherwise
! you cannot remove a structure that is embedded in another structure; you must delete the
! embedding structure first. For caller and set scope, you can delete from the outermost
! scope only.
! Input Record Format: For structure ADD, the input is in free form; it may be spanned
! across lines. The syntax of the input stream is:
!      ┌───────────────┐
!   ────┤ Structure ├─┴───
!
! Structure:
!                      ┌────────────────┐
!   ├──:──identifier──┬─┤ Constant ├─┬┴──┤
!                      └─┤ Member ├───┘
!
! Constant:
! ├──identifier──=──number──┤
!
! Member:
! ├──┬─┬─┤ Identdim ├─┬──┬──────────┬──┬─number────────┬─┬──┤
! │ ├─-────────────┤ └─┤ Type ├─┘ ├─number.number─┤ │
! │ └─.────────────┘ └─┤ Datafield ├─┘ │
! └─identifier──┤ Wordstyle ├─────────────────────────┘
! Identdim:
! ├──identifier──┬────────────────────────┬──┤
! └─(──┬─number───────┬──)─┘
!                        ├─identifier(1)─┤
!                        └─*─────────────┘
! Note:
! 1 The identifier must be declared as a manifest constant.
! Type:
!                   (1)
!   ├──letter─┬───────────┬───┤
!             └─(snumber)─┘
! Note:
! 1 No blanks are allowed from the letter to the right parenthesis, if a scale is present.
! Datafield:
! ├──┬─Member──┤ Identsub ├──┬────────┬──┬─number─┬──┬──┤
! │ └─Length─┘ └─*──────┘ │
! ├─Length──┬─number─┬────────────────────────────┤
! │ └─*──────┘ │
! └─STRUCTure──identifier──┬─number─────────────┬─┘
! ├─Member──identifier─┤
! └─Next───────────────┘
!
! Identsub:
! ├──identifier──┬───────────┬──┤
! └─subscript─┘
!
! Wordstyle:
!                  ┌────────────────────────┐
!   ├──identifier──┬──────────────────────┬┴──┬─Word──────┬──range──┤
!                  ├─WORDSEParator──xorc──┤   ├─Field─────┤
!                  └─FIELDSEparator──xorc─┘   └─Autofield─┘
! The input to structure ADD defines one or more structures, or is empty (consists of blanks
! only). The first non-blank character must be a colon. A colon marks the beginning of the
! definition of a structure.
! For each structure, the first word after the colon specifies the name of the structure.
! The structure identifier must not exist within the innermost of the specified scope (there is
! one thread scope only). In caller or set scope, the structure may also exist in thread scope
! or in one of the nesting caller or set scopes. In particular, it is allowed to define a struc-
! ture in set scope that is obscured by an already defined structure in caller scope within the
! current set; likewise an obscured structure can be defined in thread scope. Thus, the new
! definition replaces the existing one(s) until it is removed or the scope ends.
! Input data up to the next colon or to end-of-file define the contents of the structure. The
! definition is written as tokens that are delimited by blanks or line ends. The members may
! be manifest constants or members. During definition, a current position is maintained as
! the next available position after the last defined member, unless that definition precludes
! such a definition.
! A member defines a range of the record. The first word of the member definition contains
! an identifier, which must be unique within the structure being defined, optionally followed,
! in parentheses, by a dimension, which is a positive number, an identifier for a manifest
! constant, or an asterisk indicating an unbounded array. No position is established when
! the dimension is an asterisk. Members come in two flavours, which can be intermixed:
! “Proper” members define a fixed number of columns in the record; they can be
! chained indicating that the member immediately follows the previous one (when a
! position has been established). A hyphen or period instead of the member identifier
! specifies an unnamed filler.
! The word style is, in effect, a symbolic name for a range of words or fields. Such
! members do not establish a position.
! A single letter other than L defines the type of the member (L is the abbreviation of
! LENGTH). structure makes this type upper case, but it does not attach any particular
! meaning to it; however, pick and spec do for types C, D, F, P, R, and U.
! A signed number in parentheses is optional after the type character. If present, there must
! be no blanks in the type and number. The number is restricted to -32768 through 32767.
! Again, structure attaches no meaning to this number, but pick and spec interpret it as a
! scale factor when the type is P (packed decimal); that is, a positive number specifies the
! number of decimal places after the implied decimal point.
! Following this single letter and optional number, you specify the location of the member in
! the record. This can be:
! A single number, or two numbers separated by a period (asterisks are not allowed).
! The first or only number is the beginning column. The second number is the count of
! bytes in the field; when omitted, the length is one.
! The keyword MEMBER, which can be abbreviated down to one letter, followed by an
! identifier, an optional subscript, and a count. A subscript is a positive number in
! parentheses. This defines the member as being overlaid on the already defined
! member for the specified length. This is equivalent to the ORG instruction in Assem-
! bler parlance.
! The keyword LENGTH, which can be abbreviated down to one letter, followed by a
! number defining the field length, or an asterisk indicating a field of variable length that
! extends to the end of the record for an input field; for an output field, an asterisk
! specifies that the output will have the same length as the input. A position must have
! been established unless this is the first member of the structure, in which case column
! 1 is the position. There is no current position after the field when an asterisk is
! specified; such a field does not contribute to the length of the structure as it is poten-
! tially infinite.
! The keyword STRUCTURE defines an embedded structure, which must have been
! defined previously, perhaps earlier in the input stream. When adding structures to
! thread scope, only other structures in thread scope may be referenced; any structure
! may be referenced when defining in caller scope, but structures in set scope cannot
! resolve to structures defined within call scope in the outermost set (though they can
! resolve any structure defined in a nesting set). The keyword is followed by the
! identifier of the structure being referenced and the position within the embedding
! structure (the structure being defined). The length of the member is defined by the
! length of the structure; you cannot specify it explicitly. The position of the embedded
! structure is specified by:
! – A number, which is the beginning column.
! – The keyword MEMBER, which can be abbreviated down to one letter, followed by
! an identifier and an optional subscript. This defines the structure as being overlaid
! on the already defined member.
! – The keyword NEXT, which can be abbreviated down to one letter. A position
! must have been established unless this is the first member of the structure, in
! which case column 1 is the position.
! Output Record Format: structure ADD and structure DELETE produce no output.
! The output from structure LIST is in a form that can be passed to structure ADD if
! comments are removed, for example, by chop <. Note, however, that it may not be the
! form used to define the structure, but the two definitions are equivalent.
! Premature Termination: structure terminates when it discovers that its output stream is
! not connected. This can happen only when listing structure definitions.
! The structure is defined in set scope. Note that this defines two members rather than
! specifying a length for the first member (to do that you must specify mbr 1.4 or mbr len 4).
! The append stage ensures that the list is made after all input structures are defined. Struc-
! ture s is discarded immediately the PIPE command terminates, thus the second pipeline
! produces no output as there is nothing to list.
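As a sketch (the structure and member names are illustrative), a REXX program can add a structure to caller scope with CALLPIPE before issuing the pipeline that references it:

   /* Define structure pt with two fullword members; list it */
   'callpipe literal :pt x F 1.4 y F 5.4 | structure add caller'
   'callpipe structure list pt | console'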
! Notes:
! 1. These caseless structures are built into CMS/TSO Pipelines:
! EVENTRECORD Records produced by runpipe EVENTS (Member EVENTREC of FPLOM
! MACLIB).
! FPLASIT The first eighty bytes of a data space from adrspace CREATE
! INITIALISE.
! FPLSTORBUF The output record from instore.
! VMCMHDR VMCF interrupt header (see VMCLISTEN).
! VMCPARM VMCF parameter list (see VMCLIENT and VMCDATA).
! DIRBUFF The CMS data area, which is the data returned by DMSGETDI.
! 2. A structure that is not in the current pipeline set can be obscured by defining a struc-
! ture of the same name in caller or set scope. A structure in the current set is obscured
! by one in caller scope. A built-in structure is obscured by a definition in any scope.
! 3. Embedded structure names are resolved when the structure is defined. Obscuring an
! embedded structure has no effect on already existing definitions.
! 4. You can delete only structures in thread scope, in the current pipeline set, or in the
! calling stage; structures cannot be deleted in nesting pipeline sets or caller scopes
! within those sets, but they can be obscured.
! 5. structure ADD cannot issue messages that relate to the original input records because it
! conditions the input stream so that a structure definition does not span record bounda-
! ries. Instead, it relates messages to the count of complete structure definitions proc-
! essed and members processed within the structure being defined.
! 6. Be sure that structures are defined before they are referenced, because defining a struc-
! ture implies reading input records, which must happen on commit level 0. Thus, it is
! likely that any reference to a structure in the same pipeline specification would be a
! reference to an undefined structure.
! This can be resolved in three ways:
! Define the structures in thread scope. The drawback is that this may make them
! more visible than desired, and possibly obscured by other definitions in a nesting
! set scope.
! Make the PIPE command run a REXX program that issues CALLPIPE to add the
! structure definitions to caller or set scope before it issues the “real” pipeline, also
! with CALLPIPE. This is recommended for production strength code. (Remember
! that an EXEC can invoke itself as a REXX stage; you do not need an additional file,
! but the EXEC will then be processed twice by the interpreter.)
! Cascade structure ADD and append to issue a subroutine pipeline after structure
! ADD has ended. This is a handy way, for example, to list the definition of a
! structure. Note that the entire pipeline must be the argument to APPEND, or at
! least the stages that reference the newly defined structure. (Double up the vertical
! bars or use a different stage separator.)
! One advantage of the first two approaches is that you can inspect the return code from
! the pipeline that loads the structures before issuing the “real” pipeline.
! 7. You may wonder whether it is possible to create a recursion in embedded structures.
! You can prove by induction that this is not possible because a structure cannot contain
! itself. This is because the structure is not defined until the next colon or end-of-file is
! met; nor can it embed an undefined structure. However, a structure definition can
! embed a structure that it is about to obscure, but that structure could not embed itself
! when it was defined, so there is still no recursion.
! 8. The index origin is 1 for arrays. That is, the first member of an array has subscript 1.
! 9. CASEANY, CASEIGNORE, CASELESS, and IGNORECASE are all synonyms for ANYCASE.
! 10. spec does not allow a question mark in an identifier, as the question mark is parsed as
! the conditional operator.
! 11. structure starts at commit level 0. This has implications when it is issued with
! CALLPIPE on a negative commit level, as this commit will force a commit of the caller.
! 12. The motivation for caller scope is this: Suppose two REXX programs both require a
! particular structure, perhaps the first creates a record containing the structure and the
! second formats such a structure, but they do not always run as a cascade, so the
! second program cannot rely on the first program always defining the structure. Thus
! each program will wish to add the structure definition, but as it turns out, there is no
! way for a stage to determine whether it would be successful in adding a structure to
! the set. It might query by LISTALL, but even when the query indicates that a particular
! structure does not exist, that does not preclude one from being defined by the time the
! stage is resumed. Conversely, a REXX stage may define structures in caller scope with
! impunity.
! 13. Using ADDPIPE to issue a pipeline specification to define a structure will not increase
! the commit level of the current pipeline specification, but it is undefined when the
! structure will be defined and the issuer will be unable to determine whether the
! definition was successful or not.
──┬─┬─STRASMFIND────┬──┬─────────┬──delimitedString─┬──
│ ├─STRASMNFIND───┤ └─ANYcase─┘ │
│ ├─STRFIND───────┤ │
│ ├─STRFRLABEL────┤ │
│ ├─STRFROMLABEL──┤ │
│ ├─STRNFIND──────┤ │
│ ├─STRTOLABEL────┤ │
│ └─STRWHILELABEL─┘ │
│ ┌─PREFACE─┐ │
└─STRLITERAL──┼─────────┼──┬─────────────────┬────┘
└─APPEND──┘ └─delimitedString─┘
Operation: Refer to the description of the stage without the str prefix.
Record Delay: An input record is written to exactly one output stream when both output
streams are connected.
Notes:
! 1. CASEANY, CASEIGNORE, CASELESS, and IGNORECASE are all synonyms for ANYCASE.
¡ ──STSI──hexString──
¡ Syntax Description: The operand consists of three hexadecimal digits that specify the
¡ configuration level to write. These configurations can currently be requested: 111, 121,
¡ 122, 221, 222, 322.
¡ Output Record Format: The output record is 4096 bytes long, as this is the size of the
¡ system information block specified by the architecture, but most of it contains binary zeros.
¡ Refer to the Principles of Operation manual for the contents of the system information
¡ block.
¡ Premature Termination: stsi terminates when it discovers that its output stream is not
¡ connected.
¡ Notes:
¡ 1. You need CP support and hardware support for stsi. The hardware support ships on
¡ generation 6 and later machines. Without the corresponding hardware feature, you can
¡ obtain information about the virtual machine only, that is, code 322.
──SUBCOM──word──┬────────┬──
└─string─┘
Syntax Description: A word is required. A string is allowed, unless the secondary output
stream is defined.
Operation: The first blank-delimited word of the argument string is the name of the
subcommand environment to process the commands. If there is no environment with the
name specified, the environment name is translated to upper case. The remainder of the
argument string (if present) and input lines are issued as commands to the subcommand
environment.
| Input records are passed to the output after the command is issued; no output is produced
| on the primary output stream for a command specified as operands to subcom.
On z/OS, the default REXX environment is searched for the subcommand environment,
even when subcom is in a pipeline specification that was issued from a REXX filter (which
runs in a reentrant environment).
When the secondary output stream is defined, the return code is written to this stream after
each command has been issued and the command has been written to the primary output
stream.
Streams Used: Records are read from the primary input stream; no other input stream
may be connected.
Record Delay: subcom strictly does not delay the record. When the secondary output
stream is defined, the record containing the return code is written after the input record is
passed to the output.
Commit Level: subcom starts on commit level -2. It verifies that the secondary input
stream is not connected and then commits to level 0.
Premature Termination: When the secondary output stream is not defined, subcom termi-
nates as soon as it receives a negative return code on a command. The corresponding
input record is not copied to the output and it is not consumed. When the secondary
output stream is defined, subcom terminates as soon as it discovers that this stream is not
connected. If this is discovered while a record is being written, the corresponding input
record is not consumed.
See Also: aggrc, cms, command, cp, starmsg, xedit, and xmsg.
Examples: subcom is often used to send commands to XEDIT; this example shows how to
insert records into the current file:
...| change //i / | subcom xedit
Remember that the lines are processed according to the XEDIT settings of CASE, IMAGE, and
so on.
Use the fact that subcom copies input lines to the output after the command has been
issued to write a line to the console:
/* Append ready message to commands */
'PIPE immcmd CMS',
'| subcom CMS', /* This won't "trap" the command output */
'| spec /Ready!/ 1',
'| console'
Notes:
1. Use cms (or command) to pass a command on to SUBCOM EXEC if you wish to issue a
subcommand and intercept terminal output:
/* SUBCOM EXEC: Issue command to a subcommand environment */
signal on novalue
parse arg where command
address value where
''command
exit RC
2. Null and blank input lines are issued to the subcommand environment. The CMS
subcommand environment ignores such commands, but this should not be taken as a
general property of subcommand environments.
Return Codes: When a secondary output stream is not defined and a negative return
code is received on a subcommand, the return code from subcom is that negative return
code. When a secondary output stream is not defined and the return code is zero or posi-
tive, all input records have been processed; the return code is the maximum of the return
codes received. When the secondary output stream is defined, the return code is zero
unless an error is detected by subcom.
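The return-code rule above can be summarised in a short Python sketch (illustrative only; the helper name is hypothetical and the sketch assumes subcom itself detects no error):

```python
def subcom_return_code(rcs, secondary_defined):
    """Sketch of subcom's return-code rule, assuming subcom itself
    detects no error while running."""
    if secondary_defined:
        return 0                  # return codes go to the secondary output stream
    for rc in rcs:
        if rc < 0:
            return rc             # processing stops at the first negative return code
    return max(rcs, default=0)    # otherwise the maximum return code received
```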
¡ ──SUBSTRing──inputRange──
¡ Type: Filter.
¡ Syntax Description:
¡ Premature Termination: substring terminates when it discovers that its output stream is
¡ not connected.
¡ Notes:
¡ 1. substring is an optimisation for a special case of spec.
¡ 2. Use substring instead of not chop number.
──┬─SYNChronise─┬──
└─SYNChronize─┘
Operation: synchronise processes a record from all input streams in this cycle:
It peeks at each input stream, beginning with the primary input stream and proceeding
in numerical order.
When all input streams have a record available (that is, all streams have been peeked),
the records are written to the corresponding output streams, beginning with the
primary output stream and proceeding in numerical order.
Only when all output streams have been written successfully are the input records
consumed, beginning with the primary input stream and proceeding in numerical order.
Streams Used: All input streams are read; all output streams are written.
Record Delay: synchronise synchronises its input streams. It strictly does not delay the
record.
| That is, when synchronise terminates because end-of-file is met on an input stream, no
| input has been consumed for this set of records. When synchronise discovers that an
| output stream is not connected, a record has been written to streams that have lower
| numbers.
Examples: synchronise can be used to tie the processing of records to external events,
such as the receipt of a message from the *MSG system service.
/* PACER REXX: Use external events to pace record processing */
'CALLPIPE (endchar ?) *.input: | sync: synchronize | *.output:',
'? starmsg | sync: | hole'
Exit RC*(RC<>12)
synchronise peeks a record from its primary input stream, which is connected to the calling
pipeline, but it does not process that record until the starmsg stage has captured a message
and made it available on the secondary input stream of synchronise. synchronise then
copies the record that it received from the calling pipeline to its primary output stream,
which is also connected to the calling pipeline, and copies the message record to its
secondary output stream, which is connected to the hole stage. Thus, only one record
flows through the calling pipeline for each message received from *MSG.
duplicate * produces as many records as it can, but it cannot produce another record until
the previous record has been consumed by synchronise. Once synchronise has received
input on its secondary input stream, which is connected to the calling pipeline, it copies
one record from each stream to its corresponding output streams, which are connected to
the input streams for overlay. overlay overlays the record from its secondary input stream
on the record from its primary input stream and then writes the combined record on its
primary output stream, which is connected to the calling pipeline. Thus, when the records
are returned to the calling pipeline, they have had a background grid added to them. The
purpose of using synchronise here is to prevent duplicate * from flooding the overlay stage
with input records.
To run a stage (here udp) until it produces an output record, store the record in a variable,
and then force the stage to terminate because its output stream is severed, without the
record being consumed:
/* TFTPUDP REXX -- Destroy socket after reading lines from it. */
signal on novalue
signal on error
do forever
'callpipe (end \ name TFTPUDP)',
'|udp 69', /* Listen on port */
'|s:synchronise', /* Cheat it to get a line */
'|stem dgram.', /* Load into stem */
'\literal', /* Get a null line */
'|s:' /* And synchronise with udp's output */
If dgram.0=0 /* Was it forced to stop? */
Then exit
'output' dgram.1 /* Write the line */
end
error: exit RC*(RC<>12)
synchronise first waits for udp to produce a record. When the record becomes available on
its primary input stream, synchronise then peeks at the null record on its secondary input
stream. It now has a record on all defined input streams, so it writes the record from its
primary input stream to its primary output stream, where stem stores it in the stemmed
array. But when it tries to write the null record from its secondary input stream, it
discovers that its secondary output stream is not connected, so it terminates without
consuming either input record. This causes udp to terminate, because its OUTPUT
command receives a return code of 12 (end-of-file).
The point is that udp is forced to terminate immediately rather than when it tries to write
the next record. That is, the resource used by udp is released as soon as it has produced
one record, thus immediately becoming available to be used elsewhere.
Notes:
1. synchronise has been used in front of spec in the past to make spec terminate as soon
as one of its input streams reached end-of-file. This usage should be replaced with
spec STOP ANYEOF.
z/OS
──SYSDSN──┬────────┬──
└─string─┘
Commit Level: sysdsn starts on commit level -2. It commits to level 0 before processing
data.
Premature Termination: sysdsn terminates when it discovers that its primary output
stream is not connected.
Notes:
1. Data set names follow the TSO conventions. Enclose a name that is fully qualified in
single quotes. The prefix is applied to data set names that are not enclosed in quotes.
z/OS
┌─CLASS──*──────────┐
──SYSOUT──┼───────────────────┼──┬─────────────────────┬──
└─CLASS──┬─letter─┬─┘ └─OUTDESCriptor──word─┘
└─digit──┘
┌─SPIN───┐
──┬────────────────────────┬──┼────────┼──┬─────────┬──
└─DESTination──word.word─┘ └─NOSPIN─┘ ├─MACHine─┤
└─ASA─────┘
Syntax Description:
CLASS Specify the output class. Asterisk, which is the default, selects the
default output class for the job. The class can be a letter or a digit;
letters are translated to upper case.
OUTDESCRIPT Specify an output descriptor, which has been defined by the TSO
OUTDES command or the JCL OUTPUT statement. The output descriptor can be one to
twelve characters; it is translated to upper case. By default, no output
descriptor is associated with the data set. Only one output descriptor is
supported.
DESTINATION Specify the destination node and user ID separated by a period. The two
words are translated to upper case and truncated after eight characters.
SPIN Release the data set as soon as it is closed. This is the default.
NOSPIN Release the data set at the end of the job.
MACHINE The records contain machine carriage control in the first column.
ASA The records contain ASA carriage control in the first column.
sysout allocates the data set, opens it, and then commits to level 0.
Operation: sysout writes each input record to SPOOL and then copies it to the output, if it
is connected.
Notes:
1. If the options supported by sysout are not adequate for your application, use the
ALLOCATE command to allocate a SYSOUT data set and then use > DD= to write the
data set.
2. Specify a class on TSO. The default output class is usually purged.
3. printmc is a synonym for sysout that sets MACHINE by default.
4. punch is a synonym for sysout that does not set carriage control by default.
5. Option code J is not supported.
z/OS
          ┌────────┐
──SYSVAR───┬──────┬┴──
           └─word─┘
Operation: The contents of the system variables specified in the argument string (if any)
are written to the pipeline. For each input record, the contents of the system variables
specified in that record are written to the pipeline.
Streams Used: Records are read from the primary input stream and written to the primary
output stream. Null and blank input records are discarded.
Record Delay: sysvar does not delay the last record written for an input record. It does
not delay the response to an input record that contains a single word. It produces all
output records before consuming the input record that contains the corresponding variable
names.
Premature Termination: sysvar terminates when it discovers that its output stream is not
connected. sysvar also terminates if an undefined variable is referenced.
Notes:
1. sysvar is both a filter and a host command interface. It is classified as a filter because
it terminates as soon as its output stream is not connected. Querying variables has no
side effects; there is no point in continuing when the result of the query is discarded.
┌─FIRST─┐ ┌─1──────┐
──TAKE──┼───────┼──┼────────┼──┬───────┬──
└─LAST──┘ ├─number─┤ └─BYTES─┘
└─*──────┘
FIRST Records are selected from the beginning of the file. This is the default.
LAST Records are selected from the end of the file.
number Specify the count of records or bytes to select. The count may be zero,
in which case nothing is selected.
* All records are selected.
BYTES The count is bytes rather than records.
Operation: When BYTES is omitted, take FIRST copies the specified number of records to
the primary output stream (or discards them if the primary output stream is not connected).
If the secondary output stream is defined, take FIRST then passes the remaining input
records to the secondary output stream.
take LAST stores the specified number of records in a buffer. For each subsequent input
record (if any), take LAST writes the record that has been longest in the buffer to the
secondary output stream (or discards it if the secondary output stream is not connected).
The input record is then stored in the buffer. At end-of-file take LAST flushes the records
from the buffer into the primary output stream (or discards them if the primary output
stream is not connected).
When BYTES is specified, operation proceeds as described above, but rather than counting
records, bytes are counted. Record boundaries are considered to be zero bytes wide. In
general, the specified number of bytes will have been taken in the middle of a record,
which is then split after the last byte. When FIRST is specified the first part of the split
record is selected and the remainder is discarded. When LAST is specified, the first part of
the split record is discarded and the second part is selected.
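The byte-counting rule for take FIRST with BYTES can be sketched in Python (a sketch of the selection logic only, under the description above; the helper name is hypothetical and streams and record delay are not modelled):

```python
def take_first_bytes(records, count):
    """Sketch of 'take FIRST n BYTES' as described above: the record that
    straddles byte n is split after the last selected byte; the first part
    is selected and the remainder of that record is discarded.  Subsequent
    records flow to the secondary output stream."""
    selected, secondary = [], []
    taken = 0
    for rec in records:
        if taken >= count:
            secondary.append(rec)             # past the boundary: pass through
        elif taken + len(rec) <= count:
            selected.append(rec)              # wholly within the first n bytes
            taken += len(rec)
        else:
            selected.append(rec[:count - taken])  # first part of split record
            taken = count                     # remainder of this record discarded
    return selected, secondary
```

For example, taking 7 bytes from records of lengths 5, 3, and 2 selects the first record whole, splits the second after its second byte, and passes the third record to the secondary output stream.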
Streams Used: Records are read from the primary input stream. Secondary streams may
be defined, but the secondary input stream must not be connected. take FIRST severs the
primary output stream before it shorts the input to the secondary output stream. take LAST
severs the secondary output stream before it flushes the records from the buffer to the
primary output stream.
Record Delay: An input record is written to exactly one output stream when both output
streams are connected. take FIRST does not delay the record. take LAST delays the
specified number of records.
Commit Level: take starts on commit level -2. It verifies that the secondary input stream
is not connected and then commits to level 0.
CMS
──TAPE──┬───────────┬──┬─────────────────┬──┬───────┬──
├─hexString─┤ │ ┌─1──────┐ │ └─EOTOK─┘
└─TAPchar───┘ └─WTM──┼────────┼─┘
└─number─┘
Warning: tape behaves differently when it is a first stage and when it is not a first stage.
Existing data can be overlaid when tape is unintentionally run other than as a first stage.
To use tape to read data into the pipeline at a position that is not a first stage, specify tape
as the argument of an append or preface control. For example, |append tape ...|
appends the data produced by tape to the data on the primary input stream.
Syntax Description: One word is optional when tape is first in a pipeline; a word, a
keyword with an optional number, and a keyword are optional when tape is not first in a
pipeline. The first argument is a hexadecimal address, or four characters that are translated
to upper case and copied to the CMS RDTAPE or WRTAPE parameter list without inspection.
CMS is rather fussy about which tape drives it reads and writes; TAP0 through TAPF are the
only practical specifications.
When writing to the tape, use the keyword WTM to write one or more tape marks at end-
of-file or end of tape; the default is not to write a tape mark. Specify EOTOK to suppress
the message issued when the tape is full.
Operation: tape reads and writes the tape without positioning it (for instance, the tape is
not rewound).
tape writes records to the tape when it is not first in a pipeline. It stops at end-of-file on
the input or when the tape drive signals end of tape. Having written the file, tape writes
tape marks, as requested by the keyword WTM. Message 291 is issued at end of tape after
the tape marks (if any) are written, unless suppressed with the keyword EOTOK. A control
stage can invoke tape repetitively to write a multivolume file from a single input file.
tape does not inspect tape labels (you can create tape labels with CMS/TSO Pipelines if
you wish). Use WTM to write a tape mark after a file is written to tape. Other tape control
operations are performed with the CMS TAPE command or equivalent.
tape handles blocks up to 65535 characters (64K-1), which is the maximum length for the
underlying CMS interface.
To write a file on several unlabelled volumes, switching repeatedly between tapes 181 and
182:
/* Multivolume unlabelled tape write */
signal on novalue
tapes='181 182'
do forever
parse var tapes tape tapes /* Get drive to use */
tapes=tapes tape /* Put it at the end of the list */
'callpipe (name MVULTAPE)',
'|*:', /* Input file */
'|tape' tape 'wtm eotok' /* Write tape */
If RC/=0
Then exit RC
'peekto' /* End-of-file? */
If RC/=0
Then leave /* Most likely */
address command 'TAPE RUN (' tape
'callpipe cp MSG OPERATOR Please mount next tape.'
end
exit RC*(RC¬=12)
To extract the data records from a file in CMS TAPE DUMP format that contains records of
variable length:
Notes:
1. A tape mark is written on output only if you ask for it. This lets you build a tape file
with multiple pipeline commands. Remember to write a tape mark when you want
one.
2. CMS tapes usually end with two tape marks.
3. tape does not convert to and from the TAPE DUMP format; it reads and writes blocks
from a tape.
4. Some (if not all) tape units specify that certain command sequences are not valid, but
do not check for such sequences. Refer to the reference manual for the tape drive you
are using when mixing reads and writes to a tape. Usually it is required that a tape
mark be spaced over before starting to write after it. Therefore, if you have been
reading a file and wish to write after the tape mark that terminated the read operation,
you must perform a backward space file (TAPE BSF) followed by a forward space file
(TAPE FSF) before invoking the pipeline to write to the tape.
5. Many tape drives require a minimum of 18 bytes in a block. Shorter blocks may be
considered noise.
! ──TCPCKSUM──┬────────┬──
! └─number─┘
! Type: Filter.
! Operation: Without an operand, tcpcksum computes the checksum of each input record
! and produces a 16-bit result checksum on its output. This result is all zeros when the
! input message contains a valid TCP/IP checksum field.
! When specified, the operand designates the begin column of the 16-bit checksum field
! within the record. The checksum of the record is computed and stored into the specified
! position; the updated record is then written to the output. For correct interoperation with
! TCP/IP, the checksum field in the input record must contain binary zeros.
! Premature Termination: tcpcksum terminates when it discovers that its output stream is
! not connected.
! Notes:
! 1. tcpcksum interoperates with the IP, TCP, and UDP headers.
! 2. Refer to RFCs 1071, 1141, 1624, and 1936.
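The checksum itself is the one's-complement sum of RFC 1071; a minimal Python sketch (the function name is hypothetical):

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement Internet checksum over 16-bit words (RFC 1071).
    A message whose embedded checksum field is valid sums to all ones,
    so the complemented result is zero, matching the all-zero result
    tcpcksum reports for a valid record."""
    if len(data) % 2:                 # pad an odd-length message with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return (~total) & 0xFFFF
```

Verifying a message whose checksum field already holds this value yields zero, which is the all-zero result described above.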
                                 ┌─────────────────────────────────┐
──TCPCLIENT──IPaddress──number────┬───────────────────────────────┬┴──
├─┤ Deblock ├───────────────────┤
├─GETSOCKName───────────────────┤
├─GREETING──────────────────────┤
├─KEEPALIVe─────────────────────┤
├─LINGER──number────────────────┤
│ ┌─ANY───────┐ │
├─LOCALIPaddress──┼─HOSTID────┼─┤
│ └─IPaddress─┘ │
├─LOCALport──number─────────────┤
├─ONERESPONSE───────────────────┤
├─OOBINLINE─────────────────────┤
├─REUSEADDR─────────────────────┤
├─SF────────────────────────────┤
├─SF4───────────────────────────┤
¡ ├─EMSGSF4───────────────────────┤
├─STATistics────────────────────┤
├─TIMEOUT──number───────────────┤
└─USERid──word──────────────────┘
Deblock:
├──DEBLOCK──┬─SF──────────────────────┬──
├─SF4─────────────────────┤
├─LINEND──┬──────┬────────┤
│ └─xorc─┘ │
├─crlf────────────────────┤
└─STRING──delimitedString─┘
──┬────────────────────────┬──┤
└─GROUP──delimitedString─┘
Syntax Description: Two positional operands are required. The first operand specifies the
IP address of the host where the server is running. The IP address is specified as a
“dotted-decimal” number (for example, 9.55.5.13) or (on CMS) as a host name or a host
name and a domain (for example, jph.dk.ibm.com). The second operand specifies the
port at which the server is listening.
LOCALIPADDR Specify the local IP address to be used when binding the socket. The
default, ANY, specifies that TCP/IP may use any interface address. (An IP
address of binary zeros is used to bind the socket.) HOSTID specifies that
TCP/IP should use the IP address that corresponds to the host name.
Specify the dotted-decimal notation or (on CMS) the host name for a
particular interface to be used.
LOCALPORT Specify the local port to be bound to the client. The default is zero,
which causes TCP/IP to assign the port number. Use this option if a port
is reserved for your use.
ONERESPONSE Expect one response record to each transmitted record.
OOBINLINE Turn on the OOBINLINE socket option.
REUSEADDR Turn on the REUSEADDR socket option.
SF Add halfword length field to records being sent. The length field
includes its own length; the null record would be transmitted as
X'0002'. Expect halfword length field in messages received; deblock or
block messages and write output records for each complete logical
record.
SF4 Add fullword length field to records being sent. The length field
includes its own length; the null record would be transmitted as
X'00000004'. Expect fullword length field in messages received;
deblock or block messages and write output records for each complete
logical record.
STATISTICS Write messages containing statistics when tcpclient terminates. The
format of these statistics is undefined. STATS is a synonym.
TIMEOUT Specify the timeout value in seconds. tcpclient will terminate after the
timeout if it receives no response to sending a transaction. When SF, SF4
or DEBLOCK is specified, tcpclient will read the entire response record; it
will time out if any segment does not arrive. When none of these
options is specified, tcpclient can ensure only that the first segment of
the response arrives within the specified time limit.
USERID Specify the user ID of the virtual machine or started task where TCP/IP
runs. The default is TCPIP.
Operation: Input records are written to the socket as they arrive; records are read from
the socket and passed to the primary output stream as they arrive. When SF or SF4 is
specified (without specifying DEBLOCK), a record descriptor word is transmitted in front of
each input record.
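The record descriptor framing selected by SF can be sketched in Python (hypothetical helper names; halfword fields are shown, and SF4 is the same with fullword fields):

```python
import struct

def block_sf(records):
    """Sketch of the SF option on the sending side: prefix each record with a
    halfword length field that includes its own two bytes, so a null record
    is transmitted as X'0002'."""
    return b"".join(struct.pack(">H", len(r) + 2) + r for r in records)

def deblock_sf(stream):
    """Sketch of the receiving side: split the byte stream back into logical
    records using the halfword length fields, regardless of how TCP/IP
    segmented the transmission."""
    records, pos = [], 0
    while pos < len(stream):
        (length,) = struct.unpack(">H", stream[pos:pos + 2])
        records.append(stream[pos + 2:pos + length])
        pos += length
    return records
```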
Data received on the socket are written to the primary output stream. When DEBLOCK is
specified, the appropriate deblocking stage is inserted into the output stream. When GROUP
is further specified, the grouping stage is inserted into the output stream. A response is
deemed to have been received only when a record is passed to the stage initially connected
to the output of tcpclient. Thus, ONERESPONSE and TIMEOUT apply to the point after the
deblocking and grouping stages.
¡ The primary output stream is severed when end-of-file is received from the socket. The
¡ socket shutdown for write function is performed when tcpclient discovers that the primary
¡ input stream has been severed. If ONERESPONSE is specified, the socket is then closed.
When end-of-file is received on the input stream and LINGER is specified, tcpclient waits
until the connection is closed or the number of seconds specified has expired, whichever
occurs first. No indication is provided as to which event occurs; indeed, they could occur
simultaneously.
Streams Used: Records are read from the primary input stream; no other input stream
may be connected. When the secondary output stream is defined, a record is written to it
when tcpclient terminates after TCP/IP has reported an “ERRNO”.
Commit Level: tcpclient starts on commit level -10. It connects to the server’s port,
verifies that its secondary input stream is not connected, and then commits to level 0.
tcpclient also terminates when an error is reflected by TCP/IP (known as an ERRNO). How
it terminates depends on whether the secondary output stream is defined or not.
When the secondary output stream is not defined, error messages are issued to describe the
error and tcpclient terminates with a nonzero return code.
When the secondary output stream is defined, a single record is written to the secondary
output stream; tcpclient then terminates with the return code zero. The record written
contains the error number; the second word contains the symbolic name of the error
number if the error number is recognised by CMS/TSO Pipelines. The assumption is that a
REXX program will inspect the error number and decide whether it should retry the opera-
tion, discard the current transaction and retry, or give up entirely.
tcpclient also stops if the immediate command PIPMOD STOP is issued or if a record is
passed to pipestop.
Examples:
pipe literal HELO | tcpclient 9.55.5.13 7 linger 5 | console
Notes:
1. TCP/IP transports a byte stream; you cannot expect record boundaries to be preserved
across the network. Use the option SF or SF4 to add record descriptors to the data
sent. This presumes that the server expects such record descriptors; this is not the
TCP/IP tradition.
2. Many servers expect ASCII commands that are terminated by line ends.
3. tcpclient does not perform name resolution on TSO; you must specify the dotted-
decimal notation for the location of the server.
On CMS, you can specify a host name or a host name followed by a domain. CMS
Pipelines calls RXSOCKET to do the actual name resolution. As a consequence, the
name is resolved using RXSOCKET rules. This implies that the file TCPIP DATA must be
available and must point to the name server. RXSOCKET (unlike CMS Pipelines) uses
the server virtual machine specified in TCPIP DATA.
4. CMS/TSO Pipelines defines error numbers in the 5000 range in addition to the ones
defined by TCP/IP:
Return Codes: When the secondary output stream is defined and tcpclient terminates due
to an error that is reported by TCP/IP as an ERRNO, tcpclient sets return code 0 because the
error information is available in the record that is written to the secondary output stream.
When tcpclient terminates because of some other error (for example, if it could not connect
to the TCP/IP address space), the secondary output stream is ignored and the return code is
not zero, reflecting the number of the message issued to describe this error condition.
            ┌──────────────────┐
──TCPDATA────┬────────────────┬┴──
├─┤ Deblock ├────┤
├─GETSOCKName────┤
¡ ├─GREETING───────┤
├─KEEPALIVe──────┤
├─LINGER──number─┤
├─ONERESPONSE────┤
├─OOBINLINE──────┤
├─REUSEADDR──────┤
├─SF─────────────┤
├─SF4────────────┤
└─STATistics─────┘
Deblock:
├──DEBLOCK──┬─SF──────────────────────┬──
├─SF4─────────────────────┤
├─LINEND──┬──────┬────────┤
│ └─xorc─┘ │
├─crlf────────────────────┤
└─STRING──delimitedString─┘
──┬────────────────────────┬──┤
└─GROUP──delimitedString─┘
SF Add halfword length field to records being sent. The length field
includes its own length; the null record would be transmitted as
X'0002'. Expect halfword length field in messages received; deblock or
block messages and write output records for each complete logical
record.
SF4 Add fullword length field to records being sent. The length field
includes its own length; the null record would be transmitted as
X'00000004'. Expect fullword length field in messages received;
deblock or block messages and write output records for each complete
logical record.
STATISTICS Write messages containing statistics when tcpdata terminates. The
format of these statistics is undefined. STATS is a synonym.
Operation: tcpdata peeks at the first input record, which contains the information required
to take the socket that represents the conversation with the client. When tcpdata has
obtained the socket, it passes input records to the client and writes data it reads from the
socket to the output stream. When SF or SF4 is specified (without specifying DEBLOCK), a
record descriptor word is transmitted in front of each input record.
Data received on the socket are written to the primary output stream. When DEBLOCK is
specified, the appropriate deblocking stage is inserted into the output stream. When GROUP
is further specified, the grouping stage is inserted into the output stream. A response is
deemed to have been received only when a record is passed to the stage initially connected
to the output of tcpdata. Thus, ONERESPONSE and TIMEOUT apply to the point after the
deblocking and grouping stages.
¡ The primary output stream is severed when end-of-file is received from the socket. The
¡ socket shutdown for write function is performed when tcpdata discovers that the primary
¡ input stream has been severed. If ONERESPONSE is specified, the socket is then closed.
Input Record Format: The first record must be in the format written by tcplisten:
Streams Used: Records are read from the primary input stream; no other input stream
may be connected. When the secondary output stream is defined, a record is written to it
when tcpdata terminates after TCP/IP has reported an “ERRNO”.
Commit Level: tcpdata starts on commit level -10. It verifies that its secondary input
stream is not connected, and then commits to level 0.
tcpdata also terminates when an error is reflected by TCP/IP (known as an ERRNO). How it
terminates depends on whether the secondary output stream is defined or not.
When the secondary output stream is not defined, error messages are issued to describe the
error and tcpdata terminates with a nonzero return code.
When the secondary output stream is defined, a single record is written to the secondary
output stream; tcpdata then terminates with the return code zero. The record written
contains the error number; the second word contains the symbolic name of the error
number if the error number is recognised by CMS/TSO Pipelines. The assumption is that a
REXX program will inspect the error number and decide whether it should retry the opera-
tion, discard the current transaction and retry, or give up entirely.
tcpdata also stops if the immediate command PIPMOD STOP is issued or if a record is
passed to pipestop.
The input record is fed through fanin to tcpdata before fanin completes the loop that will
transmit the response back to the client. The record received is sent unmodified.
Notes:
1. Normally, the server will send a response that is based on a transaction from the
client. To do this, the pipeline must have feedback. Be sure to avoid stalls resulting
from this; elastic is recommended to buffer sufficient records to prevent the stall.
2. TCP/IP may segment transmissions so that tcpdata may write a different number of
output records (fewer or more, in general) than the corresponding tcpclient stage read
on its input when it transmitted the data.
If both the server and the client are implemented using CMS/TSO Pipelines, you can
use the SF or SF4 options in both device drivers to maintain the record structure across
the byte streams of the network.
If you are writing a server, you can specify that the package as transmitted contains
the package length in the first two or four bytes and then use the appropriate option to
simplify your server; but beware that this may not be popular with the client imple-
menters.
3. CMS/TSO Pipelines defines error numbers in the 5000 range in addition to the ones
defined by TCP/IP:
5000 (EpipeResponseTimedOut) No response was received within the interval
specified by TIMEOUT.
5001 (EpipeStopped) The pipeline was signalled to stop by passing a record to
pipestop or through a similar action.
5002 (EpipeSocketClosed) ONERESPONSE is specified and the socket was closed by
the communications partner without it sending a response to a transaction.
CMS/TSO Pipelines also defines this error number:
0000 (OKSocketClosed) The connection was closed by the communications partner.
The stage is not expecting a response; that is, ONERESPONSE is omitted.
Return Codes: When the secondary output stream is defined and tcpdata terminates due
to an error that is reported by TCP/IP as an ERRNO, tcpdata sets return code 0 because the
error information is available in the record that is written to the secondary output stream.
When tcpdata terminates because of some other error (for example, if it could not connect
to the TCP/IP address space), the secondary output stream is ignored and the return code is
not zero, reflecting the number of the message issued to describe this error condition.
                   ┌───────────────────────────────────┐
──TCPLISTEN──number───┬───────────────────────────────┬┴──
                      ├─BACKLOG──number───────────────┤
                      ├─GETSOCKName───────────────────┤
                      │                 ┌─ANY───────┐ │
                      ├─LOCALIPaddress──┼─HOSTID────┼─┤
                      │                 └─IPaddress─┘ │
                      ├─REUSEADDR─────────────────────┤
                      ├─STATistics────────────────────┤
                      └─USERid──word──────────────────┘
Syntax Description: Specify as the first operand the number of the port that tcplisten
should listen on. Specify 0 to have TCP/IP assign the port number; use the GETSOCKNAME
option to discover the port number assigned by TCP/IP.
BACKLOG Specify the maximum number of pending connection requests for the
port. The default is 10.
GETSOCKNAME Write the contents of the socket address structure to the primary output
| stream after the socket is bound. That is, as the first record after
| tcplisten has committed to level 0, but before it starts listening.
LOCALIPADDR Specify the local IP address to be used when binding the socket. The
default, ANY, specifies that TCP/IP may use any interface address. (An IP
address of binary zeros is used to bind the socket.) HOSTID specifies that
TCP/IP should use the IP address that corresponds to the host name.
Specify the dotted-decimal notation or (on CMS) the host name for a
particular interface to be used.
REUSEADDR Turn on the REUSEADDR socket option.
STATISTICS Write messages containing statistics when tcplisten terminates. The
format of these statistics is undefined. STATS is a synonym.
USERID Specify the user ID of the virtual machine or started task where TCP/IP
runs. The default is TCPIP.
| Operation: tcplisten creates a socket, binds it to the specified port, writes the socket
| address if GETSOCKNAME is specified, and listens on the socket. tcplisten then performs
these steps repeatedly:
¡ 1. If tcplisten is not a first stage, it waits for a record to arrive on the primary input
¡ stream. It terminates if the primary input stream is severed.
: 2. It accepts a connection. This will cause it to wait when no connection request is
: queued in the TCP/IP stack.
3. It performs the givesocket() function to allow another program to take the socket.
4. It writes an output record that describes the socket that is allocated to the connection.
This record should be passed to a tcpdata stage without being delayed. The tcpdata
will perform the takesocket() function to obtain the socket.
5. It closes the socket. If the socket has not been taken by a tcpdata stage, possibly
because the request should be rejected, TCP/IP will now terminate the connection.
¡ 6. If tcplisten is not a first stage, it consumes the record on the primary input stream.
Streams Used: Secondary streams may be defined. When the secondary output stream is
defined, a record is written to it when tcplisten terminates after TCP/IP has reported an
“ERRNO”.
Commit Level: tcplisten starts on commit level -10. It binds a socket to the port, verifies
that its secondary input stream is not connected, and then commits to level 0.
Premature Termination: When tcplisten is first in the pipeline, it does not terminate
normally. It terminates when it discovers that its primary output stream is not connected.
tcplisten also terminates when an error is reflected by TCP/IP (known as an ERRNO). How
it terminates depends on whether the secondary output stream is defined or not.
When the secondary output stream is not defined, error messages are issued to describe the
error and tcplisten terminates with a nonzero return code.
When the secondary output stream is defined, a single record is written to the secondary
output stream; tcplisten then terminates with return code zero. The first word of the record
written contains the error number; the second word contains the symbolic name of the error
number if it is recognised by CMS/TSO Pipelines. The assumption is that a
REXX program will inspect the error number and decide whether it should retry the opera-
tion, discard the current transaction and retry, or give up entirely.
tcplisten also stops if the immediate command PIPMOD STOP is issued or if a record is
passed to pipestop.
/* TCPSERVER REXX */
signal on error
do forever
   'peekto'                     /* Wait for connection request */
   'addpipe *.output: | i: fanin | tcpdata | server | i:' /* subtask */
   'callpipe *: | take 1 | *:'  /* Pass one record to subtask  */
   'sever output'               /* Let server run unconnected  */
end
error: exit RC*(RC<>0)
The four pipeline commands in the example above implement a loop that spawns a sepa-
rate pipeline for each connection request. The record produced when tcplisten receives a
connection request is passed to this pipeline, which is then cut loose to live its own inde-
pendent life.
Any vetting of the client should be done before the ADDPIPE pipeline command is issued.
If the client is not authorised, the input record should be consumed by a READTO pipeline
command. This will cause tcplisten to close the socket without its being taken by the
server task, and thus, the request will be rejected.
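For instance, the example above could be extended with such a vetting step; authorised()
stands for an application-supplied REXX function and is not part of CMS/TSO Pipelines:
/* TCPVET REXX -- TCPSERVER with client vetting (sketch) */
signal on error
do forever
   'peekto record'             /* Wait for connection request      */
   if \authorised(record) then /* Hypothetical application check   */
      'readto'                 /* Consume record; socket not taken,
                                  so the request is rejected       */
   else do
      'addpipe *.output: | i: fanin | tcpdata | server | i:'
      'callpipe *: | take 1 | *:'
      'sever output'
   end
end
error: exit RC*(RC<>0)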
Notes:
1. tcplisten does not read or write data to the sockets it handles.
2. CMS/TSO Pipelines defines error numbers in the 5000 range in addition to the ones
defined by TCP/IP:
5000 (EpipeResponseTimedOut) No response was received within the interval
specified by TIMEOUT.
5001 (EpipeStopped) The pipeline was signalled to stop by passing a record to
pipestop or through a similar action.
5002 (EpipeSocketClosed) ONERESPONSE is specified and the socket was closed by
the communications partner without it sending a response to a transaction.
CMS/TSO Pipelines also defines this error number:
0000 (OKSocketClosed) The connection was closed by the communications partner.
The stage is not expecting a response; that is, ONERESPONSE is omitted.
Return Codes: When the secondary output stream is defined and tcplisten terminates due
to an error that is reported by TCP/IP as an ERRNO, tcplisten sets return code 0 because the
error information is available in the record that is written to the secondary output stream.
When tcplisten terminates because of some other error (for example, if it could not connect
to the TCP/IP address space), the secondary output stream is ignored and the return code is
not zero, reflecting the number of the message issued to describe this error condition.
¡ ──THREEWAY──inputRange──
¡ Type: Gateway.
¡ Operation: The input record is split before and after the specified input range. The part
¡ up to the beginning of the range is written to the primary output stream; the contents of
¡ the input range is then written to the secondary output stream; and the balance of the
¡ record is then written to the tertiary output stream. Finally, the input record is consumed.
¡ Streams Used: Three streams must be defined. Records are read from the primary input
¡ stream; no other input stream may be connected.
¡ Commit Level: threeway starts on commit level -2. It verifies that the primary input
¡ stream is the only connected input stream and then commits to level 0.
¡ Notes:
¡ 1. 3way is a synonym for threeway.
¡ 2. A null record is written when the corresponding part of the record is not present.
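As a sketch in the style of the totarget example elsewhere in this chapter (the column
range 3-5 is arbitrary), threeway can be driven from a subroutine pipeline:
/* Threeway example (sketch) */
'callpipe (end ? name THREEWAY)',
   '|*:',               /* Connect to caller's input   */
   '|t: threeway 3-5',  /* Split around columns 3 to 5 */
   '|*.output.0:',      /* Part before the range       */
   '?t:',
   '|*.output.1:',      /* Contents of the range       */
   '?t:',
   '|*.output.2:'       /* Balance of the record       */
exit RC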
  ──TIMEstamp──┬─┬────────────────────────┬─┬──
               │ │ ┌─8──────┐             │ │
               │ └─┴─number─┴──┬────────┬─┘ │
!              │               └─number─┘   │
!              ├─SHOrtdate──────────────────┤
!              ├─ISOdate────────────────────┤
!              ├─FULldate───────────────────┤
!              ├─STAndard───────────────────┤
!              └─STRing──delimitedString────┘
Type: Filter.
Syntax Description: The formatted timestamp can be the “raw” 16-byte sorted timestamp,
a predefined format, or a custom format.
number The first number specifies the position, relative to the end of the
formatted timestamp, of the first character to include; it is quietly limited
to 16. The default is 8, which omits the date when the second number
! is omitted. The second number specifies the count of characters to
! include; it defaults to the first number and is quietly restricted to at
! most that value.
!! FULLDATE The timestamp is formatted in the American format, with the
! century: 3/09/1946 23:59:59.
!! ISODATE The timestamp is formatted with the century in one of the formats
! approved by the International Organization for Standardization:
! 1946-03-09 23:59:59.
!! SHORTDATE The timestamp is formatted in the American format, without the
! century: 3/09/46 23:59:59.
!! STANDARD The timestamp is formatted as a single word in a form that can be
! used for comparisons: 19460309235959.
!! STRING Specify custom timestamp formatting, similar to the POSIX strftime()
! function. The delimited string specifies formatting as literal text and
! substitutions are indicated by a percentage symbol (%) followed by a
! character that defines the substitution. These substitution strings are
! recognised by timestamp:
! %% A single %.
! %Y Four digits year including century (0000-9999).
! %y Two-digit year of century (00-99).
! %m Two-digit month (01-12).
! %n Two-digit month with initial zero changed to blank ( 1-12).
! %d Two-digit day of month (01-31).
! %e Two-digit day of month with initial zero changed to blank ( 1-31).
! %j Julian day of year (001-366).
! %H Hour, 24-hour clock (00-23).
! %k Hour, 24-hour clock first leading zero blank ( 0-23).
! %M Minute (00-59).
! %S Second (00-60).
! %F Equivalent to %Y-%m-%d (the ISO 8601 date format).
! %T Short for %H:%M:%S.
! %t Tens and hundredth of a second (00-99).
The lengths of the formatted timestamps and the equivalent strings are:
Standard 14 %Y%m%d%H%M%S
ISO      19 %F %T
Full     19 %n/%d/%Y %k:%M:%S
Short    17 %n/%d/%y %k:%M:%S
Premature Termination: timestamp terminates when it discovers that its output stream is
not connected.
Examples: To display the current time, or the current date and time:
pipe literal | timestamp | console
10590197
Ready;
pipe literal | timestamp 16 | console
2010071410590199
Ready;
! pipe literal | timestamp string /Day %e of Month %n in %Y at %H/ | console
! Day 14 of Month 7 in 2010 at 10
! Ready;
To timestamp records that are logged in a service machine:
'pipe starmsg | ... | timestamp 16 | >> service log a'
Notes:
| 1. timestamp obtains the local time. Use spec TOD C2T 1 to obtain the current time in
| UTC, assuming, of course, that the TOD clock is set to the standard epoch.
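For instance, the current UTC time can be displayed like this, using the spec formatting
given in the note above (the output depends on the clock and is therefore not shown):
pipe literal | spec TOD C2T 1 | console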
tokenise—Tokenise Records
tokenise splits an input record into tokens, writing an output record for each. Blanks
always delimit tokens; they are discarded. The specified string contains additional charac-
ters; when an input record contains one of these characters, the character is written by
itself in an output record.
──┬─TOKENISE─┬──delimitedString──┬─────────────────┬──
  └─TOKENIZE─┘                   └─delimitedString─┘
Type: Filter.
Operation: A line is written for each token in the input record. Blanks always delimit
tokens. The first delimited string lists characters that delimit other tokens.
The second argument string, if present, is written as a separate record after each input line
is processed.
Record Delay: tokenise does not delay the last record written for an input record.
Premature Termination: tokenise terminates when it discovers that its output stream is
not connected.
Examples: To tokenise according to the CMS rules, adding a blank line after the tokens of
an input record:
...| tokenise /()/ / / | pad 8 | chop 8 | xlate upper |...
To tokenise without padding, chopping, or translation:
pipe literal apples = bananas(cherries+dates) | tokenise /()=+/ | console
apples
=
bananas
(
cherries
+
dates
)
Ready;
Notes:
1. Tokens are neither padded, truncated, nor translated to upper case.
  ──┬─TOLABEL──┬────────┬─────────────────────────────────────┬──
    │          └─string─┘                                     │
|   │                        ┌─EXCLUSIVe─┐                    │
    └─STRTOLABEL──┬─────────┬──┼───────────┼──delimitedString─┘
¡                 └─ANYcase─┘  └─INCLUSIVe─┘
Syntax Description: A string is optional for tolabel. The string starts after exactly one
blank character. Leading and trailing blanks are significant.
Operation: Characters at the beginning of each input record are compared with the argu-
ment string. When ANYCASE is specified, case is ignored in this comparison. Any record
matches a null argument string. A record that is shorter than the argument string does not
match.
tolabel copies records up to (but not including) the matching one to the primary output
stream (or discards them if the primary output stream is not connected). If the secondary
output stream is defined, tolabel then passes the remaining input records to the secondary
output stream.
The matching record stays in the pipeline if the secondary output stream is not defined; it
can be read again if the current pipeline is defined with CALLPIPE.
Streams Used: Records are read from the primary input stream. Secondary streams may
be defined, but the secondary input stream must not be connected. If the secondary output
stream is defined, tolabel severs the primary output stream before it passes the remaining
input records to the secondary output stream.
Record Delay: An input record is written to exactly one output stream when both output
streams are connected. tolabel strictly does not delay the record.
Commit Level: tolabel starts on commit level -2. It verifies that the secondary input
stream is not connected and then commits to level 0.
Notes:
! 1. CASEANY, CASEIGNORE, CASELESS, and IGNORECASE are all synonyms for ANYCASE.
2. Remember that REXX continuation functionally replaces a trailing comma with a blank.
Also recall that when two strings are separated by one or more blanks, REXX concat-
enates them with a single blank. Use the concatenation operator (||) before the
comma at the end of the line if your portrait style has the stage separators at the left
side of the stage and the trailing blank is significant to your application.
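As an illustration, assuming split with its default of splitting at blanks, tolabel selects
the records before the first one that begins with “beta”:
pipe literal alpha beta gamma | split | tolabel beta | console
alpha
Ready;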
──TOTARGET──word──┬────────┬──
                  └─string─┘
Type: Control.
Syntax Description: The argument string is the specification of a selection stage. The
stage must support a connected secondary output stream. If the secondary input stream to
totarget is connected, the argument stage must also support a connected secondary input
stream.
Commit Level: totarget starts on commit level -2. It issues a subroutine pipeline that
contains the argument stage. This subroutine must commit to level 0 in due course.
Examples: To pass to the primary output stream all records up to the first one that
contains a string and to pass the remaining records to the secondary output stream:
/* Totarget example */
'callpipe (end ? name TOTARGET)',
   '|*:',                       /* Connect to input */
   '|f: totarget locate /abc/', /* Look for it      */
   '|*.output.0:',              /* Up to target     */
   '?f:',
   '|*.output.1:'               /* Target and rest  */
exit RC
Notes:
1. It is assumed that the argument stage behaves like a selection stage: the stage should
produce without delay exactly one output record for each input record; it should termi-
nate without consuming the current record when it discovers that its output streams are
no longer connected. However, for each input record the stage can produce as many
records as it pleases on its secondary output stream; it can delete records. The stage
should not write a record first to its secondary output stream and then to its primary
output stream; this would cause the trigger record to be written to both output streams.
If the argument stage has delayed record(s) (presumably by storing them in an internal
buffer) at the time it writes a record to its primary output stream, it will not be able to
write these records to any output stream; the streams that are connected to the two
output streams are severed when the argument stage writes a record to its primary
output stream. End-of-file is reflected on this write. The records held internally in the
argument stage will of necessity be lost when the stage terminates.
2. The argument string to totarget is passed through the pipeline specification parser only
once (when the scanner processes the totarget stage), unlike the argument strings for
append and preface.
¡ 3. totarget is implemented using fillup and fanoutwo. The stage under test has only
¡ primary streams defined. The primary output stream is connected to a stage that reads
¡ a record without consuming it and then terminates. This means that any usage that
¡ depends on the secondary stream in the stage under test will fail.
Return Codes: If totarget finds no errors, the return code is the one received from the
selection stage.
¡ ──TRACKBLOCK──
¡ Input Record Format: For each track, the home address (FCCHH) is followed by as many
¡ records as there are data records (blocks) on the track. The data records begin with an
¡ 8-byte count area (CCHHRKDD). The end of track record is optional.
¡ Output Record Format: An 8-byte check word that contains the string fpltrack. Then
¡ follows the contents of the track as read by the read track CCW. This contains a 5-byte
¡ home address and a number of records starting with record 0 (if the track is formatted in
¡ the standard format). Each record contains one to three parts:
¡ The count area. This is an 8-byte area that contains the cylinder, head, and record
¡ number followed by the size of the key area and the data area (CCHHRKDD).
¡ Key area, if present.
¡ Data area, if present.
¡ Record Delay: When a track contains an end of track record, the output record is not
¡ delayed relative to the end of track record; otherwise the output is delayed until the arrival
¡ of the next home address record.
¡ Premature Termination: trackblock terminates when it discovers that its output stream is
¡ not connected.
¡ trackdeblock—Deblock Track
¡ trackdeblock splits the track into its parts, the home address and data records.
¡
¡ ──TRACKDEBLOCK──┬───────────┬──
¡                 └─TERMinate─┘
¡ Input Record Format: An 8-byte check word that contains the string fpltrack. Then
¡ follows the contents of the track as read by the read track CCW. This contains a 5-byte
¡ home address and a number of records starting with record 0 (if the track is formatted in
¡ the standard format). Each record contains one to three parts:
¡ The count area. This is an 8-byte area that contains the cylinder, head, and record
¡ number followed by the size of the key area and the data area (CCHHRKDD).
¡ Key area, if present.
¡ Data area, if present.
¡ Output Record Format: For each track, the home address (FCCHH) is followed by as
¡ many records as there are data records (blocks) on the track. The data records begin with
¡ an 8-byte count area (CCHHRKDD). When TERMINATE is specified, trackdeblock writes a
¡ record containing 8 bytes of all one bits as an end of track marker.
¡ Premature Termination: trackdeblock terminates when it discovers that its output stream
¡ is not connected.
¡ Notes:
¡ 1. The beginning of a new track can be inferred by the length of the 5-byte home
¡ address; all other records contain at least the eight bytes of their count area.
¡¡ CMS
¡ ──TRACKREAD──devaddr──┬────────────────────────────┬──
¡                       └─number──number──┬────────┬─┘
¡                                         └─number─┘
¡ Syntax Description: Specify the device number and an initial range of tracks to read.
¡ Operation: trackread verifies the device number as part of the syntax check.
¡ Input Record Format: Additional extents to be read. Specify two or three blank-
¡ delimited words: the initial cylinder and track, and the count of tracks or an asterisk.
¡ Output Record Format: An 8-byte check word that contains the string fpltrack. Then
¡ follows the contents of the track as read by the read track CCW. This contains a 5-byte
¡ home address and a number of records starting with record 0 (if the track is formatted in
¡ the standard format). Each record contains one to three parts:
¡ The count area. This is an 8-byte area that contains the cylinder, head, and record
¡ number followed by the size of the key area and the data area (CCHHRKDD).
¡ Key area, if present.
¡ Data area, if present.
¡ Premature Termination: trackread terminates when it discovers that its primary output
¡ stream is not connected.
¡ Notes:
¡ 1. The disk need not be accessed or even supported by CMS.
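As an illustrative sketch only (the device number 193, the extent, and the file name are
assumptions), the first ten tracks of cylinder 0 could be captured in squished form:
pipe trackread 193 0 0 10 | tracksquish | > track image a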
¡ tracksquish—Squish Tracks
¡ tracksquish reduces the size of a standard track that is formatted, but unused.
¡
¡ ──TRACKSQUISH──
¡ Operation: tracksquish passes records that already are in the squished format; it reduces
¡ the size of a track in the standard format (as written by trackread) that is formatted by CP
¡ or CMS.
¡ Input Record Format: An 8-byte check word that contains the string fpltrack. Then
¡ follows the contents of the track as read by the read track CCW. This contains a 5-byte
¡ home address and a number of records starting with record 0 (if the track is formatted in
¡ the standard format). Each record contains one to three parts:
¡ The count area. This is an 8-byte area that contains the cylinder, head, and record
¡ number followed by the size of the key area and the data area (CCHHRKDD).
¡ Key area, if present.
¡ Data area, if present.
¡ Output Record Format: An 8-byte check word that contains the string fplsquis. The
¡ remainder of the record is unspecified.
¡ Premature Termination: tracksquish terminates when it discovers that its primary output
¡ stream is not connected.
¡ Notes:
¡ 1. tracksquish does not compress the track.
¡ ──TRACKVERIFY──
¡ Input Record Format: An 8-byte check word that contains the string fpltrack. Then
¡ follows the contents of the track as read by the read track CCW. This contains a 5-byte
¡ home address and a number of records starting with record 0 (if the track is formatted in
¡ the standard format). Each record contains one to three parts:
¡ The count area. This is an 8-byte area that contains the cylinder, head, and record
¡ number followed by the size of the key area and the data area (CCHHRKDD).
¡ Key area, if present.
¡ Data area, if present.
¡ Streams Used: Records are read from the primary input stream and passed unmodified to
¡ the primary output stream; trackverify produces no output of its own.
¡ Premature Termination: trackverify terminates when it discovers that its output stream is
¡ not connected.
¡¡ CMS
¡ ──TRACKWRITE──devaddr──┬─word────────────────────┬──number──
¡                        ├─STRing──delimitedString─┤
¡                        └─*───────────────────────┘
¡ ──number──
¡ Syntax Description: Specify the device number, the current label on the device, and the
¡ first and last cylinder in the writable extent.
¡ The first and last cylinders specify the extent into which tracks are written; the actual track
¡ address is obtained from the input record.
¡ Operation: trackwrite verifies the device number and label as part of the syntax check.
¡ Input Record Format: trackwrite supports input records in the format produced by both
¡ the standard track format (trackread) and the squished track format (tracksquish).
¡ The standard track format contains an 8-byte check word that contains the string
¡ fpltrack. Then follows the contents of the track as read by the read track CCW. This
¡ contains a 5-byte home address and a number of records starting with record 0 (if the track
¡ is formatted in the standard format). Each record contains one to three parts:
¡ The count area. This is an 8-byte area that contains the cylinder, head, and record
¡ number followed by the size of the key area and the data area (CCHHRKDD).
¡ Key area, if present.
¡ Data area, if present.
¡ The squished track format contains an 8-byte check word that contains the string
¡ fplsquis. The remainder of the record is unspecified.
¡ trackxpand—Unsquish Tracks
¡ trackxpand expands a squished track to the standard format.
¡
¡ ──TRACKXPAND──
¡ Operation: trackxpand passes records that already are in the standard format; it expands
¡ squished records to the standard format.
¡ Input Record Format: An 8-byte check word that contains the string fplsquis. The
¡ remainder of the record is unspecified.
¡ Output Record Format: An 8-byte check word that contains the string fpltrack. Then
¡ follows the contents of the track as read by the read track CCW. This contains a 5-byte
¡ home address and a number of records starting with record 0 (if the track is formatted in
¡ the standard format). Each record contains one to three parts:
¡ The count area. This is an 8-byte area that contains the cylinder, head, and record
¡ number followed by the size of the key area and the data area (CCHHRKDD).
¡ Key area, if present.
¡ Data area, if present.
¡ Streams Used: Records are read from the primary input stream and written to the primary
¡ output stream. Null input records are discarded.
¡ Premature Termination: trackxpand terminates when it discovers that its output stream is
¡ not connected.
¡ Notes:
! 1. trackexpand is a synonym for trackxpand.
¡¡ CMS
¡ ──TRFREAD──number──┬───────────────────┬──
¡                    ├─CP──┬───────────┬─┤
¡                    │     └─LOCALtime─┘ │
¡                    └─TRSOURCE──────────┘
¡ When no keyword is specified, the raw CP data are prefixed with four bytes that specify the
¡ file type. The prefix contains binary zero when the SPOOL file is a TRSOURCE file; it contains
¡ four when the SPOOL file contains CP trace records. You must deblock the file yourself.
¡ Operation: When no keyword is specified, the raw data are written to the primary output
¡ stream. The actual file contents are prefixed with a fullword that indicates the type of trace
¡ data. This fullword contains zero for TRSOURCE data; it contains four when the record
¡ contains CP trace data. When the first fullword of the output record contains binary zero, it
¡ can be removed and the buffer deblocked by deblock V. When the first fullword of the
¡ output record contains binary four, the record is 4K+4 bytes long. Of the 4K buffer, the
¡ first 36 bytes contain a header. Refer to the CP Programming Services manual for further
¡ information.
¡ When CP is specified, it is verified that the SPOOL file contains CP trace data. The output
¡ record is 32 or 60 bytes long and contains the full 8-byte TOD timestamp of the entry.
¡ When LOCALTIME is specified, the timestamp is adjusted for the local time zone, which is
¡ present in the input record. The leftmost bit of the CPUID field indicates a double length
¡ entry created from a 64-byte trace table entry. The last fullword of the 64-byte trace table
¡ entry is not present.
¡ When TRSOURCE is specified, it is verified that the file contains TRSOURCE data. The indi-
¡ vidual data records are deblocked.
¡ Commit Level: trfread starts on commit level -2. It opens the SPOOL file and then
¡ commits to level 0.
z/OS
──TSO──┬────────┬──
       └─string─┘
Syntax Description: An argument string is optional; when present, it is issued as the first
TSO command.
Operation: The argument string (if present) and input lines are issued to TSO. The
response from the TSO commands is not written to the terminal. The response from each
command is buffered until the command ends; the response is then written to the output.
Record Delay: tso writes all output for an input record before consuming the input
record.
Notes:
1. tso is implemented as a REXX program that uses the OUTTRAP function. Thus, tso can
trap only what can be trapped by OUTTRAP.
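For instance (TIME is merely an example command; the response text depends on the
installation and is therefore not shown):
pipe literal time | tso | console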
             ┌───────────────────────────────────┐
──UDP──number───┬───────────────────────────────┬┴──
                ├─ASYNChronously────────────────┤
                ├─BROADCASt─────────────────────┤
                ├─GETSOCKName───────────────────┤
                │                 ┌─ANY───────┐ │
                ├─LOCALIPaddress──┼─HOSTID────┼─┤
                │                 └─IPaddress─┘ │
                ├─REUSEADDR─────────────────────┤
                ├─STATistics────────────────────┤
                └─USERid──word──────────────────┘
ASYNCHRONOUSLY
Specify this keyword when udp should receive independently of how it
sends; by default, udp sends one record and then waits for a response.
The keyword is optional when udp is not first in a pipeline; it is not
allowed when udp is first in a pipeline.
BROADCAST Turn on the BROADCAST socket option.
GETSOCKNAME Write the contents of the socket address structure to the primary output
stream after the socket is bound.
LOCALIPADDR Specify the local IP address to be used when binding the socket. The
default, ANY, specifies that TCP/IP may use any interface address. (An IP
address of binary zeros is used to bind the socket.) HOSTID specifies that
TCP/IP should use the IP address that corresponds to the host name.
Specify the dotted-decimal notation or (on CMS) the host name for a
particular interface to be used.
REUSEADDR Turn on the REUSEADDR socket option.
STATISTICS Write messages containing statistics when udp terminates. The format of
these statistics is undefined. STATS is a synonym.
USERID Specify the user ID of the virtual machine or started task where TCP/IP
runs. The default is TCPIP.
Operation: When udp is first in a pipeline, it waits for messages on the port and writes
each message as an output record as it arrives.
When udp is not first in a pipeline and the keyword ASYNCHRONOUSLY is specified, it
reads input records and sends them to the destination specified in the record. It writes
arriving messages to the output as they arrive.
When ASYNCHRONOUSLY is omitted and udp is not first in a pipeline, it performs these
steps repeatedly:
1. Read an input record.
2. Send the datagram to the destination specified in the record.
3. Wait for a datagram to arrive at the port specified in the arguments or for a timeout to
occur. When a datagram is received, it is written to the pipeline. A null record is
written in case of a timeout.
Input Record Format: The input record must be four bytes long (to indicate that UDP
should listen only for the indicated period of time) or at least 24 bytes long (to specify a
timeout value and a port to receive the datagram). The input record contains information
required by the SENDTO IUCV socket function:
Output Record Format: A null record indicates a timeout; no datagram was received. A
record that is not null contains a datagram received:
Streams Used: Records are read from the primary input stream; no other input stream
may be connected. When the secondary output stream is defined, a record is written to it
when udp terminates after TCP/IP has reported an “ERRNO”.
Commit Level: udp starts on commit level -10. It creates a socket, verifies that its
secondary input stream is not connected, and then commits to level 0.
Premature Termination: udp terminates when it discovers that its primary output stream
is not connected.
udp also terminates when an error is reflected by TCP/IP (known as an ERRNO). How it
terminates depends on whether the secondary output stream is defined or not.
When the secondary output stream is not defined, error messages are issued to describe the
error and udp terminates with a nonzero return code.
When the secondary output stream is defined, a single record is written to the secondary
output stream; udp then terminates with return code zero. The first word of the record
contains the error number; the second word contains the symbolic name of the error
number if it is recognised by CMS/TSO Pipelines. The assumption is that a REXX
program will inspect the error number and decide whether it should retry the operation,
discard the current transaction and retry, or give up entirely.
udp also stops if the immediate command PIPMOD STOP is issued or if a record is passed to
pipestop.
Notes:
1. The User Datagram Protocol is said to be connectionless. That is, it is like one virtual
machine sending a message to other virtual machines (as opposed to having an IUCV
connection); any response is generated by the receiver of its own accord.
2. While TCP/IP tries to deliver messages as best it can, the User Datagram Protocol does
not specify that messages must arrive in the order they are sent; nor does it provide
for notification of lost messages. A protocol must be defined at a higher level to
implement error recovery. (This is often called “the end to end argument”.)
3. For compatibility with earlier releases, udp also accepts an abbreviated format for its
arguments:
──UDP──┬────────────────┬──port──┬─────────┬──
└─ASYNchronously─┘ └─machine─┘
: 4. A null packet from the net contains 16 bytes of socket address of the origin, whereas a
: timeout causes a null record to be written.
Return Codes: When the secondary output stream is defined and udp terminates due to an
error that is reported by TCP/IP as an ERRNO, udp sets return code 0 because the error
information is available in the record that is written to the secondary output stream. When
udp terminates because of some other error (for example, if it could not connect to the
TCP/IP address space), the secondary output stream is ignored and the return code is not
zero, reflecting the number of the message issued to describe this error condition.
The sequence number of the record within a run of records with identical keys can be
prefixed to the record; when the last occurrence of a record is selected, this sequence
number becomes the count of records with that particular key.
                     ┌─NOPAD─────┐
──UNIQue──┬───────┬──┼───────────┼──┬─────────┬──
          └─COUNT─┘  └─PAD──xorc─┘  └─ANYcase─┘
                        ┌─LAST─────┐
──┬──────────────────┬──┼──────────┼──
  └─┤ uniqueRanges ├─┘  ├─SINGLEs──┤
                        ├─FIRST────┤
                        ├─MULTiple─┤
                        └─PAIRwise─┘
uniqueRanges:
├──┬─inputRange─────────────────────────┬──┤
   │  ┌───────────────────────────┐     │
   └─(──inputRange──┬───────────┬─┴──)──┘
                    ├─NOPAD─────┤
                    └─PAD──xorc─┘
Syntax Description: The keyword NOPAD specifies that key fields that are partially
present must have the same length to be considered equal; this is the default. The
keyword PAD specifies a pad character that is used to extend the shorter of two key fields.
The keyword ANYCASE specifies that case is to be ignored when comparing fields; the
default is to respect case. An optional input range or a list of input ranges in parentheses
may be followed by up to two keywords. Each input range may be followed by the
keywords PAD or NOPAD to specify padding for this particular field. For compatibility with
the past, unique also accepts two optional keywords followed by an optional input range or
a list of input ranges in parentheses. PAIRWISE cannot be specified with COUNT.
Operation: Records are written to the primary output stream or the secondary output
stream, depending on the contents of the column ranges and the option specified.
1. When PAIRWISE is omitted:
For each record on the primary input stream, the contents of the column ranges
specified (the complete record by default) are compared with the contents of the corre-
sponding ranges in the following record. A run of records having the same contents
of the column ranges comprises a set of duplicate records. When the keyword NOPAD
is specified (or the padding option is omitted), a position not present in a record has a
value that is not equal to any possible contents of a position that is present. When
PAD is specified, short key fields are extended with the pad character for purposes of
comparison. Otherwise, when a range is partially present, two records must be the
same length and contain the same data within the range to compare equal.
When the keyword COUNT is used, each record is prefixed with a 10-character field
indicating its position in a run of equal records (starting with 1). When combined
with the default option LAST, COUNT sets the count of identical records in the first 10
positions of the record written to the primary output stream.
For a set of duplicate records, keywords determine which records are selected:
SINGLES Runs that consist only of one record are copied to the primary
output stream. Thus, only truly unique records are selected.
FIRST The first record of a run is copied to the primary output stream.
That is, all singles and the first record of each set of duplicates are
selected.
LAST The last record of a run is copied to the primary output stream.
Thus, singles and the last record of a set of duplicates are selected.
(This is the default.)
MULTIPLE Runs that contain more than one record are written to the primary
output stream. That is, only truly duplicate records are selected.
2. When PAIRWISE is specified, a record is read and compared with the following record.
Both records are written to the primary output stream when their key fields are not
equal.
Records that are not written to the primary output stream are written to the secondary
output stream (or discarded if it is not connected). For example, unique SINGLES writes
complete sets of duplicates on the secondary output stream.
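Because unique compares only adjacent records, the routing described above can be modelled with a run-based grouping. The following Python sketch is illustrative only (unique_filter is not part of CMS/TSO Pipelines); it returns the primary and secondary streams as two lists:

```python
from itertools import groupby

def unique_filter(records, key=lambda r: r, mode="LAST"):
    """Route records the way unique does: adjacent records with equal keys
    form a run; `mode` decides which records of each run are selected.
    Returns (primary, secondary); rejected records go to secondary."""
    selected, rejected = [], []
    for _, grouped in groupby(records, key):
        run = list(grouped)
        if mode == "SINGLES":
            (selected if len(run) == 1 else rejected).extend(run)
        elif mode == "MULTIPLE":
            (selected if len(run) > 1 else rejected).extend(run)
        elif mode == "FIRST":
            selected.append(run[0])
            rejected.extend(run[1:])
        else:  # LAST, the default
            selected.append(run[-1])
            rejected.extend(run[:-1])
    return selected, rejected
```

As in the stage itself, a file that is not sorted first may produce several runs with the same key.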
Streams Used: Records are read from the primary input stream. Secondary streams may
be defined, but the secondary input stream must not be connected. Output is written to the
primary output stream and the secondary output stream depending on options.
Record Delay: An input record is written to exactly one output stream when both output
streams are connected. Except for FIRST, which does not delay the record, and PAIRWISE,
which delays the first record of a pair but not the second, unique delays one record.
Commit Level: unique starts on commit level -2. It verifies that the secondary input
stream is not connected and then commits to level 0.
Examples: To list files that are on both of two minidisks or accessed directories:
tell:
say 'Usage: BOTHDISK <fm1> <fm2> [<fn> [<ft>]]'
Notes:
1. unique compares only adjacent records. It is normal to sort the file earlier in the
pipeline.
2. Use sort UNIQUE instead of a cascade of sort and unique when the file has many dupli-
cate records and you do not wish to process the duplicates further.
3. Unless ANYCASE is specified, key fields are compared as character data using the IBM
System/360 collating sequence. Use spec (or a REXX program) to put a sort key first
in the record if you wish, for instance, to use a numeric field that is not aligned to the
right within a column range. Use xlate to change the collating sequence of the file.
! 4. CASEANY, CASEIGNORE, CASELESS, and IGNORECASE are all synonyms for ANYCASE.
5. unique has no option to specify the inverse of PAIRWISE. Use not unique to swap the
contents of the output streams.
──UNPACK──
Type: Filter.
Operation: A file not in the packed format created by XEDIT, COPYFILE, or pack is passed
through unmodified. A null record is interpreted as end-of-file and the next record is
inspected to see if that is the beginning of a packed file.
Streams Used: Records are read from the primary input stream and written to the primary
output stream.
Record Delay: unpack delays input records as required to build an output record. The
delay is unspecified.
Premature Termination: unpack terminates when it discovers that its output stream is not
connected.
Notes:
1. unpack can unpack files that XEDIT and COPYFILE cannot cope with.
2. unpack is “safe” after <. It unpacks a file if it is packed, and passes a file that is not
packed through unchanged; it does not issue a diagnostic if the file is not packed.
         ┌─ -3────────┐
──UNTAB──┼────────────┼──
         ├─ -number───┤
         │ ┌────────┐ │
         └───number─┴─┘
Type: Filter.
Syntax Description: A list of positive numbers enumerates the tab stops; the numbers may be in any order.
The smallest number specifies where the left margin is; use 1 to put the left margin at the
beginning of the record.
A negative number specifies a tab stop in column 1, and for each n columns.
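The two forms of tab stops can be modelled in a few lines of Python. This sketch is illustrative, not the stage's implementation; in particular, how a tab beyond the last explicit stop is handled is an assumption of the sketch (here it advances a single column):

```python
def untab(line, stops):
    """Expand tab characters. `stops` is either a single negative number
    (a stop at column 1 and every n columns thereafter) or a list of
    positive 1-based column numbers in any order."""
    if len(stops) == 1 and stops[0] < 0:
        n = -stops[0]
        def next_stop(col):
            return col + n - (col - 1) % n   # next of columns 1, 1+n, 1+2n, ...
    else:
        ordered = sorted(stops)
        def next_stop(col):
            later = [c for c in ordered if c > col]
            return later[0] if later else col + 1   # assumption: one column
    out, col = [], 1
    for ch in line:
        if ch == "\t":
            target = next_stop(col)
            out.append(" " * (target - col))
            col = target
        else:
            out.append(ch)
            col += 1
    return "".join(out)
```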
Premature Termination: untab terminates when it discovers that its output stream is not
connected.
──UPDATE──┬───────┬──┬───────┬──┬──────┬──┬───────────┬──
└─range─┘ └─FIRST─┘ └─LAST─┘ └─*──string─┘
Type: Gateway.
Syntax Description: Arguments are optional. A column range may be followed by two
keywords. An asterisk indicates the beginning of a comment string.
When present as the first argument, a column range specifies the location of the sequence
field; the default is 73-80. The maximum length of the sequence field is 15.
FIRST specifies that the filter is the first (or only) in a cascade of updates; update then
verifies that the input file is correctly sequenced.
LAST specifies that update output sequencing is not to be verified. This lets you generate
an updated file without sequence numbers. Note that this option should be used only when
it is desired to suppress sequence numbering for the lines added by the last update applied.
Comments may be entered at the end of the argument string following an asterisk. This
field can be used for the name of the update file being applied to identify the specific stage
in error messages issued by CMS/TSO Pipelines. The comment string is not used by
update.
Operation: The master file is read from the primary input stream and updated with the
update file from the secondary input stream. The updated file is written to the primary
output stream and the update log is written to the secondary output stream.
Control cards supported are the ones defined for UPDATE when using full sequence
numbers (SEQ8).
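The flavour of sequenced updating can be sketched in Python. This is a much-simplified illustrative model, not the stage itself: only two control statements are modelled ("./ D from [to]" deletes a range of sequence numbers; "./ I seq" inserts the following data lines after a sequence number), whereas update supports the full set defined for the CMS UPDATE command:

```python
def apply_update(master, cards):
    """`master` is a list of (sequence, text) pairs in ascending order;
    `cards` is the update file. Inserted lines are given no sequence
    number (None) in this sketch."""
    out = list(master)
    i = 0
    while i < len(cards):
        card = cards[i]
        if card.startswith("./ D"):
            parts = card.split()
            lo = int(parts[2])
            hi = int(parts[3]) if len(parts) > 3 else lo
            out = [(s, t) for (s, t) in out if not (lo <= s <= hi)]
            i += 1
        elif card.startswith("./ I"):
            after = int(card.split()[2])
            i += 1
            new = []
            while i < len(cards) and not cards[i].startswith("./"):
                new.append((None, cards[i]))
                i += 1
            # insert after the last record whose sequence number <= after
            pos = max((k for k, (s, _) in enumerate(out)
                       if s is not None and s <= after), default=-1) + 1
            out[pos:pos] = new
        else:
            i += 1   # unmodelled control card; the real stage diagnoses it
    return out
```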
Messages for errors that do not terminate update during processing are logged to the
update log rather than being written to the terminal.
Streams Used: Two streams must be defined. Records are read and written on the
primary stream and the secondary stream.
Premature Termination: update terminates when it discovers that either of its output
streams is not connected. Connect the secondary output stream to hole to discard the
update log.
Examples: To apply two updates to a file and discard the update logs:
The global options define the question mark as the end character. The first pipeline reads
the source file and passes it through the two update stages. The second pipeline reads the
first update file and passes it to the first update stage (because the label u1: refers back to
the first update); the update log is discarded in hole. Likewise, the third pipeline reads the
second update file into the second update stage.
Notes:
1. update is intended to apply an update created with XEDIT using the CTL option.
Updates are applied in parallel when update stages are cascaded. update may treat
some errors differently than the CMS command UPDATE does.
2. Use a cascade of update stages to apply several updates to a source file.
Return Codes: The following return codes are reflected when errors have been noted in
the update log; when multiple errors are detected, the final return code is the highest one
encountered.
4 Sequence request that is not first in an update file.
4 Sequence error in the input master file.
8 Trouble with the sequence control card: Start or increment is not numeric.
8 Sequence error in the output master file.
12 Trouble with the sequence number field. A sequence number is not numeric, is
missing, is longer than the sequence field width; or the dollar sign is missing.
12 No record is found with the required sequence number.
32 Unsupported or missing control card.
──URLDEBLOCK──┬────────┬──
¡ └─EBCDIC─┘
Type: Filter.
Operation: Most characters of a URL are written to the output record unchanged. These
characters are processed specially:
Streams Used: Records are read from the primary input stream and written to the primary
output stream. Null input records are discarded.
Premature Termination: urldeblock terminates when it discovers that its output stream is
not connected.
Examples:
pipe < sample url | join | xlate e2a | urldeblock | xlate a2e | console
name=Craig R. Doe
class=90
grad=
[email protected]
bdate=03/15/68
url=
Ready;
Notes:
1. Unlike most CMS/TSO Pipelines built-in programs, urldeblock performs its operation
in the ASCII domain. If the input record has already been translated to EBCDIC by a
gateway, it must be translated back to ASCII before it is passed to urldeblock.
¡ 2. The EBCDIC option is useless for a URL that was built on an ASCII system and then
¡ translated to EBCDIC by, for example, a mail gateway since the escape sequences will
¡ contain the ASCII value of the characters.
CMS
┌─00E─────┐
──uro──┼─────────┼──┬──────┬──
└─devaddr─┘ └─STOP─┘
Syntax Description: Arguments are optional. Specify the device address of the virtual
printer or punch to write to if it is not the default 00E. The virtual device must be a unit
record output printer or punch device. The keyword STOP allows you to inspect the
channel programs built by uro.
Operation: The first byte of each record designates the CCW command code (machine
carriage control character); it is inserted as the CCW command code. The remaining char-
acters are identified for transport to SPOOL by the address and length fields of the CCW. A
single blank character is written if the input record has only the command code. Control
and no operation CCWs can specify data; the data are written to the SPOOL file. X'5A'
operation codes are supported, but other read commands are rejected with an error
message; command codes are not otherwise inspected.
Records may be buffered by uro to improve performance by writing more than one record
with a single call to the host interface. A null input record causes uro to flush the contents
of the buffer into SPOOL, but the null record itself is not written to SPOOL. After the
producing stage has written a null record it is assured that uro can close the unit record
device without loss of data. Input lines are copied to the primary output stream, if it is
connected.
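The buffering contract described above (records accumulate; a null record forces a flush but is not itself written) can be sketched as follows. Illustrative Python only; the class name and the buffer limit are assumptions of the sketch, and uro itself writes channel programs to SPOOL rather than appending to a list:

```python
class BufferedWriter:
    """Model of uro's buffering: records accumulate until the buffer is
    full or an empty ('null') record arrives; the null record flushes
    the buffer but is never written itself."""
    def __init__(self, sink, limit=4):
        self.sink, self.limit, self.buf = sink, limit, []

    def write(self, record):
        if record == b"":
            self.flush()          # null record: flush only
            return
        self.buf.append(record)
        if len(self.buf) >= self.limit:
            self.flush()

    def flush(self):
        if self.buf:
            self.sink.extend(self.buf)
            self.buf.clear()
```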
The virtual Forms Control Buffer (FCB) for a virtual printer (the virtual carriage control
tape) can be loaded by a CCW or the CP command LOADVFCB. The channel program is
restarted after a channel 9 or 12 hole causes it to terminate; even so, such holes in the
carriage tape should be avoided, because they serve no useful purpose and generate
additional overhead.
uro has not been tested with a dedicated printer or a dedicated punch.
Commit Level: uro starts on commit level -2000000000. It ensures that the device is not
already in use by another stage, allocates a buffer, and then commits to level 0.
Examples:
The trick is to pass a null record to uro to force it to flush the contents of its buffer into
CP SPOOL before the device is closed.
Notes:
1. Use punch to write records without carriage control to a virtual punch.
| 2. Set NOPDATA on to write into SPOOL any data in a record that has a carriage control
| designating no operation (X'03').
3. Any output data can be written, including 3800 CCWs, but be aware that CP support
depends on the virtual device type. For example, the maximum record length
(including CCW operation code prefix) is 133 bytes on a virtual 1403.
4. STOP causes CP console function mode to be entered after each channel program has
been given to CP. In a virtual machine on a CP supporting Diagnose A8, general
register 2 points to the HCPSGIOP data area, from which information about the channel
program can be extracted.
In virtual machines that do not support Diagnose A8 (VM/System Product with or
without VM/High Performance Option), general register 6 points to the byte following
the last CCW built; the beginning of the channel program is found by the CP command
“display caw”. The last CCW executed is inferred from the CSW, which is obtained
by “display csw”. Make sure you SET RUN OFF when using this option. This func-
tion was written to help debug uro, but it may also be useful to discover errors in
input data.
! UTF-8 is a variable length encoding of Unicode that has the property that 7-bit ASCII is
! encoded unchanged. UTF-16 is a halfword encoding that is fixed length only for code
! points up to U+FFFF; see the usage notes below. UTF-32 stores the Unicode code point
! in 32 bits.
!
! Type: Filter.
! Syntax Description:
!! UTF8 Variable length byte stream encoding that has the property that the first
! 128 values are 7-bit ASCII.
!! MODIFIED UTF-8 encoding where U+0000 is encoded as two bytes (X'C080').
!! UTF-8 This has the advantage that the byte X'00' cannot legally occur in such
! a string.
!! UTF16 Halfword encoding where the assigned Unicode code points in the Multi
! Lingual Plane (MLP, U+0000 through U+FFFF) are encoded as the same
! value with the most significant byte first. A value larger than X'FFFF'
! (U+10000 through U+10FFFF) is encoded as a “surrogate pair”; that is,
! two halfwords using one code point in the range X'D800' through
! X'DBFF' followed by a code point in the range X'DC00' through
! X'DFFF' for a total of twenty bits.
!! UTF32 Fullword encoding containing the binary value of the Unicode code
! point with the most significant byte first.
!! REPORT Report input data that are not valid; that is, issue a message and termi-
! nate. The default is to substitute U+FFFD for the code point(s) in error.
! UTF-8: This format uses from one to four bytes to encode the Unicode character set. It
! offers many encodings that are not valid. In particular, overlong encodings are possible.
! Such encodings use more bits than necessary to encode a Unicode code point. For
! example, X'41' and X'C181' encode “A”, but the second encoding is not valid.
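Python's strict UTF-8 decoder also rejects overlong forms, so it can stand in as a validity test in this illustrative sketch (this corresponds to REPORT-style checking; by default the built-in program instead substitutes U+FFFD):

```python
def is_valid_utf8(data):
    """Check a byte string for valid, shortest-form UTF-8. Overlong
    encodings such as X'C181' for 'A' are rejected, as are surrogate
    code points encoded directly in the byte stream."""
    try:
        data.decode("utf-8", errors="strict")
        return True
    except UnicodeDecodeError:
        return False
```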
! UTF-16: This format uses two bytes to encode the valid code points in the MLP. Values
! in the higher planes are encoded in a surrogate pair, which is four bytes of the form
! B'110110pp ppxxxxxx 110111xx xxxxxxxx', where pppp is one less than the number of the
! plane (thus, a code point in the MLP cannot be encoded as a surrogate pair).
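The bit layout above translates directly into arithmetic. An illustrative Python sketch (the function name is an invention of the sketch):

```python
def to_surrogate_pair(cp):
    """Encode a code point above U+FFFF as a surrogate pair: subtract
    X'10000', put the high ten bits in a D800-DBFF unit and the low ten
    bits in a DC00-DFFF unit, twenty significant bits in all."""
    assert 0x10000 <= cp <= 0x10FFFF
    v = cp - 0x10000
    return 0xD800 | (v >> 10), 0xDC00 | (v & 0x3FF)
```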
! UTF-32: The 21-bit code point number is stored in 32 bits. Values larger than
! X'0010FFFF' are not valid.
! Streams Used: Records are read from the primary input stream and written to the primary
! output stream. Null input records are discarded.
! Premature Termination: utf terminates when it discovers that its output stream is not
! connected.
! Notes:
! 1. In Unicode terminology, a code point represents an unsigned value in the range 0
! through 1114111 (X'10FFFF'). A code point uniquely identifies a character or
! control code.
! Unicode code points are by convention marked up as U+xxxx, where the value is
! specified in hexadecimal.
! 2. Use the same encoding format for input and output operands to validate an encoded
! data stream without conversion.
! Publications:
! As of this writing (January 2010), the current Unicode standard can be found at
! http://www.unicode.org/versions/Unicode5.2.0/
! RFC 3629 describes UTF-8, but so does the current Unicode standard.
──VAR──word──┬──────────┬──┬────────┬──┬──────────┬──
¡ ├─PRODUCER─┤ └─number─┘ └─NOMSG233─┘
└─MAIN─────┘
┌─SYMBOLIC─┐
──┼──────────┼──┬──────────┬──
└─DIRECT───┘ └─TRACKING─┘
Warning: var behaves differently when it is a first stage and when it is not a first stage.
Existing data can be overlaid when var is unintentionally run other than as a first stage.
To use var to read data into the pipeline at a position that is not a first stage, specify var
as the argument of an append or preface control. For example, |append var ...|
appends the data produced by var to the data on the primary input stream.
The keyword PRODUCER may be used when the pipeline specification is issued with
CALLPIPE. It specifies that the variable pool to be accessed is the one for the stage that
produces the input to the stage that issues the subroutine pipeline that contains var, rather
than the current stage. (This is a somewhat esoteric option.) To ensure that the variable
pool persists as long as this invocation of var, the stage that is connected to the currently
selected input stream must be blocked in an OUTPUT pipeline command while the subrou-
tine pipeline is running.
The keyword MAIN specifies that the REXX variable pool to be accessed is the one in effect
at the time the pipeline set was created (either by the PIPE command or by the runpipe
stage). MAIN is implied for pipelines that are issued with ADDPIPE.
A number that is zero or positive is optional. It specifies the number of REXX variable
pools to go back. That is, var can operate on variables in the program that issued the
pipeline specification to invoke var or in one of its ancestors. (When the number is
prefixed by either PRODUCER or MAIN, the variable pool to be accessed is the producer’s or
the main one, or one of their ancestors.) On CMS, if the number is larger than the number
of REXX environments created on the call path from the PIPE command, var continues on
the SUBCOM chain starting with the environment active when PIPE was issued.
¡ Specify the option NOMSG233 to suppress message 233 when the REXX environment does
¡ not exist. Either way, var terminates with return code 233 on commit level -1 when the
¡ environment does not exist.
The keyword SYMBOLIC specifies that REXX should treat the variable names generated as it
would a variable that is written in a program. DIRECT specifies that REXX should use the
variable name exactly as written.
The keyword TRACKING specifies that var should continuously obtain the value of the vari-
able and write it to the output or (if var is not first in a pipeline) set the variable to the
contents of each input record, as it is read.
| Operation:
| When var is first in the pipeline, and TRACKING is omitted, var writes a single record
| containing the value of the variable and terminates.
| When var is first in the pipeline, and TRACKING is specified, var continuously suspends
| itself and then writes the current value of the variable until it senses end-of-file on the
| primary output stream.
Note: Be sure that the pipeline limits the number of records consumed when var
TRACKING is first in a pipeline; it does not terminate normally.
| When var is not first in a pipeline, and TRACKING is omitted, it sets the variable from the
| first record read, passes the record to the primary output stream, consumes the record, and
| then shorts the primary input stream to the primary output stream. var drops the variable
| if no input record arrives.
| When var is not first in a pipeline, and TRACKING is specified, it sets the variable as
| records become available and then passes the record to the primary output stream.
Commit Level: var starts on commit level -1. It verifies that the REXX environment exists
(if it did not do so while processing its parameters) and then commits to level 0.
Examples: To reverse the current line in the current XEDIT session, irrespective of its
length.
/* REVCL XEDIT: Reverse current line */
'extract ,curline'
nuline=reverse(curline.3)
address command,
'PIPE var nuline | xedit'
exit RC
XEDIT advances the current line pointer after a record is read or replaced; therefore, the
EXTRACT XEDIT subcommand is used (rather than the xedit device driver) to get the
contents of the current line.
The cascade of split and drop is useful to set several variables to different words in the
input line:
pipe ... | split | var word1 | drop | var word2
split reformats the file to have a record for each blank-delimited word in the input. The
first var sets the variable word1 to the contents of the first line (which contains the first
word of the input file), and then copies the input to the output. The drop stage discards
the first record (which has already been stored); it passes the second word of the input file
as the first record on the output. Thus, the first line that is read by the second var stage
contains the second word of the input file. This word is then stored in the variable word2.
Though you can add as many drop-var pairs as you like, it may be simpler to set a
stemmed array when there are many words in the input file.
Note these three ways of using var. They produce the same result when the input file
contains one record:
... | var x
... | var x tracking
... | append literal | var x
When there is no input file, the variable is dropped in the first example; the variable is left
unchanged in the second example; and the variable is set to a null value in the third
example (because append can always supply a null record).
When there is more than one input record, the first and third examples set the variable to
the contents of the first record, but the second example sets it to the contents of the last
record.
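These outcomes can be summarised in a small model. Illustrative Python only (var_stage and its arguments are inventions of the sketch); it covers var when it is not a first stage:

```python
def var_stage(records, current, tracking=False):
    """Return the value the variable holds afterwards. Without TRACKING
    the variable is set from the first record, or dropped (None here)
    when there is none; with TRACKING it ends up as the last record, or
    is left unchanged when there is none."""
    if tracking:
        return records[-1] if records else current
    return records[0] if records else None
```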
The file is stored in a stemmed array by stem. Then the first line is selected and counted.
The count will be zero if there are no lines in the file. Because the count can only be zero
or one, the variable haveData is set to a Boolean value.
Notes:
1. When a pipeline is issued as a TSO command, the TSO variable access routine is called to access the variable pool.
When the command is issued with Address Link or Address Attach, var accesses the
REXX environment from where the command is issued.
2. CMS/TSO Pipelines maintains a reference to the current variable environment for each
stage. Initially this is the environment in effect for the PIPE command with which the
original pipeline was started.
When a REXX program is invoked (as a stage or with the REXX pipeline command), its
environment becomes the current one, with a pointer to the previous one.
When a pipeline specification is issued with the runpipe built-in program or the
CALLPIPE pipeline command, the current environment is the one in effect for the stage
issuing runpipe or CALLPIPE; it is known to persist while the subroutine pipeline runs.
On the other hand, when a pipeline specification is issued with the ADDPIPE pipeline
command, the stage that issues ADDPIPE runs in parallel with the added pipeline
specification; it can terminate at any time (indeed, even before the new pipeline
specification starts running). Therefore, for ADDPIPE, the current environment is set to
the one for the last runpipe or the one at initial entry on the PIPE command. Thus, the
MAIN option has effect only for pipeline specifications that are issued by the CALLPIPE
pipeline command.
3. Unless DIRECT is specified, var uses the symbolic interface to access REXX variables.
This means that you should write the variable name the same way you would write it
in an assignment statement. Consider this program fragment:
/* Process an array */
x='fred'
'PIPE literal a | var z.x'
The variable Z.fred is set to 'a '. On the other hand, this would set the variable
Z.x:
/* Process directly */
'PIPE literal a | var Z.x direct'
Note that the stem must be in upper case when DIRECT is used.
4. An unset variable (that is, a variable that has been dropped or has never been assigned
a value) is treated differently by the three variable repositories: REXX returns the
name of the variable in upper case; EXEC2 and CLIST return the null string.
5. Use TRACKING when you wish to leave the current value of the variable unchanged if
there are no input records. Use take 1 to ensure there is only one input record.
┌─SYMBOLIC─┐
──VARDROP──┬──────────┬──┬────────┬──┬──────────┬──┼──────────┼──
¡ ├─PRODUCER─┤ └─number─┘ └─NOMSG233─┘ └─DIRECT───┘
└─MAIN─────┘
Syntax Description: It is possible to access a REXX variable pool other than the current
one.
The keyword PRODUCER may be used when the pipeline specification is issued with
CALLPIPE. It specifies that the variable pool to be accessed is the one for the stage that
produces the input to the stage that issues the subroutine pipeline that contains vardrop,
rather than the current stage. (This is a somewhat esoteric option.) To ensure that the
variable pool persists as long as this invocation of vardrop, the stage that is connected to
the currently selected input stream must be blocked in an OUTPUT pipeline command while
the subroutine pipeline is running.
The keyword MAIN specifies that the REXX variable pool to be accessed is the one in effect
at the time the pipeline set was created (either by the PIPE command or by the runpipe
stage). MAIN is implied for pipelines that are issued with ADDPIPE.
A number that is zero or positive is optional. It specifies the number of REXX variable
pools to go back. That is, vardrop can operate on variables in the program that issued the
pipeline specification to invoke vardrop or in one of its ancestors. (When the number is
prefixed by either PRODUCER or MAIN, the variable pool to be accessed is the producer’s or
the main one, or one of their ancestors.) On CMS, if the number is larger than the number
of REXX environments created on the call path from the PIPE command, vardrop continues
on the SUBCOM chain starting with the environment active when PIPE was issued.
¡ Specify the option NOMSG233 to suppress message 233 when the REXX environment does
¡ not exist. Either way, vardrop terminates with return code 233 on commit level -1 when
¡ the environment does not exist.
The keyword SYMBOLIC specifies that REXX should treat the variable names generated as it
would a variable that is written in a program. DIRECT specifies that REXX should use the
variable name exactly as written.
Input Record Format: One variable per input record. The name of the variable begins in
the first column of the record. Trailing blanks are retained.
Commit Level: vardrop starts on commit level -1. It verifies that the REXX environment
exists (if it did not do so while processing its parameters) and then commits to level 0.
Examples: To drop two variables in the EXEC that invoked the PIPE:
Notes:
1. On z/OS, if vardrop is used in a pipeline specification that is issued with the PIPE
command, the command must be issued by Address LINK.
2. CMS/TSO Pipelines maintains a reference to the current variable environment for each
stage. Initially this is the environment in effect for the PIPE command with which the
original pipeline was started.
When a REXX program is invoked (as a stage or with the REXX pipeline command), its
environment becomes the current one, with a pointer to the previous one.
When a pipeline specification is issued with the runpipe built-in program or the
CALLPIPE pipeline command, the current environment is the one in effect for the stage
issuing runpipe or CALLPIPE; it is known to persist while the subroutine pipeline runs.
On the other hand, when a pipeline specification is issued with the ADDPIPE pipeline
command, the stage that issues ADDPIPE runs in parallel with the added pipeline
specification; it can terminate at any time (indeed, even before the new pipeline
specification starts running). Therefore, for ADDPIPE, the current environment is set to
the one for the last runpipe or the one at initial entry on the PIPE command. Thus, the
MAIN option has effect only for pipeline specifications that are issued by the CALLPIPE
pipeline command.
──VARFETCH──┬──────────┬──┬────────┬──┬──────────┬──
¡ ├─PRODUCER─┤ └─number─┘ └─NOMSG233─┘
└─MAIN─────┘
┌─SYMBOLIC─┐
──┼──────────┼──┬───────────────────────────────────────┬──
└─DIRECT───┘ └─TOLOAD──┬───────────────────────────┬─┘
├─NOCOMMENTS────────────────┤
└─COMMENTS──delimitedString─┘
Syntax Description: It is possible to access a REXX variable pool other than the current
one.
The keyword PRODUCER may be used when the pipeline specification is issued with
CALLPIPE. It specifies that the variable pool to be accessed is the one for the stage that
produces the input to the stage that issues the subroutine pipeline that contains varfetch,
rather than the current stage. (This is a somewhat esoteric option.) To ensure that the
variable pool persists as long as this invocation of varfetch, the stage that is connected to
the currently selected input stream must be blocked in an OUTPUT pipeline command while
the subroutine pipeline is running.
The keyword MAIN specifies that the REXX variable pool to be accessed is the one in effect
at the time the pipeline set was created (either by the PIPE command or by the runpipe
stage). MAIN is implied for pipelines that are issued with ADDPIPE.
A number that is zero or positive is optional. It specifies the number of REXX variable
pools to go back. That is, varfetch can operate on variables in the program that issued the
pipeline specification to invoke varfetch or in one of its ancestors. (When the number is
prefixed by either PRODUCER or MAIN, the variable pool to be accessed is the producer’s or
the main one, or one of their ancestors.) On CMS, if the number is larger than the number
of REXX environments created on the call path from the PIPE command, varfetch continues
on the SUBCOM chain starting with the environment active when PIPE was issued.
¡ Specify the option NOMSG233 to suppress message 233 when the REXX environment does
¡ not exist. Either way, varfetch terminates with return code 233 on commit level -1 when
¡ the environment does not exist.
The keyword SYMBOLIC specifies that REXX should treat the variable names generated as it
would a variable that is written in a program. DIRECT specifies that REXX should use the
variable name exactly as written.
Specify TOLOAD to write output records in the format required as input to varset (and to
varload): each record contains the variable’s name as a delimited string followed by the
variable’s value. The delimiter is selected from the set of characters that do not occur in
the name of the variable; it is unspecified how this delimiter is selected. The keyword
COMMENTS is followed by a delimited string that enumerates the characters that should not
be used as delimiter characters. The keyword NOCOMMENTS specifies that the delimiter
character can be any character that is not in the variable’s name. NOCOMMENTS is the
default.
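For illustration (the variable name ALPHA and its value are assumptions, and which delimiter character is chosen is unspecified), a TOLOAD output record with a slash as the delimiter might look like this:

   /ALPHA/first value

The delimited string /ALPHA/ carries the variable’s name; the balance of the record is the variable’s value.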
Operation: When the secondary output stream is not defined, varfetch writes an output
record to the primary output stream for each input record. This record contains the value
returned from the environment.
When the secondary output stream is defined, varfetch inspects the SHVNEWV flag to see if
the variable exists. If the flag indicates that the variable does not exist, the input record is
copied to the secondary output stream. If the flag indicates that the variable does exist, an
output record is built and written to the primary output stream.
Input Record Format: One variable per input record. The name of the variable begins in
the first column of the record. Trailing blanks are retained.
Output Record Format: When TOLOAD is omitted, the output record contains the value of
the variable.
Streams Used: Records are read from the primary input stream; no other input stream
may be connected. Null input records are discarded.
Commit Level: varfetch starts on commit level -1. It verifies that the REXX environment
exists (if it did not do so while processing its parameters) and then commits to level 0.
Examples: Obtain the value of two variables in the EXEC that invoked the PIPE:
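The example itself does not appear at this point; the following is a plausible sketch in the style of the other examples (the variable names ALPHA and BETA are assumptions):

/* Fetch the values of ALPHA and BETA from the invoking EXEC */
'PIPE literal BETA|literal ALPHA|varfetch|console'

Each literal stage writes its argument before copying its input, so varfetch reads the names ALPHA and BETA and writes each variable’s value to the console.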
Notes:
1. When a pipeline is issued as a TSO command, IKJCT441 is called to access the variable pool.
When the command is issued with Address Link or Address Attach, varfetch accesses
the REXX environment from where the command is issued.
2. CMS/TSO Pipelines maintains a reference to the current variable environment for each
stage. Initially this is the environment in effect for the PIPE command with which the
original pipeline was started.
When a REXX program is invoked (as a stage or with the REXX pipeline command), its
environment becomes the current one, with a pointer to the previous one.
When a pipeline specification is issued with the runpipe built-in program or the
CALLPIPE pipeline command, the current environment is the one in effect for the stage
issuing runpipe or CALLPIPE; it is known to persist while the subroutine pipeline runs.
On the other hand, when a pipeline specification is issued with the ADDPIPE pipeline
command, the stage that issues ADDPIPE runs in parallel with the added pipeline
specification; it can terminate at any time (indeed, even before the new pipeline
specification starts running). Therefore, for ADDPIPE, the current environment is set to
the one for the last runpipe or the one at initial entry on the PIPE command. Thus, the
MAIN option has effect only for pipeline specifications that are issued by the CALLPIPE
pipeline command.
3. An unset variable (that is, a variable that has been dropped or has never been assigned
a value) is treated differently by the three variable repositories: REXX returns the
name of the variable in upper case; EXEC2 and CLIST return the null string. Only REXX
sets the SHVNEWV flag.
                                                   ┌─DIRECT───┐
──VARLOAD──┬──────────┬──┬────────┬──┬──────────┬──┼──────────┼──
¡          ├─PRODUCER─┤  └─number─┘  └─NOMSG233─┘  └─SYMBOLIC─┘
           └─MAIN─────┘
──┬───────────────────────────┬──
  ├─NOCOMMENTS────────────────┤
  └─COMMENTS──delimitedString─┘
Syntax Description: It is possible to access a REXX variable pool other than the current
one.
The keyword PRODUCER may be used when the pipeline specification is issued with
CALLPIPE. It specifies that the variable pool to be accessed is the one for the stage that
produces the input to the stage that issues the subroutine pipeline that contains varload,
rather than the current stage. (This is a somewhat esoteric option.) To ensure that the
variable pool persists as long as this invocation of varload, the stage that is connected to
the currently selected input stream must be blocked in an OUTPUT pipeline command while
the subroutine pipeline is running.
The keyword MAIN specifies that the REXX variable pool to be accessed is the one in effect
at the time the pipeline set was created (either by the PIPE command or by the runpipe
stage). MAIN is implied for pipelines that are issued with ADDPIPE.
A number that is zero or positive is optional. It specifies the number of REXX variable
pools to go back. That is, varload can operate on variables in the program that issued the
pipeline specification to invoke varload or in one of its ancestors. (When the number is
prefixed by either PRODUCER or MAIN, the variable pool to be accessed is the producer’s or
the main one, or one of their ancestors.) On CMS, if the number is larger than the number
of REXX environments created on the call path from the PIPE command, varload continues
on the SUBCOM chain starting with the environment active when PIPE was issued.
¡ Specify the option NOMSG233 to suppress message 233 when the REXX environment does
¡ not exist. Either way, varload terminates with return code 233 on commit level -1 when
¡ the environment does not exist.
The keyword SYMBOLIC specifies that REXX should treat the variable names generated as it
would a variable that is written in a program. DIRECT specifies that REXX should use the
variable name exactly as written. The keyword COMMENTS is followed by a delimited
string that enumerates the characters that can mark comment lines in the input. The
keyword NOCOMMENTS specifies that the input contains no comment records. The default
is COMMENTS /* /.
Input Record Format: Records that contain one of the characters in the comment string
in the first column are considered comments and are ignored. The first position of each
record is a delimiter character unless the record is treated as a comment. The name of the variable
to set begins in column 2 and ends at the next occurrence of the delimiter character. That
is, a delimitedString beginning in column 1 defines the name of the variable to set. In
order that stemmed variables with any stem can be loaded, the variable name is not trans-
lated in any way; simple variables (and stems) must be in upper case. There is no substi-
tution in stemmed variables when they are set.
Data to load into the variable, if any, immediately follow the second occurrence of the
delimiter character and extend to the end of the record; use strip TRAILING to remove
trailing blanks.
Commit Level: varload starts on commit level -1. It verifies that the REXX environment
exists (if it did not do so while processing its parameters) and then commits to level 0.
msg='MSG'; vmconio='VMCONIO'
Say cpset.msg cpset.vmconio
A sample input file for this is shown here; note that the variable names are all in upper
case.
* GDDM SETVARS:
,WHITE,-2
,BLACK,-1
,BLUE,1
,RED,2
,MAGENTA,3
,GREEN,4
,CYAN,5
,YELLOW,6
,NEUTRAL,7
,BACKGROUND,8
rexxvars can be used to save the variables in a program so that they can be restored later:
/* Save variables */
'PIPE (name VARLOAD)',
'|rexxvars', /* Read all variables */
'| drop 1', /* Drop source string */
'| spec /=/ 1 3-* next', /* Beginning of delimiter */
'| join 1', /* Join name and value */
'| > saved variables a' /* Write file */
/* Restore variables */
'PIPE (name VARLOAD)',
'|< saved variables', /* Read variables */
'| varload' /* Set them */
This example works with “well behaved” variables, but note that rexxvars cannot obtain
the default value for a stem; it also truncates the value of a variable after 512 bytes. A
more subtle thing to beware of is that a compound variable can have an equal sign as part
of its name; this would cause a longer value to be restored than was saved.
To set all possible compound values whose names begin with STEM. (the default):
'PIPE literal /STEM./Value for array|varload'
Notes:
1. varload is identical to varset, except for the defaults.
2. When a pipeline is issued as a TSO command, IKJCT441 is called to access the variable pool.
When the command is issued with Address Link or Address Attach, varload accesses
the REXX environment from where the command is issued.
3. CMS/TSO Pipelines maintains a reference to the current variable environment for each
stage. Initially this is the environment in effect for the PIPE command with which the
original pipeline was started.
When a REXX program is invoked (as a stage or with the REXX pipeline command), its
environment becomes the current one, with a pointer to the previous one.
When a pipeline specification is issued with the runpipe built-in program or the
CALLPIPE pipeline command, the current environment is the one in effect for the stage
issuing runpipe or CALLPIPE; it is known to persist while the subroutine pipeline runs.
On the other hand, when a pipeline specification is issued with the ADDPIPE pipeline
command, the stage that issues ADDPIPE runs in parallel with the added pipeline
specification; it can terminate at any time (indeed, even before the new pipeline
specification starts running). Therefore, for ADDPIPE, the current environment is set to
the one for the last runpipe or the one at initial entry on the PIPE command. Thus, the
MAIN option has effect only for pipeline specifications that are issued by the CALLPIPE
pipeline command.
4. varload cannot set a compound variable whose derived name is the same as its stem (a
compound variable with a null index). This can be accomplished with var if there
exists a simple variable containing a null value; this example hijacks the question
mark:
... | literal | var ? | drop 1 | var stem.?
A null record is generated by literal, stored in the question mark variable, and
discarded. The next record is assigned to the compound variable with the null index.
                                                  ┌─SYMBOLIC─┐
──VARSET──┬──────────┬──┬────────┬──┬──────────┬──┼──────────┼──
¡         ├─PRODUCER─┤  └─number─┘  └─NOMSG233─┘  └─DIRECT───┘
          └─MAIN─────┘
──┬───────────────────────────┬──
  ├─NOCOMMENTS────────────────┤
  └─COMMENTS──delimitedString─┘
Syntax Description: It is possible to access a REXX variable pool other than the current
one.
The keyword PRODUCER may be used when the pipeline specification is issued with
CALLPIPE. It specifies that the variable pool to be accessed is the one for the stage that
produces the input to the stage that issues the subroutine pipeline that contains varset,
rather than the current stage. (This is a somewhat esoteric option.) To ensure that the
variable pool persists as long as this invocation of varset, the stage that is connected to the
currently selected input stream must be blocked in an OUTPUT pipeline command while the
subroutine pipeline is running.
The keyword MAIN specifies that the REXX variable pool to be accessed is the one in effect
at the time the pipeline set was created (either by the PIPE command or by the runpipe
stage). MAIN is implied for pipelines that are issued with ADDPIPE.
A number that is zero or positive is optional. It specifies the number of REXX variable
pools to go back. That is, varset can operate on variables in the program that issued the
pipeline specification to invoke varset or in one of its ancestors. (When the number is
prefixed by either PRODUCER or MAIN, the variable pool to be accessed is the producer’s or
the main one, or one of their ancestors.) On CMS, if the number is larger than the number
of REXX environments created on the call path from the PIPE command, varset continues on
the SUBCOM chain starting with the environment active when PIPE was issued.
¡ Specify the option NOMSG233 to suppress message 233 when the REXX environment does
¡ not exist. Either way, varset terminates with return code 233 on commit level -1 when the
¡ environment does not exist.
The keyword SYMBOLIC specifies that REXX should treat the variable names generated as it
would a variable that is written in a program. DIRECT specifies that REXX should use the
variable name exactly as written. The keyword COMMENTS is followed by a delimited
string that enumerates the characters that can mark comment lines in the input. The
keyword NOCOMMENTS specifies that the input contains no comment records. The default
is NOCOMMENTS.
Operation: When the secondary output stream is not defined, varset copies the input
record to the primary output stream after the variable is set.
When the secondary output stream is defined, varset inspects the SHVNEWV flag to see if
the variable existed before. If the flag indicates that the variable already exists, the input
record is copied to the primary output stream. If the variable is new, the input record is
copied to the secondary output stream.
Input Record Format: Records that contain one of the characters in the comment string
in the first column are considered comments and are ignored. The first position of each
record is a delimiter character unless the record is treated as a comment. The name of the variable
to set begins in column 2 and ends at the next occurrence of the delimiter character. That
is, a delimitedString beginning in column 1 defines the name of the variable to set. In
order that stemmed variables with any stem can be loaded, the variable name is not trans-
lated in any way; simple variables (and stems) must be in upper case. There is no substi-
tution in stemmed variables when they are set.
Data to load into the variable, if any, immediately follow the second occurrence of the
delimiter character and extend to the end of the record; use strip TRAILING to remove
trailing blanks.
Streams Used: Records are read from the primary input stream; no other input stream
may be connected. Null input records are discarded.
Commit Level: varset starts on commit level -1. It verifies that the REXX environment
exists (if it did not do so while processing its parameters) and then commits to level 0.
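Examples: The original text has no example for varset at this point; the following is a minimal sketch (the variable name COLOR and its value are assumptions):

/* Set the REXX variable COLOR to "blue" */
'PIPE literal /COLOR/blue|varset|hole'

The slash in column 1 is the delimiter; the variable name ends at the second slash, and the balance of the record becomes the value. The name must be in upper case.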
Notes:
1. When a pipeline is issued as a TSO command, IKJCT441 is called to access the variable pool.
When the command is issued with Address Link or Address Attach, varset accesses
the REXX environment from where the command is issued.
2. CMS/TSO Pipelines maintains a reference to the current variable environment for each
stage. Initially this is the environment in effect for the PIPE command with which the
original pipeline was started.
When a REXX program is invoked (as a stage or with the REXX pipeline command), its
environment becomes the current one, with a pointer to the previous one.
When a pipeline specification is issued with the runpipe built-in program or the
CALLPIPE pipeline command, the current environment is the one in effect for the stage
issuing runpipe or CALLPIPE; it is known to persist while the subroutine pipeline runs.
On the other hand, when a pipeline specification is issued with the ADDPIPE pipeline
command, the stage that issues ADDPIPE runs in parallel with the added pipeline
specification; it can terminate at any time (indeed, even before the new pipeline
specification starts running). Therefore, for ADDPIPE, the current environment is set to
the one for the last runpipe or the one at initial entry on the PIPE command. Thus, the
MAIN option has effect only for pipeline specifications that are issued by the CALLPIPE
pipeline command.
3. varset cannot set a compound variable whose derived name is the same as its stem (a
compound variable with a null index). This can be accomplished with var if there
exists a simple variable containing a null value; this example hijacks the question
mark:
... | literal | var ? | drop 1 | var stem.?
A null record is generated by literal, stored in the question mark variable, and
discarded. The next record is assigned to the compound variable with the null index.
¡                           ┌───────────────────────────┐
¡ ──VCHAR──number──number───┬──────────────────────────┬┴──
¡                           ├─PAD──┬─xorc────────────┬─┤
¡                           │      └─delimitedString─┘ │
¡                           ├─PADIN──┬──────┬──────────┤
¡                           │        └─xorc─┘          │
¡                           └─┬─PADOUT──┬──────┬─┬─────┘
¡                             │         └─xorc─┘ │
¡                             └─TRUNCate─────────┘
Type: Filter.
Syntax Description: The first argument is the number of bits per character in the input
¡ record; the second argument is the number of bits per character in the output record. Up
¡ to three options may follow the two numbers.
¡¡ PADOUT Padding to be inserted at the end of each output record to complete the
¡ last byte. As many of the leftmost bits of the pad character as required
¡ are appended to the right of the last character. Specify a xorc or take
¡ the default padding of binary zeros.
¡¡ TRUNCATE Do not pad the output record; truncate any partial byte.
Operation: Bits are truncated on the left when the first number is larger than the second
one. Zero bits are inserted on the left when the second number is larger than the first
number. The input and output records are bit streams. A record is written for each input
record. Only complete input characters are copied, effectively truncating the input record
¡ if it contains a number of bits that is not evenly divisible by the first number unless PADIN
¡ is specified. If the output record contains a number of bits that is not evenly divisible by
eight, the last byte of the output record is padded with zeros on the right.
Premature Termination: vchar terminates when it discovers that its output stream is not
connected.
Examples: To recode a file containing four 6-bit characters packed into every three 8-bit
bytes:
...| vchar 6 8 |...
This example keeps the six bits together, adding two zero bits to the left of each byte.
To convert ASCII from 6-bit code to 8-bit code, putting three input bits and a leading zero
bit into each output nibble (half-byte):
...| vchar 3 4 |...
First an asterisk is suffixed to each record, and then it is removed from records that
contain an odd number of characters.
To insert a blank in front of each four characters (assuming that no input record contains
X'00'):
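The example itself is missing here; the following is a plausible reconstruction, hedged (the exact stages are a guess based on the surrounding text): vchar widens each group of four 8-bit characters to five bytes, inserting a zero byte on the left of each group, and xlate then translates X'00' to blank:

...| vchar 32 40 | xlate 1-* 00 40 |...

This relies on the stated assumption that no input record contains X'00'.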
For production use, this should be enhanced to suffix three blanks to the record before the
vchar stage; otherwise the records are truncated when they contain a number of characters
that is not evenly divisible by four.
To write a string in two records, each pair of input characters arranged vertically:
pipe literal 47F0F120| spec 1-* 2 write 1-* 1 | vchar 16 8 | console
4FF2
7010
Ready;
This example uses spec to produce two copies of the input record where the first one is
offset one byte to the right. For each pair of input characters, vchar selects the rightmost
character.
──VERIFY──┬─────────┬──┬────────────┬──delimitedString──
          └─ANYCASE─┘  └─inputRange─┘
Syntax Description: Specify ANYCASE to make the comparison case insensitive. An input
range is optional. This specifies the part of the record to be inspected. The delimited
string enumerates the characters that are allowed within the input range.
Operation: verify tests the characters within the input range for being in the specified
string. If the input range is null or all characters in the range are in the specified string,
the record is passed to the primary output stream. Otherwise, the record is discarded (or
passed to the secondary output stream if the secondary output stream is connected).
Streams Used: Records are read from the primary input stream. Secondary streams may
be defined, but the secondary input stream must not be connected.
Record Delay: An input record is written to exactly one output stream when both output
streams are connected. verify strictly does not delay the record.
Commit Level: verify starts on commit level -2. It verifies that the secondary input
stream is not connected and then commits to level 0.
Examples: To verify that a range contains at least one character that is not numeric:
... | not verify 5.5 /0123456789/ | ...
Notes:
1. verify is similar to the REXX built-in function verify().
! 2. CASEANY, CASEIGNORE, CASELESS, and IGNORECASE are all synonyms for ANYCASE.
CMS
──VMC──word──┬────────┬──
             └─string─┘
Syntax Description: The first word specifies the virtual machine to send commands to.
An initial message is optional after the name of the virtual machine.
vmc sends messages over the Virtual Machine Communications Facility (VMCF) to a
service machine. It expects a single reply. The reply is deblocked to 80-byte records and
written to the output.
Streams Used: Records are read from the primary input stream and written to the primary
output stream. Null and blank input records are discarded.
Record Delay: vmc writes all output for an input record before consuming the input
record.
Premature Termination: vmc terminates when it discovers that its output stream is not
connected.
Examples: To send two commands to the SMART service machine to capture its help
information:
pipe literal next | vmc smart help | strip trailing | > smart commands a
Notes:
! vmc interoperates with vmclisten and vmcreply, but as it deblocks the reply into
! 80-byte output records, vmclient may be a more appropriate choice.
1. Remember that REXX continuation functionally replaces a trailing comma with a blank.
Also recall that when two strings are separated by one or more blanks, REXX concat-
enates them with a single blank. Use the concatenation operator (||) before the
comma at the end of the line if your portrait style has the stage separators at the left
side of the stage and the trailing blank is significant to your application.
!! CMS
! ──VMCDATA──
!! VMCPRECV Receive, X'0005'. The input record contains the message header only.
! The output record contains the data received appended to the parameter
! list.
!! VMCPREPL Reply, X'0007'. The reply data must be appended to the message
! header in the input record. The reply function is also performed when
! the function code is unchanged from the message header (VMCPSNDR).
!! VMCPRJCT Reject, X'000B'.
! Input Record Format: A 40-byte message header followed by optional reply data. The
! fields VMCMMID and VMCMUSER must remain unchanged from vmclisten, as they identify
! the message being responded to.
! Output Record Format: The 40-byte parameter list after the VMCF function has
! completed. For the receive function, data received are appended to the parameter list.
!! CMS
! ──VMCLIENT──┬──────┬──
!             └─word─┘
! Syntax Description: The word specifies the target virtual machine. When present, it is
! inserted in all input records.
! Operation: vmclient issues the VMCF diagnose for each input record. For identify it then
! outputs the parameter list. For other functions, it waits for the final response interrupt,
! which indicates that the transaction is complete, and then produces an output record that
! contains the message header and reply data, if any.
! Input Record Format: A VMCF parameter list (40 bytes) followed by data to transmit.
! The parameter list must have been filled in for function, user (unless an operand is
! specified), and (for send/receive) the length of the desired response buffer. The message
! identifier is reserved for CMS Pipelines use unless the identify function is specified.
! Supported function codes are send, sendx, send/receive, and identify. The first buffer
! and length (VMCPVADA and VMCPLENA) are set to reflect the balance of the record from
! position 41 and on. For a send/receive function, a sufficient buffer is allocated for the
! response, as specified by VMCPLENB. If VMCPLENB is zero, the current size of the response
! buffer, which is at least 4056 bytes, is used.
! Output Record Format: For identify, the parameter list; otherwise the message header
! (40 bytes) for the response interrupt followed by reply data, if any.
! Streams Used: Records are read from the primary input stream and written to the primary
! output stream. Null input records are discarded.
! Record Delay: vmclient does not delay the record. That is, the output record is produced
! before the input record is consumed; however, there may well be a temporal delay while
! the server processes the request. vmclient waits forever if the server neither rejects the
! message nor produces a response.
! Commit Level: vmclient starts on commit level -2. It ensures that the external interrupt
! infrastructure is available, that the virtual machine is authorized for VMCF, and then
! commits to level 0.
! Premature Termination: vmclient terminates when it discovers that its output stream is
! not connected; vmclient also stops if the immediate command PIPMOD STOP is issued or if a
! record is passed to pipestop.
! Notes:
! 1. When an operand is specified and no vmclisten stage is active, a specific authorize is
! used for the target unless another vmclient stage is active and has specified a different
! user; the authorization is upgraded to full in this case, as it will be by vmclisten.
!! CMS
! ──VMCLISTEN──┬─────────┬──
!              └─RECEIVE─┘
! Syntax Description:
! Output Record Format: The 40-byte message header followed by any sendx data or
! send data if RECEIVE is specified.
! Commit Level: vmclisten starts on commit level -2. It ensures that the external interrupt
! infrastructure is available, that the virtual machine is authorized for VMCF, and then
! commits to level 0.
! Premature Termination: vmclisten terminates when it discovers that its output stream is
! not connected; vmclisten also stops if the immediate command PIPMOD STOP is issued or if
! a record is passed to pipestop.
! Notes:
! 1. There can be at most one vmclisten stage active within a virtual machine at any one
! time.
! 2. If a specific authorize is active when vmclisten starts, it is upgraded to a general one.
! 3. When send/receive is indicated in the function code, the pipeline must generate an
! appropriate reply or reject and pass this to vmcdata.
! ──WAITDEV──devaddr──
! Syntax Description:
! devaddr The virtual device number of the device to wait on. The virtual device
! type must be terminal, graphic, unit record input, or channel to channel
! adapter.
! that a file has arrived. For channel to channel adapters, an attention indicates that the
! other side has made a channel command pending.
! The 12-byte channel status word is written to the primary output stream.
! 3. The pipeline is signalled to stop.
! No output is generated.
! Commit Level: waitdev starts on commit level -2000000000. It verifies that the virtual
! device exists and is of a supported type, and then commits to level 0.
! Premature Termination: waitdev terminates when it discovers that its output stream is
! not connected; waitdev also stops if the immediate command PIPMOD STOP is issued or if a
! record is passed to pipestop.
! Examples: To wait for a user to dial in and then display the input 3270 data stream until
! the user generates an attention interrupt:
! /* Wait for someone to dial in and then pass data to fullscr. */
! parse arg dev
! 'callpipe (end \ name WDIAL.REXX:10)',
! '\literal',
! '|waitdev' dev,
! '|stem how.',
! '|append literal', /* Be sure to shut the gate */
! '|g:gate',
! '\*:',
! '|g:',
! '|hole'
! If how.0=0
! Then exit /* Stopped. */
! say dev 'now dialed.'
! The first pipeline drains the input while waiting for the interrupt.
! The second pipeline passes the input to the screen. Should the user generate an attention
! interrupt, fullscr will produce a record which will cause it to terminate as there is no
! consumer.
! Notes:
! waitdev does not inspect the interrupt status.
! warp—Pipeline Wormhole
! warp passes data through a wormhole from a pitcher, which is a warp stage that is not a
! first stage, to a catcher, which is a warp stage that is a first stage; the records are passed to
! the pitcher’s primary output stream and also emanate from the catcher’s primary output
! stream.
! Within a pipeline set, there can be any number of pitcher stages, but at most one catcher
! stage by a particular name.
!
! ──WARP──word──
! Syntax Description: Specify the wormhole’s name as the only operand. The name is
! truncated after eight characters. Case is respected in wormhole names. The scope of a
! wormhole name is the pipeline set.
! Operation: When warp is first in the pipeline, it waits for records to fall out of the
! wormhole and passes them to its primary output stream.
! When warp is not first in the pipeline, it sends its input records through the wormhole. A
! pitcher will terminate without consuming the record if the catcher no longer exists or cannot
! write the record. The pitcher waits for the catcher to complete writing its output record; it
! then passes the record to its own primary output stream, ignoring end-of-file.
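! As a minimal sketch (the wormhole name tube is an assumption), a pitcher and a catcher
! can be set up within one pipeline set:

pipe (end ?) warp tube | console ? literal hello | warp tube | hole

! The second pipeline pitches the record through the wormhole; the catcher in the first
! pipeline emits it on its primary output stream, where console displays it.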
! Streams Used: Records are read from the primary input stream and written to the primary
! output stream. End-of-file is propagated from the catcher’s primary output stream to the
! pitchers’ primary input stream.
! Record Delay: A pitcher warp stage delays the record until it has been written by the
! catcher.
! Commit Level: warp starts on commit level -20. When it is first in the pipeline warp
! verifies that no other wormhole exists with the specified name and then commits to level 0.
! When it is not first in the pipeline, warp commits to level -1 to give a catcher time to start;
! if there then is no catcher, the stage terminates with an error message; otherwise it
! proceeds to commit level 0.
! Premature Termination: A catcher terminates when it cannot write its output; this causes
! the pitcher to terminate as well.
! warp also stops if the immediate command PIPMOD STOP is issued or if a record is passed
! to pipestop.
! Notes:
! 1. warp does not implement function that cannot be implemented with multistream pipe-
! lines, proper connection of streams, faninany, and, not least, sufficient stamina, but it
! is an easy way to gather data from a number of pipeline specifications.
! 2. warp sets the internal wait flag, which means that it cannot cause the pipeline set to
! go into a wait state; thus it cannot obscure a stall.
! 3. The workings of warp are similar to a UNIX named pipe. In fact, a pitcher can estab-
! lish a catcher by a different name to receive a reply, and then send the private
! catcher’s name in the message it pitches to a server that has a well known name.
! However, such practice is not considered pipethink.
! 4. warp can send records back to a previously defined pipeline specification, notably one
! issued by ADDPIPE; but it cannot send records to a pipeline specification that has not
! yet been issued.
! warplist—List Wormholes
! warplist writes a record for each active wormhole in the pipeline set. The record contains
! the name of the wormhole, padded with blanks on the right to eight characters.
!
! ──WARPLIST──
! Premature Termination: warplist terminates when it discovers that its output stream is
! not connected.
! Examples:
! pipe (end ?) warp x|hole ? warp y|hole ? warplist | console
! y
! x
! Ready;
──┬─WHILELABEL──┬────────┬───────────────────────┬──
  │             └─string─┘                       │
  └─STRWHILElabel──┬─────────┬──delimitedString──┘
                   └─ANYcase─┘
Syntax Description: A string is optional for whilelabel. The string starts after exactly
one blank character. Leading and trailing blanks are significant.
Operation: Characters at the beginning of each input record are compared with the
argument string. When ANYCASE is specified, case is ignored in this comparison. Any record
matches a null argument string. A record that is shorter than the argument string does not
match.
whilelabel copies records up to (but not including) the first one that does not match to the
primary output stream (or discards them if the primary output stream is not connected).
whilelabel passes the remaining input records to the secondary output stream.
Streams Used: Records are read from the primary input stream. Secondary streams may
be defined, but the secondary input stream must not be connected. whilelabel severs the
primary output stream before it passes the remaining input records to the secondary output
stream.
Record Delay: An input record is written to exactly one output stream when both output
streams are connected. whilelabel strictly does not delay the record.
Commit Level: whilelabel starts on commit level -2. It verifies that the secondary input
stream is not connected and then commits to level 0.
Examples: To select the ESD cards from the first text deck in a file, discarding any update
log in front of it:
/* FIRSTESD REXX */
'callpipe',
'*:',
'|frlabel' '02'x || 'ESD',
'|whilelabel' '02'x || 'ESD',
'|*:'
hole ensures that whilelabel does not terminate prematurely when it discovers that the
primary output stream is not connected and there is no secondary output stream defined.
The concatenation operator ensures that lines that contain just an asterisk are dropped, as
are lines that have a character other than a blank in column 2.
Notes:
! 1. CASEANY, CASEIGNORE, CASELESS, and IGNORECASE are all synonyms for ANYCASE.
! 2. pick can do what whilelabel does, and then some.
3. Remember that REXX continuation functionally replaces a trailing comma with a blank.
Also recall that when two strings are separated by one or more blanks, REXX concat-
enates them with a single blank. Use the concatenation operator (||) before the
comma at the end of the line if your portrait style has the stage separators at the left
side of the stage and the trailing blank is significant to your application.
¡             ┌──────────────────────────────────┐
¡ ──WILDCARD──┬─────────┬──┬────────────────────────────────┬┴──
¡             └─ANYCase─┘  │ ┌─BLANK────────┐               │
¡                          ├─BLANK──┴─┤ Charspec ├─┴────────┤
¡                          │ ┌─%────────────┐               │
¡                          ├─ANYCHaracter──┴─┤ Charspec ├─┴─┤
¡                          │ ┌─*────────────┐               │
¡                          └─ANYSTRing──┴─┤ Charspec ├─┴────┘
¡ ──┬────────────┬──delimitedString──
¡   └─inputRange─┘
¡ Charspec:
¡ ├──┬─xorc─┬──┤
¡ └─OFF──┘
¡ Syntax Description:
¡ The word delimiter, any character, and any string are collectively referred to as “meta
¡ characters”. The meta characters are case sensitive, irrespective of the case setting. This
¡ applies also to word delimiter characters in the input range.
¡ The meta characters must all be different; this is enforced during the parse of the operands.
¡ Meta characters may be defined more than once. The last occurrence of the definition of
¡ any particular meta character is the one used.
¡ Operation: Matching the pattern against the input range is conceptually done by first
¡ breaking the pattern and the input range into words delimited by the word delimiter meta
¡ character. There must be the same number of words in both for the record to match. For
¡ each word, again conceptually, the literals, any characters and any strings in the pattern are
¡ matched against the word in the input range. A pattern word that consists of any strings
¡ only will match any single character, but not the empty string (it is assumed that file
¡ names, types, and modes are not blank). In combination with the any character or literals,
¡ the any string can match the null string.
¡ The handling of word delimiters at the boundaries of the input range is not symmetrical:
¡ In the left margin, they are observed rigorously. When the pattern includes no leading
¡ word delimiters, matching will fail when the input range contains a leading word
¡ delimiter; conversely, when the pattern includes a leading word delimiter, matching
¡ will fail when the input range contains no leading word delimiter.
¡ In the right margin, on the other hand, trailing word delimiters are allowed at the end
¡ of the input range. Thus, the pattern /a/ matches an input range that contains “a” in
¡ the first column and any number of trailing blanks. If the pattern contains a trailing
¡ word delimiter, the input range must have trailing word delimiter(s).
¡ Streams Used: Records are read from the primary input stream. Secondary streams may
¡ be defined, but the secondary input stream must not be connected.
¡ Record Delay: An input record is written to exactly one output stream when both output
¡ streams are connected. wildcard strictly does not delay the record.
¡ Commit Level: wildcard starts on commit level -2. It verifies that the secondary input
¡ stream is not connected and then commits to level 0.
¡ Examples: To “swap” two meta characters, one of them must be disabled temporarily:
¡ ... | wildcard anychar off anystring % anychar * /*bcd%/ | ...
¡ Notes:
¡ 1. When processing the output from the LISTFILE command that has more than three
¡ words, be sure to restrict the range to the first three words (usually 1.19). Also be
¡ sure to specify all three words, possibly using an any string meta character for the
¡ mode.
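For instance, to select from LISTFILE output the lines that describe EXEC files with any
file name on any mode (a sketch; the exact layout depends on the LISTFILE options used):
... | wildcard 1.19 /* EXEC */ | ...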
z/OS
──WRITEPDS──pods──┬────────┬──┬────────────────────────────┬──
└─SHARED─┘ └─DELIMiter──delimitedString─┘
──┬────────┬──┬───────────┬──┬───────────────────────────┬──
├─COERCE─┤ └─PAD──xorc─┘ ├─INDELimiter───────────────┤
└─CHOP───┘ ├─ISPFSTATs─────────────────┤
└─USERDATA──delimitedString─┘
pods:
├──┬─dsname───────────────┬──┤
├─dsname(generation)───┤
├─'dsname'─────────────┤
├─'dsname(generation)'─┤
└─DDname=word──────────┘
Syntax Description:
pods Enclose a fully qualified data set name in single quotes; the trailing
quote is optional. Specify the DSNAME without quotes to have the
prefix, if any, applied. Append parentheses containing a signed number
to specify a relative generation of a data set that is a member of a
generation data group. To store members into an already allocated data set,
specify the keyword DDNAME= followed by the DDNAME already allo-
cated. The minimum abbreviation is DD=.
SHARED Allocate the data set shared rather than exclusive write. For a PDS you
must ensure that no other stage or user allocates the data set for write
concurrently.
DELIMITER Specify the delimiter string that separates members in the input stream.
The string must match the leading characters of an input record. The
following word of the input record is then taken to be the member name.
The default is /*COPY /, which has a trailing blank.
INDELIMITER User data to be stowed in the member is present in the delimiter record
as an unpacked hexadecimal string, which follows the member name.
ISPFSTATS Update or create status information associated with the member. This
information is kept in the user data field of the PDS directory entry. The
information is in the ISPF format.
USERDATA Specify the user data to be associated with all members created or
replaced. This information is kept in the user data field of the PDS
directory entry. The data need not be in ISPF format.
The options COERCE, CHOP, and PAD are used with fixed record format data sets. COERCE
specifies that the input records should be padded with blanks or truncated to the record
length of the data set. CHOP specifies that long records are truncated; input records must
be at least as long as the record length for the data set. PAD specifies the pad character to
use when padding the record. Input records must not be longer than the record length of
the data set when PAD is specified alone.
Input Record Format: The delimiter record contains the specified string beginning in
column one. The member name is specified after the delimiter string. If it is requested,
the following word contains the user data string. The first input record must be a delimiter
(so that the name for the first member can be specified).
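With the default delimiter, an input stream that creates two members might look like this
(the member names are illustrative):
/*COPY ALPHA
first record of member ALPHA
second record of member ALPHA
/*COPY BETA
only record of member BETA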
Commit Level: writepds starts on commit level -2000000000. It opens the DCB and
then commits to level 0.
Examples: To create the individual members of the TSO Pipelines help library:
/* Build Help library */
'PIPE',
'|< dd=fplparms(fplhelp) ',
'| unpack ',
'| writepds dd=fplhelp delimiter /%COPY% /'
Members in the composite help file are separated by delimiter records that contain %COPY%
in the left margin.
Notes:
1. pdswrite is a synonym for writepds.
2. Note that the delimiter is specified differently than in maclib.
CMS
┌─00E──────────────────────┐
──XAB──┼──────────────────────────┼──
├─devaddr──────────────────┤
└─┬────────┬──FILE──number─┘
└─READER─┘
Warning: xab behaves differently when it is a first stage and when it is not a first stage.
Existing data can be overlaid when xab is unintentionally run other than as a first stage.
To use xab to read data into the pipeline at a position that is not a first stage, specify xab
as the argument of an append or preface control. For example, |append xab ...|
appends the data produced by xab to the data on the primary input stream.
Syntax Description: Arguments are optional. The default is to read or write the external
attribute buffer of the virtual printer on address 00E. Specify a device address of a virtual
printer to reach one at some other address. Write the SPOOL file number after the keyword
FILE to process a particular file. Specify READER when the file is on the reader chain; the
file is assumed to be on the printer chain if READER is omitted.
Operation: An external attribute buffer is read from a virtual printer or from a file when
xab is first in a pipeline.
The buffer is replaced with the contents of the first input record when xab is not first in a
pipeline. The input file is shorted to the output after the buffer is set successfully.
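For example, to display the external attribute buffer of the virtual printer at the default
address 00E (assuming the printer has such a buffer):
pipe xab | console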
CMS
──XEDIT──┬────────────────────┬──
└─fn──┬────────────┬─┘
└─ft──┬────┬─┘
└─fm─┘
Warning: xedit behaves differently when it is a first stage and when it is not a first stage.
Existing data can be overlaid when xedit is unintentionally run other than as a first stage.
To use xedit to read data into the pipeline at a position that is not a first stage, specify
xedit as the argument of an append or preface control. For example, |append xedit
...| appends the data produced by xedit to the data on the primary input stream.
Syntax Description: The arguments are the file name, type, and mode of the file to
process. An asterisk (*) is used for components that are not specified. The default is the
current file.
Operation: When xedit is first in the pipeline, lines from the file are copied into the
pipeline starting at the current line pointer (which XEDIT advances to the next line after each
line is read). xedit suspends itself to let other stages run before it obtains each record from
the host interface. When xedit is resumed, it ensures that the primary output stream is still
connected. The complete line up to the LRECL is available when reading from XEDIT; the
VERIFY and ZONE settings have no effect. For variable record format files, XEDIT strips
trailing blanks down to a minimum of one blank. Position the file at the top before
reading to get all lines in the file.
When xedit is not first in a pipeline, lines in the file are replaced with records from the
pipeline starting at the current line or are added to the file if the current line is at the
end-of-file. (XEDIT advances to the next line after each line is replaced.) To replace the
entire contents of a file, ensure that the file is empty before using xedit to add lines to the
file. To add lines to the file, ensure that the current line is at the end-of-file. For fixed
record format files, records must be exactly as long as the XEDIT logical record length; use
pad to extend the record. You must have sufficient WIDTH for the longest record when
you write to a variable record format file; lines are truncated at the width by XEDIT when
SPAN is OFF. CASE, IMAGE, TABS, and TRUNC settings have no effect on lines appended to
or replaced in the file by xedit. The record is also copied to the primary output stream (if
it is connected).
Examples: Issue these commands from the XEDIT command line to count the number of
blank-delimited words in the file (or you could make an XEDIT macro with the
subcommands):
top
cms pipe xedit | count words | xmsg
The result is displayed as an XEDIT message. The file will be left positioned at the end of
the file.
To count the number of words in the first five lines (or the entire file if it contains fewer
than five lines):
:1 pipe xedit | take 5 | count words | xmsg
The file will be left positioned on line 6, because XEDIT moves the current line forward
after xedit has obtained the fifth line.
To position the file after the first line that is longer than 80 bytes:
:0 pipe xedit | locate 81
This example relies on the fact that locate will terminate without consuming its current
input record when it discovers that its output streams are no longer connected. As used
here, locate will discard records that are 80 bytes or shorter without trying to write to an
output stream. When locate writes the first line that is 81 bytes or longer, it discovers that
its primary output stream is not connected and terminates. xedit will terminate because
this severs its primary output stream. XEDIT will have advanced the read pointer to the
next line.
Notes:
1. An XEDIT session must exist or be set up before PIPE is issued to process a pipeline
specification. Queue or stack a pipeline command before invoking XEDIT:
/* Process reader files */
queue 'cms pipe cp q rdr * all | procrdr | xedit'
'XEDIT READER FILES S'
You can also issue the PIPE command from the XEDIT profile.
2. Lines are read and written in files in the topmost XEDIT ring; you cannot access files in
an XEDIT session that has invoked XEDIT recursively.
3. You cannot directly insert records through the interface used by xedit; for records
shorter than 253 bytes you might be able to use:
One final point: This example assumes that the file has variable record format. If the
file is fixed record format, the record that is passed to XEDIT must be as long as the
record length set for the file. Use chop and pad to coerce the record into the correct
format.
4. The RANGE and SCOPE settings control which records are read from the file or
replaced. Only lines with a selection level in the range between the limits set by the
DISPLAY XEDIT subcommand are made available or replaced by xedit when SCOPE is
DISPLAY. When a range is set by the RANGE XEDIT subcommand, only lines within
that range are available or replaced. It appears that the selection level for lines added
or replaced is set to the lowest value in the display range.
5. Set the file mode to S to be sure that a work file is not stored on disk accidentally.
6. Multiple xedit stages processing the same file are not recommended. XEDIT advances
the line pointer after a line is read or written; it is difficult to predict the order in
general.
7. XEDIT does not document which settings have any effect for the underlying interface.
──┬─XLATE─────┬──┬──────────────────────┬──
  └─TRANSlate─┘  ├─inputRange───────────┤
                 │    ┌──────────────┐  │
                 └─(──┴──inputRange──┴──)─┘
   ┌─────────────────────┐   ┌──────────────────┐
 ──┴┬───────────────────┬┴───┴┬────────────────┬┴──
    └─┤ default-table ├─┘     └─xrange──xrange─┘
default-table:
├──┬─INput──────────────────────────┬──┤
├─OUTput─────────────────────────┤
├─UPper──────────────────────────┤
├─LOWer──────────────────────────┤
└─┬─TO───┬──┬──────────┬──number─┘
└─FROM─┘ └─CODEPAGE─┘
Type: Filter.
Syntax Description: An input range or a list of input ranges in parentheses is optional as the first argument.
One or more translate tables are optional after the input ranges. When more than one
translate table is specified, the resulting table is the cumulative effect of the tables
specified, in the order specified. The upper case table is used if there are no arguments or
only input ranges and the secondary input stream is not defined. The neutral table is used
if there are additional arguments and no keyword is recognised for the default table. You
may use an upper to lower table or one of the three CMS translate tables (upper case, SET
INPUT, and SET OUTPUT). The tables for INPUT and OUTPUT default to the neutral one when
no such SET is in effect.
When LOWER is used, the lower case translation table is constructed as the inverse of the
CMS upper case translation table. If the upper case translate table translates two or more
characters to a particular upper case one, the character with the lower hex value is used in
constructing the upper to lower table.
Tables translating between codepage 500 and one of the national use codepages are
provided with the keywords TO and FROM. Figure 397 on page 682 shows the supported
codepages. The first column contains the codepage number; the second column contains
the base type (EBCDIC or ASCII); and the last column contains the country name (where it is
known).
You can modify the translation further with translation elements, which map a “from”
character or range into a “to” character or range. Each element of a from/to pair may be
specified as a character, a two-character hex code, or a range of characters (xrange).
Modifications to the starting translate table are made in the order they appear in the
argument list. A character can be specified more than once; the last modification is the one
that is used.
If a range is specified for the “from” part of a translation element and the “to” range is
shorter than the “from” range, the last part of the “from” range is translated to the last (or
only) character of the “to” range. That is, the last character is “sticky”. For example,
00-02 0-1 causes X'00' to be translated to 0; both X'01' and X'02' are translated to 1
(=X'F1').
Operation: If the secondary input stream is connected, a record is read from it before the
primary stream is processed. The first 256 characters of this first record are used as the
initial translate table, which is then modified as described above.
For each record on the primary input stream, xlate builds an output record with the same
length as the input record.
The contents of the input record are copied into a buffer. Input ranges are then processed
in the order specified in the first argument; the contents of a column are replaced by the
corresponding value from the translate table. Depending on the contents of the table,
multiple translates may change a character to a different character than the original
translation. A column outside all ranges is left unchanged.
Streams Used: If the secondary input stream is defined, one record is read and consumed
from it. The secondary input stream is severed before the primary input stream is
processed. The secondary output stream must not be connected.
Commit Level: xlate starts on commit level -2. It verifies that the secondary output
stream is not connected and then commits to level 0.
Premature Termination: xlate terminates when it discovers that its output stream is not
connected. The corresponding input record is not consumed.
Examples: To remove punctuation and other special characters except single quotes:
... | xlate *-* 40-7f blank ' ' | ...
Modifications replace the default translate table; they are not performed in addition to this
table:
pipe literal abcABC | xlate upper a z | console
zBCABC
Ready;
pipe literal abcABC | xlate e2a a z | console
zÂÄ âä
Ready;
Zoned decimal data are just like normal numbers, except that the sign is encoded in the
leftmost bits of the rightmost digit. Positive numbers are indicated by a “digit” that is “A”
through “I” for 1 through 9, respectively. Negative numbers are represented by “J”
through “R”. (And presumably one should interpret 1 through 9 themselves as unsigned.)
A zero digit is represented by the national use characters X'C0' and X'D0', respectively.
ZONE2DEC REXX, which is shown in Figure 398 on page 684, is a sample filter to convert
zoned decimal data in columns 1 through 5 to humanly readable format:
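As a much simplified sketch, xlate alone can recover the digits in columns 1 through 5,
though it discards the sign, which ZONE2DEC preserves:
... | xlate 1-5 A-I 1-9 J-R 1-9 c0 f0 d0 f0 | ...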
Notes:
1. For compatibility with the past, xlate A2E is equivalent to xlate from 819. xlate E2A
is equivalent to xlate to 819. These represent mappings between codepage 500, the
international base codepage, and codepage 819, ISO 8859 Latin Character Set 1
(Western Europe).
2. Use a placeholder range *-* when the complete record is to be translated with
modification to the neutral table; this is de rigueur when the first “from” range is a
valid range (for instance 40), but it is a good habit to use the placeholder even when
the first “from” range cannot be taken for a range.
3. Modifications to the default table specify the direct input to output mapping; they are
not performed after the default mapping. Beware, in particular, when the default table
translates between EBCDIC and ASCII.
4. Use a cascade of xlate stages to perform stepwise translation. You can also compute a
composite translate table by sending the neutral table (from xrange) through the
cascade of xlate stages. Then provide this table on the secondary input stream to a
single xlate stage. This may give a marginal improvement of performance for large
files.
¡ 5. The location of input ranges are computed based on the original input record. For
¡ example, translating blanks to a non-blank character does not change the position of a
¡ particular word.
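The composite table technique of note 4 can be sketched like this (the subroutine name
and the two-step cascade, upper casing followed by blanking out the special characters,
are illustrative):
/* Translate through a precomputed composite table */
'callpipe (end ? name COMPOSIT)',
'|*:',                       /* Input records                      */
'|x: xlate',                 /* Translate with the supplied table  */
'|*:',                       /* Write translated records           */
'?xrange',                   /* The neutral table                  */
'|xlate upper',              /* First step of the cascade          */
'|xlate *-* 40-7f blank',    /* Second step of the cascade         */
'|x:'                        /* Becomes the composite table        */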
CMS
──┬─XMSG─────┬──
└─XEDITMSG─┘
Operation: Lines are prefixed with msg and then directed to XEDIT as commands, so that the
text will be displayed on the XEDIT screen (as determined by the SET MSGLINE XEDIT
subcommand).
Examples: This XEDIT macro runs the arguments as a pipeline, writing its output as
XEDIT messages:
/* PIPE XEDIT */
address command 'PIPE' arg(1) '| xmsg'
Notes:
1. The number of message lines that can be displayed on the XEDIT screen is controlled
by the XEDIT command “set msgline n m overlay”. XEDIT reverts to line mode
output when more messages are queued than can be displayed on the screen.
2. XEDIT may not display the message immediately. Use the REFRESH XEDIT
subcommand to show the queued messages. You can turn the output from xmsg into refresh
requests and issue these with subcom XEDIT, but more craftiness is required to display
more than one message at a time and be sure the messages stay on the screen long
enough to be read. Experiment with this kind of pipeline:
/* Issue output as messages and wait */
'PIPE ...',
'| elastic', /* Buffer messages while we wait */
'| xmsg', /* Issue messages */
'| spec read read read read /command refresh/ 1', /* See below */
'| subcom xedit', /* Issue refresh command */
'| spec /+5/ 1', /* Make it a delay */
'| delay', /* Wait five seconds */
'| hole' /* Done; must stay connected */
The elastic stage decouples the pipeline segment shown from the rest of the pipeline,
which can proceed even though xmsg may not be ready to read more messages. The
messages are issued with xmsg. spec is used to drop four records (the four READ
keywords) and transform every fifth record into a refresh command which the subcom
stage issues to XEDIT. The subcom stage also passes the record to the next stage, spec,
which transforms it to the string “+5”. When the following delay stage reads this
record, it causes the pipeline to wait for five seconds. The xmsg stage also waits for
five seconds before reading more messages because none of the stages in this partial
pipeline delay the record.
──XPNDHI──┬────────┬──
└─number─┘
Operation: Records that do not have X'00' in column one are copied unchanged to the
output.
Descriptor records (having X'00' in the first position) in a data stream in the format of the
output from overstr are changed to make spaces between highlighted words highlighted as
well.
Streams Used: Records are read from the primary input stream and written to the primary
output stream. Null input records are discarded.
Premature Termination: xpndhi terminates when it discovers that its output stream is not
connected.
Notes:
1. xpndhi is intended to be used between overstr and buildscr when the resultant data are
to be displayed in reverse video on a 3270 terminal.
──XRANGE──┬────────────┬──
├─xrange─────┤
└─xorc──xorc─┘
Syntax Description: If the argument string is a single word, it is scanned for a range of
characters. Otherwise it must be two words, which specify the beginning character
followed by the ending character.
Note that the output record contains all values in the range:
pipe xrange i j | console
i«»‰–ijffi°j
Ready;
You can use two hexadecimal digits:
pipe xrange f0 f9 | console
0123456789
Ready;
The argument is assumed to be a selection stage; that is, it should specify a program that
reads only from its primary input stream and passes these records unmodified to its
primary output stream or its secondary output stream without delaying them.
──ZONE──inputRange──┬───────┬──┬─────────┬──word──┬────────┬──
                    └─CASEI─┘  └─REVERSE─┘        └─string─┘
Type: Control.
Syntax Description: An input range and a word (the name of the program to run) is
required; further arguments are optional as far as zone is concerned, but the specified
program may require arguments.
Streams Used: Records are read from the primary input stream; no other input stream
may be connected.
Commit Level: zone starts on commit level -2. It does not perform an explicit commit;
the specified program must do so.
Examples: To select records between those that end in “on” and “off”:
pipe ... | zone -3;-1 reverse between /no/ /ffo/ | ...
Notes:
1. Use asmfind or asmnfind as the argument to zone only if you understand the
implications of shifting the continuation column in the input records that the selection stage
reads.
2. The argument string to zone is passed through the pipeline specification parser only
once (when the scanner processes the zone stage), unlike the argument strings for
append and preface.
¡ 3. End-of-file is propagated from the streams of zone to the corresponding stream of the
¡ specified selection stage.
Return Codes: If zone finds no errors, the return code is the one received from the
selection stage.
¡                 ┌─FROM16bit───────────────────────┐  ┌────────┐
  ──┬─3277BFRA─┬──┼─────────────────────────────────┼──┴─number─┴──
¡   └─3270BFRA─┘  ├─┬─ROWCOL───┬──┬───────────────┬─┤
¡                 │ └─TOROWCOL─┘  └─WIDTH──number─┘ │
                  └─TO16BIT─────────────────────────┘
Syntax Description:
TO16BIT Convert a 12-bit buffer address from encoded form to binary. If the two
leftmost bits of both bytes are nonzero, a 12-bit binary address is
extracted from the rightmost six bits of the two bytes. If either of the
two bytes contain zeros in the two leftmost bits, the buffer address
remains unchanged.
WIDTH Specify the screen width. The default width is 80.
¡ The final numbers specify the columns where the buffer addresses begin. Up to ten
¡ numbers may be specified.
Premature Termination: 3277bfra terminates when it discovers that its output stream is
not connected.
Examples: To generate a set buffer address order for each line on a screen:
/* Build screen */
'PIPE (name 3277BFRA)',
'|stem data. ',
'|spec x11 1 number from 0 by 80 d2c 2.2 right 1-* next',
'|3277bfra 2',
'|join *',
'|var buffer'
Notes:
¡ 1. For compatibility with previous releases, the operands can also be specified as a single
¡ number followed by the keyword TO16BIT.
──┬─3277ENC─┬──
└─3270ENC─┘
Operation: A 64-byte record is written. The first byte contains the encoding for X'00';
the second byte contains the encoding for X'01'; and so on.
The encoding string can also be fed to xlate’s secondary input stream to be used to trans-
late a stream of records that contain an attribute value followed by data. Assume that the
input record contains three fields: the first two columns contain the buffer address (16-bit
addressing); the third position contains the attribute byte; and the remainder of the record
contains the data to be put into a field having the specified attribute at the specified
address.
The buffer address and the attribute byte are processed according to their natures. The
data part of the record is then translated to ensure it cannot interfere with the device
orders. Finally, the 3270 device orders are inserted into the data stream.
'callpipe (end ? name 3277ENC)',
'|*:', /* Input file */
'|3277bfra 1', /* Make buffer address "printable" */
'|x: xlate 3', /* Make attribute "printable" */
'|xlate 4-* 01-3f blank ff blank', /* Rub out other controls */
'|spec x11 1 1.2 next x1d next 3-* next', /* Orders */
'|*:', /* Write to output */
'?3277enc', /* Get encoding vector */
'|x:' /* Pass it to XLATE. */
¡ ──64DECODE──
¡ Type: Filter.
¡ Operation: 64decode produces one output record for each input record.
¡ Input Record Format: The encoded data are represented in EBCDIC. The records should
¡ be a multiple of four bytes in length, but this is not enforced.
¡ Premature Termination: 64decode terminates when it discovers that its output stream is
¡ not connected.
¡ ──64ENCODE──
¡ Type: Filter.
¡ Streams Used: Records are read from the primary input stream and written to the primary
¡ output stream. Null input records are discarded.
¡ Premature Termination: 64encode terminates when it discovers that its output stream is
¡ not connected.
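¡ Because the two stages are inverses, encoding and then decoding reproduces the original
¡ record (a sketch; the intermediate record contains the EBCDIC-encoded form):
¡ pipe literal Hello, world! | 64encode | 64decode | console
¡ Hello, world!
¡ Ready;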
This chapter contains a description of the syntax of spec in the form of annotated syntax
diagrams. Instead of presenting a single enormous syntax diagram followed by
explanation, this chapter intermixes syntax diagrams and their explanation.
Overview
spec builds output records based on the contents of input records, constant fields, and data
generated internally. spec supports any number of input and output streams.
The format of the output record is specified by a list of items, one for each field; thus the
list is called a specification list.
You can think of the specification list as a program that is run for each input record. Each
item in the list is a step in this program. Performing the action specified in an item is
called issuing the specification item. Specification items that are not issued are ignored.
spec ignores case in keywords, but respects case in stream identifiers, field identifiers, and
in literal strings.
spec can compute numeric expressions, compare fields and numbers, and issue items
! conditionally. It stores values in counters, which persist from record to record.
Though spec clearly does not require specialised hardware, it can be viewed as a software
simulation of a hardware architecture. (Indeed, all programs can.) Figure 399 shows the
parts of spec that you, the programmer of this abstract machine, can reach.
Rest assured that no hardware knowledge is required to use spec effectively. But you will
have a head start if you have past experience with the IBM 407 Accounting Machine,
which influenced the design of spec.
If you have experience with RPG, most of the concepts will be familiar as well; this is no
coincidence, because RPG, too, has its roots in accounting machines.
Concepts
The Cycle
To perform a cycle, spec issues the items in the specification list from left to right. At the
beginning of a cycle, spec synchronises the input streams it uses; that is, it peeks at all
input streams it uses before issuing any specification items. This ensures that a record is
available on all streams at the beginning of the cycle. At the end of a cycle, spec
consumes an input record from each of the streams it uses.
In addition to the streams, spec can store the previous record from the primary input
stream into a buffer known as the second reading station and make this available as a
: pseudo input stream on the next cycle. The primary input stream is also known as the first
: reading station.
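The cycle just described can be sketched as a conceptual model. The following Python is illustrative only (all names are invented, and the stop rule is simplified): each cycle peeks at every used input stream, issues the items, writes the output record, and only then consumes one record from each stream, saving the previous primary record as the second reading.

```python
# Conceptual sketch of the spec cycle (illustrative; not the implementation).
def run_cycles(streams, items):
    """streams: dict name -> list of records; items: callables on a state dict."""
    second_reading = None
    # Simplified stop rule: end as soon as any used stream is exhausted.
    while all(records for records in streams.values()):
        state = {name: records[0] for name, records in streams.items()}  # peek
        state["second"] = second_reading
        state["output"] = ""
        for item in items:            # issue the items from left to right
            item(state)
        yield state["output"]         # the record written at end of cycle
        second_reading = streams["primary"][0]
        for records in streams.values():
            records.pop(0)            # consume one record from each stream
```

On the second cycle the previous primary record is available as the second reading, which is what makes consecutive-record comparisons (control breaks) possible.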
Streams
The primary input stream is selected at the beginning of the cycle unless a SELECT item
precedes the first item that refers to an input range. With this proviso, spec uses all
streams that are mentioned in SELECT items, even when no data field is referenced from the
stream. BREAK, EOF, SELECT SECOND, and break() imply reading the primary input
| stream because they refer to the second reading, which is derived from the primary input
| stream.
At the beginning of each cycle, spec sets the primary output stream as the one to receive
the next record. This can be modified by the OUTSTREAM and NOWRITE items.
When the second reading is the only referenced stream, spec reads the first record from the
primary input stream directly into the second reading before taking the first cycle, leaving
the second record at the primary input stream.
When both the first and the second reading are used, spec performs an initial runin cycle
with the first record at the primary input stream and a null record at the second reading.
Any specification items that are subject to SELECT SECOND are ignored during the runin
cycle. This prevents a spurious subtotal.
When the EOF item or the second reading is used, spec performs a final runout cycle after
it comes to end-of-file on its input; the first reading contains a null record during this cycle
: (and the second reading contains the previous record on the primary input stream).
| If an EOF specification item is present, it will be the first issued during the runout cycle;
| otherwise the normal sequence is issued. In either case, any specification items that are
subject to SELECT FIRST are ignored during the runout cycle to prevent a spurious heading.
The scope of a field identifier is from the specification item where it is defined to the next
READ or READTO, or to the end of the specification list. Field identifiers cannot be defined
in specification items that are issued conditionally; that is, after EOF or BREAK; or within an
! IF or WHILE group.
When a field identifier is referenced in a BREAK item or by the built-in function break(),
a test is performed to determine whether a break is established for this level or not. This
is determined at the time the identified item is issued, not at the time of the test. The
break levels are ordered from “a” (lowest) through “z” and further from “A” through “Z”
(higher). End-of-file represents an even higher break level.
When a specification item is issued and it has an identifier that is used to control breaks,
the fields are compared as follows:
If SELECT SECOND is in effect for the item, the contents of the equivalent field on the
primary input stream are compared with the field specified in the item. That is, the
first and second readings are compared.
If some other select is in effect, the field is compared with the equivalent field in the
second reading station. Normally, the primary input stream would be selected to allow
consecutive records to be tested, but it is not an error to test some other input stream;
maybe it contains the same data as the corresponding record on the primary input
stream.
A break at some level also establishes the break at all lower levels, but the break must be
established before it takes effect. That is, the specification item that defines the field that
causes the break to be established must have been issued.
! Structured Data
! Fields in input and output records may be referenced as members of structures (see
! Chapter 6, “Processing Structured Data” on page 91).
! A data type may be associated with a member of a structure, specified by a single char-
! acter. The types recognised by spec are:
! Note that explicit conversion in a specification item disables a numeric member type
! including any scale; an output placement that specifies a member will contribute only the
! position and field length.
Counters
! Data, numbers as well as strings, are stored in counters. Counters are referenced by inte-
gers that are zero or positive. There is no arbitrary limit to the number of counters, and
you should not worry overly about which numbers you use. Do avoid, however, a large
range of unused counter numbers.
! The counters are initialised to contain a null value (which converts to 0 or a null string, as
! appropriate for the context) when spec starts; after that they are set only as a side effect of
evaluating expressions.
! A particular counter may hold a number at some time and be assigned a string at other
! times. That is, counters are not declared to contain a particular data type.
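The counter model can be illustrated with a small Python sketch (invented names; not the actual implementation): counters are referenced by non-negative integers, start out null, and are untyped.

```python
# Sketch of spec counters: non-negative integer numbers, null until set,
# and untyped -- the same counter may hold a number now and a string later.
class Counters:
    def __init__(self):
        self._values = {}                 # sparse: unused numbers cost nothing

    def set(self, n, value):
        if n < 0:
            raise ValueError("counter numbers are zero or positive")
        self._values[n] = value

    def get(self, n, numeric=True):
        # A null counter converts to 0 in numeric context, '' in string context.
        return self._values.get(n, 0 if numeric else "")
```

A sparse mapping also shows why a large range of unused counter numbers is merely wasteful bookkeeping rather than an error.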
Number Representation
Arithmetic is done in signed decimal floating point with thirty-one significant digits and an
| exponent range from -2G to 2G-1. (This is not the decimal floating point recently intro-
| duced in IBM’s Power Systems and System z machines.) For division, the significance of
the divisor is reduced to fifteen digits due to the System/390 hardware implementation.
This may affect the precision of division, integer division, and remainder operations.
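Python's decimal module can illustrate the 31-digit model (a sketch only; the reduction of the divisor to fifteen digits by the System/390 hardware is not reproduced here):

```python
from decimal import Decimal, getcontext

# Model spec's signed decimal floating point with 31 significant digits.
getcontext().prec = 31

third = Decimal(1) / Decimal(3)
assert len(str(third)) == 33          # "0." plus 31 significant digits

# A 32nd significant digit is rounded away:
assert Decimal(10) ** 31 + 1 == Decimal(10) ** 31
```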
¡ In addition to the numeric value, an exactness latch is associated with the contents of a
¡ counter. This latch can be tested with the exact() built-in function; it is set in several
! ways:
! When a string is converted to load into a counter. The counter will be exact unless
! more than thirty-one significant digits are present and a nonzero digit is dropped.
! Assignment from a member that has D type is always exact.
! Assignment from a member that has F type is also exact, even though the input data
! may have lost exactness at an earlier time. For example, 1.3 cannot be represented as
! an exact hexadecimal floating point number.
! Assignment from a member that has P type first sets the exactness from the sign of the
! input field. Exactness is lost when significant digits are truncated as part of the
! assignment.
! The sign nibble A represents an exact positive number; B represents an exact negative
! one; D represents inexact negative; and the three remaining ones all represent inexact
! positive.
! Exactness is exposed on output to a member that has the packed type; truncation of
! significant digits in this assignment will make the output field inexact even when the
! source number is exact.
When an expression is evaluated to control the issuing of specification items, a zero result
represents the Boolean value false; other values represent the Boolean value true.
Expressions
! spec supports numeric and string expressions.
spec computes expressions using data from input records, constants, and the contents of
! counters. The result can be stored in a counter or formatted for printing, or both. spec
! can compare the contents of counters, the contents of an identified input field, and literals
! in any combination. Comparison is numeric or by character, as determined by the oper-
! ator.
The result of an expression can alter the flow through specification items to issue items
! selectively (if/then/else) or repeatedly (while).
Syntax Recursion
While it may be a strange programming language, the arguments to spec support many
concepts that are normally associated with programming languages. Supporting general
constructs, such as nested conditional statements or expressions whose terms can be
expressions in their own right, invariably leads to recursive syntax diagrams.
But let them not intimidate you; the infinities are easily resolved. Consider this diagram:
Construct:
├──┤ Part ├──┬───────────────┬──┤
└─┤ Construct ├─┘
Such a syntax diagram should be read in this way: A construct must contain one part,
because the part is on the main track. After the part you have a choice; you can end the
construct right there by taking the straight exit line; or you can make a recursion by taking
one more construct, which would lead you to one more part, and so on for as long as you
wish.
Syntax Description
Syntax Overview
The arguments to spec contain options followed by a group of specification items. The
options must be first in the argument string.
Main Options
Keywords that control the general behaviour of spec are specified at the beginning of the
argument string.
MainOptions:
┌─STOP──ALLEOF─────┐
├──┼──────────────────┼──┬─────────────────────────────────┬──┤
└─STOP──┬─ANYEOF─┬─┘ ├─PRINTONLY──┬─letter─┬──┬──────┬─┤
! └─number─┘ │ └─EOF────┘ └─KEEP─┘ │
! └─COUNTERS──number────────────────┘
STOP Specify when spec should terminate. ALLEOF, the default, specifies that
spec should continue as long as at least one input stream is connected.
ANYEOF specifies that spec should stop as soon as it determines that an
input stream is no longer connected. A number specifies the number of
unconnected streams that will cause spec to terminate. The number 1 is
| equivalent to ANYEOF. STOP must be specified first.
PRINTONLY Output records will be suppressed unless the specified break level has
been established. The break level can be a letter (case is respected) or it
can be the keyword EOF, which specifies that records are written only
after the input reaches end-of-file or the condition specified with STOP is
satisfied.
!! KEEP Do not reset the output buffer when write is suppressed due to the break
! level. This allows the contents of several input records to be made
! present in a single output record.
!! COUNTERS Specify the largest counter number referenced as an array member. spec
! allocates at least counters 0 through the number specified.
Item Group
An item group is a list of specification items that are issued sequentially. Each item in a
group can be a plain item, an IF group, which is a group from which some items are issued
! and others are suppressed, or a WHILE group, which is a group that is issued repeatedly
! while an expression evaluates to true.
ItemGroup:
├──┬─┤ PlainItem ├──┬──┬───────────────┬──┤
├─┤ IfGroup ├────┤ └─┤ ItemGroup ├─┘
! └─┤ WhileGroup ├─┘
! If Groups and While Groups are known collectively as conditional groups. They can nest
| one inside another in any combination to a depth of sixteen. There is no ambiguity about,
| for example, which IF an ELSE belongs to, because each nested group is terminated with a
| distinct keyword (ENDIF or DONE).
| Conditional groups cannot contain input sources that have identifiers, as it is indeterminate,
| in general, whether the field has been defined or not.
If Group
A group of specification items from IF to the matching ENDIF is called an IF group.
Depending on the result of evaluating an expression, some of the specification items are
issued and others are ignored. Because an IF group can contain several tests, it is similar
to the REXX Select instruction.
IfGroup:
├──IF──┤ Expression ├──THEN──┤ ItemGroup ├──
   ┌───────────────────────────────────────────────┐
───┬─────────────────────────────────────────────┬─┴───
└─ELSEIF──┤ Expression ├──THEN──┤ ItemGroup ├─┘
──┬─────────────────────┬──ENDIF──┤
└─ELSE──┤ ItemGroup ├─┘
| ELSE IF is also recognised, but this starts a new IF group that is nested within the ELSE
| group. The nested IF group could be followed by other specification items as part of the
| ELSE group.
! While Group
! A group of specification items from WHILE to the matching DONE is called a WHILE group.
! Depending on the result of evaluating an expression, the specification items in the group
! are either skipped or issued repeatedly until the expression evaluates to false.
! Ensure that the loop always terminates. If it does not, PIPMOD STOP ACTIVE will terminate
! spec.
!
! WhileGroup:
! ├──WHILE──┤ Expression ├──DO──┤ ItemGroup ├──DONE──┤
Plain Item
A plain item can describe a field in an output record; it can control input and output
streams; it can react to control breaks; it can specify a pad character; it can specify an
! expression to evaluate for its side effects; and it can establish or disable a qualifier for use
! in specifying members of structures.
PlainItem:
├──┬─┤ DataField ├────────────────────────────────────────────┬──┤
├─┤ StreamControl ├────────────────────────────────────────┤
├─┤ BreakControl ├─────────────────────────────────────────┤
├─PAD──xorc────────────────────────────────────────────────┤
├─SET──┤ expression ├──────────────────────────────────────┤
! │ ┌─BOTH───┐ │
! └─Qualify──┬─────┬──┼────────┼──┬─identifier──┬────────┬─┬─┘
! └─ALL─┘ ├─INput──┤ │ └─number─┘ │
! └─OUTput─┘ ├─-──────────────────────┤
! └─.──────────────────────┘
PAD Specify the character to insert in the output record between output fields.
The default pad character is the blank.
SET Specify an expression to be evaluated for its side effects. (That is, to set
variables.) The result is discarded.
!! QUALIFY Specify the qualifier for all streams or the currently selected stream of
! the specified type, or both input and output. The number specifies the
! beginning column for the structure; column 1 is the default. You must
! specify a number when the next item is an input source that also is a
! single column. A period or a hyphen disables any active qualifier.
! Specify BOTH explicitly when the identifier scans as one of the
! input/output keywords.
! The QUALIFY item can be specified anywhere in the item list; in partic-
! ular, a member need not follow, as it must when used in an inputRange.
! Notes:
! 1. Using a qualifier that applies to a particular input stream is definitely useful when
! several members are referenced in a number of specification items interspersed with
! SELECT items.
Stream Control
The stream control items read or write records during the cycle; they select the source of
data for subsequent input fields; and they select the stream to which subsequent writes will
be directed.
StreamControl:
├──┬─SELECT──┬─stream─┬─┬──┤
: │ ├─FIRST──┤ │
│ └─SECOND─┘ │
├─READ───────────────┤
├─READSTOP───────────┤
├─OUTSTREAM──stream──┤
├─WRITE──────────────┤
└─NOWRITE────────────┘
SELECT Select the source for input data in subsequent field items. Specify the
stream identifier for the input stream you wish to process.
: Or specify one of the keywords FIRST and SECOND to use the record in
: that particular reading station. SELECT FIRST is a convenience for
: SELECT 0.
| NOWRITE Suppress the write at the end of the cycle. NOPRINT is a synonym for
| NOWRITE.
Break Control
Specification items after a break control are issued only if a break level has been estab-
lished at least to the level specified.
Break items are not allowed in IF groups. The break items are a convenience; they can
also be formulated with IF.
BreakControl:
├──┬─BREAK──┬─letter─┬─┬──┤
: │ ├─FIRST──┤ │
: │ └─EOF────┘ │
└─EOF───────────────┘
BREAK The subsequent specification items are issued only if a break has been
letter established at the specified level or higher. Otherwise all items up to the
next break control are ignored. Further break items are allowed.
:: BREAK FIRST The subsequent specification items are issued only on the runin cycle.
: Further break items are allowed.
:: BREAK EOF The subsequent specification items are issued only on the runout cycle.
: Further break items are allowed.
EOF The remainder of the specification list is issued only on the last cycle
performed by spec; it is ignored on other cycles. No further break items
are allowed in the specification list.
Data Field
A data field inserts a field in the output record. You must specify an input source and an
output placement; you may also specify conversion of the contents of the field from one
representation to another one; and you may specify that the input field is stripped of blanks
before it is converted (or before it is placed in the output, if you specify no conversion).
A data field is suppressed if it refers to an input range that is not present in the record and
the output placement does not specify an explicit length for the output field; the item is
then ignored.
DataField:
├──┤ InputSource ├──┬───────┬──┬────────────────┬──
└─STRIP─┘ └─┤ Conversion ├─┘
──┬─┤ OutputPlacement ├─┬──┤
└─.───────────────────┘
¡ STRIP Strip the input field of leading and trailing blanks for subsequent use
¡ within this specification item. Other specification items that reference
¡ this specification item through a field identifier use the original contents
¡ of the input range.
. (A period.) Do not insert the field into the output record. The period
makes sense only if the field originates in an input record and a field
identifier is specified; specifying a period with other items has the same
| effect as omitting them. (Well, almost; any side effects of evaluating a
| PRINT expression would still occur.)
Input Source
The input source specifies where data come from. Data can originate in an input record;
they can be the result of evaluating an expression; they can be a literal constant; or they
can be generated internally in spec.
InputSource:
├──┬─┬─────────┬──inputRange─────────────────────────────┬──┤
│ └─letter:─┘ │
├─NUMBER──┬───────────────┬──┬─────────────┬──────────┤
│ └─FROM──snumber─┘ └─BY──snumber─┘ │
├─TODclock────────────────────────────────────────────┤
├─ID──letter──────────────────────────────────────────┤
├─PRINT──┤ Expression ├──┬──────────────────────────┬─┤
│ └─PICture──word──┬───────┬─┘ │
¡ │ └─ROUND─┘ │
└─delimitedString─────────────────────────────────────┘
TODCLOCK Data originate in spec. Eight bytes are provided containing the value of
the time-of-day clock at the beginning of the cycle. The field contains a
64-bit binary counter; the thirty-first bit is incremented slightly less often
than once a second. Refer to z/Architecture Principles of Operation,
SA22-7832, for a description of the time-of-day clock.
The implied field length is eight bytes.
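As a hedged illustration of interpreting such a value (invented names; leap seconds ignored): numbering bits 0 through 63 from the left, bit 51 represents one microsecond and the epoch is 1900-01-01 00:00 UTC, so bit 31 ticks every 2**20 microseconds, about 1.05 seconds.

```python
from datetime import datetime, timedelta

# Sketch: convert a 64-bit TOD clock value to a timestamp.
# Bit 51 (bits numbered 0..63 from the left) is one microsecond.
TOD_EPOCH = datetime(1900, 1, 1)

def tod_to_datetime(tod):
    return TOD_EPOCH + timedelta(microseconds=tod >> 12)
```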
¡ ID Refer back to the contents, before stripping, of a previously defined input
range.
The implied field length is the length of the input range.
PRINT Compute an expression and format the result. You may specify PICTURE
to supply the picture under which the result is presented. Refer to
“Pictures” on page 720 for the syntax of a picture. The default picture
¡ is eleven digits with a drifting minus sign. When a picture is specified,
¡ the expression must evaluate to a numeric result; when the picture is
¡ omitted, a string expression is also acceptable. When a picture is used,
¡ the value is truncated unless ROUND is specified.
The implied field length is the length of the picture (ignoring a V, if one
is specified).
delimitedString The delimited string represents a constant in all cycles. The constant
can be specified as a string between delimiter characters, as a
hexadecimal literal, or as a binary literal.
The implied field length is the length of the literal.
Conversions
The conversions are modelled on the REXX built-in functions to convert between binary
and other formats.
The conversion routine names are of the form x2y, where x represents the data format
before conversion and y represents the format desired for the result.
The REXX function names may be confusing; if so, the confusion is carried over into spec.
The confusion stems from the fact that the format C (for character) is not usually printable
characters, rather it is the internal form of the data, as represented inside the computer.
Thus, d2c converts from printable decimal to four bytes of binary data. Possibly, you
would have expected it to be the other way round, but there you are; REXX works the same
way.
! Note: Explicit conversion disables any numeric type specified for a member, both in the
! input source and in the output placement. In general, explicit conversion is incompatible
! with structured data.
conversion:
¡ ├──┬─C2D(8)─────────────────────┬───(1) ─┤
¡ ├─C2T──┬───────────────────┬─┤
¡ │ │ ┌─0───────┐ │ │
¡ │ └─(──┼─snumber─┼──)─┘ │
¡ │ └─*───────┘ │
! ├─C2U(8)─────────────────────┤
¡ ├─D2C(8)─────────────────────┤
¡ ├─T2C──┬───────────────────┬─┤
¡ │ │ ┌─0───────┐ │ │
¡ │ └─(──┼─snumber─┼──)─┘ │
¡ │ └─*───────┘ │
! ├─U2C(8)─────────────────────┤
├─f2t────────────────────────┤
├─P2t(snumber)───────────────┤
└─f2P(snumber)───────────────┘
Note:
1 The conversion routines are B2C D2C F2C I2C P2C T2C U2C V2C X2C C2B C2D C2F C2I
C2P C2T C2U C2V C2X and selected direct conversions.
These routines convert from internal representation (binary) to a format that can be printed
or displayed on a terminal:
C2B Convert bytes to bit string (unpack bytes to bit strings). For each character in
the input field, the result has eight bytes containing the character 0 or 1
(X'F0' or X'F1').
C2D Convert a binary integer using two’s complement notation for negative
¡ numbers to a character string. C2D operates on 32-bit integers; C2D(8) operates
¡ on 64-bit integers. This is the format used for fixed point integers on IBM
System/360 and its descendants. Numbers must be within the 32-bit (64-bit)
precision of the fixed point instruction set; input fields longer than 4 (8) char-
| acters are allowed only if the leading characters represent a sign extension that
| can be stripped down to 4 (8) bytes. For C2D, the conversion result is 11
characters, aligned to the right, padded with blanks on the left. (It has no
¡ leading zeros.) C2D(8) produces a field that is large enough to contain the
¡ number, but, unlike C2D, no longer. That is, the number is aligned to the left; be
¡ sure to specify an output range with right alignment in tabular reports. Nega-
tive numbers have a leading minus; other numbers are unsigned.
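The 32-bit C2D conversion might be modelled like this (a Python sketch with invented names; stripping of a sign extension on over-long input fields is not modelled):

```python
# Sketch of C2D: big-endian two's-complement binary to an 11-character,
# right-aligned decimal field with no leading zeros.
def c2d(field: bytes) -> str:
    value = int.from_bytes(field, "big", signed=True)
    return str(value).rjust(11)
```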
C2F Convert a doubleword of IBM System/360 long floating point representation
: (hexadecimal floating point) to scientific notation. The input field must be
between two and eight bytes long. A field that is shorter than eight bytes is
padded on the right with binary zeros. Exact zero (with either sign) is
converted to the single character, 0. The conversion result for other numbers
is a 22-character string containing the number in scientific notation. It has a
leading sign, one significant digit before the decimal point, and a 15-digit frac-
tion. Numbers with an absolute value from 1 to 10 have four trailing blanks.
Numbers numerically 10 and larger are represented with a positive exponent;
numbers numerically less than 1 are shown with a negative exponent.
C2I Convert from z/OS Julian date format to ISO (or “sorted”) timestamp format.
The input field may contain between three and seven characters. Each halfbyte
contains a decimal digit, except for a sign (which must be X'F') in the right-
most four bits of the third or fourth byte. When the sign is in the third byte,
the year is in the century beginning with the year 1900. When the sign is in
the fourth byte, the first byte contains the number of centuries beyond 1900.
Up to three bytes containing a timestamp are allowed after the sign. The
output field contains at least eight characters for the date, expressed in ISO
format: yyyymmdd. Additional characters are appended when the input field
contains a timestamp.
¡ C2T Convert an eight-byte binary time-of-day clock value to an ISO timestamp. A
¡ time zone offset in seconds can be specified as a signed number in paren-
¡ theses; specify a positive number east of Greenwich. The default is 0.
¡ Specify * to use the time zone offset that CP stores on diagnose 0.
C2P Convert a packed decimal number to a printable form. The input field must
contain a valid IBM System/360 packed decimal number (but it is not
restricted in length). A scale factor in parentheses may be appended to the
name of the conversion routine. When no scaling is specified, the output field
contains a sign (plus or minus) and an integer number. When scaling is
specified, a positive scaling indicates the number of decimal places; a negative
scaling represents additional orders of magnitude in the number. The output
field contains a sign (plus or minus), the integer part of the number (if any), a
decimal point, and the decimal fraction (if any).
! C2U Convert an unsigned binary integer. Except for sign, processing is identical to
! the corresponding variant of C2D.
C2V Select substring. The input field is a varying length character string. It
consists of a halfword that contains the length in binary (unsigned); this is
followed by the characters of the string. The string length may be in the range
0 through 65535 inclusive. The conversion result begins with the third byte of
the input field; it has as many characters from the input field as the halfword
string length specifies. It is an error if the string length is larger than the
length of the input field minus two.
C2X Convert bytes to hexadecimal (unpack hex). The conversion result has two
characters (0 through 9 and A through F) for each input character.
These routines convert a “readable” representation to the internal (binary) representation:
B2C Pack bits. The input field must consist entirely of the characters 0 and 1; the
length must be a multiple of 8. The result has one character for each 8 char-
acters in the input field.
D2C Convert from signed decimal to binary. The input field must contain a
decimal integer that may be signed or unsigned. Leading and trailing blanks
are allowed; blanks are allowed between the sign and the number. The result
¡ is 4 bytes (8 bytes for D2C(8)) with the number in binary using two's comple-
ment notation for negative numbers. The number must be within the 32-bit
¡ (64-bit) precision of IBM System/360 integer arithmetic.
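The opposite direction, D2C, can be sketched similarly (invented names; the blank tolerance described above is modelled crudely):

```python
# Sketch of D2C: printable signed decimal to 4 bytes of big-endian
# two's-complement binary.  Blanks around the number and between the
# sign and the digits are tolerated.
def d2c(field: str) -> bytes:
    text = field.strip().replace(" ", "")
    return int(text).to_bytes(4, "big", signed=True)
```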
F2C Convert from decimal to floating point binary. The input field must contain
the external representation of a floating point number which can be signed and
can have an exponent. The result is 8 bytes containing the number in the
: format of a long floating point number in IBM System/360 hexadecimal
: floating point notation.
I2C Convert from ISO timestamp to z/OS Julian date format. The length of the
input field must be even, between six and fourteen. When the input field is six
characters, three bytes of output are generated. When the input field is eight char-
acters or longer, the first four characters specify the year.
P2C Pack a decimal number. The input field consists of an optional sign, an
optional integer, and an optional decimal fraction. Blanks are allowed before
and after the number, and between the sign and the number. A scale factor in
parentheses may be appended to the name of the conversion routine. When no
scale factor is present, the packed number contains all digits from the input
field, right aligned; if present, a decimal point is ignored. When the scale
factor is zero, the packed number contains only the integer part of the input
field. When a negative scale factor is specified, that number of integer digits
are truncated on the right (the fraction is ignored). When a positive scale
factor is specified, the fraction is truncated or padded with zero on the right to
this number of digits.
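The packed-decimal layout can be sketched in Python (illustrative; scale factors, fractions, and the alternative sign nibbles are not modelled, and the conventional System/360 preferred signs C for plus and D for minus are assumed):

```python
# Sketch of P2C/C2P: each halfbyte holds a decimal digit; the rightmost
# halfbyte is the sign (here C = plus, D = minus).
def p2c(number: str) -> bytes:
    text = number.strip()
    sign = 0xD if text.startswith("-") else 0xC
    nibbles = [int(d) for d in text.lstrip("+-")] + [sign]
    if len(nibbles) % 2:                  # pad on the left to whole bytes
        nibbles.insert(0, 0)
    return bytes(16 * a + b for a, b in zip(nibbles[::2], nibbles[1::2]))

def c2p(field: bytes) -> str:
    nibbles = [n for byte in field for n in (byte >> 4, byte & 0xF)]
    sign = "-" if nibbles[-1] in (0xB, 0xD) else "+"
    return sign + "".join(str(d) for d in nibbles[:-1])
```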
¡ T2C Convert an ISO timestamp to an eight-byte time-of-day clock value. A time zone offset
¡ in seconds can be specified as a signed number in parentheses; specify a posi-
¡ tive number east of Greenwich. The default is 0. Specify * to use the time
¡ zone offset that CP stores on diagnose 0.
! U2C Convert from decimal to unsigned binary. Except for the lack of sign and the
! extended number range, processing is identical to the corresponding D2C
! variant.
V2C Prefix field length. The result consists of an unsigned halfword (16 bits) with
the length of the input field, followed by the contents of the input field. The
longest acceptable input field is 65535 bytes.
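V2C and its inverse C2V can be sketched as follows (invented names; the halfword is big-endian, as on System/360):

```python
# Sketch of V2C/C2V: a varying-length string is an unsigned big-endian
# halfword length followed by the characters of the string.
def v2c(field: bytes) -> bytes:
    if len(field) > 65535:
        raise ValueError("longest acceptable input field is 65535 bytes")
    return len(field).to_bytes(2, "big") + field

def c2v(field: bytes) -> bytes:
    length = int.from_bytes(field[:2], "big")
    if length > len(field) - 2:
        raise ValueError("string length larger than field length minus two")
    return field[2:2 + length]
```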
X2C Convert pairs of hexadecimal digits to single characters. The input field must
contain an even number of hex characters. As for REXX hexadecimal
constants, blanks are allowed at byte boundaries internally in the input field,
but not at the beginning or the end. The result is a character string with one
character for each two hex characters in the input field.
Some conversions are supported directly between printable formats, for example X2B. This
table summarises the supported combinations. A blank indicates that the combination is
not supported.
D X B F V P I T U
D D2X D2B
! X X2D X2B X2F X2V X2P X2I X2T X2U
! B B2D B2X B2F B2V B2P B2I B2T B2U
F F2X F2B
V V2X V2B
P P2X P2B
I I2X I2B
T T2X T2B
! U U2X U2B
Composite conversion (x2y) is performed strictly via the C format; that is, x2C followed by
C2y.
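The via-C rule can be demonstrated with a sketch (invented names; c2d here yields the numeric value rather than spec's formatted 11-character field):

```python
# Sketch of composite conversion strictly via the internal (C) format:
# X2D is performed as X2C followed by C2D.
def x2c(field: str) -> bytes:
    return bytes.fromhex(field)       # blanks at byte boundaries allowed

def c2d(field: bytes) -> int:
    return int.from_bytes(field, "big", signed=True)

def x2d(field: str) -> int:
    return c2d(x2c(field))            # x2y == x2C then C2y
```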
Output Placement
The output placement specifies where a field is stored in the output record. It consists of a
position and a placement option, which specifies the alignment of the data within the field.
When no explicit length is specified for the output field, the length of the data to be stored
is used as the size of the output field.
OutputPlacement:
├──┬─┬─Next──────┬──(1)──┬───────────┬─┬──┬────────┬──┤
│ ├─NEXTWord──┤ └─.──number─┘ │ ├─Left───┤
│ └─NEXTField─┘ │ ├─Centre─┤
├─number─────────────────────────┤ └─Right──┘
├─range──────────────────────────┤
! ├─Member──identifier─────────────┤
! └─┤ CompOut ├────────────────────┘
CompOut:
¡ ├──(──┤ Expression ├──┬──────────────────────────────────────────┬──)──┤
! └─,──┤ Expression ├──┬───────────────────┬─┘
! └─,──┤ Expression ├─┘
Note:
1 There is no blank between the keyword, the period, and the number.
NEXT Append the field to the end of the output record built so far. You may
append a period and a number to specify an explicit field length.
NEXTWORD Append a blank to the output record if it is not null. Then append the
NW
field to the end of the output record built so far. You may append a
period and a number to specify an explicit field length. NWORD is a
synonym; it can be abbreviated to NW.
NEXTFIELD Append a horizontal tabulate character (X'05') to the output record if it
NF
is not null. Then append the field to the end of the output record built
so far. You may append a period and a number to specify an explicit
field length. NFIELD is a synonym; it can be abbreviated to NF.
number Specify the beginning column of the output field. The field will be the
size of the source data to be stored.
range Specify the extent of the output field. If the field length is different
from the size of the data to be stored, the data are padded with blanks or
truncated on the right, unless a placement option is specified.
!! MEMBER Specify the identifier for the member that defines the output field and its
! type. For members that are typed D, F, or P and without picture,
! explicit conversion, or STRIP, the input source is converted automatically
! to the requested output type. When the input is not a typed member, the
! character input field is converted to a counter and thence to the
! requested output format. Direct automatic conversion between the three
! types is also applied when the input source is a typed field and no
! explicit conversion or STRIP is specified, except that conversion to or
! from packed decimal is performed via assigning the value to a counter,
! which may truncate the significant digits to 31.
!¡ Expression The parenthesised expression specifies the column number and optionally
! the field width and placement. The first expression must evaluate to a
! positive integer; the second expression must evaluate to an integer that is
! zero or positive; and the third expression must evaluate to a string that
! matches one of the placement options described below. A length of zero
! specifies that the length is the default length for the particular input
! source.
! When the third expression is present, the placement option described
! below is ignored.
¡ The parentheses are required.
! These two output placements are identical, except that the second one
! will take somewhat longer than the first one:
! ... 1.8 c ...
! ... (1, 8, "c") ...
An optional keyword specifies the placement of the source field within the output field;
this is called a placement option. When a placement option is specified, the input field
after conversion (and thus after the default length of the output field is determined) is
stripped of leading and trailing blank characters unless the conversion is D2C, F2C, I2C,
! P2C, U2C, or V2C. This field is then inserted in the output field, truncated or padded with
the pad character, as specified by the keyword used:
LEFT The field is aligned to the left of the output field, truncated or padded on
the right with pad characters.
CENTRE The field is loaded centred in the output field, truncated or padded on
both sides with pad characters. If the field is not padded equally on both
sides, the right side gets one more pad character than the left side. If
the field is not truncated equally on both sides, the left side loses one
more character than the right side.
CENTER is also recognised.
RIGHT The field is aligned to the right of the output field, truncated or padded
on the left with pad characters.
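For example, assuming an input record containing abc, these otherwise identical
specification items place the data differently in a nine-column output field (an
illustrative sketch; b represents a blank):
   ... 1-* 1.9 left ...     gives abcbbbbbb
   ... 1-* 1.9 centre ...   gives bbbabcbbb
   ... 1-* 1.9 right ...    gives bbbbbbabc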
Expression
| An expression contains terms that are combined with operators. All REXX numeric opera-
tors are supported, except for the exponentiation operator (**) and the exclusive OR oper-
! ator (&&). The REXX concatenate operator (||) is also supported, but the blank operator is
! not (concatenate with blank). In addition, several operators are borrowed from the C
language, as is the notion that assignment is an operator.
A few diagrams are required to show the correct precedence of the conditional operator,
which selects one of two expressions depending upon the result of evaluating a third
expression. The precedence of the remaining operators is not shown by diagrams, but by
the order in which they are described.
Blanks are ignored between syntactic entities in the part of an expression that is enclosed
in parentheses. But an expression that contains no parentheses cannot contain blanks,
because the first blank will mark the end of the expression. Enclose the entire expression
in parentheses to be able to use blanks liberally.
| spec parses expressions differently than most other components of CMS/TSO Pipelines.
| For example, a word need not be blank-delimited (in some contexts, it must not be
| followed by a blank); an operator or a separator delimits it just as well, whereas in other
| stages, or outside an expression, the operator or separator would be included in the word.
Expression:
├──┤ AssignmentExpression ├──┬───────────────────┬──┤
                             └─;──┤ Expression ├─┘
The semicolon operator evaluates its left hand operand and then its right hand one. The
result is the left hand operand; the right hand result is discarded. Thus, the semicolon
operator can be used to reset a counter after it is printed.
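For example, this specification item (an illustrative sketch) writes the current value of
counter 0 and then resets the counter to zero; the result of the assignment is discarded:
   ... (#0; #0:=0) 1 ...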
Assignment Expression
The operand of the semicolon operator can be a conditional expression (which is defined
later); or it can be an assignment expression.
An assignment expression evaluates the first operand and then assigns a value to the
specified counter. How this is done depends on the particular assignment operator used.
AssignmentExpression:
├──┬─┤ CondExpr ├─────────────────────────────────────────────┬──┤
└───┤ Counter ├────┤ Aoperator ├──┤ AssignmentExpression ├─┘
Counter:
! ├──┬─#number─────────────┬──┤
!    └─#(─┤ Expression ├─)─┘
! After the pound sign, specify the number of the counter to receive a value or an integer
! expression in parentheses to compute the number of the counter.
:= Assignment. The counter is assigned the value of the right hand side.
Nota bene: A colon is used to distinguish the assignment operator from the
relational equality operator.
+= Increment. The right hand operand is added to the counter.
-= Decrement. The right hand operand is subtracted from the counter.
*= The counter is multiplied by the right hand operand.
/= The counter is divided by the right hand operand.
%= The counter is divided by the right hand operand. The result is truncated to an
integer.
//= The counter is assigned the remainder after division by the right hand operand.
x//y == x-((x%y)*y)
! ||= The string representation of the right hand operand is appended to the contents
! of the counter, which is converted to a string, as required.
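For example, assuming field a has been associated with the first word of the input record
and that word is numeric in all records, this sketch accumulates a running total in
counter 0 and writes it right-aligned (the period specifies that field a itself is not
written):
   ... a: w1 . (#0+=a) 1.10 right ...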
Conditional Expression
The conditional expression evaluates a binary expression. The question mark operator is a
ternary operator. It evaluates its first operand and then one of its two remaining operands,
which are separated by a colon. When the first operand evaluates true, the operand to the
left of the colon is evaluated and the other one is ignored; when the result is false, the
colon’s left operand is ignored and the right hand one is evaluated.
CondExpr:
├──┤ BinaryExpression ├──
──┬──────────────────────────────────────────────────────────┬──┤
└─?──┤ AssignmentExpression ├──:──┤ AssignmentExpression ├─┘
¡ The two assignment expressions must be of the same type. The result takes on this type.
¡ Two field identifiers are taken to be numeric; one field identifier is taken to be a string
¡ reference when the other expression is a string expression. The built-in function string()
¡ can be used to cast a field identifier into a string.
¡ When one of the assignment expressions is a string and the other one is numeric, that
¡ number will be converted to a string.
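For example, both branches of this conditional expression are strings, so the result is a
string (an illustrative sketch):
   ... (#0//2=0 ? 'even' : 'odd') 1 ...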
Binary Expression
BinaryExpression:
├──┬───────────────┬──┤ Term ├──
└─┤ Uoperator ├─┘
──┬─────────────────────────────────────┬──┤
└─┤ Boperator ├──┤ BinaryExpression ├─┘
The binary operators are described here in the order of precedence, the operators that have
the highest precedence are first. Operators in a group have the same precedence. Opera-
tors that have the same precedence associate from left to right.
! Operands are converted to the appropriate type, as required. An error is reported when a
! string operand cannot be converted to a number.
| Note that unlike REXX, numeric comparison operators do not perform string comparisons
when an operand is not numeric.
When the order of evaluation of the operands is not specified, it is indeed unspecified. Do
not rely on the order of evaluation, even though you can determine it easily enough. For
example, it is unpredictable whether an assignment in one operand has effect for the evalu-
ation of the other operand.
The multiplicative operators have the highest precedence of the binary operators.
* Multiplication.
/ Division.
% Integer division. Truncate the result to an integer.
// Remainder after division.
x//y == x-((x%y)*y)
Addition and subtraction.
+ Add.
- Subtract.
! The concatenate operator is alone in its group.
! || Concatenate the two string operands.
The relational operators. The result is 1 if the relation holds; otherwise it is 0.
| <        Test for the first operand being numerically less than the second one.
| <= ¬>    Test for the first operand being numerically less than or equal to the second
|          one.
| >        Test for the first operand being numerically greater than the second one.
| >= ¬<    Test for the first operand being numerically greater than or equal to the
|          second one.
| <<       Test for the first operand, as a string, being strictly less than the second one.
| <<= ¬>>  Test for the first operand, as a string, being strictly less than or equal to the
|          second one.
| >>       Test for the first operand, as a string, being strictly greater than the second
|          one.
| >>= ¬<<  Test for the first operand, as a string, being strictly greater than or equal to
|          the second one.
Equality operators. The result is 1 if the relation holds; otherwise it is 0.
| = Test for the first operand being numerically equal to the second one.
| ¬= Test for the first operand being numerically not equal to the second one.
| ==       Test for the first operand, as a string, being strictly equal to the second one.
| ¬==      Test for the first operand, as a string, being strictly not equal to the second
|          one.
The AND operator is alone in its group.
& First evaluate the left hand side. When it evaluates to 0, the result is set to 0
: and the second operand is not evaluated. The second operand is evaluated
only if the first one evaluates to a nonzero value. If the second operand then
evaluates to 0, the result is set to 0. The result is 1 only when both operands
evaluate to a nonzero value. Note that this behaviour is different from REXX;
it is similar to the C && operator.
The OR operator is alone in its group.
| First evaluate the left hand side. When it evaluates to a nonzero value, the
: result is set to 1 and the second operand is not evaluated. The second operand
is evaluated only if the first one evaluates to 0. If the second operand then
evaluates to a nonzero value, the result is set to 1. When both operands eval-
uate to 0, the result is 0. Note that this behaviour is different from REXX; it is
similar to the C || operator.
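For example, assuming fields a and b evaluate to numbers, the division below is
attempted only when the divisor is nonzero, because the right hand operand is not
evaluated otherwise (an illustrative sketch):
   ... (a¬=0 & b/a>2) 1 ...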
Term
A term represents a floating point number, the value of an identified field, the value of a
! counter, the result of a call to a built-in function, the contents of a member of a structure,
or the result of evaluating an expression in parentheses.
Term:
├──┬─┤ FPnumber ├─────────────────────────┬──┤
├─┤ FunctionCall ├─────────────────────┤
├─letter───────────────────────────────┤
! ├─identifier──┬──────────────────────┬─┤
! │ └─(──┤ Expression ├──)─┘ │
├─quotedString─────────────────────────┤
├─#number──────────────────────────────┤
! ├─#(─┤ Expression ├─)──────────────────┤
└─(──┤ Expression ├──)─────────────────┘
letter Specify the identifier for a field to reference the contents of that field.
| When used alone in assignment and PRINT and no type is specified, the
contents of the field are converted to the internal representation of a
! number. When no specification item has been declared for the letter, it
! is parsed as an identifier instead.
! identifier Specify the name of a member of a structure, optionally with a
! computed subscript. The identifier may specify a fully qualified member
! name by prefixing it with two periods; you may specify that the current
! qualifier is to be used by prefixing one period. When the structure
! contains nested structures, such structures can also be subscripted.
! While the subscript is shown as optional, it should be taken to mean that
! you must specify a subscript when referencing a member that is an
! array; and you may not specify a subscript for a scalar member.
! Note in particular that this usage does not apply to an inputRange that
! specifies a field in the input record (for that you are limited to constant
! subscripts).
! When the identifier resolves to a member of a structure that is a manifest
! constant, the value of the constant is used as if it were entered as a
! number. Thus vmcparm.vmcpsend will resolve to the number 2.
#number Specify the number of the counter to reference.
! #() Specify an expression to compute the number of the counter to refer-
! ence.
quotedString Specify a character string for strict comparison operators. The string
follows the REXX rules. That is, double occurrences of the enclosing
quote specify a single occurrence of the enclosing quote inside the
string; hexadecimal constants are denoted by an X after the closing
: quote. Binary constants are denoted by a B after the closing quote.
: '0', 'f0'x, and '11110000'b all designate the same character.
! The ambiguity in the syntax of counters, field identifiers, functions, and member names is
! resolved as follows:
! A single letter that has also been specified as an identifier for an input source is
! scanned as a field identifier.
! A number sign (#) that is followed by only digits and no letters or any of the special
! characters @#$! is scanned as a counter.
! An identifier followed by a left parenthesis is scanned as a function name when it
! represents one of the built-in functions or a user written function in a filter package.
! Otherwise it is scanned as a subscripted member. You must prefix a member name
! that is also a built-in or user written function by a period to select the active qualifier.
! Otherwise it is scanned as an identifier. Thus #0x is scanned as an identifier.
! Note that identifiers in expressions cannot contain a question mark because that is
! scanned as the conditional operator, but they are valid in input sources and output
! placements.
FPnumber:
            ┌───────┐
├──┬───┬────digit─┴───┬──────────────────┬──┬──────────────────┬──┤
   ├─+─┤              │    ┌─────────┐   │  │      ┌───────┐   │
   └─-─┘              └─.───┬───────┬┴───┘  └─E──┬───┬─digit─┴──┘
                            └─digit─┘            ├─+─┤
                                                 └─-─┘
The implementation also supports fractional numbers that begin with a period; for example,
.5. In this format, at least one digit must be specified after the period. To retain clarity,
this is not shown in the railroad track above.
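For example, these are all accepted as floating point numbers (an illustrative sketch):
   42   -1.5   .25   +0.5   2.5E-3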
| Any additional significant digits beyond the 31-digit precision are ignored.
Functions
! spec supports user written functions in type-2 filter packages, as well as built-in ones.
! Some, but not all, built-in functions can be replaced by functions in the PTF filter package;
! which ones is unspecified and may change over time. The search order is:
! 1. “Hardwired” function names.
! 2. Functions in the PTF filter package.
! Built-in Functions
! spec expressions use a number of built-in functions. They are described here in four
! sections:
! Functions that perform as the function of the same name in the REXX language.
! Functions that are particular to spec and return a Boolean value.
! Functions that are particular to spec and return a number.
! Functions that are particular to spec and return a string.
FunctionCall:
¡                      (1)  ┌─,────────────────┐
¡ ├──┤ FunctionName ├──(────┬────────────────┬─┴──)──┤
¡                           └─┤ Expression ├─┘
FunctionName:
├──word──┤
Note:
1 No blanks are allowed between the name and the opening parenthesis.
| While arguments are, in general, expressions, some functions require particular data, such
| as a single letter. This is noted in the syntax diagrams below.
! The functions are not described further (except for word, which is enhanced); refer to REXX
! documentation for details.
! ABBREV C2X LEFT SPACE WORDPOS
! ABS DATATYPE LENGTH STRIP WORDS
! BITAND DELSTR MAX SUBSTR XRANGE
! BITOR DELWORD MIN SUBWORD X2C
! BITXOR FIND OVERLAY TRANSLATE X2D
! CENTRE INDEX POS VERIFY
! COMPARE INSERT REVERSE WORD
! COPIES JUSTIFY RIGHT WORDINDEX
! C2D LASTPOS SIGN WORDLENGTH
| Notes:
| 1. datatype with one argument returns NUM when the first string can be assigned to a
| counter without error. Thus, it returns NUM for many strings that cannot be processed
| by the D2C conversion on a specification item.
| 2. Several conversion functions by the same name as a REXX function are defined as
| specific to spec because they either perform a slightly different function or they
| support fewer operands.
| 3. max and min with no or two or more arguments are described below.
| 4. word is also described below, because it supports a third argument to make it parallel
| to field.
| 5. spec has no concept of NUMERIC DIGITS, which may cause the numeric result of a
| function to be converted to string differently from REXX.
break The argument must be a single literal letter, which specifies a field
identifier that must have been associated with an input field previously in
the specification list. break() returns 1 if a break has been established
on the level specified. On the runin cycle, no break is reported if the
field is null. A break is always reported on the runout cycle.
eof A niladic function. eof() returns 1 during the runout cycle. It returns
0 during all other cycles.
¡ exact The argument must evaluate to a number. Often it is just a term that
¡ refers to a counter. The result is 1 if no truncation has occurred in the
¡ evaluation of the expression. The estimate is conservative, that is, the
¡ result may be zero even when the argument is in fact exact.
first A niladic function. first() returns 1 during the runin cycle. It returns
0 during all other cycles.
! present Return 1 when the field is present in the record and 0 otherwise.
! length() for the contents of a field is 0 both when the field is present,
! but null, and when the field is not present.
NumericFunction:
¡ ├──┬─AVERAGE(letter)────────────────┬──┤
¡ ├─C2D(─┤ String ├─)──────────────┤
! ├─C2F(─┤ String ├─)──────────────┤
! ├─C2U(─┤ String ├─)──────────────┤
! ├─DELTA(identifier)──────────────┤
¡ ├─MAX(─┬────────────┬─)──────────┤
¡ │ └─┤ MaxMin ├─┘ │
¡ ├─MIN(─┬────────────┬─)──────────┤
¡ │ └─┤ MaxMin ├─┘ │
¡ ├─NUMBER()───────────────────────┤
! ├─PRIMARY(identifier)────────────┤
! ├─SECONDARY(identifier)──────────┤
¡ ├─SQRT(─┤ CondExpr ├─)───────────┤
¡ ├─STDDEV(─┤ IdOrCondExpr ├─)─────┤
¡ ├─STDERRMEAN(─┤ IdOrCondExpr ├─)─┤
¡ ├─VARIANCE(─┤ IdOrCondExpr ├─)───┤
! ├─X2D(─┤ String ├─)──────────────┤
! ├─X2F(─┤ String ├─)──────────────┤
! └─X2U(─┤ String ├─)──────────────┘
IdOrCondExpr:
¡ ├──┬─letter─────────────────────────────────────────┬──┤
¡ └──┤ CondExpr ├─ ,─┤ CondExpr ├─ ,─┤ CondExpr ├──┘
MaxMin:
¡                    ┌─,────────────┐
¡ ├──┤ CondExpr ├──,───┤ CondExpr ├─┴──┤
¡ average The argument must be a single literal letter, which specifies a field
¡ identifier that must have been associated with an input field previously in
¡ the specification list. The contents of the specified field must evaluate to
¡ a number in all input records. The number returned is the average over
¡ the input records seen so far. The average of no records is zero.
¡ c2d The argument is a string expression. The string is interpreted as a
¡ binary number in two’s complement notation. The maximum number of
¡ significant input bits is 108 (which is not an integral number of bytes).
¡ The result is a number that expresses the binary input number in
¡ decimal.
! c2f The argument is a string expression of two to sixteen bytes. The string
! is interpreted as the internal representation of a System/360 hexadecimal
! floating point number.
! c2u The argument is a string expression. The string is interpreted as an
! unsigned binary number. The maximum number of significant input bits
! is 108 (which is not an integral number of bytes). The result is a
! number that expresses the binary input number in decimal.
! delta A convenience for primary(x)-secondary(x). One of the terms is
! zero on the runin and on the runout cycle. delta implies both SELECT
! FIRST and SELECT SECOND.
¡ max None, two, or more arguments must be specified. All arguments must
¡ evaluate to numbers. The result is the largest number in the argument
| list. When no arguments are specified, max() returns the largest (most
| positive) number that a counter can store.
¡ min None, two, or more arguments must be specified. All arguments must
¡ evaluate to numbers. The result is the smallest number in the argument
| list. When no arguments are specified, min() returns the smallest (most
| negative) number that a counter can store.
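For example, this expression clamps the value of counter 1 to the range 0 through 100
(an illustrative sketch):
   ... (max(0, min(#1, 100))) 1.3 right ...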
¡ number number() is niladic. The result is the number of the current record,
¡ starting with 1 on the runin cycle, if any is taken. During the runout
¡ cycle number() returns the total number of records seen. That is, it is
¡ not incremented during the runout cycle, as is the NUMBER data source.
! primary Return the numeric contents of the specified member from the first
! reading. The member must have a type that is D or F. primary implies
! SELECT FIRST.
! secondary Return the numeric contents of the specified member from the second
! reading. The member must have a type that is D or F. secondary
! implies SELECT SECOND.
¡ sqrt The argument must evaluate to a number that is zero or positive. The
¡ result is the square root of the number.
¡ stddev For the monadic stddev() the argument must be a single literal letter,
¡ which specifies a field identifier that must have been associated with an
¡ input field previously in the specification list. The contents of the
¡ specified field must evaluate to a number in all input records. For the
¡ triadic stddev() the arguments must all evaluate to numbers. The first
¡ argument is considered to be the sum of a series of numbers (s); the
¡ second argument is considered to be the sum of the squares of the series
¡ (q) and the third argument is considered to be the count of observations
¡ (n). stddev(f) evaluates the triadic stddev(s, q, n) on the values
¡ seen so far in the field f.
¡ stddev:=sqrt(variance(s, q, n))
¡ stderrmean For the monadic stderrmean() the argument must be a single literal
¡ letter, which specifies a field identifier that must have been associated
¡ with an input field previously in the specification list. The contents of
¡ the specified field must evaluate to a number in all input records. For
¡ the triadic stderrmean() the arguments must all evaluate to numbers.
¡ The first argument is considered to be the sum of a series of numbers
¡ (s); the second argument is considered to be the sum of the squares of
¡ the series (q) and the third argument is considered to be the count of
¡ observations (n). stderrmean(f) evaluates the triadic stderrmean(s,
¡ q, n) on the values seen so far in the field f.
¡ stderrmean:=stddev(s, q, n)/sqrt(n-1)
¡ variance For the monadic variance() the argument must be a single literal
¡ letter, which specifies a field identifier that must have been associated
¡ with an input field previously in the specification list. The contents of
¡ the specified field must evaluate to a number in all input records. For
¡ the triadic variance() the arguments must all evaluate to numbers.
¡ The first argument is considered to be the sum of a series of numbers
¡ (s); the second argument is considered to be the sum of the squares of
¡ the series (q) and the third argument is considered to be the count of
¡ observations (n). variance(f) evaluates the triadic variance(s, q,
¡ n) on the values seen so far in the field f.
¡ variance:=q/n-(s/n)**2
! x2d Similar to c2d, except that the input is an unpacked hexadecimal repre-
! sentation of the signed two’s complement binary number.
! x2f Similar to c2f, except that the input is an unpacked hexadecimal repre-
! sentation of the hexadecimal floating point number. (Two different
! meanings of “hexadecimal”.)
! x2u Similar to c2u, except that the input is an unpacked hexadecimal repre-
! sentation of the unsigned binary number.
¡ StringFunction:
! ├──┬─FIELD(─┤ String ├─,snumber─┬───────────────┬─)─┬──┤
! │ └─,──┤ String ├─┘ │
! ├─RECORD(─┬────────────────────────┬─)───────────┤
! │ └─snumber──┬───────────┬─┘ │
! │ └─,──number─┘ │
! ├─STORAGE(─┤ String ├─,─┤ number ├─)─────────────┤
¡ ├─STRING(─┤ String ├─)───────────────────────────┤
! ├─SUBSTITUTE(─┤ Substitute ├─)───────────────────┤
! ├─TYPE(─┬─letter─────┬─)─────────────────────────┤
! │ └─identifier─┘ │
! └─WORD(─┤ String ├─,snumber─┬───────────────┬─)──┘
! └─,──┤ String ├─┘
¡ Substitute:
! ├──┤ String ├──,──┤ String ├──,──┬────────────┬──,──┬────────┬──┤
! └─┤ String ├─┘ └─number─┘
¡ String:
¡ ├──┬─letter─────────────┬──┤
¡ ├─quotedString───────┤
¡ └─┤ StringFunction ├─┘
! The string argument may be a literal letter which refers to the identified input range; an
! identifier that specifies a member; a quoted string literal; or the result of a string function.
! field Return the nth field of the first string. The number must be nonzero.
! The field number is counted from the end of the string when the number
! is negative. If present, the second string argument must have length 1; it
! specifies the field separator; the default is X'05', horizontal tab.
! record Return the current input record or a substring of it. If present, the first
! number specifies the starting position of the result within the record; it
! must be nonzero. When this number is negative, the position is relative
! to the end of the record. The second number specifies the maximum
! number of columns to include in the result. The default is to the end of
! the record. The second number must be zero or positive. The result is
! never padded.
! storage Return the contents of virtual storage at the specified address.
! The first argument is the address in printable hexadecimal; the second
! argument is a number that must be zero or positive.
! pipe spec eof print storage('230', 32) 1 | console
! VM Conversational Monitor System
! Ready;
¡ string The result is the argument string without modification, syntactically cast
¡ as a string. Thus, string("1") returns a string, not a number.
¡ string() may be needed in conjunction with the conditional operator to
¡ cast one of the operands into a string.
! substitute Return a modified string where occurrences of one substring are replaced
! by another, much as done by the XEDIT change command. The first
! argument is the string to be changed. The second argument is the
! substring to be replaced; it must be at least one character. The third
! argument is the replacement string, the default is an empty string. The
! fourth argument, if present, must be positive. It specifies the maximum
! number of substitutions to perform; the default is infinity.
! type Return the data type associated with the field (if it is a member of a
! structure) or member, or a single blank.
! word Return the nth word of the first string. The number must be nonzero.
! The word number is counted from the end of the string when the
! number is negative. If present, the second string argument must have
! length 1; it specifies the word separator; the default is X'40', a blank.
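For example (an illustrative sketch):
   word('alpha beta gamma', -1)       returns gamma
   word('alpha/beta/gamma', 2, '/')   returns beta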
Pictures
A picture is a pictorial description of the desired formatting of a numeric quantity. It is
specified after the keyword PICTURE.
A picture contains a character for each column of the formatted field, except that the V
character does not represent an output character. Thus, the output fields are fixed in
length. Case is ignored in pictures.
The picture characters are a subset of the ones defined for PL/I. They are
S+-$9Z*Y,./BVE. They comprise five groups: Sign, digit select, punctuation, implied
decimal point, and exponent.
A leading zero is replaced with a blank when the significance latch is off and the picture
character is the letter “Z”. The significance latch is off at the beginning of the picture. It
is turned on by a nonzero digit or the picture character 9. It is forced off again by the E
pattern character.
Sign Characters
S (The letter S.) Insert the sign (+ or -) of the number. Zero is consid-
ered positive.
+ Insert a + if the number is zero or positive; a blank if the number is
negative.
- Insert a - if the number is negative; a blank if the number is not nega-
tive.
$ (The dollar sign.) Insert the currency symbol irrespective of the sign of
the number. (This is the pound sterling symbol on a UK terminal.)
A sign character can be stand-alone or part of a drifting sign. A stand-alone sign character
occupies the column where it is specified. A drifting sign is specified by successive
columns containing the particular sign character, possibly with interspersed punctuation
characters. The sign occupies the column before the one where the significance latch is
turned on. Prior columns contain a blank. The first column of a drifting sign can never
contain a digit.
Digit Selection
9 Insert a digit. The significance latch is set unconditionally.
Z Insert a digit, suppressing leading zeros. The position contains a blank
when the digit is zero and the significance latch is off. It contains a
digit if the digit is nonzero or the significance latch is on. A nonzero
digit sets the significance latch on.
* Insert cheque protection. The position contains an asterisk when the
digit is zero and the significance latch is off. It contains a digit if the
digit is nonzero or the significance latch is on. A nonzero digit sets the
significance latch. That is, the asterisk is similar to Z, except that it
inserts an asterisk rather than a blank when it processes a leading zero.
Y Insert a blank when the digit is zero, and the digit otherwise. A nonzero
digit sets the significance latch. Thus, Y is similar to Z, but it does not
test the significance latch.
Punctuation
, Insert a comma if the significance latch is set; insert a blank otherwise.
. Insert a period if the significance latch is set; insert a blank otherwise.
/ Insert a forward slash if the significance latch is set; insert a blank other-
wise.10
B Insert a blank.
Exponent
E Insert the character “E”. This marks the beginning of the exponent.
The exponent field can contain only an optional sign, which cannot drift,
followed by digit selectors Z (suppress zero) or 9.
10 The / was no doubt provided to support the archaic British currency system of pounds, shillings, and pence, which was still in use when the
PL/I compiler (from which the picture definition is borrowed) was designed. The IBM 407 Accounting Machine could be built to have certain
counters doing duodecimal (base 12) arithmetic to support currency computations. This feature was, no doubt with Shakespearean allusions,
called Twelfth Counters. Interestingly, the 407 seems to have no feature for binary arithmetic (which is required when summing shillings);
presumably this was possible with standard wiring.
General
The default picture is ----------9, which is eleven columns with a drifting minus sign.
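For example, the default picture formats numbers in this way (an illustrative sketch;
b represents a blank):
   42    becomes   bbbbbbbbb42
   -42   becomes   bbbbbbbb-42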
The default command environment of a REXX program running as a pipeline filter proc-
esses pipeline commands, described in alphabetical order in the following sections. This
list is an overview by function.
The following pipeline commands are also available to the pipcmd built-in program and
the underlying macro PIPCMD: ADDPIPE, ADDSTREAM, CALLPIPE, COMMIT, EOFREPORT,
ISSUEMSG, MAXSTREAM, MESSAGE, OUTPUT, RESOLVE, REXX, SELECT, SETRC, SEVER,
SHORT, STAGENUM, STREAMNUM, STREAMSTATE, SUSPEND. Whether it makes sense to use
them all with pipcmd is another matter; in particular, COMMIT to a positive number will
cause a stall.
The following pipeline commands are available only through the REXX interface:
BEGOUTPUT, GETRANGE, NOCOMMIT, PEEKTO, READTO, SCANRANGE, SCANSTRING, and
STREAMSTATE ALL.
Return code -7 on a pipeline command means that the pipeline command processor cannot
resolve the command. Refer to “Return Codes -3 and -7” on page 116.
Operation: The pipeline specification is added to the current set of pipelines. Its stages
run in parallel with the stage issuing ADDPIPE, independent of the commit level of the stage
that issues the ADDPIPE pipeline command.
Connectors in the pipeline specification designate how the stage’s current streams are
modified. All streams mentioned in connectors are disconnected from the stage.
The stream is connected to the new pipeline when the connector is before a pipeline
and the second component of the connector is INPUT, or the connector is after the
pipeline and the second component is OUTPUT.
/* Process input and output independently */
"addpipe *.input:|xlate upper|> output file a"
The stage cannot reconnect to a stream that has been transferred to another pipeline in
this way.
The connection is saved on a stack for the stream when the connector is after a
pipeline and the second component is INPUT, or the connector is before the pipeline
and the second component is OUTPUT. The new pipeline is connected to the stream
instead of the saved connection. End-of-file on the new connection sets return code
12 in a READTO or PEEKTO pipeline command. SEVER restores the stacked connection.
/* Read parameter file */
"addpipe < parm file | *.input:" /* Connect input to file */
"nocommit" /* Disable automatic commit */
"readto line" /* Read first line of file */
do while RC=0 /* Process all lines */
   /* Process line */
   "readto line" /* Read next line */
end
"sever input" /* Reinstate input file */
"commit 0" /* See if other stages are OK */
if RC¬=0 then exit 0 /* Exit quietly if not */
A pipeline is inserted in front of (or after) the stream when a side of a stream is
referenced both at the beginning and end of a pipeline, for instance:
"addpipe *.input: |deblock net|*.input:"
"addpipe *.output:|xlate upper|*.output:"
Return Codes:
0 The pipeline specification has been added to the running set. The stage issuing
ADDPIPE runs in parallel with the stages added to the pipeline set.
± The pipeline specification has one or more syntax errors. Error messages are issued.
ADDSTREAM—Create a Stream
┌─BOTH───┐
──ADDSTREAm──┼────────┼──┬──────────┬──
├─INput──┤ └─streamID─┘
└─OUTput─┘
Operation: An unconnected stream is added to the stage on the side(s) specified by the
first keyword. If present, the stream identifier is set; the stream has no identifier by
default. Use MAXSTREAM to discover the number of the stream just added. Use ADDPIPE
to connect a stage to the stream.
Return Codes:
0 The stream(s) are added to the stage.
-112 Too many arguments. This may be caused by a misspelled keyword that is assumed
to be a stream identifier.
-174 A stream already exists with the stream identifier specified.
Examples:
"addstream output errs"
"addpipe *.output.errs:|> error file a"
Remember to select the error stream before using OUTPUT to write to it. Select the
primary output stream when writing “normal” output.
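For instance, a stage might write a diagnostic to the stream added above (a sketch; the record text is illustrative):
'select output errs'                /* Select the error stream */
'output Unexpected record format'   /* Write the diagnostic */
'select output 0'                   /* Revert to the primary output stream */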
Operation: The argument string is stored, but is otherwise ignored; the REXX interface
then enters the implied output mode.
Subsequent pipeline commands are treated as output data rather than commands. That is,
they are processed by the OUTPUT pipeline command; this includes implied commit
processing. The complete command is written to the currently selected output stream.
A command that contains exactly the string specified (or is null when no string was
specified) terminates the implied output mode and the command interface reverts to its
normal operation; the terminating command is discarded.
Examples: This example writes two lines of output. Note that the lines are still processed
as REXX expressions; in practice this means that literals must be enclosed in quotes.
'begoutput'
'Field 1 Field 2'
'------------ ---------------------'
'' /* Null command to terminate */
'callpipe *: | spec 37.14 1 89-* 16 | *:' /* Command mode active */
Notes:
1. BEGOUTPUT can be issued from a REXX program only; it is not available to pipcmd or
the underlying PIPCMD macro.
Operation: The subroutine pipeline may be connected to the stage’s input and output
streams. The stage issuing CALLPIPE is suspended until all stages of the subroutine
pipeline have returned. The stage that issues CALLPIPE commits to the highest commit level of
the subroutine pipeline while it waits for it to complete. The subroutine pipeline can run
on a commit level that is lower than the caller’s. A short-through pipeline forces a commit
to level 0 to avoid a stall.
Connectors in the subroutine pipeline connect to streams in the stage issuing the CALLPIPE
pipeline command until end-of-file is transferred across the connection; the connection to
the calling stage is restored when end-of-file is transmitted from the subroutine.
Return Codes:
0 The pipeline specification is syntactically correct. All stages of the pipeline return
code 0.
± There is a syntax error in the pipeline specification or a stage of the subroutine
pipeline gives a return code that is not zero.
Examples: This subroutine pipeline takes a literal, makes it upper case, and passes it on
the currently selected output stream.
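A sketch of such a subroutine pipeline, assuming the variable string holds the literal text:
'callpipe literal' string '|xlate upper|*:'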
To position the input stream at the next line with a comma in column 1 (or read to
end-of-file):
'callpipe *:|tolabel ,|hole'
'peekto'
if RC=12 then exit /* End-of-file */
Notes:
1. A pipeline stall is possible if all these conditions are satisfied:
The stage issuing CALLPIPE is on a negative commit level.
The subroutine pipeline is connected to both an input stream and an output
stream.
The subroutine completes without committing to level 0 and without running a
program that commits to level 0. A stage that issues the SHORT pipeline
command without committing satisfies this condition.
The stage should commit to level 0 before issuing a subroutine pipeline of this nature.
Operation: Commit to the level specified. When the number is less than or equal to the
level the stage is already committed at, the return code is the current aggregate return code
for the pipeline specification.
The stage is suspended when the level requested is higher than the level the stage is
currently committed at. The stage is suspended until all stages in the pipeline specification
(and the caller, if the stage is in a subroutine pipeline) have committed at least to the level
the stage requests. The return code is the aggregate return code when all stages are
committed to the level specified.
REXX programs begin at commit level -1. The interface commits to level 0 when the stage
reads or writes unless a NOCOMMIT pipeline command is issued first.
Return Codes:
0 All stages that have returned did so with return code zero.
-2147483648 The arguments are in error. Message 58, 112, or 113 is issued.
± A stage has returned with the return code.
Example: Use COMMIT to test the return code of other REXX programs and those built-in
programs that are committed to start on level -1 or before. You can abandon the program
if the return code is not zero. In the second example below, console is not started because
the first stage returns with code 112. (TISSUE REXX is shown on page 731.)
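A minimal sketch of the pattern:
'commit 0'              /* Wait for all stages to commit */
if RC/=0 then exit RC   /* Abandon if another stage failed */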
CURRENT The original CMS/TSO Pipelines behaviour is desired. Stream events are
ignored when they do not relate to the currently selected stream at the
time of I/O.
ALL Return code 8 is to be reported on PEEKTO and SELECT ANYINPUT when
all output streams are severed.
ANY Return code 8 is to be reported on PEEKTO and SELECT ANYINPUT when
all output streams are severed.
Return code 4 is to be reported on OUTPUT, PEEKTO, and SELECT
ANYINPUT when any stream is severed. For OUTPUT, return code 4 is
reported only if the record was not seen by the following stage.
Operation: The return codes reported by OUTPUT, PEEKTO, READTO, and SELECT ANYINPUT
are modified, depending on the specified keyword.
Return Codes:
0 End-of-file reporting is set as specified. At least one input stream and one output
stream are connected.
8 No input stream is connected or no output stream is connected (or no stream at all is
connected).
-111 The word is not a recognised option.
-112 The argument string is more than one word.
-113 The argument string is empty.
Example: Use EOFREPORT ALL in stages that should propagate end-of-file. The stage will
stop waiting for an input record when the output stream is severed.
Use EOFREPORT ANY in multistream stages that need to propagate end-of-file immediately.
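A sketch of a pass-through filter using EOFREPORT ALL; it exits quietly when either
side reaches end-of-file:
'eofreport all'
signal on error
do forever
'peekto line'           /* Also ends when all output streams are severed */
'output' line
'readto'
end
error:
exit RC*(wordpos(RC, '8 12')=0)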
──GETRANGE──word──┬─VARiable─┬──word──┬────────┬──
└─STEM─────┘ └─string─┘
Syntax: The first word specifies the name of a variable that contains the token
representing the inputRange. This variable must have been set by a previous SCANRANGE
pipeline command. It must not be modified by the REXX program.
The second word is a required keyword. It specifies how the input range should be
returned to the program.
The third word specifies the name of the variable or the stem to receive the input range.
The remaining string after exactly one blank is the record from which to extract the
contents of an input range.
Operation: When VARIABLE is specified, the third word contains the name of a single
variable, which is set to a substring of the input record, as determined by the contents of
the token.
The remainder of this section discusses the operation when STEM is specified. The result is
stored into a stemmed array; the third word contains the stem to use. The stem would
normally end with a period.
When the input range is not present in the record (as opposed to its being of length zero),
the compound variables are set as follows:
stem0 1
stem1 The entire input record.
When the input range is present in the record, the compound variables are set as follows:
stem0 3
stem1 The part of the record up to the beginning of the input range.
stem2 The contents of the input range.
stem3 The balance of the record.
Example: This example program writes the reverse of a substring of the input record:
/* Getrange sample */
parse arg inputRange
'scanrange required field .' inputRange /* Build the token for the range */
if RC/=0 then exit RC /* Messages are already issued */
'eofreport all'
signal on error
do forever
'peekto line'
'getrange field var range' line
'output' reverse(range)
'readto'
end
error:
exit RC*(wordpos(RC, '8 12')=0)
Notes:
1. GETRANGE can be issued from a REXX program only; it is not available to pipcmd or
the underlying PIPCMD macro.
2. The first and the third word are names of variables, but string represents a string.
Syntax: The first word of the arguments to ISSUEMSG is the number of the message to
issue. The second word should be six characters for the module identifier. Delimited
strings are optional after the two required arguments.
Operation: Issue the message with the number specified. The message text is obtained
from the internal message text table. There must be a delimited string for each substitution
in the message.
See also: MESSAGE. Use MESSAGE to issue messages where you include the message
identifier (component, module, number, and severity) as well as the substituted message
text.
Return Codes:
0 Message 0 is issued.
+ The number of the message issued.
-58 The first word is not a positive number or zero.
-60 A delimited string is not properly delimited.
-113 There are fewer than two words in the argument string.
Example: This example program issues message 112 if the argument string is not blank.
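A sketch of such a program; MYFILT is a hypothetical six-character module identifier:
parse arg string
if string/=='' then
do
'issuemsg 112 MYFILT /'string'/' /* Report the excess arguments */
exit 112
end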
Notes:
1. Refer to Chapter 26, “Message Reference” on page 746 for a list of the messages in
the built-in message text table.
Operation: The return code is set to the highest stream number available on the side
specified by the keyword. When a stage starts, it has as many input streams as output
streams. Streams can be added to one side with ADDSTREAM.
Return Codes:
0 The primary stream is the only stream.
+ The largest number allowed in a SELECT pipeline command.
-112 The argument string is more than one word.
-163 No keyword is specified.
-164 The keyword is not valid.
Example: A program uses MAXSTREAM to test how many streams it has available.
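For instance (a sketch), a stage that requires a secondary output stream might test:
'maxstream output'    /* Highest output stream number */
if RC<1 then exit 4   /* Sketch: refuse to run without a secondary output stream */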
MESSAGE—Issue a Message
──MESSAGE──string──
Syntax: The argument string to MESSAGE is a substituted message. It should have the
standard message identifier in the first ten positions.
See also: ISSUEMSG issues a message by number using the message text tables built into
CMS/TSO Pipelines.
Return Code: 0.
Example: The message identifier is suppressed when EMSG is set to TEXT so as to display
only the text of the message.
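A sketch; the ten-character message identifier shown is hypothetical:
'message ABCXYZ042E The input file is not available.'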
Operation: The REXX interface does not commit for level 0 on subsequent I/O commands.
To have any effect, NOCOMMIT must be issued before any READTO, PEEKTO, OUTPUT, or
SELECT ANYINPUT pipeline commands are issued; otherwise the interface has already
committed to level 0.
Return Code:
0 The interface will not commit automatically.
4 A NOCOMMIT or COMMIT pipeline command has already been issued.
8 A read or write pipeline command has already committed the stage to level 0.
-112 NOCOMMIT found operands.
Example: This example shows how to use NOCOMMIT and ADDPIPE to read a file on
commit level -1. The return code is the number of lines in the file. This causes the
remainder of the pipeline to be abandoned because the REXX program returns 11 without
committing to level 0. Note that an equivalent CALLPIPE subroutine pipeline setting the
variable directly commits the caller to level 0.
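A sketch of the program described, assuming the file PARM FILE A:
'nocommit'                       /* Remain on commit level -1 */
'addpipe < parm file a|*.input:' /* Connect input to the file */
lines=0
'readto'                         /* Read the first line, if any */
do while RC=0
lines=lines+1
'readto'
end
exit lines                       /* Level 0 is never reached */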
Notes:
1. NOCOMMIT can be issued from a REXX program only; it is not available to pipcmd or
the underlying PIPCMD macro.
OUTPUT—Write a Line
──OUTPUT──┬────────┬──
└─string─┘
Operation: When issued from a REXX program, the stage commits to level 0 if the stage
is not already committed unless NOCOMMIT has been issued to disable the implied commit
operation. The argument string is written to the currently selected output stream. The
record written begins after the blank ending the command verb; the record can have
leading blanks. A null record is written when the string is omitted.
Return Codes:
0 The line is read by the stage connected to the output stream.
4 EOFREPORT ALL is in effect and a stream event has occurred before the consumer
peeked at the record (or read it). The program should process the stream event and
then reissue the OUTPUT command.
12 The output stream is not connected.
-4095 The pipeline is stalled. All input streams and output streams are severed.
± The stage connected to the stream issued the pipeline command SETRC to set a return
code.
Example:
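A minimal sketch writing one record and checking the result:
'output Record one'   /* Write a literal record */
if RC/=0 then exit RC /* Consumer set a return code, or the stream is severed */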
Syntax: When present, the argument is the name of the variable that PEEKTO should set.
The word must represent a valid name for a REXX variable as it would be written in a
REXX program.
Operation: The stage commits to level 0 if the stage is not already committed unless
NOCOMMIT has been issued to disable the implied commit operation. The next record on
the currently selected input stream is copied into the variable. The record remains in the
pipeline; use READTO to read or discard the line. Use PEEKTO without argument to test if
the input stream is at end-of-file without setting a variable to the contents of the next
record.
Return Codes:
0 The next record is available.
4 EOFREPORT ALL is in effect and a stream event occurred that did not cause return
codes 8 or 12 to be set.
8 EOFREPORT ALL or EOFREPORT ANY is in effect. There is no longer a connected
output stream.
12 The stream is at end-of-file. If a word is specified, the variable is dropped.
-4095 The pipeline is stalled.
Example: PEEKTO is used after a subroutine pipeline to see if there are more input data to
process.
The generic filter to pass records from the input to the output without delay,
propagating end-of-file both forwards and backwards:
do forever
'peekto line' /* Look for next input line */
/* Process line here */
'output' line /* Pass it to the output */
'readto' /* Consume the record */
end
Notes:
1. PEEKTO can be issued from a REXX program only; it is not available to pipcmd or the
underlying PIPCMD macro.
2. If EOFREPORT ALL or EOFREPORT ANY is in effect, subsequent PEEKTO pipeline
commands without intervening READTO pipeline commands will set return codes 4 or
8 when suitable stream events have occurred since the previous PEEKTO, even when an
input record is available.
Syntax: When present, the argument is the name of the variable that READTO should set.
The word must represent a valid name for a REXX variable as it would be written in a
REXX program.
Operation: The stage commits to level 0 if the stage is not already committed unless
NOCOMMIT has been issued to disable the implied commit operation. The next record on
the currently selected input stream is copied into the variable and discarded.
No variable is set when the word is omitted; a record is discarded from the input stream if
one is available.
Return Codes:
0 The next record is available.
12 The stream is at end-of-file. If specified, the variable is dropped.
-4095 The pipeline is stalled.
Notes:
1. READTO can be issued from a REXX program only; it is not available to pipcmd or the
underlying PIPCMD macro.
Operation: The word is looked up in the directories for built-in programs and attached
filter packages. When positive, the return code is the entry point address.
Return Codes:
0 The name is not resolved as a built-in program or a program in an attached filter
package.
+ The entry point address.
-42 The argument is missing.
-112 There is more than one word in the argument string.
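For instance (a sketch), to test whether xlate resolves to a built-in program:
'resolve xlate'
if RC>0 then say 'xlate entry point at' RC /* Positive RC is the address */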
Syntax: The argument string to REXX is the same format as for the rexx built-in program.
Operation: The program is called as a pipeline filter. It can access the caller’s streams
while it runs. The argument string is passed as the first argument string to the called
program. The return code is the return code from running the program or the number of a
message issued because the program could not be found.
Return Codes: The corresponding return code is set if message 21, 22, 40, 113, 122, 381,
or 382 is issued. The return code is the return code from the program when the interface
does not reflect an error.
Example: (PIPPCEND REXX is shipped in PIPGEN PACKAGE; it generates an END card for an
object module.)
Notes:
1. Using the pipeline command REXX in a REXX filter is equivalent to calling an external
function in a command procedure. Variables are not shared between the caller and the
called program.
2. The pipeline command environment is not available in an external function called
from a REXX filter.
3. The function performed by REXX is also available with CALLPIPE where all connected
streams are passed to a subroutine pipeline consisting of the one stage; REXX is
retained for compatibility with the past.
SCANRANGE—Parse an inputRange
SCANRANGE can be used by the filter programmer to parse an argument string containing
an inputRange specification in the same way that CMS/TSO Pipelines built-in programs
scan their arguments when an inputRange can be specified.
──SCANRANGe──┬─OPTional─┬──token──┬─rest─┬──┬────────┬──
└─REQuired─┘ └─.────┘ └─string─┘
OPTIONAL The argument string need not begin with a syntactically correct
inputRange; the range 1-* is assumed instead.
REQUIRED The argument string must begin with a syntactically correct inputRange.
The second word specifies the name of a variable that will be set to a token representing
the inputRange. The variable can be used in subsequent GETRANGE pipeline commands to
select that input range. The contents of this token are unspecified; other tokens can
represent other inputRanges. This variable must not be modified by the REXX program.
The third word specifies the name of the variable to receive the residual string after the
inputRange specification has been scanned from the beginning of the argument string.
Specify a period (.) to discard the residual string.
The remaining string is the argument string from which to scan the specification of an
inputRange. Leading blanks are ignored.
| The default word and field separators are not remembered between instances of
| SCANRANGE; the defaults apply to each invocation. Any qualifier set in the inputRange
! specification is discarded when it has been parsed.
Return Codes: Error messages have been issued when a nonzero return code is set. The
program should deallocate any resources it may have allocated and exit with the return
code.
Example: SCANRANGE is typically used at the beginning of a REXX filter where it scans its
arguments string. For example, to scan an argument string that may specify one input
range but no other arguments:
parse arg argstring
'scanrange optional range_definition rest' argstring
If RC¬=0
Then exit RC /* Messages are already issued */
If rest¬=''
Then call err 112, rest /* Too much */
To scan an argument string that must specify two input ranges and no other arguments:
parse arg argstring
'scanrange required first rest' argstring
If RC¬=0
Then exit RC
'scanrange required second rest' rest
If RC¬=0
Then exit RC
If rest¬=''
Then call err 112, rest /* Too much */
Notes:
1. SCANRANGE can be issued from a REXX program only; it is not available to pipcmd or
the underlying PIPCMD macro.
2. The token representing the input range remains valid only as long as the stage is
running. Once the stage has terminated, the contents of the token are stale. Using it
(for example, in some other REXX filter) may cause random ABENDs.
3. The second and the third word are names of variables, but string represents a string.
4. To determine the length of the string scanned, subtract the length of the residual text
from the length of the argument string.
──SCANSTRIng──word──┬─rest─┬──┬────────┬──
└─.────┘ └─string─┘
Syntax: The first word specifies the name of a variable that will be set to the contents of
the delimited string.
The second word specifies the name of the variable to receive the residual string after the
delimitedString specification has been scanned from the beginning of the argument
string. Specify a period (.) to discard the residual string.
The remaining string is the argument string from which to scan the specification of a
delimitedString. Leading blanks are ignored.
Return Codes: Error messages have been issued when a nonzero return code is set. The
program should deallocate any resources it may have allocated and exit with the return
code.
Example:
parse arg argstring
'scanstring word rest' argstring
The variable argstring could contain /abc/ or xf1f2f3; the variable word would then
be set to abc or 123, respectively.
Notes:
1. SCANSTRING can be issued from a REXX program only; it is not available to pipcmd or
the underlying PIPCMD macro.
2. The first and the second word are names of variables, but string represents a string.
3. To determine the length of the string scanned, subtract the length of the residual text
from the length of the argument string.
SELECT—Select a Stream
┌─BOTH───┐
──SELECT──┬─┼────────┼──stream─┬──
│ ├─INput──┤ │
│ └─OUTput─┘ │
└─ANYINput───────────┘
Syntax: The argument string to SELECT has an optional keyword followed by a stream
identifier, or a keyword.
Operation: When the operation is SELECT ANYINPUT from a REXX program, the stage
commits to level 0 if the stage is not already committed unless NOCOMMIT has been issued
to disable the implied commit operation. The stream is selected on the side(s) specified.
Use STREAMNUM to discover the number of the input stream selected when any input
stream is desired.
Return Codes:
0 The stream(s) are selected.
4 When ANYINPUT is not specified, the stream does not exist; no message is issued.
4 EOFREPORT ALL is in effect and a stream event occurred that did not cause return
codes 8 or 12 to be set.
8 EOFREPORT ALL or EOFREPORT ANY is in effect. There is no longer a connected
output stream.
12 The pipeline command is SELECT ANYINPUT; all input streams are at end-of-file.
-112 There are too many words in the argument string.
-168 OUTPUT or BOTH is used with ANY.
Operation: In this description, the stage issuing SETRC is called the consumer stage. The
stage connected to the currently selected input stream is called the producer stage.
It is verified that the stage connected to the currently selected input stream (the producer
stage) is waiting for an output record to be read by the consumer stage on that particular
stream. You can be sure of this only after a PEEKTO pipeline command and before the
next READTO pipeline command.
The argument is stored as the return code that the producer stage sees when the consumer
stage issues the next READTO pipeline command to consume the record.
Return Codes:
0 The return code is set.
-4 The currently selected input stream is not connected (the stage is first in a pipeline)
or the stage connected to the currently selected input stream has selected a different
output stream or it is not waiting for an OUTPUT pipeline command to complete. No
message is issued.
-58 The first word of the argument string is not a number.
-112 There is more than one word in the argument string.
-113 The argument string is empty.
Example: A REXX program to issue CP commands may feed the return code back using
SETRC.
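A sketch of such a program; issuing the command is shown with ADDRESS COMMAND and CP,
which assumes CMS:
do forever
'peekto command'          /* Wait for the next command */
if RC/=0 then leave
address command 'CP' command
crc=RC                    /* Remember the command's return code */
'setrc' crc               /* The producer sees crc on its OUTPUT */
'readto'                  /* Now consume the record */
end
exit RC*(RC/=12)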
Notes:
1. SETRC should only be used when connected to programs that are prepared for any
return code on OUTPUT.
SEVER—Break a Connection
──SEVER──┬─INput──┬──
└─OUTput─┘
Operation: When the currently selected stream on the side specified is connected, these
actions are performed at the stream at the other side of the connection: If the stream was
created with CALLPIPE, the previous connection is reinstated. If the stream was not created
with CALLPIPE, the stream becomes not connected; end-of-file is set if the stage is waiting
for I/O on this stream or it is the last remaining input stream and the stage is waiting for
any input stream.
For the stream specified at the stage issuing SEVER, the connection on the top of the
ADDPIPE stack (if any) is reinstated. The stream becomes not connected if the stack is
empty.
Return Codes:
0 The stream is severed.
-112 There is more than one word in the argument string.
-163 The argument string is empty.
-164 The argument is not INPUT or OUTPUT.
Example: In a stage with more than one output stream, sever a stream as soon as you
have finished writing to it. This may avoid a stall.
/* Process from label to secondary output */
parse arg label
'callpipe (name PIPCMDS)',
'|*:',
'|tolabel' label ||,
'|*:'
'sever output'
'callpipe *:|procem|*.output.1:'
exit RC
Note: Though much CMS/TSO Pipelines documentation speaks of severing a stream
rather than severing the connection to a stream, it is understood that the severance occurs
by removing the connector between the stream being severed and the stream it is
connected to, if any. Streams are created by the pipeline specification parser and by the
pipeline command ADDSTREAM; once created a stream exists as long as the stage to which
it is attached. There is no pipeline command to destroy a stream.
Operation: The currently selected input stream and the currently selected output stream
are connected directly, bypassing the stage issuing SHORT.
Return Code: 0.
Example: Use SHORT when you wish to copy all input to the output.
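For instance, a trivial pass-through stage (a sketch):
'short'                   /* Connect input directly to output */
exit RC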
Operation: The return code is set to the position of the stage in the pipeline of its primary
stream. The first stage gets return code 1.
Example: Use STAGENUM when you wish to ensure that a program is first (or not first) in
a pipeline.
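For instance (a sketch):
'stagenum'
if RC=1 then exit 4       /* Sketch: this filter must not be first in the pipeline */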
Syntax: STREAMNUM requires a keyword for the side to operate on. An asterisk, a
number, or a stream identifier is optional.
Operation: The return code is set to the number of the stream. When the stream
identifier is omitted or specified as an asterisk, the number associated with the currently
selected stream is returned. When a number is specified as the stream identifier, it is
verified that the stream exists; the return code is the number. The number of the stream
that has the identifier specified is returned when the identifier is neither an asterisk nor a
number.
Return Codes:
0 The primary stream is selected or associated with the identifier.
-4 The stream is not defined.
-102 The second argument is a number. The stream does not exist.
-112 There are more than two words in the argument string.
-163 The argument string is empty.
-164 The argument is not INPUT or OUTPUT.
-178 The second argument is a stream identifier. The stream does not exist.
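For instance (a sketch), after selecting any input stream a program can discover which
stream produced the record:
'select anyinput'         /* Wait for a record on any input stream */
'streamnum input *'       /* Number of the stream just selected */
stream=RC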
When SUMMARY is specified, the return code is zero if at least one input stream is
connected and at least one output stream is connected. The program receives no indication
of which streams are connected.
When ALL is specified, the state of each pair of input and output streams is stored as a
word in the specified variable, which will contain as many words as the highest number of
input or output streams. Each word contains the state of the input stream, a colon (:), and
the state of the output stream. These states are defined in the “Return Codes” section
below.
When neither SUMMARY nor ALL is specified, STREAMSTATE requires a keyword for the
side to operate on; if it is present, the second word specifies the stream to test. The
default is the currently selected stream.
Operation: The return code is set to indicate the summary status or the status of the
specified stream.
The return code from STREAMSTATE ALL is zero unless there is trouble setting the variable.
Return Codes:
0 The stream is connected; the stage on the other side is waiting for I/O on this stream.
4 The stream is connected; the stage on the other side is waiting for I/O on this stream
on a different commit level. The pipeline stalls if you try to read or write the stream
before committing to the level the other side is at.
8 The stream is connected; the stage on the other side is not waiting for the stream.
12 The stream is not connected.
-4 The stream is not defined.
-112 There are more than two words in the argument string.
-163 The argument string is empty.
-164 The argument is not INPUT or OUTPUT.
Notes:
1. STREAMSTATE ALL can be issued from a REXX program only; it is not available to
pipcmd or the underlying PIPCMD macro.
Operation: The stage is put at the end of the dispatch list. The return code is set to the
number of other stages that are ready to run at the time the stage was suspended. (That is,
the return code is computed at the time the stage is suspended even though the stage must
of necessity resume before it can inspect the return code.)
Return Codes:
0 There were no other stages ready to run. The stage was resumed immediately.
+ The number of stages that were ready to run at the time the stage was suspended.
Example: To try to obtain some confidence that an output record will be consumed:
'suspend' /* give consumer a chance to read */
'streamstate output' /* Now, did it? */
if RC<0 /* Not defined? */
then exit RC /* This is an impossibility */
if RC=12 /* End-of-file; quit */
then exit 0
'peekto line' /* Now try to read a line */
Notes:
1. The order of dispatching is unspecified. The pipeline dispatcher could select the
suspended stage before it has run all stages that were ready at the time the stage
issued SUSPEND. If the return code is positive, at least one other stage has run.
2. A program should not go into a loop waiting for a producer or a consumer to commit
itself to write or read a record. Two stages using SUSPEND can be chasing each
other’s tails forever doing this.
In addition to the reference in this chapter, you can obtain more information about a
message in these ways:
The command “pipe help” invokes help for the last message issued (excluding some
informational messages). “pipe help 1” gives help on the second last error message,
and so on for the last 11 messages.
The messages are listed alphabetically in Appendix B, “Messages, Sorted by Text” on
page 868.
Unless disabled, additional messages (192 and 1 through 4) are issued automatically
by CMS/TSO Pipelines to identify the stage or command causing the error.
In the list of messages on the following pages, the first line for a message is set in bold
type. It gives the message number, severity code, and text. Words in the message text
that are set in italic type are replaced with variable data.
Most stages return with the same return code as the number of the last message issued
when the message indicates an error. Some errors are considered sufficiently grave to give
negative return codes.
The text for a message displayed on your terminal is generated from the same Script input
file that is used here. If you see a message not listed on the following pages or listed
differently, then something is downlevel (though it could well be this book). To resolve
this, type the command “pipe query” to obtain message 86 identifying the level of
CMS/TSO Pipelines you are using.
0E No message text for message number

Explanation: CMS/TSO Pipelines has discovered an internal error. A CMS/TSO Pipelines module requests the message with the number shown, but there is no action defined for the message.

System Action: Depends on where the message is issued.

User Response: Ensure the message level is odd (it is unless you have changed it). Note the string substituted in message 1, if one follows. Contact your systems support staff.

System Programmer Response: If message 1 is issued and it indicates a REXX program, the program may have issued the ISSUEMSG pipeline command; ensure that it uses a correct message number. If message 1 is not issued, the unknown message is issued in the pipeline specification parser. Contact IBM for programming assistance if the pipeline module is unmodified and the number shown is not in the file PIPELINE HELPIN. If the message is defined in the HELPIN file, then ensure that the message text table is correctly generated and inspect the file $$PIPE UPDLOG to ensure that the correct version of it is included when generating PIPELINE MODULE.

1I ... Running "string"

Explanation: This message is issued after any other message when a stage is currently running and the message level is odd. The first 60 characters of the specification of the stage are substituted in the message.

System Action: None.

User Response: The message level is set by the command PIPMOD MSGLEVEL, by the global or local MSGLEVEL, and by the MSGLEVEL option on runpipe.

2I ... Processing "command"

Explanation: This message is issued after messages issued by the pipeline command processor if the bit for 2 is on in the message level. The first 60 characters of the pipeline command issued are substituted in the message.

System Action: Message 1 is issued if the message level is odd. Processing continues.

3I ... Issued from stage number of pipeline number

Explanation: This message is issued to identify which stage is the cause of the previously issued message when the option NAME is not used in the pipeline specification.

System Action: Message 1 is issued if the message level is odd. Processing continues.

4I ... Issued from stage number of pipeline number name "name"

Explanation: This message is issued to identify which stage is the cause of the previously issued message when the option NAME is used in the pipeline specification.

System Action: Message 1 is issued if the message level is odd. Processing continues.

10E Extended format parameter list is required

Explanation: PIPE is invoked with a call type flag, indicating that only a tokenised parameter list is available. Most likely you have entered the command from EXEC or the command line of BROWSE.

Stages check that an extended parameter list is present and exit with return code -10 if the flag in the leftmost byte of register 1 indicates that no parameter list address is provided in register 0. No message is issued in this case because such an error indicates that the stage is not entered from the pipeline dispatcher.

System Action: The PIPE command or the stage returns with return code -10.

User Response: Use CMDCALL to issue a command using the required parameter lists when the environment does not build such parameter lists.

11E Null or blank parameter list found

Explanation: A null parameter list is found by PIPE, the pipeline command processor, or a stage needing parameters.

System Action: PIPE, the pipeline command processor, or the stage returns with return code -11.

12E Null pipeline

Explanation: The last character of a pipeline specification is the pipeline end character; two consecutive end characters are met; or global options are present (in parentheses) with no more data.

System Action: Message 192 is issued if the message level is odd. Pipeline scan continues to the end of the pipeline specification, at which time processing terminates with return code -12.

13E No ending right parenthesis for global options

Explanation: A leading left parenthesis is found, indicating global options, but there is no closing parenthesis.

System Action: Pipeline scan terminates with return code 13.

User Response: Terminate global options with a right parenthesis.
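The global option messages above are easiest to see with a small pipeline. In this sketch (the stages are standard built-ins; the NAME value is arbitrary), the global options are enclosed in parentheses immediately after the command name:

   pipe (name DEMO) literal Hello, world! | count chars | console

With option NAME in effect, a message issued by a stage is followed by message 4 rather than message 3. Omitting the closing parenthesis after DEMO would draw message 13; "pipe (name DEMO)" with no stages at all would draw message 12.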
14E Option word not valid

Explanation: The word substituted is not recognised as one of the global options supported.

System Action: Pipeline scan terminates with return code 14.

User Response: Defined global options are: NAME TRACE LISTRC LISTERR LISTCMD STOP SEPARATOR ENDCHAR ESCAPE MSGLEVEL.

15E Value missing for keyword "keyword"

Explanation: An operand is specified that requires a value (for instance, NAME), but the following non-blank character is the right parenthesis that ends the global options, or the operand is the last word of the argument string to a stage. This message is issued when an option list ends prematurely, and by stages that use values with operands.

System Action: Pipeline scan terminates with return code 15. When issued by a stage, the stage returns with return code 15.

16E Last character is escape character

Explanation: The escape character (declared by the option ESCAPE) is the last character of a pipeline specification. This is an error because there is nothing to escape.

System Action: Pipeline scan terminates with return code -16.

17E Null stage found

Explanation: There is a stage separator at the end of a pipeline specification; a stage separator is adjacent to an end character; or there are two stage separators with only blank characters between them.

System Action: Message 192 is issued if the message level is odd. Pipeline scan continues to the end of the pipeline specification, at which time processing terminates with return code -17.

User Response: Ensure that the pipeline specification is complete. Check if a comma is missing to indicate REXX continuation to the next stage on the following line.

The INTM shell adds console stages to the beginning or end (or both) of a pipeline specification beginning or ending with a stage separator; the PIPE command does not.

Ensure that there are no blanks between what you intend to be a pair of self-escaping vertical bars (||).

18E CMS/TSO Pipelines incorrectly generated with character

Explanation: CMS/TSO Pipelines has discovered an internal error. CMS/TSO Pipelines is generated with unacceptable characters for one of the delimiter characters (stage separator, left parenthesis, right parenthesis, period, or colon).

System Action: Pipeline scan terminates with return code -18.

User Response: Contact your systems support staff.

System Programmer Response: Restore a working copy of CMS/TSO Pipelines. Issue NUCXDROP PIPMOD followed by PIPINIT if the broken module was activated with "pipinit test". If not, you may have to resort to a backup of the product tape.

Correct the error introduced in the operand table (SYSTEM KWDTABLE by default) for the operands sc, lp, rp, cn, dt, and as.

19W Label "word" truncated to eight characters

Explanation: The first word of a stage ends in a colon; there are more than eight characters before the colon or the first period in the label.

System Action: The label is truncated on the right. Processing continues.

20I Stage returned with return code number

Explanation: Pipeline dispatcher trace is active; option LISTRC or option LISTERR is in effect. The stage has completed processing.

System Action: The pipeline dispatcher continues with other work. Control returns to the caller when all stages are complete.

21E Unable to find EXECCOMM for REXX

Explanation: CMS/TSO Pipelines has discovered an internal error. The EXEC interpreter did not set up a subcommand environment for EXECCOMM before issuing a command to the default command environment.

System Action: The REXX interface returns with code -21.

User Response: Contact your systems support staff.

System Programmer Response: Ensure that the pipeline module is generated correctly. This message indicates a change in the implementation of VM/REXX. Investigate whether corrective service is available.
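A trailing stage separator is a common way to draw message 17. In this sketch (the stages are standard built-ins), the first specification is rejected with a null stage; the second is complete:

   pipe literal abc | console |
   pipe literal abc | console

When the vertical bar is meant to be data rather than a separator, write the pair of self-escaping bars (||), with no blank between them, so that it escapes through the scanner.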
27E Entry point word not found

Explanation: The named entry point is not a built-in program; it is not found in any declared local directory; and there is no file with file name word and file type REXX.

System Action: Message 1 is issued if the message level is odd. Pipeline scan continues to the end of the pipeline specification, at which time processing terminates with return code -27. The stage terminates with return code -27 when this message is issued by a lookup routine.

User Response: If the error seems to originate in a specs stage or some other stage that allows for a long and complex parameter list, check your pipeline specification to see if you have inadvertently used a stage separator or an end character without doubling it up to let it escape through the scanner. If you forgot, what you think of as a parameter list is, in fact, several stages or even several pipelines.

The usually useless piece of advice is: Verify the spelling of the name of the program to run. If, however, this shows that the name substituted is not the name you wrote, the pipeline specification has been truncated in the middle of the command verb. Most likely, message 1145 was also issued; address your CMS commands to COMMAND, not to XEDIT.

ready      The stage is ready to run.
wait loc   Waiting for data in locate mode.
wait in    Waiting for data in move mode.
wait out   Waiting for a stage to read its output.
wait ecb   Waiting for an event control block to be posted.
unavail    The stage has been redefined by CALLPIPE; it waits for the subroutine pipeline to complete.
wait any   Waiting for data on any input stream.
returned   The stage has completed execution.
wait com   Waiting for other stages to commit.

System Action: None.

31I Resuming stage; return code is number

Explanation: Pipeline dispatcher trace is active. The stage is being resumed. The return code from the call to the pipeline dispatcher is shown.

System Action: None.
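The doubling-up advice for message 27 is easiest to see with a stage whose argument string contains the stage separator character. In this sketch (change is a standard built-in; the strings are arbitrary), a single vertical bar inside the delimited string would split the specification into two stages and most likely draw message 27 for a nonexistent verb; the doubled bar escapes through the scanner as data:

   pipe literal abc | change /b/||/ | console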
35I Output number bytes

Explanation: Pipeline dispatcher trace is active. A PIPOUTP macro or an OUTPUT pipeline command is issued. The contents of register 0 are substituted for number.

System Action: Message 39 follows.

36I Select side stream number

Explanation: Pipeline dispatcher trace is active. A PIPSEL macro or a SELECT pipeline command is issued.

System Action: None.

37I Streamnum side stream number intersection number

Explanation: Pipeline dispatcher trace is active. A PIPSTRNO macro or a STREAMNUM pipeline command is issued.

Explanation: PIPMOD receives a service call with an argument that indicates neither that CMS ABEND processing is in process (PURGE) nor that the nucleus extension is being dropped (RESET). The first token of the argument is substituted.

System Action: The service call is ignored.

User Response: Contact your systems support staff.

System Programmer Response: Ensure that no program issues SVC 202 or CMSCALL with a call type of X'FF'.

42E Entry point missing

Explanation: The RESOLVE pipeline command is issued with no operands; ldrtbls or nucext has no operands.

System Action: Return code 42 or -42 is set.

43E Null label
44E Label string is not valid

Explanation: string does not conform to the syntax for a label. For instance, there may be two or more periods in it.

System Action: Message 192 is issued if the message level is odd. Pipeline scan continues to the end of the pipeline specification, at which time processing terminates with return code -44.

45W Stream identifier "name" truncated to four characters

Explanation: There are more than four characters between the period beginning a stream identifier and the colon ending the label.

System Action: The stream identifier is truncated. Processing continues.

46E Label label not declared

Explanation: No specification for a stage is found the first time the label is used. The first usage of a label defines the stage to run, and any operands it may have. Subsequent references are to the label by itself.

System Action: Message 192 is issued if the message level is odd. Pipeline scan continues to the end of the pipeline specification, at which time processing terminates with return code -46.

User Response: Ensure that the label is spelt correctly. If this is the case, inspect the pipeline specification to see if a stage separator is erroneously put between the label and the verb for the stage.

47E Label label is already declared

Explanation: A reference is made to a label that is already defined. The label reference should be followed by a stage separator or an end character to indicate reference rather than definition.

System Action: Message 192 is issued if the message level is odd. Pipeline scan continues to the end of the pipeline specification, at which time processing terminates with return code -47.

User Response: Ensure that the label is spelt correctly. If this is the case, add a stage separator after the label to indicate that this is a reference to a stream other than the primary one. Note that all references to a label refer to the invocation of the stage that is defined with the first usage of the label.

48E Conflicting value for keyword keyword: character

Explanation: The character used for the operand is already a special character in the pipeline specification parser.

The following characters are reserved for other uses in a pipeline specification: left and right parentheses “()”, colon “:”, period “.”, blank, asterisk “*”. Other special characters are defined by the global options option SEPARATOR (by default “|”), option ENDCHAR, and option ESCAPE.

System Action: Scan terminates with return code 48.

User Response: Use another character for the function.

49E Value for keyword "keyword" is not acceptable

Explanation: The value must be a single character or a two-digit hexadecimal representation of the character to be used for the function indicated by the operand.

System Action: Scan terminates with return code 49.

50E Not a character or hexadecimal representation: word

Explanation: word is not a character or a two-digit hexadecimal representation of a character.

System Action: Return code 50 is set. If issued from the scanner, scan terminates with return code 50. If issued from a stage, the stage terminates with return code 50.

51E Missing operand after inputRange(s)

Explanation: A column range or a list enclosed in parentheses is specified, but no further operands are present.

System Action: The stage terminates with return code 51.

User Response: Specify the range “*-*” if you wish to change left parentheses; or rearrange the list of translations so that the first one is not the beginning of a valid range.

52E Unknown translate table "word"

Explanation: The table is not INPUT, OUTPUT, LOWER, UPPER, A2E, or E2A; nor is it one of the operands TO or FROM. word is the first word specified after column ranges, if any. It is neither a translation specification nor one of the operands designating a translate table.

System Action: The stage terminates with return code 52.

53E Odd number of translate pairs

Explanation: The argument string ends prematurely.

System Action: The stage terminates with return code 53.

User Response: The most likely cause of this error is that the first operand is interpreted as a column range instead of a translation specification. For instance, “xlate 40 a” gives this message instead of translating blank characters to lowercase a.
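Messages 46 and 47 concern label references. In this sketch (the stages are standard built-ins; the end character is declared with the global option ENDCHAR), the first occurrence of f: defines the stage, and the second occurrence refers back to its secondary output stream:

   pipe (end ?) literal abc | f: fanout | console ? f: | count lines | console

Writing the second pipeline as “f: fanout | ...” would instead draw message 47, because a label may only be defined once; using f: without an earlier definition would draw message 46.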
Explanation: A list of column ranges is opened or a keyword is specified that indicates a list of words or fields. A word in it does not conform to a column range syntax. If a valid decimal range is specified, the beginning column is zero or the end of the range is before the beginning.

System Action: The stage terminates with return code 54.

User Response: Add the range “*-*” if you wish to translate the left parenthesis; or rearrange the list of translations so that the first one is not the beginning of a valid range.

55E No inputRange(s) in list

Explanation: A left parenthesis is found, which indicates the beginning of a list of input ranges. The next non-blank character is a right parenthesis, which indicates that the list contains no ranges.

System Action: The stage terminates with return code 55.

User Response: Add the range “*-*” for the column range if you intend to translate left parentheses to right parentheses, like this: “xlate *-* ( )”; or rearrange the list of translations so that the first one is not the beginning of a valid range.

56E More than 10 inputRanges specified

Explanation: There are more than 10 words in the list of column ranges.

System Action: The stage terminates with return code 56.

User Response: Use a cascade of xlate if you need to translate more than 10 ranges. Alternatively, use a subroutine pipeline with a spec to put the fields to be translated adjacent to each other; perform the transliteration desired; then use another spec to put them back where they were in the input record.

57E Missing right parenthesis after inputRanges

Explanation: A left parenthesis is found, meaning a range of columns is specified, but no closing right parenthesis is found.

System Action: The stage terminates with return code 57.

System Action: The stage terminates with return code 58.

59E Logical record length number is not valid

Explanation: The number is zero or negative.

System Action: The stage terminates with return code 59.

60E Delimiter missing after string "string"

Explanation: No closing delimiter is found for a delimited string.

System Action: The stage terminates with return code 60.

User Response: Most likely you never intended to specify a delimited string, but a mistake in a column range caused the specification error.

61E Output specification missing

Explanation: The output column is not specified for the last item.

System Action: The stage terminates with return code 61.

User Response: A likely cause is that an earlier specification is interpreted as a delimited string instead of what it was intended to be.

62E Command length number too long for CP

Explanation: The argument string or an input line is longer than the 240 bytes supported by CP, even after leading blank characters are stripped.

System Action: The stage terminates with return code 62.

63E Output specification word is not valid

Explanation: The word specifies where to put a field in the output record; it is not a positive number or a column range.

System Action: The stage terminates with return code 63.

User Response: A mistake in a conversion or placement option can trigger this message. Another likely cause is that an earlier input specification has been scanned as a delimited string where it should have been a column.
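Messages 61 and 63 come from spec and similar stages, where input specifications alternate with output placements. In this sketch (spec is the standard built-in; the data is arbitrary), each input item is followed by an output position; omitting the final “nextword” would leave the last item without an output specification and draw message 61:

   pipe literal John Smith | spec word 2 1 word 1 nextword | console

The record written to the console should be “Smith John”.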
74E Fixed records not same length; last bytes followed by current bytes

Explanation: Input records to block FIXED are not all the same length.

System Action: The stage terminates with return code 74.

User Response: Check the input file. Maybe you wanted the function performed by fblock rather than block; fblock accepts records of any length. Use pad to increase the length of short records, chop to truncate records.

75E Block size not integral multiple of record length; remainder is number

Explanation: The block size specified is not an integral multiple of the length of the first record read. For fbawrite the block less the prefix does not contain a multiple of 512 bytes.

System Action: The stage terminates with return code 75.

User Response: Use fblock if you wish to combine records irrespective of their lengths.

76I Waiting on ECB at X'address': hex

Explanation: Pipeline dispatcher trace is active. The stage issues the macro PIPWECB. The address of the ECB and its contents are shown. Bit 1 of the ECB (X'40') indicates that it is posted.

System Action: Processing continues.

77I Return code number

Explanation: The return code from a pipeline command is not zero. The option LISTERR is active.

System Action: None.

78E Record length number is too much

Explanation: The input record is too long for the device driver or blocking filter in question.

System Action: The stage terminates with return code 78.

User Response: Check the input file. block CMS, disk, fullscr, printmc, punch, and uro only accept up to 65535 bytes of data.

block V and block VB do not support input records longer than 32752 bytes (which is equivalent to the OS restriction of 32756 including the record descriptor word). Use block VBS to process records of any length.

79E CCW command code X'hex' is not valid

Explanation: Except for X'5A', the first byte of a record does not contain a write or control CCW: the rightmost bit is zero.

System Action: The stage terminates with return code 79.

User Response: Check the input file. Most likely there is no CCW operation code in the first column of the data record. Use punch instead of uro or printmc if you do not need to create records that have no operation carriage control.

80E More than 255 conversion triplets specified

Explanation: More than 765 operand words are found; overstr cannot handle more than 255 triplets.

System Action: The stage terminates with return code 80.

User Response: Build a cascade of c14to38 filters if you need to process that many combinations.

81E Incomplete conversion triplet

Explanation: The number of operand words is not divisible by three.

System Action: The stage terminates with return code 81.

82E Device address word is not hexadecimal

Explanation: The device address shown is not composed of hexadecimal characters.

System Action: The stage terminates with return code 82.

User Response: Ensure operands are spelt correctly; a misspelled keyword is interpreted as a device address.

83E Device word does not exist

Explanation: CP sets condition code 3 on diagnose 24, indicating that the virtual device does not exist.

System Action: The stage terminates with return code 83.

User Response: Ensure operands are spelt correctly; a misspelled keyword is interpreted as a device address.

84E Virtual device word is not a supported virtual type

Explanation: The virtual device class and type returned for the device are not compatible with the function requested. For instance, it is not a printer for printmc or not a punch for punch.

System Action: The stage terminates with return code 84.
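The advice for messages 74 and 75 combines naturally in one pipeline. In this sketch (the file names are examples), variable-length records are padded to 80 bytes so that block FIXED accepts them, and the block size is an integral multiple of the record length:

   pipe < input data a | pad 80 | block 800 | > blocked data a

Without the pad stage, records of differing lengths would draw message 74; a block size of 801 would draw message 75 with a remainder of 1.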
86I CMS/TSO Pipelines, 5654-030/5655-A17 modlevel (Version.Release/Mod) - Generated July 14, 2010 at 11:00 a.m.

Explanation: This is the response to the command PIPE QUERY. Date and time shown here represent the time when this book was formatted. In the actual message they are replaced with the date and time the module was generated. This time is normally less than a minute before the timestamp of PIPELINE MODULE, unless the file has been transported with SENDFILE across time zone boundaries or between systems with dissimilar time zone specification in DMKSYS or HCPSYS.

System Action: Return code 86 is set.

87E This stage must be the first stage of a pipeline

Explanation: A program that cannot process input records is not in the first position of the pipeline.

System Action: The stage terminates with return code 87.

88E Buffer overflow

Explanation: buildscr needs more than 16K to build the screen image.

System Action: The stage terminates with return code 88.

User Response: Check the input file. Ensure (for instance with asatomc) that the input file does have machine carriage control.

89E Return code number reading the virtual reader

Explanation: Diagnose 14 sets the return code shown.

System Action: The stage terminates with return code 89.

User Response: One reason is that the first file in the reader is a VMDUMP file and you did not use the option 4KBLOCK. Refer to VM/ESA CP Programming Services, SC24-5520, for a description of the error codes for diagnose 14.

90E No reader file available

Explanation: There are no files in your reader that can be processed.

System Action: The stage terminates with return code 90.

User Response: The option MONITOR, or lack thereof, determines which type of file reader tries to read. There may still be files ready for reading of the other kind (monitor or not, as appropriate).

91E Return code number from CONSOLE type macro

Explanation: The CONSOLE interface is used to a 3270 terminal and the return code shown is received from CMS. type displays the second doubleword of the CONSOLE parameter list.

System Action: The stage terminates with return code 91.

User Response: The return codes from CONSOLE are described in the VM/ESA CMS Application Development Reference for Assembler, SC24-5453.

92E More than ten key fields

Explanation: More than the maximum ten key fields are specified for sort or merge.

System Action: The stage terminates with return code 92.

User Response: Use spec to rearrange the records to make the fields contiguous so that they can be coalesced.

93E Pipeline not installed as a nucleus extension; use PIPE command

Explanation: CMS/TSO Pipelines is initialised, but general register 2 does not point to an SCBLOCK for its entry point. This message has also been observed when a downlevel or modified NUCXLOAD MODULE is used to install the pipeline module as a nucleus extension.

System Action: Processing terminates with return code 93.

User Response: Use the command PIPE to run a pipeline specification. Do not issue the command PIPELINE or NXPIPE; these modules run in the user area when invoked directly as a command.

System Programmer Response: Ensure that the PIPELINE MODULE is relocatable.

94E Token token is not valid for PIPMOD

Explanation: The PIPMOD command is issued with flag byte zero. The subcommand is not supported.

System Action: Processing terminates with return code 94.

User Response: Do not issue the PIPMOD command from an EXEC.

95E Operand word is not valid for PIPMOD

Explanation: The PIPMOD command is issued from the command line or from an EXEC. The word shown is not supported.

System Action: Processing terminates with return code 95.
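Message 92 limits sort and merge to ten key fields. In this sketch (the file names are examples), two key fields are given directly as column ranges; with more than ten fields, the advice above is to use spec first to copy the key fields into one contiguous area and sort on that single range:

   pipe < payroll data a | sort 1-8 20-24 | > sorted data a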
96E Missing PIPMOD operand

Explanation: The PIPMOD command is issued with a register 1 flag byte of zeros. No arguments are found.

System Action: Processing terminates with return code 96.

97E Userword for pipe nucleus extension is zero

Explanation: CMS/TSO Pipelines has discovered an internal error. The nucleus extension for PIPE is installed, but no pipeline header is allocated.

System Action: Processing terminates with return code 97.

User Response: Contact your systems support staff.

System Programmer Response: This is an error in CMS/TSO Pipelines.

98E Connector not by itself

Explanation: A label is found that has an asterisk as the first component, but a stage definition follows.

System Action: Message 192 is issued if the message level is odd. Pipeline scan continues to the end of the pipeline specification, at which time processing terminates with return code -98.

User Response: Connectors must be at the beginning or the end of a pipeline. Most likely an end character or a stage separator is missing.

99E Connector not at the beginning or the end of a pipeline

Explanation: A connector is in the middle of a pipeline.

System Action: Message 192 is issued if the message level is odd. Pipeline scan continues to the end of the pipeline specification, at which time processing terminates with return code -99.

User Response: Connectors specify how to couple streams between the active pipeline and the one being added to the pipeline set. Connectors must be at the beginning or the end of a pipeline. Most likely an end character or a stage separator is missing.

100E Direction "word" not input or output

Explanation: The second component of a connector is not a recognised operand.

System Action: Pipeline scan terminates with return code 100.

101E Connector connector can be specified with ADDPIPE or CALLPIPE

Explanation: A connector is found, but the pipeline was not issued with an ADDPIPE or CALLPIPE.

System Action: Pipeline scan terminates with return code 101.

102E Stream number not defined

Explanation: A connector requests the stream with the number shown, but the calling stage does not have that many streams defined.

System Action: Pipeline scan terminates with return code 102.

User Response: Note that the primary stream has number 0; the secondary stream is number 1.

103E Stream identifier not defined

Explanation: The calling stage does not have a stream with the identifier specified in the third component of the connector.

System Action: Pipeline scan terminates with return code 103.

104E Compiler stack overflow

Explanation: CMS/TSO Pipelines has discovered an internal error. A filter is out of space for its compiler stack while generating code.

System Action: The stage terminates with return code 104.

User Response: Try to reduce the complexity of the argument string. Contact your systems support staff.

System Programmer Response: This is a programming error in CMS/TSO Pipelines. Investigate whether corrective service is available.

105E Compiler overflow

Explanation: A filter is compiling a program; the program is too large to fit into the area that has been allocated for this purpose.

System Action: The stage terminates with return code 105.

User Response: For sort, the fields required such a complex program to compare that the available storage is exhausted. Use spec to rearrange the sort fields to become less complex. sort has read all input, but produces no output.

System Programmer Response: This is not a programming error in CMS/TSO Pipelines.
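Connectors (messages 98 through 103) appear only in pipelines issued with ADDPIPE or CALLPIPE from a running stage. This sketch of a user-written REXX filter (the file name UPCASE is an example; xlate is a standard built-in) passes its primary input stream through xlate and back to its primary output; the *: connectors stand first and last in the subroutine pipeline:

   /* UPCASE REXX -- subroutine pipeline with connectors */
   'callpipe (name UPCASE) *: | xlate upper | *:'
   exit RC

Placing a connector in the middle of the subroutine pipeline would draw message 99; following a connector with a stage definition would draw message 98.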
107E PIPMOD nucleus extension dropped before PIPE command is complete

Explanation: The PIPMOD nucleus extension is dropped while a pipeline is active.

System Action: Control is returned to CMS with return code 107. Results are unpredictable when control returns to the pipeline; an ABEND is likely.

User Response: Do not issue PIPINIT or NUCXDROP PIPMOD while a pipeline specification is being run.

108E Return code number from operation operation on tape tape

Explanation: CMS refuses to perform an I/O operation on the tape drive.

System Action: The stage terminates with return code 108.

User Response: Refer to the RDTAPE or WRTAPE macro description in the VM/ESA CMS Application Development Reference for Assembler, SC24-5453, for a description of the error codes.

Error 2 on write means that end of tape is reached while writing tape mark(s) after the file; all input records have been processed.

109E Keyword word is not a valid blocking format

Explanation: The operand is not Fixed, Variable, C, and so on.

System Action: The stage terminates with return code 109.

User Response: deblock V supports all OS variable record formats, blocked or spanned, or both.

110E Unsupported record in IEBCOPY unloaded data

Explanation: The top three bits of the first record are not all zero.

System Action: The stage terminates with return code 110.

User Response: Check the input file. If the data is indeed an IEBCOPY unloaded PDS, then there seems to be a note list. Remove it in a drop or nfind stage.

111E Operand word is not valid

Explanation: A keyword operand is expected, but the word does not match any keyword that is valid in the context.

System Action: The stage terminates with return code 111.

112E Excessive options "string"

Explanation: A stage has scanned all options it recognises; the string shown remains.

System Action: The stage terminates with return code 112.

User Response: This error may occur when a delimited string is intended, but a single character is found. For example, in “chop / ,/|” the first forward slash means the literal character '/' rather than the opening of a delimited string. Though “chop /, /|” would scan the intended way, the preferred specification is “chop any / ,/”.

Another likely cause is that a stage separator is missing and what is intended as a following stage is treated as additional operands to the current stage.

113E Required operand missing

Explanation: A stage has found some, but not all, required operands.

System Action: The stage terminates with return code 113.

114E Block size missing

Explanation: block is issued without an operand.

System Action: The stage terminates with return code 114.

User Response: Specify the block size for a default of Fixed.

115E Block size too small; number is minimum for this type

Explanation: The block size is too small to hold a record or segment, even of one byte.

System Action: The stage terminates with return code 115.

116E File type missing

Explanation: The argument string is one word (the file name).

System Action: The stage terminates with return code 116.

User Response: Write file names, types, and modes as blank-delimited words. Specify both the file name and the file type.

117E File mode "word" longer than two characters

Explanation: Three or more characters are found in the third word of the argument string.

System Action: The stage terminates with return code 117.
118E Return code number from renaming the file
Explanation: An erase and write operation is requested for a file. The file exists, so a utility file is written and renamed. The RENAME function fails with the return code shown.
System Action: The stage terminates with return code 118.
User Response: The reason may be that the file has been closed during execution, possibly by some other stage going into subset. Use diskslow to overcome this problem or buffer the file with buffer before writing it.

121E File not found in the active file table
Explanation: CMS/TSO Pipelines has discovered an internal error. Having written a file through the full block interface, the disk device driver is unable to find the AFT entry for the newly created file.
System Action: The stage terminates with return code 121.
User Response: Contact your systems support staff.
System Programmer Response: This is an error in CMS/TSO Pipelines. Investigate whether corrective service is available.
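For instance, a pipeline that collects all output in a buffer stage before the disk stage starts writing, as suggested for message 118. This is only a sketch; the file identifiers are illustrative:

'pipe < input data a | xlate upper | buffer | > output data a'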
Explanation: An erase and write operation (>) is requested with an asterisk as the file mode.
System Action: The stage terminates with return code 126.
User Response: Specify the mode letter where you wish to write the file.

127E This stage cannot be first in a pipeline
Explanation: A device driver that requires an input stream is first in a pipeline, where there can be no input to read.
System Action: The stage terminates with return code 127.

128E Record format not existing file format letter
Explanation: A file is to be appended to. The explicit record format specified is not the same as the one for the existing file.
System Action: The stage terminates with return code 128.
User Response: Specify the correct record format; use > to replace a file; or erase the existing file before issuing the pipeline.

129E Error reading file: Premature end of file
Explanation: A V format file is being read through the full block interface. The end-of-file record is not expected. This error can occur when a file on a shared minidisk has been updated by another virtual machine after you accessed the minidisk.
System Action: The stage terminates with return code 129.
User Response: Access the disk and try again if the file is on a shared minidisk. If the file is indeed in order, there is a programming error in disk. Use diskslow instead.

System Action: Message 192 is issued if the message level is odd. Pipeline scan continues to the end of the pipeline specification, at which time processing terminates with return code -132.

133E Stream "word" already prefixed
Explanation: The stream is referenced in two or more connectors that specify a prefix type connection. For instance, two or more connectors refer to *.input: at the end of a pipeline.
System Action: Message 192 is issued if the message level is odd. Pipeline scan continues to the end of the pipeline specification, at which time processing terminates with return code -133.

134E Record is number bytes, but format F file record length is number
Explanation: While a file that has fixed record format is being written, an input record does not have the correct length.
System Action: The stage terminates with return code 134.
User Response: Check the input file. Use pad to extend records; chop to truncate.

137E The string of operands is too long
Explanation: The operand string for asmfind or asmnfind is longer than 71 bytes, indicating that the target is not entirely in the first record. The argument string to sql is 32K or
: longer. An input line to attach is 32K or longer.
System Action: The stage terminates with return code 137.
User Response: Use asmcont to combine continuation records before find or nfind; reconstruct the Assemble file with asmxpnd.

Explanation: An input record does not fit in the buffer when creating format V or VB records, or a record is longer than an explicit length on pack VARIABLE. The length of the record is substituted.
System Action: The stage terminates with return code 140.
User Response: Check the input file. Increase the block size to accommodate the required length if you are indeed blocking the data you intend to block.

141E XEDIT not active
Explanation: The xedit device driver is invoked; it finds no active XEDIT subcommand environment. (The return code from CMS is -3.)
System Action: The stage terminates with return code 141.

146E File "fn ft fm" does not exist
Explanation: A file does not exist. It is requested by pdsdirect or members, or a file read is requested with the synonym <.
System Action: The stage terminates with return code 146.
User Response: Use disk to treat missing files as if they have no records.

147E File not a proper PDS
Explanation: The first record of the file does not contain a recognised identifier.
System Action: The stage terminates with return code 147.
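For instance, a pipeline that makes every record exactly 80 bytes with pad and chop before appending to a fixed-format file avoids message 134. This is only a sketch; the file identifiers are illustrative:

'pipe < new records a | pad 80 | chop 80 | >> master file a'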
¡ 149E Offset is not smaller than modulo
¡ Explanation: The first number specified must be zero or positive and smaller than the second.
¡ System Action: The stage terminates with return code 149.

150E Member word not found
Explanation: The member listed does not exist in the library.
System Action: The stage terminates with return code 150.
User Response: When extracting members from a TXTLIB, members requires the name of the first CSECT in an object module. It does not resolve entry points the way the CMS loader does.

157E Null string found
Explanation: There are two consecutive delimiter characters.
System Action: The stage terminates with return code 157.

¡ 158E Modulo must be positive (it is number)
¡ System Action: The stage terminates with return code 158.

159E Device address no longer exists
Explanation: Condition code 3 is received on an I/O operation to the device.
System Action: The stage terminates with return code 159.
151E Operand "string" is not range of characters or a delimited string
Explanation: The operand is neither a range of characters nor a delimitedString of enumerated characters.
System Action: The stage terminates with return code 151.

152E Block size number too large; number is the maximum
Explanation: The block size for block is larger than the size supported for the blocking format in question. For V and the three other variable formats, the maximum is 32760. For AWSTAPE, the maximum is 65541.
User Response: Choose a smaller block size.

154E Operating environment not supported by stage
Explanation: A stage is requested which does not run on the operating system at hand.
System Action: The stage terminates with return code 154.

155E "attribute" is not three characters or hexadecimal
Explanation: One of the first four words in the arguments to buildscr is neither an asterisk nor three characters.
System Action: The stage terminates with return code 155.
User Response: Write three characters for extended attributes, or a single asterisk.

161E 64K or more inbound data
Explanation: A 3270 generates 64K bytes or more of input data.
System Action: The stage terminates with return code 161.
User Response: If your terminal is a personal computer, the terminal simulator may have generated an incorrect inbound transmission.

162E Return code number from NUCEXT
Explanation: The return code shown is received when installing or retracting a nucleus extension.
System Action: Processing terminates with return code 162.

163E Missing keyword INPUT or OUTPUT
Explanation: SELECT and SEVER must have an operand.
System Action: Processing terminates with return code 163.

164E Direction "word" not valid or not supported
Explanation: A stage issues a pipeline command where the first operand is the word shown. This combination is not supported.
System Action: Processing terminates with return code 164.

165E Stream identifier word not valid
166E No real device attached for device
Explanation: The device driver requires a real device, but one is not attached.
System Action: The stage terminates with return code 166.

¡ 167E You cannot READ from the second reading station
¡ Explanation: SELECT SECOND is in effect. The second reading station has no input stream associated and thus no record can be read.
¡ System Action: The stage terminates with return code 167.

169E Stream identifier missing
Explanation: SELECT has no operands.
System Action: Processing terminates with return code 169.

170E Prefix or suffix type connector not allowed
Explanation: A pipeline specification that is issued with CALLPIPE contains an output connector at the beginning of a pipeline or an input connector at the end of a pipeline.
System Action: Processing terminates with return code 170.
User Response: Use the ADDPIPE pipeline command to process alternative input or redirect output.

172E Help not available for relative message number; issue PIPE HELP MENU for the Pipelines help menu
Explanation: The operand on PIPE HELP specifies a relative number for which no message is stored.
System Action: Processing terminates with return code 172.

173E No stage found to run
Explanation: CMS/TSO Pipelines has discovered an internal error. The pipeline is stalled, but error recovery finds no stage that is forced ready to run.
System Action: Processing terminates with return code 173.
User Response: Contact your systems support staff.
System Programmer Response: This is an error in CMS/TSO Pipelines. Provide as documentation PIPDUMP LISTING created on the user’s A disk.

174E Stream "identifier" is already defined
Explanation: The second component of the label refers to a stream that is already defined for the stage.
System Action: Message 192 is issued if the message level is odd. Pipeline scan continues to the end of the pipeline specification, at which time processing terminates with return code -174.
User Response: Choose another stream identifier for the label reference.

175E Language table not generated
Explanation: The language table describing message texts for multiple languages has not been generated in CMS/TSO Pipelines.
System Action: Processing terminates with return code 175.

176E Language "word" not found
Explanation: Messages for the requested language were not generated with CMS/TSO Pipelines.
System Action: Processing terminates with return code 176.

177I Spent number milliseconds in routine
Explanation: This message is issued when the message level includes the bit for 8K. A message is issued for each stage as it completes. Further messages are issued to list time spent in system services.
System Action: None.

178E Stream identifier is not found
Explanation: fanin is used with operands to designate a specific order of streams to be read, but the one shown cannot be selected. SELECT on spec requests a stream that is not defined.
System Action: The stage terminates with return code 178.
User Response: This error can be caused by a missing stage separator after fanin.

179E Character "char" is not an ASA carriage control character
Explanation: The file is not in the correct format.
System Action: The stage terminates with return code 179.
User Response: Check the input file.
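For instance, a subroutine pipeline issued with CALLPIPE reads the caller's input through a connector at the beginning and writes the caller's output through a connector at the end, which is the arrangement message 170 requires:

'callpipe *: | xlate upper | *:'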
191E Second character of connector not a period
Explanation: The first character is an asterisk, indicating a connector, but the second character is not a period.
System Action: Message 192 is issued if the message level is odd. Pipeline scan continues to the end of the pipeline specification, at which time processing terminates with return code -191.

192I ... Scan at position number; previous data "string"
Explanation: The number substituted is the number of characters from the beginning of the pipeline specification (including global options) to the current scan pointer. The last 20 characters before the scan pointer are substituted for string.
System Action: None.
User Response: The error is at or before the character indicated by the scan pointer.

193E Colon missing in connector
Explanation: The definition of a stage begins with an asterisk, but a blank character or a parenthesis is met before a colon.
System Action: Message 192 is issued if the message level is odd. Pipeline scan continues to the end of the pipeline specification, at which time processing terminates with return code -193.

194E Parenthesis not supported in connector
Explanation: A parenthesis is met in a connector.
System Action: Message 192 is issued if the message level is odd. Pipeline scan continues to the end of the pipeline specification, at which time processing terminates with return code -194.

195E Pipeline cannot contain only a connector
Explanation: A connector at the beginning of a pipeline ends the operand string, or it is followed by an end character.
System Action: Message 192 is issued if the message level is odd. Pipeline scan continues to the end of the pipeline specification, at which time processing terminates with return code -195.

196E Column ranges must be in ascending order and not overlapping
Explanation: change finds overlapping ranges or ranges that are not in ascending order from left to right.
System Action: The stage terminates with return code 196.

197E Range shorter than first string
Explanation: The “from” string is longer than a range.
System Action: The stage terminates with return code 197.

198E Count must be one when first string is null
Explanation: A null “from” string is present; the only acceptable count is one.
System Action: The stage terminates with return code 198.

200E Missing ending parenthesis in expression
Explanation: More left parentheses are met than can be paired with right parentheses in the expression.
System Action: The stage terminates with return code 200.

204E Too many ending parentheses in expression
Explanation: A right parenthesis is met for which there is no open left parenthesis.
System Action: The stage terminates with return code 204.

206E Expression missing
Explanation: An opening parenthesis is followed by a closing one or a comma; a comma is followed by a comma, or a comma is followed by a closing parenthesis.
System Action: The stage terminates with return code 206.

209E Segment length number not 2 or more
Explanation: The length byte in front of a segment is zero or one. This is not valid for data blocked in the netdata format.
System Action: The stage terminates with return code 209.
User Response: Check the input file. Ensure that the input stream is indeed in the netdata format and that records are padded to 80 bytes.

211E Second target missing
Explanation: A delimited string is found for the first target, but the second target is not present.
System Action: The stage terminates with return code 211.

212E Screen size number less than 1920 or greater than 16384
Explanation: The screen size (the product of the number of lines and columns) is less than the 1920 capacity of a model 2 screen, or larger than the 16384 positions addressable with 14-bit addressing.
System Action: The stage terminates with return code 212.
214E Mode fm is not accessed or not CMS format
Explanation: No CMS minidisk or directory is found with the mode letter shown. Either the mode is not accessed, the character is not a letter, or the disk is in OS format.
System Action: The stage terminates with return code 214.

215E File identifier "file" not complete or too long
Explanation: The identifier of a file to be looked up by state or statew is not two or three blank-delimited words.
System Action: The stage terminates with return code 215.

219E Input not in correct format (check word is "check word", not "word")
Explanation: A stage that expects a particular record from another stage did not read what it expected. For example, outstore finds an input record that does not describe a file in storage; or tcpdata reads a record that is not generated by tcplisten.
! “Input” should be taken to include records passed in a data space.
System Action: The stage terminates with return code 219.
User Response: Check the input file. Ensure that the correct stage is used to generate the file.

220E First record not a delimiter: "data"
Explanation: maclib finds that the first input record does not define the name of a member. The beginning of the first record is shown. Two double quotes with no substitution means that the first record is null. This is an error because the record belongs to no member.
System Action: The stage terminates with return code 220.
User Response: Check the input file. Ensure that the input stream has members delimited properly and that the operand, if used, is written correctly and in the correct case.
By default, members are delimited by “*COPY” starting in the first position of the record.
Note that the operand is not translated to upper case; write in upper case if the delimiters are upper case.

222E Secondary stream not defined
Explanation: Only the primary stream is defined. update requires two input and output streams. lookup requires at least two streams.
System Action: The stage terminates with return code 222.
User Response: update reads the master file from the primary input and writes the updated file to the primary output. Transactions (update control cards and replaced/inserted records) are read from the secondary input. The update log is written to the secondary output.

/* Sample update */
'pipe (end ?)',
'< mstr fl|u:update|> m fl a',
'?< upd fl|u: |> u log a'

223E Sequence error in output file: previous to new
Explanation: A sequence error is introduced in the output file.
System Action: This message is written to the update log stream. Processing continues. Return code 8 is set unless other errors force a higher return code.

224E Premature end of primary input stream; sequence number number not found
Explanation: An update control record references the number shown, but it is not found before the input stream is exhausted.
System Action: This message is written to the update log stream. Processing continues. Return code 12 is set unless other errors force a higher return code.

225E Sequence number not found
Explanation: An update control record references the number shown, but it is not found. A line with a higher serial number is encountered.
System Action: This message is written to the update log stream. Processing continues. Return code 12 is set unless other errors force a higher return code.
227E Sequence field not present in record; number bytes read
Explanation: An input record is too short to contain the sequence field. All master input records are checked for this; detail records being inserted are checked if the control record indicates that the sequence field in the record is to be retained.
System Action: The stage terminates with return code 227.
User Response: Check the input file. Ensure that all records have a sequence field. Move the sequence field to the beginning of variable length records.

229E Sequence error in input stream from previous to new
Explanation: The input master file has a sequence error.
System Action: This message is written to the update log stream. Processing continues. Return code 8 is set unless other errors force a higher return code.

230E Unsupported format "type"
Explanation: The record format for the packed file is neither fixed nor variable.
System Action: The stage terminates with return code 230.

231E Null variable name
Explanation: The first two characters of an input record are the same.
System Action: The stage terminates with return code 231.
User Response: Check the input file. Ensure that a single delimiter is used to delimit the variable name from the data to load. The name must begin in the second column of the input record.
A blank or an asterisk (*) in column one indicates a comment line for which no variable is set.

232E Stem or variable name is too long; length is number bytes
Explanation: The variable name is too long. var supports at most 250 bytes for the variable name. stem supports at most 240 bytes in the name of the stem to allow for a 10-character sequence number.
System Action: The stage terminates with return code 232.
User Response: Choose a shorter name for the variable or stem.

233E No active EXECCOMM environment found
Explanation: A stage refers to the EXEC environment, but no such environment is found.
System Action: The stage terminates with return code 233.
User Response: Ensure that the pipeline is started from an EXEC when using filters referencing EXEC or REXX variables.
System Programmer Response: A NUCEXT for EXECCOMM received a nonzero return code.

234E Caller not REXX
Explanation: rexxvars is unable to obtain the interpreter private data. Most likely, an EXEC2 issued the PIPE command.
System Action: The stage terminates with return code 234.
User Response: Ensure that rexxvars is only called from REXX programs. Such programs begin with a REXX comment (/* ... */).

235E Variable name is not valid: word
Explanation: The variable name is unacceptable to the EXECCOMM interface. The variable may be longer than 250 characters or it may contain a character that is not valid in a variable name.
System Action: The stage terminates with return code 235.
User Response: Ensure that the stem or variable name is spelt correctly. Do not put an ampersand (&) at the beginning of it. varload requires that the stem part of a variable name must be in upper case; a simple variable must be completely in upper case.

236E Too much data for variable name
Explanation: Too much data is to be set. The maximum length supported for EXEC2 variables is 255 bytes. This message is also issued when there is insufficient storage for EXECCOMM processing to complete.
System Action: The stage terminates with return code 236.
User Response: Use chop to truncate records if using EXEC2. With REXX, it is likely that you have run out of storage; it may help to increase virtual machine storage.

237E Error code X'hex' (return code number) from EXECCOMM
Explanation: CMS/TSO Pipelines is not prepared for the return code it receives from EXECCOMM.
System Action: Message 552 displays the EXECCOMM parameter list. The stage terminates with return code 237.
User Response: Ensure the message level is odd (it is unless you have changed it) and that message 1 is issued to show the stage in error. Contact your systems support staff.
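For instance, this REXX fragment loads a file into the stem LINES. through the EXECCOMM interface; as messages 233 and 234 note, it must be issued from a REXX program. The file identifier is illustrative:

/* Load a file into a stem; stem sets LINES.0 to the count */
'PIPE < input file a | stem lines.'
say lines.0 'records read'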
261E Unable to open DDNAME
Explanation: The third bit of DCBOFLGS stays zero.
System Action: The stage terminates with return code 261. A companion CMS message (DMSSOP036R) may have been issued.
User Response: The most likely cause is that no FILEDEF has been issued to define the data set. Ensure that DCB attributes are specified.

264E Too many streams
Explanation: Too many streams are defined for merge; a selection stage has more than two streams; a secondary stream is defined for a stage that does not use it.
System Action: The stage terminates with return code 264.
User Response: Cascade merge stages to merge the required number of streams. For other stages, this message usually indicates trouble with the multistream topology. For instance, this is a subroutine pipeline to select lines with A, B, or C:

'callpipe (end ? name ALLMSGS)',
'|*:',
'|a:locate string /A/',
'|f:faninany',
'|*:',
'?a:',
'|b:locate string /B/',
'|f:',
'?b:',
'| locate string /C/',
'|f:'

System Action: The stage terminates with return code 279.

Explanation: The argument string to maclib is longer than 16M.
System Action: The stage terminates with return code 280.
User Response: Choose a shorter token.

281W Mixed case command verb "word"
Explanation: command finds the first word to be different from its translation to upper case.
System Action: The tokenised parameter list is translated to upper case.
User Response: This message alerts you to the fact that results may be different on VM/System Product Releases 4 and 5. Write the command entirely in upper case to be sure the results are the same on the two releases of VM.
System Programmer Response: command inspects the operand defined by CD in SYSTEM KWDTABLE to see if it should translate the tokenised parameter list and issue this message. Use one of these values:
0  Issue no message and translate to upper case.
1  Issue message 281 and translate to upper case. This is the default.
2  Issue no message and leave the tokens in lower case.
3  Issue message 281 and leave the tokens in lower case.

282E Stage cannot be used with ADDPIPE
Explanation: One of the device drivers referring to REXX or EXEC variables is requested in a pipeline specification issued with ADDPIPE. Since the two programs would run in parallel, it is not possible to ensure that the EXECCOMM environment will remain for the duration of the new pipeline.
System Action: The stage terminates with return code 282.
User Response: Use CALLPIPE to load or store variables in a REXX filter.

It is more likely that there is an error in CMS/TSO Pipelines.
System Action: The stage terminates with return code 284.

287E Number number cannot be negative
Explanation: A negative number is specified for an operand to a stage that only supports zero or positive numbers.
System Action: The stage terminates with return code 287.
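For instance, this pipeline cascades two merge stages to merge three input files, as suggested for message 264. This is only a sketch; the file identifiers are illustrative:

'pipe (end ?) < first file a | m1: merge | m2: merge | > merged file a',
'? < second file a | m1:',
'? < third file a | m2:'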
Overrun  The channel did not transmit data fast enough for the device. This error is not likely to occur.
System Action: None.

Explanation: The contents of the HCPSGIOP control block (used with diagnose A8) are listed in hexadecimal, 32 bytes at a time.
System Action: None.

300E Namelist does not end
Explanation: A left parenthesis is found opening a name list in a table definition, but no right parenthesis is found to close it.
System Action: The stage terminates with return code 300.

System Action: The stage terminates with return code 304.
User Response: Ensure that ISPF is active.
System Programmer Response: Regenerate the pipeline module with the current ISPLINK TEXT if the ISPLINK interface has changed.

Explanation: The application name used by CMS/TSO Pipelines to establish IUCV connections is already known to CMS. This can happen when the PIPE command is entered recursively from a pipeline that uses starmsg.
System Action: The stage terminates with return code 306.
User Response: Find out what is already connected to the system service.
307E Unable to connect to service
Explanation: The path to the system service shown is severed rather than connected. For *MSG and *MSGALL this indicates that there is already a path connected to the service.
System Action: Message 312 is issued to display the user data field unless it contains all zero bits, all one bits, or is blank. The path is severed. The stage terminates with return code 307.
User Response: Find out what is already connected to the system service. This can be a starmsg stage in a pipeline that has invoked the PIPE command recursively, or it can be a different application, for instance full screen CMS.

308E CP system service name not valid
Explanation: Return code 1016 is received on a CMSIUCV macro. This indicates that the name of a CP system service is not valid.
System Action: The stage terminates with return code 308.

Explanation: CMS sets the code shown when a path to an IUCV service is declared.
System Action: The stage terminates with return code 310.
User Response: Return codes are listed in VM/ESA CMS Application Development Reference for Assembler, SC24-5453. Return code 4 indicates an attempt by two programs to access the same function.

311E Return code number from CMSIUCV function
Explanation: The return code shown was received from CMS when attempting to connect to a service.
System Action: The stage terminates with return code 311.
User Response: Return codes are listed in VM/ESA CMS Application Development Reference for Assembler, SC24-5453.

312I IPUSER: hex
Explanation: A path was severed. If the IPUSER field is neither blank nor zero, its contents are substituted. The substitution is a character string when the field consists entirely of printable characters; otherwise the field is displayed in hexadecimal.

313E IPRCODE number received on IUCV instruction
Explanation: The return code is not expected.
System Action: The stage terminates with return code 313.

314E Server user ID is not available
Explanation: Return code 1011 is received on CMSIUCV CONNECT. On CMS, the virtual machine is not logged on or has not enabled IUCV communications. On z/OS, no address space has connected to the VMCF subsystem for the name specified.
System Action: The stage terminates with return code 314.

315E Server has not declared a buffer
Explanation: Return code 1012 is received on CMSIUCV CONNECT.
System Action: The stage terminates with return code 315.

System Action: The stage terminates with return code 318.

319E Not authorised to communicate with service
Explanation: Return code 1015 is received on CMSIUCV CONNECT. There is no IUCV statement in the directory authorising communication to the server.
System Action: The stage terminates with return code 319.
System Programmer Response: Use the IUCV ALLOWANY statement in the directory entry for the server virtual machine if anyone should be allowed to connect to it; use IUCV statements in the directory entries for individual users when you wish to authorise only some virtual machines to communicate with a server.

320E Unexpected IUCV interrupt with IPTYPE type on path number
Explanation: An IUCV interrupt is fielded where the type is not the expected one.
System Action: The stage terminates with return code 320.
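For instance, a CP directory entry for a server virtual machine that should accept IUCV connections from any user, as suggested for message 319, might contain an IUCV ALLOWANY statement. The user ID and placement shown are illustrative only:

USER SERVER1 ...
  IUCV ALLOWANY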
¡ 324E CMSIUCV application not active in server
¡ Explanation: The connection to the server was severed with user data all binary ones. For CMSIUCV this means that there is no active application by the name specified.
¡ System Action: The stage terminates with return code 324.

333E System service name is in use
Explanation: The requested system service is already being used by a stage; it cannot be used by more than one stage at a time.
System Action: The stage terminates with return code 333.
User Response: Use fanout to create multiple copies of the output stream from starmsg.

337E Binary data missing after prefix
Explanation: A prefix indicating a binary constant is found, but there are no more characters in the argument string or the next character is blank.
System Action: The stage terminates with return code 337.

338E Not binary data: string
Explanation: A prefix indicating a binary constant is found, but the remainder of the word contains a character that is neither 0 nor 1.
System Action: The stage terminates with return code 338.

339E PIPSDEL return code number
Explanation: A return code is received on a conversion operation.
User Response: Contact your systems support staff.
System Programmer Response: This is an error in CMS/TSO Pipelines. Recreate the message with SET EMSG ON to display the module that issues the message. Contact IBM for service.

340I IPARML: message (R0=number)
Explanation: The bits for 128 or 64 are on in the message level. The number is decoded when it represents a valid IUCV code.
User Response: Refer to “IUCV SEND” and “Message Complete External Interrupt” in CP Programming Services, SC24-5520.

344I IUCV External Interrupt type
Explanation: The bit for 32 is on in the message level. An external interrupt is being processed. The contents of the interrupt parameters are dumped from storage.

345E Originator name severed path number
Explanation: A connection pending interrupt was received from the virtual machine shown on the path shown. When accepting the connection, the CMSIUCV ACCEPT macro returns code 1020, indicating that the originator has severed the path in the meantime.
System Action: The stage terminates with return code 345.
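For instance, one starmsg stage can feed several consumers through fanout, as suggested for message 333. This is only a sketch; the log file identifier is illustrative:

'pipe (end ?) starmsg | f: fanout | console',
'? f: | > message log a'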
346E No message found (id number)

Explanation: Condition code 2 is received on an IUCV instruction. This means that the message specified does not exist.

System Action: The stage terminates with return code 346.

347E Condition code 3 on IUCV instruction

Explanation: Condition code 3 is received on an IUCV instruction.

System Action: The stage terminates with return code 347.

348I UserData data

Explanation: An IUCV service severed the path unexpectedly. TCP/IP will indicate the reason in the user data field. The user data field is neither all zeros nor all ones. If it is printable the contents are shown as sixteen characters, otherwise the contents are shown as thirty-two hexadecimal characters.

350E Primary key longer than secondary

Explanation: The primary key is longer than the secondary key.

System Action: The stage terminates with return code 350.

352E Input record is number bytes; it should be number

Explanation: The input record does not have the length required for the function.

System Action: The stage terminates with return code 352.

354E Return code number from SQL, detected in module module

Explanation: A negative return code is received from SQL.

System Action: The stage terminates with the return code shown. Messages 355, 356, and 369 are issued to describe the error further.

User Response: Try the command “pipe help sqlcode” to see if it is possible to obtain the information about the return code from SQL. Refer to the section on SQLCODEs in SQL/Data System Messages and Codes for IBM VM Systems, SH09-8079 if online help fails.

355I ... RDS: number DBSS: number; number rows done; string

Explanation: This message is issued after message 354 to display additional information from the SQL communications area.

System Action: None.

System Programmer Response: The numbers are obtained from the SQL communications area. string shows the flags; blanks have been changed to minus to maintain alignment.

356I ... Message parameter string

Explanation: The SQL communication area has a parameter string with one or more items in it; each is listed in a separate message.

System Action: None.

357E SQL RC -934: Unable to find module module; run SQLINIT

Explanation: SQL is unable to initialise.

System Action: The stage terminates with return code -934.

User Response: The most likely cause is that the SQL interface modules are not generated on your A disk. Issue “Filedef * clear” followed by “sqlinit db(sqldba)” to create the modules SQL uses to find the resource manager. Specify the name of the database in the SQLINIT command. Be sure to access the minidisk that contains the SQL parameters (normally SQLDBA 195). Contact your systems support staff if there is no SQLINIT EXEC available to you or if there is no ARIRVSTC TEXT available to you.

Explanation: The access module is not generated.

System Action: The stage terminates with return code -805.

User Response: Contact your systems support staff.

System Programmer Response: An access module must be generated before CMS/TSO Pipelines can access SQL. The recommended approach (which is the way CMS/TSO Pipelines is shipped) is to generate the access module as 5785RAC.PIPSQI and then grant the use of that to everyone (“grant run on 5785rac.pipsqi to public”). Use PGMOWNER to specify a program owner for a particular invocation of sql. SQL/Data System Application Programming for IBM VM Systems, SH09-8086 describes how to use SQLPREP to generate an access module.
Explanation: SQL indicates that a column is not present in a table.

System Action: The stage terminates with return code -205.

364E Unable to obtain help from SQL (return code number)

Explanation: A nonzero return code is obtained when reading the index to the SQL return code information in SQLDBA.SYSTEXT1.

System Action: The stage terminates with return code 364.

User Response: The error reported is likely to be -934 or -806, which indicate that you have not identified the SQL virtual machine or that the access module for CMS/TSO Pipelines has not been generated. Additional messages are likely to be issued; refer to help for the message issued.

365E SQL has no information about topic

Explanation: help is processing a help request for the SQL topic shown. The tables are successfully selected, but the query result is null. This means that there is no information available about the topic.

System Action: The stage terminates with return code 365.

User Response: Ensure that the correct return code is put

Explanation: There are already 10 sql stages active.

System Action: The stage terminates with return code 368.

User Response: Try to change the pipeline topology to make some sql stages complete before starting others.

369I ... SQL statement prepared: string

Explanation: An error is reported by SQL while it is processing a dynamically prepared statement. The statement is substituted.

370E Cursor has been closed

Explanation: SQL code -504 is received while a cursor is used to read a line of a query or insert a line. The most likely cause is that another sql stage has committed the unit of work or rolled it back.

System Action: The stage terminates with return code -504.

User Response: Ensure that all concurrent sql stages specify NOCOMMIT. Use a buffer stage to separate a query from the stage processing the result. Use a subroutine pipeline to ensure that a query is processed correctly before the result is processed further; direct the result to a stemmed array where it can be referenced by a second pipeline after the return code for the first one is tested and found OK.
371E ARIRVSTC TEXT is not available; run SQLINIT

Explanation: The object module that contains the SQL bootstrap code is not linked into the pipeline module, nor is it accessible as a file.

System Action: The stage terminates with return code 371.

User Response: Issue “Filedef * clear” followed by “sqlinit db(sqldba)” to create the modules SQL uses to find the resource manager. Specify the name of the database in the SQLINIT command. Be sure to access the minidisk that contains the SQL parameters (normally SQLDBA 195). Contact your systems support staff if there is no SQLINIT EXEC available to you or if there is no ARIRVSTC TEXT available to you.

377E Subsystem word is not defined

Explanation: DSNALI returns reason code X'00F30006', which means that the subsystem identification is not valid (or more likely not defined).

User Response: Contact your database administrator to determine the subsystem id to specify or contact your systems support staff to generate the correct default in TSO Pipelines.

System Programmer Response: The system keyword QZ defines the default subsystem identifier. This is DSN by default.

System Action: The stage terminates with return code 377.
374E DB2 connection using plan word already active

Explanation: The option PLAN is specified, but a different plan is already in use.

System Action: The stage terminates with return code 374.

375E DB2 already connected to subsystem word

Explanation: The option SSID is specified, but a different subsystem is already in use.

System Action: The stage terminates with return code 375.

Explanation: The return code (register 15) and reason code (register 0) substituted are received in response to a call to CAF OPEN.

System Action: The stage terminates with return code 376.

380E Left parenthesis missing

Explanation: A left parenthesis is expected for a list of items, but one is not found.

System Action: The stage terminates with return code 380.

381E Right parenthesis missing

Explanation: A left parenthesis for a list of items has been met, but no right parenthesis is found.

System Action: The PIPE command or stage terminates with return code 381.

Explanation: An opening parenthesis is found with only blank characters before the closing parenthesis.

System Action: The stage terminates with return code 382.

391E Unsupported conversion type

Explanation: The type shown is syntactically correct to request a conversion of a field, but the conversion is not available. An example of such conversion is B2F.

System Action: The stage terminates with return code 391.

User Response: Use two spec stages to perform the conversion via an intermediary format; for instance, character.
401E Input record too short (number bytes)

Explanation: For asmcont, an input record after a statement indicating continuation is shorter than 16 bytes, which means that there is no continuation text. For join KEYLENGTH, the input record was shorter than the specified key length.

For lookup SETCOUNT, the input master record is shorter than 10 bytes, which means that it does not contain a full count field. Likewise, for lookup INCREMENT, the input detail record is shorter than 10 bytes and thus cannot contain the increment field. For fbawrite, the record is shorter than 24 bytes.

System Action: The stage terminates with return code 401.

User Response: Check the input file. If you are using asmcont, ensure that the input file is indeed an Assembler file.

402I Calling Syntax Exit

Explanation: Pipeline dispatcher trace is active. The stage is defined with a syntax exit which is called.

System Action: None.

405E Minimal C program tries to extend DSA

Explanation: A program using a minimal C runtime has run out of stack space.

System Action: The program terminates. The stage returns with code 405.

User Response: Use the C systems programmer environment for the program. This makes it look like any other Assembler program.

409E Assert failure code at address

Explanation: CMS/TSO Pipelines has discovered an internal error. A program check operation exception is forced to indicate a condition which should not occur.

System Action: Message 411 is issued if the information is available. CMS ABEND processing continues.

User Response: Ensure the message level is odd (it is unless you have changed it) and that message 1 is issued to show the stage in error. Make a note of the code and the following message. Contact your systems support staff.

System Programmer Response: Investigate whether corrective service is available.

410E ABEND code at address; PSW hex

Explanation: A CMS ABEND has occurred in the main pipeline module. The ABEND code indicates the type of failure. The immediate CMS command HX causes ABEND 222.

The PSW at time of ABEND (ABNPSW) is substituted. The contents of storage locations 140-143 (Program Interruption Identification) are displayed after the PSW when bit 12 of the PSW is one. This field is meaningful only if a program check caused the ABEND.

System Action: Message 411 is issued if the information is available. CMS ABEND processing continues.

User Response: Ensure the message level is odd (it is unless you have changed it) and that message 1 is issued to show the stage in error. Make a note of the information for your systems support staff. Contact your systems support staff.

System Programmer Response: Investigate whether corrective service is available.
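The bit-12 test that message 410 describes can be sketched in Python (an illustrative helper, not CMS/TSO Pipelines code; the sample PSW values below are hypothetical, and the EC-mode interpretation in the comment is an assumption — the manual only says the bit is tested):

```python
def psw_bit_12_is_one(psw: int) -> bool:
    """Return True when bit 12 of a 64-bit PSW is one.

    PSW bits are numbered 0-63 from the leftmost bit, so bit 12
    corresponds to bit position 63 - 12 = 51 counted from the right.
    Historically this is the EC-mode bit; message 410 appends storage
    locations 140-143 (Program Interruption Identification) only when
    it is one.
    """
    return (psw >> (63 - 12)) & 1 == 1
```

For example, a PSW of X'0008000000000000' has only bit 12 set, so the program interruption identification would be displayed for it.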
420E Return code number reading or writing block number on disk mode

Explanation: The return code shown is received when reading a block from the disk.

System Action: The stage terminates with return code 420.

User Response: Ensure that the block is within the disk extents. Ensure that the disk is formatted correctly. Ensure that the block number(s) are in decimal.

421E File mode string more than one character

Explanation: A word in the argument string to adtfst is longer than one character.

System Action: The stage terminates with return code 421.

User Response: Write each mode letter as a blank-delimited word when more than one mode letter is processed.

498E Output descriptor name is not valid

Explanation: Reason code X'035C8002' was received when dynamically allocating a SYSOUT data set. The output descriptor contains a character that is not valid.

System Action: The stage terminates with return code 498.

499E Output descriptor name is not defined

Explanation: Reason code X'04CC8002' was received when dynamically allocating a SYSOUT data set.

System Action: The stage terminates with return code 499.

500E Data set DSNAME is partitioned

Explanation: The requested data set is partitioned but no second operand is provided to indicate a specific member.

System Action: The stage terminates with return code 500.

User Response: Select a specific member when allocating the data set.

501E No data set is allocated for DDNAME

Explanation: There is no data set allocated for the data definition name shown. The return code 4 is received on the RDJFCB macro.

System Action: The stage terminates with return code 501.

503E Return code number obtaining data set control block

Explanation: The return code from OBTAIN is greater than 8. Return code 12 indicates an error reading the volume table of contents. Return code 16 indicates a programming error in CMS/TSO Pipelines.

System Action: The stage terminates with return code 503.

504E Data set DSNAME does not exist

Explanation: Return code 4 or 8 is received when trying to locate the data set with OBTAIN, which indicates that the volume is not mounted or that the data set does not exist.

System Action: The stage terminates with return code 504.

505E Data set DSNAME is not partitioned

Explanation: A member is requested and the data set control block does not indicate partitioned organisation.

System Action: The stage terminates with return code 505.

506E DDNAME name is permanently concatenated

Explanation: qsam does not support permanent concatenations.

System Action: The stage terminates with return code 506.

User Response: Use > to specify the particular data set into which the member should be stored.

507E Member name not found

Explanation: FIND or BLDL gives return code 4, indicating that the requested member is not in the data set.

System Action: The stage terminates with return code 507.

508E Output descriptor too long: word

Explanation: The word is longer than 26 characters. This is the limit for an output descriptor.

System Action: The stage terminates with return code 508.

509E Unacceptable spool file identifier SFID

Explanation: For reader, the number shown is negative; for xab, the number is negative or larger than 64K.

System Action: The stage terminates with return code 509.

For xab, CP gives return code 44, indicating that the SPOOL file does not exist.

System Action: The stage terminates with return code 510.

511E Spool file identifier SFID rejected by CP

Explanation: CP gives return code 44 on the diagnose instruction to manipulate the external attribute buffer.

System Action: The stage terminates with return code 511.

512E Virtual device device not a spooled printer

System Action: The stage terminates with return code 512.

513E Return code number reading or writing XAB (parameters hex)

Explanation: A decimal number or range is expected but the word shown is found.

System Action: The stage terminates with return code 515.

516E Not a record number or a range of record numbers: word

Explanation: Though an acceptable range of decimal numbers, the word shown cannot represent a range of records. The beginning of the range is zero or less, or the end of the range is less than the beginning.

System Action: The stage terminates with return code 516.

517E Record number not present in file

Explanation: The record requested is not in the file.

System Action: The stage terminates with return code 517.

518E Record number truncated

Explanation: The record requested has been replaced or added since diskrandom obtained information about the file from CMS.

System Action: The stage terminates with return code 518.

Explanation: A record is moved or appended to a buffer. Condition code 3 is set, indicating destructive overlap. This can be caused by indiscriminate use of storage.

System Action: The stage terminates with return code 530.

Explanation: storage gets a program check code 4 (protection) while loading data into the storage area.

System Action: The stage terminates with return code 533.

534E Storage at address is not addressable

Explanation: storage gets program check code 5 (addressing).

System Action: The stage terminates with return code 534.

535E Program check code

Explanation: storage gets a program check that is neither protection nor addressing.

System Action: The stage terminates with return code 535.
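The record-range checks described for messages 515 and 516 — the word must be a decimal number or an n-m range, the beginning must be positive, and the end must not be less than the beginning — can be sketched as follows (a hypothetical helper for illustration, not actual CMS/TSO Pipelines code):

```python
def parse_record_range(word):
    """Parse 'n' or 'n-m' as a record number range.

    Mirrors the documented checks: a non-decimal word fails as in
    message 515; a range whose beginning is zero or less, or whose
    end is less than the beginning, fails as in message 516.
    """
    start_s, sep, end_s = word.partition("-")
    if not start_s.isdigit() or (sep and not end_s.isdigit()):
        raise ValueError(f"515: not a decimal number or range: {word}")
    start = int(start_s)
    end = int(end_s) if sep else start
    if start < 1 or end < start:
        raise ValueError(f"516: not a record number or range: {word}")
    return start, end
```

For example, `parse_record_range("3-7")` yields the pair (3, 7), while "0-4" and "7-3" are rejected with the message-516 condition.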
536E Buffer header destroyed: hex

Explanation: CMS/TSO Pipelines has discovered an internal error. The pointer to the next available byte is below the base address of the buffer.

System Action: The stage terminates with return code 536.

User Response: Ensure the message level is odd (it is unless you have changed it) and that message 1 is issued to show the stage in error. Contact your systems support staff.

System Programmer Response: This is likely to be an error in CMS/TSO Pipelines.

540E Command is longer than 256 (number characters)

Explanation: A command to vmc is longer than the maximum of 256 characters.

System Action: The stage terminates with return code 540.

541E VMCF is in use by another stage

Explanation: vmc cannot use VMCF because another stage is using it.

System Action: The stage terminates with return code 541.

542E Unable to communicate with user ID

Explanation: CP sets return code 5 when vmc tries to communicate with the server.

System Action: The stage terminates with return code 542.

543E Return code number from VMCF: string

Explanation: CP sets the return code shown on a VMCF request.

System Action: Message 544 is issued to display the parameter list. The stage terminates with return code 543.

548I SEVER function requested for side

Explanation: Pipeline dispatcher trace is active. The stage severs the connection on the side shown.

System Action: None.

549E Return code number, reason code number, R0 hex from IRXINIT

Explanation: The return code and reason code shown are received when trying to find the environment for the REXX program that issued a pipeline specification with Address link or Address attach. The reason code is valid only when the return code is 20.

User Response: Refer to the IRXINIT return and reason codes in TSO Extensions Version 2 REXX Reference, SC28-1883.

System Action: The stage terminates with return code 549.

System Programmer Response: Reason code 24 means that the environment table has too few entries for the number of concurrent REXX programs that the user wishes to run. Refer to TSO Extensions Version 2 REXX Reference, SC28-1883.

550E Unable to access variables

Explanation: The TSO service routine gives return code 40, indicating that there is no active CLIST environment.

System Action: The stage terminates with return code 550.

range or column number to abut the field to the contents of the output buffer; this is equivalent to the concatenate operator (||) in REXX. Use the operand NEXTWORD instead of a range or column number to append a blank and the field to the output record built so far. (The blank is suppressed if the record is empty.)

557E Not authorised to obtain CP load map

Explanation: A program check is reflected on the diagnose 38 that is issued to read the CP symbol table.

System Action: The stage terminates with return code 557.

User Response: Ensure that the virtual machine has command privileges to issue diagnose 38. By default, privilege class C or E is required; your installation may have changed the privilege classes in an override file.
563W ANYOF assumed in front of string

Explanation: A delimited string that contains more than one character is specified without a keyword to specify how to interpret it. It is most likely that you wish this interpreted as a string rather than as an enumerated list of characters. This message is suppressed if the delimited string contains one character; the question is clearly moot.

User Response: Use the keyword ANYOF to specify a delimited string of characters enumerating characters that match a single character position in the input record. Use STRING to specify that the target is a string of characters that must occur in the sequence shown to match.

564W Range(s) should be before keyword; put more than one in parentheses

Explanation: A range is specified after the keyword. The order should be reversed.

565W Stage is obsolete; use name instead

Explanation: A stage is used that will be retracted.

User Response: Use the name for the stage that will continue to be available.

570E Unexpected IUCV interrupt with IPTYPE type on path number

Explanation: An IUCV interrupt is fielded while the stage is waiting for a connection complete or path severed interrupt. This represents a CP/CMS IUCV protocol error.

System Action: The path is severed. The stage terminates with return code 570.

571E Virtual device device number is in use by another stage

Explanation: Two stages try to operate the same device concurrently.

System Action: The stage terminates with return code 571.

572E Unable to load file (EXECLOAD return code number)

Explanation: In the syntax exit for a REXX filter, CMS indicates that the program is disk resident. When the time comes to run the program, EXECLOAD fails with the return code shown.

System Action: The stage terminates with return code 572.
593E Shared data set DSNAME cannot be allocated exclusive

Explanation: Dynamic allocation sets return code 020C0000, which indicates that a request for exclusive allocation of a shared data set was rejected.

User Response: Ensure you wish to modify the data set. Use the SHR operand to indicate that a shared allocation should be used.

System Action: The stage terminates with return code 593.

594E Return code number reason code hex from STOW

Explanation: The return code shown was received when adding a member to the partitioned data set. The contents of register 0 (the reason code) are substituted in hexadecimal.

System Action: The stage terminates with return code 594.

595E Member name is not allowed for this function

Explanation: The program does not support a member name.

System Action: The stage terminates with return code 595.

596E Data set name too long: name

Explanation: The data set name plus the prefix (if active) is longer than forty-four characters.

System Action: The stage terminates with return code 596.

597E Member name or generation too long in DSNAME name

Explanation: The argument contains a left parenthesis, indicating that a generation number or a member is present. There are more than eight characters to the end of the argument.

System Action: The stage terminates with return code 597.

598E Null member name or generation in DSNAME name

Explanation: The argument contains a left parenthesis, indicating that a generation number or a member is present, but no further characters are present.

System Action: The stage terminates with return code 598.

599E Null DSNAME name

Explanation: The argument consists of a single quote or two quotes, or the first character is a right parenthesis for the beginning of a member. This is not a valid DSNAME.

System Action: The stage terminates with return code 599.

600E Return code number from TGET

Explanation: The return code shown is received when reading the terminal.

System Action: The stage terminates with return code 600.

601E Return code number from STFSMODE

Explanation: Full screen mode is not set.

System Action: The stage terminates with return code 601.

602E Unsupported data set organisation hex

Explanation: The data set organisation is neither physical sequential nor partitioned. The DSORG field is substituted.

System Action: The stage terminates with return code 602.

603E Unable to read directory for member name

Explanation: FIND gives a return code that is neither zero nor four.

System Action: The stage terminates with return code 603.

604E Null DDNAME

Explanation: The argument begins with the keyword DDNAME=, but there are no further characters or the next character is a left parenthesis to indicate a member.

System Action: The stage terminates with return code 604.

605E DDNAME longer than 8 characters: word

Explanation: The argument begins with the keyword DDNAME=; it is followed by a word that is more than eight characters.

System Action: The stage terminates with return code 605.

606E Null member name in DDNAME name

Explanation: The argument contains a left parenthesis, indicating that a member is present, but no further characters are present.

System Action: The stage terminates with return code 606.

607E Member name too long in DDNAME name

Explanation: The argument contains a left parenthesis, indicating that a member is present. There are more than eight characters to the end of the argument.

System Action: The stage terminates with return code 607.
608E Incorrectly specified DSNAME word

Explanation: A generation data group number in parentheses is followed by a character that is not a left parenthesis.

System Action: The stage terminates with return code 608.

609E ABEND code reason code number

Explanation: The DCB ABEND exit is driven for the abnormal termination condition substituted.

System Action: The ABEND condition is reset. The stage terminates with return code 609.

611E Cannot set CONSOLE exit

Explanation: fullscr ASYNCHRONOUS cannot set the console exit routine because the path turned out to be opened already.

User Response: Do not specify a path; let fullscr assign one.

System Action: The stage terminates with return code 611.

616E Caller’s producer is not blocked waiting for output

Explanation: The PRODUCER is requested, the stage is in a pipeline specification that has been issued with CALLPIPE, the caller’s currently selected input stream is connected, and the output stream from the stage is connected to the caller, but the stage is not waiting for an output operation to complete. Thus, the integrity of the variable pool cannot be ensured.

System Action: The stage terminates with return code 616.

617E File does not have fixed format records; do not specify keyword

Explanation: The keyword BLOCKED is specified for a file that has variable length records. CMS does not support blocked read of such a file.

System Action: The stage terminates with return code 617.

620W Unsupported code page number

Explanation: A code page number is requested that xlate does not support.

System Action: The code page number is ignored.
644E Timestamp word not valid; reason code number

Explanation: An ISO timestamp is not valid. The input record must contain the year (four digits) followed by five fields containing month, day, hour (24 hour clock), minute, and second (two digits each). It may be followed by one to six decimal digits representing a fraction of a second.

The reason code shows which test has failed:

4  The input record is shorter than 14 characters or longer than 20 characters after stripping leading and trailing blanks.
8  Year is not a number or the number is less than 1900.
12 Month is not a number, it is not positive, or it is greater than 12.
16 Day is not a number, it is not positive, or it is greater than 31.
20 Hour is not a number, it is negative, or it is greater than 23.
24 Minute is not a number, it is negative, or it is greater than 59.
28 Second is not a number, it is negative, or it is greater than 59.
32 Fraction is not a number, it is negative, or it is greater than 999999.

System Action: The stage terminates with return code 644.

650E CP system service word is in use by another program

Explanation: CP severs a connection request to the system service substituted. This indicates that the service is already connected by some program that does not run under control of CMS Pipelines.

System Action: The stage terminates with return code 650.

651E DCSS word is not loaded

Explanation: An attempt was made to connect to *MONITOR using the segment name substituted. The monitor severed the connection with error code X'18', indicating that the segment was not available.

System Action: The stage terminates with return code 651.

652E DCSS name word does not match the DCSS name already established

System Action: The stage terminates with return code 652.

653E Monitor is currently running in shared mode; exclusive request rejected

Explanation: An attempt was made to connect to *MONITOR for exclusive use of the monitor segment. The monitor severed the connection with error code X'34', indicating that some other virtual machine is already connected to the monitor in shared mode.

System Action: The stage terminates with return code 653.

654E Monitor is currently running in exclusive mode; shared request rejected

Explanation: An attempt was made to connect to *MONITOR for shared use of the monitor segment. The monitor severed the connection with error code X'38', indicating that some other virtual machine is already connected to the monitor in exclusive mode.

System Action: The stage terminates with return code 654.

655E Not a named saved segment: word

Explanation: An attempt was made to connect to *MONITOR using the segment name substituted. The monitor severed the connection with error code X'3C', indicating that the substituted word is not the name of a discontiguous shared segment; it could, for instance, be a named saved system.

System Action: The stage terminates with return code 655.

656E Connection to word severed with code word

Explanation: A connection request to a system service was rejected.

User Response: Refer to the documentation for the system service shown.

System Action: The stage terminates with return code 656.

657E Limit of connections to word is reached

Explanation: A connection request to a system service was rejected with return code X'0C', indicating that the maximum number of connections supported for this service has already been reached.

System Action: The stage terminates with return code 657.

System Action: The stage terminates with return code 658.
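The reason-code tests listed for message 644 can be sketched in Python (an illustrative function, not CMS/TSO Pipelines code; the function name is hypothetical):

```python
def iso_timestamp_reason(record):
    """Return the message 644 reason code for a bad ISO timestamp,
    or 0 when the record passes the documented tests."""
    s = record.strip()  # test 4: length after stripping blanks
    if not (14 <= len(s) <= 20):
        return 4
    # Fixed-width fields: yyyy mm dd hh mm ss, optional fraction.
    fields = [s[0:4], s[4:6], s[6:8], s[8:10], s[10:12], s[12:14]]
    frac = s[14:]
    checks = [
        (8, lambda v: v >= 1900),      # year
        (12, lambda v: 1 <= v <= 12),  # month
        (16, lambda v: 1 <= v <= 31),  # day
        (20, lambda v: 0 <= v <= 23),  # hour
        (24, lambda v: 0 <= v <= 59),  # minute
        (28, lambda v: 0 <= v <= 59),  # second
    ]
    for field, (code, ok) in zip(fields, checks):
        if not field.isdigit() or not ok(int(field)):
            return code
    if frac and (not frac.isdigit() or int(frac) > 999999):
        return 32  # fraction of a second
    return 0
```

For example, "20100630123456" passes, while a year field of "1899" fails with reason code 8 and a month field of "13" fails with reason code 12.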
659E Return code number from LINEWRT macro

Explanation: The return code shown was received on a LINEWRT macro. Return code 104 means that there was insufficient storage to complete the request. Return code 24 means that the parameter list built by CMS Pipelines is rejected by CMS.

User Response: For return code 24, contact your systems support staff to report the problem.

System Action: The stage terminates with return code 659.

660E Unsupported code page number

Explanation: A FROM or TO was met, but the following word does not represent a supported code page number.

System Action: The stage terminates with return code 660.

661E Please ask nicely

Explanation: The dmsabend built-in program did not find the appropriate argument string for it to cause an ABEND.

User Response: Do not try to force ABENDs in CMS/TSO Pipelines unless you have been instructed to do so by IBM.

System Action: The stage terminates with return code 661.

662E Environment already specified (keyword is met)

Explanation: A number or one of the keywords MAIN or PRODUCER has already been specified to designate the environment to use. The keyword that is substituted is met later in the operand list.

System Action: The stage terminates with return code 662.

663E Unable to generate delimiter for variable name

Explanation: The name of the variable and the characters declared as beginning a comment (these characters are not eligible to be delimiter characters) contain between them all 256 possible values for a byte. Thus it is impossible to generate a delimiter character to be used to delimit the name of the variable.

User Response: Specify a shorter comment string.

System Action: The stage terminates with return code 663.

664E Keyword is not supported when stage is first: word

Explanation: The program is used as a first stage of a pipeline. The operand is valid only in a stage that is not first in a pipeline.

System Action: The stage terminates with return code 664.

665E Exponent is not valid: word

Explanation: A numeric constant is being scanned. The letter “E” is met. Either there is no number after the letter or the value of the exponent overflows a 32-bit integer.

System Action: The stage terminates with return code 665.

666E Syntax error in expression; reason code number

Explanation: The expression is not syntactically correct. The number describes the error:

-   Internal error (negative length remains to be scanned).
0   Unexpected character at the beginning of an expression or after (.
1   A digit is expected for the number of a counter, but something else was found.
2   A counter was scanned; it was not followed by an operator or a ).
3   An identifier or an expression has been scanned; it was not followed by an operator or a ). Note that assignment operators cannot be immediately to the right of identifiers or expressions.
4   ! not followed by =.
5   Assignment attempted to something that is not a counter.
6   A vertical bar is not followed by another one to make up the logical OR operator. Be sure to use four vertical bars if they are also stage separators. This self-escapes them down to two bars that are seen by specs.
7   An ampersand is not followed by another one to make up the logical AND operator.
100 Unpaired colon (:).
101 Two consecutive question marks (?). Use parentheses to group a conditional expression between the ? and the : of a containing one.

System Action: The stage terminates with return code 666.

667E Arithmetic overflow

Explanation: The result of evaluating an expression or an intermediary result is beyond the range that can be represented.

System Action: The stage terminates with return code 667.

668E Divisor is zero

Explanation: Division by zero is attempted.

System Action: The stage terminates with return code 668.

670E Picture longer than 255 characters: picture

Explanation: The word following PICTURE contains more than 255 characters.

System Action: The stage terminates with return code 670.
671E Unacceptable character character in picture picture

Explanation: The character is not one of the valid characters.

System Action: The stage terminates with return code 671.

672E Unacceptable picture picture; word is incorrect (reason code number)

Explanation: An incorrect sequence of picture characters is found.

User Response: Compare the contents of the word substituted with the contents of the picture string to see where in the string the error was detected.

System Programmer Response: The reason code should be reported when calling IBM for service. It reflects the internal state of the finite state machine that is used to decode the picture; the encoding is unspecified; it might change as a result of corrective service or new function being added.

678E More than fifteen exponent digits in picture picture

Explanation: The letter E is met followed by more than fifteen digit selectors. The exponent can contain at most ten digits.

System Action: The stage terminates with return code 678.

679E Exponent too large: number

Explanation: The exponent has more significant digits than the picture allows. The exponent is substituted.

System Action: The stage terminates with return code 679.

680E Record length is zero

Explanation: The first byte of a logical record contains binary zeros. This is not valid, because the minimum record length is one (a record that contains a byte count of one and no data).

System Action: The stage terminates with return code 680.
:  2  The resource slot in the OpenExtensions callable services function table points to a dummy entry.
:  3  The OpenMVS slot in the CSR table does not contain a pointer. It contains zero. (Offset 24, decimal).
:  4  The table of entry points to OpenExtensions callable services is too short to contain the function requested. That is, the operating system does not support the function requested.
:  5  The table of entry points to OpenExtensions callable services contains a dummy entry for the function requested. That is, the operating system does not support the function requested or the function is not available in this particular configuration.
: 10  The CMS is a release where the simulated CVT is too short to contain the pointer to the CSRTABLE.
:  4  The pointer to the Contents Vector Table (CVT) in location 16 (decimal) is destroyed. It is either zero or negative.
:  5  The pointer to the Contents Vector Table (CVT) in location 16 (decimal) is destroyed. It does not point within the virtual machine.
:  6  The pointer to the Contents Vector Table (CVT) in location 16 (decimal) is destroyed, or the CVT has been corrupted. The byte at offset X'74' has the bit for X'40' zero. This bit indicates that the system is CMS.
:  7  The pointer to the Contents Vector Table (CVT) in location 16 (decimal) is destroyed, or the CVT has been corrupted. The byte at offset X'74' has one or more of the bits for X'3F' nonzero. These bits must be zero on CMS.

System Action: The stage terminates with return code 685.

686E OpenExtensions return code number reason code hex function: word

Explanation: A call to the OpenExtensions failed.

The return code is shown as a decoded errno if the value is one of those recognised by CMS/TSO Pipelines; otherwise the return code is shown as a decimal number.

The reason code is shown in hexadecimal. Only the last four digits are significant. Appendix A describes the return codes; Appendix B describes the reason codes; and Appendix C describes the offsets.

System Action: The stage terminates with return code 686.

687E Relational operator expected; found word

Explanation: A relational operator is expected, but the word is not a supported one.

User Response: Note that the operators are the “strict” operators:

==   Equal.
¬==  Not equal.
<<   Less than.
<<=  Less than or equal.
>>   Greater than.
>>=  Greater than or equal.

For example, a single equal sign is not a supported relational operator.

688I CSW hex; last CCW hex; some data hex

Explanation: The channel status word is displayed.

689E Workstation file is missing: word

Explanation: Error code 110 is received when the file is opened by the server program.

System Action: The stage terminates with return code 689.

691E Directory is missing: word

Explanation: Error code 3 is received when the file is opened by the server program.

System Action: The stage terminates with return code 691.

692E No diskette in drive: word

Explanation: Error code 21 is received when the file is opened by the server program.

System Action: The stage terminates with return code 692.
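As a sketch of the difference the scanner behind message 687E enforces (fragments only, modelled on the examples shown under message 1122E; the counter and comparand are hypothetical), the first fragment is rejected because a single equal sign is not a relational operator, while the second uses the strict operator and is accepted:

   ... | specs ... if #0 = 1 then ... | ...
   ... | specs ... if #0 == 1 then ... | ...

See also message 756W, which diagnoses the single equal sign.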
694E Pipeline is not called from a driving program

Explanation: fitting is invoked in a pipeline set that has not been initialised for fittings. Thus, the stages have nothing with which to interface.

System Action: The stage terminates with return code 694.

695E Fitting already defined: "name"

Explanation: fitting is issued in a pipeline set that already has a fitting of that name defined.

System Action: The stage terminates with return code 695.

699E Return code number from function (file: word)

Explanation: An error was returned from the communication device. This could be a result of a programming error in CMS/TSO Pipelines or in the device driver that processes the request on the work station.

System Action: The stage terminates with return code 699.

700E File descriptor number is not open (reason code hex)

Explanation: Return code 113 (EBADF) is received on a request to read or write a file. The reason code further describes the error condition.

System Action: The stage terminates with return code 700.

701E File or directory does not exist (path "string", reason code hex)

Explanation: Return code 129 (ENOENT) is received on a request to open a file. The reason code further describes the error condition.

System Action: The stage terminates with return code 701.

702I ... Parameter: hex

Explanation: The bit for message level 1024 is on and an error was reported for a call to OpenExtensions. The first eight bytes of each parameter are shown.

703I Opening "hex"

Explanation: The bit for message level 1024 is on. A file in the hierarchical file system is being opened.

704E A component of path is not a directory (path "string", reason code hex)

Explanation: Return code 135 (ENOTDIR) is received on a request to open a file. The reason code further describes the error condition.

System Action: The stage terminates with return code 704.

705E Last character is a slash (path "string")

Explanation: Return code 129 (ENOENT) and reason code X'0109' (JREndingSlashOCreat) is received on a request to open a file.

System Action: The stage terminates with return code 705.

706E File system is quiescing (path "string")

Explanation: Return code 129 (ENOENT) and reason code X'018F' (JRQuiescing) is received on a request to open a file.

System Action: The stage terminates with return code 706.

707E Component in path name is too long: path "string"

Explanation: Return code 126 (ENAMETOOLONG) with reason code X'003E' (JRCompNameTooLong) is received on a request to open a file. A component (file name) is longer than NAME_MAX (255).

System Action: The stage terminates with return code 707.

708E Path name is too long: path "string"

Explanation: Return code 126 (ENAMETOOLONG) with reason code X'0039' (JRPathTooLong) is received on a request to open a file. A path name is longer than PATH_MAX (1023). This can be a result of the substitution of symbolic links.

System Action: The stage terminates with return code 708.

709E Unsupported file type number (path "string")

Explanation: The file is neither a regular file nor a FIFO. The actual file type is substituted:

1  Directory.
5  Symbolic link.
6  Block special file.

System Action: The stage terminates with return code 709.

710E Unsupported file type number (file descriptor "number")

Explanation: The file is neither a regular file nor a FIFO. The actual file type is substituted:

1  Directory.
5  Symbolic link.
6  Block special file.

System Action: The stage terminates with return code 710.

711E Function not supported: word

Explanation: The first word of an input record to hfsxecute or hfsquery is not a supported one.

System Action: The stage terminates with return code 711.
712E Path name is missing from the input record

Explanation: An input record to hfsxecute contains only one word.

System Action: The stage terminates with return code 712.

713E Mode is not valid: word

Explanation: The second word of an input record to hfsxecute does not contain a valid mode specification. The mode contains one to four octal digits.

System Action: The stage terminates with return code 713.

716E Not a dotted decimal network address: word

Explanation: A word that begins with a digit is scanned for a network address, but the word does not conform to the dotted decimal notation defined for inet_addr(). A component could be too large; or there could be more than three dots.

System Action: The stage terminates with return code 716.

¡ 717I Ignoring IUCV interrupt for message number; waiting for number (interrupt on path number; sent on number)

Explanation: One of the device drivers for TCP/IP received an IUCV interrupt for a message that it does not have outstanding with TCP/IP.

System Action: The interrupt is ignored.

¡ System Programmer Response: If the two path numbers are different, the interrupt is being processed by the wrong stage. Investigate whether corrective service is available.

718I Returning to application

Explanation: A copipe is returning to the application program.

719I Resuming pipeline

Explanation: The application program has resumed the copipe with a request.

720I Terminating pipeline

Explanation: The application program has resumed the copipe without a request parameter list.

721I RPL hex

Explanation: The Request Parameter List is displayed.

725I Returning to the pipeline dispatcher

Explanation: All Request Parameter Lists have been processed and the fitting stages posted to wake up.

726I No RPLs changed state

Explanation: All Request Parameter Lists have been processed, but none changed state. Thus, the application has cheated.

System Action: The status code is set accordingly.

727I string

Explanation: A tracing message.

728I number description

Explanation: Statistics are requested. The contents of a counter are displayed.

729I Letting dispatcher wait

Explanation: The application has indicated that it does not wish to regain control until a particular fitting stage has produced or consumed a record.
732E Return code number from DMSCSL

Explanation: The callable services interface returned a return code that was not expected.

System Action: The stage terminates with return code 732.

733E Return code number reason code number from routine

Explanation: The callable services interface returned a nonzero return code. The return code, reason code, and routine name are substituted.

System Action: The stage terminates with return code 733.

734E CSL Routine name is not loaded

Explanation: Return code -7 was received from Callable Services.

User Response: Contact your system support staff to investigate whether the callable services have been set up correctly for your virtual machine.

System Programmer Response: It is an error in CMS/TSO Pipelines if this message is issued on releases prior to Virtual Machine/Enterprise Systems Architecture Version 1 Release 2.0.

System Action: The stage terminates with return code 734.

735E Callable Services are not available

Explanation: Return code -12 was received from Callable Services.

User Response: Contact your system support staff to investigate whether the callable services have been set up correctly for your virtual machine.

System Programmer Response: It is an error in CMS/TSO Pipelines if this message is issued on releases prior to Virtual Machine/Enterprise Systems Architecture Version 1 Release 2.0.

737E Too many parameters in call to name (number found)

Explanation: Return code -10 was received from Callable Services.

User Response: Contact your system support staff.

System Programmer Response: This is a programming error in CMS/TSO Pipelines. Investigate whether corrective service is available.

System Action: The stage terminates with return code 737.

738E Router did not resolve entry point

Explanation: The routine to resolve the entry point returned a value of zero.

User Response: Contact your system support staff.

System Programmer Response: This is a programming error in CMS/TSO Pipelines or an error in generating PIPELINE MODULE (DMSPIPE MODULE on Virtual Machine/Enterprise Systems Architecture). Investigate whether corrective service is available.

System Action: The stage terminates with return code 738.

740E File "words" does not exist or you are not authorised for it

Explanation: Reason code 90220 was received from Callable Service DMSEXIST.

System Action: The stage terminates with return code 740.

741E Record format "character" is not supported

Explanation: The record format is neither F nor V. A blank indicates an OS-format file; a hyphen indicates that the file is migrated.

System Action: The stage terminates with return code 741.
742E Incorrect file "file" (reason code number)

Explanation: The file name and file type are both an asterisk; or reason codes 90420, 90430, 90445, 90450, or 90455 were returned by DMSEXIST. A component of the file identification is longer than eight characters, or contains an asterisk or a percent sign. Reason code zero is set when the file name is an asterisk and the file type is an asterisk. For information about nonzero reason codes, refer to DMSEXIST in CMS Application Development Reference, SC24-5451.

System Action: The stage terminates with return code 742.

745E Existing record length is not number

Explanation: Reason code 90121 is received when overwriting a record of a variable record format file.

System Action: The stage terminates with return code 745.

746E File file is open with incompatible intent

Explanation: Reason code 44200 is received when opening the file. If you are trying to read, the file is already open for write. If you are trying to write, the file is already open.

User Response: Use fanin or faninany, as appropriate to your application, to merge the two streams; then use one disk stage to write the file.

System Action: The stage terminates with return code 746.

747E Not authorised to read file

Explanation: Reason code 44000 is received when opening the file for read. It could also be that the file was removed after CMS Pipelines determined that it exists, but this is unlikely.

System Action: The stage terminates with return code 747.

748E Disk mode is full

Explanation: Reason code 90131 is received when writing to the file.

System Action: The stage terminates with return code 748.

749E File file is on OS or DOS minidisk

Explanation: The file status byte contains 8.

User Response: Use qsam to read the file.

System Action: The stage terminates with return code 749.

750E Incorrect input block format

Explanation: deblock MONITOR has read a block that contains a length field of binary zeros, but the remainder of the block does not consist entirely of binary zeros.

753E NAMEDEF too long in string

Explanation: Reason code 90510 was received from DMSVALDT. A temporary name for a file name and type is longer than 16 characters.

System Action: The stage terminates with return code 753.

754E Improper use of stage; reason code number

Explanation: A built-in program that is reserved for IBM use has been invoked in a way that is not correct. The reason codes are:

1  Argument string is not eight bytes long.
2  The input record is not the length in the argument string.

System Action: The stage terminates with return code 754.

755E Offset not shorter than width

Explanation: The length of the offset specified (either as a number or as the length of the delimited string) is equal to or greater than the width.

System Action: The stage terminates with return code 755.

756W Use the := assignment operator instead of =

Explanation: A single equal sign is scanned.

User Response: Change to use the colon equal operator.
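The merging suggested under message 746E can be sketched as a multistream pipeline; the file names here are hypothetical:

   pipe (end ?) < detail file a | o: fanin | > merged file a ? < master file a | o:

The label o: names the fanin stage; its second occurrence connects the second pipeline as a secondary input stream, so a single > stage writes the file.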
759E Incompatible types

Explanation: An operation is requested between a string and a counter. Relational operators must be between like types. Strings cannot be used with computational operators.

System Action: The stage terminates with return code 759.

760E No data will be available for input field

Explanation: An input range is specified after EOF without SELECT SECOND in effect. Thus, there are no data available to specs to supply.

User Response: Use SELECT SECOND to refer to the second reading station, where a copy of the last record is. However, if you were not using the second reading station and the field you require can be stored in a counter, it is more efficient to save the value in a counter while processing the detail record and then refer to the contents of this counter after the EOF item.

System Action: The stage terminates with return code 760.

761E Different key fields not allowed with AUTOADD

Explanation: AUTOADD was specified and the key field is defined in a different place in the detail and in the master records. This would make adding the record ambiguous.

User Response: Use specs to move the key field in the master or the detail records.

System Action: The stage terminates with return code 761.

762E Return code number reason code number from TSO

Explanation: The TSO command service routine gave the return code and reason code shown.

User Response: Refer to TSO Programming Services, SC28-1875.

System Action: The stage terminates with return code 762.

Explanation: The time stamp must contain at least eight digits.

System Action: The stage terminates with return code 764.

765E Timestamp too long: string

Explanation: The time stamp must contain at most fourteen digits.

System Action: The stage terminates with return code 765.

766E Century incorrect in timestamp: string

Explanation: The first two characters of the timestamp are less than 19.

System Action: The stage terminates with return code 766.

767E Not numeric character in timestamp: string

Explanation: A character of the time stamp is not numeric.

User Response: The timestamp is specified as a sequence of digits without the usual delimiter characters.

System Action: The stage terminates with return code 767.

768E Incorrect record in file; reading record number

Explanation: Return code 8 reason code 90117 is received on DMSREAD. This indicates that the file contains an impossible record length; it could be larger than the record length of the file or it could be zero.

User Response: Contact your systems support staff to have the problem diagnosed.

System Programmer Response: Open the file using DMSOPDBK and then read the blocks of the file with filetoken BLOCKED. Pass this file to deblock CMS to validate the file. If deblock does not issue a message, the record length is zero, indicating a premature end-of-file.

System Action: The stage terminates with return code 768.
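The validation described in the System Programmer Response for message 768E can be sketched from REXX; this assumes the file has already been opened with DMSOPDBK and that the REXX variable filetoken holds the returned token, converted to printable hexadecimal as shown under message 781E:

   'PIPE filetoken' c2x(filetoken) 'blocked | deblock cms | hole'

hole discards the deblocked records; the point of the pipeline is only whether deblock CMS diagnoses the block structure.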
3  Odd number of hex digits.
4  Digit not hex.

User Response: Use the keyword OUTDESC to specify a one-character output descriptor.

System Action: The stage terminates with return code 769.
770E Period missing in destination word

Explanation: A DESTINATION keyword is met, but the following word contains no period.

User Response: Make sure the destination contains both a system ID (also known as a node ID) and a user ID:

|    sysout dest dkibmvm2.john

System Action: The stage terminates with return code 770.

771E Leading period in destination word

Explanation: The first character of the destination is a period. This implies a null node ID.

Explanation: The last character of the destination is a period. This implies a null user ID.

System Action: The stage terminates with return code 772.

773E Node word is not defined to JES

Explanation: The first component of the destination is not known to JES.

System Action: The stage terminates with return code 773.

774E Syntax error: explanation

Explanation: REXX signalled a syntax error. The error text is substituted.

System Action: The stage terminates with return code 774.

775E Incorrect file name word

Explanation: Reason code 90420 is returned by DMSVALDT. The file name is longer than eight characters or contains a character that is not allowed.

System Action: The stage terminates with return code 775.

776E Incorrect file type word

Explanation: Reason code 90430 is returned by DMSVALDT. The file type is longer than eight characters or contains a character that is not allowed.

System Action: The stage terminates with return code 776.

777E Incorrect file mode number word

Explanation: Reason code 90430 is returned by DMSVALDT. The file mode number is not a digit between “0” and “6”.

System Action: The stage terminates with return code 777.

778E Forbidden character in file name or file type words

Explanation: Reason code 90450 is returned by DMSVALDT. The file name or the file type contains an asterisk or a percent sign.

System Action: The stage terminates with return code 778.

779E Incorrect directory word

Explanation: Reason code 90430 is returned by DMSVALDT. The directory is longer than some limit or it contains a character that is not allowed.

System Action: The stage terminates with return code 779.

The directory record for an existing file indicates that you cannot write to it. The reason can be that you do not have write authorisation or that the file is in a DIRCTL directory, which is currently accessed for write by some other user.

Reason code 44000 is received when opening the file for append or replace. It is a remote possibility that the file was removed between the time CMS/TSO Pipelines determined that the file existed and the time it was opened.

System Action: The stage terminates with return code 780.

781E Incorrect file token hex

Explanation: The file token parameter does not refer to an open file.

User Response: If you have opened the file in a REXX program, remember to convert the token to printable hexadecimal before using it with the filetoken built-in program:

   call csl 'dmsopen ... filetoken ...'
   'pipe filetoken' c2x(filetoken) '|...'

System Action: The stage terminates with return code 781.

782E Open intent is incompatible with stage position (intent is char)

Explanation: The file token parameter identifies a file that is open with an intent that is not compatible with the position of the filetoken stage in the pipeline. A read intent is required when it is first in a pipeline; a write or replace intent is required when it is not first in a pipeline.

System Action: The stage terminates with return code 782.
783E Storage group space limit exceeded

Explanation: The storage group is full. It is not possible to write more data into the storage group. You may or may not have exceeded your space quota for the storage group.

System Action: The unit of work is rolled back. The stage terminates with return code 783.

784E Space quota exceeded

Explanation: You have exceeded your space quota for the storage group, or you have no space quota.

System Action: If the file was opened successfully, the unit of work is rolled back. The stage terminates with return code 784.

785E DMSOPBLK is not supported

Explanation: The file token represents a file that is opened through the callable service.

User Response: Use instead.

System Action: The stage terminates with return code 785.

786E Specified work unit does not exist

Explanation: Reason code 90540 is received from the callable service DMSEXIST.

System Action: The stage terminates with return code 786.

787E Too much ESM data (number bytes)

Explanation: More than eighty characters are specified for ESM data. This is over the maximum allowed by SFS.

788E File pool is not available

Explanation: Reason code 97500 was received from DMSEXIST. The specified file pool or the one set by SET FILEPOOL is not known to CP.

789E SAFE can be specified only for PRIVATE work unit

Explanation: SAFE was specified, but the stage is not using a private unit of work.

System Action: The stage terminates with return code 789.

790E File locked by other user or other unit of work

Explanation: Reason code 2200 was received from DMSOPEN or DMSOPDBK. The specified file is locked by some other user or another of your units of work.

User Response: Be careful about using the WORKUNIT DEFAULT option with stages that would otherwise acquire a private unit of work. When you do so, CMS Pipelines cannot commit the default unit of work; you must do so yourself. (For example, by this REXX instruction:

   call csl 'dmscomm c_rc c_reason'

If a file is updated on the default unit of work and then opened for modification on a different unit of work, a locking conflict is evident to the SFS server, even though this may not be obvious to you.)

System Action: The stage terminates with return code 790.

791E File was committed by other user or other unit of work

Explanation: Reason code 20000 was received from DMSCOMMT. You were creating a file at the same time as another user or another unit of work was creating the same object. The other user or other unit of work managed to commit the file first.

User Response: Create a null file and then replace it if it takes an appreciable amount of time to create the file, particularly if the file is generated as a result of asynchronous events. This ensures that you can gain exclusive access to the file.

System Action: The unit of work was rolled back by CMS. The stage terminates with return code 791.

Explanation: A fitting Request Parameter List that references the stage specifies an initial operation (read or write) that is incompatible with the placement of the fitting stage.

System Action: The stage terminates with return code 792.

System Action: The stage terminates with return code 793.

794E More than one RPL refers to stage

Explanation: Two fitting Request Parameter Lists reference the stage. This is an error, because the stage only supports one request at a time.

System Action: The stage terminates with return code 794.
796E 370 accommodation must be turned on (CP SET 370ACCOM ON)

Explanation: A device driver that performs I/O to a device that is not supported for Diagnose A8 has received an operation exception on a 370-mode I/O instruction.

User Response: Unless you run applications that require 370ACCOM to be off to work correctly, you should turn this option on in your PROFILE EXEC by the CP command “set 370accom on”. If you cannot run with the 370 accommodation on permanently, turn it on before issuing the PIPE command and turn it off after the pipeline has completed.

System Action: The stage terminates with return code 796.

797E Program check code 'hex'x on TIO to communications device

Explanation: An unexpected program check is received while testing if the communications device can be used.

User Response: Contact your systems support staff.

System Programmer Response: This may be an error in CMS/TSO Pipelines. The hexadecimal value substituted shows the program exception encountered. Have this code ready when reporting the error to IBM.

System Action: The stage terminates with return code 797.

798I Forcing pipeline stall

Explanation: The fitting interface ran its pipeline without any change to the fitting stage(s). A stall is forced, because no further action is possible and the pipeline would never complete.

System Action: The pipeline is stalled.

1010E VMCF CVT not found

Explanation: There is no active VMCF address space.

User Response: Ensure that TCP/IP is installed correctly and that the IUCV started task is active.

System Action: The stage terminates with return code 1010.

1011E Return/condition code number on IUCV QUERY

Explanation: The query function fails with the condition code (CMS) or return code (z/OS) shown.

User Response: Refer to the description of the condition codes associated with the IUCV instructions in CP Programming Services, SC24-5520.

System Action: The stage terminates with return code 1011.

1012E Return/condition code number on IUCV declare buffer

Explanation: CMS/TSO Pipelines is unable to declare the IUCV buffer. The condition code (CMS) or return code (z/OS) is substituted.

User Response: Refer to the description of the IUCV instructions in CP Programming Services, SC24-5520.

System Action: The stage terminates with return code 1012.

1013E No IUCV paths can be connected

Explanation: The maximum number of paths returned by IUCV QUERY is zero.
1084E BREAK items are not allowed after EOF item

Explanation: BREAK, NOBREAK, or a second EOF is met after EOF has been specified.

User Response: Use IF to test for end-of-file rather than EOF if you do wish subsequent specification items to be executed for detail records.

System Action: The stage terminates with return code 1084.

1085E Counter number expected

Explanation: A number sign (#) is met indicating a counter, but the next character is not a digit.

System Action: The stage terminates with return code 1085.

1086E Improper operand for string expression

¡ Explanation: A strict compare operator or a colon for selection of expressions is met, but its operands are not strings, references to input fields, or the result of a conditional operator with string operands.

System Action: The stage terminates with return code 1086.

1087E String operand not acceptable to operator

Explanation: An operator is met that requires numeric operands, but one of its operands is a literal string.

System Action: The stage terminates with return code 1087.

1090E Unrecognised operator word

Explanation: One or more operator characters are met, but the aggregate string is not a valid operator.

System Action: The stage terminates with return code 1090.

1091E Operator expected; found word

Explanation: An operator or the end of the expression is expected, but the character shown was met.

User Response: specs does not support the comma operator. Use the SET and semicolon operators instead.

System Action: The stage terminates with return code 1091.

1100E Record descriptor is too small (it contains number)

Explanation: A TCP/IP device driver that is specified with SF or SF4 detects a record that contains a record descriptor that is shorter than its own length.

User Response: Ensure that the application sending data to you observes the protocol you expect. In particular, pay attention to word sizes and byte ordering.

System Action: The stage terminates with return code 1100.

1110I Received number bytes

Explanation: The bit for 16 is on in the message level. A data packet is received. Zero bytes means end-of-file.

1111I Sent number bytes

Explanation: The bit for 16 is on in the message level. A data packet is sent.
1112I Closing socket (reason number)

Explanation: The bit for 16 is on in the message level. The socket is being closed.

1113I Purging IUCV message

Explanation: The bit for 32 is on in the message level. An IUCV operation is purged because an input record has arrived.

1114I IUCV reply number bytes

Explanation: The bit for 32 is on in the message level. An IUCV reply was received.

1115I Socket call for type

Explanation: The bit for 32 is on in the message level. A socket function is passed to TCP/IP.

1120E Stage cannot run in CMS subset

Explanation: The stage requires an interface that is not supported in CMS subset mode.

User Response: Issue the RETURN command to return to full CMS.

System Action: The stage terminates with return code 1120.

1121E Stage cannot run while DOS is ON

Explanation: The stage requires an interface that is not supported in CMS DOS mode.

User Response: Issue the SET DOS OFF command.

System Action: The stage terminates with return code 1121.

1122E Expression result is a string: string

Explanation: The result of an expression is a character string rather than a number in a context where a string is not valid.

User Response: Inspect the expressions in IF, PRINT, and SET clauses for single literals, such as these:

¡ ... | spec set "sh" | ...
¡ ... | spec ... if #0?'a':"b" then ... | ...
¡ ... | spec print "sh" picture 99 1 | ...

System Action: The stage terminates with return code 1122.

1123E Unacceptable input record length number

Explanation: An input record that is not null is not the required length. For socka2ip, the input record is neither four nor sixteen bytes.

System Action: The stage terminates with return code 1123.

1124E Incorrect NAMEDEF word (a directory name must contain a period)

Explanation: Reason code 90530 is returned by DMSVALDT. The third word of the argument string is taken as a name definition for the directory containing the file, but there is no such name defined.

User Response: You probably wanted to refer to a directory. Ensure that the third word contains either an explicit file pool name and user ID, or at least one period. For example, to read from JOHN’s top level directory in the current file pool:

pipe < profile exec john. | hole

System Action: The stage terminates with return code 1124.

1125E No space left in PDS directory

Explanation: Return code 12 reason code 0 is received on the STOW macro. This indicates that the directory is full.

System Action: The stage terminates with return code 1125.

1126E Record descriptor indicates number bytes, but minimum is number

Explanation: When deblocking records that contain their record length (such as SF4), the record descriptor indicates a record length that would not include the record descriptor itself. For example, an input record that contains four binary zeros is not valid for deblock SF4.

System Action: The stage terminates with return code 1126.

1127E Host name too long: string

Explanation: A word of an input record to nsquery is longer than 1024 characters. If the word contains no periods, the length of the word plus the length of the domain origin (the argument to nsquery) is larger than 1023. This limit is imposed by the domain name system.

System Action: The stage terminates with return code 1127.
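Messages 1123E and 1126E both concern records that carry their own length in a record descriptor. As an illustration only — the exact SF4 layout is defined by CMS/TSO Pipelines, and the four-byte big-endian descriptor assumed here is merely a reading of the examples above — a deblocking routine might validate as follows:

```python
import struct

DESC_LEN = 4  # assumed size of the SF4 record descriptor


def deblock_sf4(data: bytes):
    """Split a byte stream into records, each preceded by a four-byte
    big-endian length that counts the descriptor itself (an assumed
    reading of the SF4 format)."""
    records = []
    pos = 0
    while pos < len(data):
        if len(data) - pos < DESC_LEN:
            # the remaining data cannot even hold a descriptor -- cf. 1123E
            raise ValueError("record shorter than its descriptor")
        (length,) = struct.unpack_from(">I", data, pos)
        if length < DESC_LEN:
            # e.g. four binary zeros: the stated length would not
            # include the descriptor itself -- cf. message 1126E
            raise ValueError(
                f"descriptor indicates {length} bytes, minimum is {DESC_LEN}")
        if pos + length > len(data):
            raise ValueError("record extends past end of input")
        records.append(data[pos + DESC_LEN:pos + length])
        pos += length
    return records
```

The validation order mirrors the messages: a too-short tail first, then a descriptor whose value could not cover the descriptor itself.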
1128E Two consecutive periods in host name: string

Explanation: A word of an input record to nsquery contains two adjacent periods, which would indicate a null component in the host name. If the word contains no periods, the domain origin (the argument to nsquery) contains a leading period or two consecutive periods.

System Action: The stage terminates with return code 1128.

1129E Component of host name too long: string

Explanation: A component of a host name (or of the domain name, which is the argument to nsquery) contains more than 255 consecutive characters without a period. This limit is imposed by the domain name system.

System Action: The stage terminates with return code 1129.

1135E Host word does not exist

Explanation: The name server return code is 0, but no response is returned. The secondary output stream to nsresponse is not defined. The word is recognised as a domain, not as a host.

System Action: The stage terminates with return code 1135.

1136E Return code from name server: number

Explanation: The return code substituted was returned by the name server.

User Response: Refer to the current RFC for the meaning of the return code. This RFC is likely to have replaced RFC 1035.

System Action: The stage terminates with return code 1136.
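The syntax checks behind messages 1127E through 1129E can be pictured with a small sketch. The function below is hypothetical, not the nsquery implementation; it mirrors only the limits quoted above (1024-character words, no adjacent or leading periods, and the manual's stated 255-character component limit):

```python
def check_host_name(name: str, component_limit: int = 255) -> None:
    """Illustrative validation mirroring messages 1127E-1129E.
    The 255-byte component limit follows the manual's wording;
    it is not presented here as a general DNS rule."""
    if len(name) > 1024:
        raise ValueError("host name too long")            # cf. 1127E
    if name.startswith(".") or ".." in name:
        raise ValueError("null component in host name")   # cf. 1128E
    for component in name.split("."):
        if len(component) > component_limit:
            raise ValueError("component too long")        # cf. 1129E
```

For example, `check_host_name("example.com")` passes silently, while a name containing `..` raises the 1128E-style error.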
1141E Unable to resolve word (RXSOCKET did not return a result)

Explanation: The host name could not be resolved because the RXSOCKET interface did not return a result on the function invocation.

System Action: The stage terminates with return code 1141.

1142E Unable to resolve word (RXSOCKET error string)

Explanation: The host name could not be resolved because the RXSOCKET interface gave a return code.

System Action: The stage terminates with return code 1142.

1143E Unable to resolve word (RXSOCKET Version 2 is required)

Explanation: The host name could not be resolved because the RXSOCKET interface was downlevel. The string '-1' was returned; this is the way Version 1 reacts to errors.

System Action: The stage terminates with return code 1143.

User Response: Install RXSOCKET Version 2 or Virtual Machine/Enterprise Systems Architecture Version 2 Release 2, which has RXSOCKET built in. Name resolution will work with Version 1 if you initialise the socket interface externally to CMS/TSO Pipelines.

1144E Key/ID field is not anchored at the extremities of the input record (number before; number after)

Explanation: The STRIP option was specified to delete the key field or the stream identifier from the output record. This is not possible since the field is inside the record. The number of bytes before the key field and the number of bytes after the key field are substituted.

System Action: The stage terminates with return code 1144.

1145W PIPE command was issued from XEDIT, which truncates at or before 255 characters (use Address Command in XEDIT macros)

Explanation: The PIPE command which caused an error message to be issued in the scanner was issued from XEDIT.

User Response: When issuing PIPE commands from XEDIT macros, be sure to address them to COMMAND, rather than to the default XEDIT command environment. XEDIT truncates commands after 255 bytes without issuing a warning message.

1146E Expect OF; found: word

Explanation: The SUBSTR keyword was specified, but the closing OF was not found where it was expected. The word found is substituted. If the message ends in the colon, the stage’s argument string was too short.

System Action: The stage terminates with return code 1146.

1147E Creation time cannot be changed for an existing file

Explanation: Reason code 51051 was received from DMSCLOSE or DMSCLDBK. The file to be replaced exists and CMS refuses to change its creation date.

User Response: If you really wish to change the creation date for a file, rename the file and then use CMS Pipelines to create it again. You will effectively lose all authorisations that you may have granted on the file because they will stay with the renamed file.

System Action: The stage terminates with return code 1147.

1148E Expected parameter token "sysv"; found "word"

Explanation: The PIPMOD nucleus extension is called with a parameter token, but the first parameter token is not the one for the system services vector.

System Action: The stage terminates with return code 1148.

1149E Too many parameter tokens found (second is "word")

Explanation: The PIPMOD nucleus extension is called with a parameter token and the first parameter token is the one for the system services vector, but it is not followed by a fence indicating the end of the list of parameter tokens.

System Action: The stage terminates with return code 1149.

1150E Lost race for SCBWKWRD

Explanation: PIPMOD was invoked to initialise CMS Pipelines. When the initialisation started, the user word for the PIPMOD nucleus extension was zero, but by the time it came to set the user word, the SCBLOCK already had a nonzero user word.

System Action: The stage terminates with return code 1150.

System Programmer Response: Investigate whether the virtual machine runs a vendor multitasking system. If it does, ensure that CMS Pipelines is initialised before the vendor multitasking package takes over control of CMS.
System Action: The stage terminates with return code 1162.

Explanation: The hlasm interface cannot declare its exit because an instance is already declared at some other address. The return code on the IDENTIFY macro instruction is X'14'.

User Response: Return to the CMS ready prompt to clear this stale exit pointer. Then retry the pipeline.

System Action: The stage terminates with return code 1163.

1164W Variable name is not valid: contents

Explanation: A pipeline global variable does not contain an acceptable value.

System Action: The value is ignored and the default is used instead.

1165E Configuration variable name is not recognised

Explanation: The configuration variable is not known to CMS/TSO Pipelines.

User Response: Note that the names of configuration variables must be spelt out. Some are longer than eight characters; they must be specified in their entirety. Case is ignored in the names of configuration variables.

CMS Pipelines issued this command, where xxx contains the repository set for CMS Pipelines or inferred from its default style:

The default message repository is FPL.

User Response: Either make the message repository available, set the language to a language for which there is a message repository, or disable the repository by this command:

PIPE literal REPOSITORY -|configure

You will receive these nuisance messages until you take some action to avoid them.

System Action: Processing continues. The message was issued from the internal message table, which is in English.

1168E Cannot convert relative date format word to absolute date format word

Explanation: Some date formats are absolute; that is, they reference a particular moment in time. Other date formats are relative; that is, they specify some amount of time. A relative date cannot be converted to an absolute date.

User Response: Change the input date format to an absolute date format or change the output date format to a relative date format. The default date format is an absolute date format. Therefore, when converting a relative date format you must specify a relative output date format.

System Action: The stage terminates with return code 1168.
1169E Variable word is not a token set by SCANRANGE (reason code number)

Explanation: The contents of the variable were not set by the SCANRANGE pipeline command. In particular, the variable may have no value. The reason codes are:

1. The variable is not set.
2. The variable contains too many characters.
3. The variable contains too few characters.
4. The check word in the variable is incorrect.

User Response: Be sure that you quote the name of the variable; it must not be substituted by REXX.

'scanrange required range1 .' arg(1)
'peekto line'
'inputrange range1 string' line

System Action: The stage terminates with return code 1169.

1170E The input date is not valid: word (reason code number)

Explanation: The date could not be converted. The reason code is from the DateTimeSubtract callable services library routine.

User Response: Correct the error described by the reason code for DateTimeSubtract. If the reason is not obvious, refer to the For Timer Services section in the Return and Reason Code Values appendix of the CMS Application Multitasking manual to find the symbolic name for the reason code. (The VM library contains no further description of these error codes.)

System Action: The stage terminates with return code 1170.

1171W No output date format specified; the default output date format is the same as the input date format

Explanation: The output date format was not specified. It defaults to the ISODATE date format. This value is the same as the input date format.

System Action: Processing continues. The stage validates all dates as requested.

User Response: Specify the output date format to avoid this message when you wish to validate dates without converting them.

1172I Restoring fitting name word

Explanation: A fitting stage is terminating. The fitting name is restored in the Request Parameter List.

1173I No RPL to restore

Explanation: A fitting stage is terminating. The fitting name was not resolved and thus it cannot be restored.

1174E Not a hexadecimal address word

Explanation: The word is not the unpacked hexadecimal representation of a machine address. This may be caused by a corrupt control block structure for CMS/TSO Pipelines.

User Response: Do not invoke fplricdf; use pipdump or jeremy instead.

System Action: The stage terminates with return code 1174.

1175E Incorrect check word in PIPEBLOK: word

Explanation: This may be caused by a corrupt control block structure for CMS/TSO Pipelines. The check word is shown in unpacked hexadecimal. It should have been “pipe”, which is X'97899785'.

System Action: The stage terminates with return code 1175.

1176W The pointer to the Contents Vector table is destroyed (reason code number); investigate VM61261

Explanation: The pointer in low storage to the Contents Vector Table is destroyed. The reason codes are:

1 The CVT pointer is either zero or negative.
2 The CVT pointer does not point within the virtual machine.
3 The byte at offset X'74' in the CVT has the bit for X'40' zero. This bit (originally meaning the Primary Control Program) should be on for CMS.
: 4 The byte at offset X'74' in the CVT has one or more of the bits for X'3F' nonzero.

User Response: Contact your systems support staff.

System Programmer Response: Investigate whether corrective service is available. In particular, ensure that the fix for APAR VM61261 is applied. Issue the command cp trace store into 10.4 to set a trap for storing into the primary CVT pointer.

1177E The system does not support date format word

Explanation: The specified date format is not supported for the level of CMS on which you are running and in particular not by the version of the DMSDTS callable service you are running. The date format may be valid on a later level of CMS or it may not be valid for any level of CMS.

System Action: The stage terminates with return code 1177.
1178W The pointer to the Contents Vector table has been restored from the alternate pointer

Explanation: The pointer in low storage to the Contents Vector Table is destroyed, but the alternate pointer is intact.

User Response: Contact your systems support staff.

System Programmer Response: Investigate whether corrective service is available. In particular, ensure that the fix for APAR VM61261 is applied. Issue the command cp trace store into 10.4 to set a trap for storing into the primary CVT pointer.

1179W The alternate pointer to the Contents Vector table has been restored from the primary pointer

Explanation: The alternate pointer in low storage to the Contents Vector Table is destroyed, but the primary pointer is intact.

User Response: Contact your systems support staff.

System Programmer Response: Investigate whether corrective service is available. In particular, ensure that the fix for APAR VM61261 is applied. Issue the command cp trace store into 500.4 to set a trap for storing into the alternate CVT pointer.

1180E A directory in the path string does not exist or you are not authorised for it

¡ Explanation: Reason code 44000 was received when creating a file. Reason code 90230 was received when opening a directory.

System Action: The stage terminates with return code 1180.

1181E Directory control directory string is accessed read only

Explanation: Reason code 63700 was received when opening the file.

System Action: The stage terminates with return code 1181.

1182E Date format word cannot be used as an input date format

Explanation: The specified date format can be used only as an output date format.

System Action: The stage terminates with return code 1182.

1183E Date cannot be converted; input date word is not valid

Explanation: The input date (that is, the contents of the specified field in an input record) is not a valid date for the input date format specified. Possible causes for this error include:

- The input date format does not match the input date.
- The input range includes information other than the date.
- The input range is not the correct syntax for a date of the specified format.
- The input date specifies a date which cannot occur, such as February 29, 1997 (1997 is not a leap year).

System Action: The stage terminates with return code 1183.

1184E Input date word cannot be expressed in the output date format

Explanation: The input date has no meaning for the output date format. This occurs when one of three conditions exists:

- The input date is prior to the epoch for the output date format. For example, it is before January first 1900 for TOD Absolute.
- The input date is negative and the output date format is MET.
- The input date specifies a negative year and the output date format is not SCIENTIFIC_ABSOLUTE.

System Action: The stage terminates with return code 1184.

1185E Cannot convert absolute date format word to relative date format word

Explanation: Some date formats are absolute; that is, they reference a particular moment in time. Other date formats are relative; that is, they specify some amount of time. An absolute date cannot be converted to a relative date.

System Action: The stage terminates with return code 1185.

User Response: Change the input date format to a relative date format or change the output date format to an absolute date format.

1186W Operand string is ignored for input date format word

Explanation: You specified the use of a sliding window for an input date format of REXX_DATE_C or REXX_DATE_D.

System Action: The sliding window operand is ignored. The date is converted using a base year of the current century for REXX_DATE_C or the current year for REXX_DATE_D.
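Message 1183E's notion of a date that cannot occur is easy to demonstrate outside CMS: any calendar-aware parser applies the same rules. A minimal sketch in Python (the `%Y-%m-%d` format string is illustrative and is not one of the stage's date formats):

```python
from datetime import date, datetime


def parse_date(text: str, fmt: str = "%Y-%m-%d") -> date:
    """Parse a date, rejecting dates that cannot occur,
    in the spirit of message 1183E."""
    return datetime.strptime(text, fmt).date()


# 1996 was a leap year, so this parses:
parse_date("1996-02-29")
# 1997 was not, so this raises ValueError -- cf. 1183E:
# parse_date("1997-02-29")
```

The same validation also rejects input that matches the format syntactically but carries trailing text, paralleling the "input range includes information other than the date" cause.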
User Response: Do not specify a sliding window for an input date format of REXX_DATE_C or REXX_DATE_D.

1194I Producer on input stream number has record available

Explanation: Tracing message. SELECT ANYINPUT is issued. A producer other than the currently selected one has a record available.

1195I Selecting input stream number

Explanation: Tracing message. The stage is waiting after SELECT ANYINPUT has been issued. A record is now available.

System Action: The stage terminates with return code 1197.

¡ 1212I Rel hex doublewords at hex

¡ Explanation: Free storage management trace is enabled. A block of storage is deallocated.

¡ 1213E Storage at address is not on allocated chain

¡ Explanation: Free storage management validation is enabled. The block being released is not on the chain of storage that was allocated by CMS/TSO Pipelines.

¡ System Action: The storage is not returned to the operating system. Processing continues.
¡ 1216E Storage at address; check word is destroyed

¡ Explanation: Free storage management validation is enabled and a block of storage is being released. The fullword immediately before the allocated area has been corrupted.

¡ System Action: The storage is not returned to the operating system. Processing continues.

¡ User Response: It is likely that the error is an insufficient request for the storage that preceded this particular block of storage. This is a particularly nasty bug to track down, especially on TSO.

¡ 1217I Contents: hex

¡ Explanation: Free storage management validation is enabled and an error is discovered when a block of storage is released. The first thirty-two bytes of the allocated block are displayed.

¡ 1220E IEANTRT RC=hex not equal to R15 (=hex.)

¡ Explanation: The FPLDEBUG command found that the return code stored by IEANTRT is not equal to the return code in general register 15.

¡ System Action: The command terminates with return code 1220.

¡ 1221I The TSO Pipelines name/token is not established

¡ Explanation: IEANTRT gave return code 4. No PIPE command has been issued for the address space or FPLRESET has been issued to delete the TSO Pipelines global information area.

¡ 1222E IEANTRT RC=hex

¡ Explanation: The FPLDEBUG command received a return code that is neither 0 nor 4.

¡ System Action: The command terminates with return code 1222.

¡ 1223E Nametoken field name contains value; expect value

¡ Explanation: The token returned by the IEANTRT callable service does not contain the expected data.

¡ 1226E Global area is corrupted

¡ Explanation: The FPLDEBUG command could not verify the first twenty-four bytes of the global data area returned in the name/token.

¡ System Action: The command terminates with return code 1226.

¡ 1227E Insufficient data returned by DMSGETDI; got number expect number

¡ Explanation: The callable service returned fewer bytes than required for the function requested.

¡ User Response: If FPLSTAT was specified and no output is produced, your system does not support the function requested.

¡ System Programmer Response: If FPLSTAT is omitted and some output was produced, the combination of buffer size used by sfsdir and the record length returned is such that a partial record is generated in a downlevel format at the end of the buffer. Open an APAR against DMSGETDI.

¡ 1228I Return code number erasing work file

¡ Explanation: While replacing a file in an SFS directory that is accessed as a mode letter using >mdsk, the file is created correctly, but erasing the work file fails with the return code substituted. The configuration variable DISKREPLACE is set to COPY.

¡ System Action: The error is ignored.

¡ User Response: Do not panic. The file is created correctly; most likely the work file had mode number 3 and the copy operation has removed the work file. The return code is 28 in this case.

¡ 1229E Length code char is not valid

¡ Explanation: The first position of an input record to uudecraw is not one of the valid characters for the encoding format.

¡ User Response: Be sure to remove the header and trailer records from a file in the uuencode format.

¡ System Action: The stage terminates with return code 1229.
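For message 1229E: in the classic uuencode scheme, each data line begins with a length character, chr(32 + n) for n data bytes on the line. Whether uudecraw accepts exactly this character set is defined by CMS/TSO Pipelines; Python's binascii module illustrates the convention:

```python
import binascii

data = b"pipelines"
line = binascii.b2a_uu(data)   # one uuencoded line

# The first byte encodes the data length as chr(32 + n);
# masking with 0x3F also maps the historical backquote to zero.
n = (line[0] - 32) & 0x3F
assert n == len(data)

# Round-trip: decoding the line recovers the original bytes.
assert binascii.a2b_uu(line) == data
```

A record whose first byte falls outside this scheme (a header or trailer line of the uuencode file, for instance) is exactly what triggers 1229E.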
¡ 1249E Requested SYSIB information not available

¡ Explanation: A nonzero condition code was received on a STSI instruction to return the actual configuration information.

¡ User Response: At the time of writing, these are the valid selectors:

¡ 111 Basic machine.
¡ 121 Basic machine CPU.
¡ 122 Basic machine CPUs.
¡ 221 Logical partition CPU.
¡ 222 Logical partition CPUs.
¡ 322 Virtual machine CPUs.

¡ System Action: The stage terminates with return code 1249.

¡ 1255E CP paging error on diagnose 210 device number hex

¡ Explanation: CP has set the condition code indicating that it was unable to provide information about the device.

¡ System Action: The stage terminates with return code 1255.

¡ 1256E Cylinder number number beyond disk capacity number

¡ Explanation: The disk has only the number of cylinders shown.

¡ System Action: The stage terminates with return code 1256.
¡ 1277E Record length (number) is not 8+keylength+datalength (number)

¡ Explanation: The input record does not represent a valid CKD block.

¡ System Action: The stage terminates with return code 1277.

¡ 1278E Track number is not specified in input record number

¡ Explanation: The input contains one word, not the minimum two.

¡ System Action: The stage terminates with return code 1278.

¡ 1279E No messages in queue, but interrupt received.

¡ Explanation: iucvclient or iucvdata is confused.

¡ System Action: The stage terminates with return code 1279.

¡ User Response: Contact your systems support staff.

¡ System Programmer Response: This is a programming error in CMS/TSO Pipelines. Investigate whether corrective service is available.

¡ 1281E Unsupported IUCV message format

¡ Explanation: A message pending or message complete interrupt is received that either requests a reply or has an incorrect message class.

¡ System Action: The interrupt parameters are displayed. The stage terminates with return code 1281.

¡ System Action: The stage terminates with return code 1282.

¡ 1283E More than number CCWs in input record

¡ Explanation: An input record contains sixteen or more control CCWs.

¡ System Action: The stage terminates with return code 1283.

¡ 1284E Subchannel for device number hex is busy

¡ Explanation: The device is not available to start a channel program. Condition code 2 was set on the start subchannel instruction.

¡ System Action: The stage terminates with return code 1284.

¡ 1285I Input stream number is only stream connected

¡ Explanation: Tracing message. SELECT ANYINPUT is issued. Return code 4 is being set as there is only one input stream left connected.

¡ 1286E SF4 is not specified

¡ Explanation: EMSGSF4 is specified, but SF4 has not been specified or has been quietly overridden by another deblocking option.

¡ System Action: The stage terminates with return code 1286.

¡ 1287E Server responds without SF4

¡ Explanation: tcpclient with EMSGSF4 is specified and the first byte of data from the server is not zero in the leftmost five bits. This can be caused by inetd issuing error messages, in conjunction with starting the server.

¡ System Action: The data stream is converted from ASCII to EBCDIC, deblocked, and then issued with message 39 by a separate stage. On end-of-file from the server, the stage terminates with return code 1287.

¡ 1288I Branch to zero probably from hex

¡ Explanation: A program check has occurred while executing an instruction at location zero in storage. Register 14 indicates a branch and link instruction and register 15 is zero. The branch to zero may be due to the branch and link, but it is also possible that the zero in register 15 is the return code from a subroutine. In this case, the branch must have been subsequent to the subroutine call, or from the subroutine.

¡ System Action: The stage terminates with return code 1289.

¡ 1291I The field ADMSCWR in NUCON is incorrect; found hex; display of ABEND information may be in jeopardy

¡ Explanation: This message is issued when CMS Pipelines initialises itself and finds that the address in NUCON of the console write routine does not point into the CMS nucleus, as defined by the fields nucalpha and nucsigma. This condition may occur when CMS Pipelines is initialised under control of other programs that trap console output. CMS Pipelines will not be able to issue meaningful diagnostic messages in the event of a CMS ABEND while cms or command is running.
¡ Explanation: The input field contains too many significant digits to fit within the 31 decimal digits available in a counter.

¡ System Action: The stage terminates with return code 1298.

¡ 1299W number duplicate masters were discarded

¡ Explanation: The input master file contained duplicates.

¡ 1301E Not a built-in function: word

¡ Explanation: A string of characters and digits is met, but it does not name any of the built-in functions.

¡ System Action: The stage terminates with return code 1301.

¡ 1302E Leftmost word of 32-bit counter number is not zero (hex)

¡ System Action: The stage terminates with return code 1302.

¡ Explanation: The length of the track being built is larger than 64K, which is the architectural maximum for a CKD track.

¡ System Action: The stage terminates with return code 1307.

¡ 1308I Device hex is busy or has interrupt pending

¡ Explanation: Return code 5 is received on a Diagnose A8 instruction.

¡ System Action: The stage terminates with return code 1309.

¡ User Response: Contact your systems support staff.

¡ System Programmer Response: This would appear to be a change in the Control Program. Investigate the meaning of the return code in the documentation of Diagnose A8 in the latest edition of CP Programming Services.

¡ 1310I Device hex has unsolicited status pending

¡ Explanation: Return code 16 is received on a Diagnose A8 instruction.

¡ System Action: The operation is retried up to four times. If the operation cannot be started, the stage terminates with return code 1310.
¡ 1311I Not squished track reason hex

¡ Explanation: A debug message.

¡ 1312E Filter package word is already loaded

¡ System Action: The stage terminates with return code 1312.

¡ 1313E PTF filter package word is already loaded

¡ System Action: The stage terminates with return code 1313.

¡ 1314E Unable to load module word (return code number)

¡ System Action: The stage terminates with return code 1314.

¡ 1315E Filter package word has bad eye-catcher word

¡ User Response: The filter package must be linked with the object module FPLNXG.

¡ System Action: The stage terminates with return code 1315.

¡ 1316E Filter package word is not loaded

¡ Explanation: The specified filter package is not known to CMS/TSO Pipelines. On CMS, it is neither loaded actively nor passively.

¡ 1320E Module word contains a type 1 filter package; run it as a CMS command to install

¡ Explanation: The filter package cannot be loaded explicitly.

¡ User Response: Invoke the module as a CMS command to load the filter package. On z/OS, link the module with FPLNXG rather than with FPLNXF.

¡ System Action: The stage terminates with return code 1320.

¡ 1321I Assembler requests number bytes output for record number on stream number

¡ Explanation: The High Level Assembler has called an exit to write a record, but the exit request information contains a negative length, which is displayed in the message.

¡ System Action: The request is ignored.

¡ 1322I Ignoring HALT at hex

¡ Explanation: A specification exception is recognised and the instruction is a diagnose with code 8 and a length of 0. This is used on CMS to put the virtual machine into console function mode so the user can inspect storage and registers and in general use CP debugging facilities.

¡ System Action: Retry is attempted. If the system allows, execution continues with the next sequential instruction after the diagnose. In effect, the halt is ignored.
¡ 1325E Unrecognised option string ¡ 1332E SPOOL file number contains CP trace data
¡ Explanation: The second argument to the strip() func- ¡ Explanation: The specified SPOOL file is not the requested
¡ tion does not begin with “b”, “l”, or “t” (in either upper or ¡ format.
¡ lower case).
¡ User Response: Use CP command “query trf *” to
¡ System Action: Processing terminates with return code ¡ display the trace files available to you and their file types.
¡ 1325.
¡ System Action: Processing terminates with return code
¡ 1332.
¡ 1326E Pad is not a single character (it is string)
¡ Explanation: The third argument to the strip() function is a string with more than one byte.
¡ System Action: Processing terminates with return code 1326.

¡ 1327E Scanner jammed in state number in start condition number
¡ Explanation: The expression does not contain a valid sequence of characters. The substituted numbers are useful only when debugging CMS/TSO Pipelines.
¡ System Action: Processing terminates with return code 1327.

¡ 1328E There is no default for the type argument
¡ Explanation: The second argument to DATATYPE is a null string. You must specify at least one character.
¡ System Action: Processing terminates with return code 1328.

¡ 1329E Attempt to extract the square root of a negative number
¡ Explanation: Note that arithmetic in specs is carried out using approximate numbers. As a result, the normal laws of algebra do not all hold. For example, 1-(1/3)*3 will not be zero (it will be negative), whereas 1-(1*3)/3 will be zero.
¡ System Action: Processing terminates with return code 1329.

¡ 1330E Return code number on diagnose E0 subcode hex
¡ System Action: Processing terminates with return code 1330.

¡ 1331E SPOOL file number does not exist
¡ Explanation: The specified SPOOL file is not available to the virtual machine or it is not a trace file.
¡ User Response: Use CP command “query trf *” to display the trace files available to you.
¡ System Action: Processing terminates with return code 1331.

¡ 1333E SPOOL file number does not contain CP trace data
¡ Explanation: The specified SPOOL file is not in the requested format.
¡ User Response: Use CP command “query trf *” to display the trace files available to you and their file types.
¡ System Action: Processing terminates with return code 1333.

¡ 1334E SPOOL file number is in use
¡ Explanation: The specified SPOOL file is open by another user or in another stage.
¡ User Response: If a pipeline ends abnormally while reading trace data, the file will not be closed and this message is issued on a subsequent attempt to read the trace file. It may be necessary to reset the virtual machine; that is, IPL CMS.
¡ System Action: Processing terminates with return code 1334.

¡ 1335W Concatenated data set(s) for DD=DDNAME ignored. Use QSAM instead
¡ Explanation: Two or more concatenated input data sets are specified on the allocation and a member is requested from the first one. Subsequent data sets in the concatenation are ignored as errors in their specification cannot be verified by CMS/TSO Pipelines.
¡ User Response: Replace the device driver with qsam. qsam will allow the concatenation (it does not inspect the allocation at all); whether it will process the concatenation is another matter. For example, concatenating a member of a partitioned data set with an entire partitioned data set (which reads its directory) will lead to error messages depending on the record format of the first data set in the concatenation.
¡ System Action: Processing continues with the first data set in the concatenation.

¡ 1336E Reason number on string: string
¡ Explanation: A call to WebSphere MQ Series failed with the reason code shown on a call to the function name in the second substitution.
¡ System Action: Processing terminates with return code 1336.
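The note under message 1329E, that spec arithmetic is carried out with approximate numbers and the normal laws of algebra therefore do not all hold, is easy to reproduce in any binary floating point environment. The sketch below uses Python and IEEE 754 binary64 rather than the hexadecimal floating point used by spec counters, so the exact residuals differ, but the principle is the same:

```python
# Most decimal fractions have no exact machine representation, so a
# computation that is algebraically exact can leave a tiny residual.
# (Python uses IEEE 754 binary64; spec counters use hexadecimal floating
# point, so the residuals differ between the two, but not the principle.)
lhs = 0.1 + 0.2
print(lhs == 0.3)   # False
print(lhs - 0.3)    # a tiny positive residual, about 5.5e-17
```

Rearranging an expression, as the 1-(1*3)/3 example under message 1329E shows, can avoid the residual entirely.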
¡ 1337E Expect CSQN205I; received string
¡ Explanation: The first response message from the command processor is not the expected one.
¡ User Response: Inspect the substituted message text to determine whether it indicates some other kind of error.
¡ System Action: Processing terminates with return code 1337.

¡ 1338E ABEND hex reason number on LOAD of entry point
¡ System Action: Processing terminates with return code 1338.

¡ 1339E Error opening string for string
¡ Explanation: A queue could not be opened.
¡ User Response: Look for accompanying RACF messages that indicate missing authorisation.
¡ System Action: Processing terminates with return code 1339.

¡ 1340E message
¡ Explanation: The MQ command processor has indicated an error in processing the command.
¡ User Response: Refer to the documentation of the message substituted (CSQN205I). If the return code is 20, you are not authorised to issue commands through the system command queue.
¡ System Action: Processing terminates with return code 1340.

¡ 1341I data data
¡ 1342I data data data
¡ 1343I data data data data
¡ 1344I data data data data data
¡ 1345I data data data data data data
¡ 1346I data data data data data data data
¡ 1347I data data data data data data data data
¡ 1348I data data data data data data data data data
¡ 1349I data data data data data data data data data data
¡ Explanation: Messages used for tracing and debugging.

¡ 1350E Already connected to queue manager string
¡ Explanation: Another stage is active with the specified queue manager.
¡ System Action: Processing terminates with return code 1350.

¡ 1351E Key length number is not valid
¡ Explanation: The record containing the key must be eight, sixteen, or twenty-four bytes for cipher DES; it must be sixteen or twenty-four bytes for cipher 3DES.
¡ System Action: Processing terminates with return code 1351.

¡ 1352E Cipher Message instruction not available
¡ System Action: Processing terminates with return code 1352.

¡ 1353E DEA functions are not available hex
¡ System Action: Processing terminates with return code 1353.

¡ 1354E Computed output column is not positive (it is number)
¡ System Action: Processing terminates with return code 1354.

¡ 1355E Unable to convert to integer. number digits in fraction
¡ Explanation: The result of an expression is being converted to integer for use in specifying a position.
¡ System Action: Processing terminates with return code 1355.

¡ 1356E Unable to convert to integer: number
¡ Explanation: The result of an expression is being converted to integer for use in specifying a position. The number has more significant digits than can be represented in a 32-bit integer.
¡ System Action: Processing terminates with return code 1356.

¡ 1357E Unable to convert to integer. Exponent too large. (number)
¡ Explanation: The result of an expression is being converted to integer for use in specifying a position. The exponent is too large to convert to a 31-digit number.
¡ System Action: Processing terminates with return code 1357.

¡ 1358W Global lock held by R12=address R14=address
¡ Explanation: The message level for thorough dispatcher checks is on and the dispatcher is called by a stage that holds the global lock.
¡ The message is issued only the first time that the condition is detected in all concurrently active pipeline sets, as this supposedly is the point of failure. Further dispatching
¡ activity is likely to lead to continued detection until the original holder of the lock releases it.
¡ User Response: Contact your systems support staff.
¡ System Programmer Response: This is an error in CMS/TSO Pipelines if the stage is a built-in program.
¡ For stages written by a user: Congratulations on managing to obtain the global lock. Holding the lock when calling the dispatcher is not a good idea as the dispatcher may switch to another stage. If that stage adds a pipeline specification to the pipeline set, the stage resolution process also acquires the global lock and an assert error 128 results.
¡ System Action: Processing continues. Message 411 is issued if the procedure can be identified. In particular, the lock is not released as that would lead to an assert error 129 when the lock is released by the code that obtained the lock.

¡ 1359E Record length number not multiple of cipher block size number
¡ System Action: Processing terminates with return code 1359.

¡ 1360E Degenerate Triple DES key
| Explanation: The key is 16 or 24 bytes, consisting of two or three keys of eight bytes each. The two or three keys must not all be equal, as this would degenerate the algorithm to single DES.

! 2 General register 2 does not contain an assigned number. This is a programming error in CMS/TSO Pipelines. (FPLOSM likely needs reassembly.)
! 3 LOAD instruction failed. The return code describes the error.
! System Action: The stage terminates with return code 1362.

! 1363E Odd string length number
! System Action: The stage terminates with return code 1363.

! 1364W Member word has no sections
! Explanation: Reason code X'10800062' is returned by IEWBFDAT.
! System Action: Processing continues. bfda writes a null record to both its streams and continues with the next member.

! 1365E Warp word not registered
! Explanation: The stage is not first in a pipeline, but the specified warp ID has not been registered by a warp stage that is first in a pipeline.
! System Action: The stage terminates with return code 1365.
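The key rules behind messages 1351E and 1360E can be sketched as a pre-check on the key record before it reaches a cipher stage. The helper below is illustrative only (check_key is not a CMS/TSO Pipelines interface), and it treats a Triple DES key as degenerate when all of its eight-byte parts are equal, which is the condition the 1360E text describes:

```python
# Illustrative pre-check of a cipher key record (not a product interface).
def check_key(key: bytes, cipher: str) -> None:
    valid = {"DES": (8, 16, 24), "3DES": (16, 24)}
    if len(key) not in valid[cipher]:
        raise ValueError(f"1351E Key length {len(key)} is not valid")
    if cipher == "3DES":
        # Split the key into its two or three eight-byte single-DES keys.
        parts = [key[i:i + 8] for i in range(0, len(key), 8)]
        # If all parts are equal, 3DES collapses to single DES.
        if len(set(parts)) == 1:
            raise ValueError("1360E Degenerate Triple DES key")

check_key(bytes(range(16)), "3DES")   # two distinct halves: accepted
# check_key(b"\x01" * 16, "3DES")     # would raise: degenerate key
```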
! 1370E IDR record indicates number bytes present, but record is number
! Explanation: The record contains X'80' in column 1, but column 2 does not contain one less than the record length.
! System Action: The stage terminates with return code 1370.

! 1371E Improper IDR language processor flag byte X'hex' at offset number
! Explanation: The byte should be 0 or 1, but it is not. The offset is relative to the first CESD number for a particular control section, not to the beginning of a particular record, as this type of identification data is spanned across records.
! System Action: The stage terminates with return code 1371.

! 1372E Improper control record prefix hex
! Explanation: One of the two length fields in the control record contains a number that is not a multiple of four.
! System Action: The stage terminates with return code 1372.

! 1373E Control record requests number bytes, but only number bytes are available
! System Action: The stage terminates with return code 1373.

! 1377E CTL record found as record X'hex', but count is X'hex'
! Explanation: A control record has indicated that a number of relocation dictionary records will follow the text record, but some other kind of record was found after that run. The load module is probably broken.
! This is rather vague because the load module format provides a one-byte count, but more than 256 RLD records have been observed in the wild. However, the count modulo 256 does not agree.
! System Action: The stage terminates with return code 1377.

! 1378E Installation validation routine rejected SVC 99
! Explanation: An installation exit has denied dynamic allocation.
! System Action: The stage terminates with return code 1378.
! User Response: Contact your systems support staff.

! 1379E Unexpected return code X'hex' on SVC 99
! Explanation: The return code was not one of the documented ones.
! System Action: The stage terminates with return code 1379.
! User Response: Contact your systems support staff.
! 1383E Unexpected EOF on primary input
! Explanation: The primary stream comes to end-of-file without having presented a control record indicating end of module.
! System Action: The stage terminates with return code 1383.

! 1384E Structure name expected; found string
! Explanation: A structure specifier is expected at the beginning of the input.
! System Action: The stage terminates with return code 1384.

! 1385E Structure name is empty
! Explanation: A colon is the next nonblank character after the structure name or there is no further input.
! System Action: The stage terminates with return code 1385.

! 1386E No structure name found
! Explanation: End-of-file was met after a colon indicating the beginning of the definition of a structure.
! System Action: The stage terminates with return code 1386.

! 1387E Incorrect first character in identifier: string
! Explanation: Structure and member names (often referred to as identifiers) must begin with a letter in the English alphabet or one of the special characters “@#$!?_” (at sign, number sign, dollar sign, exclamation point, question mark, and underscore). The second and subsequent characters may also be digits.
! Identifiers are case sensitive unless the structure is defined as caseless.
! System Action: The stage terminates with return code 1387.

! 1388E Structure already defined: name
! System Action: The stage terminates with return code 1388.

! 1389E Member already defined: name
! System Action: The stage terminates with return code 1389.

! 1390E Incomplete member definition: name
! Explanation: A member name or hyphen is found, but the definition is missing. Either end-of-file or a colon is met.
! System Action: The stage terminates with return code 1390.

! 1391E "char" is not valid in identifier string
! System Action: The stage terminates with return code 1391.

! 1392E Structure not defined: name
! System Action: The stage terminates with return code 1392.

! 1393E No structures defined in pipeline set
! System Action: The stage terminates with return code 1393.

! 1394E Structure still in use: name (number users)
! Explanation: An attempt is made to delete a structure that is embedded in another structure.
! User Response: Delete the embedding structure first or pass the two structure names on the same input line in any order.
! System Action: The stage terminates with return code 1394.

! 1395E Unqualified member name: name
! Explanation: No qualifier is active and the member name contains no period.
! System Action: The stage terminates with return code 1395.

! 1396E Incomplete inputRange string
! Explanation: A keyword is met that is valid in an inputRange, but no columns or members are specified.
! System Action: The stage terminates with return code 1396.

! 1397E Missing identifier in qualified name: word
! Explanation: The last character of the word is a period.
! System Action: The stage terminates with return code 1397.
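The identifier rule stated under message 1387E can be expressed as a regular expression. The sketch below is only an approximation of the product's scanner (for one thing, it does not model caseless structures):

```python
import re

# First character: a letter or one of @#$!?_ ; later characters may also
# be digits. (Illustrative only; the real scanner is part of
# CMS/TSO Pipelines.)
IDENT = re.compile(r"[A-Za-z@#$!?_][A-Za-z@#$!?_0-9]*\Z")

def is_identifier(word: str) -> bool:
    return IDENT.match(word) is not None

print(is_identifier("Total_1"))  # True
print(is_identifier("1Total"))   # False: may not begin with a digit
```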
! 1399E Member name further qualified with name
! Explanation: A scalar member is met in an identifier that is continued with a period to indicate a member of a structure.
! System Action: The stage terminates with return code 1399.

! 1400E Structure not further qualified: name
! Explanation: No member name is found.
! System Action: The stage terminates with return code 1400.

! 1401E Qualifier contains member: name
! Explanation: A qualifier is requested, but one of the levels refers to a member that is not an embedded structure.
! System Action: The stage terminates with return code 1401.

! 1407E Exponent overflow (number)
! Explanation: The input fixed point number is too large to convert to a hexadecimal floating point number.
! System Action: The stage terminates with return code 1407.

! 1408E Length of output member (number) is above maximum number
! Explanation: For fixed point, the limit is 128 bytes; for floating point, it is eight.
! System Action: The stage terminates with return code 1408.

! 1409E Counter exponent out of range for hexadecimal: number
! System Action: The stage terminates with return code 1409.
! 1417E String length cannot be negative : number
! System Action: The stage terminates with return code 1417.

! 1418E String position cannot be zero
! System Action: The stage terminates with return code 1418.

! 1419E Too few arguments in function call
! System Action: The stage terminates with return code 1419.

! 1420E Too many arguments in function call
! System Action: The stage terminates with return code 1420.

! 1421E DO expected; word was found
! Explanation: A condition expression has been scanned after WHEN, but there is no further data or the next word is not DO.
! System Action: The stage terminates with return code 1421.

! 1422E DONE expected; word was found
! Explanation: A condition expression has been scanned after WHILE and DO, but the next word terminates an IF group.
! System Action: The stage terminates with return code 1422.

! 1423E Incomplete WHILE
! System Action: A WHILE group has been opened, but end-of-file is met without a matching DONE. The stage terminates with return code 1423.

! 1424E Counter underflow
! Explanation: The exponent of a counter has underflowed.
! System Action: The stage terminates with return code 1424.

! 1425W Use parentheses when using the result of an assignment: string
! Explanation: An operator sees a counter assignment as its right hand operand.
! User Response: Enclose the assignment in parentheses.
! System Action: So far, this is a nuisance message to prod you to fix the expression.

! 1426I ... Evaluating "string"
! Explanation: This message is issued after an error message has been issued by the spec expression evaluator and the message level is odd.
! System Action: None.

! 1427E Exponent out of range: number
! Explanation: The fixed point binary number is too large for conversion to the internal counter format.
! The number may well be representable in the internal representation, but the conversion algorithm uses a limited exponent range corresponding to the one that is valid for hexadecimal floating point numbers.
! System Action: The stage terminates with return code 1427.

! 1428E Member name has no type
! System Action: The stage terminates with return code 1428.

! 1429E Member name has unsupported type char
! System Action: The stage terminates with return code 1429.

! 1430E Not hexadecimal: X'string'
! System Action: The stage terminates with return code 1430.

! 1431E Member name longer than 16M (it is number)
! System Action: The stage terminates with return code 1431.

! 1432E Bad placement option string
! Explanation: The third parameter in the output placement expression is not CENTRE (CENTER), LEFT, or RIGHT; or an abbreviation of these words.
! System Action: The stage terminates with return code 1432.

! 1433E Computed output length is negative (it is number)
! Explanation: A length of zero means take the default length as given by the data to be loaded.
! System Action: Processing terminates with return code 1433.
! 1434E Parse error in state number, unexpected string at offset number: "string"
! Explanation: The expression does not parse according to the grammar. The state number is of interest only to the author of CMS/TSO Pipelines; the first string shows the mnemonic name of the input token that the grammar cannot parse.
! System Action: Message 1435 is issued. Processing terminates with return code 1434.

! 1435I Expecting string
! Explanation: Lists the acceptable token names in the current parser state.
! System Action: Processing terminates with return code 1435.

! 1436E FIXED specified, but no record length specified and no input
! Explanation: Specify an explicit record length if you really want to create a null file with a particular record length.
! System Action: Processing terminates with return code 1436.

! 1437E Previous member did not establish a position for word
! Explanation: The previous member was specified as a word, field, auto field, or length *. Its position is not known at the time the structure is defined.
! System Action: Processing terminates with return code 1437.

! 1438I Incorrect text unit type X'hex'
! System Action: Further informational messages are issued. Processing terminates eventually.

! 1439E Left hand operand is a string
! Explanation: The left hand operand of an arithmetic operator is a counter that contains a string.
! System Action: Processing terminates with return code 1439.

! 1441I ... Processed number structures and number members in next structure
! Explanation: Informational message issued when structure ADD is terminating because of an error. The first number represents the number of completely finished structure definitions; zero means that the error is in the first structure or member.

! 1442E Both ranges specify same length as other string
! Explanation: The two ranges in the comparison both specify plus as the length; this makes the length indeterminate.
! System Action: Processing terminates with return code 1442.

! 1443E Comma list is available only with equal compares
! Explanation: The right hand range specifies plus as its length.
! System Action: Processing terminates with return code 1443.

! 1444E Comma list is not available with implied length
! Explanation: The operator must be = or ==.
! System Action: Processing terminates with return code 1444.

! 1445I Error in call to function: string
! Explanation: A built-in function has detected an error.

! 1446E String contains leading or trailing blank : "string"
! Explanation: The argument to X2C must not contain leading or trailing blanks.
! System Action: Processing terminates with return code 1446.

! 1447E String contains blank not on byte boundary: "string"
! System Action: Processing terminates with return code 1447.
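Messages 1446E and 1447E describe the blank rules for the argument of the X2C function. A rough model, assuming blanks are legal only between pairs of hexadecimal digits; the real function may be more permissive (for example, an odd leading group), so treat this strictly as a sketch of the two stated checks:

```python
# Illustrative model of the X2C blank rules (not the product's code):
# no leading or trailing blanks (1446E), and blanks only on byte
# boundaries, i.e. between complete digit pairs (1447E).
def x2c(s: str) -> bytes:
    if s != s.strip():
        raise ValueError("1446E String contains leading or trailing blank")
    for group in s.split():
        if len(group) % 2:
            raise ValueError("1447E String contains blank not on byte boundary")
    return bytes.fromhex(s.replace(" ", ""))

print(x2c("C1C2 C3").hex())  # c1c2c3
```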
! 1449E Pad character is a string, not a single character: string
! System Action: Processing terminates with return code 1449.

! 1450E Option string is null
! Explanation: If specified, the string must contain at least one character (it can be any positive length).
! System Action: Processing terminates with return code 1450.

! 1451E Option string is not valid for function: character
! Explanation: The first character of the option string is not a character that is valid for the function.
! System Action: Processing terminates with return code 1451.

! 1452E Argument is a string, not a single character: string
! Explanation: The arguments to the XRANGE function must both be a single character unless they are omitted.
! System Action: Processing terminates with return code 1452.

! 1453I Trap issued the CP command "string"
! Explanation: Informational message to indicate that an activated trap has sprung. Most likely there will be a dump of the virtual machine in a reader somewhere.

! 1454E Not valid packed data hex
! Explanation: A field that should contain packed decimal data contains an incorrect bit combination.
! System Action: Processing terminates with return code 1454.

! 1455E Output field is number bytes, but packed number requires number bytes to avoid truncation
! Explanation: A counter is converted to packed decimal integer.
! System Action: Processing terminates with return code 1455.

! 1456E Scale not numeric: string
! Explanation: A member type character is met followed by a left parenthesis, but the word in the parentheses is not a number.
! System Action: Processing terminates with return code 1456.

! 1457E Scale out of bounds: number (-32768 to 32767 is valid range)
! Explanation: A member type character is met followed by a left parenthesis, the word in the parentheses is a number, but it is too large.
! System Action: Processing terminates with return code 1457.

! 1458E Semicolon expected; found string
! Explanation: A prefix or suffix specification does not end at a semicolon.
! System Action: Processing terminates with return code 1458.

! 1459E Plus or minus expected; found string
! Explanation: A prefix or suffix specification must indicate whether matching it means to write a hit record (plus) or not (minus).
! System Action: Processing terminates with return code 1459.

! 1460E Incomplete pattern
! Explanation: A prefix or suffix specification is not ended in a comma or semicolon; an optional pattern list does not end in >.
! System Action: Processing terminates with return code 1460.

! 1461E Missing pattern at string
! System Action: Processing terminates with return code 1461.

! 1462E Comma expected; found string
! Explanation: A prefix or suffix specification does not end at a comma or semicolon.
! System Action: Processing terminates with return code 1462.

! 1463E No matching specified
! Explanation: A hyphen is specified for both the prefix and the postfix pattern (an omitted postfix is treated like a hyphen).
! System Action: Processing terminates with return code 1463.
! 1464E Odd number of nibbles (number) in pattern: string
! Explanation: You guessed it. The pattern must cover complete bytes.
! User Response: If you desire an odd number of matched nibbles, add a “don’t care” nibble (a period).
! System Action: Processing terminates with return code 1464.

! 1465E Missing number at end of pattern: string
! Explanation: An ampersand ends the pattern. It must have a number to specify the register to hold the nibble.
! System Action: Processing terminates with return code 1465.

! 1466E Pattern longer than 32767 bytes
! Explanation: The pattern to match or the control structure to describe it is too long.
! System Action: Processing terminates with return code 1466.

! 1467E Expect >; found char
! Explanation: An optional item list was opened, but it was terminated by a comma or semicolon.
! System Action: Processing terminates with return code 1467.

! 1468E Semicolon, colon, or comma expected; found char
! Explanation: A required item list ends in >.
! System Action: Processing terminates with return code 1468.

! 1469E Unexpected end of module word
! Explanation: End-of-file or a null record is read. The module should end with an end of module flag bit in a control record.
! System Action: Processing terminates with return code 1469.

! 1470E CCW length number differs from record length number module word
! Explanation: The control record specifies a different length than the actual text record that follows. The module being read is broken.
! System Action: Processing terminates with return code 1470.

! 1471E Unrecognised STOP parameter: word
! Explanation: PIPMOD STOP is issued, but the additional keyword is not ACTIVE.
! System Action: Processing terminates.

! 1472E Unrecognised PIPMOD immediate command: word
! Explanation: PIPMOD is issued, but the subcommand is not valid.
! System Action: Processing terminates.

! 1473E Unable to obtain global lock; held by hex
! Explanation: The global lock is held, which makes further processing impossible. The address of the lock control word is substituted in the message.
! System Action: Processing terminates.

! 1474I Global hex
! 1475I Thread hex
! 1476I Header hex
! 1477I Vector hex
! 1478I Stage hex
! Explanation: Messages issued in response to the immediate command PIPMOD WHERE that display the pipeline control blocks.

! 1479I Running: string
! 1480I In procedure word
! Explanation: Messages issued in response to the immediate command PIPMOD ACTIVE.

! 1481I Stage is flagged to stop. PSW not in CMS/TSO Pipelines code
! Explanation: PIPMOD STOP ACTIVE was issued. The active stage has been flagged to stop next time it enters the dispatcher. The instruction address of the I/O PSW does not point to CMS/TSO Pipelines code.

! 1482I Stage is in the dispatcher; likely to stop on the way out.
! Explanation: PIPMOD STOP ACTIVE was issued. The active stage has been flagged to stop. The instruction address of the I/O PSW points into the dispatcher; the stage is likely to terminate immediately.
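The check behind message 1464E, that a nibble pattern must cover complete bytes, amounts to counting nibbles, where a period is a “don’t care” nibble. A sketch (the helper name is illustrative, not a product interface):

```python
# Count the nibbles in a pattern of hexadecimal digits and "." don't-care
# nibbles; an odd count cannot describe whole bytes (message 1464E).
def check_nibble_pattern(pattern: str) -> None:
    nibbles = [c for c in pattern if c in "0123456789abcdefABCDEF."]
    if len(nibbles) % 2:
        raise ValueError(
            f"1464E Odd number of nibbles ({len(nibbles)}) in pattern: {pattern}"
        )

check_nibble_pattern("C1.2")    # four nibbles: accepted
# check_nibble_pattern("C1.")   # three nibbles: would raise; add a "." to fix
```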
! 1483I Stage is flagged to stop. PSW in wait/free storage management.
! Explanation: PIPMOD STOP ACTIVE was issued. The active stage has been flagged to stop next time it enters the dispatcher. The instruction address of the I/O PSW points to the wait routine or the free storage manager.

! 1484I Stage is flagged to stop. It is not summarily stoppable
! Explanation: PIPMOD STOP ACTIVE was issued. The active stage has been flagged to stop next time it enters the dispatcher. The stage cannot be stopped at this point, but it is likely to terminate next time it calls the dispatcher.

! 1485I Stage is flagged to stop. I/O old PSW and IOPSW fields are not the same. Type B to continue
! Explanation: PIPMOD STOP ACTIVE was issued. The active stage has been flagged to stop next time it enters the dispatcher. The stage cannot be stopped at this point because the CMS I/O information is in an inconsistent state. A CP read has been put up to allow you to examine the CMS control blocks and low core.

! 1486I Stage is flagged to stop. Forcing exit from word
! Explanation: PIPMOD STOP ACTIVE was issued. The PSW has been modified to force the stage to return.

! 1487E Checksum field in column number is not within record length number
! Explanation: The field to receive the checksum is not present in the input record.
! System Action: Processing terminates with return code 1487.

! 1488E Convert index number is not implemented
! Explanation: The conversion routine selected is not present in CMS/TSO Pipelines.
! User Response: Contact your systems support staff.
! System Programmer Response: This is an error in CMS/TSO Pipelines.
! System Action: Processing terminates with return code 1488.

! 1489E Unable to convert from negative to unsigned
! Explanation: Conversion to unsigned binary is requested, but the input number is negative.

! 1490I Processing item number number: string
! Explanation: Informational message from spec when it terminates due to a run time error. Specification items are numbered from zero. The first item is a SELECT item generated internally to select the primary input stream.

! 1491E Field identifier specified, but no further operands are present
! System Action: Processing terminates with return code 1491.

! 1492E Field identifier specified, but no valid range found: word
! System Action: Processing terminates with return code 1492.

! 1493E Too few streams are defined; number are present, but number are required
! System Action: The stage terminates with return code 1493.

! 1494E Equal sign expected; end of member found
! Explanation: A left parenthesis is met indicating that a list of manifest constants is to follow and an identifier has been scanned, but end of input or a colon was met where an equal sign is expected.
! System Action: Processing terminates with return code 1494.

! 1495E Equal sign expected; found char
! Explanation: A left parenthesis is met indicating that a list of manifest constants is to follow and an identifier has been scanned, but the next non-blank character is not an equal sign.
! System Action: Processing terminates with return code 1495.

! 1496E Number expected; end of member found
! Explanation: A left parenthesis is met indicating that a list of manifest constants is to follow and an identifier has been scanned as well as an equal sign, but end of input or a colon was met where a number is expected.
! System Action: Processing terminates with return code 1496.
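The condition behind message 1489E is simply that unsigned binary has no representation for a negative value. A sketch of the check (to_unsigned is an illustrative helper, not a CMS/TSO Pipelines interface):

```python
# Unsigned binary conversion is defined only for non-negative values
# (message 1489E); a value that does not fit the field is also an error.
def to_unsigned(value: int, width: int) -> bytes:
    if value < 0:
        raise ValueError("1489E Unable to convert from negative to unsigned")
    return value.to_bytes(width, "big")   # raises OverflowError if too wide

print(to_unsigned(258, 2).hex())  # 0102
```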
! 1497E Comma or right parenthesis expected; end of member found
! Explanation: A complete manifest constant has been scanned, but end of input or a colon was met where a comma or a right parenthesis is expected.
! System Action: Processing terminates with return code 1497.

! 1498E Comma or right parenthesis expected; found char
! Explanation: A complete manifest constant has been scanned, but end of input or a colon was met where a comma or a right parenthesis is expected.
! System Action: Processing terminates with return code 1498.

! 1504E Index missing for member word
! Explanation: A member has been resolved to an array and a left parenthesis is scanned indicating that an index is present, but no further data are present.
! System Action: Processing terminates with return code 1504.

! 1505E Right parenthesis expected after index
! System Action: Processing terminates with return code 1505.

! 1506E Index number is out of bounds (number)
! System Action: Processing terminates with return code 1506.
! 1513E Expect period after subscript of identifier string; found word
! Explanation: The subscript must consist of digits only.
! System Action: The stage terminates with return code 1513.

! 1514E Subscript "word" is not valid in identifier string
! Explanation: The member must be subscripted. The subscript must consist of digits only. It must also evaluate to 1 or more and not larger than the array bound.
! System Action: The stage terminates with return code 1514.

! 1515E Top level structure "word" cannot be subscripted
! Explanation: A structure name cannot be subscripted.
! System Action: The stage terminates with return code 1515.

! 1516E Last character of identifier is a period: word
! Explanation: A member name is required after the period.
! System Action: The stage terminates with return code 1516.

! 1517E No active qualifier for word
! Explanation: A single period is specified at the beginning of a member name to indicate that the current qualifier must be used, but no qualifier has been established for the stream.
! System Action: The stage terminates with return code 1517.

! 1521E ALET hex is not valid
! Explanation: The ALET supplied on input will cause a program check if used (alserv TEST), or return code 12 is set on ALSERV REMOVE to indicate that the ALET is malformed.
! System Action: The stage terminates with return code 1521.

! 1522E Virtual machine is not in XC mode
! Explanation: Validating the ALET supplied on input will cause a program check for special operation exception.
! System Action: The stage terminates with return code 1522.

! 1523E ASIT hex is not valid
! Explanation: The ASIT supplied on input does not identify an address space owned by the virtual machine. (Return code 4 on ADRSPACE PERMIT)
! System Action: The stage terminates with return code 1523.

! 1524E VCIT hex does not represent a user that is logged in
! Explanation: The VCIT specified does not represent the primary space of a user that is currently logged on. (Return code 28 on ADRSPACE PERMIT)
! System Action: The stage terminates with return code 1524.

! 1525E User word is not logged on
! Explanation: (Return code 28 on ADRSPACE PERMIT)
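The subscript rules spelled out under messages 1513E and 1514E, digits only, at least 1, and no larger than the array bound, can be sketched as follows (the helper is illustrative, not a product interface):

```python
# Validate an array subscript per the rules of messages 1513E/1514E:
# ASCII digits only, and 1 <= value <= bound.
def check_subscript(word: str, bound: int) -> int:
    if not (word.isascii() and word.isdigit()):
        raise ValueError(f'1514E Subscript "{word}" is not valid')
    n = int(word)
    if not 1 <= n <= bound:
        raise ValueError(f'1514E Subscript "{word}" is not valid')
    return n

print(check_subscript("3", 5))  # 3
```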
! 1528E Address space name word is not valid
! Explanation: The name contains a character that is not valid in an address space name. (Return code 16 on ADRSPACE)
! System Action: The stage terminates with return code 1530.

! 1531E Maximum size of address spaces is exceeded
! Explanation: (Return code 12 on ADRSPACE CREATE)
! System Action: The stage terminates with return code 1531.

! 1532E Address space size is not valid: number
! Explanation: The number is larger than 524,288, which is the number of pages in a two gigabyte address space. (Return code 20 on ADRSPACE CREATE)
! System Action: The stage terminates with return code 1532.

! 1533W ASIT hex is already permitted to user word
! Explanation: The secondary output stream is not defined. (Return code 24 on ADRSPACE PERMIT)
! System Action: Processing continues.

! 1536W ALET hex is neither valid nor revoked.
! Explanation: (Return code 4 on ALSERV REMOVE)
! System Action: Processing continues.

! Explanation: More than 509 ranges are presented in one input record to mapmdisk SAVE.
! System Action: The stage terminates with return code 1539.

! 1540E Page number too large: number
! Explanation: The number is larger than the maximum number of pages in a data space.
! System Action: The stage terminates with return code 1540.

! 1541E Digit "character" is not hexadecimal in string number
! System Action: The stage terminates with return code 1541.

! 1542E Hexadecimal string too long: number
! System Action: The stage terminates with return code 1542.
! 1545E Data space ALET hex is not initialised properly; eye-catcher is word
! Explanation: A stage that loads data into a data space is invoked with a nonzero ALET
! operand, but the first part of the data space is not in the proper format. In particular, the
! first eight bytes do not contain the string fplasit1.
! System Action: The stage terminates with return code 1545.

! 1546E Data space ALET hex is in use; lock is word
! Explanation: A stage that loads data into a data space is invoked with a nonzero ALET
! operand, but the lock word in the data space is not binary zeros. This indicates that some
! other stage is also using the address space. The contents of the lock shown may give a
! clue to the name of the stage that holds the lock.
! System Action: The stage terminates with return code 1546.

! 1547W Data space ALET hex contains unexpected lock word
! Explanation: A stage that can load data into a data space is invoked with a nonzero ALET
! operand. When it is terminating, it finds that the lock in the data space has been changed
! to the value shown.
! System Action: The stage terminates normally.

! 1548E Insufficient space in the data space for number bytes
! Explanation: A stage that loads data into a data space is invoked with a nonzero ALET
! operand, and the data space is full.
! System Action: The stage terminates with return code 1548.

! 1549E ALET and PGMLIST are incompatible
! System Action: The stage terminates with return code 1549.

! 1551E Data space ALET hex is not locked
! Explanation: A stage that reads data from a data space is invoked with a nonzero ALET
! operand, but it is found that the lock in the data space was cleared.
! System Action: The stage terminates with return code 1551.

! 1552E Data space is fetch protected in key hex (PSW key hex)
! Explanation: A stage that accesses a data space will not be able to do so because the data
! space is fetch protected and the PSW key is nonzero and different from the key of the first
! frame in the data space.
! System Action: The stage terminates with return code 1552.

! 1553E Data space is write protected in key hex (PSW key hex)
! Explanation: A stage that moves data into a data space will not be able to do so because
! the data space is write protected and the PSW key is nonzero and different from the key of
! the first frame in the data space.
! System Action: The stage terminates with return code 1553.

! 1554E Data space ALET hex cannot be written
! Explanation: A stage that moves data into a data space will not be able to do so because
! of the protection system. The data space can be read, but not written.
! System Action: The stage terminates with return code 1554.

! 1555E Data space ALET hex is not accessible
! Explanation: A stage that accesses a data space will not be able to do so because of the
! protection system. (Test protect condition codes 2 or 3.)
! System Action: The stage terminates with return code 1555.
! 1561E Counter number number is not valid (valid: number to number)
! Explanation: The subscript to the counter array is out of bounds. Be sure to specify
! COUNTERS when using counter arrays. No counter array is allocated when the second
! number is larger than the third.
! System Action: The stage terminates with return code 1561.

! 1562E Incorrect UTF-number X'hex' reason code number
! Explanation: The input contains a sequence of characters that is not valid for the UTF
! format specified. The reason code indicates the error:
! 4   The UTF-8 input is binary zeros, but MODIFIED is specified (U+0000 should be
!     encoded as X'C080').
! 8   The first byte of a UTF-8 encoded character is of the form B'10xxxxxx', which is
!     reserved for additional bytes in a multibyte sequence. Most likely, a multibyte
!     sequence is too long.
! 12  More bytes are required for the character, but the input record is exhausted.
! 16  A byte other than the first of a UTF-8 encoded character has the leftmost bit zero
!     (B'0xxxxxxx'). That is, a multibyte sequence ended prematurely.
! 20  A byte other than the first of a UTF-8 encoded character has the second bit one
!     (B'x1xxxxxx'). That is, a multibyte sequence ended prematurely.
! 24  The first byte of a UTF-8 encoded character contains five leftmost one bits
!     (B'11111xxx'). Such sequences were defined in RFC 2279 for encodings needing
!     more than 21 bits, but retracted in RFC 3629.
! 28  Overlong UTF-8 encoding. Two bytes for a seven bit code point, except for zero
!     when MODIFIED is specified. Also any longer sequence that could be expressed in
!     fewer bytes.

! 1564E Beginning block number number larger than device capacity number
! System Action: The stage terminates with return code 1564.

! 1565E Ending block number number larger than device capacity number
! System Action: The stage terminates with return code 1565.

! 1566E Record size (number blocks) does not agree with block count number in record
! System Action: The stage terminates with return code 1566.

! 1567E Beginning block number is greater than ending block number
! Explanation: The third operand is larger than the fourth.
! System Action: The stage terminates with return code 1567.

! 1568E Block number before first writable block
! System Action: The stage terminates with return code 1568.

! 1569E Block number after last writable block
! System Action: The stage terminates with return code 1569.
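The reason codes for message 1562 can be sketched as a small checker. The following Python rendition is illustrative only, not the CMS/TSO Pipelines implementation; the function name and the handling of the MODIFIED flag are assumptions drawn from the message text, and the sketch covers UTF-8 only.

```python
def utf8_reason(rec: bytes, modified: bool = False) -> int:
    """Return the first reason code (4..28) found, or 0 when the record
    is valid UTF-8 by the rules listed for message 1562."""
    i = 0
    while i < len(rec):
        b = rec[i]
        if b == 0x00 and modified:
            return 4                  # U+0000 must be encoded as X'C080'
        if b < 0x80:                  # single-byte character
            i += 1
            continue
        if b & 0xC0 == 0x80:
            return 8                  # continuation byte in first position
        if b & 0xF8 == 0xF8:
            return 24                 # five leading one bits: retracted form
        # length from the leading byte: 110xxxxx, 1110xxxx, 11110xxx
        length = 2 if b & 0xE0 == 0xC0 else 3 if b & 0xF0 == 0xE0 else 4
        if i + length > len(rec):
            return 12                 # record exhausted mid-character
        cp = b & (0x7F >> length)     # payload bits of the leading byte
        for trail in rec[i + 1 : i + length]:
            if trail & 0x80 == 0:
                return 16             # leftmost bit of a trailing byte is zero
            if trail & 0x40:
                return 20             # a new leading byte: sequence cut short
            cp = cp << 6 | trail & 0x3F
        # overlong: fewer bytes would have sufficed (X'C080' is exempt
        # when MODIFIED is in effect)
        limits = {2: 0x80, 3: 0x800, 4: 0x10000}
        if cp < limits[length] and not (modified and cp == 0 and length == 2):
            return 28
        i += length
    return 0
```

For example, the overlong encoding X'C080' yields reason 28 in standard mode but is accepted when MODIFIED is in effect.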
! 1570E Mode word does not refer to an SFS directory
! System Action: The stage terminates with return code 1570.

! 1571E Self-defining is too long (number bits): string
! System Action: The stage terminates with return code 1571.

! 1572E Needle cannot be empty
! Explanation: The second argument to SUBSTITUTE is a null string.
! System Action: The stage terminates with return code 1572.

! 1573E Argument number is required
! Explanation: An argument that is not optional was omitted.
! System Action: The stage terminates with return code 1573.

! 1574E Number is not an integer: number
! System Action: The stage terminates with return code 1574.

! 1575E First argument to D2C/D2X is negative, but second argument is omitted: number
! Explanation: d2c() and d2x() with one argument support positive or zero only.
! System Action: The stage terminates with return code 1575.

! 1576E Stage is not running in a subroutine pipeline
! Explanation: structure with CALLER is not running in a subroutine pipeline. Thus, it has
! no caller and the caller scope does not exist.
! System Action: The stage terminates with return code 1576.

! 1577E No structures defined in caller
! System Action: The stage terminates with return code 1577.
It issues the command PIPMOD INSTALL to make the main module initialise itself.
PIPMOD then installs a PIPE nucleus extension to service future PIPE commands.
It reissues the PIPE command to let it be processed by the newly installed nucleus
extension.
CMS Pipelines uses two nucleus extensions, because PIPMOD is reentrant and refreshable
and is thus loaded into system storage to be protected from user programming errors. The
PIPE command, however, can potentially run user programs and therefore uses user storage
for work areas. Running the PIPE command as a user command also means that CMS will
perform ABEND recovery in the event of an error.
You can drop the PIPE nucleus extension at any time; it will be installed again by the next
PIPE command.
Red Neon!
Do not drop the PIPMOD nucleus extension from a pipeline stage (for example, cms).
CMS Pipelines issues message 107 when its service entry point is called as a result of
the NUCXDROP command, but it is unable to prevent the storage occupied by the
module from being released. An ABEND is most probable when control returns to CMS
Pipelines.
INSTALL Create the nucleus extension (if it does not already exist) and install
filter packages that are not already installed. If a word is specified, it is
used instead of PIPE as the name of the command that runs a pipeline
specification.
MSGLEVEL Set the rightmost halfword of the pipeline message level to the number
that follows. The number is decimal (unlike the option on runpipe).
The PIPMOD MSGLEVEL command can set all of the rightmost sixteen bits
of the message level (unlike the global or local option MSGLEVEL, which
is masked by X'17FF'). The individual bits are explained below.
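The masking difference just described can be sketched as follows. This is an illustration only: the mask X'17FF' comes from the text, while the function names and the assumption that the leftmost halfword of the message level is preserved are invented for the sketch.

```python
MSGLEVEL_OPTION_MASK = 0x17FF   # bits the MSGLEVEL option may set (from the text)

def set_via_option(level: int, requested: int) -> int:
    """Global or local MSGLEVEL option: the requested bits are masked."""
    return (level & ~0xFFFF) | (requested & MSGLEVEL_OPTION_MASK)

def set_via_pipmod(level: int, requested: int) -> int:
    """PIPMOD MSGLEVEL: all sixteen rightmost bits can be set."""
    return (level & ~0xFFFF) | (requested & 0xFFFF)
```

Note how a request for X'8000' survives only through the PIPMOD path in this sketch.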
! CMS does not allow immediate commands to issue error messages. Thus, PIPMOD issues
! diagnostic messages and responses through CP messages to the virtual machine.
! ──PIPMOD──ACTIVE──
! In general the currently active stage will change as the dispatcher manages the flow of
! work in the pipeline.
! When ACTIVE is omitted, all stages that are waiting for an asynchronous event are signalled
that they should terminate. It can be used to stop a stage that is waiting for an event that
will never occur. Note that CMS services are in general synchronous; the immediate
command PIPMOD will not terminate a stage such as disk. Note that PIPMOD STOP does not
terminate a stage that is in a loop unless it calls the dispatcher (use HI for REXX programs
or HX). See “Device Drivers that Wait for External Events” on page 248 for a list of the
built-in programs that can potentially wait on asynchronous events.
! A stopable stage (sometimes called summarily stopable) is a stage that has specified that it
! uses only resources that the dispatcher knows how to release. It is not documented which
! stages are stopable (and this attribute may indeed change for a stage over time), but most
! filters are, including spec.
! When the stage is terminated because the dispatcher sees the flag to stop, the stage is
! resumed with return code -4092. When the running stage is stopable and the I/O old PSW
! does not indicate that control is in the pipeline dispatcher, the stage will be forcefully
! terminated by making it return to the dispatcher with return code -4091 immediately when
! DMSITI returns to the interrupted program.
! ──PIPMOD──WHERE──
As of level 1.1.10/0015, these differences are no longer fixed in the code; instead, the
behaviour is specified by pipeline configuration variables.
There are two types of configuration variables, keywords and values. Keyword variables
must be set to one of the supported keywords; a value variable specifies a word of up to
eight characters to be used in some context. Case is ignored in the names of configuration
variables; value variables are folded to upper case.
A default is determined by this hierarchy, where the first item has highest priority:
1. It can be set explicitly by the configure built-in program.
2. It can be stored in the system variable repository (a GLOBALV variable on CMS).
For each variable that has not been set explicitly, CMS/TSO Pipelines reads the variable
from the system variable repository the first time it needs to inspect a particular variable; it
saves the value internally from then on.
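The lookup order described above can be sketched as follows, with Python dictionaries standing in for the GLOBALV repository, the explicit settings, and the internal cache; all names and the folded-in style defaults are illustrative, not actual internals.

```python
STYLE_DEFAULTS = {"DISKREPLACE": "COPY"}   # stand-in for per-style defaults
globalv_repository = {}                     # stand-in for the system variable repository
_explicit = {}                              # set by the configure built-in program
_cache = {}                                 # values already read from the repository

def configure(name: str, value: str) -> None:
    """Explicit set: highest priority, checked on every lookup."""
    _explicit[name.upper()] = value.upper()  # names and values are folded to upper case

def lookup(name: str) -> str:
    name = name.upper()                      # case is ignored in variable names
    if name in _explicit:
        return _explicit[name]
    if name not in _cache:                   # read the repository once, then cache
        _cache[name] = globalv_repository.get(name, STYLE_DEFAULTS.get(name, ""))
    return _cache[name]
```

The cache models the "saves the value internally from then on" behaviour: a later change to the repository is not seen, but an explicit configure still wins.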
Default Styles
The default style specifies the default for configuration variables that you have not set
explicitly and for which there is no variable in the system variable repository. You can
select one of three default styles by setting the configuration variable STYLE:
DMS This style sets the defaults to the values associated with the behaviour in
Virtual Machine/Enterprise Systems Architecture. This style is the default for
the module shipped with Virtual Machine/Enterprise Systems Architecture.
PIP This style sets the defaults to the values associated with the field test version
of CMS/TSO Pipelines prior to 1.1.10/0015. This style is the default for the
“runtime distribution” and for the module available from the VMTOOLS tools
disk internally in IBM.
FPL This style sets the defaults to a mixture of the two previous styles. It
represents the recommended choice for each variable.
CMS Considerations
On CMS, the pipeline variables are stored by default within the GLOBALV group FPL. The
group to use for subsequent queries can be changed by pipe literal group MYPIPE |
configure.
Configuration Variables
The following sections contain an alphabetical list of CMS/TSO Pipelines’s configuration
variables. The name of the variable is used for the section heading. The valid values for
keyword variables are described in tables. The three columns of each table contain:
1. The value or keyword.
2. The style in which it is the default, if any.
3. A description.
Diskreplace
This keyword variable controls how >mdsk replaces a file in an SFS directory that is
accessed with a mode letter. After the data are written to a temporary file, CMS/TSO
Pipelines replaces either the file or its contents. The acceptable values are:
Copy DMS Use the callable service to make the SFS server replace
the previous contents of the output file with the contents
of the temporary file. This retains the characteristics of
the file at the expense of additional I/O in the SFS server
virtual machine.
Replace PIP, FPL Use the copy/erase method of replacing the old file with
the new one. This reduces the load on the SFS server;
but because the file is replaced, all authorisations are lost
and the file creation date is changed.
Notes:
1. Since level 1.1.9, CMS Pipelines has been able to replace files directly in an SFS
directory (CMS9 required). Specify the directory where the file resides instead of the mode
letter. When a directory is specified, > uses native CSL routines to replace the file and
this configuration variable becomes irrelevant.
Disktempfiletype
This keyword variable specifies the file type used by >mdsk when it replaces a file in a
directory that is accessed as a mode letter.
Notes:
1. Since level 1.1.9, CMS Pipelines has been able to replace files directly in an SFS
directory (CMS9 required). Specify the directory where the file resides instead of the mode
letter. When a directory is specified, > uses native CSL routines to replace the file and
this configuration variable becomes irrelevant.
Group
The value of this variable specifies the GLOBALV group where configuration variables are
stored. The default group is FPL in all styles; it can be changed only by the configure
built-in program (clearly, the group cannot be specified by a variable within itself).
Repository
The value of this variable specifies the message repository to use for messages. A single
hyphen (-) means that no message repository is used; CMS/TSO Pipelines then issues
built-in messages.
SQLpgmname
The value of this variable specifies the program name to use by the sql stage. The
program name must match the value specified by the PREP= operand of the SQLPREP
command that generated the access module.
SQLpgmowner
The value of this variable specifies the program owner to use by the sql stage. The
program owner must match the SQL user ID that issued the SQLPREP command that gener-
ated the access module.
Notes:
1. The user ID must begin with a letter when DB2 is used.
Stallaction
This keyword variable controls the behaviour when the pipeline dispatcher determines that
the pipeline is stalled. The original implementation issued message 29 and a message for
each stage in the pipeline set; it then appended a formatted dump of the control block
structure to a disk file.
Stallfiletype
The value of this variable controls the file type used when appending the status of the
stages in a pipeline set to a dump file. The STALLFILETYPE configuration variable is of
interest only when the STALLACTION configuration variable is set to JEREMY or to
STATUSDUMP.
When the last character of the STALLFILETYPE configuration variable is an asterisk (*), the
characters before the asterisk are used as they are; the remaining characters, up to a total
length of eight, form a number padded with leading zeros. CMS/TSO Pipelines increments
this number until it finds a file type for which there is no file. If all such files exist,
the dump is appended to the last file found.
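The numbering rule can be sketched like this; `file_exists` stands in for the CMS file-system check, and the function name is invented for the illustration.

```python
def stall_filetype(template: str, file_exists) -> str:
    """Expand a STALLFILETYPE value ending in '*' to a concrete file type."""
    if not template.endswith("*"):
        return template                    # no asterisk: use the value as-is
    prefix = template[:-1]                 # characters before the asterisk
    digits = 8 - len(prefix)               # pad the number out to eight characters
    last = None
    for n in range(10 ** digits):
        candidate = prefix + str(n).zfill(digits)
        if not file_exists(candidate):
            return candidate               # first unused file type wins
        last = candidate
    return last                            # all exist: append to the last file found
```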
Style
This keyword variable controls the defaults for other variables. There are three default
styles:
For compatibility, the default for the Style configuration variable is DMS for Virtual
Machine/Enterprise Systems Architecture; it is PIP for the field test version.
The style also governs other actions, which cannot be controlled through individual vari-
ables:
Notes:
1. The way help works is governed by the default style. For PIP and FPL, it behaves as
ahelp, which displays information from PIPELINE HELPLIB (if it is available); for DMS,
it issues the CMS HELP command to display a standard Virtual Machine/Enterprise
Systems Architecture help file.
2. The application ID in the message prefix is controlled by the default style when the
repository value is set to a hyphen. In the PIP style, the application ID is set to PIP; in
the other two styles it is set to FPL.
! VM
! The PIPMOD immediate command supports the options ACTIVE and STOP ACTIVE. As
! immediate commands are not allowed to perform I/O, the command response is via CP
! messages.
! The commands are applied to all threads. This discussion assumes that only the main
! thread is running a pipeline; that is, we ignore CMS multitasking.
! PIPMOD STOP ACTIVE sets a flag for the dispatcher to terminate the stage, but whether the
! dispatcher sees this flag is another matter.
! Refer to “PIPMOD Immediate Commands” on page 837 for a discussion of these two
! immediate commands.
! A stage that is not stopable and in a loop will not call the dispatcher and therefore it
! cannot be terminated. CMS command HX is still the only option in this case.
! Also note that CMS will interpret the PIPMOD command as a normal command when
! entered at the Ready prompt. Neither ACTIVE nor STOP is a valid PIPMOD subcommand.
! Traps
! A trap facility is implemented for CMS Pipelines to allow the user to specify CP
! commands to issue when a particular message is issued.
! The facility is enabled for messages FPLDSK124E and FPLDSK129E. There is no user inter-
! face to enable other messages, though it would be possible to write a REXX program to
! enable other messages in the field, should this be necessary.
! When an enabled message has been issued, the global variable corresponding to the
! module, message, and severity (for example, DSK124E) is fetched from the global variable
! group FPLTRAP and inspected. If the contents of the variable are not blank, its value is
! stripped of leading blanks and passed directly to CP with diagnose 8 and then logged with
! message 1453. The return code is ignored.
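The lookup-and-issue sequence can be sketched as follows; the dictionary stands in for GLOBALV group FPLTRAP and `issue_cp` for diagnose 8, with all names invented for illustration.

```python
def fire_trap(fpltrap: dict, module: str, msgno: int, severity: str,
              issue_cp) -> bool:
    """Return True when a trap command was issued for this message."""
    key = f"{module}{msgno}{severity}"        # variable name, e.g. DSK124E
    command = fpltrap.get(key, "")
    if not command.strip():
        return False                          # blank or absent: no trap fires
    issue_cp(command.lstrip())                # strip leading blanks, pass to CP;
    return True                               # the return code is ignored
```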
! The most useful command to issue by these means is expected to be the CP command
! VMDUMP; you might instead send a message to someone (or, given sufficient privileges,
! shut down the system).
! The usual rules apply: The command is limited to 240 bytes; it should be upper case,
! except for strings that you wish to be mixed case; line end characters (X'15') separate
! multiple commands.
! It is deliberate that the command is issued directly to CP; this makes the current register set
! easy to find (registers 0 through 12 are the ones of the program that issued the message);
! had a CMS command been issued instead, it would be tedious to find the registers (though
! clearly possible given sufficient stamina).
! Assuming you create a VMDUMP SPOOL file, you can read it in, then access MAINT
! 193 and inspect it. The CMS/TSO Pipelines home page contains a sample VM Dump
! Tool macro, FPLDSK VMDT, to extract information from the dump file for the two enabled
! messages. Refer to:
! http://vm.marist.edu/%7epipeline/FPLDSK.vmdt
abbreviation abbrev Select Records that Contain an Abbreviation of a Word in the First Positions
access list alserv Manage the Virtual Machine’s Access List
ACI acigroup Write ACI Group for Users
active aftfst Write Information about Open Files
address ip2socka Build sockaddr_in Structure
address socka2ip Format sockaddr_in Structure
address 3277bfra Convert a 3270 Buffer Address Between Representations
AES cipher Encrypt and Decrypt Using a Block Cipher
AFT aftfst Write Information about Open Files
after append Put Output from a Device Driver after Data on the Primary Input Stream
after preface Put Output from a Device Driver before Data on the Primary Input Stream
aggregate aggrc Compute Aggregate Return Code
ALET alserv Manage the Virtual Machine’s Access List
align scm Align REXX Comments
and combine Combine Data from a Run of Records
APL apldecode Process Graphic Escape Sequences
APL aplencode Generate Graphic Escape Sequences
append >>mdsk Append to or Create a CMS File on a Mode
append >>sfs Append to or Create an SFS File
append >>sfsslow Append to or Create an SFS File
append append Put Output from a Device Driver after Data on the Primary Input Stream
append diskfast Read, Create, or Append to a File
append diskslow Read, Create, or Append to a File
append diskupdate Replace Records in a File
append fanin Concatenate Streams
append join Join Records
append joincont Join Continuation Lines
append mdsk Read, Create, or Append to a CMS File on a Mode
append mdskslow Read, Append to, or Create a CMS File on a Mode
append mdskupdate Replace Records in a File on a Mode
append preface Put Output from a Device Driver before Data on the Primary Input Stream
arrange spec Rearrange Contents of Records
ASA asatomc Convert ASA Carriage Control to CCW Operation Codes
ASA mctoasa Convert CCW Operation Codes to ASA Carriage Control
ASCII qpdecode Decode to Quoted-printable Format
ASCII qpencode Encode to Quoted-printable Format
data set <mvs Read a Physical Sequential Data Set or a Member of a Partitioned Data Set
data set >>mvs Append to a Physical Sequential Data Set
data set >mvs Rewrite a Physical Sequential Data Set or a Member of a Partitioned Data Set
data set iebcopy Process IEBCOPY Data Format
data set listcat Obtain Data Set Names
data set listdsi Obtain Information about Data Sets
data set listispf Read Directory of a Partitioned Data Set into the Pipeline
data set listpds Read Directory of a Partitioned Data Set into the Pipeline
data set qsam Read or Write Physical Sequential Data Set through a DCB
data set readpds Read Members from a Partitioned Data Set
data set state Verify that Data Set Exists
data set sysdsn Test whether Data Set Exists
data set writepds Store Members into a Partitioned Data Set
data space adrspace Manage Address Spaces
data space mapmdisk Map Minidisks Into Data spaces
database sql Interface to SQL
database sqlselect Query a Database and Format Result
datagram udp Read and Write an UDP Port
date dateconvert Convert Date Formats
date greg2sec Convert a Gregorian Timestamp to Second Since Epoch
date sec2greg Convert Seconds Since Epoch to Gregorian Timestamp
date timestamp Prefix the Date and Time to Records
DB2 sql Interface to SQL
DCB <mvs Read a Physical Sequential Data Set or a Member of a Partitioned Data Set
DCB >>mvs Append to a Physical Sequential Data Set
DCB >mvs Rewrite a Physical Sequential Data Set or a Member of a Partitioned Data Set
DCB qsam Read or Write Physical Sequential Data Set through a DCB
DCB sysout Write System Output Data Set
deblock deblock Deblock External Data Formats
deblock fblock Block Data, Spanning Input Records
deblock iebcopy Process IEBCOPY Data Format
decode 64decode Decode MIME Base-64 Format
decrypt cipher Encrypt and Decrypt Using a Block Cipher
delay beat Mark when Records Do not Arrive within Interval
delay copy Copy Records, Allowing for a One Record Delay
delay delay Suspend Stream
delimited strxxxxx Stages that Allow Argument Strings as Delimited Strings
DES cipher Encrypt and Decrypt Using a Block Cipher
describe sql Interface to SQL
descriptor addrdw Prefix Record Descriptor Word to Records
destruct predselect Control Destructive Test of Records
device devinfo Write Device Information
device eofback Run an Output Device Driver and Propagate End-of-file Backwards
device fullscrq Write 3270 Device Characteristics
device fullscrs Format 3270 Device Characteristics
device waitdev Wait for an Interrupt from a Device
diagnose fullscrq Write 3270 Device Characteristics
diagnose E0 trfread Read a Trace File
diagnose E4 diage4 Submit Diagnose E4 Requests
diagnose 14 reader Read from a Virtual Card Reader
diagnose 58 fullscr Full screen 3270 Write and Read to the Console or Dialled/Attached Screen
diagnose 8 cp Issue CP Commands, Write Response to Pipeline
directory hfsdirectory Read Contents of a Directory in a Hierarchical File System
directory listispf Read Directory of a Partitioned Data Set into the Pipeline
directory listpds Read Directory of a Partitioned Data Set into the Pipeline
directory sfsdirectory List Files in an SFS Directory
discard chop Truncate the Record
discard drop Discard Records from the Beginning or the End of the File
discard hole Destroy Data
discard strip Remove Leading or Trailing Characters
disk >>mdsk Append to or Create a CMS File on a Mode
disk >>sfs Append to or Create an SFS File
disk >>sfsslow Append to or Create an SFS File
disk >mdsk Replace or Create a CMS File on a Mode
disk >sfs Replace or Create an SFS File
disk diskfast Read, Create, or Append to a File
disk diskslow Read, Create, or Append to a File
disk diskupdate Replace Records in a File
disk fbaread Read Blocks from a Fixed Block Architecture Drive
disk fbawrite Write Blocks to a Fixed Block Architecture Drive
disk mdiskblk Read or Write Minidisk Blocks
disk mdsk Read, Create, or Append to a CMS File on a Mode
disk mdskback Read a CMS File from a Mode Backwards
disk mdskrandom Random Access a CMS File on a Mode
disk mdskslow Read, Append to, or Create a CMS File on a Mode
disk mdskupdate Replace Records in a File on a Mode
disk members Extract Members from a Partitioned Data Set
disk pdsdirect Write Directory Information from a CMS Simulated Partitioned Data Set
disk qsam Read or Write Physical Sequential Data Set through a DCB
disk sfsupdate Replace Records in an SFS File
display browse Display Data on a 3270 Terminal
display fullscr Full screen 3270 Write and Read to the Console or Dialled/Attached Screen
driver <mdsk Read a CMS File from a Mode
driver <sfs Read an SFS File
driver <sfsslow Read an SFS File
driver >>mdsk Append to or Create a CMS File on a Mode
driver >>sfs Append to or Create an SFS File
driver >>sfsslow Append to or Create an SFS File
driver >mdsk Replace or Create a CMS File on a Mode
driver >sfs Replace or Create an SFS File
driver aftfst Write Information about Open Files
driver cms Issue CMS Commands, Write Response to Pipeline
driver command Issue TSO Commands
driver command Issue CMS Commands, Write Response to Pipeline
driver cp Issue CP Commands, Write Response to Pipeline
driver diskfast Read, Create, or Append to a File
driver diskslow Read, Create, or Append to a File
driver diskupdate Replace Records in a File
driver eofback Run an Output Device Driver and Propagate End-of-file Backwards
driver filetoken Read or Write an SFS File That is Already Open
driver fullscr Full screen 3270 Write and Read to the Console or Dialled/Attached Screen
driver immcmd Write the Argument String from Immediate Commands
driver literal Write the Argument String
driver mdiskblk Read or Write Minidisk Blocks
driver mdsk Read, Create, or Append to a CMS File on a Mode
driver mdskback Read a CMS File from a Mode Backwards
driver mdskrandom Random Access a CMS File on a Mode
driver mdskslow Read, Append to, or Create a CMS File on a Mode
hierarchical hfsstate Obtain Information about Files in the Hierarchical File System
hierarchical hfsxecute Issue OpenExtensions Requests
hold dam Pass Records Once Primed
host hostid Write TCP/IP Default IP Address
host hostname Write TCP/IP Host Name
host tso Issue TSO Commands, Write Response to Pipeline
http httpsplit Split HTTP Data Stream
HTTP urldeblock Process Universal Resource Locator
huge dfsort Interface to DFSORT/CMS
hypertext httpsplit Split HTTP Data Stream
label frlabel Select Records from the First One with Leading String
label tolabel Select Records to the First One with Leading String
label whilelabel Select Run of Records with Leading String
labels asmfind Select Statements from an Assembler File as XEDIT Find
QSAM <mvs Read a Physical Sequential Data Set or a Member of a Partitioned Data Set
QSAM >>mvs Append to a Physical Sequential Data Set
QSAM >mvs Rewrite a Physical Sequential Data Set or a Member of a Partitioned Data Set
QSAM qsam Read or Write Physical Sequential Data Set through a DCB
query diage4 Submit Diagnose E4 Requests
query fullscrq Write 3270 Device Characteristics
query fullscrs Format 3270 Device Characteristics
query query Query CMS/TSO Pipelines
queue mqsc Issue Commands to a WebSphere MQ Queue Manager
queue stack Read or Write the Program Stack
quoted-printable qpdecode Decode to Quoted-printable Format
quoted-printable qpencode Encode to Quoted-printable Format
SAM <mvs Read a Physical Sequential Data Set or a Member of a Partitioned Data Set
SAM >>mvs Append to a Physical Sequential Data Set
SAM >mvs Rewrite a Physical Sequential Data Set or a Member of a Partitioned Data Set
SAM qsam Read or Write Physical Sequential Data Set through a DCB
SCBLOCK nucext Call a Nucleus Extension
search lookup Find Records in a Reference Using a Key Field
secure digest Compute a Message Digest
select abbrev Select Records that Contain an Abbreviation of a Word in the First Positions
select all Select Lines Containing Strings (or Not)
select asmfind Select Statements from an Assembler File as XEDIT Find
select asmnfind Select Statements from an Assembler File as XEDIT NFind
select between Select Records Between Labels
select drop Discard Records from the Beginning or the End of the File
select find Select Lines by XEDIT Find Logic
select frlabel Select Records from the First One with Leading String
select inside Select Records between Labels
select locate Select Lines that Contain a String
select nfind Select Lines by XEDIT NFind Logic
select nlocate Select Lines that Do Not Contain a String
select notinside Select Records Not between Labels
select outside Select Records Not between Labels
select pick Select Lines that Satisfy a Relation
select predselect Control Destructive Test of Records
select sql Interface to SQL
select sqlselect Query a Database and Format Result
select take Select Records from the Beginning or End of the File
select tolabel Select Records to the First One with Leading String
select unique Discard or Retain Duplicate Lines
select verify Verify that Record Contains only Specified Characters
select whilelabel Select Run of Records with Leading String
select wildcard Select Records Matching a Pattern
selection casei Run Selection Stage in Case Insensitive Manner
selection frtarget Select Records from the First One Selected by Argument Stage
selection totarget Select Records to the First One Selected by Argument Stage
selection zone Run Selection Stage on Subset of Input Record
server runpipe Issue Pipelines, Intercepting Messages
server vmclisten Listen for VMCF Requests
service filterpack Manage Filter Packages
service help Display Help for CMS/TSO Pipelines or DB2
service query Query CMS/TSO Pipelines
SFS <sfs Read an SFS File
SFS <sfsslow Read an SFS File
SFS >>sfs Append to or Create an SFS File
SFS >>sfsslow Append to or Create an SFS File
SFS >sfs Replace or Create an SFS File
SFS filetoken Read or Write an SFS File That is Already Open
SFS sfsback Read an SFS File Backwards
SFS sfsdirectory List Files in an SFS Directory
SFS sfsrandom Random Access an SFS File
1180E A directory in the path string does not exist or you are not authorised for it
410E ABEND code at address; PSW hex
609E ABEND code reason code number
1225E ABEND hex accessing the global data area
1338E ABEND hex reason number on LOAD of entry point
1296I ABEND in CMS command. Last number lines of output follow
1239I About to receive from socket
1237I Active process and thread IDs:hex (hexadecimal)
574E Address is odd
1527E Address space word is not available for user word
1529E Address space name word already exists
1528E Address space name word is not valid
1519E Address space name longer than 24: word
1532E Address space size is not valid: number
1253E Address X'hex' before section base
1536W ALET hex is neither valid nor revoked.
1521E ALET hex is not valid
1549E ALET and PGMLIST are incompatible
1023E All application slots in use
1412E Allocation would require more than two gigabytes
1350E Already connected to queue manager string
562E Alternate exec processor name; return code number
563W ANYOF assumed in front of string
1573E Argument number is required
1452E Argument is a string, not a single character: string
371E ARIRVSTC TEXT is not available; run SQLINIT
667E Arithmetic overflow
1503E Array size is greater than 2G
1533W ASIT hex is already permitted to user word
1534W ASIT hex is already permitted to VCIT hex
1523E ASIT hex is not valid
1321I Assembler requests number bytes output for record number on stream number
409E Assert failure code at address
1083E Assignment is not to a counter
556E Asterisk cannot end output column range
1329E Attempt to extract the square root of a negative number
1234I Attention exit disabled. Hit attention to terminate command
1432E Bad placement option string
1567E Beginning block number is greater than ending block number
1564E Beginning block number number larger than device capacity number
337E Binary data missing after prefix
1298E Binary number too large for counter (reason number)
1569E Block number after last writable block
1568E Block number before first writable block
575E Block padded with hex; it should be X'00'
152E Block size number too large; number is the maximum
69E Block size mismatch; number bytes read, but block descriptor word contains number
114E Block size missing
75E Block size not integral multiple of record length; remainder is number
115E Block size too small; number is minimum for this type
1442E Both ranges specify same length as other string
1288I Branch to zero probably from hex
1084E BREAK items are not allowed after EOF item
1575E First argument to D2C/D2X is negative, but second argument is omitted: number
220E First record not a delimiter: "data"
1306E First record on track not 5 bytes long (it is number)
723I Fitting identifier not resolved
695E Fitting already defined: "name"
792E Fitting placement incompatible with RPL
1406E Fixed number needs at least number columns
74E Fixed records not same length; last bytes followed by current bytes
1436E FIXED specified, but no record length specified and no input
1405E Floating point number too long (length number)
1404E Floating point number too short (length number)
778E Forbidden character in file name or file type words
798I Forcing pipeline stall
1368E Format character 'char' not valid
1413E Found number columns
1414E Found number rows
334E FROM value not valid for file of size number records
240E Function name not supported
1078E Function does not support arguments; word was found
1303E Function name expected, but identifier found: number
711E Function not supported: word
1079E Function requires one-character argument; "word" was found
1474I Global hex
1226E Global area is corrupted
1358W Global lock held by R12=address R14=address
1211I Got hex doublewords at hex
298I HCPSGIOP contents: hex
172E Help not available for relative message number; issue PIPE HELP MENU for the Pipelines help menu
1040E Hex data too long (number bytes)
64E Hexadecimal data missing after prefix
1542E Hexadecimal string too long: number
586I Hit attention again to terminate waiting stages
643E HLASM not found in storage
1134E Host word does not exist
1135E Host word does not exist
1535E Host access list is full
1127E Host name too long: string
292E I/O error on address; CSW X'hex', CCW X'hex'
1369E IDR record does not begin X'80'; found X'char'
1370E IDR record indicates number bytes present, but record is number
1222E IEANTRT RC=hex
1220E IEANTRT RC=hex not equal to R15 (=hex.)
1361E IEWBFDAT code code returns code number reason X'hex'
1322I Ignoring HALT at hex
717I Ignoring IUCV interrupt for message number; waiting for number (interrupt on path number; sent on number)
587E Immediate command name is not active
23E Impossible record (number bytes from X'address')
621W Impossible target string
1372E Improper control record prefix hex
1371E Improper IDR language processor flag byte X'hex' at offset number
1086E Improper operand for string expression
754E Improper use of stage; reason code number
1480I In procedure word
759E Incompatible types
1176W The pointer to the Contents Vector table is destroyed (reason code number); investigate VM61261
137E The string of operands is too long
1177E The system does not support date format word
1221I The TSO Pipelines name/token is not established
1075E THEN expected; word was found
1328E There is no default for the type argument
1289E Third level interrupt exit is already set at hex
309E This machine has too many IUCV connections
127E This stage cannot be first in a pipeline
87E This stage must be the first stage of a pipeline
1300E Time zone offset number is not valid (86399 is max)
644E Timestamp word not valid; reason code number
765E Timestamp too long: string
764E Timestamp too short: string
94E Token token is not valid for PIPMOD
1419E Too few arguments in function call
242E Too few arguments; number is minimum
366E Too few input streams
736E Too few parameters in call to name (number found)
1493E Too few streams are defined; number are present, but number are required
1411E Too few streams are defined; number are present, but three streams are needed
1420E Too many arguments in function call
243E Too many arguments; number is maximum
658E Too many concurrent STIMERM requests
1089E Too many counters
204E Too many ending parentheses in expression
1074E Too many nested IFs
1149E Too many parameter tokens found (second is "word")
737E Too many parameters in call to name (number found)
1539E Too many ranges to save
1268E Too many records on track (number)
264E Too many streams
302E Too many variable names specified (number); maximum is 254
236E Too much data for variable name
787E Too much ESM data (number bytes)
1515E Top level structure "word" cannot be subscripted
1297I Trace table at hex
1307E Track capacity exceeded
1257E Track number number beyond cylinder capacity number
1278E Track number is not specified in input record number
1453I Trap issued the CP command "string"
1224I TSO Pipelines global area is at hex
1128E Two consecutive periods in host name: string
682I TXTunit list hex
550E Unable to access variables
542E Unable to communicate with user ID
1017E Unable to connect to server
307E Unable to connect to service
1489E Unable to convert from negative to unsigned
1355E Unable to convert to integer. number digits in fraction
1357E Unable to convert to integer. Exponent too large. (number)
1356E Unable to convert to integer: number
1163E Unable to declare exit
1272E Unable to find DMSEXI
If the CMS command produces console output, another approach may be to run the
command with cms and post process the output in the pipeline.
Though most CMS commands control the CMS environment, the ones listed below process
data and are thus potential pipelines.
These commands cannot be formulated simply using the device drivers and filters of
CMS/TSO Pipelines: COMPARE, DISK, EXECUPDT, LISTIO, MACLIB (one can be generated),
TAPE (there is a device driver), TAPPDS, TXTLIB (members can be extracted).
EXECMAP, IDENTIFY, LISTFILE, MACLIB MAP, MODMAP, NUCXMAP, QUERY, and TXTLIB MAP
can be issued via cms and command to obtain the output and process it in the pipeline.
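The output of one of these commands can be captured with command. For example (a sketch; the sort is merely to taste):

/* LISTEX REXX, list EXEC files sorted by name (a sketch) */
'pipe command LISTFILE * EXEC A | sort | console'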
Copyfile
The simplest usage of COPYFILE (to copy a file from one minidisk or directory to another
one) is written as a cascade of two disk device drivers. Figure 404 shows how to copy a
file from one mode letter to another one (making the output file variable record format).
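Written out (the file names here are merely illustrative), such a cascade might read:

/* COPY2 REXX, copy a file from mode A to mode B (a sketch) */
'pipe < profile exec a | > profile exec b'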
Most of the options on COPYFILE to change the file format or contents are implemented as
separate stages, described below.
FROM drop <n-1>.
FRLABEL frlabel, but the argument is not restricted to eight bytes.
TOLABEL tolabel, similarly not restricted.
FOR take.
SPECS spec.
OVLY overlay.
RECFM Use FIXED or VARIABLE option on the disk device driver.
LRECL pad or strip.
TRUNC chop or strip.
OLDDATE No equivalent function is available.
PACK pack, though with care: see the usage notes for pack.
UNPACK unpack.
FILL pad or strip.
EBCDIC xlate. Archaic.
UPCASE xlate UPPER.
LOWCASE xlate LOWER.
TRANS xlate.
SINGLE fanin and multiple streams concatenate files.
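Several of these stages can be combined in one pipeline. The effect of COPYFILE with the UPCASE and LRECL 80 options might thus be approximated as follows (a sketch; the file names are illustrative):

/* UPCOPY REXX, upper case and pad records to 80 (a sketch) */
'pipe < input file a | xlate upper | pad 80 | > output file a'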
Execio
Most of the functions of EXECIO can be done with device drivers and filters. CMS/TSO
Pipelines supports more devices than does EXECIO, notably tape and 3270 full screen; unit
record device addresses can be specified to use other than the standard devices. These are
the device drivers corresponding to the EXECIO device options:
DISKR disk and < read a CMS file. Use diskslow FROM to start at a particular line.
Use diskback to read a CMS file backwards. Use diskrandom to read a CMS file
randomly.
DISKW disk, >, and >> write a CMS file. Use diskslow FROM to start at a particular
line. Use diskupdate to replace records in a file.
CARD reader reads card images. Select lines with X'41' in column 1 with find and
discard the first column with spec.
/* CARD REXX, subroutine to read cards */
'callpipe reader|find' '41'x'|spec 2-* 1|*:'
CP cp.
PUNCH punch.
PRINT printmc is equivalent to PRINT with the carriage control option (PRINT). Use
asatomc to ensure the carriage control is converted to machine carriage
control.
/* PRINT REXX; subroutine to generate CC */
parse arg cc
if cc='' then cc='09' /* Write space */
'callpipe *:|spec x'cc '1 1-* 2|asatomc|printmc'
EMSG emsg.
STEM stem.
VAR var.
LIFO stack LIFO.
FIFO stack FIFO.
SKIP hole.
STRING literal.
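As an illustration, reading a file into a stem with EXECIO * DISKR ... (STEM LINE. might be rendered as (a sketch; the file name is illustrative):

/* READSTEM REXX, read a file into stem LINE. (a sketch) */
'pipe < some file a | stem line.'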
Other EXECIO options are used to edit lines. The equivalent CMS/TSO Pipelines filters are:
FIND find selects records with a leading string. EXECIO also stacks a line with the
number of the line selected. diskrandom NUMBER provides the record number
in the first ten columns. For sequential read from the beginning of the file, use
spec to put the record number into each record. Look for the string in
columns 11 and onward.
LOCATE locate to search for a string.
AVOID nlocate to search for lines without a string.
ZONE Use the column range option on locate and nlocate; offset the argument to
find.
MARGINS spec.
STRIP strip TRAILING.
CASE Use xlate UPPER to upper case lines.
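These filters can be combined; the EXECIO options LOCATE and ZONE together might, for instance, become (a sketch):

/* ZLOC REXX, select lines containing ABC in columns 1-8 (a sketch) */
'pipe < some file a | locate 1-8 /ABC/ | stem hits.'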
Movefile
Combine device drivers; no filters are needed.
Netdata
Part of the function of this command is available in block NETDATA and deblock NETDATA.
deblock TEXTUNIT deblocks the information in control records.
Print
Printing files with carriage control is done by printmc, possibly in conjunction with
asatomc. Though more scaffolding is needed, the crux of the matter is:
'pipe <' file '| asatomc | printmc'
'cp sp e close'
Punch
For NOHeader option:
'pipe <' file '|punch'
'cp sp d close'
The header can easily be built from the output from state.
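Assuming state supplies the resolved file identifier in the first three words of its output, the header might be built along these lines (a sketch, not a complete replacement for PUNCH):

/* PUNCHHDR REXX, punch a file with a :READ header card (a sketch) */
parse arg fn ft fm
'pipe state' fn ft fm '| spec /:READ/ 1 w1-3 nw | punch'
'pipe <' fn ft fm '| punch'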
Readcard
Partly available using reader. Batched files and the :READ control card to name the file
need a little more work.
Type
Immediately available as < connected to console. Use spec to do cols nn-nn and hex
options. members is used instead of disk to read a member of a library.
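For example, the effect of TYPE with the COL option (here columns 10 to 20) might be written as (a sketch):

/* TYPECOL REXX, display columns 10-20 of a file (a sketch) */
parse arg fn ft fm
'pipe <' fn ft fm '| spec 10-20 1 | console'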
Update
Single level update is available with update. Multilevel update is done as a cascade of
such stages.
Basic Initialisation
These steps are performed when the PIPE command is issued for the first time in a CMS
session (assuming, for the moment, that the code is loaded from disk):
1. The PIPE MODULE is brought into the CMS transient area. This is a small bootstrap
module.
2. The bootstrap module looks for the nucleus extension PIPMOD, which contains the
main pipeline module. Initially, this module is not loaded as a nucleus extension.
3. The bootstrap module loads PIPELINE MODULE as a system nucleus extension under the
name PIPMOD. (The VM/ESA pipeline is loaded from the module DMSPIPE instead.)
4. The bootstrap issues PIPMOD INSTALL to make the main pipeline module initialise
itself.
5. The main pipeline module declares a (user) nucleus extension for PIPE. This nucleus
extension will process future PIPE commands.
6. The bootstrap module regains control when the main module has been initialised. The
bootstrap clears out the name of the module loaded in the transient area (to avoid a
recursion) and then reissues the original PIPE command to process the pipeline
specification. This time the command is processed by the main pipeline module.
A CMS ABEND will cause the PIPE nucleus extension to be dropped, because it is a user
extension; but the PIPMOD nucleus extension will remain installed, because it is a system
extension. A subsequent PIPE command will then bypass step 3.
Because the main module is several hundred kilobytes in size, it is recommended that it be
installed in a shared segment. For modern VM systems, use a logical segment. (The
program directory for the Program Offering describes how to specify CMS Pipelines to
DMKSNT, the system name table.)
When the main pipeline module is installed in a shared segment, the segment is attached to
the virtual machine by the CMS command SEGMENT LOAD. This is normally performed in
the system profile before any pipeline specifications have been issued. The main module is
now installed as a nucleus extension by CMS. Thus, step 3 is bypassed when the pipeline
is initialised on the first PIPE command.
Coexistence
The Runtime Library version of CMS Pipelines can coexist with the version shipped as
part of Virtual Machine/Enterprise Systems Architecture from Release 1.1. The version
used is determined by which of the two PIPE bootstrap modules is loaded into the transient
area in step 1. Make sure the module you wish to use is first in the search order or
installed as a nucleus extension.
You can perform part or all of this initialisation procedure by hand to install a different
module or to use a command name other than PIPE.
Assuming you are running with the Program Offering level of CMS Pipelines and you
wish to try some commands against the pipeline shipped in Virtual Machine/Enterprise
Systems Architecture, you can issue these commands to make an EPIPE command:
nucxload epipmod dmspipe (system service immcmd
epipmod install epipe
Filter Packages
| CMS/TSO Pipelines supports two types of filter packages:
| Type 1, which is available with CMS only. Such a package includes the FPLNXF glue
| module. It is installed by invoking it as a CMS command. The balance of this section
| discusses type 1 filter packages.
| Type 2 filter packages include the FPLNXG glue module. They are managed by
| filterpack.
Warning: Results are unpredictable (but are likely to be catastrophic) if a filter package
is dropped without the main pipeline module being notified. This leaves a dangling pointer
to what was once the entry point table in the filter package. A program check is likely
next time an entry point is to be resolved by that particular pipeline module.
Virtual Machine/Enterprise Systems Architecture supplies two commands, PIPGFTXT and
: PIPGFMOD, to enable CMS users to generate filter packages. Refer to HELP PIPE PIPGFTXT and
: HELP PIPE PIPGFMOD. The explanation in this appendix may be useful background information, even
when you use the CMS-supplied procedures.
! The programming interfaces in a filter package are described in CMS/TSO Pipelines: PIPE
! Command Programming Interface, in particular for user written functions for spec.
Introduction
A filter package is a module that contains additional built-in programs. These programs
can be written in Assembler or REXX; REXX programs can be compiled. Programs in filter
packages must be reentrant.
Once the main pipeline module “knows” about a filter package, it can resolve programs to
run in a pipeline from the programs that are contained in the filter package as well as from
the built-in ones. The filter package contains an entry point vector which the main
pipeline module uses to resolve programs in the filter package.
| A filter package contains mostly code you supply, but the interface to CMS/TSO Pipelines
| is embodied in a bootstrap module.
¡ Filter packages are available on CMS in two flavours, type 1 and type 2; and on z/OS in
¡ one flavour, type 2. The type 1 filter package is the original, which installs itself actively
¡ through the PIPMOD command. Once loaded, a type 1 filter package remains available until
¡ it is deleted explicitly. In contrast, a type 2 filter package is passive; it is loaded and
! deleted by the filterpack service program, though, on CMS, a second bootstrap module can be
! added to the filter package to make it self-installing and self-removing.
¡ Filter packages are loaded as global or local to the thread (task). A global filter package is
¡ available to all pipelines within the virtual machine or address space, but a local filter
¡ package is available to pipelines on the thread (task) where it is loaded. On CMS, filter
¡ packages are by default loaded globally, whereas on z/OS, they are by default local unless
¡ the task is the job step task.
: One filter package, the PTF filter package, receives special attention. Only one filter
: package can be the PTF filter package at any time.
The entry point vector is declared to the main pipeline module when the filter package’s
¡ main entry point is invoked as a CMS command (type 1) or when the filter package is
¡ explicitly loaded (type 2).
¡ On CMS, four type 1 filter packages are installed in storage automatically when the main
pipeline module initialises (and whenever you issue the command PIPMOD INSTALL). CMS
Pipelines loads the modules as system nucleus extensions and attaches their entry point
tables to its own built-in entry point table. The filter package modules that are installed
: automatically are those named PIPPTFF, PIPSYSF, PIPLOCF, and PIPUSERF. If PIPPTFF is
: available, it will be loaded as the PTF filter package. The entry point table in the PTF filter
: package is searched before the main module’s entry point table; thus, filters in this package
: effectively override the built-in ones. All other filter packages are searched after the main
module.
: Additional filter packages must be installed manually. On CMS, a type 1 filter package is
: installed by issuing the file name of the filter package as a CMS command. This will make
it install itself as a nucleus extension (if it is not already one) and declare itself to the main
pipeline module.
¡ Filter packages are managed by the filterpack control. This stage loads and deletes filter
¡ packages and also lists installed filter packages of both types.
: A type 1 filter package is detached from the pipeline when its nucleus extension is
dropped, when the main pipeline module is dropped, and at CMS ABEND cleanup (HX).
¡ z/OS contents management deletes all modules loaded by a task when the task terminates.
¡ This has the effect of deleting all filter packages loaded by the task.
: Specifying Files
: The utilities to generate filter packages use a notation for input and output files that
: specifies the file name, type, and mode as a single word whose components are separated
: by periods. The file mode can specify an SFS directory on CMS.
: Multiple input files are specified by concatenating the specifications for the individual files
: with a forward slash.
Appendix E. Generating and Using Filter Packages with CMS/TSO Pipelines 895
Filter Packages
: filespec:
: ├──┬─fn───────┬──┤
:    ├─fn.ft────┤
:    └─fn.ft.fm─┘
: input:
:    ┌─/────────────┐
: ├──┴─┤ filespec ├─┴──┤
: output:
: ├──┤ filespec ├──┤
: When a default is applicable, its three components are applied independently. Specifically,
: when a default file mode is specified, it can be overridden only by an explicit file mode.
The minimum filter package contains a glue code module, an entry point table, and the
actual program to run. In addition, the filter package may contain a message text table and
a keyword table. How to generate the object modules that contain your parameters (the
entry point, keyword, and message text tables) is described below.
Once the object files have been generated, the filter package module is generated on CMS by
¡ CMS commands; on z/OS, the executable filter package module is created by the binder
¡ (linkage editor).
¡ Except for the glue code module, the contents of a type 1 and a type 2 filter package is the
¡ same.
Glue Code
The object module PIPNXF TEXT (DMSPFP TEXT if you are using Virtual Machine/Enterprise
: Systems Architecture) contains the program that is invoked when a type 1 filter package
module is invoked as a CMS command. This program is also invoked on service calls.
¡ The type 2 glue module is FPLNXG TEXT. It may be combined with a third module,
! FPLNXH TEXT, which contains code to load the type-2 filter package so that it operationally
! (but not as far as its contents are concerned) mimics a type-1 filter package.
A source entry point table contains a line for each entry point and optionally comments.
Comments begin with an asterisk and extend to the end of the line. Blank lines and lines
that contain only comments are ignored. Case is ignored in the source entry point table.
Entry points are defined by blank-delimited words:
1. The name of the filter, as it would be used in a PIPE command.
2. The ESD name of the entry point for the program. This defaults to the name in the
first word when the line contains one word.
3. A number specifying the minimum truncation that should be accepted for the filter. If
the first word contains more than eight characters, you must specify a minimum truncation
: count, and it must be a number that is 8 or less. 0 (the default) specifies that no
truncation should be accepted.
4. The programming language in which the program is written or a period as a
placeholder. The default is Assembler or REXX as determined by inspecting the
program.
5. The commit level at which the program should start. The commit level can be
specified as a number between -128 and 127 (inclusive). Be sure you know what you
are doing if you specify this number positive, as this precludes using the program with
the CMS/TSO Pipelines built-in programs.
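A small source entry point table following these rules might look like this (all names are hypothetical):

* Sample source entry point table (hypothetical)
* filter          ESD name  trunc  language  commit
myreverse         MYREV     5      Assembler
longfiltername    FLONG     8      .         -2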
: The FPLEPTBL command generates an object entry point table from one or more source entry
: point tables. It supports two blank-delimited words and an option:
¡ 1. Input file specifications. The default input file is SYSTEM EPTABLE on CMS; on z/OS it
¡ is FPLEPT FPLPARMS, the member FPLEPT of the data set allocated to FPLPARMS.
¡ 2. The output file (which contains a single object module). By default, the output file is
¡ FPLEPT TEXT.
3. The ESD name of the control section that will contain the object entry point table is
specified after a left parenthesis. The default is the file name of the output file or its
¡ default. For a type 1 filter package, it must be specified as PIPEPT; for a type 2 filter
¡ package, it must be specified as FPLEPT.
¡ 2. The message text table in the filter package where the stage that issues the message
¡ was resolved, if any.
: 3. The main message text table in module FPLMTX, which is linked with the PIPELINE
module.
4. The message text table in each attached filter package. The packages are searched in
the order they were attached. Thus, it would be normal to search PIPSYSF before
PIPLOCF and PIPUSERF.
¡ The source message text table is in BookMaster format. It often contains the information
required to build a manual or help file, or both, in addition to the message text tags.
¡ FPLMSGTB processes the tags :msgno and :msg. All other lines and tags are ignored. For
¡ each message, the tags must be specified in the order :msgno followed by :msg. The two
¡ tags may be on the same line or separate lines, but no other tag may follow them on the
¡ same line. That is, the parsing of GML is simplistic. The :msgno tag specifies the message
¡ number and a one-character severity code. If the severity code is lower case or the digit
¡ 0 (zero), no additional identification messages are issued; if it is the digit zero, the
¡ message is also not entered into the message list. The message text and substitution items
¡ are specified with the :msg tag.
¡ Use DCF variables for characters that would interfere with the markup:
¡ Semicolon &semi.
¡ Colon &colon.
¡ Ampersand &.
¡ Quote &csq. However, this will be converted to a normal single quote (X'7D').
¡ Within the message text, substitution is indicated by text that is bracketed within :mv and
¡ :emv tags. No other tags are allowed in the message text. By default, the items are
¡ substituted in the order they occur. A message value (a substitution item) is associated
¡ with a particular substitution if a number follows the opening tag; a colon must follow the
¡ number.
¡ ──FPLMSGTB──┬───────────────────────────┬──
¡             └─┤ input ├──┬────────────┬─┘
¡                          └─┤ output ├─┘
¡ ──┬──────────────────────┬──
¡   └─(──CSECT name──┬───┬─┘
¡                    └─)─┘
¡ The FPLMSGTB command generates an object message text table from one or more source
message text tables. It supports two blank-delimited words and an option:
¡ 1. Input file specification. The default is FPLMSGS SCRIPT.
¡ 2. The output file, which will contain a single object module. The default output file is
¡ FPLMTX TEXT.
¡ 3. The ESD name of the control section that will contain the object message text table
¡ may be specified after a left parenthesis. The default is the file name of the output file
¡ or its default. For a type 1 filter package, this must be specified as PIPMTX; for a type
¡ 2 filter package it must be specified as FPLMTX.
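A fragment of a source message text table might thus look like this (the message itself is hypothetical; the tag usage follows the simplified BookMaster parsing described above):

:msgno 5001E
:msg Keyword :mv.word:emv. is not recognised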
Keyword Table
The keyword table contains keyword definitions that are tested by programs in the filter
package. REXX filters cannot access entries in the keyword table.
A source keyword table contains a line for each keyword and optionally comments.
Comments begin with an asterisk and extend to the end of the line. Blank lines and lines
that contain only comments are ignored. Case is ignored in the source keyword table.
CMS/TSO Pipelines keywords are defined by blank-delimited words:
1. The keyword identifier. This is a symbolic name under which a keyword is known to
the code. The keyword identifier must be one or two characters. It is prefixed with
the module name to obtain the label that will be the entry point for the keyword in the
keyword table. Thus, the identifier must contain only characters that can be specified
as an entry point in an Assembler control section. (That is, English alphanumerics and
the three national use characters “#@$”.)
2. The keyword. This is the character string that is tested against an operand of a stage
in a pipeline. Synonyms are specified by several lines that have the same identifier
(the first word). The keywords are tested in the order they appear in the concatenated
input files (the source keyword tables).
3. A number specifying the minimum truncation that should be accepted for the keyword.
If the second word contains more than eight characters, you must specify a minimum
: truncation count and it must be a number that is 8 or less. 0 (the default) specifies
that no truncation should be accepted.
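A small source keyword table might look like this (the keywords are hypothetical; the two lines with identifier TR define synonyms):

* identifier  keyword    trunc
TR            TRACKING   5
TR            TRACE      0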
¡ The command generates an object keyword table from one or more source keyword tables.
It supports two blank-delimited words and options, which are specified after a left
parenthesis:
¡ 1. Input file specification. The default input file is SYSTEM KWDTABLE on CMS; it is
¡ FPLKWD FPLPARMS on z/OS, the member FPLKWD of the data set allocated to
¡ FPLPARMS.
¡ 2. The output file (which contains a single object module). By default, the output file is
¡ FPLKWD TEXT.
3. The options field contains the ESD name of the control section that will contain the
object keyword table and a keyword, which is specified after an equal sign.
The default control section is the file name of the output file or its default.
When the option DECODE is specified, the object module also contains a table (in a format
similar to an entry point table) that is used to decode a keyword to determine the identifier
for the keyword. Because more than one keyword identifier is used for a particular
keyword, this entry point table contains one to three identifiers rather than a pointer to an
entry point. The DECODE option must be specified when generating the keyword table in
the main pipeline module. Do not specify this option for a keyword table that is included
¡ in a type 1 filter package. When DECODE is specified for a type 2 filter package, the
¡ control section name must be specified as FPLKWD.
Programs
The programs must be in the format of object modules. For Assembler programs, this is
clearly the output from the Assembler.
¡ ──FPLGREXX──fname──┬────────────────────┬──┬───────┬──
¡                    └─.ftype─┬────────┬──┘  └─csect─┘
¡                             └─.fmode─┘
¡ ──┬────────────────────────────┬──
¡   └─(──┬───────┬──┬──────────┬─┘
¡        └─NODIR─┘  └─COMPRESS─┘
CMS/TSO Compatibility and Portability
Some modules determine the operating environment at run time, selecting the appropriate
path dynamically. A few device driver modules are specific to the CMS or z/OS
environment.
¡ The TSO Pipelines Service Offering brings the z/OS support up to the current level and
¡ adds z/OS-specific device drivers, as will be described below.
¡ FPLRESET
¡ This command is equivalent to the CMS command “NUCXDROP PIPMOD”. It causes TSO
¡ Pipelines to release any resources and storage it may have acquired. Do not issue
¡ FPLRESET while any pipelines are running.
¡ FPLDEBUG
¡ This command is not intended for general use. It verifies the TSO Pipelines global area
¡ and displays information that may be helpful in isolating a problem.
¡ When the pipeline environment (its control blocks) is correct, FPLDEBUG will just display
¡ the address of the global area.
¡ fpldebug
¡ PLMVS1224I TSO Pipelines global area is at 16D01628.
¡ READY
¡ In addition, FPLDEBUG may issue messages identifying tasks it knows about. The
¡ information displayed includes the contents of the TCB tokens. Note that the fact that TSO
¡ Pipelines knows about a task does not mean that the task is still active; most likely the
¡ task represents the last TSO command, which has terminated by the time you can issue the
¡ FPLDEBUG command.
¡ pipe q
¡ FPLINX086I CMS/TSO Pipelines, 5654-030/5655-A17 1.0111 (Version.Release/Mod) - … hexadecimal)
¡ READY
¡ FPLUNIX
¡ Is the entry point for running the PIPE command in the UNIX System Services environment;
¡ that is, from the shell. See the following section.
¡ Note in particular that the external link contains the member name rather than anything
¡ else you might think it should contain.
¡ In addition, the STEPLIB environment variable must be set if the module is not in link pack:
¡ CCJOHN:/home/ccjohn: >echo $STEPLIB
¡ CCJOHN.TSO.LOAD
¡ CCJOHN:/home/ccjohn: >pipe q
¡ FPLINX086I CMS/TSO Pipelines, 5654-030/5655-A17 1.0111 (Version.Release/Mod) - Generated 15 Feb 2000 at 12:28:57
¡ TSO Pipelines writes error messages to standard error (file descriptor 2). console reads
¡ from standard input (file descriptor 0) and writes to standard output (file descriptor 1). In
¡ addition, the conveniences stdin, stdout, and stderr are available.
¡ In the first command the pipe character was interpreted by the shell; the PIPE command
¡ saw only the literal stage.
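The point can be sketched with a hypothetical session (prompt and user as in the example above); without quotation marks the shell splits the command at the solid vertical bar:

```
CCJOHN:/home/ccjohn: >pipe literal hello | console
CCJOHN:/home/ccjohn: >pipe "literal hello | console"
hello
```

In the unquoted form the shell pipes PIPE's standard output to a (nonexistent) console command; in the quoted form the whole specification reaches the PIPE command.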
: From a REXX program (EXEC) on TSO, the pipeline specification may be addressed to
: several environments.
Appendix F. Pipeline Compatibility and Portability between CMS and TSO 903
: From a normal TSO REXX program (the merged environment, as it is called), you may
: address these environments: TSO, LINK, or ATTACH. From a stage written in REXX, you
: can address only LINK or ATTACH, but you can issue TSO commands using command or tso.
: Address LINK is required when issuing multiple PIPE commands that must run in the same
unit of work under DB2.
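As a sketch (the literal stages are illustrative, not taken from a DB2 example), a TSO EXEC might select the LINK environment so that successive PIPE commands stay in one unit of work:

```rexx
/* Sketch: issue two PIPE commands in the same unit of work */
Address LINK                     /* LINK runs PIPE in the current task */
'PIPE literal first | console'
'PIPE literal second | console'
Exit rc
```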
On TSO, REXX filters are resolved from partitioned data sets. The CMS file name (first
word) corresponds to the member name; the CMS file type (second word) specifies the
DDNAME of the data set. DDNAME=FPLREXX is the default.
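For illustration, a REXX filter stored as member UPREC of a PDS allocated to ddname FPLREXX (both names hypothetical) could look like this; readto and output are the pipeline commands the stage uses to pass records:

```rexx
/* UPREC REXX -- copy records to the output, translated to upper case */
Do Forever
   'readto record'               /* obtain the next input record  */
   If rc <> 0 Then Leave         /* rc 12 means end-of-file       */
   'output' Translate(record)    /* write the translated record   */
End
Exit rc*(rc<>12)                 /* end-of-file is not an error   */
```

On TSO it would then be invoked as, for example, pipe ... | uprec | ..., the first word of the stage name resolving to the member and the default ddname FPLREXX supplying the data set.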
Built-in Programs
Only device drivers and host command interfaces depend on the operating system; all
filters and gateways are available both on CMS and on TSO.
Program Description L
listcat Provide data set names that are qualified by a specified qualifier. 9
listdsi Provide detailed information about data sets. 9
listispf Read the directory of a PDS into the pipeline, formatting the user data if it was stored by ISPF. The output can be limited to information about selected members. 7
REXX Filters
Programs that do not use the Address instruction and do not rely on external functions are
directly transportable between the two environments; they should work without change if
they do not contain pipeline specifications that have incompatible device drivers.
Consider using CALLPIPE instead of Address command pipe; the results are the same for
correct pipelines.
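As a CMS-flavoured sketch (file names illustrative), the two fragments below should give the same result for a correct pipeline; CALLPIPE runs the specification as a subroutine pipeline rather than starting a new PIPE command:

```rexx
/* In a REXX stage: start a new PIPE command ...               */
Address command 'PIPE < input file a | sort | > output file a'

/* ... or run the same specification as a subroutine pipeline  */
'callpipe < input file a | sort | > output file a'
```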
When incompatible device drivers are used, you must use the Parse Source instruction to
determine the environment in which the program runs:
/* Dual-path for TSO and CMS */
parse source where . my_fname my_ftype my_fmode . env .
If where = 'TSO'
   Then dsn = 'names.text'
   Else dsn = userid() 'names a'
'callpipe <' dsn '| ...'
Assembler Programs
Programs should be 31-bit capable and should use CMS/TSO Pipelines services rather than
services of an operating system. The interface defined in CMS Pipelines Toolsmith’s
Guide and Filter Programming Reference, SL26-0020 remains supported. The PIPEPVR
macro is now required. The macro library is shipped as part of TSO Pipelines.
Filter Packages
¡ TSO Pipelines supports packages managed with filterpack.
Event Records
The event record contains an 8-byte common prefix which is followed by variants that
define the individual record types.
Record Prefix
The format of the first eight bytes is common to all event records.
00—Message
This record is written when a message is issued. The message is suppressed.
03—Enter Scanner
This record has no variant data. The pipeline specification is described in a set of event
records consisting of one for the overall pipeline specification and a record for each pipe-
line, stage, connector, and label reference. The scanner end record identifies the end of the
pipeline specification.
When both bits are on, the pipeline specification is scanned after the
CALLPIPE rules, but connected after the ADDPIPE rules.
19 1 Reserved.
20 8 The option NAME. The value is truncated after eight bytes. See below
for a pointer to the complete name.
28 2 Bits to turn on in the rightmost halfword of the message level.
30 2 Bits to clear in the rightmost halfword of the message level.
32 1 The stage separator character. This byte contains X'00' when the
pipeline specification is issued as an encoded pipeline specification.
33 1 The end character. This byte contains a blank when no end character is
defined. It contains X'00' when the pipeline specification is issued as
an encoded pipeline specification.
34 1 The escape character. This byte contains a blank when no escape char-
acter is defined. It contains X'00' when the pipeline specification is
issued as an encoded pipeline specification.
35 3 Reserved.
38 2 Offset to the address of the list of stage definitions. This offset is zero
in the first level of the block.
40 4 Address of the original pipeline specification (when the pipeline
specification was issued from a command string) or zero for an
encoded pipeline specification for which there is no original string.
44 4 The length of the original pipeline specification string.
48 4 The address of the name for the pipeline specification.
52 4 The length of the name. This length may be greater than 8.
56 32 Reserved. Contains zeros.
05—Leave Scanner
The pipeline specification is either abandoned or handed over to the pipeline dispatcher to
run.
06—Scanner Item
The scanner items describe the beginning of a pipeline, a stage, a label reference, or a
connector. The first scanner item is a pipeline begin variant. This variant of the event
record contains the PIPSCSTG data area; it has four variants.
08—Start Stage
The pipeline dispatcher is about to call the initial entry point for a stage.
09—End Stage
The stage returns to the pipeline dispatcher.
0A—Resuming Stage
The pipeline dispatcher is about to return control to a stage. The stage has previously
called a pipeline dispatcher service.
-1 Input.
-2 Output.
Register 1 contains the stream number or stream identifier (if the leftmost byte is
nonzero).
SH SHORT.
0C—Pipeline is Stalled
This event has no variant data. A record with code X'0D' is written for each stage.
0D—State of Stage
Offs Len Description
0 1 X'0D' for State of Stage.
4 4 Reference for the stage.
8 4 Encoded state of stage.
12 8 Decoded state of stage.
0E—Pipeline Committing
Offs Len Description
0 1 X'0E' for Pipeline Committing.
4 4 Reference for the pipeline specification.
8 4 The committed aggregate return code.
12 4 The level to which the pipeline is committed.
0F—Console Input
A console stage that is first in a pipeline requires a record. The console stage that runs
under control of runpipe EVENTS does not read from the terminal; rather, it signals a
console input event. A stage that processes the event records without their being delayed
from the runpipe stage can store an input record in the input buffer and set the number of
bytes in the feedback word. If this event record is consumed without a record being
stored, the console stage assumes a null record was entered.
10—Console Output
A console stage that is not first in a pipeline has read an input record. The console stage
that runs under control of runpipe EVENTS does not write to the terminal; rather, it signals
a console output event. A stage that processes the event records without their being
delayed from the runpipe stage can obtain the record from the buffer.
11—Pause
pause has read an input record. A stage that processes the event records without their
being delayed from the runpipe stage can now react to this event. The pause stage is
resumed as soon as this record is consumed. There are no variant data for this event.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.
Programming Interface Information
This book primarily documents information that is NOT intended to be used as Programming Interfaces of CMS/TSO Pipelines.
This book also documents intended Programming Interfaces that allow the customer to write programs to obtain the services of CMS/TSO Pipelines. This information is identified where it occurs by an introductory statement to a chapter.
Notices 919
Explanation of Terms
Glossary
A
Abut. To put two things together with nothing between them. For instance, two abutted strings are put together without separating blanks.
Argument string. Characters coded after the name of a stage. The argument string is made available to the stage.
B
Backus-Naur form. (BNF) A notation for syntax definition, invented in the early sixties. The dialect used in CMS/TSO Pipelines manuals is described in Chapter 20, “Syntax Notation” on page 220.
Block descriptor word. (BDW) z/OS term. The BDW is a fullword prefix to a block in variable format. It contains the total length of the block in the first halfword and zeros in the second.
Blocked. When the pipeline dispatcher has blocked a stage, the stage is not run. A stage is blocked when it accesses a connection if the other side of the connection is not prepared to read or write a record.
Buffer. A stage that reads all its input before writing any output. Such a stage may be needed to ensure that a multistream pipeline does not stall. buffer and sort buffer the file; lookup buffers the secondary input stream.
C
Card. See Punched card.
Collating sequence. Ordering of the character set used in a computer. When the binary encoding of one character is less than the encoding of some other character, then the first character is said to be before the second one in the collating sequence. Also used to designate all possible values for a single character. A byte contains 8 bits in IBM/370, so the collating sequence has 256 characters from X'00' to X'FF'.
Connection. A data path between two stages. A stage can have several input and output connections, but only one input and one output can transport data at a time. The active connection is changed by the select function.
Connector. An item at the beginning or end of a pipeline indicating how the pipeline is to be connected to streams in the stage defining the pipeline. A full connector consists of an asterisk, a period, a keyword indicating a direction, a period, a stream identifier, and a colon; it can be as short as *:.
Console. The terminal for the virtual machine. Also the name of a CMS macro that is used to access the terminal in full screen mode rather than in line mode.
Control stage. A stage that inspects the contents of a file and calls one of several subroutine pipelines to process the particular file format.
Coroutines. Programs being multiprogrammed in a way where programs explicitly transfer control amongst themselves.
CP. The Control Program component of VM manages the resources of a real computing system in such a way that multiple machines appear to exist. Each user on a VM system has a virtual machine.
DOS. (Disk Operating System.) A precursor of VSE.
Driver. Shorthand for device driver.
E
EDF. See Enhanced Disk Format.
End character. A character in a pipeline specification that separates pipelines. The stage to the left of an end character is a last stage; the one to the right of an end character is a first stage.
Enhanced Disk Format. (EDF) A CMS file storage format used on minidisks that are attached to the user’s virtual machine. The minidisk is formatted into 512, 1K, 2K, or 4K physical blocks. See also “shared file system”.
…mixture of programs written in REXX, PL/I, IBM C/370, or assembler.
File Status Table. (FST) Information about a file on a CMS minidisk is stored in the FST for the minidisk. This information includes the record format, the record length, and the date the file was written or appended to.
First stage. The leftmost stage of a pipeline. A pipeline specification can contain several pipelines separated by end characters. A stage to the right of an end character is also a first stage.
Flush. To empty a buffer.
Forms Control Buffer. (FCB) Control information to define where on a page a skip to a given channel should stop. A forms control buffer can be associated with a SPOOL file.
Glossary 921
L
Label. An identifier for a stage that defines multiple data streams, one for each occurrence of the label in a pipeline specification.
Landscape. Format for a command where all information is entered on a single line.
Last stage. A stage at the end of a pipeline specification. A stage to the left of an end character is also a last stage.
Left. Stages are ordered left to right such that a stage receives its input from the output of the stage to the left of it.
M. (Megabyte.) =1024K =1,048,576.
Message level. A number set by the command PIPMOD MSGLEVEL. The binary representation of this number is interpreted as a set of switches controlling the degree of additional checking performed, and the number of additional messages issued, if any.
Move mode. Term for data management processing where the record is processed in a buffer allocated by, or in, the program; data management moves the record to or from the user’s buffer. The converse is locate mode.
MVS. (Multiple Virtual Storage.) An operating system for IBM System/390 mainframe computers.
N
Netdata. A blocking format used to transmit files between IBM systems. The format includes information about the file as well as the contents of the file.
Null. Empty; containing nothing; having zero length.
Null record. A record with no data; it has length zero. Such a record can indicate end-of-file.
Null stage. Two stage separators next to each other with only blanks between them; a stage separator next to an end character or at the end of the pipeline specification.
Pipeline dispatcher. The program that transfers control between stages to ensure an orderly flow of data through the pipeline.
Pipeline specification. The character string that defines a pipeline. Stages are separated by a special character, the stage separator, which is the solid vertical bar (|) by default.
Portrait. A command written over several lines is said to be in portrait format. The portrait format of a pipeline command typically has the specification of each stage on a separate line.
Primary data stream. Stream number 0.
Printer. A virtual device, simulated by CP, used to write virtual printer SPOOL files in the CP SPOOL system.
Punched card. Archaic data storage medium where characters are represented by holes cut in a piece of cardboard. The most widely used format stores 80 characters. A deck of punched cards is simulated by CP as a SPOOL file with a record for each card. CP enforces a maximum record length of 80 bytes.
Q
Quietly. Jargon for “with error messages suppressed” or “without issuing error messages”.
R
Record. Unit of information transmitted as a whole. A line of a file.
Record descriptor word. (RDW) In z/OS, a RDW is a fullword describing a logical record with the aggregate length (including the RDW) in the first halfword and zeros in the second one. Also used to describe the halfword length that precedes a record in the CMS file system when the file is in variable format.
REXX. (Reformed EXtended eXecutor.) The name of a programming language, implemented in CMS by the System Product Interpreter. Also designates programs written in REXX where the host commands go to the pipeline and are used for data transport.
Right. Stages are ordered left to right such that a stage delivers its output to the input of the stage to the right of it.
S
Scope. Where an identifier is recognised.
Secondary data stream. Stream number 1.
Segment descriptor word. (SDW) z/OS term. A fullword describing part of a logical record. The first halfword contains the length of the segment including the SDW. The third byte contains the segmentation flags that define whether the segment is the first, the last, or an intermediate one. The last byte is zero.
Segmentation Flags. Flag bits that define which part of a logical record is in the present segment. z/OS uses a different encoding for variable spanned than it uses for the netdata format. The encoding specifies whether the segment is the first, the last, the only, or a middle segment of a record.
Sequential data stream. Informal way to express a data stream that is processed in a sequential fashion, one record at a time, without going back to previous data.
Sever. Terminate the use of a stream. All connections to streams are severed when a stage returns control on the original invocation from the pipeline dispatcher. A stage can sever a connection explicitly with the SEVER pipeline command.
Shared File System. (SFS.) A file system introduced in VM/System Product Release 6. To facilitate sharing of data without compromising data integrity, data are stored on minidisks that are attached to a “server” virtual machine; the user cannot access the file pool minidisks directly.
Shell. A program that reads lines from the terminal and interprets them as commands. Also called a Terminal Monitor Program.
Short Circuit. A stage can connect an input stream and an output stream with a short circuit. Records then bypass the stage that has performed the short circuit operation. A stage can issue the SHORT command to short circuit the currently selected input and output stream without waiting. It can also issue CALLPIPE to perform this operation: the stage waits until end-of-file is reflected on the short circuit; the output stream is available for further output when the stage resumes.
Span. A record is spanned across blocks when the first part of the record is in one block, and the rest of it is in one or more other blocks.
SPOOL. (Simultaneous Peripheral Operations On Line.) A system for controlling unit record devices. Also used to refer to the data set that holds the data being SPOOLed.
Stage. A program in a pipeline specification.
Stage separator. The character (normally |) that separates stages in a pipeline specification.
Stall. A condition in a pipeline network where not all stages have completed but stages are interlocked in such a way that no stage can be run.
Stemmed Array. A collection of REXX compound variables having the same stem, and a numeric index. The variable with index zero (for instance array.0) contains the number of data variables in the array; data are stored in variables with positive index, starting at 1. Thus, the first variable is array.1, the second variable is array.2, and so on.
Stream. Informal name for a data stream.
Stream identifier. A symbolic reference to a data stream.
Subcommand Environment. A named environment to which commands can be addressed on CMS. Many programs set up a subcommand environment to process commands issued by REXX programs that are called as a result of a user command to the program establishing the subcommand environment.
Subroutine pipeline. A pipeline, defined by CALLPIPE, to process data. The stage waits until the subroutine pipeline is complete.
SVC. (SuperVisor Call.) An instruction causing a switch from the user program to the operating system.
SVC 202. Used in CMS to issue commands by name.
SVC 203. Used in CMS to issue functions by number.
Terminal Monitor Program. (TMP) Program that reads lines from the terminal and interprets them as commands. Also called a Shell.
Tertiary data stream. Stream number 2.
TSO. The “Time Sharing Option” for z/OS. This allows z/OS users interactive access to the facilities of the operating system. Over the years many products have moved between the TSO and the CMS platforms.
U
Unit record. A (virtual) punched card or printed line. Readers, printers, and punches process unit records, and are thus referred to as unit record devices.
VB. Variable blocked format.
VBS. Variable blocked spanned format.
Verb. The name of the entry point for a stage.
Index
&
Special Characters AND character in all 293
-3
Coroutines
retab 530
Glossary definition 920
untab 644
Locate mode
; 557
Glossary definition 922
¬
Move mode
-
Glossary definition 922
Picture character 719 %
, wildcard 674
Picture character 720 +
! 23 Picture character 719
OR character in all 293 <
/ Example of use 9, 10, 11, 12, 15, 17, 18, 29, 30, 36, 40,
Picture character 720 46, 63, 66, 78, 80
. < 260
Picture character 720 <mdsk 261
field identifier 693, 193, 176 <mvs 262
filter 2 <oe 264
filter package 894 <sfs 264
Type 1 894 <sfsslow 265
Type 2 894 >
first reading station 693 Example of use 8, 14, 15, 17, 29, 30, 44, 78, 81
(
> 267
all 293
)
>>
all 293 Example of use 78
$ >> 276
Picture character 719 >>mdsk 277
* >>mvs 279
buildscr 317 >>oe 280
combine 334 >>sfs 281
drop 372 >>sfsslow 283
duplicate 373 >mdsk 268
join 442 >mvs 271
Picture character 720 >oe 273
random 526 >sfs 273
starmsg 575 ¬
starsys 578 NOT-character in all 293
state 581 | 23
statew 585 OR character in all 294
sysout 606
take 608
update 645 Numerics
0
wildcard 674
buildscr 317
*ACCOUNT 578, 579
spec 704
*COPY 66, 467 00C
*LOGREC 578, 579 reader 527
*MONITOR 573 00D
*MSG 575, 770 punch 521
*MSGALL 575, 770 00E
*SYMPTOM 578, 579 printmc 519
uro 647
00E (continued) 58
xab 677 Diagnose 398
1 5A
apldecode 296 Carriage control 519, 648
aplencode 297 64decode 690
buildscr 317 64encode 690
combine 334 8
drop 372 timestamp 625
duplicate 373 80
join 442 chop 324
take 608 deblock 356
8192
tape 610
cp 343
14
9
Diagnose 527
15 Picture character 720
block 308
deblock 356
16-BIT A
crc 345 A8
2 Diagnose 521, 522, 649
apldecode 296 abbrev
aplencode 297 Example of use 84
buildscr 317 abbrev 285
32-BIT Abut
crc 345 Glossary definition 920
3270DS Structured Field 402 ACCESS
3270enc 689 ACCESS 5ACCESS 757, 30
3277 Access list entry token 208
apldecode 296 Access register 208
aplencode 297 Accounting machines 147
buildscr 317 acigroup 286
3277bfra 688 ADD
3277enc 689 alserv 294
3278 structure 594
apldecode 296 ADDLENGTH
aplencode 297 crc 345
buildscr 317 ADDPIPE 723
3279 addrdw 287
apldecode 296 Address ATTACH 904
aplencode 297 Address instruction 115
buildscr 317 Address LINK 904
370 accommodation 797 Address space 28
3800 address space identification token 207
EXEC 35 address spaces 207
3DES
Address TSO 904
cipher 326
ADDSTREAM 724
3F
ADMSF
block 308
block 308
deblock 356
deblock 356
3way 625
adrspace 288
407 Accounting Machine 692
ADRSPACE 288
4224 402 AES
4F cipher 326
Code point 23 AFTER
4KBLOCK chop 324
reader 527 insert 434
500 character limit on REXX clause 240 pick 503
split 564
Index 929
DMSOPBLK EDF
DMSOPBLK 5DMSOPBLK 796 fmtfst 393
DMSOPDBK Glossary definition 921
DMSOPDBK 5DMSOPDBK 796 elastic 374
DMSPIPE ELSE
DMSPIPE 5DMSPIPE 838 spec 697
DMSPQI 569 ELSEIF
ASMSQL 569 spec 697
dmsstoar ELSIF 698
DMSVALDT emsg 375
DMSVALDT 5DMSVALDT 587, 583 EMSGSF4
DO tcpclient 613
spec 698 ENAMETOOLONG 790
DONE ENCRYPT
spec 698 cipher 326
DONE 698 end character 74, 19
DOS
Example of use 19, 77, 81
Glossary definition 921
Glossary definition 921
Dotted-decimal IP address 613
End-of-file 90
Dotted-decimal network address 613
Backwards propagation 90
Drift 182
endChar 222
Drifting sign 720, 183
ENDCHAR 235
Driver ENDIF
Glossary definition 921 spec 697
DROP
ENDIF 697
filterpack 390
Enhanced Disk Format
drop
Glossary definition 921
Example of use 56, 79, 82
ENOENT 790
drop 372
ENOTDIR 790
DSN (DB2 Subsystem ID) 567
DSNAME
Entry point table 897
listcat 449 Enumerate
dsname 232 Glossary definition 921
<mvs 262 Enumerated scalar 921
EOF
>mvs 271
block 308
listispf 452
console 339
listpds 453
deblock 356
readpds 529
spec 696, 700, 715
writepds 676
EOF 180
Dual speed carriage 148
DUMPLOAD
eofback 376
EOFREPORT 727
DUMPLOAD 5DUMPLOAD 845
EOTOK
duplicate 373
tape 610
ERASE 41
E escape 376
ESCAPE 235
E
Picture character 721 Escape character 120
spec 713 Glossary definition 921
EACH ESM
crc 345 <sfs 264
EBADF 790 <sfsslow 266
EBCDIC >>sfs 281
COPYFILE Option 888 >>sfsslow 283
httpsplit 429 >sfs 274
urldeblock 646 sfsback 542
ECMODE
sfsrandom 546
ECMODE 5ECMODE 834, 834 sfsupdate 548
FIND fn 228
XEDIT Subcommand 44 <mdsk 261
find <sfs 264
Example of use 14, 42, 53, 66, 67, 68 <sfsslow 266
find 391 >>mdsk 277
FINDANY >>sfs 281
REXX 517 >>sfsslow 283
FIRST >mdsk 268
combine 334 >sfs 274
drop 372 listpds 453
spec 699, 700, 715 mdsk 473
take 608 mdskback 475
unique 641 mdskrandom 476
update 645 mdskslow 478
First stage mdskupdate 479
Glossary definition 921 members 481
fitting 392 pdsdirect 501
FIXED
rexx 532
>>mdsk 277
sfsback 542
>>sfs 281
sfsrandom 546
>>sfsslow 283
sfsupdate 548
>mdsk 268
state 581
>sfs 274
statew 585
block 308
xedit 678
deblock 356
FOR
mdsk 473
COPYFILE Option 888
mdskslow 478
FORCERW 757
mdskupdate 479 FORMAT
pack 497 sfsdirectory 544
sfsupdate 548 state 581
FLOOR
statew 585
lookup 460
Formatting a pipeline 25
fltpack 894
Formatting comments 26
Flush
Forms control buffer 148
Glossary definition 921
Glossary definition 921
fm 229
FPEP 242
<mdsk 261
FPL global variable group 838
>>mdsk 277
FPLASIT 289, 599
listpds 453 FPLEPTBL
mdsk 473 FPLEPTBL 5FPLEPTBL 897
mdskback 475 FPLGREXX
mdskrandom 476 FPLGREXX 5FPLGREXX 900
mdskslow 478 FPLHELP 23
mdskupdate 479 FPLHLASX
members 481 FPLHLASX 5FPLHLASX. 423
FPLKWDTB
pdsdirect 501
FPLKWDTB 5FPLKWDTB 899
rexx 532 FPLMSGTB
state 581 FPLMSGTB 5FPLMSGTB 898
statew 585 FPLNXG 896
xedit 678 TEXT 896
fmode 229 FPLNXH 896
>mdsk 268 TEXT 896
fmtfst 393 FPLOM 599
FMTPCBIN MACLIB 599
REXX 359 FPLREXX 23, 97
GROUP HOST
tcpclient 613 Glossary definition 921
tcpdata 617 help 412
Group configuration variable 840 Host command processors 250
Host commands from REXX filters 114
Host interface 2
H Glossary definition 921
HARDEN
Host-primary address space 207
>>sfsslow 283
hostbyaddr 425
sfsupdate 548
hostbyname 427
HCPSGIOP 521, 522, 649, 769 HOSTID
HEADING
tcpclient 613
filterpack 390
tcplisten 621
REXX 733
udp 638
HELLO
hostid 428
REXX 97, 732
hostname 429
HELLO2
httpsplit 429
EXEC 533 HX
Help 24 HX 5HX 191
help 412
hex 222
adrspace 288 I
filedescriptor 385 IBM 407 Accounting Machine 692
filetoken 387 ID
instore 436 spec 702
IDENTIFIED BY
outstore 493
sql 566
polish 512
identifier 223
runpipe 538
polish 510, 512
starmon 574
spec 699, 707, 712, 715, 716, 718
storage 591
HEXADECIMAL structure 594, 595, 596
IDENTIFY
polish 510
mapmdisk 468
hexString 222
IEANTRT 808
buildscr 317 IEBCOPY
crc 345 Glossary definition 921
mapmdisk 469, 470 iebcopy 430
storage 591 IEWBFDAT 818
stsi 601 IF
tape 610 spec 697
hfs 415, 264, 280 if 431
hfsdirectory 416 IF 697
hfsquery 417 IFEMPTY
hfsreplace 418, 273 literal 455
hfsstate 419 IFEND 698
hfsxecute 420 Ignored 692
HLASM 785 Ignoring case 128
hlasm 422 Ignoring case in comparisons 57
IKJCT441
hlasmerr 424
HOLD IKJCT441 5IKJCT441 662, 660, 657, 653, 590, 537
IKJEFTSR
reader 527
hole IKJEFTSR 5IKJEFTSR) 794
Example of use 425 immcmd 433
hole 424 Implied REXX Filters 117
INCLUDE
HONK
INCLUDE 5INCLUDE, 896
EXEC 402 INCLUSIVE
HONK2 deblock 356
EXEC 425 frlabel 395
predselect ¬
Example of use 85 ( 293
READ 172
Q READCARD 890
QCPSETS READER
EXEC 659 xab 677
qpdecode 522 reader
qpencode 523 Example of use 35, 36, 65, 79
qsam 524 reader 527
Qualifier 177 READFULL
qualifier 224 fullscr 398
QUALIFY readpds
spec 699 Example of use 6
QUALIFY 235 readpds 528, 482
QUERY READSTOP
adrspace 288 spec555, 699
query 525 READSTOP 172
QUERY NAMES 7 READTO 734
QUIET
READY 768, 769
hfsstate 419 REALUSER
state 581, 584 REXX 15
statew 585 RECEIVE
Quietly vmclisten 668
Glossary definition 923 RECFM
quotedString 224 COPYFILE Option 888
<oe 264 RECORD
>>oe 280 Glossary definition 923
>oe 273 spec 718
spec 712, 718 Record delay 247, 89
Record descriptor word
Glossary definition 923
R Record descriptor words 62
RANDOM
RECORDS 343
filetoken 387 RECURSIVE
random 526 sfsdirectory 544
RANGE
Redefine connector 238
joincont 444 reentrant environments 115
XEDIT Subcommand 680 Referenced 79
range 224 Referencing a label 239
change 321 REFLIN
deblock 356 crc 345
filetoken 387 REFLOUT
ispf 439 crc 345
mapmdisk 470 REFRESH
mdiskblk 472 XEDIT Subcommand 685
mdskrandom 476 Register 176
RELEASE
sfsrandom 546
spec 555, 707 deal 352
structure 596 sql 566
update 645 Remembering past data 82
REMOVE
RDROP
alserv 294
REXX 741
RDW
mapmdisk 468
deblock 356 Repairing LIST3820 360
REPEATABLE
Re-entrant REXX environments on MVS 119
READ
sql 566
REPORT
mdiskblk 472
utf 649
spec 555, 699
Repository configuration variable 840
storage 591
SQLQ3 stem
EXEC 139 Example of use 36, 37, 127, 611
sqlselect stem 588
Example of use 20, 137 stembuild 465
sqlselect 572 Stemmed array 36
SQRT Glossary definition 923
spec 716 Stepwise refinement 7
SQUISH STOP
mqsc 484 collate 331
Stack 340 combine 334
stack deal 352
Example of use 425 fanout 380
stack 572 gather 408
Stage 8, 2 pipcmd 507
Glossary definition 923 printmc 519
Stage separator 23 punch 521
Glossary definition 923 spec 555, 696
Stage separator character 23 uro 647
STAGENUM 741 STOP 235
stageSep 225 STOPERROR 236
STAGESEP 235 Stopping an infinite pipeline 159
Stall 88 STORAGE
Glossary definition 923 spec 718
Stallaction configuration variable 841 storage 591
Stalled 88 STOW 801
Stallfiletype configuration variable 841 strasmfind 302
STANDARD strasmnfind 304
aftfst 291 Stream
fmtfst 393 Glossary definition 923
sfsdirectory 544 stream 225
state 581 fanin 377
statew 585 rexx 532
timestamp 625 spec 555, 699
starmon 573 stream identifier 102
starmsg Stream identifier
Example of use 44 Glossary definition 923
starmsg 575 STREAMID
starsys 578 deal 352
state gather 408
Example of use 70, 71, 72 streamID 225
state 584, 580 STREAMNUM 742
statew 585 STREAMSTATE 742
STATISTICS strfind
tcpclient 613 Example of use 65
tcpdata 617 strfind 391
tcplisten 621 strfrlabel 395
udp 638 STRICT
STATS 615, 619, 621, 638 faninany 378
STAX 788 gate 407
STDDEV lookup 460
spec 716 STRING
stderr 386 aftfst 291
STDERRMEAN block 308
spec 716 chop 324
stdin 386 deblock 356
stdout 386 fbawrite 383
fmtfst 393
TMAXSTR
   REXX 730
TMSG
   REXX 731
TO
   pick 503
   sql 566
   strip 592
   utf 649
   xlate 681
TO
   Example of use 67, 68
TO16BIT
   3277bfra 688
TOD
TODCLOCK
   spec 555, 702
Token
   Glossary definition 924
Tokenise
   Glossary definition 924
tokenise 627
tokenize 627
TOLABEL
   COPYFILE Option 51, 888
tolabel
   Example of use 44, 55, 56
   tolabel 627
TOLOAD
   rexxvars 534
   varfetch 656
Topology diagram 19
TOROWCOL
   3277bfra 688
totarget 629
TPOS
   REXX 741
TRACE
   runpipe 538
TRACE 236
trackblock 630
TRACKCOUNT
   lookup 460
trackdeblock 631
trackexpand 636
TRACKING
   var 651
trackread 632
tracksquish 633
trackverify 633
trackwrite 634
trackxpand 635
TRAILING
   joincont 444
   strip 592
TRAILING
   Example of use 59
TRANS
   COPYFILE Option 888
   translate 681
Transparent
   Glossary definition 924
TRES
   REXX 735
TREXXC
   REXX 736
trfread 636
TRSOURCE
   trfread 636
TRUNC
   COPYFILE Option 888
TRUNCATE
   cipher 326
   vchar 663
truncate 324
TSO
   Address 904
   Glossary definition 924
   tso 637
TSO Logon Procedure 23
TSTRNO
   REXX 742
TSTRST
   REXX 743
TXTLIB 363
   DFSRTLIB 363
TYPE 890
   spec 718
Type 1 filter package 894
Type 2 filter package 894

U

U2C(8)
   spec 704
udp 637
Unconnected pipeline specification 245
UNIQUE
   sort 552
unique
   Example of use 56, 63
   unique 641
Unit record
   Glossary definition 924
Unlimited
   Glossary definition 924
UNPACK
   COPYFILE Option 888
unpack
   Example of use 35, 56, 62, 69, 80
   unpack 643
untab 644
UPCASE
   COPYFILE Option 888
X

X
   buildscr 317
X2D
   spec 716
X2F
   spec 716
X2U
   spec 716
XAB 149
xab 677
XEDIT
   Glossary definition 924
   PIPXSAMP 241
xedit
   Example of use 65
   xedit 678
XEDIT Macro
   Glossary definition 924
XEDIT Subcommand
   ALL 44
   CHANGE 47, 321, 323
   DISPLAY 680
   EXTRACT 652
   FIND 44
   RANGE 680
   REFRESH 685
   SET MSGLINE 685
   ZONE 47
xeditmsg 685
xithlp03 414
xlate
   Example of use 44, 45, 46, 62, 63
   xlate 681
XMASTREE
   EXEC 500
XMIT
   XMITted to you directly from SPOOL 36
XMITMSG
   XMITMSG 109
xmsg 685
xorc 225
   >>mvs 279
   >>sfs 281
   >>sfsslow 283
   >mvs 271
   >sfs 274
   block 308
   c14to38 348
   collate 331
   deblock 356
   fblock 384
   filetoken 387
   lookup 460
   merge 483
   overlay 494
   pad 498
   pick 503
   sort 552
   space 554
   spec 555, 699
   storage 591
   structure 596
   tcpclient 613
   tcpdata 617
   unique 641
   vchar 663
   wildcard 674
   writepds 676
   xrange 686
XOROUT
   crc 345
xpndhi 686
xrange 686, 226
   chop 324
   split 564
   strip 592
   xlate 681
   xrange 686
xtract 482, 781

Y

Y
   Picture character 720

Z

Z
   Picture character 720
z/OS format V 62
ZERO
   mapmdisk 468
ZEROS
   locate 457
   nlocate 486
ZONE
   casei 320
   XEDIT Subcommand 47
zone
   Example of use 67
   zone 687
ZONE2DEC
   REXX 684
Zoned decimal 683
Communicating Your Comments to IBM
CMS/TSO Pipelines
Author’s Edition
1.1.12
Publication No. SL26-0018-06
If you especially like or dislike anything about this book, please use one of the methods listed below
to send your comments to IBM. Whichever method you choose, make sure you send your name,
address, and telephone number if you would like a reply.
Feel free to comment on specific errors or omissions, accuracy, organization, subject matter, or
completeness of this book. However, the comments you send should pertain to only the information
in this manual and the way in which the information is presented. To request additional publications,
or to ask questions or make comments about the functions of IBM products or systems, you should
talk to your IBM representative or to your IBM authorized remarketer.
When you send comments to IBM, you grant IBM a nonexclusive right to use or distribute your
comments in any way it believes appropriate without incurring any obligation to you.
If you are mailing a readers' comment form (RCF) from a country other than the United States, you
can give the RCF to the local IBM branch office or IBM representative for postage-paid mailing.
If you prefer to send comments by mail, use the RCF at the back of this book.
If you prefer to send comments by FAX, use this number:
– United States and Canada: 1-845-432-9405
– Other Countries: +1 845 432 9405
If you prefer to send comments electronically, use this network ID:
[email protected]
Overall, how satisfied are you with the information in this book?
   (Very Satisfied / Satisfied / Neutral / Dissatisfied / Very Dissatisfied)
   Overall satisfaction
How satisfied are you that the information in this book is:
   (Very Satisfied / Satisfied / Neutral / Dissatisfied / Very Dissatisfied)
   Accurate
   Complete
   Easy to find
   Easy to understand
   Well organized
   Applicable to your tasks
Name
Address
Company or Organization
Phone No.
Readers' Comments — We'd Like to Hear from You
SL26-0018-06
IBM Corporation
MHVRCFS, Mail Station P181
2455 South Road
Poughkeepsie, New York 12601-5400
SL26-0018-06
Spine information:
CMS/TSO Pipelines Author’s Edition 1.1.12