This edition applies to version 9, release 4, modification level 0 of IBM Workload Scheduler (program number
5698-WSH) and to all subsequent releases and modifications until otherwise indicated in new editions.
© Copyright IBM Corporation 2001, 2016. © Copyright HCL Technologies Limited 2016, 2018
Contents

Figures  vii
Chapter 6. IBM BigInsights jobs  15
    Job log output  37
Chapter 13. Hadoop Map Reduce jobs  73
    Customizing IBM Workload Scheduler to run Informatica PowerCenter jobs  103
    Configuring the agent for SSL  104
    Scheduling and submitting job streams for PowerCenter jobs  105
    Monitoring IBM Workload Scheduler jobs that run Informatica PowerCenter workflows  105
    Procedure for restarting an Informatica PowerCenter workflow from the failed task  107
    Mapping PowerCenter workflow status to job status  108
    Known problems and workarounds  108
        Incorrect worklet status displayed in the job log  108
        Cannot submit jobs after a Web Services Hub restart  109
Chapter 20. Oracle E-Business Suite jobs  111
    Business scenario  111
    Defining an IBM Workload Scheduler job to schedule Oracle E-Business Suite applications  112
        Job definition for Oracle E-Business Suite jobs  112
        Defining IBM Workload Scheduler jobs to run Oracle E-Business Suite jobs by using the Dynamic Workload Console  114
        Scheduling and submitting job streams for Oracle E-Business Suite jobs  115
        Customizing IBM Workload Scheduler to run Oracle E-Business Suite jobs  116
        Mapping between IBM Workload Scheduler job statuses and Oracle E-Business Suite application statuses  117
        Job log output  117
        Analyzing the Oracle E-Business Suite job properties  118
Chapter 21. Salesforce jobs  119
Part 3. Access methods  125
Chapter 22. Installing and configuring the access methods  127
    Setting options for the access methods  127
        Option value inheritance  130
        Editing the options files from the Dynamic Workload Console  130
        Using the Option Editor  131
    Defining supported agent workstations  134
        Creating a workstation using the Dynamic Workload Console  134
        Creating a workstation using the command line  135
        Defining workstations for end-to-end scheduling  136
    Defining jobs for supported agents  138
        Defining jobs with the Dynamic Workload Console  139
        Defining jobs using the command line  139
        Defining jobs for end-to-end scheduling  140
    Submitting jobs  141
Chapter 23. Access method for PeopleSoft  143
    Features  143
    Roles and responsibilities  143
    Scheduling process for the PeopleSoft supported agents  144
    PeopleSoft job tracking in IBM Workload Scheduler  144
    Security  144
    Configuring the PeopleSoft access method  144
        Defining the configuration options  144
        Creating a batch processing ID in PeopleSoft  147
        Configuring the ITWS_PSXA PeopleSoft project  148
        Uploading the PeopleSoft project  149
    Defining PeopleSoft jobs  151
        Defining PeopleSoft jobs in IBM Workload Scheduler  151
        Configuring the job status mapping policy  153
Chapter 24. Access method for z/OS  155
    Features  155
    Roles and responsibilities  155
    Installing, configuring, and uninstalling the z/OS gateway  156
        Installing  156
        Configuring  158
        Uninstalling  159
    Additional information  159
        Gateway software components  160
        IEFU84 Exit  160
        Security  160
        Startup  160
        SYSTSIN variables  161
        z/OS gateway version  163
    Downloading z/OS gateway fixes from FTP  163
    Locating product support  166
    Configuring the z/OS access method  166
        Defining the configuration options  166
    Defining jobs in z/OS  167
        Defining z/OS jobs in IBM Workload Scheduler  168
    Reference information  171
        Technical overview  171
        Diagnostic information  175
        Troubleshooting  176
Chapter 25. Common serviceability for the access methods  179
    The return code mapping feature  179
        Parameters  179
        Creating a return code mapping file  179
        Return code mapping for psagent  181
        Return code mapping for r3batch  181
    Configuring the tracing utility  184
        Customizing the .properties file  185
        Configuration file example for the SAP access method  186
Part 4. Integration with SAP  187
Chapter 29. Scheduling jobs on IBM Workload Scheduler from SAP Solution Manager  353
    Registering the master domain manager on SAP Solution Manager  353
    Scheduling  356
        Scheduling jobs directly  356
        Scheduling from job documentation  357
    Monitoring  358
    Setting up to log traces on WebSphere Application Server  359
Part 5. Appendixes  361
Notices  363
    Trademarks  365
    Terms and conditions for product documentation  365
Notices and information  366
    Libmsg  367
    Apache Jakarta ORO  367
    ISMP Installer (InstallShield 10.50x)  368
    JXML CODE  368
    InfoZip CODE  369
    HSQL Code  370
    HP-UX Runtime Environment, for the Java 2 Platform  371
Index  375
viii IBM Workload Automation: Scheduling Applications with IBM Workload Automation
Tables

1. Job plug-ins  1
2. Access methods  2
3. Required and optional attributes for the definition of an IBM BigInsights job  15
4. Required and optional attributes for the definition of an IBM Cloudant job  19
5. Required and optional attributes for the definition of jobs running IBM Cognos reports  26
6. Examples to use for parameters of date, time, and time stamp formats  31
7. Properties for running IBM Cognos reports  34
8. Mapping between IBM Workload Scheduler job statuses and IBM Cognos report statuses  37
9. Required and optional attributes for the job definition of IBM InfoSphere DataStage jobs  42
10. Properties to run IBM InfoSphere DataStage jobs  48
11. Mapping between IBM Workload Scheduler and IBM InfoSphere DataStage job statuses  49
12. Required and optional attributes for the definition of an IBM Sterling Connect:Direct job  53
13. Required and optional attributes for the definition of an IBM WebSphere MQ job  63
14. Required and optional attributes for the definition of a Hadoop Distributed File System job  69
15. Required and optional attributes for the definition of a Hadoop Map Reduce job  73
16. Required and optional attributes for the definition of an Oozie job  77
17. Required and optional attributes for the definition of an Apache Spark job  81
18. Required and optional attributes for the definition of an Amazon EC2 job  85
19. Required and optional attributes for the definition of an IBM SoftLayer job  89
20. Required and optional attributes for the definition of a Microsoft Azure job  93
21. Required and optional attributes for the job definition of PowerCenter jobs  98
22. Mapping between IBM Workload Scheduler job statuses and PowerCenter workflow statuses  108
23. Required and optional attributes for the job definition of Oracle E-Business Suite jobs  112
24. Properties to run Oracle E-Business Suite jobs  116
25. Mapping between IBM Workload Scheduler and Oracle E-Business Suite application statuses  117
26. Required and optional attributes for the definition of a Salesforce job  121
27. How to complete the extended agents definition  135
28. Roles and responsibilities in Access method for PeopleSoft  143
29. Psagent access method options  145
30. Task string parameters for PeopleSoft jobs  152
31. Relationship between the run status, the distribution status, and the IBM Workload Scheduler job status  153
32. Relationship between the run status and the IBM Workload Scheduler job status  154
33. Roles and responsibilities in Access method for z/OS  155
34. SYSTSIN variables  161
35. File characteristics for obtaining the gateway fix pack files by FTP  164
36. File characteristics for the LOADLIB file after receiving it  165
37. File characteristics for the SAMPLIB file after receiving it  165
38. Access method for z/OS access method options  167
39. JES job states with respect to IBM Workload Scheduler  172
40. IBM Workload Scheduler for z/OS operation states with respect to IBM Workload Scheduler  174
41. IBM Workload Scheduler for z/OS operation occurrence states with respect to IBM Workload Scheduler  174
42. Job states and return codes for the PeopleSoft access method  181
43. IBM Workload Scheduler for SAP features  190
44. Roles and responsibilities in IBM Workload Scheduler for SAP  194
45. Access keywords for activities with SAP scheduling objects  196
46. ABAP/4 modules installed  202
47. ABAP/4 modules contents  202
48. r3batch global configuration options  209
49. r3batch local configuration options  210
50. r3batch common configuration options  212
51. Placeholders and counters for extended variants  231
52. Task string parameters for SAP jobs  236
53. Status transitions in IBM Workload Scheduler (internal status) and the corresponding SAP R/3 status  244
54. Task string parameters for SAP jobs (dynamic definition)  252
55. Supported attributes for ABAP step definition  258
56. Supported attributes for external programs and external commands step definition  260
57. Placeholders for job interception template files  276
58. Task string parameters for SAP R/3 jobs  281
59. Actions performed when you rerun a process chain job  287
60. Parameters to define an SAP internetwork dependency  302
61. Internetwork dependency definition and possible resolution  303
62. History table of the SAP events raised  307
63. SAP event matching with the event rule defined  307
64. History table of the SAP events raised  307
65. SAP events matching with the event rule defined  308
66. IBM Workload Scheduler fields used to define event rules based on IDocs  310
67. IBM Workload Scheduler fields used to define correlation rules for IDoc events  311
68. Parameters of IDOCEventGenerated event type  312
69. Standard outbound IDoc statuses  313
70. Standard inbound IDoc statuses  314
71. Mapping between root context MTE name and IBM Workload Scheduler fields  321
72. Mapping between summary context MTE name and IBM Workload Scheduler fields  321
73. Mapping between object MTE name and IBM Workload Scheduler fields  321
74. Mapping between attribute MTE name and IBM Workload Scheduler fields  322
75. Alert properties for correlations  322
76. SAP R/3 supported code pages  328
77. Miscellaneous troubleshooting items  329
78. Required and optional attributes for the job definition of SAP PI Channel jobs  341
79. Mapping between IBM Workload Scheduler and SAP PI Channel job statuses  343
80. Required and optional attributes for the definition of a SAP BusinessObjects BI job  346
81. Properties for the smseadapter.properties file  354
For information about the new or changed functions in this release, see IBM
Workload Automation: Overview, section Summary of enhancements.
For information about the APARs that this release addresses, see the IBM Workload
Scheduler Release Notes at http://www-01.ibm.com/support/docview.wss?rs=672
&uid=swg27048863 and the Dynamic Workload Console Release Notes at
http://www-01.ibm.com/support/docview.wss?rs=672&uid=swg27048864.
= New or changed content is marked with revision bars. For the PDF format, new or
= changed V9.4 content is marked in the left margin with a pipe (|) character and
= new or changed V9.4FP1 content is marked with an equal sign (=).
This publication is intended for job schedulers who want to run and control
application jobs by using IBM Workload Scheduler. Readers of this publication
should have some knowledge of:
• IBM Workload Scheduler
• Dynamic Workload Console
• The specific application environment.
Accessibility
Accessibility features help users with a physical disability, such as restricted
mobility or limited vision, to use software products successfully.
With this product, you can use assistive technologies to hear and navigate the
interface. You can also use the keyboard instead of the mouse to operate all
features of the graphical user interface.
For full information, see the Accessibility Appendix in the IBM Workload Scheduler
User's Guide and Reference.
Technical training
Cloud & Smarter Infrastructure provides technical training.
Support information
IBM provides several ways for you to obtain support when you encounter a
problem.
If you have a problem with your IBM software, you want to resolve it quickly. IBM
provides the following ways for you to obtain the support you need:
• Searching knowledge bases: You can search across a large collection of known problems and workarounds, Technotes, and other information.
• Obtaining fixes: You can locate the latest fixes that are already available for your product.
• Contacting IBM Software Support: If you still cannot solve your problem, and you need to work with someone from IBM, you can use a variety of ways to contact IBM Software Support.
For more information about these three ways of resolving problems, see the
appendix about support information in IBM Workload Scheduler: Troubleshooting
Guide.
How to read syntax diagrams
Syntax diagrams help to show syntax in a graphical way.
Throughout this publication, syntax is described in diagrams like the one shown
here, which describes the SRSTAT TSO command:
[Syntax diagram for the SRSTAT TSO command. The diagram shows the AVAIL keyword (values KEEP, RESET, NO, YES; default KEEP), the DEVIATION keyword (values KEEP, amount, RESET; default KEEP), the QUANTITY keyword (values KEEP, amount, RESET; default KEEP), the CREATE keyword (values YES, NO; default YES), and the TRACE keyword (value trace level; default 0).]
Read the syntax diagrams from left to right and from top to bottom, following the
path of the line.
►► STATEMENT ►◄
      optional item

• An arrow returning to the left above the item indicates an item that you can repeat. If a separator is required between items, it is shown on the repeat arrow.
• If you can choose from two or more items, they appear vertically in a stack.
  – If you must choose one of the items, one item of the stack appears on the main path.
  – If choosing one of the items is optional, the entire stack appears below the main path:

    ►► STATEMENT ►◄
          optional choice 1
          optional choice 2

  – A repeat arrow above a stack indicates that you can make more than one choice from the stacked items:

    ►► STATEMENT ▼ ►◄
          optional choice 1
          optional choice 2
          optional choice 3

• Parameters that are above the main line are default parameters:

          default
    ►► STATEMENT ►◄
          alternative

[The remaining fragments of the diagram show stacked options, option 1 and option 2, each with a default value above the line and an alternative value in parentheses below it.]
Important: The plug-ins and access methods listed are included with IBM
Workload Scheduler, but to be entitled to use them, you must purchase a separate
chargeable component in addition to IBM Workload Scheduler or purchase the IBM
Workload Scheduler for z/OS Agent, which includes the plug-ins and access
methods. See the IBM Workload Scheduler Download document for details:
http://www-01.ibm.com/support/docview.wss?rs=672&uid=swg24042843. For
information about the supported versions of the plug-ins and access methods, run
the Data Integration report and select the Supported Software tab.
You can extend job scheduling capabilities to external applications with IBM Workload Scheduler plug-ins, taking advantage of all the IBM Workload Scheduler functions to manage the operations and tasks performed by the external applications.
Table 1. Job plug-ins (continued)

Job plug-in               More information
Oracle E-Business Suite   Chapter 20, “Oracle E-Business Suite jobs,” on page 111
Salesforce                Chapter 21, “Salesforce jobs,” on page 119
SAP BusinessObjects BI    “SAP BusinessObjects BI jobs” on page 344
SAP PI Channel            “SAP Process Integration (PI) Channel jobs” on page 339
You can use access methods to extend the job scheduling capabilities of IBM
Workload Scheduler to other systems and applications. Access methods run on:
• Extended agents to extend static scheduling capability.
• Dynamic agents and IBM Workload Scheduler for z/OS Agents to extend dynamic scheduling capability.
For more details about which workstations can run the access methods, see
Chapter 1, “Supported agent workstations,” on page 3.
An access method interacts with the external system through either its command
line or the Dynamic Workload Console. IBM Workload Scheduler includes the
following access methods:
Table 2. Access methods

Access method                More information
SAP R/3 (r3batch)            “Configuring the SAP R/3 environment” on page 196
PeopleSoft (psagent)         Chapter 23, “Access method for PeopleSoft,” on page 143
z/OS® (mvsjes and mvsopc)    Chapter 24, “Access method for z/OS,” on page 155
identifies the access method. An access method is a program that is run by
the hosting workstation whenever IBM Workload Scheduler submits a job
to an external system.
Jobs are defined for an extended agent in the same manner as for other
IBM Workload Scheduler workstations, except for any job attributes that
depend on the external system or application.
To launch and monitor a job on an extended agent, the host runs the access
method, passing to it job details as command line options. The access
method communicates with the external system to launch the job and
returns the status of the job. To launch a job in an external environment,
IBM Workload Scheduler runs the extended agent access method providing
it with the extended agent workstation name and information about the
job. The method looks at the corresponding file named
XANAME_accessmethod.opts (where XANAME is the name of the extended
agent workstation) to determine which external environment instance it
connects to. The access method can then launch jobs on that instance and
monitor them through completion, writing job progress and status
information in the standard list file of the job.
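As a sketch of the naming convention only (the option names that each access method actually supports are listed in the corresponding chapters), an options file for a hypothetical extended agent workstation named SAP001 that uses the r3batch access method would be called SAP001_r3batch.opts:

```
# SAP001_r3batch.opts
# Options file read by the r3batch access method for the
# extended agent workstation SAP001. The entries below are
# illustrative placeholders, not actual r3batch options.
some_option=some_value
another_option=another_value
```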
Extended agents can also be used to run jobs in an end-to-end
environment, where job scheduling and monitoring is managed from an
IBM Workload Scheduler for z/OS controller.
The following sections provide an overview of creating job definitions and job
streams, submitting them to run, monitoring them, and then analyzing the job log
and job output. These procedures can be applied to any of the supported job
plug-ins.
For information about the supported versions of the job plug-ins, generate a
dynamic Data Integration report from the IBM® Software Product Compatibility
Reports web site, and select the Supported Software tab: Data Integration.
Tip: Many of the IBM Workload Scheduler job plug-ins are illustrated in helpful how-to demonstration videos available on the Workload Automation YouTube channel.
Chapter 2. Defining a job
Define IBM Workload Scheduler jobs to run business tasks and processes defined
in an external application.
Define an IBM Workload Scheduler job to run tasks or processes that you have defined in external applications. Using the IBM Workload Scheduler job plug-in for your external application, you can define, schedule, and run jobs to automate your business.
$jobs
[workstation#]jobname
  {scriptname filename streamlogon username |
   docommand "command" streamlogon username |
   task job_definition}
  [description "description"]
  [tasktype tasktype]
  [interactive]
  [recovery
    {stop
       [after [workstation#]jobname]
       [abendprompt "text"]
    |continue
       [after [workstation#]jobname]
=      [abendprompt "text"]
=   |rerun [same_workstation]
=      [[repeatevery hhmm] [for number attempts]]
       [after [workstation#]jobname]
       |[after [workstation#]jobname]
        [abendprompt "text"]}]
Use the task argument, specifying the XML syntax for the specific job
plug-in. See the section for each job plug-in for the specific XML syntax.
For a detailed description of the XML syntax, see the section about job
definition in User's Guide and Reference.
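For illustration, here is a minimal sketch of a job definition that uses the docommand form of the syntax shown above; the workstation, job, and user names are hypothetical:

```
$jobs
MYAGENT#SAMPLE_JOB
docommand "echo 'nightly extract complete'"
streamlogon wauser
description "Sample job that runs a simple command"
recovery stop
```

A job for a plug-in would instead use the task argument with the XML syntax of that plug-in.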
For some jobs, a properties file can be generated and used to provide the values for some of the properties defined in the job definition.

The properties file is generated automatically either when you perform a "Test Connection" from the Dynamic Workload Console in the job definition panels, or when you submit the job to run for the first time. After the file has been created, you can customize it. This is especially useful when you need to schedule several jobs of the same type: you can specify the values in the properties file and avoid having to provide information, such as credentials, for each job. You can override the values in the properties file by defining different values at job definition time.
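As a hypothetical illustration (the actual property names depend on the job plug-in), a generated properties file might hold connection values that would otherwise have to be repeated in every job definition:

```
# Hypothetical properties file for a plug-in job type.
# Property names are placeholders; see the plug-in chapters
# for the properties each job type actually supports.
hostname=bigdata.example.com
port=8443
protocol=https
user=wauser
```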
After you define an IBM Workload Scheduler job, add it to a job stream with all the necessary scheduling arguments and submit it to run. After submission, when the job is running (EXEC status), you can kill the IBM Workload Scheduler job if necessary. For some job plug-ins, this action is converted into the corresponding action in the plug-in application. Refer to the specific plug-in section for details about the effect that the kill action has in the application.
z/OS: For z/OS environments, use the Dynamic Workload Console or the ISPF application.
How to submit a job stream using the Dynamic Workload Console
To submit a job or job stream to run according to the schedule defined, see
the section about submitting workload on request in production in
Dynamic Workload Console User's Guide. For distributed environments only,
see also the section about quick submit of jobs and job streams in Dynamic
Workload Console User's Guide.
How to submit a process (job stream) using Application Lab
To submit a process to run according to the schedule defined for it, see the
section about running a process in Application Lab User's Guide.
How to submit a job stream from the conman command line
To submit a job stream for processing, see the submit sched command. To
submit a job to be launched, see the submit job command. For more
information about these commands see the IBM Workload Scheduler: User's
Guide and Reference.
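For example, from the conman command line (the workstation, job stream, and job names are hypothetical):

```
conman "sbs ACCTG#NIGHTLY_JS"    # submit the NIGHTLY_JS job stream
conman "sbj ACCTG#EXTRACT_JOB"   # submit the EXTRACT_JOB job individually
```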
How to submit your workload using the ISPF application
The workload is defined by creating one or more calendars, defining
applications, creating a long-term plan, and creating a current plan. The
current plan is a detailed plan, typically for one day, that lists the
applications that run and the operations in each application. See the section
about creating the plans for the first time in Managing the Workload for
more information about creating plans.
Chapter 4. Monitoring IBM Workload Scheduler jobs
Monitor IBM Workload Scheduler jobs by using the Dynamic Workload Console,
the command line, Application Lab, or the ISPF application.
Distributed: You monitor distributed jobs by using the Dynamic Workload Console connected to a distributed engine, by using the conman command line, or from Application Lab.
z/OS: You monitor z/OS jobs by using the Dynamic Workload Console connected to a z/OS engine or the ISPF application.
How to monitor jobs by using the Dynamic Workload Console
See the online help or the section about creating a task to monitor jobs in
the Dynamic Workload Console User's Guide.
How to monitor jobs by using conman
See the section about managing objects in the plan - conman in User's
Guide and Reference.
How to monitor jobs by using Application Lab
See the section about monitoring your process in Application Lab User's
Guide.
How to monitor jobs by using the ISPF application
See the section about monitoring the workload in IBM Workload Scheduler
for z/OS Managing the Workload.
Chapter 5. Analyzing the job log
When a job runs, IBM Workload Scheduler creates a job log that you can analyze to verify the job status.
Distributed: For distributed jobs, you analyze the job log by using the Dynamic Workload Console, Application Lab, or the conman command line.
z/OS: For z/OS jobs, you analyze the job log by using the Dynamic Workload Console or the ISPF application.
While the job is running, you can track the status of the job and analyze its properties. In particular, if the job contains variables, you can verify in the Extra Information section the value passed to the variable from the remote system. Some job streams use the variable passing feature: for example, the value of a variable specified in job 1 of job stream A is required by job 2 in the same job stream in order to run.
For more information about passing variables between jobs, see the related section
in the IBM Workload Scheduler on-premises online product documentation in IBM
Knowledge Center.
How to analyze the job log using the Dynamic Workload Console
Before you can access the job log for an individual job, you need to run a
query and list the jobs for which you want to analyze the job log. See the
online help or the section about creating a task to monitor jobs in Dynamic
Workload Console User's Guide. From the list of jobs resulting from the
query, you can either download the job log, or view the job log in the job
properties view. Select the job for which you want to analyze the job log
and click More Actions > Download Job Log or More Actions >
Properties from the toolbar.
How to analyze the job log using Application Lab
In Application Lab, a process is the equivalent of a job stream. From the
list of processes displayed for the selected process library, select a process
and then click the History tab. A run history for the process is displayed.
Select a run instance and click Details. A list of steps defined in the
process is displayed. Each step is the equivalent of a job. To view the log
for a step or download the log, select a step and click View Log or
Download Log, as needed.
For more information about Application Lab, see Application Lab User's
Guide.
How to analyze the job log using conman
See the section about the showjobs command in User's Guide and Reference.
How to analyze the job log using the ISPF application
See the section about monitoring the workload in Managing the Workload.
| Chapter 6. IBM BigInsights jobs
|
| You manage IBM BigInsights Workbook data sheets or applications in both a
| distributed and a z/OS environment.
| IBM Workload Scheduler integrates with IBM BigInsights for Hadoop to bring the
| power of Apache Hadoop to the enterprise. With the IBM Workload Scheduler
| plug-in for BigInsights for Hadoop you can:
| • Monitor and control workflows containing IBM BigInsights workbooks and
|   applications that help enterprises find insights into new and emerging types of
|   data.
| • Fully automate IBM BigInsights process execution with calendar- and
|   event-based scheduling, and a single point of control to handle exceptions and
|   automate recovery processes.
| Before you can define IBM BigInsights jobs, you must create a connection between
| the IBM Workload Scheduler agent and the IBM BigInsights server.
| For information about the supported versions of the job plug-ins, generate a
| dynamic Data Integration report from the IBM Software Product Compatibility
| Reports web site, and select the Supported Software tab: Data Integration.
| A description of the job properties and valid values are detailed in the
| context-sensitive help in the Dynamic Workload Console by clicking the question
| mark (?) icon in the top-right corner of the properties pane.
| For more information about creating jobs using the various supported product
| interfaces, see Chapter 2, “Defining a job,” on page 7.
| The following table lists the required and optional attributes for IBM BigInsights
| jobs:
| Table 3. Required and optional attributes for the definition of an IBM BigInsights job
|
| Connection properties - IBM BigInsights server section
|   Hostname    The hostname of the IBM BigInsights server. (Required)
|   Port        The port of the IBM BigInsights server. (Required)
|   Protocol    The protocol for connecting to the IBM BigInsights server.
|               Supported values are http and https.
|   User        The user to be used for accessing the IBM BigInsights server.
|   Password    The password to be used for accessing the IBM BigInsights server.
|
| Connection properties - Retry options section
|   Number of retries          The number of times the program retries performing
|                              the operation.
|   Retry interval (seconds)   The number of seconds the program waits before
|                              retrying the operation. The default value is 30 seconds.
|
| Action properties - Workbook section
|   Workbook    The name and path to an IBM BigInsights workbook. Use this
|               option to run a user-specified workbook. (Required)
|
| Action properties - Application section
|   Application Identifier   The application identifier. Use this option to run an
|                            application. (Required)
|   Execution Name           The user-defined identifier for a specific run of the
|                            application.
|
| You schedule IBM Workload Scheduler IBM BigInsights jobs by defining them in
| job streams. Add the job to a job stream with all the necessary scheduling
| arguments and submit the job stream.
| You can submit jobs by using the Dynamic Workload Console, Application Lab, or
| the conman command line. See Chapter 3, “Scheduling and submitting jobs and job
| streams,” on page 9 for information about how to schedule and submit jobs and
| job streams using the various interfaces.
| After submission, when the job is running and is reported in EXEC status in IBM
| Workload Scheduler, you can stop it if necessary, by using the kill command.
| However, this action is effective only for the Wait for a file action. If you have
| defined different actions in your job, the kill command is ignored.
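For example, assuming a hypothetical workstation AGT1 and a job stream BIJS containing the BigInsights job BIJOB1, you could stop the job from the conman command line with a sketch like the following; substitute your own object names:

```shell
# Stop the running job; effective only if the job defined the
# "Wait for a file" action (other actions ignore the kill command)
conman "kill AGT1#BIJS.BIJOB1"
```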
| If the IBM Workload Scheduler agent stops when you submit the IBM Workload
| Scheduler IBM BigInsights job or while the job is running, as soon as the agent
| becomes available again IBM Workload Scheduler begins monitoring the job from
| where it stopped.
| For information about how to monitor jobs using the different product interfaces
| available, see Chapter 4, “Monitoring IBM Workload Scheduler jobs,” on page 11.
| Job properties
| While the job is running, you can track the status of the job and analyze the
| properties of the job. In particular, in the Extra Information section, if the job
| contains variables, you can verify the value passed to the variable from the remote
| system. Some job streams use the variable passing feature: for example, the value
| of a variable specified in job 1 of job stream A is required by job 2 of the
| same job stream in order to run.
| For example, from the conman command line, you can see the job properties by
| running:
| conman sj <job_name>;props
| The properties are listed in the Extra Information section of the command output.
| For more information about passing variables between jobs, see the related sections
| in User's Guide and Reference.
| For information about how to display the job log from the various supported
| interfaces, see Chapter 5, “Analyzing the job log,” on page 13.
| For example, you can see the job log content by running conman sj
| <job_name>;stdlist, where <job_name> is the IBM BigInsights job name.
| See also
| From the Dynamic Workload Console you can perform the same task as described in
| the Dynamic Workload Console User's Guide, section about Creating job definitions.
| For more information about how to create and edit scheduling objects, see
| the Dynamic Workload Console User's Guide, section about Designing your Workload.
| Prerequisites
| For information about the supported versions of the job plug-ins, generate a
| dynamic Data Integration report from the IBM Software Product Compatibility
| Reports web site, and select the Supported Software tab: Data Integration.
| Before you can define IBM Cloudant jobs, you must sign up on IBM Cloudant and
| create an account.
| A description of the job properties and valid values are detailed in the
| context-sensitive help in the Dynamic Workload Console by clicking the question
| mark (?) icon in the top-right corner of the properties pane.
| For more information about creating jobs using the various supported product
| interfaces, see Chapter 2, “Defining a job,” on page 7.
| The following table lists the required and optional attributes for IBM Cloudant
| jobs:
| Table 4. Required and optional attributes for the definition of an IBM Cloudant job
|
| Connection section:
| Username
|     The name of the user authorized to access the Cloudant database. If
|     you do not specify this attribute, it is read from the properties file.
| Password
|     The password that is associated with the user authorized to access
|     the Cloudant database. If you do not specify this attribute, it is
|     read from the properties file.
| AccountText
|     The account that was created when you signed up on the Cloudant
|     database. If you do not specify this attribute, it is read from the
|     properties file.
| DatabaseText
|     The Cloudant database that you want to work with. If you do not
|     specify this attribute, it is read from the properties file.
|
| Action section:
|
| Database Action:
| DatabaseOperation
|     The action that you want to run on the Cloudant database: Create,
|     Read, or Delete. (Required)
|
| Database Replication Action:
| TargetDb
|     The target Cloudant database that you want to synchronize with your
|     source Cloudant database. If the target database does not exist, it
|     is created automatically, unless you specify create_target=false in
|     the list of Operation Parameters. (Required)
|
| Document Action:
| DocumentOperation
|     The action that you want to run on the Cloudant database document:
|     Create, Read, Update, or Delete. (Required)
| IdDocument
|     The document identifier. (Required)
| RevDocument
|     The document revision number. For the delete action, it must be
|     equal to the latest revision number. (Required for the update and
|     delete actions)
|
| Attachment Action:
| AttachmentOperation
|     The action that you want to run on the document attachment: Create,
|     Read, Update, or Delete. (Required)
| IdDocument2
|     The identifier of the document to which the attachment refers.
|     (Required)
| RevDocument2
|     The revision number of the document to which the attachment refers.
|     For the delete action, it must be equal to the latest revision
|     number. (Required for the update and delete actions)
| NameAttach
|     The name by which the attachment is associated with the document.
|     For the update action, if the attachment does not exist, it is
|     created automatically. For the create action, if the attachment
|     already exists, it is updated automatically. (Required)
| ContentTypeAttach
|     The attachment content type header. (Required)
| DestinationAttach
|     For the read action, the name of the file where you want to receive
|     the attachment.
|
| Operation Parameters section:
| OperationParameters
|     The list of additional parameters that you can add for the read
|     document and the database replication actions.
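As an illustration of the Operation Parameters section, a database replication action can pass create_target=false to prevent the automatic creation of the target database. A hedged sketch of the corresponding JSDL fragment follows; the inner element name is an assumption and might differ in your version:

```xml
<jsdlcloudant:OperationParameters>
    <!-- Hypothetical element name; pass one parameter per entry -->
    <jsdlcloudant:parameter>create_target=false</jsdlcloudant:parameter>
</jsdlcloudant:OperationParameters>
```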
| You schedule IBM Workload Scheduler IBM Cloudant jobs by defining them in job
| streams. Add the job to a job stream with all the necessary scheduling arguments
| and submit the job stream.
| After submission, when the job is running and is reported in EXEC status in IBM
| Workload Scheduler, you can stop it if necessary, by using the kill command. This
| action also stops the program execution on the IBM Cloudant database.
| Monitoring a job
| If the IBM Workload Scheduler agent stops when you submit the IBM Cloudant
| job, or while the job is running, the job restarts automatically as soon as the agent
| restarts.
| For information about how to monitor jobs using the different product interfaces
| available, see Chapter 4, “Monitoring IBM Workload Scheduler jobs,” on page 11.
| CloudantJobExecutor.properties file
| The properties file is automatically generated either when you perform a "Test
| Connection" from the Dynamic Workload Console in the job definition panels, or
| when you submit the job to run the first time. Once the file has been created, you
| can customize it. This is especially useful when you need to schedule several jobs
| of the same type. You can specify the values in the properties file and avoid
| having to provide information, such as credentials, for each job. You can
| override the values in the properties file by defining different values at
| job definition time.
| For a description of each property, see the corresponding job attribute description
| in Table 4 on page 19.
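As a sketch, a customized CloudantJobExecutor.properties file might look like the following; the property names are assumed to mirror the job attributes in Table 4, and all values are placeholders:

```properties
# Placeholder values; verify the property names against the file that
# the product generates on your agent
Username=myuser
Password=mypassword
AccountText=myaccount-bluemix
DatabaseText=mydb
```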
| Job properties
| While the job is running, you can track the status of the job and analyze the
| properties of the job. In particular, in the Extra Information section, if the job
| contains variables, you can verify the value passed to the variable from the remote
| system. Some job streams use the variable passing feature: for example, the value
| of a variable specified in job 1 of job stream A is required by job 2 of the
| same job stream in order to run.
| For information about how to display the job properties from the various
| supported interfaces, see Chapter 5, “Analyzing the job log,” on page 13. For
| example, from the conman command line, you can see the job properties by
| running:
| conman sj <job_name>;props
| For information about passing job properties, see the topic about passing job
| properties from one job to another in the same job stream instance in the User's
| Guide and Reference.
| The following example shows the job definition for an IBM Cloudant job that
| deletes a document:
| <?xml version="1.0" encoding="UTF-8"?>
| <jsdl:jobDefinition xmlns:jsdl="http://www.abc.com/xmlns/prod/scheduling/1.0/jsdl"
| xmlns:jsdlcloudant="http://www.abc.com/xmlns/prod/scheduling/1.0/jsdlcloudant" name="CLOUDANT">
| <jsdl:application name="cloudant">
| <jsdlcloudant:cloudant>
| <jsdlcloudant:cloudantParameters>
| <jsdlcloudant:Connection>
| <jsdlcloudant:UsernameText>cfalcxx</jsdlcloudant:UsernameText>
| <jsdlcloudant:PasswordText>xxxxxx00</jsdlcloudant:PasswordText>
| <jsdlcloudant:AccountText>54fdc307-1b24-4323-9a91-adc817ac45xx-bluemix
| </jsdlcloudant:AccountText>
| <jsdlcloudant:DatabaseText>mydb</jsdlcloudant:DatabaseText>
| </jsdlcloudant:Connection>
| <jsdlcloudant:Action>
| <jsdlcloudant:ActionButtonGroup>
| <jsdlcloudant:DocumentRadioButton>
| <jsdlcloudant:DocumentOperation>DELETE</jsdlcloudant:DocumentOperation>
| <jsdlcloudant:IdDocument>claudio</jsdlcloudant:IdDocument>
| <jsdlcloudant:RevDocument/>
| </jsdlcloudant:DocumentRadioButton>
| </jsdlcloudant:ActionButtonGroup>
| </jsdlcloudant:Action>
| <jsdlcloudant:Body>
| <jsdlcloudant:DocumentInputGroup>
| <jsdlcloudant:InputDocumentButton>
| <jsdlcloudant:InputDocument/>
| </jsdlcloudant:InputDocumentButton>
| </jsdlcloudant:DocumentInputGroup>
| </jsdlcloudant:Body>
| </jsdlcloudant:cloudantParameters>
| </jsdlcloudant:cloudant>
| </jsdl:application>
| </jsdl:jobDefinition>
| For information about how to display the job log from the various supported
| interfaces, see Chapter 5, “Analyzing the job log,” on page 13.
| For example, you can see the job log content by running conman sj
| <job_name>;stdlist, where <job_name> is the IBM Cloudant job name.
| See also
| From the Dynamic Workload Console you can perform the same task as described in
| the Dynamic Workload Console User's Guide, section about Creating job definitions.
| For more information about how to create and edit scheduling objects, see
| the Dynamic Workload Console User's Guide, section about Designing your Workload.
For information about the supported versions of the job plug-ins, generate a
dynamic Data Integration report from the IBM Software Product Compatibility
Reports web site, and select the Supported Software tab: Data Integration.
Business scenario
A retail company has many shops around the world. Each shop has its own local
database, which stores daily transactions and tracks the number of articles
remaining in stock. Every morning, the central business division of the company
needs to analyze all the reports that show the number of articles sold in every
country, grouped by predefined categories.
The company collects this data by using IBM InfoSphere DataStage and creates the
reports using IBM Cognos. Overnight, the company runs the following processes:
v IBM InfoSphere DataStage jobs to collect data from the local database of each
store and then, using the procedures stored in the central database, to produce
the aggregated data to create the business reports.
v IBM Cognos jobs to create the business reports to be used by the business
analysts.
Both processes are performed manually by an operator. To reduce costs and to
ensure that the SLA requirement of having data available every morning is
satisfied, the company wants to automate the entire process.
Using IBM Workload Scheduler Plug-in for IBM InfoSphere DataStage and for IBM
Cognos, the company can satisfy this objective because the product provides the
plug-in necessary to automate and control the entire process.
z/OS In a z/OS environment, define an IBM Workload Scheduler job to run
an IBM Cognos report by using the Dynamic Workload Console connected to a
z/OS engine.
See Chapter 2, “Defining a job,” on page 7 for more information about creating
jobs using the various interfaces available. Some samples of IBM Cognos report job
definitions are contained in the sections that follow.
To define a job by using the Dynamic Workload Console, perform the following
procedure. See Chapter 2, “Defining a job,” on page 7 for information about
defining jobs with other available interfaces.
Procedure
1. In the console navigation tree, expand Administration > Workload Design and
click Manage Workload Definitions.
2. Select an engine and click Go. The Workload Designer opens.
3. In the Working List panel, select:
z/OS On a z/OS engine:
New > Business Analytics > Cognos
Distributed On a distributed engine:
New > Job Definition > Business Analytics > Cognos
The properties of the job are displayed in the right-hand panel for editing.
4. In the properties panel, specify the attributes for the job definition you are
creating. You can find detailed information about all the attributes in the help
available with the panel. In particular:
In the General panel:
Distributed
Environment:
Enter the name of the IBM Workload Scheduler job that runs
the IBM Cognos report.
Enter the name of the workstation where you installed the IBM
Workload Scheduler agent.
z/OS
Environment:
Enter the name of the partitioned data set where you want to
create the JCL.
Enter the name of the JCL you want to create in the partitioned
data set.
Enter the name of the workstation where you installed the IBM
Workload Scheduler agent.
In the Cognos panel:
For more information about creating jobs using the various supported product
interfaces, see Chapter 2, “Defining a job,” on page 7.
Table 5 on page 26 describes the required and optional attributes for the definition
of jobs to run IBM Cognos reports, together with a description of each attribute.
The following example shows the job definition for an IBM Cognos report with all
the attributes specified:
$JOBS
NC125152#REPFOREUROPEBUSINESS COGNOS_ALL_FIELDS
TASK
<?xml version="1.0" encoding="UTF-8"?>
<jsdl:jobDefinition xmlns:jsdl="http://www.abc.com/xmlns/prod/scheduling/1.0/jsdl"
xmlns:jsdlcognos="http://www.abc.com/xmlns/prod/scheduling/1.0/jsdlcognos" name="COGNOS">
<jsdl:application name="cognos">
<jsdlcognos:cognos>
<jsdlcognos:CognosParameters>
<jsdlcognos:CognosPanel>
<jsdlcognos:credentialsGroup>
<jsdlcognos:namespace>NTLM</jsdlcognos:namespace>
<jsdlcognos:userName>Administrator</jsdlcognos:userName>
<jsdlcognos:password>{aes}SgB6gmS+3xj0Yq2QsINVOtsNCeZIIsMwt08kwO6ZCR4=
</jsdlcognos:password>
</jsdlcognos:credentialsGroup>
<jsdlcognos:serverConnectionGroup>
<jsdlcognos:serverAddress>nc112006</jsdlcognos:serverAddress>
<jsdlcognos:port>9300</jsdlcognos:port>
<jsdlcognos:CheckSSLGroup>
<jsdlcognos:SslCheck/>
</jsdlcognos:CheckSSLGroup>
</jsdlcognos:serverConnectionGroup>
<jsdlcognos:reportGroup>
<jsdlcognos:ReportPathGroup>
<jsdlcognos:reportPath>date and time report - in values -
Path:/content/package[@name='cognosTime']/interactiveReport
The following table shows the syntax you must use when defining reports
containing date, time, and time stamp formats as parameters.
Table 6. Examples to use for parameters of date, time, and time stamp formats

Prompt type: Date
    Cognos parameter format: CCYY-MM-DD
    Single value: 2012-02-03
    List of values: 2012-02-03-Value:2012-03-14
    Interval values: Between 2012-02-03 and 2012-04-15
Prompt type: Time
    Cognos parameter format: hh:mm:ss
    Single value: 01:00:00
    List of values: 01:00:00-Value:01:01:01
    Interval values: Between 01:00:00 and 23:59:30
Prompt type: Time stamp
    Cognos parameter format: CCYY-MM-DDThh:mm:ss or CCYY-MM-DD hh:mm:ss
    Single value: 2012-02-03 15:05:00
    List of values: 2012-02-03T16:01:00-Value:2012-02-03T16:00:00
    Interval values: Between 2012-02-03 15:05:00 and 2012-04-15T16:00:00
Note: You must specify the parameter formats exactly as they are shown in the
table, respecting lowercase and uppercase.
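For example, a parametersValue element passing a date interval in the format above might look like the following sketch; the prompt name pDate is an assumption, and the element syntax follows the report samples in this chapter:

```xml
<jsdlcognos:parametersValues>
    <!-- pDate is a hypothetical prompt name; use your report's prompt -->
    <jsdlcognos:parametersValue key="pDate">Between 2012-02-03 and 2012-04-15
    </jsdlcognos:parametersValue>
</jsdlcognos:parametersValues>
```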
This example shows how to run the Cognos Employee Training by Year sample
report, specifying for the ?pYear? parameter the value associated with the filter
2004. The Employee Training by Year sample report is located under
/Samples/Models/Dashboard Objects. To run the report proceed as follows:
1. In the Insertable Objects pane, select the 2004 filter. The Properties panel is
displayed.
2. Select [go_data_warehouse].[2004].
3. Insert [go_data_warehouse].[2004] in the Value field.
4. Save the job definition.
To specify parameters that use parameterized filters using composer, perform the
following procedure.
1. Open Report Studio.
2. Open the report to run.
3. In the Insertable Objects pane, select the filter you want to use. The Properties
panel is displayed.
4. Select the Ref value.
5. Copy this value into the <jsdlcognos:parametersValues> attribute. The
following is an example for the Employee Training by Year sample report,
specifying for the ?pYear? parameter the value associated with the filter 2004:
<jsdlcognos:reportGroup>
<jsdlcognos:ReportPathGroup>
<jsdlcognos:reportPath>
Employee Training - Path:/content/folder[@name='Samples']
/folder[@name='Models']
/folder[@name='Dashboard Objects']
/report[@name='Employee Training']
</jsdlcognos:reportPath>
</jsdlcognos:ReportPathGroup>
<jsdlcognos:parametersValues>
<jsdlcognos:parametersValue key="pYear">[go_data_warehouse].[2004]
</jsdlcognos:parametersValue>
</jsdlcognos:parametersValues>
<jsdlcognos:outputFile>
See Chapter 3, “Scheduling and submitting jobs and job streams,” on page 9 for
information about how to schedule and submit jobs and job streams using the
various interfaces.
After you define an IBM Workload Scheduler job for an IBM Cognos report, add it
to a job stream with all the necessary scheduling arguments and submit it. After
submission, when the job is running (EXEC status), you can kill the IBM Workload
Scheduler job that runs the IBM Cognos report if necessary. In particular, for IBM
Cognos jobs this action is converted into a Cancel action for the IBM Cognos
report.
The agent might become unavailable while the IBM Workload Scheduler job
running the IBM Cognos report is running. When the agent becomes available
again, IBM Workload Scheduler starts to monitor the report from where it stopped.
For information about monitoring jobs and job streams, see Chapter 4, “Monitoring
IBM Workload Scheduler jobs,” on page 11.
For information about analyzing the job log, see Chapter 5, “Analyzing the job
log,” on page 13.
The properties file is automatically generated either when you perform a "Test
Connection" from the Dynamic Workload Console in the job definition panels, or
when you submit the job to run the first time. Once the file has been created, you
can customize it. This is especially useful when you need to schedule several jobs
of the same type. You can specify the values in the properties file and avoid
having to provide information, such as credentials, for each job. You can
override the values in the properties file by defining different values at job
definition time.
Where agent_install_dir is the path where you installed the IBM Workload
Scheduler dynamic agent or the IBM Workload Scheduler for z/OS agent.
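As a sketch, a customized CognosJobExecutor.properties file might look like the following; the property names are assumed to mirror the connection attributes shown in the job definition samples, and all values are placeholders:

```properties
# Placeholder values; verify the property names against the file that
# the product generates on your agent
namespace=NTLM
userName=Administrator
password=mypassword
serverAddress=nc112006
port=9300
```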
Example
To configure the agent to connect to an IBM Cognos server that is using SSL,
perform the following procedure.
Procedure
1. On the IBM Cognos server, run the following command to export the
certificate:
On Windows operating systems:
<Cognos_inst_path>\bin\ThirdPartyCertificateTool.bat -E -T
-r \<certificate_dir>\<certificate_name>
-k <Cognos_inst_path>\configuration\signkeypair\jCAKeystore
-p <cognos_keystore_password>
On UNIX and Linux operating systems:
<Cognos_inst_path>/bin/ThirdPartyCertificateTool -E -T
-r /<certificate_dir>/<certificate_name>
-k <Cognos_inst_path>/configuration/signkeypair/jCAKeystore
-p <cognos_keystore_password>
where:
cognos_inst_path
Specify the path where you installed the IBM Cognos server.
certificate_dir
Specify the directory in which to export the IBM Cognos certificate.
certificate_name
Specify the name of the IBM Cognos certificate you export.
cognos_keystore_password
Specify the IBM Cognos password defined in the IBM Cognos
Configuration > Security > Cryptography > Cognos - Certificate
Authority settings - Certificate Authority key store password.
For example, if you installed the IBM Cognos server on a UNIX operating
system in the /opt/abc/Cognos/c10 path, you want to export the
/tmp/cacert.cer certificate and the Certificate Authority key store password is
pass00w0rd, run the command as follows:
/opt/abc/cognos/c10/bin/ThirdPartyCertificateTool.sh -E -T
-r /tmp/cacert.cer
-k /opt/abc/cognos/c10/configuration/signkeypair/jCAKeystore
-p pass00w0rd
2. On the agent, run the following command to import the certificate into the
agent keystore:
On Windows operating systems:
<agent_inst_path>\TWS\JavaExt\jre\jre\bin\keytool -import
-file <exp_certificate_dir>\<certificate_name>
-keystore <agent_inst_path>\TWS\JavaExt\jre\jre\lib\security\cacerts
-storepass <keystore_password> -alias Cognos10
On UNIX and Linux operating systems:
<agent_inst_path>/TWS/JavaExt/jre/jre/bin/keytool -import
-file <exp_certificate_dir>/<certificate_name>
-keystore <agent_inst_path>/TWS/JavaExt/jre/jre/lib/security/cacerts
-storepass <keystore_password> -alias Cognos10
where:
agent_inst_path
Specify the path where you installed the agent.
keystore_password
Specify the keystore password of the Java extension.
For example, if you installed the agent on a Windows operating system in the
D:\TWS\Engine\tws_user\ path, the agent keystore path is D:\TWS\Engine\
tws_user\TWS\JavaExt\jre\jre\lib\security\cacerts and the agent keystore
password is a0password, add the JVMOptions parameter as follows:
JVMOptions = -Djavax.net.ssl.trustStore=
"D:\TWS\Engine\tws_user\TWS\JavaExt\jre\jre\lib\security\cacerts"
-Djavax.net.ssl.trustStorePassword=a0password
4. Start and stop the agent using the ShutDownLwa and StartUpLwa commands. See
the sections about the commands in User's Guide and Reference.
Purpose
The output of an IBM Workload Scheduler job for an IBM Cognos report shows:
Distributed Environment:
v In the first part, the JSDL definition you submitted.
v In the second part, how the job completed.
See “Sample job log output.”
z/OS Environment:
How the job completed. See “Sample job log in a z/OS environment” on
page 38.
For information about accessing the job log, see Chapter 5, “Analyzing the job log,”
on page 13.
Distributed
Sample job log output
This example shows the output of a job that ran on a dynamic agent and
completed successfully:
%sj NC125152#JOBS.REPOR1722160684;stdlist
===============================================================
= JOB : NC125152#JOBS[(0000 02/27/12),(JOBS)].REPOR1722160684
= TASK : <?xml version="1.0" encoding="UTF-8"?>
<jsdl:jobDefinition xmlns:jsdl="http://www.abc.com/xmlns/prod/scheduling/1.0/jsdl"
xmlns:jsdlcognos="http://www.abc.com/xmlns/prod/scheduling/1.0/jsdlcognos"
name="COGNOS">
<jsdl:application name="cognos">
<jsdlcognos:cognos>
<jsdlcognos:CognosParameters>
<jsdlcognos:CognosPanel>
.....
.....
</jsdl:jobDefinition>
= TWSRCMAP :
This example shows the output of a job that ran on a dynamic agent and
completed with errors:
%sj NC125152#JOBS.REPOR1726171742;stdlist
===============================================================
= JOB : NC125152#JOBS[(0000 02/27/12),(JOBS)].REPOR1726171742
= TASK : <?xml version="1.0" encoding="UTF-8"?>
<jsdl:jobDefinition xmlns:jsdl="http://www.abc.com/xmlns/prod/scheduling/1.0/jsdl"
xmlns:jsdlcognos="http://www.abc.com/xmlns/prod/scheduling/1.0/jsdlcognos"
name="COGNOS">
<jsdl:application name="cognos">
<jsdlcognos:cognos>
.....
.....
</jsdl:jobDefinition>
= TWSRCMAP :
= AGENT : NC125152
= Job Number: 1060841360
= Mon Feb 27 17:26:30 CET 2012
===============================================================
AWKCGE050I The IBM Cognos report with path "/content/package[@name='cognosTime']
/interactiveReport[@name='date and time report']" started running.
AWKCGE056E The IBM Cognos report completed with errors.
===============================================================
= Status Message: AWKCGE056E The IBM Cognos report completed with errors.
= Exit Status : -1
= Elapsed Time (Minutes) : 1
= Mon Feb 27 17:26:37 CET 2012
===============================================================
z/OS
Sample job log in a z/OS environment
This example shows the output of a job that ran on a dynamic agent and
completed successfully:
AWKCGE050I The IBM Cognos report with path
"/content/folder[@name='Samples']/folder[@name='Models']
/package[@name='GO Data Warehouse (query)']
/folder[@name='Report Studio Report Samples']
/report[@name='Total Revenue by Country']"
started running.
AWKCGE051I The IBM Cognos report with path
"/content/folder[@name='Samples']/folder[@name='Models']
/package[@name='GO Data Warehouse (query)']
/folder[@name='Report Studio Report Samples']
This example shows the output of a job that ran on a dynamic agent and
completed with errors:
AWKCGE050I The IBM Cognos report with path
"/content/package[@name='tws4apps']
/folder[@name='Reports with parameters and prompts']
/interactiveReport[@name='Report 7 with special chars']" started running.
AWKCGE056E The IBM Cognos report completed with errors.
You can manage these jobs both in a distributed and in a z/OS environment, by
selecting the appropriate engine.
Prerequisites
You must install the IBM Workload Scheduler agent on the same computer as the
IBM InfoSphere DataStage server.
For information about the supported versions of the job plug-ins, generate a
dynamic Data Integration report from the IBM Software Product Compatibility
Reports web site, and select the Supported Software tab: Data Integration.
Business scenario
A retail company has many shops around the world. Each shop has its own local
database, which stores daily transactions and tracks the number of articles left
in its store. Every morning, the central business division of the company needs
to analyze all the reports that show the number of articles sold in every
country, grouped by predefined categories.
The company collects this data by using IBM InfoSphere DataStage. The company
runs IBM InfoSphere DataStage jobs overnight to collect data from the local
database of each store and then, using the procedures stored in the central
database, produces the aggregated data to create the business reports. The process
that runs the IBM InfoSphere DataStage jobs overnight is performed manually by
an operator. To reduce costs and to ensure that the SLA requirement of having data
available every morning is satisfied, the company wants to automate this process.
Using IBM Workload Scheduler plug-in for IBM InfoSphere DataStage, the
company can satisfy this objective because the product helps it to automate and
control the entire process.
z/OS Define an IBM Workload Scheduler job to run an IBM InfoSphere
DataStage job by using the Dynamic Workload Console connected to a z/OS
engine.
See Chapter 2, “Defining a job,” on page 7 for more information about creating
jobs using the various interfaces available. Some samples of using one or more of
these interfaces to create an IBM InfoSphere DataStage job definition are contained
in the sections that follow.
Table 9 describes the required and optional attributes for IBM InfoSphere
DataStage jobs, together with a description of each attribute.
Table 9. Required and optional attributes for the job definition of IBM InfoSphere DataStage jobs

Domain
    The domain to log on to. See Note.
Server
    The server to log on to. See Note.
UserName
    The user name to use when logging on. See Note.
password
    The password of the authorized user. It is encrypted when you submit
    the job. See Note.
ProjectName
    The name of the project containing the job. (Required)
JobName
    The name of the job to run. (Required if you do not specify the job
    alias)
JobAlias
    The alias associated with the job to run. (Required if you do not
    specify the job name)
FileRemotePath
    The fully qualified path to the file that contains the parameter
    values to pass to the job.
ParameterTableValues
    The list of parameters to associate with the job.
ForceReset
    Specify it to reset the IBM InfoSphere DataStage job before it runs.
    When an IBM InfoSphere DataStage job has a status of Crashed or
    Aborted, you must reset it before running the job again.
Note: If you do not want to specify these attributes in the XML, you can define
them in the DataStageJobExecutor.properties file. You must define all or none of
these values; otherwise, you receive an error message. See “Customizing IBM
Workload Scheduler to run IBM InfoSphere DataStage jobs” on page 47.
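As a sketch, a DataStageJobExecutor.properties file defining the four logon values might look like the following; remember that you must define all four or none of them, and all values shown are placeholders:

```properties
# Placeholder values; define all four logon properties or none of them
Domain=it112206.rome.it.com:9444
Server=it112206
UserName=myuser
password=mypassword
```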
The following example shows the job definition of an IBM InfoSphere DataStage
job with only the required attributes specified:
NC112206#DS01
TASK
<?xml version="1.0" encoding="UTF-8"?>
<jsdl:jobDefinition xmlns:jsdl="http://www.abc.com/xmlns/prod/scheduling/1.0/jsdl"
xmlns:jsdldatastage="http://www.abc.com/xmlns/prod/scheduling/1.0/jsdldatastage"
name="DATASTAGE">
<jsdl:application name="datastage">
<jsdldatastage:datastage>
<jsdldatastage:DataStageParameters>
<jsdldatastage:DataStagePanel>
<jsdldatastage:Logon>
<jsdldatastage:Domain>it112206.rome.it.com:9444</jsdldatastage:Domain>
<jsdldatastage:Server>it112206</jsdldatastage:Server>
<jsdldatastage:UserName>userName</jsdldatastage:UserName>
<jsdldatastage:password>password</jsdldatastage:password>
</jsdldatastage:Logon>
<jsdldatastage:JobDefinitionGroup>
<jsdldatastage:ProjectNameGroup>
<jsdldatastage:ProjectName>DatastageReport</jsdldatastage:ProjectName>
</jsdldatastage:ProjectNameGroup>
<jsdldatastage:JobNameButtonGroup>
<jsdldatastage:JobNameRadioButton>
<jsdldatastage:JobName>dsj01_succ</jsdldatastage:JobName>
</jsdldatastage:JobNameRadioButton>
</jsdldatastage:JobNameButtonGroup>
<jsdldatastage:FileRemotePath/>
</jsdldatastage:JobDefinitionGroup>
<jsdldatastage:JobExecutionGroup/>
</jsdldatastage:DataStagePanel>
<jsdldatastage:OptionsPanel>
<jsdldatastage:JobOptionsGroup>
<jsdldatastage:WarningLimitButtonGroup>
<jsdldatastage:NoWarningLimitButton/>
</jsdldatastage:WarningLimitButtonGroup>
<jsdldatastage:RowLimitButtonGroup>
<jsdldatastage:NoRowLimitButton/>
</jsdldatastage:RowLimitButtonGroup>
<jsdldatastage:OperationalMetadataGroup>
<jsdldatastage:UseDefault/>
</jsdldatastage:OperationalMetadataGroup>
</jsdldatastage:JobOptionsGroup>
</jsdldatastage:OptionsPanel>
</jsdldatastage:DataStageParameters>
</jsdldatastage:datastage>
</jsdl:application>
</jsdl:jobDefinition>
RECOVERY STOP
The following example shows the job definition of an InfoSphere DataStage job
with all the attributes specified:
NC112206#DS01
TASK
<?xml version="1.0" encoding="UTF-8"?>
<jsdl:jobDefinition xmlns:jsdl="http://www.abc.com/xmlns/prod/scheduling/1.0/jsdl"
xmlns:jsdldatastage="http://www.abc.com/xmlns/prod/scheduling/1.0/jsdldatastage" name="DATASTAGE">
To define a job that runs an IBM InfoSphere DataStage job by using the Dynamic
Workload Console, perform the following procedure. You can also define a job
using the other available interfaces such as Application Lab, see Chapter 2,
“Defining a job,” on page 7 for more information.
Procedure
1. In the console navigation tree, expand Administration > Workload Design and
click Manage Workload Definitions.
2. Select an engine. The Workload Designer is displayed.
Environment:
Enter the name of the IBM Workload Scheduler job that runs
the IBM InfoSphere DataStage job.
z/OS
Environment:
Enter the name of the partitioned data set where you want to
create the JCL.
Enter the name of the JCL you want to create in the partitioned
data set.
In the DataStage panel:
In the Credentials section:
Enter the credentials related to the IBM InfoSphere DataStage
job. If you do not want to specify them here, you can define
them in the DataStageJobExecutor.properties file. In this case
IBM Workload Scheduler reads them from the .properties file
when you retrieve any information by using a list or when you
submit the job.
If you do not specify them either using the Dynamic Workload
Console or in the .properties file, IBM Workload Scheduler
assumes that you did not set any security on the IBM
InfoSphere DataStage server and tries the connection to the
IBM InfoSphere DataStage server anyway.
You must specify all of these values or none of them, either by using the
Dynamic Workload Console or the .properties file; otherwise
you receive an error message. See “Customizing IBM Workload
Scheduler to run IBM InfoSphere DataStage jobs” on page 47.
In the Job Definition section:
Enter the project name and the job name or select them from
the appropriate lists. IBM Workload Scheduler retrieves this
information directly from the IBM InfoSphere DataStage server
database. Alternatively you can use the job alias.
You can view the list of parameters defined for the IBM
InfoSphere DataStage job. Select the ones you want to define
for the job and associate a value to them.
Select Reset job before running to reset the IBM InfoSphere DataStage
job before it runs. When an IBM InfoSphere DataStage job has a status
of Crashed or Aborted, you must reset it before running the job again.
See Chapter 3, “Scheduling and submitting jobs and job streams,” on page 9 for
more information about how to schedule and submit jobs and job streams using
the various interfaces.
After you define an IBM Workload Scheduler IBM InfoSphere DataStage job, you
add it to a job stream with all the necessary scheduling arguments and submit it.
After submission, you can kill the IBM Workload Scheduler for IBM InfoSphere
DataStage job if necessary; this action is converted into a Stop action for the IBM
InfoSphere DataStage job.
If the IBM Workload Scheduler agent becomes unavailable when you submit the
job or while the job is running, IBM Workload Scheduler collects the job log when
the agent restarts and assigns the Error or ABEND status to the IBM Workload
Scheduler job, independently of the status of the job in IBM InfoSphere DataStage.
For information about monitoring jobs and job streams, see Chapter 4, “Monitoring
IBM Workload Scheduler jobs,” on page 11.
For information about analyzing the job log, see Chapter 5, “Analyzing the job
log,” on page 13.
Where agent_install_dir is the path where you installed the IBM Workload
Scheduler dynamic agent or the IBM Workload Scheduler for z/OS agent.
The properties file is automatically generated either when you perform a "Test
Connection" from the Dynamic Workload Console in the job definition panels, or
when you submit the job to run the first time. Once the file has been created, you
can customize it. This is especially useful when you need to schedule several jobs
of the same type. You can specify the values in the properties file and avoid
having to provide information such as credentials and other information, for each
job. You can override the values in the properties files by defining different values
at job definition time.
Chapter 9. IBM InfoSphere DataStage jobs 47
You can also define the properties contained in the .properties file, except the
installDir property, at job definition time. In this case IBM Workload Scheduler uses the
values you specify at job definition time for running the job. You must define all or
none of these properties, either by using the command line, the Dynamic Workload
Console, or the .properties file; otherwise you receive an error message. If you do
not define any of these properties either in the .properties file or at job definition
time, IBM Workload Scheduler assumes that you did not set any security on the
IBM InfoSphere DataStage server and tries the connection to the IBM InfoSphere
DataStage server anyway. Table 10 describes the properties contained in
DataStageJobExecutor.properties.
Table 10. Properties to run IBM InfoSphere DataStage jobs
Property Description/value Required
installDir The IBM InfoSphere DataStage Server directory U
where you find the DataStage command, dsjob.
The default is:
UNIX and Linux operating systems:
/opt/IBM/InformationServer/Server/
DSEngine/bin
Windows operating systems:
C:/IBM/InformationServer/Server/
DSEngine/bin
Domain The domain to log on to, expressed as
domain:port_number
Server The server to log on to.
UserName The user to use when logging on.
Password The password of the authorized user. It is
encrypted when you retrieve any information by
using a list or when you submit the job.
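Putting the properties in Table 10 together, a DataStageJobExecutor.properties file might look like the following sketch. The host names, user name, and encrypted password are placeholder values, not defaults shipped with the product:

```properties
# Path to the directory containing the DataStage dsjob command.
# installDir is the one property that cannot be overridden at job
# definition time.
installDir=/opt/IBM/InformationServer/Server/DSEngine/bin

# Connection values: define all of them or none of them.
# Placeholder host names, user, and encrypted password follow.
Domain=dsdomain.example.com:9443
Server=dsengine.example.com
UserName=dsadm
Password={aes}xxxxxxxxxxxxxxxxxxxxxxxx
```

The Password value is stored encrypted after the first list retrieval or job submission, as described above.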
Example
Table 11 on page 49 shows how you can map the IBM Workload Scheduler
job status to the IBM InfoSphere DataStage job status based on the return code you
find in the job log output.
Any other return code or status that you find in the IBM InfoSphere DataStage log,
generated either by using the IBM InfoSphere DataStage command line or the IBM
InfoSphere DataStage Director interface, is mapped to Error if you are using the
Dynamic Workload Console or to FAILED if you are using the IBM Workload
Scheduler command line.
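As a rough illustration of the fallback behavior described above, the following Python sketch shows known return codes mapping to a console/CLI status pair, with every other code falling back to Error (Dynamic Workload Console) or FAILED (command line). The specific codes in the dictionary are hypothetical stand-ins; the actual mapping is in Table 11, which is not reproduced here:

```python
# Hypothetical sketch of the return-code-to-status mapping: known codes
# map to a (console_status, cli_status) pair, anything else falls back
# to "Error" / "FAILED". Placeholder codes -- see Table 11 for the
# actual values.
KNOWN_CODES = {
    1: ("Successful", "SUCC"),
    2: ("Successful", "SUCC"),
    3: ("Error", "ABEND"),
}

def map_status(return_code, use_console=True):
    """Return the job status for a DataStage return code."""
    console, cli = KNOWN_CODES.get(return_code, ("Error", "FAILED"))
    return console if use_console else cli
```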
Purpose
The output of an IBM Workload Scheduler for IBM InfoSphere DataStage job is
composed of two parts:
v The first part is the result of the IBM InfoSphere DataStage dsjob -logsum
command.
v The second part is the result of the IBM InfoSphere DataStage dsjob -report
DETAIL command.
The output shows the current job status. See Chapter 5, “Analyzing the job log,” on
page 13 for more information about accessing the job log.
Sample
See Chapter 5, “Analyzing the job log,” on page 13 for detailed information about
how to access the job properties using the various interfaces available.
The job properties output is the result of the IBM InfoSphere DataStage dsjob
-jobinfo command. This example shows the properties of a job that completed
successfully with warnings.
Example
...
Extra Information
Job Status : RUN with WARNINGS (2)
Job Controller : not available
Job Start Time : Wed Oct 05 17:18:28 2011
Job Wave Number : 142
User Status : not available
Job Control : 0
Interim Status : NOT RUNNING (99)
| Prerequisites
| IBM Workload Scheduler plug-in for IBM Sterling Connect:Direct automates the
| entire file transfer process, guaranteeing the success of any subsequent processing
| such as decryption, renaming, parsing, and retransmission of those files. You have
| access to real-time monitoring, reporting, event management, and auditing: all the
| tools you need to react to delays or errors and to automate recovery actions.
| For information about the supported versions of the job plug-ins, generate a
| dynamic Data Integration report from the IBM Software Product Compatibility
| Reports web site, and select the Supported Software tab: Data Integration.
| A description of the job properties and valid values is detailed in the
| context-sensitive help in the Dynamic Workload Console by clicking the question
| mark (?) icon in the top-right corner of the properties pane.
| For information about creating jobs using the various supported product interfaces,
| see Chapter 2, “Defining a job,” on page 7.
| For more information about IBM Sterling Connect:Direct, see the IBM Sterling
| Connect:Direct product documentation.
| The following table lists the required and optional attributes for IBM Sterling
| Connect:Direct jobs:
| Table 12. Required and optional attributes for the definition of an IBM Sterling
| Connect:Direct job
| Attribute Description and value Required
| Address The host name or IP address of the workstation that is considered as U
| the primary node where IBM Sterling Connect:Direct is installed.
| Port The port number of the primary node workstation where IBM U
| Sterling Connect:Direct is listening.
| User ID The name of the user authorized to access the IBM Sterling U
| Connect:Direct on the primary node workstation.
| Password The password that is associated with the user that is authorized to U
| access the IBM Sterling Connect:Direct on the primary node
| workstation.
| Node Name The name of the workstation that is considered as the secondary node U (Only for
| where IBM Sterling Connect:Direct is installed. Submit Process
| action)
| User ID The name of the user authorized to access the IBM Sterling U (Only for
| Connect:Direct on the secondary node workstation. Submit Process
| action)
| Password The password that is associated with the user that is authorized to U (Only for
| access the IBM Sterling Connect:Direct on the secondary node Submit Process
| workstation. action)
| Platform The secondary node workstation operating system: U (Only for
| Submit Process
| Unknown action)
| Unknown operating systems.
| OpenMVS
| MVS operating systems.
| Remote Filename Path U
| (sNode) Receive
| On the primary node the value is the fully qualified path
| and name of the file to be created on the local target from
| the secondary node.
| Send
| On the secondary node the value is the fully qualified
| path and name of the file to be created on the remote
| target of the primary node.
| The wildcard characters '*' and '?' are supported.
| You can submit jobs by using the Dynamic Workload Console, Application Lab or
| the conman command line. See Chapter 3, “Scheduling and submitting jobs and job
| streams,” on page 9 for information about how to schedule and submit jobs and
| job streams using the various interfaces.
| After submitting the job, when the job is running and is reported in EXEC status in
| IBM Workload Scheduler, you can stop it if necessary by using the kill command.
| This action is effective on both the IBM Workload Scheduler job and the IBM Sterling
| Connect:Direct job. The IBM Sterling Connect:Direct job is deleted, and IBM Workload
| Scheduler assigns the Error or ABEND status with return code 0 to the IBM
| Workload Scheduler job.
| If the IBM Workload Scheduler agent stops when you submit the IBM Workload
| Scheduler IBM Sterling Connect:Direct job or while the job is running, as soon as
| the agent becomes available again IBM Workload Scheduler begins monitoring the
| job from where it stopped.
| For information about how to monitor jobs using the different product interfaces
| available, see Chapter 4, “Monitoring IBM Workload Scheduler jobs,” on page 11.
| The properties file is automatically generated either when you perform a "Test
| Connection" from the Dynamic Workload Console in the job definition panels, or
| when you submit the job to run the first time. Once the file has been created, you
| can customize it. This is especially useful when you need to schedule several jobs
| of the same type. You can specify the values in the properties file and avoid
| having to provide information such as credentials and other information, for each
| job. You can override the values in the properties files by defining different values
| at job definition time.
| The TWS_INST_DIR\TWS\JavaExt\cfg\SterlingJobExecutor.properties
| configuration properties file contains the following primary and secondary node
| properties:
| primaryNodeAddress=primaryNodeAddress
| primaryUserPwd={aes}7LqI0kLt2kiNWNi2QGIIAKxQat5KCN0SNez7ENweg9w=
| primaryNodePort=primaryNodePort
| primaryUsername=primaryUsername
| secondaryUsername=secondaryUsername
| secondaryUserPwd={aes}ns2erjqeEemph8T2hGvLTiP5hbC+OzqQloXmq9Hu4sk=
| secondaryNodeAddress=secondaryNodeAddress
| If you define an IBM Sterling Connect:Direct job with the Submit file action, the
| secondary node information is not read from the SterlingJobExecutor.properties
| configuration file. The job uses the secondary node name specified in the Node
| Name field of Dynamic Workload Console or, if not specified, the secondary node
| name contained in the IBM Sterling Connect:Direct job Process File Name input.
| While the job is running, you can track the status of the job and analyze the
| properties of the job. In particular, in the Extra Information section, if the job
| contains variables, you can verify the value passed to the variable from the remote
| system. Some job streams use the variable passing feature, for example, the value
| of a variable specified in job 1, contained in job stream A, is required by job 2 in
| order to run in the same job stream.
| For information about how to display the job properties from the various
| supported interfaces, see Chapter 5, “Analyzing the job log,” on page 13.
| For example, from the conman command line, you can see the job properties by
| running:
| conman sj <Sterling_job_name>;props
| For an IBM Sterling Connect:Direct job in the Extra Information section of the
| output command, you see the following properties:
| Submit process action
| Extra Information
| Action Selected = Send
| Check Point Restart = Default
| Compression Type = None
| Destination Disposition = Replace
| Destination File Path = c:\sterling\example_ste_1.exe
| Primary Node Address = austin2.usa.com:1363
| Primary Node User = Administrator
| Process Name = copy1
| Process Number = 145
| Secondary Node Name = austin2_bkp
| Secondary Node User = Administrator
| Source File Path = c:\sterling\examples\example_ste_1.exe
| Submit file action
| Extra Information
| Child Process Name = 56415021
| Primary Node Address = cleveland1.usa.com:1363
| Primary Node User = Administrator
| Process File Name = c:\sterling\processes\PROC1.cdp
| Process File Location = cleveland1.usa.com
| Process Name = Submit1
| Process Number = 416
| Secondary Node Name = losangeles1
| Secondary Node User = Administrator
| Where:
| Action Selected
| The action to perform that you specify in the Action Selected field.
| Check Point Restart
| The check point restart value that you specify in the Check Point Restart
| field.
| Child Process Name
| The name of the child process invoked by the parent process specified by
| Process Name.
| You can export the IBM Sterling Connect:Direct job properties that you can see in
| the Extra Information section, to a successive job in the same job stream instance.
| For more information about the list of job properties that you can export, see the
| table about properties for IBM Sterling Connect:Direct jobs in User's Guide and
| Reference.
| Submit Process action:
| The following example shows the EX_STE_SUB_PROC job definition, which
| copies the non-compressed files c:\repository\DBfiles\e* from
| the Windows primary node workstation ny123456.usa.com to the
| Windows secondary node workstation Ny112130.usa.com with the replace
| Destination Disposition.
| MDM_WIN_EAST#EX_STE_SUB_PROC
| TASK
| <?xml version="1.0" encoding="UTF-8"?>
| <jsdl:jobDefinition xmlns:jsdl=
| For information about how to display the job log from the various supported
| interfaces, see Chapter 5, “Analyzing the job log,” on page 13.
| For example, you can see the job log content by running conman sj
| <Sterling_job_name>;stdlist, where <Sterling_job_name> is the IBM Sterling
| Connect:Direct job name.
| For an IBM Sterling Connect:Direct job log, you see the following information:
| sj @#@.a1;std
|
| ===============================================================
| = JOB : WKS_1#JOBS[(0000 04/03/14),(JOBS)].A1
| = TASK : <?xml version="1.0" encoding="UTF-8"?>
| <jsdl:jobDefinition xmlns:jsdl=
| "http://www.abc.com/xmlns/prod/scheduling/1.0/jsdl"
| xmlns:jsdlsterling=
| "http://www.abc.com/xmlns/prod/scheduling/1.0/jsdlsterling" name="STERLING">
| See also
| From the Dynamic Workload Console you can perform the same task as described
| in
| the Dynamic Workload Console User's Guide, section about Creating job definitions.
| For more information about how to create and edit scheduling objects, see
| the Dynamic Workload Console User's Guide, section about Designing your Workload.
|
| Business scenario
| This scenario shows how the integration between IBM Workload Scheduler and
| IBM Sterling Connect:Direct can provide multiple platform support and offer data
| transfer and security capabilities, as well as system stability and efficiency, while
| exploiting at the same time the job scheduling and monitoring features of IBM
| Workload Scheduler. IBM Sterling Connect:Direct also allows for file transfer
| compression and has a very efficient processing speed.
| Scenario goal
| This scenario shows how the IBM Sterling Connect:Direct and IBM Workload
| Scheduler integration is an agile, scalable, and flexible solution for end-to-end file
| management and automation in a secure environment. IBM Sterling Connect:Direct
| interfaces with operating system security and provides a comprehensive audit trail
| of data movement through extensive statistics logs.
| Business Scenario
| By deploying an integrated solution that uses IBM Sterling Connect:Direct and IBM
| Workload Scheduler, the company can schedule and monitor the transfer of high
| volumes of files with no defined limits on file sizes. The solution scalability helps
| ensure that Finch International Bank would be able to handle peak demand and
| keep pace as they increase the number of branches.
| The report is then parsed for analysis and uploaded into the database.
| Prerequisite steps
| For information about the supported versions of the job plug-ins, generate a
| dynamic Data Integration report from the IBM Software Product Compatibility
| Reports web site, and select the Supported Software tab: Data Integration.
| To create an IBM WebSphere MQ job definition, you must first complete the
| prerequisite steps that are listed in the following procedure.
| 1. Install a supported version of IBM WebSphere MQ.
| 2. On the IBM WebSphere MQ server workstation, create a user to use in the IBM
| Workload Scheduler job definition that is not a privileged user. On UNIX
| operating systems, the user must not belong to the mqm group created at
| installation time. On Windows operating systems, the user cannot be a member
| of the Administrator group.
| 3. Allow the user that is defined in step 2 to connect to its queue manager,
| queues, and channels. For the queue manager associated to the user, set the
| Display for the Administration authority, and the Connect and the Inquire for
| MQI authority. For more information about IBM WebSphere MQ users, see
| http://www-01.ibm.com/support/knowledgecenter/SSFKSJ_7.5.0/
| com.ibm.mq.sec.doc/q013290_.htm.
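On the IBM WebSphere MQ server, the queue manager authorities described in step 3 can be granted with the setmqaut command. The following is a sketch only: the queue manager name QM1 and the user twsuser are placeholders, and you should verify the authority keywords against your IBM WebSphere MQ version:

```shell
# Grant the non-privileged user the Display administration authority
# plus the Connect and Inquire MQI authorities on its queue manager.
# QM1 and twsuser are placeholder names.
setmqaut -m QM1 -t qmgr -p twsuser +dsp +connect +inq
```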
| A description of the job properties and valid values is detailed in the
| context-sensitive help in the Dynamic Workload Console by clicking the question
| mark (?) icon in the top-right corner of the properties pane.
| For information about creating jobs using the various supported product interfaces,
| see Chapter 2, “Defining a job,” on page 7.
| For more information about IBM WebSphere MQ, see the IBM WebSphere MQ
| online product documentation in IBM Knowledge Center.
| The following table lists the required and optional attributes for IBM WebSphere
| MQ jobs:
| Table 13. Required and optional attributes for the definition of an IBM WebSphere MQ job
| Attribute Description and value Required
| MQ Server The host name or IP address of the workstation where IBM U
| WebSphere MQ is installed.
| MQ Port The port number of the workstation where IBM WebSphere MQ is U
| listening.
| User name The name of the user authorized to run the IBM WebSphere MQ
| commands on the IBM WebSphere MQ server.
| Password The password that is associated with the user that is authorized to
| run the IBM WebSphere MQ commands on the IBM WebSphere MQ
| server.
| MQ Queue Manager The name of the IBM WebSphere MQ Queue Manager. U
| MQ Channel The name of the IBM WebSphere MQ Channel. U
| Operation The operation to perform: U
| Request/Response
| The IBM WebSphere MQ Request/Response operation
| type.
| You schedule IBM Workload Scheduler IBM WebSphere MQ jobs by defining them
| in job streams. Add the job to a job stream with all the necessary scheduling
| arguments and submit the job stream.
| You can submit jobs by using the Dynamic Workload Console, Application Lab or
| the conman command line. See Chapter 3, “Scheduling and submitting jobs and job
| streams,” on page 9 for information about how to schedule and submit jobs and
| job streams using the various interfaces.
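As a sketch, a job stream wrapping the IBM WebSphere MQ job might be defined through the composer command line as follows. The workstation, job stream, job, and run cycle names are illustrative only:

```
SCHEDULE WASMQ_WS#MQ_DAILY
ON RUNCYCLE DAILY_RC "FREQ=DAILY;"
:
WASMQ_WS#PUBLISH
END
```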
| After submission, when the job is running and is reported in EXEC status in IBM
| Workload Scheduler, you can stop it if necessary by using the kill command.
| However, this action is effective only in the Request/Response scenario; the
| IBM Workload Scheduler processes then stop waiting for a response from the
| IBM WebSphere MQ job.
| If the IBM Workload Scheduler agent stops when you submit the IBM Workload
| Scheduler IBM WebSphere MQ job or while the job is running, as soon as the agent
| restarts in the Request/Response scenario, IBM Workload Scheduler begins
| monitoring the job from where it stopped and waits for the Response phase.
| For information about how to monitor jobs using the different product interfaces
| available, see Chapter 4, “Monitoring IBM Workload Scheduler jobs,” on page 11.
| Job properties
| While the job is running, you can track the status of the job and analyze the
| properties of the job. In particular, in the Extra Information section, if the job
| contains variables, you can verify the value passed to the variable from the remote
| system.
| For more information, see the table about properties for IBM WebSphere MQ jobs
| in User's Guide and Reference.
| For information about how to display the job properties from the various
| supported interfaces, see Chapter 5, “Analyzing the job log,” on page 13.
| For example, from the conman command line, you can see the job properties by
| running:
| conman sj <MQ_job_name>;props
| For an IBM WebSphere MQ job in the Extra Information section of the output
| command, you see the following properties:
| Extra Information
| Channel = Channel1
| CorrelationID = 414D5120514D5F6E6330363030303220D215305320024304
| Manager = QueueManager1
| Message ID = 414D5120514D5F6E6330363030303220D215305320024303
| Response message = Received original message: ’Info done’
| Request Message = Need info
| Port = 1414
| Server = NY1_Win.usa.com
| Where:
| Channel
| The name of the IBM WebSphere MQ Channel that you specify in the MQ
| Channel field.
| CorrelationID
| The ID that correlates the request and response.
| Manager
| The name of the IBM WebSphere MQ Queue Manager that you specify in
| the MQ Queue Manager field.
| Message ID
| The IBM WebSphere MQ message ID.
| Response message
| The IBM WebSphere MQ response message.
| Request Message
| The IBM WebSphere MQ request message that you specify in the MQ
| Message field.
| Port The port number of the workstation where IBM WebSphere MQ is listening
| that you specify in the MQ Port field.
| Server The host name or IP address of the workstation where IBM WebSphere
| MQ is installed that you specify in the MQ Server field.
| You can export the IBM WebSphere MQ job properties that you can see in the
| Extra Information section, to a successive job in the same job stream instance. For
| more information about the list of job properties that you can export, see the table
| about properties for IBM WebSphere MQ jobs in User's Guide and Reference.
| The following example shows the job definition for a WebSphere MQ job that
| performs a Publish operation:
| $JOBS
| WASMQ_WS#PUBLISH
| TASK
| <?xml version="1.0" encoding="UTF-8"?>
| <jsdl:jobDefinition xmlns:jsdl="http://www.abc.com/xmlns/prod/scheduling/1.0/jsdl"
| xmlns:jsdlwebspheremq=
| "http://www.abc.com/xmlns/prod/scheduling/1.0/jsdlwebspheremq" name="WEBSPHEREMQ">
| <jsdl:application name="webspheremq">
| <jsdlwebspheremq:webspheremq>
| <jsdlwebspheremq:WebSphereMQParameters>
| <jsdlwebspheremq:WebSphereMQMainPanel>
| <jsdlwebspheremq:PropertyFileGroup>
| <jsdlwebspheremq:option>webspheremqServer4.properties</jsdlwebspheremq:option>
| </jsdlwebspheremq:PropertyFileGroup>
| <jsdlwebspheremq:PropertiesGroup>
| <jsdlwebspheremq:server>localhost</jsdlwebspheremq:server>
| <jsdlwebspheremq:port>1414</jsdlwebspheremq:port>
| <jsdlwebspheremq:user>agentuser</jsdlwebspheremq:user>
| See also
| From the Dynamic Workload Console you can perform the same task as described
| in
| the Dynamic Workload Console User's Guide, section about Creating job definitions.
| For more information about how to create and edit scheduling objects, see
| the Dynamic Workload Console User's Guide, section about Designing your Workload.
|
| Business scenario
| Managing online transactions quickly and reliably: a business scenario
| This scenario
| shows how the integration between IBM WebSphere MQ and IBM Workload
| Scheduler implements an integrated message queuing infrastructure that can
| transfer large amounts of information between applications in the network in near
| real-time while at the same time using the job scheduling and monitoring features
| of IBM Workload Scheduler. IBM WebSphere MQ delivers a reliable infrastructure
| for transaction data and file-based information with a relatively small footprint.
| Scenario goal
| This scenario shows how the IBM Workload Scheduler and IBM WebSphere MQ
| integration is a resilient, scalable, and reliable solution for end-to-end file
| management and job scheduling in a secure environment.
| Business Scenario
| Ripley's Organic Shop is a large retailer, specializing in organic food, health, and
| wellbeing products. The shop is based in the U.K. and has branches in several
| European countries. It wants to take advantage of the increasing interest in
| natural and biological products and plans to increase its market share by joining
| the online market and providing European-wide delivery. It therefore needs a
| robust infrastructure to quickly manage online purchases. This is especially
| important in today's competitive Internet market, where competition is increasing
| by the hour.
| The job stream contains a number of jobs that send all the details of the order to
| the warehouse. When the order has completed and all the items are packaged and
| ready to be shipped, a dependency is released on two jobs in the job stream, one
| that notifies the delivery service that the package is ready to be collected from the
| warehouse and delivered to the customer and the other to send an email or text
| message to the customer with the shipment tracking number and the status of the
| order.
| Prerequisites
| The IBM Workload Scheduler plug-in for Hadoop Distributed File System enables
| you to access the Hadoop Distributed File System from any computer, and work
| on files and directories. You can download a file, upload a file or free text, append
| a file or free text to another file, rename or delete a file, create a directory, and wait
| for the creation of a file on a Hadoop Distributed File System server.
| For information about the supported versions of the job plug-ins, generate a
| dynamic Data Integration report from the IBM Software Product Compatibility
| Reports web site, and select the Supported Software tab: Data Integration.
| Before you can define Hadoop Distributed File System jobs, you must install an
| IBM Workload Scheduler agent with a connection to the Hadoop Distributed File
| System server.
| A description of the job properties and valid values is detailed in the
| context-sensitive help in the Dynamic Workload Console by clicking the question
| mark (?) icon in the top-right corner of the properties pane.
| For more information about creating jobs using the various supported product
| interfaces, see Chapter 2, “Defining a job,” on page 7.
| The following table lists the required and optional attributes for Hadoop
| Distributed File System jobs:
| Table 14. Required and optional attributes for the definition of a Hadoop Distributed File
| System job
| Attribute Description and value Required
| Connection properties - Hadoop Distributed File System section
| Hostname The hostname of the Hadoop Distributed File System server. U
| Port The port of the Hadoop Distributed File System server. U
| Protocol The protocol for connecting to the Hadoop Distributed File System
| server. Supported values are http and https.
| User The user to be used for accessing the Hadoop Distributed File System
| server.
| Password The password to be used for accessing the Hadoop Distributed File
| System server.
| Connection properties - Retry options section
| Number of retries The number of times the program retries performing the operation.
| Retry interval (seconds) The number of seconds the program waits before retrying the
| operation.
| Action properties - Upload section
| File on Hadoop Distributed File System U
| The name and path of the file on the Hadoop Distributed File System
| server. Use this option to upload a file from the local workstation to
| the Hadoop Distributed File System server.
| Permissions The permissions to be defined for the file on the Hadoop Distributed
| File System server.
| Overwrite Specifies whether the file on the Hadoop Distributed File System
| server should be overwritten, if existing.
| Upload a file The name and path of the file on the local workstation.
| File content Specifies the file content to be written into the file on the Hadoop
| Distributed File System server.
| Action properties - Download section
| File on Hadoop Distributed File System U
| The name and path of the file on the Hadoop Distributed File System
| server. Use this option to download a file from the Hadoop
| Distributed File System server to the local workstation.
| Save file as Specify the name and path of the file to be saved locally. U
| Action properties - Append section
| File on Hadoop Distributed File System U
| The name and path of the file on the Hadoop Distributed File System
| server. Use this option to append a file from the local workstation, or
| specific content, to a file on the Hadoop Distributed File System server.
| Append a file The name and path of the file to be appended to the specified file on
| the Hadoop Distributed File System server.
| Append this content Specify the content to be appended to the file on the Hadoop
| Distributed File System server.
| Action properties - Rename section
| File or directory on Hadoop Distributed File System U
| The name and path of the file or directory on the Hadoop Distributed
| File System server. Use this option to modify the name of a file or
| directory on the Hadoop Distributed File System server.
| New path on Hadoop Distributed File System U
| The new name of the file or directory on the Hadoop Distributed File
| System server.
| Action properties - Delete section
| File or directory on Hadoop Distributed File System U
| The name and path of the file or directory on the Hadoop Distributed
| File System server. Use this option to delete a file or directory on the
| Hadoop Distributed File System server.
| Recursive Specifies whether this action should be recursive.
| Action properties - Create directory section
| Directory on Hadoop Distributed File System U
| The name and path of the directory on the Hadoop Distributed File
| System server. Use this option to create a directory on the Hadoop
| Distributed File System server.
| Permissions Specifies the permissions to be assigned to the directory.
| Action properties - Wait for a file section
| File or directory on Hadoop Distributed File System U
| The name and path of the file or directory on the Hadoop Distributed
| File System server. Use this option to make the job wait for the
| creation of a file or directory on the Hadoop Distributed File System
| server. When the file or directory is created, the job status changes
| to successful.
|
| You schedule IBM Workload Scheduler Hadoop Distributed File System jobs by
| defining them in job streams. Add the job to a job stream with all the necessary
| scheduling arguments and submit the job stream.
| You can submit jobs by using the Dynamic Workload Console, Application Lab or
| the conman command line. See Chapter 3, “Scheduling and submitting jobs and job
| streams,” on page 9 for information about how to schedule and submit jobs and
| job streams using the various interfaces.
| After submission, when the job is running and is reported in EXEC status in IBM
| Workload Scheduler, you can stop it if necessary, by using the kill command.
| If the IBM Workload Scheduler agent stops when you submit the IBM Workload
| Scheduler Hadoop Distributed File System job, or while the job is running, when
| the agent becomes available again the job status changes to UNKNOWN and you
| must resubmit the job. If the job uses the Wait for a file action, as soon as
| the agent becomes available again IBM Workload Scheduler resumes monitoring the
| job from where it stopped.
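The Wait for a file action behaves like a polling dependency: the agent periodically checks whether the file or directory exists on the remote server, and the job becomes successful as soon as it does. A minimal sketch of that loop, with a hypothetical `exists` callable standing in for the real Hadoop Distributed File System client:

```python
import time

def wait_for_file(exists, poll_seconds=5, max_polls=None):
    """Poll until exists() reports that the remote path is present.

    exists -- callable returning True when the file or directory exists
              (a placeholder for a real HDFS client call, not the
              product's actual implementation).
    Returns the number of polls performed; raises TimeoutError if
    max_polls is exhausted first.
    """
    polls = 0
    while True:
        polls += 1
        if exists():
            # File or directory created: the job status changes to successful.
            return polls
        if max_polls is not None and polls >= max_polls:
            raise TimeoutError("file or directory never appeared")
        time.sleep(poll_seconds)
```

The poll interval and retry cap here are illustrative only; the product manages its own monitoring cadence and, as noted above, resumes it automatically after an agent restart.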
| For information about how to monitor jobs using the different product interfaces
| available, see Chapter 4, “Monitoring IBM Workload Scheduler jobs,” on page 11.
| Job properties
| While the job is running, you can track the status of the job and analyze the
| properties of the job. In particular, in the Extra Information section, if the job
| contains variables, you can verify the value passed to the variable from the remote
| system. Some job streams use the variable passing feature, for example, the value
| of a variable specified in job 1, contained in job stream A, is required by job 2 in
| order to run in the same job stream.
| For information about how to display the job properties from the various
| supported interfaces, see Chapter 5, “Analyzing the job log,” on page 13.
| For example, from the conman command line, you can see the job properties by
| running:
| conman sj <job_name>;props
| The properties are listed in the Extra Information section of the command output.
| For more information about passing variables between jobs, see the section about
| passing job properties from one job to another in the same job stream instance in
| User's Guide and Reference.
| For information about how to display the job log from the various supported
| interfaces, see Chapter 5, “Analyzing the job log,” on page 13.
| For example, you can see the job log content by running conman sj
| <job_name>;stdlist, where <job_name> is the Hadoop Distributed File System job
| name.
| See also
| From the Dynamic Workload Console you can perform the same task as described in
| the Dynamic Workload Console User's Guide, section about Creating job definitions.
| For more information about how to create and edit scheduling objects, see
| the Dynamic Workload Console User's Guide, section about Designing your Workload.
| Prerequisites
| For information about the supported versions of the job plug-ins, generate a
| dynamic Data Integration report from the IBM Software Product Compatibility
| Reports web site, and select the Supported Software tab: Data Integration.
| For more information about creating jobs using the various supported product
| interfaces, see Chapter 2, “Defining a job,” on page 7.
| The following table lists the required and optional attributes for Hadoop Map
| Reduce jobs:
| Table 15. Required and optional attributes for the definition of a Hadoop Map Reduce job
| Hadoop Installation Directory
|        The directory where you installed Hadoop. For example, if Hadoop
|        is installed in this path: /opt/hadoop/hadoop_2.6.0/bin/hadoop,
|        you must specify the /opt/hadoop/hadoop_2.6.0 path for this
|        attribute.
| Jar File
|        The path and name of the jar file containing the Hadoop Map
|        Reduce code. (Required)
| Main Class
|        The Java class containing the main method to run when the job is
|        loaded.
| Arguments
|        The arguments of the job, provided to the main method.
|
| You schedule IBM Workload Scheduler Hadoop Map Reduce jobs by defining them
| in job streams. Add the job to a job stream with all the necessary scheduling
| arguments and submit the job stream.
| You can submit jobs by using the Dynamic Workload Console, Application Lab or
| the conman command line. See Chapter 3, “Scheduling and submitting jobs and job
| streams,” on page 9 for information about how to schedule and submit jobs and
| job streams using the various interfaces.
| If the IBM Workload Scheduler agent stops when you submit the IBM Workload
| Scheduler Hadoop Map Reduce job, or while the job is running, when the agent
| becomes available again the IBM Workload Scheduler job status changes to
| ABEND and you must resubmit the job. The status of the Hadoop Map Reduce job
| itself changes to UNDEFINED. You can view this information in the Extra
| information section of the Hadoop Map Reduce job in the Dynamic Workload
| Console.
| For information about how to monitor jobs using the different product interfaces
| available, see Chapter 4, “Monitoring IBM Workload Scheduler jobs,” on page 11.
| HadoopMapReduceJobExecutor.properties file
| The properties file is automatically generated either when you perform a "Test
| Connection" from the Dynamic Workload Console in the job definition panels, or
| when you submit the job to run the first time. Once the file has been created, you
| can customize it. This is especially useful when you need to schedule several jobs
| of the same type. You can specify the values in the properties file and avoid
| having to provide information such as credentials and other information, for each
| job. You can override the values in the properties files by defining different values
| at job definition time.
| The hadoopDir property must be specified either in this file or when creating the
| Hadoop Map Reduce job definition in the Dynamic Workload Console. For more
| information, see the Dynamic Workload Console online help.
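The interaction between the properties file and the job definition amounts to a two-layer merge: values in HadoopMapReduceJobExecutor.properties act as defaults, and any value supplied at job definition time overrides them. A sketch of that precedence, assuming a Java-style key=value properties format (the attribute names in the example are illustrative):

```python
def load_properties(text):
    """Parse Java-style key=value lines, skipping blanks and # comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

def effective_options(properties_text, job_definition):
    """Merge defaults from the properties file with job-definition values;
    the job definition wins wherever it supplies a value."""
    merged = load_properties(properties_text)
    merged.update({k: v for k, v in job_definition.items() if v is not None})
    return merged
```

For example, a job definition that sets only the main class inherits hadoopDir from the properties file while overriding the default class.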
| Job properties
| While the job is running, you can track the status of the job and analyze the
| properties of the job. In particular, in the Extra Information section, if the job
| contains variables, you can verify the value passed to the variable from the remote
| system. Some job streams use the variable passing feature, for example, the value
| of a variable specified in job 1, contained in job stream A, is required by job 2 in
| order to run in the same job stream.
| For information about how to display the job properties from the various
| supported interfaces, see Chapter 5, “Analyzing the job log,” on page 13. For
| example, from the conman command line, you can see the job properties by
| running:
| conman sj <job_name>;props
| The properties are listed in the Extra Information section of the command output.
| For information about passing variables between jobs, see the section about passing job
| properties from one job to another in the same job stream instance in User's Guide
| and Reference.
| For information about how to display the job log from the various supported
| interfaces, see Chapter 5, “Analyzing the job log,” on page 13.
| See also
| From the Dynamic Workload Console you can perform the same task as described
| in
| the Dynamic Workload Console User's Guide, section about Creating job definitions.
| For more information about how to create and edit scheduling objects, see
| the Dynamic Workload Console User's Guide, section about Designing your Workload.
Prerequisites
For information about the supported versions of the job plug-ins, generate a
dynamic Data Integration report from the IBM Software Product Compatibility
Reports web site, and select the Supported Software tab: Data Integration.
Before you can define Oozie jobs, you must create the IBM Workload Scheduler
agent connection to the Oozie server.
A description of the job properties and valid values are detailed in the
context-sensitive help in the Dynamic Workload Console by clicking the question
mark (?) icon in the top-right corner of the properties pane.
For more information about creating jobs using the various supported product
interfaces, see Chapter 2, “Defining a job,” on page 7.
The following table lists the required and optional attributes for Oozie jobs:
Table 16. Required and optional attributes for the definition of an Oozie job
Connection attributes
hostname
       The host name of the Oozie server. If you do not specify the
       hostname attribute, then the hostname, protocol, and
       HostnameVerifyCheckbox attributes are read from the properties
       file.
port
       The port number where the Oozie server is listening.
protocol
       The protocol for connecting to the Oozie server. Supported values
       are http and https.
userName
       The user to be used for accessing the Oozie server.
password
       The password to be used for accessing the Oozie server.
keyStoreFilePath
       The fully qualified path of the keystore file containing the private
       key that is used to make the connection.
keyStorePassword
       The password that protects the private key and is required to
       make the connection. Required only if you specify a keystore file
       path.
HostnameVerifyCheckbox
       Supported values are true and false.
       v When the value is true, the syntax of the Oozie server name, as
         featured in the keystore file, must match the URL exactly. If they
         do not match, no authorization is granted to access the server.
       v When the value is false, the control on the server name is not
         enforced.
NumberOfRetries
       The number of times the program retries in case of connection
       failure. Default value is 0.
RetryIntervalSeconds
       The number of seconds the program waits before retrying in case
       of connection failure. Default value is 30.
Action attributes for all the job types
nodeName
       The URL of the Hadoop name-node. (Required)
jobTracker
       The URL of the Hadoop job-tracker. (Required)
jobUserName
       The name of the user submitting the Hadoop job. (Required)
libPath
       The path in the Hadoop file system where the jar files necessary
       to the Hadoop job reside. (Required)
Action attributes for Oozie workflow job type
workflowPath
       The path in the Hadoop file system where the workflow
       application resides. (Required)
Action attributes for MapReduce job type
Mapper-task classname
       The map-task classname. (Required)
Reducer-task classname
       The reducer-task classname. (Required)
Mapper-task input directory
       The map-task input directory. (Required)
Reducer-task output directory
       The reduce-task output directory. (Required)
Action attributes for Hive, Pig, and Sqoop job types
Actual command or script
       The actual command or script that you want to run with your
       job. (Required)
Parameters
       The parameters, and related values, that you are passing to your job.
Options
       The options that you are passing to your job.
Advanced attributes
customPropertiesTableValue
       Additional properties, and related values, that you might want to
       pass to your job. For example:
       <jsdloozie:customPropertiesTableValue
       key="examplesRoot">examples</jsdloozie:customPropertiesTableValue>
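The NumberOfRetries and RetryIntervalSeconds connection attributes describe a plain retry loop: on connection failure, wait the configured interval and try again, up to the configured count. A hedged sketch of that behavior, where the `connect` callable is a placeholder rather than the real Oozie client:

```python
import time

def connect_with_retries(connect, number_of_retries=0, retry_interval_seconds=30):
    """Call connect(); on ConnectionError retry up to number_of_retries
    more times, sleeping retry_interval_seconds between attempts.
    Defaults match the attributes above: 0 retries, 30 seconds."""
    attempts = 1 + number_of_retries
    for attempt in range(attempts):
        try:
            return connect()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # retries exhausted: surface the failure
            time.sleep(retry_interval_seconds)
```

With the default of 0 retries, a single failure is fatal; raising NumberOfRetries trades submission latency for resilience to transient network errors.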
Note:
If incorrect values are specified for timeout and pollingPeriod at job definition
time, during the job execution the incorrect values are replaced as follows:
v If numeric values are specified that are lower than the minimum allowed values,
they are replaced with:
timeout
10 seconds (the minimum allowed value)
pollingPeriod
5 seconds (the minimum allowed value)
v If non-numeric values are specified, they are replaced with:
timeout
7200 seconds (the default value)
pollingPeriod
15 seconds (the default value)
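The replacement rules in the note amount to a small sanitization step: a non-numeric value falls back to the default, and a numeric value below the minimum is raised to the minimum. A sketch, assuming the limits quoted above (timeout: minimum 10, default 7200; pollingPeriod: minimum 5, default 15):

```python
def sanitize(value, minimum, default):
    """Apply the note's replacement rules to one attribute value."""
    try:
        number = int(value)
    except (TypeError, ValueError):
        return default           # non-numeric: replaced with the default
    return max(number, minimum)  # below the minimum: raised to the minimum

def sanitize_oozie_polling(timeout, polling_period):
    """Return the (timeout, pollingPeriod) pair actually used at run time."""
    return (sanitize(timeout, 10, 7200), sanitize(polling_period, 5, 15))
```

So a job defined with timeout=3 and pollingPeriod=2 actually runs with 10 and 5, while non-numeric values yield 7200 and 15.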
You schedule IBM Workload Scheduler Oozie jobs by defining them in job streams.
Add the job to a job stream with all the necessary scheduling arguments and
submit the job stream.
You can submit jobs by using the Dynamic Workload Console, Application Lab or
the conman command line. See Chapter 3, “Scheduling and submitting jobs and job
streams,” on page 9 for information about how to schedule and submit jobs and
job streams using the various interfaces.
After submission, when the job is running and is reported in EXEC status in IBM
Workload Scheduler, you can stop it if necessary, by using the kill command. This
action stops also the program execution on the Oozie server.
Monitoring a job
If the IBM Workload Scheduler agent stops when you submit the Oozie job, or
while the job is running, the job restarts automatically as soon as the agent restarts.
For information about how to monitor jobs using the different product interfaces
available, see Chapter 4, “Monitoring IBM Workload Scheduler jobs,” on page 11.
OozieJobExecutor.properties file
The properties file is automatically generated either when you perform a "Test
Connection" from the Dynamic Workload Console in the job definition panels, or
when you submit the job to run the first time. Once the file has been created, you
can customize it. This is especially useful when you need to schedule several jobs
of the same type. You can specify the values in the properties file and avoid
having to provide information such as credentials and other information, for each
job. You can override the values in the properties files by defining different values
at job definition time.
For a description of each property, see the corresponding job attribute description
in Table 16 on page 77.
While the job is running, you can track the status of the job and analyze the
properties of the job. In particular, in the Extra Information section, if the job
contains variables, you can verify the value passed to the variable from the remote
system. Some job streams use the variable passing feature, for example, the value
of a variable specified in job 1, contained in job stream A, is required by job 2 in
order to run in the same job stream.
For information about how to display the job properties from the various
supported interfaces, see Chapter 5, “Analyzing the job log,” on page 13. For
example, from the conman command line, you can see the job properties by
running:
conman sj <job_name>;props
The properties are listed in the Extra Information section of the command output.
For information about passing job properties, see the topic about passing job
properties from one job to another in the same job stream instance in the User's
Guide and Reference.
For information about how to display the job log from the various supported
interfaces, see Chapter 5, “Analyzing the job log,” on page 13.
For example, you can see the job log content by running conman sj
<job_name>;stdlist, where <job_name> is the Oozie job name.
See also
From the Dynamic Workload Console you can perform the same task as described
in
the Dynamic Workload Console User's Guide, section about Creating job definitions.
For more information about how to create and edit scheduling objects, see
the Dynamic Workload Console User's Guide, section about Designing your Workload.
* Prerequisites
* For information about the supported versions of the job plug-ins, generate a
* dynamic Data Integration report from the IBM Software Product Compatibility
* Reports web site, and select the Supported Software tab: Data Integration.
* A description of the job properties and valid values are detailed in the
* context-sensitive help in the Dynamic Workload Console by clicking the question
* mark (?) icon in the top-right corner of the properties pane.
* For more information about creating jobs using the various supported product
* interfaces, see Chapter 2, “Defining a job,” on page 7.
* The following table lists the required and optional attributes for Apache Spark
* jobs:
* Table 17. Required and optional attributes for the definition of an Apache Spark job
* Connection attributes
* Url
*        The Apache Spark server Url. It must have the following format:
*        http://<SPARK_SERVER>:8080/json (dashboard address). If not
*        specified in the job definition, it must be supplied in the plug-in
*        properties file.
* REST Url
*        The Apache Spark server Url to execute REST API calls. It must
*        have the following format: http://<SPARK_SERVER>:6066 where
*        6066 is the default port for REST API calls. If not specified in the
*        job definition, it must be supplied in the plug-in properties file.
* Resource Name
*        The full path to the .jar, .py, or .R file that contains the
*        application code. (Required)
* Resource Type
*        The type of resource specified in the Resource Name field.
* Main Class
*        The entry point for your application. For example,
*        org.apache.spark.examples.SparkPi. (Required)
* Arguments
*        The arguments passed to the main method of your main class, if
*        any. If more than one argument is present, use commas to
*        separate the different arguments.
* Application Name
*        The name of the application. (Required)
* JAR
*        The full path to a bundled jar including your application and all
*        dependencies. The URL must be globally visible inside your
*        cluster, for instance, an hdfs path or a file path that is present on
*        all nodes. (Required)
* Deploy Mode
*        The deploy mode of the Apache Spark driver program.
* Driver Cores
*        Number of cores to use for the driver process, only in cluster
*        mode.
* Driver Memory
*        Amount of memory in gigabytes to use for the driver process.
* Executor Cores
*        The number of cores to use on each executor. It is ignored when
*        Apache Spark runs in standalone mode: in this case, it gets the
*        value of Driver Cores because the executor is launched within a
*        driver JVM process.
* Executor Memory
*        Amount of memory in gigabytes to use per executor process. It is
*        ignored when Apache Spark runs in standalone mode: in this
*        case, it gets the value of Driver Memory because the executor is
*        launched within a driver JVM process. (Required)
* Variable List
*        The list of variables with related values that you want to specify.
*        Click the plus (+) sign to add one or more variables to the
*        variable list. Click the minus (-) sign to remove one or more
*        variables from the variable list. You can search for a variable in
*        the list by specifying the variable name in the filter box.
*
* Note: Required and optional attributes cannot contain double quotation marks.
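Because attribute values cannot contain double quotation marks, a submission front end would typically reject offending values before building the job definition. A minimal validation sketch (the attribute names shown are only examples, not a fixed schema):

```python
def find_quoted_attributes(attributes):
    """Return the names of attributes whose values contain a double
    quotation mark, which the Apache Spark plug-in does not accept."""
    return sorted(
        name for name, value in attributes.items()
        if value is not None and '"' in str(value)
    )
```

An empty result means the definition passes this check; otherwise the listed attributes must be corrected before submission.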
* You schedule IBM Workload Scheduler Apache Spark jobs by defining them in job
* streams. Add the job to a job stream with all the necessary scheduling arguments
* and submit the job stream.
* You can submit jobs by using the Dynamic Workload Console, Application Lab or
* the conman command line. See Chapter 3, “Scheduling and submitting jobs and job
* streams,” on page 9 for information about how to schedule and submit jobs and
* job streams using the various interfaces.
* After submission, when the job is running and is reported in EXEC status in IBM
* Workload Scheduler, you can stop it if necessary, by using the kill command. This
* action stops also the program execution on the Apache Spark server.
* Monitoring a job
* If the IBM Workload Scheduler agent stops when you submit the Apache Spark
* job, or while the job is running, the job restarts automatically as soon as the agent
* restarts.
* For information about how to monitor jobs using the different product interfaces
* available, see Chapter 4, “Monitoring IBM Workload Scheduler jobs,” on page 11.
* ApacheSparkJobExecutor.properties
* The properties file is automatically generated either when you perform a "Test
* Connection" from the Dynamic Workload Console in the job definition panels, or
* when you submit the job to run the first time. Once the file has been created, you
* can customize it. This is especially useful when you need to schedule several jobs
* of the same type. You can specify the values in the properties file and avoid
* having to provide information such as credentials and other information, for each
* job. You can override the values in the properties files by defining different values
* at job definition time.
* The url and sparkurl properties must be specified either in this file or when
* creating the Apache Spark job definition in the Dynamic Workload Console. For
* more information, see the Dynamic Workload Console online help.
* The timeout property represents the time, in seconds, that IBM Workload
* Scheduler waits for a reply from Apache Spark server. When the timeout expires
* with no reply, the job terminates with abend status. The timeout property can be
* specified only in the properties file.
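The timeout property is effectively a deadline on the wait for the Apache Spark server's reply: once it expires with no answer, the job ends in abend status. A sketch of that deadline check, with a placeholder `poll` callable in place of a real REST call:

```python
import time

def wait_for_reply(poll, timeout_seconds, interval_seconds=1):
    """Call poll() until it returns a non-None reply or the timeout
    expires. Returning None past the deadline models the job
    terminating with abend status."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        reply = poll()
        if reply is not None:
            return reply
        time.sleep(interval_seconds)
    return None  # timeout expired with no reply from the server
```

The interval here is illustrative; in the product only the timeout itself is configurable, and only through the properties file.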
* For a description of each property, see the corresponding job attribute description
* in Table 17 on page 81.
* Job properties
* While the job is running, you can track the status of the job and analyze the
* properties of the job. In particular, in the Extra Information section, if the job
* contains variables, you can verify the value passed to the variable from the remote
* system. Some job streams use the variable passing feature, for example, the value
* of a variable specified in job 1, contained in job stream A, is required by job 2 in
* order to run in the same job stream.
* For information about how to display the job properties from the various
* supported interfaces, see Chapter 5, “Analyzing the job log,” on page 13. For
* example, from the conman command line, you can see the job properties by
* running:
* conman sj <job_name>;props
* The properties are listed in the Extra Information section of the command output.
* For information about passing job properties, see the topic about passing job
* properties from one job to another in the same job stream instance in the User's
* Guide and Reference.
* The following example shows an Apache Spark job definition created by using
* the composer command line:
* <?xml version="1.0" encoding="UTF-8"?>
* <jsdl:jobDefinition xmlns:jsdl="http://www.ibm.com/xmlns/prod/scheduling/1.0/jsdl"
* xmlns:jsdlapachespark="http://www.ibm.com/xmlns/prod/scheduling/1.0/jsdlapachespark" name="APACHESPARK">
* <jsdl:application name="apachespark">
* <jsdlapachespark:apachespark>
* <jsdlapachespark:ApacheSparkParameters>
* <jsdlapachespark:Connection>
* <jsdlapachespark:connectionInfo>
* <jsdlapachespark:url>{url}</jsdlapachespark:url>
* <jsdlapachespark:sparkurl>{sparkurl}</jsdlapachespark:sparkurl>
* </jsdlapachespark:connectionInfo>
* </jsdlapachespark:Connection>
* <jsdlapachespark:Action>
* <jsdlapachespark:ResourceProperties>
* <jsdlapachespark:resourcename>{resourcename}</jsdlapachespark:resourcename>
* <jsdlapachespark:resourcetype>{resourcetype}</jsdlapachespark:resourcetype>
* <jsdlapachespark:mainclass>{mainclass}</jsdlapachespark:mainclass>
* <jsdlapachespark:arguments>{arguments}</jsdlapachespark:arguments>
* </jsdlapachespark:ResourceProperties>
* For information about how to display the job log from the various supported
* interfaces, see Chapter 5, “Analyzing the job log,” on page 13.
* For example, you can see the job log content by running conman sj
* <job_name>;stdlist, where <job_name> is the Apache Spark job name.
* See also
* From the Dynamic Workload Console you can perform the same task as described
* in
* the Dynamic Workload Console User's Guide, section about Creating job definitions.
* For more information about how to create and edit scheduling objects, see
* the Dynamic Workload Console User's Guide, section about Designing your Workload.
* Prerequisites
* For information about the supported versions of the job plug-ins, generate a
* dynamic Data Integration report from the IBM Software Product Compatibility
* Reports web site, and select the Supported Software tab: Data Integration.
* Before you can define Amazon EC2 jobs, you must have the Access Key ID and the
* Secret Access Key to use Amazon EC2 API(s).
* A description of the job properties and valid values are detailed in the
* context-sensitive help in the Dynamic Workload Console by clicking the question
* mark (?) icon in the top-right corner of the properties pane.
* For more information about creating jobs using the various supported product
* interfaces, see Chapter 2, “Defining a job,” on page 7.
* The following table lists the required and optional attributes for Amazon EC2 jobs:
* Table 18. Required and optional attributes for the definition of an Amazon EC2 job
* Connection attributes
* Access Key ID
*        The Access Key ID associated with your Amazon EC2 account. (Required)
* Secret Access Key
*        The Secret Access Key associated with your Amazon EC2 account. (Required)
* Dry-Run Checkbox
*        Supported values are true and false.
*        v When the value is true, Amazon EC2 verifies the required
*          permission to run the requested action, without actually making
*          the request.
*        v When the value is false, Amazon EC2 does not verify the
*          required permission to run the requested action.
*        Default is false.
* Action attributes when managing an existing Amazon EC2 instance
* Instance
*        The name of the instance that you want to work with. (Required)
* Change Power State
*        To change the power state of the instance, specify the new state:
*        v Start
*        v Stop
*        v Reboot
* Create Amazon Machine Image (AMI)
*        To create an Amazon EC2 Machine Image that provides the
*        information required to launch the instance.
* Image Name
*        The Amazon EC2 Machine Image name.
* Description
*        The Amazon EC2 Machine Image description.
* No Reboot Instance Checkbox
*        Supported values are true and false.
*        v When the value is true, you do not want Amazon EC2 to reboot
*          the instance.
*        v When the value is false, you do want Amazon EC2 to reboot
*          the instance.
*        Default is false.
* Remove
*        Select this option to remove the instance.
* Action attributes when creating a new Amazon EC2 instance
* Amazon Machine Image (AMI)
*        The Amazon EC2 Machine Image (AMI) name. (Required)
* Instance Type
*        The hardware of the host computer used for your instance. (Required)
* Network
*        The network interface to your instance. If you don't specify this
*        attribute, the Amazon EC2 default value is used.
* Subnet
*        The ID of the subnet where your instance is created. If you don't
*        specify this attribute, the Amazon EC2 default value is used.
* Security-group
*        The security group to be assigned to your instance. If you don't
*        specify this attribute, the instance is automatically assigned to
*        the Amazon EC2 default security group.
* EBS Volume size
*        The size (GB) of the Elastic Block Storage device that you can
*        attach to the instance.
*
* AmazonEC2JobExecutor.properties file
* Additional properties needed to run Amazon EC2 jobs are set in the plug-in
* properties file.
* The properties file is automatically generated either when you perform a "Test
* Connection" from the Dynamic Workload Console in the job definition panels, or
* when you submit the job to run the first time. Once the file has been created, you
* can customize it. This is especially useful when you need to schedule several jobs
* of the same type. You can specify the values in the properties file and avoid
* having to provide information such as credentials and other information, for each
* job. You can override the values in the properties files by defining different values
* at job definition time.
* The region, maxresults, and keypair properties can be specified only in the
* AmazonEC2JobExecutor.properties file.
* You schedule IBM Workload Scheduler Amazon EC2 jobs by defining them in job
* streams. Add the job to a job stream with all the necessary scheduling arguments
* and submit the job stream.
* You can submit jobs by using the Dynamic Workload Console, Application Lab or
* the conman command line. See Chapter 3, “Scheduling and submitting jobs and job
* streams,” on page 9 for information about how to schedule and submit jobs and
* job streams using the various interfaces.
* After submission, when the job is running and is reported in EXEC status in IBM
* Workload Scheduler, you can stop it if necessary, by using the kill command.
* Monitoring a job
* If the IBM Workload Scheduler agent stops when you submit the Amazon EC2 job,
* or while the job is running, the job restarts automatically as soon as the agent
* restarts.
* For information about how to monitor jobs using the different product interfaces
* available, see Chapter 4, “Monitoring IBM Workload Scheduler jobs,” on page 11.
* Job properties
* While the job is running, you can track the status of the job and analyze the
* properties of the job. In particular, in the Extra Information section, if the job
* contains variables, you can verify the value passed to the variable from the remote
* system. Some job streams use the variable passing feature, for example, the value
* of a variable specified in job 1, contained in job stream A, is required by job 2 in
* order to run in the same job stream.
* For information about how to display the job properties from the various
* supported interfaces, see Chapter 5, “Analyzing the job log,” on page 13. For
* example, from the conman command line, you can see the job properties by
* running:
* conman sj <job_name>;props
* The properties are listed in the Extra Information section of the command output.
* For information about passing job properties, see the topic about passing job
* properties from one job to another in the same job stream instance in the User's
* Guide and Reference.
* The following example shows the job definition for an Amazon EC2 job that
* changes the power state of an instance:
* <?xml version="1.0" encoding="UTF-8"?>
* <jsdl:jobDefinition xmlns:jsdl="http://www.XXX.com/xmlns/prod/scheduling/1.0/jsdl"
* xmlns:jsdlaws="http://www.XXX.com/xmlns/prod/scheduling/1.0/jsdlaws" name="AWS">
* <jsdl:application name="aws">
* <jsdlaws:aws>
* <jsdlaws:AwsParameters>
* <jsdlaws:Connection>
* <jsdlaws:connectionInfo>
* <jsdlaws:key1>YYYYYYYYYYY</jsdlaws:key1>
* <jsdlaws:key2>ZZZZZZZZZZZZ</jsdlaws:key2>
* </jsdlaws:connectionInfo>
* </jsdlaws:Connection>
* For information about how to display the job log from the various supported
* interfaces, see Chapter 5, “Analyzing the job log,” on page 13.
* For example, you can see the job log content by running conman sj
* <job_name>;stdlist, where <job_name> is the Amazon EC2 job name.
* See also
* From the Dynamic Workload Console you can perform the same task as described
* in
* the Dynamic Workload Console User's Guide, section about Creating job definitions.
* For more information about how to create and edit scheduling objects, see
* the Dynamic Workload Console User's Guide, section about Designing your Workload.
* Prerequisites
* For information about the supported versions of the job plug-ins, generate a
* dynamic Data Integration report from the IBM Software Product Compatibility
* Reports web site, and select the Supported Software tab: Data Integration.
* Before you can define IBM SoftLayer jobs, you must have a SoftLayer account. Log
* in to the SoftLayer Customer Portal and get the user name and API key required
* to connect to the SoftLayer cloud.
* A description of the job properties and valid values are detailed in the
* context-sensitive help in the Dynamic Workload Console by clicking the question
* mark (?) icon in the top-right corner of the properties pane.
* For more information about creating jobs using the various supported product
* interfaces, see Chapter 2, “Defining a job,” on page 7.
* The following table lists the required and optional attributes for IBM SoftLayer
* jobs:
* Table 19. Required and optional attributes for the definition of an IBM SoftLayer job
* Connection attributes
* Url
*        The SoftLayer cloud URL. Default value is
*        api-dev.softlayer.com/rest/v3. (Required)
* Username
*        The user name associated with your IBM SoftLayer account. (Required)
* Key
*        The API access key associated with your IBM SoftLayer account. (Required)
* Action attributes when managing an existing IBM SoftLayer virtual server
* Virtual Server
*        The IBM SoftLayer virtual server whose configuration you want
*        to change. (Required)
* Change Power State
*        To change the power state of the virtual server, specify the new
*        state:
*        v Power on
*        v Power off
*        v Reboot
*        v Pause
*        v Resume
* Take Snapshot
*        To capture an image of the virtual server to quickly replicate its
*        configuration.
* Image Type
*        The image type:
*        v Flex Image
*        v Image Template
* Image Name
*        The Image Template name.
* Notes
*        Any note about the image template.
* Remove
*        To remove the virtual server.
89
* Table 19. Required and optional attributes for the definition of an IBM SoftLayer
* job (continued)
* Attribute Description and value Required
* Action attributes when creating a new IBM SoftLayer virtual server
* Host Name The host name for the new virtual server. U
* Domain The domain for the new virtual server. U
* Location The IBM SoftLayer geographical location (data center) for the new U
* virtual server.
* O.S. The operating system to install on the new virtual server. U
* Number of CPUs The number of CPU cores to allocate. U
* Memory The amount of memory to allocate in megabytes. U
* Billing The billing type for the new virtual server: U
* v Hourly
* v Monthly
* Disk Type The disk type for the new virtual server: U
* v Local
* v SAN (Storage Area Network)
*
* IBMSoftLayerJobExecutor.properties file
* The properties file is automatically generated either when you perform a "Test
* Connection" from the Dynamic Workload Console in the job definition panels, or
* when you submit the job to run for the first time. After the file has been
* created, you can customize it. This is especially useful when you need to
* schedule several jobs of the same type: you can specify values such as
* credentials once in the properties file, instead of providing them for each
* job. Values that you define at job definition time override the values in the
* properties file.
* The timeout property can be specified only in the properties file. It represents
* the maximum time, in seconds, that the job waits for IBM SoftLayer to complete
* the requested action. When the timeout expires, the job ends in ABEND status.
* When the properties file is created, the timeout default is set to 3600. If you
* modify the property and leave the timeout blank, the job waits for 600 seconds.
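The timeout resolution described above can be sketched as follows; `resolve_timeout` is an illustrative helper, not part of the plug-in, and the handling of a missing property is an assumption based on the documented defaults:

```python
# Sketch of the documented timeout behavior: 3600 is the default written when
# the properties file is created, and a blank value makes the job wait 600
# seconds. The None branch (property missing) is an assumption.
def resolve_timeout(raw_value):
    """Return the effective timeout, in seconds, for an IBM SoftLayer job."""
    if raw_value is None:        # property absent: file default written at creation
        return 3600
    if raw_value.strip() == "":  # property present but left blank
        return 600
    return int(raw_value)

print(resolve_timeout(None))     # 3600
print(resolve_timeout(""))       # 600
print(resolve_timeout("44400"))  # 44400
```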
* For a description of each of the properties, see the corresponding job attribute
* description. The following is an example of the IBM SoftLayer properties file:
* url=api-dev.softlayer.com/rest/v3
* username=my_name
* timeout=44400
* location=[{"longname":"Amsterdam","name":"ams01"},{"longname":"Amsterdam","name":"ams03"},
* {"longname":"Chennai","name":"che01"},{"longname":"Dallas","name":"dal01"},{"longname":"Dallas","name":"dal05"},
* {"longname":"Dallas","name":"dal06"},{"longname":"Dallas","name":"dal09"},{"longname":"Dallas","name":"dal10"},
* {"longname":"Dallas","name":"dal12"},{"longname":"Dallas","name":"dal13"},{"longname":"Frankfurt","name":"fra02"}]
* operatingsystem=[{"operatingsystemlongname":"CentOS Latest","operatingSystemReferenceCode":"CENTOS_LATEST"},
* {"operatingsystemlongname":"CentOS Latest (64 bit)","operatingSystemReferenceCode":"CENTOS_LATEST_64"},
* {"operatingsystemlongname":"CentOS Latest (32 bit)","operatingSystemReferenceCode":"CENTOS_LATEST_32"},
* {"operatingsystemlongname":"Red Hat Enterprise Linux 7.x Minimal Install
* (64 bit)","operatingSystemReferenceCode":"REDHAT_7_64"}]
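Note that the location and operatingsystem properties each hold a JSON array on a single line. As a minimal illustration, with values copied (and truncated) from the example above, any JSON parser can list the available data centers:

```python
import json

# A truncated copy of the "location" value from the example properties file.
location = ('[{"longname":"Amsterdam","name":"ams01"},'
            '{"longname":"Chennai","name":"che01"},'
            '{"longname":"Dallas","name":"dal01"}]')

for dc in json.loads(location):
    print(dc["name"], dc["longname"])  # e.g. "ams01 Amsterdam"
```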
* You schedule IBM Workload Scheduler IBM SoftLayer jobs by defining them in job
* streams. Add the job to a job stream with all the necessary scheduling arguments
* and submit the job stream.
* You can submit jobs by using the Dynamic Workload Console, Application Lab or
* the conman command line. See Chapter 3, “Scheduling and submitting jobs and job
* streams,” on page 9 for information about how to schedule and submit jobs and
* job streams using the various interfaces.
* After submission, when the job is running and is reported in EXEC status in IBM
* Workload Scheduler, you can stop it if necessary, by using the kill command.
* Monitoring a job
* If the IBM Workload Scheduler agent stops when you submit the IBM SoftLayer
* job, or while the job is running, the job restarts automatically as soon as the agent
* restarts.
* For information about how to monitor jobs using the different product interfaces
* available, see Chapter 4, “Monitoring IBM Workload Scheduler jobs,” on page 11.
* Job properties
* While the job is running, you can track the status of the job and analyze the
* properties of the job. In particular, in the Extra Information section, if the job
* contains variables, you can verify the value passed to the variable from the remote
* system. Some job streams use the variable passing feature: for example, the
* value of a variable specified in job 1 of job stream A is required by job 2 of
* the same job stream in order to run.
* For information about how to display the job properties from the various
* supported interfaces, see Chapter 5, “Analyzing the job log,” on page 13. For
* example, from the conman command line, you can see the job properties by
* running:
* conman sj <job_name>;props
* The properties are listed in the Extra Information section of the command output.
* For information about passing job properties, see the topic about passing job
* properties from one job to another in the same job stream instance in the User's
* Guide and Reference.
* The following example shows the job definition for an IBM SoftLayer job that
* changes the power state of an instance:
* <?xml version="1.0" encoding="UTF-8"?>
* <jsdl:jobDefinition xmlns:jsdl="http://www.xxx.com/xmlns/prod/scheduling/1.0/jsdl"
* xmlns:jsdlsoftlayer="http://www.xxx.com/xmlns/prod/scheduling/1.0/jsdlsoftlayer" name="SOFTLAYER">
* <jsdl:application name="softlayer">
* <jsdlsoftlayer:softlayer>
* <jsdlsoftlayer:SoftLayerParameters>
* <jsdlsoftlayer:Connection>
* <jsdlsoftlayer:connectionInfo>
* <jsdlsoftlayer:url></jsdlsoftlayer:url>
* <jsdlsoftlayer:username>my_name</jsdlsoftlayer:username>
* <jsdlsoftlayer:key>
* xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
* For example, you can see the job log content by running conman sj
* <job_name>;stdlist, where <job_name> is the IBM SoftLayer job name.
* See also
* From the Dynamic Workload Console you can perform the same task as described in
* the Dynamic Workload Console User's Guide, section about Creating job definitions.
* For more information about how to create and edit scheduling objects, see
* the Dynamic Workload Console User's Guide, section about Designing your Workload.
* Prerequisites
* For information about the supported versions of the job plug-ins, generate a
* dynamic Data Integration report from the IBM Software Product Compatibility
* Reports web site, and select the Supported Software tab: Data Integration.
* Before you can define Microsoft Azure jobs, you must have a Microsoft Azure
* Tenant ID, a Client ID, and a Client Secret Key.
* For more information about creating jobs using the various supported product
* interfaces, see Chapter 2, “Defining a job,” on page 7.
* The following table lists the required and optional attributes for Microsoft Azure
* jobs:
* Table 20. Required and optional attributes for the definition of a Microsoft Azure job
*
* Connection attributes:
* Subscription (required)
*     The ID that uniquely identifies your subscription to Microsoft Azure.
* Client (required)
*     The Client ID associated with your Microsoft Azure account.
* Tenant (required)
*     The Tenant ID associated with your Microsoft Azure account.
* Key (required)
*     The Secret Key associated with your Microsoft Azure account.
*
* Action attributes when managing an existing Microsoft Azure virtual machine:
* Virtual Machine (required)
*     The name of the virtual machine that you want to work with.
* Change Power State
*     To change the power state of the virtual machine, specify the new state:
*     v Start
*     v Stop
*     v Restart
*     v Deallocate
*     v Generalize
*     v Delete
* Generalize
*     To generalize a virtual machine.
* Capture a Custom Image
*     To create a virtual machine image.
* Image Name
*     The name of the virtual machine image.
* Add a Tag
*     To add a tag to the virtual machine.
* Tag Name
*     The name of the tag.
* Tag Value
*     The value of the tag.
* Deallocate
*     To deallocate the virtual machine.
* Delete
*     To delete the virtual machine.
*
* Action attributes when creating a new Microsoft Azure virtual machine:
* VM Name (required)
*     The name of the virtual machine that you want to create.
* Resource Group (required)
*     The container that holds all the resources for an application.
* Primary Network (required)
*     The primary network interface for your virtual machine.
* Primary Private IP
*     The private IP address for the primary network interface. If not specified,
*     a dynamic IP address is assigned.
* Primary Public IP
*     The public IP address for the primary network interface. If not specified,
*     a dynamic IP address is assigned.
* From Image (required)
*     The virtual machine image name.
* Username (required)
*     The user name to log on to the virtual machine.
* Password (required)
*     The password to log on to the virtual machine.
* Size (required)
*     The size of the virtual machine.
*
* MicrosoftAzureJobExecutor.properties file
* The properties file is automatically generated either when you perform a "Test
* Connection" from the Dynamic Workload Console in the job definition panels, or
* when you submit the job to run for the first time. After the file has been
* created, you can customize it. This is especially useful when you need to
* schedule several jobs of the same type: you can specify values such as
* credentials once in the properties file, instead of providing them for each
* job. Values that you define at job definition time override the values in the
* properties file.
* You schedule IBM Workload Scheduler Microsoft Azure jobs by defining them in
* job streams. Add the job to a job stream with all the necessary scheduling
* arguments and submit the job stream.
* You can submit jobs by using the Dynamic Workload Console, Application Lab or
* the conman command line. See Chapter 3, “Scheduling and submitting jobs and job
* streams,” on page 9 for information about how to schedule and submit jobs and
* job streams using the various interfaces.
* After submission, when the job is running and is reported in EXEC status in IBM
* Workload Scheduler, you can stop it if necessary, by using the kill command.
* Monitoring a job
* If the IBM Workload Scheduler agent stops when you submit the Microsoft Azure
* job, or while the job is running, the job restarts automatically as soon as the agent
* restarts.
* For information about how to monitor jobs using the different product interfaces
* available, see Chapter 4, “Monitoring IBM Workload Scheduler jobs,” on page 11.
* Job properties
* While the job is running, you can track the status of the job and analyze the
* properties of the job. In particular, in the Extra Information section, if the job
* contains variables, you can verify the value passed to the variable from the remote
* system. Some job streams use the variable passing feature: for example, the
* value of a variable specified in job 1 of job stream A is required by job 2 of
* the same job stream in order to run.
* For information about how to display the job properties from the various
* supported interfaces, see Chapter 5, “Analyzing the job log,” on page 13. For
* example, from the conman command line, you can see the job properties by
* running:
* conman sj <job_name>;props
* The properties are listed in the Extra Information section of the command output.
* For information about passing job properties, see the topic about passing job
* properties from one job to another in the same job stream instance in the User's
* Guide and Reference.
* The following example shows the job definition for a Microsoft Azure job that
* creates a new virtual machine:
* <?xml version="1.0" encoding="UTF-8"?>
* <jsdl:jobDefinition xmlns:jsdl="http://www.xxx.com/xmlns/prod/scheduling/1.0/jsdl"
* xmlns:jsdlazure="http://www.xxx.com/xmlns/prod/scheduling/1.0/jsdlazure" name="AZURE">
* <jsdl:application name="azure">
* For information about how to display the job log from the various supported
* interfaces, see Chapter 5, “Analyzing the job log,” on page 13.
* For example, you can see the job log content by running conman sj
* <job_name>;stdlist, where <job_name> is the Microsoft Azure job name.
* See also
* From the Dynamic Workload Console you can perform the same task as described in
* the Dynamic Workload Console User's Guide, section about Creating job definitions.
* For more information about how to create and edit scheduling objects, see
* the Dynamic Workload Console User's Guide, section about Designing your Workload.
You can define, run, and manage these jobs both in a distributed and in a z/OS
environment, by selecting the appropriate IBM Workload Scheduler or IBM
Workload Scheduler for z/OS engine in the Dynamic Workload Console.
In IBM Workload Scheduler environments, the plug-in jobs run on dynamic agents.
In IBM Workload Scheduler for z/OS environments, the plug-in jobs run on IBM
Workload Scheduler for z/OS agents.
In both environments the agent running the jobs, where a portion of the plug-in is
installed, must have a working connection with the Informatica Web Services Hub.
For information about the supported versions of the job plug-ins, generate the Data
Integration report on the IBM Software Product Compatibility Reports web site,
and select the Supported Software tab.
Prerequisites
You can run the IBM Workload Scheduler plug-in for Informatica PowerCenter
both in a distributed and in a z/OS environment.
For information about the supported versions of the job plug-ins, generate the Data
Integration report from the IBM Software Product Compatibility Reports site, and
select the Supported Software tab.
Distributed: To define, run, and manage job types with advanced options
for Informatica PowerCenter, install:
v The IBM Workload Scheduler master domain manager
v A dynamic agent connected:
– To the master domain manager
or
– To a dynamic domain manager connected to the master domain
manager.
v The dynamic agent running the plug-in must have a working connection
with the Informatica PowerCenter Web Services Hub.
z/OS: To define, run, and manage job types with advanced options
for Informatica PowerCenter, install:
v The IBM Workload Scheduler for z/OS controller.
v An IBM Workload Scheduler for z/OS agent connected to:
– The IBM Workload Scheduler for z/OS controller.
or
– A dynamic domain manager connected to the IBM Workload
Scheduler for z/OS controller.
v The IBM Workload Scheduler for z/OS agent running the plug-in must
have a working connection with the Informatica PowerCenter Web
Services Hub.
For detailed information about the IBM Workload Scheduler supported operating
systems, see the System Requirements Document.
See Chapter 2, “Defining a job,” on page 7 for more information about creating
jobs using the various interfaces available. Some samples of using one or more of
these interfaces to create an Informatica PowerCenter job definition are contained
in the sections that follow.
The following table lists the required and optional attributes for the PowerCenter
job definition, together with a description of each attribute.
Table 21. Required and optional attributes for the job definition of PowerCenter jobs.
userName (required)
    The name used to access the PowerCenter repository. See Note.
password (required)
    The password used to access the PowerCenter repository. It is encrypted
    when you submit the job. See Note.
repositoryDomain
    The domain where the repository is located. See Note.
serviceDomain
    The domain where the PowerCenter Integration Service is located. See Note.
Note: If you do not want to specify a value for this attribute in the XML, you can
define it in the PowerCenterJobExecutor.properties file. See “Customizing IBM
Workload Scheduler to run Informatica PowerCenter jobs” on page 103 for details.
The following example shows the job definition of a PowerCenter job with all the
attributes specified:
$JOBS
LINUX206#PC-FULL
TASK
<?xml version="1.0" encoding="UTF-8"?>
<jsdl:jobDefinition xmlns:jsdl="http://www.abc.com/xmlns/prod/scheduling/1.0/jsdl"
xmlns:jsdlpowercenter="http://www.abc.com/xmlns/prod/scheduling/1.0/jsdlpowercenter" name="POWERCENTER">
<jsdl:application name="powercenter">
<jsdlpowercenter:powercenter>
<jsdlpowercenter:PowerCenterParameters>
<jsdlpowercenter:PowerCenterPanel>
<jsdlpowercenter:logon>
<jsdlpowercenter:userName>Administrator</jsdlpowercenter:userName>
<jsdlpowercenter:password>{aes}BPnRnktQdkmJt3HIy/r4Z4EVy40EWhUGpur+qshPdhU=</jsdlpowercenter:password>
<jsdlpowercenter:repositoryDomain>Domain_nc125123</jsdlpowercenter:repositoryDomain>
<jsdlpowercenter:serviceDomain>Domain_nc125123</jsdlpowercenter:serviceDomain>
<jsdlpowercenter:repository>MyRepository</jsdlpowercenter:repository>
</jsdlpowercenter:logon>
<jsdlpowercenter:jobDefinitionGroup>
<jsdlpowercenter:projectNameGroup>
<jsdlpowercenter:service>IntegrationService</jsdlpowercenter:service>
<jsdlpowercenter:folder>tws4apps</jsdlpowercenter:folder>
<jsdlpowercenter:workflow>DB2_COPY_FROM_SOURCE_TO_TARGET</jsdlpowercenter:workflow>
</jsdlpowercenter:projectNameGroup>
<jsdlpowercenter:wkfParamFile>C:\Informatica variables file.txt</jsdlpowercenter:wkfParamFile>
</jsdlpowercenter:jobDefinitionGroup>
</jsdlpowercenter:PowerCenterPanel>
</jsdlpowercenter:PowerCenterParameters>
</jsdlpowercenter:powercenter>
</jsdl:application>
</jsdl:jobDefinition>
DESCRIPTION "Added by composer1."
RECOVERY STOP
The following example shows the job definition of the same PowerCenter job with
only the required attributes specified:
$JOBS
LINUX206#PC-REQD
TASK
<?xml version="1.0" encoding="UTF-8"?>
<jsdl:jobDefinition xmlns:jsdl="http://www.abc.com/xmlns/prod/scheduling/1.0/jsdl"
xmlns:jsdlpowercenter="http://www.abc.com/xmlns/prod/scheduling/1.0/jsdlpowercenter" name="POWERCENTER">
<jsdl:application name="powercenter">
<jsdlpowercenter:powercenter>
<jsdlpowercenter:PowerCenterParameters>
100 IBM Workload Automation: Scheduling Applications with IBM Workload Automation
Password
The password used to access the repository. You can
leave it blank if a valid value is provided in the
PowerCenterJobExecutor.properties properties file.
Repository Domain
The domain where the repository is located.
Alternatively, a valid value can be provided in the
PowerCenterJobExecutor.properties properties file.
This field is optional.
Service Domain
The domain where the Integration Service is located.
Alternatively, a valid value can be provided in the
PowerCenterJobExecutor.properties properties file.
This field is optional.
Repository Name
The repository where the workflow is located. Click the
Repository List tab to get a list of selectable
repositories.
Workflow information
Use this section to identify the workflow that you want the job
to run.
Service Name
The integration service used to run the workflow. Click
the Service List tab to get a list of selectable integration
services.
Folder Name
The folder in the repository that you selected where the
workflow is located. Click the Folder List tab to get a
list of selectable folders.
Workflow Name
The name of the workflow that you want to run. Click
the Workflow List tab to get a list of selectable
workflows located in the folder that you indicated in
the previous field.
Workflow Parameter File
The full path and name of the parameters file, stored
on the Informatica PowerCenter server, that contains a
list of parameters to be passed to the workflow when
its run is issued. You can find instructions to write and
use parameters files in the Informatica PowerCenter
documentation guides.
z/OS: To define a job of type PowerCenter in the Dynamic
Workload Console:
1. Select Administration > Workload Design > Manage Workload
Definitions.
2. Select a z/OS engine.
3. Select New > Business Analytics > PowerCenter.
The Properties panel for the new job is displayed.
4. In the General tab, enter:
Workflow Parameter File
The full path and name of the parameters file, stored
on the Informatica PowerCenter server, that contains a
list of parameters to be passed to the workflow when
its run is issued. You can find instructions to write and
use parameters files in the Informatica PowerCenter
documentation guides.
The properties file is automatically generated either when you perform a "Test
Connection" from the Dynamic Workload Console in the job definition panels, or
when you submit the job to run for the first time. After the file has been
created, you can customize it. This is especially useful when you need to
schedule several jobs of the same type: you can specify values such as
credentials once in the properties file, instead of providing them for each
job. Values that you define at job definition time override the values in the
properties file.
The file contains two types of properties for use by the plug-in jobs:
v Credential properties that are repeated in all the plug-in job definitions. If you
specify them in this file, you can leave the corresponding fields blank in the job
definition and the values are retrieved from the properties file at runtime.
The properties are:
userName
The name used to access the repository.
password
The password used to access the repository. The password is encrypted
the first time it is used (either in a pick list or at the runtime of a job).
repositoryDomain
The domain where the repository is located.
serviceDomain
The domain where the Integration Service is located.
The values specified for any of these properties in the file are overridden by the
job definition values when entered in the corresponding fields.
v Properties required to establish a connection with the Informatica Web Services
Hub. It is mandatory that you specify these properties in the file.
The properties are:
hostName
The hostname or the IP address of the computer hosting the Informatica
Web Services Hub; that is, the service that provides the web services
used for accessing the workflows.
port The port number of the Informatica Web Services Hub.
In the file, each property is specified on its own line, using the key=value
syntax. See the following example:
port=7334
password=mypassword
isSSLEnabled=false
userName=Administrator
serviceDomain=Domain_NY004114
hostName=NY004114.citylab.com
repositoryDomain=Domain_NY004114
use_load_balancer=YES
polling=No
no_infa_log=YES
errorMsgs=TimeOut
checkWfStatusBeforeWait=true
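The key=value syntax, and the documented precedence (non-blank job definition values override file values), can be sketched as follows. `load_properties` and the sample job definition are illustrative only, not product code:

```python
# Minimal sketch: parse a key=value properties file, then apply the documented
# precedence, where non-blank job definition fields win over file values.
def load_properties(text):
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

# A few entries copied from the example file above.
sample = """\
port=7334
userName=Administrator
password=mypassword
hostName=NY004114.citylab.com
"""

file_props = load_properties(sample)
job_definition = {"userName": "etl_user", "password": ""}  # blank falls back to file

effective = dict(file_props)
effective.update({k: v for k, v in job_definition.items() if v})

print(effective["userName"])  # etl_user
print(effective["password"])  # mypassword
```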
If isSSLEnabled=true in the PowerCenterJobExecutor.properties properties file,
you must also change the JVMOption key in the JobManager.ini file located in
directory TWS_home/ITA/cpa/config (TWS_home\ITA\cpa\config on Windows) on the
agent. In this case, JVMOption must contain the following:
-Djava.protocol.handler.pkgs=com.sun.net.ssl.internal.www.protocol
-Djavax.net.ssl.trustStore=keystore_pathfile_name
where keystore_pathfile_name is the path and the filename of the truststore used to
access the protected web services hub. For example:
-Djavax.net.ssl.trustStore=/opt/abc/TWA/ssl/wsh.keystore
See Chapter 3, “Scheduling and submitting jobs and job streams,” on page 9 for
more information about how to schedule and submit jobs and job streams using
the various interfaces.
After you define an IBM Workload Scheduler for PowerCenter job, you add it to a
job stream with all the necessary scheduling arguments and submit it.
After submission you can kill the IBM Workload Scheduler for PowerCenter job, if
necessary. This action is converted to an Abort action for the PowerCenter
workflow.
The IBM Workload Scheduler agent might become unavailable while the job
running the PowerCenter workflow is in execution. When the agent becomes
available again, IBM Workload Scheduler starts to monitor the job from where it
stopped.
For information about monitoring jobs and job streams, see Chapter 4, “Monitoring
IBM Workload Scheduler jobs,” on page 11.
For information about analyzing the job log, see Chapter 5, “Analyzing the job
log,” on page 13.
You can monitor and even interrupt the job by using the monitoring features of
IBM Workload Scheduler. You can monitor jobs by using any of the product
interfaces.
For information about monitoring jobs and job streams, see Chapter 4, “Monitoring
IBM Workload Scheduler jobs,” on page 11.
When monitoring the job from Dynamic Workload Console, you can display
detailed information about the Informatica PowerCenter workflow and run actions
either on the workflow, or on any first-level tasks that are in the workflow, if these
are session tasks or worklets. You cannot display or run actions on:
v Tasks that are not session tasks or worklets.
During the workflow execution, the Workflow details panel displays information
about active worklets or session tasks, that is, worklets or session tasks that
already started on the Informatica PowerCenter server. The Workflow details
panel does not display information about:
v Worklets or session tasks that have not yet started.
v Workflow instances related to a previous run.
v Worklets or session tasks that are running but not with their original name (that
is, they have been renamed by using Informatica PowerCenter Workflow
Manager).
If all the worklets or session tasks fall into this list, the Workflow details panel is
empty.
Note: Distributed To display workflow details, click the hyperlink Workflow details
in the Job Type column, in both the Monitor jobs and Monitor Critical jobs views
of Dynamic Workload Console. However, if the version of your engine is earlier
than V9.3.0.2, workflow details are displayed only in the Monitor jobs view.
Note: z/OS To display workflow details, click the hyperlink Workflow details
in the Job Type column in the Monitor jobs view of the Dynamic Workload
Console. The Monitor Critical jobs view does not display workflow details.
Note: To display the Workflow details panel, your dynamic domain manager
must be at version 9.3.0.2 or later.
From the Workflow details panel of Dynamic Workload Console, if the Informatica
PowerCenter workflow fails, you can restart it from the failed task. For details
about restarting an Informatica PowerCenter workflow, see the procedure for
restarting a workflow from the failed task: “Procedure for restarting an Informatica
PowerCenter workflow from the failed task” on page 107.
When the job completes, the status of the plug-in job reflects the status of the run
workflow and a job log is made available. The job log shows the status, start date,
and end date of any first-level tasks that are in the workflow, if these are session
tasks or worklets.
Details that are produced by Informatica about the run of the workflow are also
copied into the job log after the task status.
For detailed information about how to access the job properties by using the
various interfaces available, see Chapter 5, “Analyzing the job log,” on page 13.
Messages
All the messages that are issued by the plug-in are described in IBM Workload
Automation: Messages and Codes.
Procedure for restarting an Informatica PowerCenter workflow
from the failed task
You can restart an Informatica PowerCenter workflow from the failed task, by
using the Dynamic Workload Console.
To restart an Informatica PowerCenter workflow from the failed task, run the
following steps:
Procedure
1. From the navigation toolbar, click System Status and Health.
2. In the Workload Monitoring section, click Monitor Workload.
3. From the engine drop-down list, select the name of the IBM Workload
Scheduler engine connection from where you want to work with Informatica
PowerCenter jobs.
4. From the object type drop-down list, select the object type Job.
5. From the drop-down list of defined tasks or queries for monitoring jobs, select
to display the related list of jobs. If you have a predefined task to display
Informatica PowerCenter jobs, click that task. A list of jobs is displayed.
Distributed
Distributed environment
The Job Type column displays PowerCenter Workflow details.
z/OS
z/OS environment
The Advanced Job Type column displays PowerCenter Workflow
details. To display the Advanced Job Type column in the table, edit the
Task Properties and in Column Definition, add the Advanced Job
Type column to the Selected Columns list. Move the column up to
define the order of the column in the table and make it more visible.
6. Click the hyperlink Workflow details. The Workflow details panel opens.
7. Select the action that you want to run either on the workflow or on the single
task (worklet task or session task):
a. Actions on the workflow:
Refresh
Refreshes the Workflow Details panel with the latest updates on the
remote Informatica PowerCenter system to synchronize the two
views.
Recover
Restarts the workflow from the failed task.
Stop
Stops the workflow.
Abort
Aborts the workflow.
Distributed Rerun job
Restarts the IBM Workload Scheduler job. The Informatica
PowerCenter workflow restarts from the beginning. If you select to
rerun the job, to display the workflow details you must close the
current Workflow details panel and open the one related to the
new instance of the job.
Table 22 shows how the IBM Workload Scheduler job status maps to the PowerCenter
workflow status, based on the return code you find in the job log output.
Table 22. Mapping between IBM Workload Scheduler job statuses and PowerCenter workflow statuses
PowerCenter       Dynamic Workload     IBM Workload          IBM Workload Scheduler
workflow status   Console job status   Scheduler job status  for z/OS job status
Running           Running              EXEC                  Executing
Succeeded         Successful           SUCC                  Completed
Failed            Error                ABEND                 Error
Aborted           Error                ABEND                 Error
Stopped           Error                ABEND                 Error
Suspended         Running              EXEC                  Executing
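The mapping in Table 22 can also be expressed as a simple lookup table. This is a sketch for illustration only; the status strings are taken verbatim from the table:

```python
# PowerCenter workflow status -> IBM Workload Scheduler job status (Table 22).
WORKFLOW_TO_JOB_STATUS = {
    "Running":   "EXEC",
    "Succeeded": "SUCC",
    "Failed":    "ABEND",
    "Aborted":   "ABEND",
    "Stopped":   "ABEND",
    "Suspended": "EXEC",
}

print(WORKFLOW_TO_JOB_STATUS["Succeeded"])  # SUCC
print(WORKFLOW_TO_JOB_STATUS["Suspended"])  # EXEC
```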
Summary
Incorrect worklet or session task properties are shown in the job log after the IBM
Workload Scheduler job run.
Problem symptom
Despite the successful completion of a worklet or session task, the IBM Workload
Scheduler job log displays its status as UNKNOWN and does not display the Start
and Completion times, as shown below:
Task Name Start Time Completion Time Status
-----------------------------------------------------------------------------
Worklet_Renamed UNKNOWN
Solution
To avoid this problem, you must use the same worklet or session task name when
rerunning a workflow.
Summary
A restart of the PowerCenter Web Services Hub prevents the proper submission of
IBM Workload Scheduler plug-in for Informatica jobs.
Problem symptom
Following a restart of the Web Services Hub, the IBM Workload Scheduler jobs
launched from the command line end in FAIL state (Error state in the Dynamic
Workload Console) and return the following exception in the job log:
AWKIPC005E Failed to run workflow.
---------------------------------------
Remote Exception
----------------------------------------
Solution
After restarting the Hub, to enable the correct submission of plug-in jobs, connect
to the Informatica PowerCenter Web Services Hub URL and follow these steps:
1. In the Navigator pane, expand Web Service -> Batch WebService, and then
click Integration WebService.
2. In the Operations pane, click Try-It from the toolbar.
3. In the Web Service Operations Navigator pane, click login.
4. Fill out the form in the right pane, specifying the required information in the
UserName, RepositoryName, and Password text fields.
5. Click Send.
6. In the SOAP Response pane, copy the value for the SessionId tag.
7. In the Web Service Operations Navigator pane, click getWorkflowLog.
8. Paste the value copied previously in the SessionId text field and then enter the
required information in the FolderName, WorkflowName, Timeout, and
ServiceName text fields.
9. Click Send.
You can now submit jobs safely again.
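Steps 3 through 5 above submit a SOAP login request through the Try-It form. For illustration only, the same request can be assembled programmatically. The namespace and element wrappers below are assumptions inferred from the field names in the procedure (UserName, RepositoryName, Password); check the WSDL of your Web Services Hub for the actual contract.

```python
# Sketch of the "login" SOAP request body. The "ws" namespace URI is a
# placeholder assumption, not the real Informatica namespace.
def build_login_envelope(user, repository, password,
                         ns="http://example.com/wsh"):
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<soapenv:Envelope '
        'xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" '
        f'xmlns:ws="{ns}">'
        "<soapenv:Body><ws:login>"
        f"<ws:UserName>{user}</ws:UserName>"
        f"<ws:RepositoryName>{repository}</ws:RepositoryName>"
        f"<ws:Password>{password}</ws:Password>"
        "</ws:login></soapenv:Body></soapenv:Envelope>"
    )
```

The SessionId returned by this call would then be passed to the getWorkflowLog operation, as in steps 7 and 8.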
The problem is due to an Informatica PowerCenter defect, for which change request
296859 is outstanding. A formal solution should be provided by the Informatica
HotFix that addresses this change request.
You can manage Oracle E-Business Suite applications both in a distributed and in a
z/OS environment, by selecting the appropriate engine.
The Oracle E-Business Suite plug-in and the Oracle E-Business Suite instance can
be installed on machines having different operating systems.
Prerequisites
Before you run Oracle E-Business Suite jobs, complete the following actions.
1. On the Oracle E-Business Suite server, in the sqlnet.ora configuration file that is
located in the directory:
v ORACLE_HOME/network/admin (Windows systems)
v $ORACLE_HOME/network/admin (UNIX systems)
comment out the following instructions:
v SQLNET.EXPIRE_TIME = 10
v TCP.VALIDNODE_CHECKING=yes
2. Stop and start the Oracle Listener, as described at the following link:
http://docs.oracle.com/cd/B28359_01/server.111/b32009/
strt_stp.htm#UNXAR156.
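As an illustration only (not a supplied tool), step 1 can be scripted. This sketch comments out exactly the two settings named above and leaves every other line of the file untouched.

```python
# Sketch: comment out the two sqlnet.ora settings named in step 1.
# Matching is case-insensitive with optional spaces around "=".
import re

SETTINGS = ("SQLNET.EXPIRE_TIME", "TCP.VALIDNODE_CHECKING")

def comment_out(sqlnet_text: str) -> str:
    out = []
    for line in sqlnet_text.splitlines():
        stripped = line.strip()
        if any(re.match(rf"{re.escape(s)}\s*=", stripped, re.I) for s in SETTINGS):
            out.append("# " + line)  # Oracle net files use "#" for comments
        else:
            out.append(line)
    return "\n".join(out)
```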
Business scenario
ANYLife New York Life Insurance experienced remarkable business growth
during the last few years. As a result, several new branches were opened in
different countries. To support this growth and be ready for further expansion,
ANYLife decided to improve the efficiency of its internal processes (HR, Payroll,
Finance, and Procurement) and to optimize its overhead costs. In particular,
ANYLife decided to improve the management of its financial processes, which
show recurring peaks at month-end and quarter-end close. As a solution, ANYLife
chose to invest in Oracle E-Business Suite, a comprehensive suite of integrated,
global business applications that enable organizations to make better decisions,
reduce costs, and increase performance. Today, however, as the operational
environment becomes more complex, the built-in scheduling capabilities of Oracle
E-Business Suite are no longer sufficient and often require manual intervention.
This is critical to ANYLife, especially during the financial close process.
Moreover, as the business grows, ANYLife needs to integrate Oracle E-Business
Suite with other external business applications.
The solution is to integrate IBM Workload Scheduler with Oracle E-Business Suite
to automate, monitor, and control Oracle E-Business Suite applications, and to
reduce complexity, manual tasks, and batch execution time. Thanks to the
integration among different systems, and by conditioning the execution of Oracle
E-Business Suite applications and other external business applications, ANYLife
can have the entire financial close process controlled by IBM Workload Scheduler.
A single point of control during the financial close process reduces complexity and
allows for an error-free execution of the batch jobs, and the financial reports are
always available without delay. ANYLife has been able to reduce administration
overhead, improve the operations efficiency and increase the reliability of the
enterprise. The company optimized business results while achieving a high return
on investments.
z/OS Define an IBM Workload Scheduler job to run an Oracle E-Business Suite
job by using the Dynamic Workload Console connected to a z/OS engine.
See Chapter 2, “Defining a job,” on page 7 for more information about creating
jobs using the various interfaces available. Some samples of using one or more of
these interfaces to create an Oracle E-Business Suite job definition are contained in
the sections that follow.
Table 23 describes the required and optional attributes for Oracle E-Business Suite
jobs, together with a description of each attribute.
Table 23. Required and optional attributes for the job definition of Oracle E-Business Suite jobs

Hostname
    The Oracle E-Business Suite server address. Required if not specified
    in the Oracle E-Business Suite job plug-in properties file.
Port
    The port number of the Oracle E-Business Suite server. Required.
SID
    The Oracle E-Business Suite server identifier. Required if not
    specified in the Oracle E-Business Suite job plug-in properties file.
Username
    The name of the user authorized to access the Oracle E-Business Suite
    server. Required if not specified in the Oracle E-Business Suite job
    plug-in properties file.
Password
    The password associated with the user authorized to access the Oracle
    E-Business Suite server. Required if not specified in the Oracle
    E-Business Suite job plug-in properties file.
Driver Classpath
    The path of the JAR file that contains the drivers required to connect
    to Oracle E-Business Suite. Required if not specified in the Oracle
    E-Business Suite job plug-in properties file.
Application User
    The name of a valid Oracle E-Business Suite application user. Required.
Application Identifier
    The name of the Oracle E-Business Suite application module used to
    sign on to the Oracle E-Business Suite application. Required.
Responsibility Identifier
    A valid responsibility for the Oracle E-Business Suite application
    module. Required.
Concurrent Program
    The Oracle E-Business Suite job name. Required.
Application Name
    The name of the Oracle E-Business Suite application that registered
    the job. Required.
Organization Name
    The name of the Oracle E-Business Suite operating unit (ORG_ID).
    Required.
Parameters
    The Oracle E-Business Suite application parameters, if any. Optional.
Printer Name
    The name of the printer. If the specified printer does not exist, the
    job execution does not fail. Optional.
Print Copies
    The number of copies that you want to print. Optional.
Users to notify
    The list of users that must be notified by the Oracle E-Business Suite
    application. Optional. Input format is:
To define a job that runs an Oracle E-Business Suite job by using the Dynamic
Workload Console, complete the following procedure. See Chapter 2, “Defining a
job,” on page 7 for information about defining jobs with other available interfaces.
The job properties and their valid values are detailed in the
context-sensitive help in the Dynamic Workload Console; click the question
mark (?) icon in the top-right corner of the properties pane.
Procedure
1. In the console navigation tree, expand Administration > Workload Design and
click Manage Workload Definitions.
2. Specify an engine name, either distributed or z/OS. The Workload Designer is
displayed.
3. In the Working List panel, select New > Job Definition > ERP > Oracle
E-Business. In a z/OS environment, select New > ERP > Oracle E-Business.
The properties of the job are displayed in the right-hand panel for editing.
4. In the properties panel, specify the attributes for the job definition that you are
creating. You can find detailed information about all the attributes in the help
available with the panel. In particular:
In the General panel:
Distributed
Distributed Environment:
Enter the name of the IBM Workload Scheduler job that runs
the Oracle E-Business Suite application.
Enter the name of the workstation where the IBM Workload
Scheduler job is to run.
z/OS
z/OS Environment:
Enter the name of the partitioned data set where you want to
create the JCL.
Enter the name of the JCL that you want to create in the
partitioned data set.
Enter the name of the workstation where the IBM Workload
Scheduler job is to run.
In the Oracle E-Business Suite panel:
In the Server section:
Enter the credentials that are related to the Oracle E-Business
Suite server. If you do not want to specify them here, you can
define them in the OracleEBusinessJobExecutor.properties
file. In this case IBM Workload Scheduler reads them from the
.properties file when you submit the job. Use the Test
Connection button to verify the connection to the Oracle
E-Business Suite server using the credentials specified.
You must specify the credentials either using the Dynamic
Workload Console or the .properties file, otherwise you
receive an error message. See “Customizing IBM Workload
Scheduler to run Oracle E-Business Suite jobs” on page 116.
For a description of the parameters in the Server section, see
“Job definition for Oracle E-Business Suite jobs” on page 112 or
refer to the help available with the panel.
In the Application section
Enter the Oracle E-Business Suite application user and the
application job attributes. Pick lists are available for easy
selection. IBM Workload Scheduler retrieves this information
directly from the Oracle E-Business Suite Server. Click Test
Application to verify that the information you entered for the
Application User, the Application Identifier and the
Responsibility Identifier are correct.
For a description of the parameters in the Application section,
see “Job definition for Oracle E-Business Suite jobs” on page
112 or refer to the help available with the panel.
5. Click Save to save the job definition in the database.
See Chapter 3, “Scheduling and submitting jobs and job streams,” on page 9 for
information about how to schedule and submit jobs and job streams using the
various interfaces available with the product.
After you define an IBM Workload Scheduler for Oracle E-Business Suite job, you
add it to a job stream with all the necessary scheduling arguments and submit it.
After submission, when the job is running and is reported in EXEC status in IBM
Workload Scheduler, you can stop it if necessary, by using the kill command. The
kill command stops the job in IBM Workload Scheduler, but not in the Oracle
E-Business Suite environment.
If the IBM Workload Scheduler agent becomes unavailable when you submit the
IBM Workload Scheduler for Oracle E-Business Suite job or while the job is
running, IBM Workload Scheduler collects the job log when the agent restarts and
assigns the Error or ABEND status to the job.
For information about monitoring jobs and job streams, see Chapter 4, “Monitoring
IBM Workload Scheduler jobs,” on page 11.
For information about analyzing the job log, see Chapter 5, “Analyzing the job
log,” on page 13.
where agent_install_dir is the path where you installed the IBM Workload
Scheduler dynamic agent or the IBM Workload Scheduler for z/OS agent.
The properties file assigns default values to the Oracle E-Business Suite job
properties. You can override the values set by the installation at a later time.
The properties file is automatically generated either when you perform a "Test
Connection" from the Dynamic Workload Console in the job definition panels, or
when you submit the job to run the first time. Once the file has been created, you
can customize it. This is especially useful when you need to schedule several jobs
of the same type. You can specify the values in the properties file and avoid
having to provide information such as credentials and other information, for each
job. You can override the values in the properties files by defining different values
at job definition time.
You can define the properties contained in the .properties file also at job
definition time. In this case, IBM Workload Scheduler uses the values that you
specify at job definition time for running the job.
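The precedence rule described above (values defined at job definition time override values from the .properties file) can be sketched as follows. This is an illustrative helper, not product code.

```python
# Sketch of option precedence: start from the .properties file defaults,
# then let non-empty job definition values win.
def effective_options(properties_file: dict, job_definition: dict) -> dict:
    merged = dict(properties_file)  # defaults from the .properties file
    merged.update({k: v for k, v in job_definition.items() if v})
    return merged
```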
Table 24. Properties to run Oracle E-Business Suite jobs

sid
    The Oracle E-Business Suite server identifier.
interval
    The monitoring frequency. It determines how often the job is
    monitored. Default value is 10 seconds. Required.
username
    The name of the user authorized to access the Oracle E-Business Suite
    server.
printerCopies
    The number of copies that you want to print.
driver_class_path
    The path of the JAR file that contains the drivers required to connect
    to the Oracle E-Business Suite server.
timeout
    The monitoring time. It determines for how long the job is monitored.
    At the end of the timeout interval, the job fails. Default value is
    7200 seconds. Required.
hostname
    The Oracle E-Business Suite server address.
printerName
    The name of the default printer.
password
    The password that is associated with the user authorized to access the
    Oracle E-Business Suite server.
parameters
    The Oracle E-Business Suite application parameters.
116 IBM Workload Automation: Scheduling Applications with IBM Workload Automation
Example
This example shows the default .properties file for Oracle E-Business Suite jobs.
sid=
interval=10
username=
printerCopies=
driver_class_path=
timeout=7200
hostname=
printerName=
password=
parameters=
Table 25 shows how you can map the IBM Workload Scheduler job status in the
job log output to the status of the Oracle E-Business Suite application.
Table 25. Mapping between IBM Workload Scheduler and Oracle E-Business Suite application statuses

Oracle E-Business Suite    Dynamic Workload Console     IBM Workload Scheduler
application statuses       and Application Lab statuses statuses
Request Failure or
Request not Found                                       UT (unsupported task)
Inactive                   Blocked                      SUSP
Pending                    Waiting                      WAIT
Pending Normal             Waiting                      ADD
Running Normal             Running                      EXEC
Completed Normal           Successful                   SUCC
Completed Warning          Successful                   SUCC
Completed Error            Error                        ABEND
Completed Terminated       Error                        ABEND
Completed Canceled         Error                        ABEND
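For scripting around these statuses, the mapping in Table 25 can be captured as a small lookup table. This is an illustrative sketch, not a product API; the "Request Failure or Request not Found" row, which has no console status, is omitted here.

```python
# Hypothetical lookup mirroring Table 25 (illustrative names only).
EBS_STATUS_MAP = {
    # EBS status: (Dynamic Workload Console / Application Lab, scheduler)
    "Inactive":             ("Blocked",    "SUSP"),
    "Pending":              ("Waiting",    "WAIT"),
    "Pending Normal":       ("Waiting",    "ADD"),
    "Running Normal":       ("Running",    "EXEC"),
    "Completed Normal":     ("Successful", "SUCC"),
    "Completed Warning":    ("Successful", "SUCC"),
    "Completed Error":      ("Error",      "ABEND"),
    "Completed Terminated": ("Error",      "ABEND"),
    "Completed Canceled":   ("Error",      "ABEND"),
}
```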
For more information about job management, see IBM Workload Scheduler User's
Guide and Reference.
The output of an IBM Workload Scheduler for Oracle E-Business Suite job log is
composed of two parts:
v The first part is the job definition in XML format.
v The second part is the output of the job.
Example
Extra Information
Development Phase=COMPLETE
Development Status=NORMAL
Job Id=7449273
Message=Normal completion
Phase=Completed
Status=Normal
|
| Prerequisites
| You gain greater control of your Salesforce jobs with both calendar-based and
| event-based workload automation, as well as a single point of control to
| handle exceptions and automate recovery processes.
| For information about the supported versions of the job plug-ins, generate a
| dynamic Data Integration report from the IBM Software Product Compatibility
| Reports web site, and select the Supported Software tab: Data Integration.
| Before you start to create a Salesforce job definition with IBM Workload Scheduler,
| consider the following limitations:
| v The batch Apex classes (and related methods) that you want to run with the
| Salesforce plug-in, must be defined with global access level, in order to make
| them accessible by all Apex everywhere (the public access level is not enough).
| v At job definition time, only Salesforce batch Apex classes can be run. If you
| select a non-batch Apex class, the job fails.
| To create a Salesforce job definition, you must complete the prerequisite steps that
| are listed in the following procedure.
| 1. Register on Salesforce Server and ask for user ID and password.
| 2. Log in to Salesforce Server.
| 3. Create the following Apex classes that are needed for the communication
| between IBM Workload Scheduler and Salesforce Server. The IBM Workload
| Scheduler Apex classes must be defined outside any package.
| Class TWSListApexClass
| @RestResource(urlMapping='/TWSListApexClass/*')
| global with sharing class TWSListApexClass{
| //This Apex class exposes the TWSListApexClass REST service
| //which returns a list of all known Batchable Apex classes.
| @HttpGet
| global static List<ApexClass> doGet() {
| RestRequest req = RestContext.request;
| RestResponse res = RestContext.response;
| String fullName='';
| List<ApexClass> tempList =
| [SELECT NamespacePrefix,Name FROM ApexClass ORDER BY Name];
| List<ApexClass> result = new List<ApexClass>();
| for (ApexClass a: tempList){
| if (a.NamespacePrefix==null || a.NamespacePrefix.equals('')){
| fullName=a.Name;
| } else {
| fullName=a.NamespacePrefix+'.'+a.Name;
| }
| System.debug(LoggingLevel.Info, 'ApexClass: '+fullName);
| result.add(a);
| }
| return result;
| }
| }
| Class TWSSubmitApexJob
| @RestResource(urlMapping='/TWSSubmitApexJob/*')
| global with sharing class TWSSubmitApexJob{
| //This Apex class exposes the TWSSubmitApexJob REST service
| //which submits an Apex class to the Salesforce server.
| @HttpGet
| global static ID doGet() {
| RestRequest req = RestContext.request;
| RestResponse res = RestContext.response;
| String apexClass = req.params.get('className');
| System.debug(LoggingLevel.Info, 'Execute Batch:'+apexClass);
| Type t = Type.forName(apexClass);
| if (t == null){
| throw new TWSException (apexClass + ' not found');
| }
| Object s = t.newInstance();
| ID batchprocessid =
| Database.executeBatch((Database.Batchable<sObject>)s);
| System.debug(LoggingLevel.Info, 'Job ID: '+batchprocessid);
| return batchprocessid;
| }
| global class TWSException extends Exception{}
| }
| Class TWSMonitorApexJob
| @RestResource(urlMapping='/TWSMonitorApexJob/*')
| global with sharing class TWSMonitorApexJob{
| //This Apex class exposes the TWSMonitorApexJob REST service
| //which will monitor the progress of the backend Apex job.
| @HttpGet
| global static AsyncApexJob doGet() {
| RestRequest req = RestContext.request;
| RestResponse res = RestContext.response;
| ID i = (ID) req.params.get('jobID');
| AsyncApexJob a = [SELECT Id, Status, ExtendedStatus, NumberOfErrors,
| JobItemsProcessed, TotalJobItems FROM AsyncApexJob WHERE Id = :i];
| return a;
| }
| }
| Class TWSAbortApexJob
| @RestResource(urlMapping='/TWSAbortApexJob/*')
| global with sharing class TWSAbortApexJob{
| //This Apex class exposes the TWSAbortApexJob REST service
| //which will abort the Apex job on the Salesforce server.
| @HttpGet
| global static void doGet() {
| RestRequest req = RestContext.request;
| RestResponse res = RestContext.response;
| String jobID = req.params.get('jobID');
| System.abortJob(jobID);
| }
| }
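Once the classes above are deployed, IBM Workload Scheduler calls them over REST. As a rough illustration of how such a service is reached (Salesforce exposes Apex REST classes under /services/apexrest/<urlMapping>), the following sketch builds the request for TWSSubmitApexJob. The instance URL and token are placeholders, and this is not how the product itself is configured.

```python
# Sketch: build the URL and headers for calling the TWSSubmitApexJob
# Apex REST service defined above. Placeholders only; a real client
# would obtain an OAuth session token first.
from urllib.parse import urlencode

def submit_request(instance_url: str, apex_class: str) -> tuple:
    url = (f"{instance_url}/services/apexrest/TWSSubmitApexJob?"
           + urlencode({"className": apex_class}))
    headers = {"Authorization": "Bearer <session-token>"}  # placeholder
    return url, headers
```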
| where
| ProxyServer
| The IP address or the server name for the proxy server. Specify this
| property if you connect to the Salesforce server through a proxy server.
| ProxyServerPort
| The listening port of the proxy server.
| pollingPeriod
| The monitoring frequency. It determines how often the job is monitored
| during its execution. It is expressed in seconds.
| pollingTimeout
| The monitoring time. It determines for how long the job is monitored
| during its execution. At the end of the timeout interval, the job fails. It
| is expressed in seconds.
| The values that you specify in the properties file are the values that are used as
| default at job definition time.
| Business Scenario
| WWMail4U.Inc operates in a very competitive market, and to maintain a leading
| role, it recently implemented cloud solutions to provide business applications as a
| service to its customers. WWMail4U.Inc's top priority is to have its SAP source
| servers aligned with the Salesforce server within the cloud environment. The
| company's SAP workload is already controlled by IBM Workload Scheduler and
| the plan is to extend this control to all its batch business processes.
| The job properties and their valid values are detailed in the
| context-sensitive help in the Dynamic Workload Console; click the question
| mark (?) icon in the top-right corner of the properties pane.
| For more information about creating jobs using the various supported product
| interfaces, see Chapter 2, “Defining a job,” on page 7.
| The following table lists the required and optional attributes for Salesforce jobs:
| Table 26. Required and optional attributes for the definition of a Salesforce job
|
| Server
|     The Salesforce server that Salesforce provides you after your
|     registration. Required.
| User name
|     The name of the user authorized to access the Salesforce server.
|     Required.
| Password
|     The password that is associated with the user that is authorized to
|     access the Salesforce server. Required.
| APEX Class
|     The Apex batch class that is supported for IBM Workload Scheduler.
|     You can run only Salesforce Apex batch classes. If you specify a
|     non-batch class, the job fails. Required.
|
| You can submit jobs by using the Dynamic Workload Console, Application Lab or
| the conman command line. See Chapter 3, “Scheduling and submitting jobs and job
| streams,” on page 9 for information about how to schedule and submit jobs and
| job streams using the various interfaces.
| After submission, when the job is running and is reported in EXEC status in IBM
| Workload Scheduler, you can stop it if necessary, by using the kill command from
| the Dynamic Workload Console. However, this action is effective only for the
| Request/Response scenario, therefore the IBM Workload Scheduler processes do
| not wait to receive a response from the Salesforce job.
| If the IBM Workload Scheduler agent stops when you submit the IBM Workload
| Scheduler Salesforce job or while the job is running, as soon as the agent restarts in
| the Request/Response scenario, IBM Workload Scheduler begins monitoring the
| job from where it stopped and waits for the Response phase.
| Job properties
| While the job is running, you can track the status of the job and analyze the
| properties of the job. In particular, in the Extra Information section, if the job
| contains variables, you can verify the value passed to the variable from the remote
| system. Some job streams use the variable passing feature, for example, the value
| of a variable specified in job 1, contained in job stream A, is required by job 2 in
| order to run in the same job stream.
| For more information about passing variables between jobs, see the related section
| in User's Guide and Reference.
| For information about how to display the job properties from the various
| supported interfaces, see Chapter 5, “Analyzing the job log,” on page 13.
| For example, from the conman command line, you can see the job properties by
| running:
| conman sj <Salesforce_job_name>;props
| For a Salesforce job in the Extra Information section of the output command, you
| see the following properties:
| Extra Information
| Apex batch class = TWSBatchTest
| Apex job ID = 7072000000eLnYOAA0
| Job item processed = 1
| Number of errors= 0
| Server address = regionA.salesforce.com
| Batch status = Completed
| Total Job items = 1
| User name = [email protected]
| where
| Apex batch class
| The APEX batch class that is supported for IBM Workload Scheduler.
| Apex job ID
| The Salesforce job ID.
| Job item processed
| The number of the processed job items.
| Number of errors
| The number of the errors.
| Server address
| The Salesforce server that you specify in the Server field.
| Batch status
| The status of the batch job.
| Total Job items
| The total number of processed job items.
| User name
| The name of the user authorized to access the Salesforce server that you
| specify in the User name field.
| You can export the Salesforce job properties that you can see in the Extra
| Information section, to a successive job in the same job stream instance. For more
| information about the list of job properties that you can export, see the table about
| properties for Salesforce jobs in User's Guide and Reference.
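The Extra Information section shown above is plain key/value text, so a script that post-processes conman output can parse it easily. This is an illustrative sketch, not a supplied tool.

```python
# Sketch: parse "Extra Information" key = value lines (as printed by
# conman sj <job>;props) into a dictionary.
def parse_extra_information(text: str) -> dict:
    props = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props
```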
| The properties file is automatically generated either when you perform a "Test
| Connection" from the Dynamic Workload Console in the job definition panels, or
| when you submit the job to run the first time. Once the file has been created, you
| can customize it. This is especially useful when you need to schedule several jobs
| of the same type. You can specify the values in the properties file and avoid
| having to provide information such as credentials and other information, for each
| job. You can override the values in the properties files by defining different values
| at job definition time.
| The following example shows the job definition for a Salesforce job:
| NEWYORK-01#JOB-SF-0000
| TASK
| <?xml version="1.0" encoding="UTF-8"?>
| <jsdl:jobDefinition
|   xmlns:jsdl="http://www.abc.com/xmlns/prod/scheduling/1.0/jsdl"
|   xmlns:jsdlsalesforce="http://www.abc.com/xmlns/prod/scheduling/1.0/jsdlsalesforce"
|   name="SALESFORCE">
|   <jsdl:application name="salesforce">
|     <jsdlsalesforce:salesforce>
|       <jsdlsalesforce:SalesforceParameters>
|         <jsdlsalesforce:SalesforceParms>
|           <jsdlsalesforce:ServerConnection>
|             <jsdlsalesforce:Server>regionA.salesforce.com</jsdlsalesforce:Server>
|             <jsdlsalesforce:UserID>[email protected]</jsdlsalesforce:UserID>
|             <jsdlsalesforce:password>{aes}+D2UAAxhxtYf8ENfb7LNr0DLRt0hwKPHlDiA2/PO1e4=</jsdlsalesforce:password>
|           </jsdlsalesforce:ServerConnection>
|           <jsdlsalesforce:APEXJobDetails>
|             <jsdlsalesforce:APEXClass>TWSBatchTest</jsdlsalesforce:APEXClass>
|           </jsdlsalesforce:APEXJobDetails>
|         </jsdlsalesforce:SalesforceParms>
|       </jsdlsalesforce:SalesforceParameters>
|     </jsdlsalesforce:salesforce>
|   </jsdl:application>
| </jsdl:jobDefinition>
| RECOVERY STOP
| For information about how to display the job log from the various supported
| interfaces, see Chapter 5, “Analyzing the job log,” on page 13.
| For example, you can see the job log content by running the command conman sj
| <Salesforce_job_name>;stdlist, where <Salesforce_job_name> is the Salesforce job
| name.
| See also
| From the Dynamic Workload Console you can perform the same task as described
| in the Dynamic Workload Console User's Guide, section about Creating job definitions.
| For more information about how to create and edit scheduling objects, see
| the Dynamic Workload Console User's Guide, section about Designing your Workload.
Part 3. Access methods
Access methods are used to extend the job scheduling functions of IBM Workload
Scheduler to other systems and applications. They run on extended agents,
dynamic agents, and IBM Workload Scheduler for z/OS agents. They enable
communication between external systems (SAP R/3, z/OS) and IBM Workload
Scheduler and launch jobs and return the status of jobs.
For information about the supported versions of the plug-ins and access methods,
run the Data Integration report and select the Supported Software tab.
Chapter 22. Installing and configuring the access methods
The access methods documented in this guide are packaged with IBM Workload
Scheduler and are automatically installed with the product on dynamic and
fault-tolerant agents.
To use any of the access methods on supported agents, you create an options file,
which configures the access method and defines the workstation and the jobs that
extend the scheduling capability to external systems or applications.
All access methods use two types of options files: a global options file and one or
more local options files. The names of the local options files are generically
referred to as XANAME_<accessmethod>.opts on extended agents and
DYNAMIC_AGENT_FILE_<accessmethod>.opts on dynamic agents. The file names
specified for the local options files for both types of agents must respect the
following rules:
v Both XANAME and DYNAMIC_AGENT_FILE in the file name must be upper case
alphanumeric characters. For example, if the installation of the r3batch access
method includes two extended agent workstations, CPU1 and CPU2, the names of
the local options files are CPU1_r3batch.opts and CPU2_r3batch.opts.
v Double-byte character set (DBCS), single-byte character set (SBCS), and
bidirectional text are not supported. For information about acceptable values for
the extended agent workstation name, see Table 27 on page 135.
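The naming rule above can be sketched as a small helper that composes a local options file name and rejects names that are not uppercase alphanumeric. Illustrative only; the product performs its own validation.

```python
# Sketch of the local options file naming rule:
# <WORKSTATION>_<accessmethod>.opts, uppercase alphanumeric name.
def local_options_file(workstation: str, access_method: str) -> str:
    if not (workstation.isalnum() and workstation == workstation.upper()):
        raise ValueError("workstation name must be uppercase alphanumeric")
    return f"{workstation}_{access_method}.opts"
```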
Dynamic agents and IBM Workload Scheduler for z/OS agents
Global options file
A common configuration file created by default for each access
method installed, whose settings apply to all the dynamic agent
workstations defined for that method. When the global options file
is created, it contains only the LJuser option, which represents the
operating system user ID used to launch the access method. You
can customize the global options file by adding the options
appropriate for the access method.
The name of the global options file is accessmethod.opts, which,
depending on the access method, corresponds to:
For PeopleSoft
psagent.opts
For SAP R/3
r3batch.opts
For z/OS
mvsjes.opts, mvsopc.opts
Local options file
One or more configuration files that are specific to each access
method. The name of this file is optionsfile_accessmethod.opts,
and it is saved in the <TWA_DIR>/TWS/methods path.
In a distributed environment
v If you are defining a job to run the access method by
using the Dynamic Workload Console, it is the options
file you specify in the New > Job definition > ERP >
Access Method XA Task tab.
v If you are defining the SAP R/3 job to run the access
method by using the Dynamic Workload Console, it is
the options file you specify in the New > Job definition
> ERP > SAP Job on Dynamic Workstations XA Task
tab.
v If you are defining the job to run the access method by
using composer, it is the options file you specify in the
target attribute of the job definition.
If you do not create a local options file, the global options
file is used.
In a z/OS environment
v If you are defining a job to run the access method by
using the Dynamic Workload Console, it is the options
file you specify in the New > ERP > Access Method XA
Task tab.
v If you are defining the SAP R/3 job to run the access
method by using the Dynamic Workload Console, it is
the options file you specify in the New > ERP > SAP
XA Task tab.
v If you are defining the job to run the access method by
using the JOBREC statement, it is the name of the
workstation where the access method runs.
If you do not create a local options file, the global options
file is used.
If you do not specify an option in the
optionsfile_accessmethod.opts file, then while the access method is
running, the product uses the value specified for that option in
the global options file. If you do not specify the option either in the
optionsfile_accessmethod.opts file or in the global options file, the
product issues an error message.
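The fallback order just described (local options file first, then global options file, otherwise an error) can be sketched as follows. This is an illustrative helper, not product code.

```python
# Sketch of option resolution: local options win, the global options
# file is the fallback, and a missing option is an error condition.
def resolve_option(name, local_opts: dict, global_opts: dict):
    if name in local_opts:
        return local_opts[name]
    if name in global_opts:
        return global_opts[name]
    raise KeyError(f"option '{name}' not set in local or global options file")
```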
If the SAP R/3 access method is installed for AGENT1 workstation,
but you have two external SAP systems on which to schedule jobs,
then in the <TWA_DIR>/TWS/methods directory, you create the
following options files:
v SAP1_AGENT1_r3batch.opts
v SAP2_AGENT1_r3batch.opts
Each file contains the options specific to each external SAP system,
for example, the connection information.
For pools and dynamic pools containing n agents, you must create
an options file for the dynamic pool and copy it to the
TWA_DIR/TWS/methods directory of each agent in the pool, so that all
members of the pool have a local options file with the same name.
Then you must create another options file for the specific agent in
the same directory. For example, if the SAP R/3 access method is installed
for AGENT1 and AGENT2 which belong to the dynamic pool DYN_POOL,
create in the <TWA_DIR>/TWS/methods directory of each agent the
following options files:
AGENT1
v FILEOPTS_AGENT1_r3batch.opts
v FILEOPTS_DYN_POOL_r3batch.opts
AGENT2
v FILEOPTS_AGENT2_r3batch.opts
v FILEOPTS_DYN_POOL_r3batch.opts
Extended agents
All access methods use two types of options file:
Global options file
A common configuration file created by default for each access
method installed, whose settings apply to all the extended agent
workstations defined for that method. When the global options file
is created, it contains only the LJuser option, which represents the
operating system user ID used to launch the access method. You
can customize the global options file by adding the options
appropriate to the access method.
The name of the global options file is accessmethod.opts, which,
depending on your operating system, corresponds to:
For PeopleSoft
psagent.opts
For SAP R/3
r3batch.opts
For z/OS
mvsjes.opts, mvsopc.opts
For custom access methods
netmth.opts
Local options file
A configuration file that is specific to each extended agent
workstation within a particular installation of an access method.
The name of this file is XANAME_accessmethod.opts, where:
XANAME
Is the name of the extended agent workstation.
accessmethod
Is the name of the access method.
The options files must be located in the <TWA_DIR>/TWS/methods directory. They are
read when the supported agent is started. Options are specific to each access
method. For details about how to configure each access method, see the following
sections:
PeopleSoft
“Configuring the PeopleSoft access method” on page 144.
SAP R/3
“Configuring the SAP R/3 access method” on page 207.
z/OS “Configuring the z/OS access method” on page 166.
For example, you might want to define the same value for the LJuser option and a
different value for the retrieve_joblog option. To do this, you define the LJuser
option value in the r3batch.opts file. Then you define a different value for the
retrieve_joblog option in each local options file. This results in the following
actions when launching the SAP R/3 job:
v The value for the LJuser option is extracted from the r3batch.opts file.
v The value for the retrieve_joblog option is taken from each local options file.
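The resulting resolution order (local value if set, otherwise the inherited global value, otherwise the method default) can be sketched in shell. The values shown are assumptions chosen to match the example above, not product defaults.

```shell
# Illustrative resolution of one option, mirroring the order described
# above: local value if set, else the global (inherited) value, else the
# method default.
global_value=ON        # retrieve_joblog in r3batch.opts
local_value=OFF        # retrieve_joblog in the local XANAME_r3batch.opts
default_value=ON       # method default, used when neither file sets it

actual=${local_value:-${global_value:-$default_value}}
echo "retrieve_joblog=$actual"   # the local value wins
```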
For SAP jobs, the Options Editor is available on the General tab.
Select it to edit an existing options file or to create a new one. The Options
Editor panel displays all the options related to the r3batch access method.
For PeopleSoft jobs, the Options Editor is available on the XA Task tab of the
Access Method job type.
You can use the Option Editor to perform the following tasks:
v Modify a local or global options file, see “Modifying local or global options files”
on page 133.
v Create a local options file, see “Creating local options files” on page 133.
To start the Option Editor, go to the TWS_home/methods/opted directory of your IBM
Workload Scheduler installation and, depending on your operating system, run the
following command:
On Windows operating systems
opted.bat
On UNIX operating systems
opted.sh
Note: To use the Option Editor on a UNIX workstation, you must have a
graphical environment.
The Option Editor automatically loads all the existing global and local options files
grouped by access method.
The Option Editor provides three possible views of an options file. To change the
view in which the options file is displayed select View > Display As. The views
are:
Simple
Displays the options as a sequence of tabs that you can select to edit one
option at a time. To view or edit an option, select the tab with the option
name to display the field for the value. This is the easiest way to edit
options, because you only see the actual value that is used in the file.
Inherited or default values are not displayed.
Table Provides a tabular view of all the options for a selected file. This is the
default view. For each option the following columns are displayed:
Name The name of the option.
Value The value specified in the file.
Default Value
The value used by the method if not specified in the options file or
inherited.
Inherited Value
The value obtained from the global options file if inheritance is
supported by the access method. For a detailed explanation, see
“Option value inheritance” on page 130.
Actual Value
The value used at run time. The values are used in the order:
value, if available; inherited value, if supported; default value.
Text Displays an options file in the typical format, showing only the options
that have a value. This view is generally used for preview purposes,
because files are shown in read-only form. Password fields are shown
encrypted.
Mandatory options are identified by a yellow background and are marked with an
asterisk (*). Options that are not correctly entered are shown with a red
background. The Option Editor performs only syntactic checks.
To modify local or global options files, using the Option Editor, perform the
following steps:
1. From the File Options Tree area, select the options file that you want to
modify. The options are displayed in the File Options View area.
2. Select the option that you want to modify and modify its value.
3. Save and close the options file.
You create local options files when you define a new supported agent workstation.
For the PeopleSoft and SAP access methods, you can create a local options file directly
from the Workload Designer in the Dynamic Workload Console when defining the
job definition. Alternatively, for these access methods and the z/OS access method,
you can create a local options file using the Option Editor.
To create a local options file, using the Option Editor, perform the following steps:
1. Click File > New in the menu bar. The New Option File window is displayed.
2. In the Insert XA CPU Name field:
Extended agents
Enter the name of the extended agent workstation XANAME for which
you want to create an options file.
Dynamic agents and IBM Workload Scheduler for z/OS agents
How to create a workstation definition for supported agents using the Dynamic
Workload Console.
Dynamic agents and IBM Workload Scheduler for z/OS Agents
The agents are automatically registered to the IBM Workload Scheduler
network. For further information about the dynamic agents registration,
see IBM Workload Scheduler User's Guide and Reference.
Extended agents
To define an extended agent workstation using the Dynamic Workload
Console, perform the following steps:
1. From the Dynamic Workload Console portfolio, click Administration >
Workload Environment Design > Create Workstations.
2. Select an engine, distributed or z/OS, from the list and click Create
Workstation.
3. In the Workstations properties panel, specify the attributes for the
extended agent workstation you are creating. For all the details about
available fields and options, see the online help by clicking the "?" in
the top-right corner. In the workstation definition, specify the access
method and other properties, as shown in Table 27 on page 135. For
further information about the workstation definition properties, see
IBM Workload Scheduler User's Guide and Reference
4. To assign the workstation to an existing domain or to create a new
domain, click Assign to Domain.
5. Click Save.
The following table shows how to complete some specific fields of the
workstation properties panel for extended agents.
Table 27. How to complete the extended agents definition. This table shows how to
complete the extended agents definition by access method.
Name
   The name for the extended agent workstation. For all access methods (except
   for z/OS, which always has a limit of 8 characters), the name must start
   with a letter and can contain alphanumeric characters, dashes, and
   underscores. The maximum length is 16 characters. Workstation names must be
   unique and cannot be the same as workstation class and domain names.
   Double-byte character set (DBCS), single-byte character set (SBCS), and
   bidirectional text are not supported. If a workstation name contains these
   characters and, as a result, the options file name contains the same name,
   the workstation cannot be validated by the SAP system.
   For all the access methods, this name must be consistent with the name of
   the options file associated with the extended agent workstation. That is, if
   the options file name is MYXAGENT_accessmethod.opts, then MYXAGENT and Name
   must be the same.
Node Name
   PeopleSoft and SAP R/3: null.
   z/OS: The node name or IP address of the z/OS system. Fully qualified
   domain names are accepted.
TCP Port
   PeopleSoft and SAP R/3: Any number other than 0.
   z/OS: The TCP/IP port number of the z/OS gateway on the z/OS system. Enter
   the same value as the SYSTSIN variable PORT.
Access Method
   PeopleSoft: psagent.
   z/OS: Depending on your job scheduling interface, one of the following:
      mvsjes To launch and monitor z/OS jobs using JES2 or JES3.
      mvsopc To launch and monitor z/OS jobs using IBM Workload Scheduler
             for z/OS.
   SAP R/3: r3batch.
   Note: On UNIX operating systems, the name of the access method is case
   sensitive and must be lowercase.
For details about defining workstations with composer, see the IBM Workload
Scheduler User's Guide and Reference.
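As a sketch only, a composer definition for an SAP R/3 extended agent like the one in the examples above might look as follows. The node, port, and host values are placeholders, and the exact keyword layout should be verified against the User's Guide and Reference.

```
CPUNAME R3XA
  DESCRIPTION "SAP R/3 extended agent"
  OS OTHER
  NODE NODE1 TCPADDR 31111
  FOR MAESTRO HOST TWSA ACCESS "r3batch"
    TYPE X-AGENT
END
```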
Creating the CPUREC statement for extended agents
This section is valid only for extended agents. Create the CPUREC statement for the
workstation in the TOPOLOGY initialization statement. The TOPOLOGY initialization
statement is used to define parameters related to the topology of the connected
IBM Workload Scheduler network. Such a network topology statement is made up
of one or more (one for each domain) DOMREC statements that describe the topology
of the distributed network, and by several CPUREC statements, one for each
fault-tolerant workstation.
The following example shows a CPUREC statement for an SAP R/3 extended agent
workstation named R3XA. The extended agent is hosted by an IBM Workload
Scheduler agent named TWSA, which is also the domain manager of DOMAIN1.
****************TPLGINFO MEMBER ***************************
/*********************************************************************/
/* DOMREC: Domain definition */
/*********************************************************************/
DOMREC DOMAIN(DOMAIN1)
DOMMNGR(TWSA)
DOMPARENT(MASTERDM)
/*********************************************************************/
/* CPUREC: Extended agent workstation definition */
/*********************************************************************/
CPUREC CPUNAME(R3XA)
CPUOS(OTHER)
CPUNODE(NODE1)
CPUDOMAIN(DOMAIN1)
CPUHOST(TWSA)
CPUTYPE(XAGENT)
CPUACCESS(r3batch)
CPUUSER(TWSuser)
CPUTZ('Europe/Rome')
/*********************************************************************/
/* CPUREC: Domain manager workstation definition */
/*********************************************************************/
CPUREC CPUNAME(TWSA)
CPUNODE(NODE1)
CPUAUTOLINK(ON)
CPUDOMAIN(DOMAIN1)
CPUTYPE(FTA)
CPUUSER(TWSuser)
CPUTZ('Europe/Rome')
Note: The CPUREC statement does not exist for an IBM Workload Scheduler for
z/OS agent workstation.
For further information about CPUREC for extended agents, see Customization and
Tuning.
This section shows the ISPF definition for extended agents and agents with
z-centric capability.
Extended agents
In ISPF, define the workstation as computer automatic and then set the FT
Work station field to Y. The CPUREC statement with the three keywords
described in “Creating the CPUREC statement for extended agents” on
page 137 provides the extended agent specification.
Note: Make sure you write the CPUREC statement before making the ISPF
or Dynamic Workload Console definition, because they have no effect
without the CPUREC statement.
Enter the command R for resources A for availability or M for access method
above, or enter data below:
Options:
SPLITTABLE ===> N Interruption of operation allowed, Y or N
JOB SETUP ===> N Editing of JCL allowed, Y or N
STARTED TASK, STC ===> N Started task support, Y or N
WTO ===> N Automatic WTO, Y or N
DESTINATION ===> ________ Name of destination
Defaults:
TRANSPORT TIME ===> 00.00 Time from previous work station HH.MM
DURATION ===> ________ Duration for a normal operation HH.MM.SS
If you are scheduling in an end-to-end environment, to define the job, use either of
the following methods:
v Dynamic Workload Console.
v IBM Workload Scheduler for z/OS ISPF dialogs. You must also create a member
in the SCRIPTLIB with a JOBREC statement for the job.
Jobs defined for supported agents are added to job streams and scheduled in the
same way as any other job in IBM Workload Scheduler and IBM Workload
Scheduler for z/OS.
How to create a job definition for supported agents using the Dynamic Workload
Console.
Note: The access method for SAP R/3 provides supplementary features if you use
the alternative steps described in “Create an SAP job and associate it to an IBM
Workload Scheduler job” on page 224 or “Creating an SAP job from the Dynamic
Workload Console” on page 228. You can create native SAP R/3 Standard jobs on a
remote SAP system directly from the Dynamic Workload Console.
For more information about using the composer command line to define jobs, see
User's Guide and Reference.
For each job, create a member in the SCRIPTLIB of IBM Workload Scheduler
for z/OS with details about the job in a JOBREC statement. A SAPJOB
member was created for the job of the previous example. It contains a
JOBREC statement like this:
JOBREC
JOBCMD('/-job BAPRINT46B -user MAESTRO -i 14160001 -c C')
JOBUSR(twsila)
The string in JOBCMD is read and interpreted by the access method before
running the job. The job of this example, BAPRINT46B, was previously
defined on SAP R/3 and assigned an ID of 14160001, which was then
written manually in JOBCMD.
The following example is for a PeopleSoft job. The entire string that
follows the JOBCMD keyword must be enclosed within quotation marks ("),
because for PeopleSoft jobs single quotes are already used in the string.
JOBREC
JOBCMD("/ -process XRFWIN -type 'SQR Report' -runcontrol IWS")
JOBUSR(PsBuild)
IBM Workload Scheduler for z/OS agents
For information about the jobs definition for agent with z-centric
capabilities, see Scheduling End-to-end with z-centric capabilities.
Submitting jobs
About this task
To submit jobs on the supported agent workstation, perform the following steps:
1. Verify that the application system to which the job belongs and the related
database is up and running.
2. Launch the job. For details, see:
Dynamic agents
Chapter 23. Access method for PeopleSoft
What you need and what you can do with Access method for PeopleSoft.
Using Access method for PeopleSoft you can run and monitor PeopleSoft jobs from
the IBM Workload Scheduler environment. These jobs can be run as part of a
schedule or submitted for ad-hoc job processing. PeopleSoft extended agent or
dynamic agent jobs can have all of the same dependencies and recovery options as
other IBM Workload Scheduler jobs. PeopleSoft jobs must be defined in IBM
Workload Scheduler to be run and managed in the IBM Workload Scheduler
environment.
For information about the supported versions of the plug-ins and access methods,
run the Data Integration report and select the Supported Software tab.
Features
Look at the tasks you can perform by using Access method for PeopleSoft.
Using Access method for PeopleSoft, you can perform the following tasks:
v Use IBM Workload Scheduler standard job dependencies on PeopleSoft jobs.
v Schedule PeopleSoft jobs to run on specified days, times, and in a prescribed
order.
v Define inter-dependencies between PeopleSoft jobs and IBM Workload Scheduler
jobs that run on different applications such as SAP R/3 and Oracle E-Business
Suite.
v Define inter-dependencies between PeopleSoft jobs and jobs that run on different
operating systems.
Scheduling process for the PeopleSoft supported agents
IBM Workload Scheduler can launch and monitor jobs in the PeopleSoft process
scheduler using a PeopleSoft extended agent or dynamic agent workstation. The
PeopleSoft supported agent (extended agent or dynamic agent) is defined in a
standard IBM Workload Scheduler workstation definition. This definition is a
logical workstation name and specifies the access method as psagent. The access
method is used to communicate job requests to the PeopleSoft process scheduler.
To launch a PeopleSoft job, IBM Workload Scheduler runs the psagent method,
passing it information about the job. An options file provides the method with the
path, the executable, and other information about the PeopleSoft process scheduler
and application server used to launch the job. The supported agent can then
access the PeopleSoft process request table and make an entry in the table to
launch the job. Job progress and status information is written to the job’s standard
list file.
Security
Security for the PeopleSoft jobs is handled by standard IBM Workload Scheduler
security.
Dynamic agent
DYNAMIC_AGENT_FILE_psagent.opts where DYNAMIC_AGENT_FILE is any
text string. This string does not necessarily correspond to the name of the
dynamic agent workstation since the dynamic agent can have more than
one .opts file associated with it. For more information, see “Setting options for the
access methods” on page 127.
To edit both options files, you can use either the Option Editor available with this
product, or any other text editor. On dynamic workstations, you can edit the
options files from the job definition panels in the Dynamic Workload Console. For
details about how to create and edit the options files with the Option Editor, see
“Setting options for the access methods” on page 127. For examples of options files
for this access method, see “PeopleSoft options file example” on page 146.
Table 29 describes the options for the psagent access method. Option names are
case insensitive. Before you use a manually created options file, check that all the
option names are spelled correctly; otherwise, they are ignored.
Table 29. Psagent access method options
Option Description
CHECKINTERVAL (Optional) Specifies the frequency (in seconds) with which
the psagent monitors a submitted process up to completion.
The default is 120.
LJUSER (Optional) Specifies the ID of the IBM Workload Scheduler
user that runs the psagent to launch jobs (LJ tasks). This
user must be a valid IBM Workload Scheduler user on the
IBM Workload Scheduler hosting workstation.
PS_DISTSTATUS (Optional) Determines whether the distribution status of the
PeopleSoft request is taken into account when determining
the status of the IBM Workload Scheduler job. Values are 0
(not taken into account) or 1 (taken into account - the
default value).
PSFT_DOMAIN_PWD (Optional) Specifies the encrypted password (case-sensitive)
of the PeopleSoft domain used for the connection to the
PeopleSoft application server.
where:
server Specifies the host name or TCP/IP address of the
server
port Specifies the port number the server is listening on.
TWS_MAX_WAIT_TIME (Optional) Specifies the maximum time that the supported
agent waits (timeout) after a failed operation on the
PeopleSoft application server before retrying the operation.
The default is 10 seconds.
TWS_MIN_WAIT_TIME (Optional) Specifies the minimum time that the supported
agent waits (timeout) after a failed operation on the
PeopleSoft application server before retrying the operation.
The default is 5 seconds.
TWS_RETRY (Optional) The maximum number of times that the
supported agent attempts to re-run a failed operation on the
PeopleSoft application server. The default is 5.
TWSXA_INLINE_CI (Optional) Specifies the name of the component interface
that the psagent invokes to submit jobs to PeopleSoft.
LJuser=TwsUsr
CheckInterval=120
PSFT_OPERATOR_ID=PSHC
PSFT_OPERATOR_PWD=*****
SERVER_NAME_LIST=9.87.120.36:9000
If you create the options file manually, you must encrypt the PeopleSoft operator
password, as described in “Encrypting PeopleSoft operator passwords.”
When you add or change the PeopleSoft user password using a text editor, you
must run the pwdcrypt program to encrypt the password before writing it in the
file. To run the encryption program, enter the following command:
pwdcrypt password
The program returns an encrypted version that you can then copy and paste into
the options file.
To support this, you can set up multiple PeopleSoft extended agent workstations
that connect to the same method but use different options files. When a
workstation starts the method, it first looks for the options file with the extended agent
workstation name prepended to psagent.opts. For example, a PeopleSoft extended
agent named ps847system would have the following options file:
PS847SYSTEM_psagent.opts
The psagent method searches first for an options file with the extended agent
workstation name, and then for the default psagent.opts file. This allows the user
to set up an extended agent for each PeopleSoft application server.
To connect to only one application server, use the default name for the options file,
psagent.opts.
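The lookup order described above can be sketched as follows. This is illustrative shell, not the product's actual logic; the workstation name is the one from the example, and the methods directory is simulated.

```shell
# Simulate the methods directory with only the default file present.
methods_dir=$(mktemp -d)
touch "$methods_dir/psagent.opts"

xa_name=PS847SYSTEM
# First look for <XANAME>_psagent.opts, then fall back to psagent.opts.
if [ -f "$methods_dir/${xa_name}_psagent.opts" ]; then
  opts_file="$methods_dir/${xa_name}_psagent.opts"
else
  opts_file="$methods_dir/psagent.opts"
fi
echo "using $(basename "$opts_file")"   # prints: using psagent.opts
```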
Note: If you specify connection properties in your local options files, make sure
that the same properties are commented out in your global options file, except for
the global LJuser property. This prevents warning messages about duplicate
properties from being written to the job log.
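One way to apply this note is sketched below. The sed-based approach, the file contents, and the property names are illustrative assumptions, not a documented procedure.

```shell
workdir=$(mktemp -d)
global="$workdir/psagent.opts"
local_file="$workdir/PS847SYSTEM_psagent.opts"

cat > "$global" <<'EOF'
LJuser=TwsUsr
SERVER_NAME_LIST=9.87.120.36:9000
EOF
cat > "$local_file" <<'EOF'
SERVER_NAME_LIST=9.87.120.36:9000
EOF

# Comment out, in the global file, every property also set locally,
# keeping the global LJuser property as the note requires.
while IFS='=' read -r key _; do
  [ -z "$key" ] && continue
  [ "$key" = "LJuser" ] && continue
  sed -i "s/^${key}=/# ${key}=/" "$global"
done < "$local_file"

cat "$global"
```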
The configuration steps described in this section are necessary to enable IBM
Workload Scheduler to schedule PeopleSoft jobs that have in-line variables in their
definitions.
When you use IBM Workload Scheduler to submit a process that has in-line bind
variables, the name of the process type in the PeopleSoft GUI becomes
ITWS_process type. For example, SQR Process becomes ITWS_SQR Process.
To schedule a job that contains in-line variables in its definition you must perform
the following tasks:
v Leave the value of the TWSXA_INLINE_CI option set to ITWS_PROCESSREQUEST, that
is the default value. See “Defining the configuration options” on page 144 for a
detailed explanation.
v Upload the PeopleSoft project as described in “Uploading the PeopleSoft
project” on page 149.
Uploading the PeopleSoft project
About this task
Note: From the SQL interactive command line, the same task can be
performed by the following sample statement, customized for your database
environment:
INSERT INTO PS_SERVERCLASS
  SELECT o.SERVERNAME, o.OPSYS, 'ITWS_'||o.PRCSTYPE,
         o.PRCSPRIORITY, o.MAXCONCURRENT
  FROM PS_SERVERCLASS o
  WHERE ( SELECT COUNT(*) FROM PS_SERVERCLASS i
          WHERE i.SERVERNAME = o.SERVERNAME
            AND i.OPSYS = o.OPSYS
            AND i.PRCSTYPE = 'ITWS_'||o.PRCSTYPE ) = 0
    AND ( SELECT COUNT(*) FROM PS_PRCSTYPEDEFN a
          WHERE a.PRCSTYPE = 'ITWS_'||o.PRCSTYPE
            AND a.OPSYS = o.OPSYS ) > 0
10. Restart the process servers.
You do not need to change the existing IBM Workload Scheduler job definitions,
except for the scheduling nVision process, where the runcontrol ID must be
specified using the BUSINESS_UNIT.REPORT_ID convention.
The following is an example of a job definition for the scheduling nVision process:
-process 'NVSRUN' -type nVision-Report -runcontrol AUS01.VARIABLE
where NVSRUN is the process name and AUS01.VARIABLE is the
BUSINESS_UNIT.REPORT_ID.
For more information, refer to “Defining jobs for supported agents” on page 138.
where:
process_name
The process name for the PeopleSoft job.
process_type
The process type for the PeopleSoft job. This entry must be enclosed
within single quotes.
runcontrol_ID
The runcontrol ID for the PeopleSoft job.
TWS_user_name
The IBM Workload Scheduler for z/OS user who runs the psagent
access method from the end-to-end scheduling environment.
The following is an example of a task string specification for a PeopleSoft 8.44 job:
-process XRFWIN -type 'SQR Report' -runcontrol 1 -runlocationdescr PSNT
If the final status of a PeopleSoft job is success or warning, you can decide whether
to use the distribution status of the PeopleSoft job when determining the status of
the IBM Workload Scheduler job by setting the PS_DISTSTATUS option in the
options file:
0 The distribution status is ignored and the IBM Workload Scheduler job
status is calculated as shown in Table 32 on page 154.
1 The distribution status is used and the IBM Workload Scheduler job status
is calculated as shown in Table 31. This is the default value.
Table 31 shows the relationship between the run status, the distribution status, and
the IBM Workload Scheduler job status. The return code associated with the status
is shown in parentheses. IBM Workload Scheduler uses this return code to evaluate
the return code condition you specified in the Return Code Mapping Expression
field in the Properties panel of the job definition. For more details about this field,
refer to the online help by clicking the "?" in the top-right corner of the panel.
Table 31. Relationship between the run status, the distribution status, and the IBM Workload
Scheduler job status
v Run status Success (9) or Warning (17), distribution status Posted (5) or
  None (0): job status SUCC.
v Run status Success (9) or Warning (17), distribution status Not Posted (4) or
  Delete (6): job status ABEND.
v Run status Success (9) or Warning (17), distribution status Not Available (1),
  Processing (2), Generated (3), or Posting (7): job status EXEC.
v Run status Cancel (1), Delete (2), Error (3), Canceled (8), No Success (10),
  Blocked (18), or Restart (19), with any distribution status: job status ABEND.
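The mapping in Table 31 can be sketched as a small function. This is illustrative shell, not product code; the catch-all branches stand in for the full status lists.

```shell
# Map (run status, distribution status) to the IBM Workload Scheduler
# job status, following Table 31.
map_status() {
  run_status=$1
  dist_status=$2
  case $run_status in
    Success|Warning)
      case $dist_status in
        Posted|None)         echo SUCC ;;
        "Not Posted"|Delete) echo ABEND ;;
        *)                   echo EXEC ;;   # Not Available, Processing, ...
      esac ;;
    *)                       echo ABEND ;;  # Cancel, Error, Canceled, ...
  esac
}

map_status Success Posted        # prints SUCC
map_status Warning Processing    # prints EXEC
map_status Error   Posted        # prints ABEND
```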
Table 32 on page 154 shows the relationship between the PeopleSoft run status and
the IBM Workload Scheduler job status. The return code associated with the status
is shown in parentheses. IBM Workload Scheduler uses this return code to evaluate
the return code condition you specified in the Return Code Mapping Expression
field in the Properties panel of the job definition. For more details about this field,
refer to the online help by clicking the "?" in the top-right corner of the panel.
Note: If IBM Workload Scheduler fails to retrieve the status of the PeopleSoft job,
the IBM Workload Scheduler job status is DONE.
Chapter 24. Access method for z/OS
What you need to know and to do before using the Access method for z/OS.
Using Access method for z/OS you can schedule and control z/OS jobs using the
job scheduling features of IBM Workload Scheduler.
Note: Throughout this publication, the term z/OS is used to refer also to
supported versions of OS/390®.
The access method for z/OS is installed automatically when you install a dynamic
or a fault-tolerant agent. To be entitled to use it, you must purchase a separate
chargeable component in addition to IBM Workload Scheduler or the IBM
Workload Scheduler for z/OS agents.
For information about the supported versions of the plug-ins and access methods,
run the Data Integration report and select the Supported Software tab.
Features
Using Access method for z/OS you can:
v Use IBM Workload Scheduler to schedule z/OS jobs to run at specific times and
in a prescribed order.
v Define dependencies between IBM Workload Scheduler jobs running on different
systems and operating systems.
v Define dependencies for IBM Workload Scheduler jobs based on the completion
of z/OS jobs that were not launched by IBM Workload Scheduler.
v Define dependencies for IBM Workload Scheduler jobs based on the existence of
files on a z/OS system.
Installing, configuring, and uninstalling the z/OS gateway
Access method for z/OS consists of the z/OS access method that must be located
on the IBM Workload Scheduler agent, and of the gateway software that is located
on the z/OS system.
The Access method for z/OS is installed automatically when you install a dynamic
or a fault-tolerant agent. To be entitled to use it, however, you must purchase a
separate chargeable component in addition to IBM Workload Scheduler or the IBM
Workload Scheduler for z/OS agents. Ask your IBM representative for details.
To install, configure, and uninstall the z/OS gateway, refer to the following
sections.
v “Installing”
v “Configuring” on page 158
v “Uninstalling” on page 159
Installing
You can install the z/OS gateway module in either of these two ways:
v Unload the files from the IBM Workload Scheduler CD. See “Unloading the files
from the CD.”
v Unload the files from a 3480 tape cartridge written in non-IDRC (uncompressed)
format. See “Unloading the files from the tape” on page 157.
The z/OS gateway files are stored in the ZOS directory of the product CD and are
named:
v LOADLIB
v SAMPLES
For example:
LOADLIB
1. Issue the following command:
TSO RECEIVE INDSN('TWS4APPS.LOADLIB.L80')
2. At the prompt, reply:
da('output loadlib name')
where "da" means "data set" and the MVS™ data set name in quotes is
the name you want for the output loadlib data set.
Some IEBCOPY messages are displayed as the library is uncompressed.
SAMPLIB
1. Issue the following command:
TSO RECEIVE INDSN('TWS4APPS.SAMPLIB.L80')
2. At the prompt, reply:
da('output samplib name')
where "da" means "data set" and the MVS data set name in quotes is
the name you want for the output samplib data set.
Some IEBCOPY messages are displayed as the library is uncompressed.
The z/OS gateway files are supplied on a 3480 tape cartridge written in non-IDRC
(uncompressed) format.
Modify and submit the JCL below to unload the tape. Customize the job card and
modify the following parameters according to your environment standards:
v Enter an appropriate job name.
v Identify a 3480 tape device.
//MVSXAUNL JOB (876903,D07),’OPCL3’,MSGLEVEL=(1,1),
// MSGCLASS=A,CLASS=A,NOTIFY=&SYSUID
//**********************************************************************
//* *
//* THIS IS THE JOB THAT UNLOADS THE WORKLOAD SCHEDULER FOR *
//* APPLICATIONS z/OS Access Method Version 8.4 TO CUSTOMIZE *
//* *
//**********************************************************************
//STEP01 EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//INDD DD DSN=TWSX.V8R4M0.SAMPLES,
// DISP=(OLD,PASS),UNIT=600,
// VOL=SER=ABC001,
// LABEL=(1,SL)
//OUTDD DD DSN=TWSX.V8R4M0.SAMPLES,
// DISP=(NEW,CATLG),
// SPACE=(32760,(2,2,10)),
// DCB=(RECFM=FB,LRECL=80,BLKSIZE=0),
// UNIT=3390,VOL=SER=OPC00C
//SYSUT3 DD UNIT=SYSDA,SPACE=(TRK,(20,1,10))
//SYSUT4 DD UNIT=SYSDA,SPACE=(TRK,(20,1,10))
//SYSIN DD *
COPY OUTDD=OUTDD,INDD=((INDD,R))
//STEP02 EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//INDD DD DSN=TWSX.V8R4M0.SERVICE.APFLIB1,
// DISP=(OLD,PASS),UNIT=600,
// VOL=SER=ABC001,
// LABEL=(2,SL)
//OUTDD DD DSN=TWSX.V8R4M0.SERVICE.APFLIB1,
Configuring
About this task
Note: This member must have the PACK OFF option set in its profile. If PACK
ON is set, the started task will end with RC=04.
6. Copy EEWSPACE and EEWSERVE from the SAMPLIB to the PROCLIB and edit
them (for example, include STEPLIB, and specify the appropriate PARMLIB
member name).
7. Verify that the TCP/IP port specified in the PARMLIB member is not in use. To
do this, issue the following command and review the output:
TSO NETSTAT PORTLIST
If the port is in use, choose another port that is not in use and modify the
PARMLIB member.
8. Ensure that the IEFU84 exit is enabled by checking the SYS1.PARMLIB member
SMFPRMxx, or by issuing the following console command:
D SMF,O
If you change the SMFPRMxx member, issue the following command to make the
changes effective:
SET SMF=xx
9. Set the RACF permissions for the started tasks EEWSPACE and EEWSERVE.
For details, see “Setting RACF authorizations on z/OS” on page 159.
10. Start EEWSPACE.
11. When EEWSPACE is up, start EEWSERVE.
158 IBM Workload Automation: Scheduling Applications with IBM Workload Automation
Setting APF authorizations on z/OS
About this task
This section describes how to authorize the load library in APF. Follow these
steps:
1. Issue the SETPROG command from the console. For example:
SETPROG APF,ADD,DSN=twsx.SERVICE.APFLIB1,
VOL=xxxxxx
where: xxxxxx is the volume serial number where the load library is located, or:
SETPROG APF,ADD,DSN=twsx.SERVICE.APFLIB1,VOL=SMS
To set the RACF permissions to authorize the PARMLIB command to be used by the
user "userone", issue the following commands:
rdefine tsoauth parmlib uacc(upd)
permit parmlib class(tsoauth) id(userone) acc(upd)
setropts raclist(tsoauth) refresh
To set the RACF permissions for the started tasks EEWSPACE and EEWSERVE,
issue the following commands:
rdefine started EEWSPACE.**
stdata(user(<user_ID>) group(group_name))
rdefine started EEWSERVE.**
stdata(user(<user_ID>) group(group_name))
setropts raclist(started) refresh
Uninstalling
About this task
Additional information
The following topics provide additional information for the z/OS extended agent:
v “Gateway software components” on page 160
v “IEFU84 Exit” on page 160
v “Security” on page 160
IEFU84 Exit
The extended agent for z/OS tracks job streams using the IEFU84 exit. This exit
must be turned on in the SMFPRMxx member of SYS1.PARMLIB. IBM distributes a
dummy IEFU84 exit with the operating system that is an IEFBR14 program. The
EEWSPACE job dynamically chains to the IEFU84 exit. If the IEFU84 exit is
currently being used, EEWSPACE will “front-end” the IEFU84 exit, obtain the
information it requires, and then branch to the existing user exit. When
EEWSPACE is stopped, it removes itself from the chain and restores the chain to
its original status. It is important to note that EEWSPACE has no effect on any
existing IEFU84 exits, which continue to run normally.
Security
Security is enforced in several areas, usually by RACF, Top Secret, or ACF2. The
EEWSERVE job must have the ability to submit jobs that run under the user IDs
that are supplied in the JCL to be submitted. The JCL must not contain passwords.
This can be authorized by using SURROGAT class resources in RACF, and the
equivalents in ACF2 and Top Secret. PROPCNTL class resources in RACF can be
used to prevent submitted jobs from running under the EEWSERVE user ID. ACF2
and Top Secret equivalents can also be used. Resource class JESJOBS in RACF, and
ACF2 or Top Secret equivalents, can be used to control which job names and user
IDs (with or without passwords) can be submitted by EEWSERVE.
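As an illustration of the SURROGAT approach described above, the following RACF commands are a sketch only; the names USERA (the job owner) and EEWSRVU (the user ID under which EEWSERVE runs) are hypothetical examples, not values from this manual:

```
rdefine surrogat USERA.SUBMIT uacc(none)
permit USERA.SUBMIT class(surrogat) id(EEWSRVU) access(read)
setropts raclist(surrogat) refresh
```

Depending on your installation, the SURROGAT class might also need to be activated before these commands take effect.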
Startup
About this task
Follow these steps:
Procedure
1. Customize and start the EEWSPACE procedure (following the commented
instructions it contains) to start the extended agent Gateway Data Space. The
job must be a started task and must not be canceled. See “SYSTSIN variables”
for a description of the parameter settings. EEWSPACE creates the Data Space
and installs the IEFU84 exit. To stop the job, use the STOP EEWSPACE command
from any z/OS console.
Note:
a. EEWSPACE must be active before EEWSERVE is started.
b. To shut down, stop EEWSERVE before stopping EEWSPACE.
2. Customize and start the EEWSERVE procedure by following the commented
instructions it contains. For a description of the parameter settings, see
“SYSTSIN variables.”
3. To stop the job, use the STOP EEWSERVE command from any z/OS console.
SYSTSIN variables
Table 34 lists all the SYSTSIN variables and their descriptions. Modify the settings
as required for your site configuration. The default values are shown in
parentheses.
Table 34. SYSTSIN variables
Variable Description
COMPLETIONCODE(LASTSTEP) Specifies the job completion code of a JES
multi-step job. This variable can have one of
the following values:
LASTSTEP
The completion code for a JES
multi-step job is determined by the
last run step in the job. This is the
default value.
MAXSTEP
The completion code is determined
by the highest completion code of
any run step in the job.
SUBSYS(UNIS) The prefix used by the extended agent for
z/OS as the first four characters of extended
console names. It is also used as the first four
characters of internal reader DDNAMES.
Change only in coordination with IBM
Software Support.
SVCDUMP(NO) When set to YES, abends cause an SVC dump.
Use only in coordination with IBM Software
Support.
TCPIPSTACK(IBM) The vendor of TCP/IP stack (IBM,
INTERLINK, or OPENCONNECT).
TCPNAME(TCPIP) The name of the TCP/IP address space when
the IBM version of TCP/IP stack is used.
TERMINATOR(X’25’) The transaction termination character. Do not
change the default unless asked to do so by
IBM Software Support.
WTP(NO) When set to YES, it directs trace information
to SYSLOG as write-to-programmer
information. This can be used if SYSTSPRT
does not meet your needs.
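Drawing only on the variables and defaults shown in Table 34, a SYSTSIN member for an IBM TCP/IP stack might contain entries such as the following. This fragment is illustrative only; a real member typically requires additional variables that are not listed in this excerpt:

```
COMPLETIONCODE(LASTSTEP)
SUBSYS(UNIS)
SVCDUMP(NO)
TCPIPSTACK(IBM)
TCPNAME(TCPIP)
TERMINATOR(X'25')
WTP(NO)
```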
If a fix or fix pack for Access method for z/OS is issued by IBM, you can receive it
by downloading the files from the IBM software FTP site.
Note: These files are not allocated as type PDS but as regular sequential files.
2. Use FTP to retrieve the LOADLIB and SAMPLES fix pack files from the download
site, by logging in as anonymous, with your e-mail address as the password.
Issue the following commands:
tso ftp ftp.software.ibm.com
anonymous
your_e-mail_address
cd software/support/patches/patches_8.4.0/
cd patch_name
cd ZOS
bin
get loadlib_file_name ’TWS4APPS.LOADLIB.L80’ (rep
get samples_file_name ’TWS4APPS.SAMPLIB.L80’ (rep
quit
For example, for Fix Pack 01 the variables in this list of commands would have
the following values:
patch_name
8.4.0-TIV-TWSWSE-FP0001
loadlib_file_name
LOADLIB_820WSEFP07
samples_file_name
SAMPLES_820WSEFP07
Note: The data set names in quotes on the get commands (the MVS file names)
must match the files that were allocated in step 1 on page 163.
3. The downloaded files are in an 80-byte packed format. To ensure that the files
have been downloaded correctly, browse them. The beginning of the output
should be similar to the following:
BROWSE TWS4APPS.LOADLIB.L80 Line
\INMR01. ....&......NODENAME......TWSUSR2......A......A......20
....... \INMR02..........IEBCOPY......... ........ ....... .....
..........."8. .... ................TWS84..XAGENT..V8R4M0..FIXPAC04..DRV1511..LO
INMR901I Dataset TWS84.XAGENT.V8R4M0.FIXPAC04.DRV1511.LOADLIB
from TWSUSR2 on NODENAME NMR906A
Enter restore parameters or ’DELETE’ or ’END’ +
***
b. Reply:
da(’TWS4APPS.LOADLIB’)
where da means data set and the MVS data set name in quotes is
the name you want for the output loadlib data set.
Some IEBCOPY messages are displayed as the library is
uncompressed.
SAMPLIB
a. Issue the following command:
TSO RECEIVE INDSN(’TWS4APPS.SAMPLIB.L80’)
where da means data set and the MVS data set name in quotes is
the name you want for the output samplib data set.
Some IEBCOPY messages are displayed as the library is
uncompressed.
After receiving the files, the file characteristics change to those shown in
Table 36.
Table 36. File characteristics for the LOADLIB file after receiving it
Characteristic Value
Data Set Name TWS4APPS.LOADLIB
Organization PO
Record format U
Record length 0
Block size 32760
1st extent blocks 10
Secondary blocks 5
Data set name type PDS
Table 37. File characteristics for the SAMPLIB file after receiving it
Characteristic Value
Data Set Name TWS4APPS.SAMPLIB
Organization PO
Record format FB
Record length 80
Block size 27920
1st extent blocks 4
Use the Search Support option to search for items of interest to you. Useful terms
to enter for Access method for z/OS are: "TWS4APPS", message IDs, ABEND
codes, "EEWSERVE", "MVS xagent", "TWS applications z/OS", or the interface
types ("TWS for z/OS", JES2, JES3).
See “Defining supported agent workstations” on page 134 to learn how to define a
supported agent workstation in IBM Workload Scheduler.
The options file must be located on the IBM Workload Scheduler hosting computer
for the extended agent or dynamic agent in the TWS_home\methods directory. If you
do not create one, the agent uses one of the global options files by default
(either mvsjes.opts or mvsopc.opts).
Table 38 on page 167 describes the options that you can define for the z/OS access
method.
Table 38. Access method for z/OS options
Options File Entries Description
BLOCKTIME=min (Optional) Defines the amount of time, in minutes, the
method waits for a response to a status check before timing
out. This value must be less than the value of
CHECKINTERVAL (described below) and of IBM
Workload Scheduler’s local option bm check status.
Fractional values are accepted; for example, .5 for 30
seconds, or 1.5 for one minute and 30 seconds. The default
is 2.
Note: Only change this option from the default value if
specific network problems cause delays in the transmission
of data from the z/OS gateway.
CHECKINTERVAL=min (Optional) Defines the polling rate, in minutes, for checking
the status of z/OS jobs that were launched by the method.
Fractional values are accepted; for example, .5 for 30
seconds, or 1.5 for one minute and 30 seconds. The default
is 2.
Note:
1. For best results, allow LJUSER, CFUSER, and GSUSER to take the default
values.
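For example, a fragment of a hypothetical mvsjes.opts file that polls job status every minute and times out individual status checks after 30 seconds might look like the following. Note that BLOCKTIME must remain less than CHECKINTERVAL:

```
CHECKINTERVAL=1
BLOCKTIME=.5
```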
Do not include special characters, other than dash (-) and underscore (_), or
national characters in the IBM Workload Scheduler job name. Such characters are
not supported in z/OS; when the access method passes the IBM Workload
Scheduler job to z/OS, z/OS rejects the name and abends the job.
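The naming rule above can be checked before a job definition is created. The following Python sketch is a hypothetical helper, not part of the product, that accepts only letters, digits, dash, and underscore:

```python
import re

# Allow letters, digits, dash (-), and underscore (_) only, per the
# z/OS access method restriction. National characters (such as @, #, $)
# and all other special characters are rejected.
_VALID_NAME = re.compile(r"^[A-Za-z0-9_-]+$")

def is_valid_xagent_job_name(name: str) -> bool:
    """Return True if the job name is safe to pass to z/OS."""
    return bool(_VALID_NAME.match(name))
```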
See “Defining jobs for supported agents” on page 138 for reference.
You specify these task string parameters in the following places when you define
their associated IBM Workload Scheduler jobs:
v In the Task string field of the Task page of the Properties - Job Definition panel,
if you use the Dynamic Workload Console
v As arguments of the scriptname keyword in the job definition statement, if you
use the IBM Workload Scheduler command line.
v As arguments of the JOBCMD keyword in the JOBREC statement in the SCRIPTLIB of
IBM Workload Scheduler for z/OS, if you are scheduling in an end-to-end
environment.
where:
dataset Specifies the JES job data set or the name of a member of a partitioned
data set.
condcode
Specifies the condition code that indicates successful job completion. If
preceded by <, the condition code must be less than or equal to this value.
If preceded by =, the condition code must be equal to this value. If
omitted, “ = 0000” is used. Note that there must be a space on both sides
of the operator (< or =).
Example:
gold.apayable.cntl(apayjob1) = 0004
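The condition-code test can be sketched in Python as follows. This is a hypothetical illustration of the rules just described, not product code: "<" accepts condition codes less than or equal to the value, "=" requires an exact match, and an omitted specification defaults to "= 0000".

```python
def condcode_ok(actual: int, spec: str = "= 0000") -> bool:
    """Evaluate a job condition code against a '<' or '=' specification.

    Per the documented syntax, '<' means the condition code must be
    less than or equal to the value, and '=' means exactly equal.
    """
    op, _, value = spec.strip().partition(" ")
    code = int(value)
    if op == "<":
        return actual <= code
    if op == "=":
        return actual == code
    raise ValueError(f"unsupported operator in {spec!r}")
```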
For IBM Workload Scheduler for z/OS jobs: The syntax is:
appl [IA(yymmddhhmm)|IATIME(hhmm)] [...]
[DEADLINE(yymmddhhmm)|DEADLINETIME(hhmm)]
[PRIORITY(pri)]
[CPDEPR(Y|N|P|S)]
where:
appl The name of the IBM Workload Scheduler for z/OS application to be
inserted into the current plan.
IA The input arrival date and time in the form: yymmddhhmm.
IATIME
The input arrival time in the form: hhmm.
DEADLINE
The deadline arrival date and time in the form: yymmddhhmm.
DEADLINETIME
The deadline arrival time in the form: hhmm.
PRIORITY
The priority (1-9) at which to run the application.
CPDEPR
The current plan dependency resolution selection.
Y Add all successor and predecessor dependencies.
N Do not add any dependencies (the default).
P Add predecessor dependencies.
S Add successor dependencies.
For complete descriptions of the parameters, refer to the IBM Workload Scheduler
for z/OS documentation.
Example:
PREFABJOB44 IA(0202181000) PRIORITY(5) CPDEPR(Y)
IBM Workload Scheduler monitors these jobs until their status changes to success.
The details of the outcome of such jobs must be checked in the subsystem where
the jobs were launched. IBM Workload Scheduler only records whether or not
these jobs completed successfully. To find the reason for the failed submission or
completion of one of these jobs, or to check for dependency failures, work with the
host subsystem operator who can obtain this information from the EEWSERVE log.
where:
tws-job The name of the IBM Workload Scheduler job that depends on the
completion of the specified z/OS job.
XAname
The name of the IBM Workload Scheduler extended agent workstation
where:
jobname
The name of the job in JES.
condcode
The condition code that indicates successful job completion. If preceded by
<, the condition code must be less than or equal to this value. If preceded
by =, the condition code must be equal to this value. If omitted, “ = 0000”
is used. There must be a space on both sides of the operator (< or =).
Example:
job5 follows jesworkstation::"apayable = 0004"
where:
application
The name of the IBM Workload Scheduler for z/OS application (job
stream) in the current plan.
IA The input arrival date and time.
IATIME
The input arrival time.
JOBNAME
The z/OS job name.
OPNO
The operation number (1-255). If included, the application is considered
completed when it reaches this operation number.
For complete descriptions of the parameters, refer to the IBM Workload Scheduler
for z/OS documentation. For example:
joba follows twsworkstation::"PREFABJOB44 IA(0202181000) JOBNAME(PFJ3)"
extended agent, the job will show an error status and will fail, that is, the
job will not run after the EEWSERVE task is started. However, if an IBM
Workload Scheduler job has a follows dependency for an external
(non-IBM Workload Scheduler) job which runs under JES or Access
method for z/OS, the internal check job (CJ) command is reissued after
EEWSERVE is started. The extended agent workstation still shows its
status as linked even if EEWSERVE is not running.
For this reason, if a z/OS automation product such as NetView® is
available on the mainframe, write a rule to detect any outages of the
EEWSERVE task.
Instance limitations in LPARs
Due to the ENQ/DEQ mechanism in use, only one instance of the
EEWTCP02 task (default name EEWSPACE) can be run on a z/OS LPAR.
If a second instance is started, it fails with RC=04. Even if you use
different started task names and PORT numbers, only one instance of
EEWSPACE or EEWSERVE can exist concurrently on a z/OS LPAR.
where:
tws-job The name of the IBM Workload Scheduler job dependent on the specified
z/OS file.
XAname
The name of the IBM Workload Scheduler extended agent workstation
associated with the scheduler of the z/OS job, that is, an extended agent
defined with the mvsjes or mvsopc method.
For more information, see “Checking for files on z/OS” on page 174.
Reference information
This section describes job states when operating on JES and IBM Workload
Scheduler for z/OS in the IBM Workload Scheduler environment.
Technical overview
The z/OS gateway uses an extended MCS console to communicate with JES. The
program issues the MCSOPER macro to activate an extended MCS console. The z/OS
gateway can then receive messages and command responses by issuing the
MCSOPMSG macro, and can issue commands by issuing the MGCRE macro. All the
return codes from the extended MCS macros are handled as described in IBM z/OS
Programming: Authorized Assembler Services Reference, Volume 3, SA22-7611.
When a job is submitted, the job name and JES job ID are also entered in the
Tablespace. When an SMF record containing relevant job scheduling data is passed
through the IEFU84 exit, the job and condition code information are made
available to IBM Workload Scheduler. Because IBM Workload Scheduler keeps
track of both the job name and the JES job ID, it can check for the specific job it
submitted. (Currently, the Gateway uses Type 30 SMF records, subtypes 1, 4,
and 5.)
IBM Workload Scheduler checks submitted jobs periodically to see if they are
active. If an IBM Workload Scheduler-submitted job is not active and no
information about it is found through the IEFU84 exit, the job is marked as abend
in IBM Workload Scheduler displays. This situation might occur if a job fails for
security reasons or JCL syntax problems.
JES job states: Table 39 lists JES job states with respect to IBM Workload Scheduler.
Table 39. JES job states with respect to IBM Workload Scheduler
IBM Workload
Scheduler Job
State JES Job State Comment
intro Not available IBM Workload Scheduler is starting the method.
wait Queued Job is queued.
wait Not available If the job remains in this state, it might be due
to a security violation in z/OS. Check the job
on the z/OS system.
exec Executing Job is running.
succ Completed Job’s condition code meets the completion
criteria in the IBM Workload Scheduler job
definition.
abend Completed Job condition code does not meet the
completion criteria in the IBM Workload
Scheduler job definition, or a system or user
abend has occurred. System abend codes, in
hexadecimal, are prefixed with "S", and user
abend codes, in decimal, are prefixed with "U".
Both types of code are written to the job stdlist
file.
extrn Not available Status unknown. Can occur only when checking
a job that is used as a dependency.
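The unambiguous rows of Table 39 can be read as a simple lookup. The following Python sketch is a hypothetical illustration, not product code; the "Not available" JES state is deliberately left unresolved because it maps to intro, wait, or extrn depending on context:

```python
def tws_state(jes_state, criteria_met=None):
    """Map a JES job state to the IBM Workload Scheduler state per Table 39.

    criteria_met applies only to completed jobs: True means the job's
    condition code met the completion criteria (succ), False means it
    did not or the job abended (abend).
    """
    if jes_state == "Queued":
        return "wait"
    if jes_state == "Executing":
        return "exec"
    if jes_state == "Completed":
        return "succ" if criteria_met else "abend"
    # "Not available" corresponds to intro, wait, or extrn depending on
    # context, so it cannot be resolved from the JES state alone.
    return "unknown"
```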
Monitoring JES jobs: The details of the outcome of JES jobs must be requested
from the subsystem where these jobs were launched. From IBM Workload
Scheduler you should only expect to find out if these jobs completed successfully.
To find the reason for a failed submission or failed completion of one of these jobs
or to check for dependency failures, the IBM Workload Scheduler operator should
work with the host subsystem operator who can obtain this information from the
EEWSERVE log.
Checking JES jobs: To check a JES job that was not launched by IBM Workload
Scheduler, the name of the job is passed by IBM Workload Scheduler to the
gateway. Because IBM Workload Scheduler did not submit the job, the JES job ID is
not available. The Gateway enters the name in the Tablespace, and waits for
information about the job to appear in SMF records passed through the IEFU84
exit.
The IEFU84 exit cannot handle every job without impacting the performance of the
entire system because it is invoked for each job running on the z/OS system.
v If the job is not present in the gateway dataspace, the IEFU84 exit does not
perform any action.
v If the job is submitted by IBM Workload Scheduler, the gateway inserts the job
into the dataspace. In this case the IEFU84 exit monitors the status of the job and
of each step contained in the job.
v If the job is not submitted by IBM Workload Scheduler, the gateway inserts the
job into the dataspace only if the gateway receives a request from IBM Workload
Scheduler to check its status because the job represents an internetwork
dependency.
To have the internetwork dependencies of a z/OS job correctly handled by the
IBM Workload Scheduler for z/OS system, ensure that there are no occurrences
of the z/OS job in any job queue, including the output queue. If any
occurrences of the z/OS job are present, purge them. The internetwork
dependencies of a z/OS job are handled by IBM Workload Scheduler in the
following ways:
v If there are no occurrences of z/OS jobs in the job queues, the gateway inserts
the job into the dataspace the first time it receives the request from IBM
Workload Scheduler to check the job status. The gateway inserts the job into the
dataspace with an unknown job ID ready to be monitored.
v When the z/OS job is submitted, the IEFU84 exit finds the job in the dataspace
and updates the corresponding entry with the JES job ID. From now on the
z/OS job is monitored using the associated JES job ID. If the job completes
successfully, the gateway returns the information to IBM Workload Scheduler,
and the internetwork dependency is correctly resolved.
Launching IBM Workload Scheduler for z/OS jobs: To launch and monitor an
IBM Workload Scheduler for z/OS job, IBM Workload Scheduler passes the
application name, and other optional parameters, it wants to run to the z/OS
Gateway. If it exists in the IBM Workload Scheduler for z/OS database, the
application is inserted into the current plan. The input arrival, deadline arrival,
priority, and automatic dependency resolution parameters, if included, override
any values specified in IBM Workload Scheduler for z/OS.
At a rate defined by the CheckInterval value in the method options file, IBM
Workload Scheduler checks the status of the occurrence (application) in IBM
Workload Scheduler for z/OS.
IBM Workload Scheduler for z/OS operation states: Table 40 on page 174 lists
IBM Workload Scheduler for z/OS operation states with respect to IBM Workload
Scheduler.
IBM Workload Scheduler for z/OS occurrence states: Table 41 lists IBM
Workload Scheduler for z/OS operation occurrence states with respect to IBM
Workload Scheduler.
Table 41. IBM Workload Scheduler for z/OS operation occurrence states with respect to IBM
Workload Scheduler
IBM Workload Scheduler
Job Stream State IBM Workload Scheduler for z/OS Occurrence State
wait pending
wait undecided
exec started
succ complete
abend error
abend deleted
abend Not applicable
extrn Not applicable. Status unknown. Can occur only when
checking a job that is used as a dependency.
Checking IBM Workload Scheduler for z/OS jobs: To check an IBM Workload
Scheduler for z/OS job that was not launched by IBM Workload Scheduler, the
name of the application, and optionally the operation, is passed to the gateway. A
check is made to see if the occurrence or operation is in the current plan. If it is
not found, IBM Workload Scheduler rechecks at a rate defined by the bm check
status value in its local options file.
If the data set does not exist, IBM Workload Scheduler continues to wait and check
for the file at a frequency determined by the bm check file option in the
localopts file of the fault-tolerant workstation that is hosting the extended agent.
The localopts options are described in the Planning and Installation Guide.
Note: IBM Workload Scheduler can only use fully qualified data set names for
non-partitioned files. If a Generation Data Group name is to be used, it must be
the fully qualified name and not a relative name (for example, xxxxx.xxxxx(-1)
cannot be used).
Timing considerations
When IBM Workload Scheduler checks dependencies on z/OS jobs not launched by
IBM Workload Scheduler, certain timing issues are critical to ensuring that any
associated job dependencies are correctly resolved. For the correct resolution of
these external dependencies, IBM Workload Scheduler must attempt to resolve the
dependency at least once before the z/OS job is submitted. After the z/OS job has
been submitted and has successfully completed, the next periodic check of the
dependency by IBM Workload Scheduler can manage the dependency.
If this synchronization is not taken into account, IBM Workload Scheduler might
wait indefinitely to resolve a job dependency. A similar problem can occur as the
result of a communication failure between the z/OS and IBM Workload Scheduler
environments that prevents IBM Workload Scheduler from determining the status
of a z/OS job to satisfy a job dependency.
Diagnostic information
z/OS jobs submitted by IBM Workload Scheduler can fail to complete for a
number of reasons. The step in the submission process in which a job fails
determines how much information is available and is provided by IBM Workload
Scheduler as follows:
v If a job fails before it is actually initiated (usually the result of a JCL or security
problem), IBM Workload Scheduler recognizes that it no longer exists, and
marks it as abend in the conman command line displays. No further information
is provided.
v If a job fails after being started, IBM Workload Scheduler:
1. Obtains its condition code and user abend code, if any
2. Writes them to the job standard list file
Troubleshooting
To assist in troubleshooting, ensure that the JES log is obtained for the EEWSPACE
and EEWSERVE started tasks. This helps in determining the context in which a
message was issued. Depending on the job scheduling interface you use, additional
helpful information might be obtained from other logs.
EEWI27I APPLICATION application WAS INSERTED IN CP WITH INPUT
ARRIVAL DATE AND TIME yymmddhhss
EEWI28W yymmdd hhmmss APPLICATION appl WAS NOT INSERTED IN
CURRENT PLAN WITH INPUT ARRIVAL DATE AND TIME iadatetime
EEWI29I yymmdd hhmmss TASK task MODULE module ISSUED, MACRO macro NEAR LABEL label
WITH RETURN CODE = code AND ERROR NUMBER = err
EEWI30S yymmdd hhmmss module CA7SPAN MUST BE 4 DIGITS IN FORMAT HHMM
EEWI31E TASK task MODULE module LAUNCH OF JOB ’jobname’ FAILED
EEWI32S yymmdd hhmmss module AT LEAST ONE INTERFACE MUST BE DIFFERENT FROM NO
EEWI33W yymmdd hhmmss TASK task APPLICATION application NOT FOUND
EEWI34W APPLICATION application NOT FOUND
EEWI35W JCL dataset(member) NOT FOUND
EEWI36W yymmdd hhmmss IA and IATIME cannot be specified together
EEWI37W yymmdd hhmmss DEADLINE and DEADLINETIME cannot be specified together
EEWI38I jobname(jobid) n1 n2 result (restype)
v n1 indicates the number of seconds passed from the request
v n2 indicates the number of seconds of CPU time consumed
v result can assume one of the following values:
ABEND
If the job abends. In this case restype can be:
Sxyz In case of System Abend
Unnnn
in case of User Abend
CONDCOD
If the job does not end successfully because the condition code of one
step does not match the definition. In this case restype contains
RC=nnnnn, the return code of the last step that ran (if LASTSTEP is
specified) or of the worst step that does not match the definition of
the job on the distributed side.
EXEC If the job is running or is in the input queue.
JCLERRO
If the job failed due to a JCL error.
SUCCES
If the job completed successfully.
UNKNOWN
If the jobid is unknown.
blank In case of an internetwork dependency when the manual submission
was not performed.
Note: All the above messages are written in the EEWSPACE or in the EEWSERVE
log files. These are the files indicated in the SYSTSPRT DD card of the respective
procedure. In the files the messages are written starting from column 1, except for
the messages that do not contain the date and time after the message identifier, for
example EEWI27I. These messages appear with different characteristics in the
z/OS system and in IBM Workload Scheduler. In the z/OS system log the
messages appear in the text of another message and in some cases they might
appear truncated. This is because the maximum length of each text record is
limited to 251 characters. In IBM Workload Scheduler they are always displayed in
their complete form.
For example, if you use CA-7, you can obtain additional helpful information
from the following:
v The CA-7 log
v The console log for the interval covering the test period
v The job log of the job resulting in error (if this is the case)
v The UNIX script file related to that job
Chapter 25. Common serviceability for the access methods
This section provides information common to all the access methods including
return code mapping, configuring the tracing utility, and troubleshooting the access
method.
The return code mapping feature provides more granularity when defining the
success or failure policies of jobs and improved flexibility in controlling job
execution flows based on execution results. Job return code mapping provides the
following capabilities:
v Users can define a job final status (successful or failed) based on a condition on
the return code of the execution of the program or script of the job.
v The return code can be provided also to the recovery job that is associated with
it in the job definition. This causes the recovery job to perform different
processing based on the return code.
Parameters
# Optional comment. All the lines starting with this symbol (#) are not used
for mapping.
patternn
Pattern strings delimited by quotation marks (“ and ”). If you use only one
pattern string, you can omit the quotation marks. If the pattern string
contains a quotation mark character, it must be escaped with a backslash
(\). The string can contain the following wildcards and special characters:
Asterisk (*)
Matches an arbitrary number of characters.
Question mark (?)
Matches a single character.
Backslash (\)
Escape character.
RC value
The return code value. This value is sent by the method to IBM Workload
Scheduler by a %RC nnnn message.
Each method has its own set of files to map the messages into return code values.
The mapping files can be either global or local for a workstation.
Return code mapping files that are specific to a workstation are named according
to the following scheme:
TWS_home/methods/rcm/accessmethod-type-workstation.rcm
Global mapping files have a file name according to the following scheme:
TWS_home/methods/rcm/accessmethod-type.rcm
For the PeopleSoft access method, type is always equal to rcmap. For the SAP R/3
access method, type is as described in “Return code mapping file names for
r3batch” on page 183.
Syntax
About this task
Use the following syntax to create the return code mapping file:
[#] “pattern1” “pattern2”...“patternn” = RC value
Examples
The following is an example of a return code mapping file. The line numbers in
bold do not belong to the file, but are shown for reference:
1. # This is an RC mapping file for joblog.
2.
3. “User * missing ” = 102
4. “\*\*\*” = 103
5. “User \
6. * \
7. missing” = 102
In this example:
v Line 1 is a comment and is not used for mapping.
v Line 2 is blank and is ignored. All blanks preceding or following a pattern string
are ignored, as well as those between the equals sign and the return code value.
v Line 3 matches every message starting with the string User and ending with the
string missing.
v Line 4 matches every message starting with three asterisks (*) followed by a
blank. When you use the asterisk in this way and not as a wildcard, you must
escape it with a backslash.
v Lines 5 through 7 contain a pattern taking several lines. It matches the same
messages as the pattern of line 3.
Considerations
Note the following facts:
v The order of the pattern lines is important because the first matching pattern
line is used to build the return code value.
v Empty pattern strings (“”) are ignored by the pattern matching procedure.
For example, the following is a valid pattern sequence. The first line is more
restrictive than the second line.
“625” “User * missing” = 104
“” “User * missing” = 102
The following pattern sequence is formally valid, but the second pattern line is
never used: because the first line is more general, it is always matched first.
"" "User * missing" = 102
"625" "User * missing" = 104
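The matching behavior described above can be sketched in Python. This is an
illustration only, not the psagent or r3batch implementation: it assumes that a
pattern must match the whole message, and the sample patterns and return codes
below are hypothetical.

```python
import re

def pattern_to_regex(pattern: str) -> str:
    """Translate a mapping-file pattern into a regular expression:
    '*' is a wildcard unless escaped as '\\*'."""
    out = []
    i = 0
    while i < len(pattern):
        if pattern[i] == "\\" and i + 1 < len(pattern) and pattern[i + 1] == "*":
            out.append(re.escape("*"))   # escaped asterisk: match it literally
            i += 2
        elif pattern[i] == "*":
            out.append(".*")             # unescaped asterisk: wildcard
            i += 1
        else:
            out.append(re.escape(pattern[i]))
            i += 1
    return "".join(out)

def map_return_code(message: str, mapping):
    """First-match-wins scan over the pattern lines, as in the sequences above."""
    for pattern, rc in mapping:
        if re.fullmatch(pattern_to_regex(pattern), message):
            return rc
    return None

# Hypothetical mapping file: the more restrictive line comes first.
mapping = [
    ("User PAY* missing", 104),
    ("User * missing", 102),
    ("\\*\\*\\* *", 103),        # a literal "***" followed by a blank
]
assert map_return_code("User PAYROLL missing", mapping) == 104
assert map_return_code("User JOHN missing", mapping) == 102
assert map_return_code("*** ERROR 778 *** text", mapping) == 103
assert map_return_code("no match here", mapping) is None
```

Reversing the first two mapping lines would make every "User ... missing"
message map to 102, which is why the most restrictive patterns must come first.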
When no return code mapping files are defined, or when a string returned by the
access method does not satisfy any of the matching patterns of the mapping file,
the access method uses the respective standard return codes listed in the tables.
Table 42. Job states and return codes for the PeopleSoft access method
psagent job state        psagent return code
"CANCEL"                 1
"DELETE"                 2
"ERROR"                  3
"HOLD"                   4
"QUEUED"                 5
"INITIATED"              6
"PROCESSING"             7
"CANCELED"               8
"SUCCESS"                9
"NO SUCCESS"             10
"POSTED"                 11
"NOT POSTED"             12
"RESEND"                 13
"POSTING"                14
"GENERATED"              15
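As a quick reference, the standard psagent codes in Table 42 can be expressed as
a lookup table. This is a convenience sketch only, not part of the product; the
state names and codes are taken from the table above.

```python
# Standard psagent return codes from Table 42, used when no
# mapping file overrides them.
PSAGENT_RETURN_CODES = {
    "CANCEL": 1, "DELETE": 2, "ERROR": 3, "HOLD": 4, "QUEUED": 5,
    "INITIATED": 6, "PROCESSING": 7, "CANCELED": 8, "SUCCESS": 9,
    "NO SUCCESS": 10, "POSTED": 11, "NOT POSTED": 12, "RESEND": 13,
    "POSTING": 14, "GENERATED": 15,
}

def standard_return_code(job_state: str) -> int:
    """Look up the standard return code for a psagent job state."""
    return PSAGENT_RETURN_CODES[job_state.upper()]

assert standard_return_code("success") == 9
assert standard_return_code("NOT POSTED") == 12
```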
Using return code mapping with r3batch can be useful in overcoming differences
in the return code mechanisms of R/3, which returns a mixture of messages and
numbers, and of IBM Workload Scheduler, which handles exclusively numeric
return codes. By customizing the return code mapping files listed in “Return code
mapping file names for r3batch” on page 183, you can map messages from R/3
logs, spool lists, and exceptions from RFC function modules into return code
values that IBM Workload Scheduler can handle.
Note that when you do not use this feature, r3batch does not send any return
codes to IBM Workload Scheduler. In this case, IBM Workload Scheduler displays
only the r3batch exit code, which cannot be used to set up rccondsucc conditions.
When setting up your return code mapping for r3batch, consider the following:
v You can define any return code numbers for your use because there are no
reserved return codes for the access method or for IBM Workload Scheduler.
v Mapping files are scanned sequentially: the first match found performs the
corresponding mapping. When you define a mapping file, write the most
restrictive strings first.
v When you define a mapping file, remember that the R/3 log messages are read
in their entirety. If you want to map only a part of the entry, you must use the
wildcard characters.
v If two lines of the log match two different patterns, the return code is set to
the higher of the two values. In general, the return code is set to the highest
value among those yielded by the matched patterns. This is shown in the following example:
The job log returned after job PAYT410 has run is:
*** ERROR 778 *** EEWO0778E Failed to modify the job PAYT410 with job id
*** 05710310.
*** ERROR 552 *** EEWO0552E The R/3 job scheduling system has found an
*** error for user name * and job name PAYT410. Please check R/3
*** syslog.
*** ERROR 118 *** EEWO0118E Execution terminated. Could not create and
*** start an instance of the R/3 batch job.
ERROR LEVEL=118
"*MAESTRO*Step 1 contains illegal values"=9999
In this case, the return code sent back to IBM Workload Scheduler is 9999
because it is the higher of the two matching patterns.
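The highest-value rule can be sketched as follows. This is an illustration only,
assuming whole-line wildcard matching as in the earlier examples: the second
mapping line is quoted from the example above, while the first mapping line and
the log line containing "MAESTRO" are hypothetical.

```python
import re

def wildcard_to_regex(pattern: str) -> str:
    # '*' is a wildcard; every other character is matched literally.
    return ".*".join(re.escape(part) for part in pattern.split("*"))

def highest_return_code(log_lines, mapping):
    """Collect the return code of every pattern that matches any log
    line, and report the highest one (None when nothing matches)."""
    matched = [
        rc
        for line in log_lines
        for pattern, rc in mapping
        if re.fullmatch(wildcard_to_regex(pattern), line)
    ]
    return max(matched, default=None)

log = [
    "*** ERROR 778 *** EEWO0778E Failed to modify the job PAYT410 with job id",
    "*MAESTRO*Step 1 contains illegal values",
]
mapping = [
    ("*ERROR 778*", 778),                                # hypothetical pattern
    ("*MAESTRO*Step 1 contains illegal values", 9999),   # from the example above
]
assert highest_return_code(log, mapping) == 9999
```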
v If no matching takes place, no return code is sent to IBM Workload Scheduler.
For example, a job log entry is broken down into the pattern positions of the
r3batch-joblog.rcm mapping file as follows:
message_text_pattern
"Step 001 started (program BTCTEST, variant GIULIO, user name
TWSDEV)"
program_pattern
"*"
message_number_pattern
"550"
message_id_pattern
"*"
TWS_home/methods/rcm/r3batch-pchainlog.rcm
Maps messages from the protocol of a Process Chain into return code
values. If this file is not present, the messages in the protocol are ignored.
The format of the mapping file is:
message_number_pattern
[message_id_pattern[message_variable1[message_variable2
[message_variable3[message_variable4[message_type]]]]]]=RCvalue
TWS_home/methods/rcm/r3batch-spoollist.rcm
Maps messages in the job spool list of an R/3 job into return code values.
If this file is not present, the messages in the spool list are ignored.
The format of the mapping file is:
spool_list_row_pattern=RCvalue
To set up return code mapping for intercepted jobs, after defining the appropriate
return code conditions in the r3batch-joblog.rcm file, do the following:
1. Create a customized template file named TWS_home/methods/r3batch_icp/
rctemplate.jdf containing the following:
alias;rccondsucc "Success Condition"
where the "Success Condition" must match a condition saved in the rcm file.
2. Modify the TWS_home/methods/r3batch_icp/XANAME_r3batch.icp file to refer to
the jdf file you created, as follows:
client job_mask user_mask rctemplate
IBM Workload Scheduler manages the intercepted R/3 job as a docommand job
with all the options specified in the customized jdf file. You can check if your
intercepted job is correctly submitted by reading the job_interceptor joblog.
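For illustration, a minimal pair of files might look as follows. The client
number 100, the job mask PAY*, and the user mask * are placeholder values; only
the rctemplate name and the rccondsucc line come from the procedure above.

```text
# TWS_home/methods/r3batch_icp/rctemplate.jdf
alias;rccondsucc "Success Condition"

# TWS_home/methods/r3batch_icp/XANAME_r3batch.icp
# client  job_mask  user_mask  template
100       PAY*      *          rctemplate
```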
Note: If you delete this file accidentally, IBM Workload Scheduler creates a new
file with all the default values, which contains the following comment:
# This file was automatically created using the default values.
Customizing the .properties file
About this task
Depending on the access method you are working with, customize the trace
parameters in the following properties files:
psagent.properties
For the PeopleSoft access method.
r3batch.properties, r3evmon.properties
For the SAP R/3 access method.
With this access method, you can also specify debug and trace parameters
in the single job definitions. See “Creating SAP Standard R/3 jobs from the
Dynamic Workload Console” on page 224 and “Task string to define SAP
jobs” on page 233.
mvsjes.properties, mvsopc.properties
For the z/OS access method, depending on the scheduler with which you
are working.
For each .properties file you can customize the following parameters:
accessmethod.trace.tracers.level
Specify the level of tracing you want to set. Possible values are:
DEBUG_MIN
Only error messages are written in the trace file. This is the
default.
DEBUG_MID
Informational messages and warnings are also written in the trace
file.
DEBUG_MAX
The most verbose debug output is written in the trace file.
The value you set in the .properties file applies to all the jobs of the
corresponding access method. To specify a different trace setting for a
particular job, specify the following option in the job definition:
-tracelvl=(1|2|3)
where:
v 1 = DEBUG_MIN
v 2 = DEBUG_MID
v 3 = DEBUG_MAX
Note: Changes to the trace level setting take effect immediately
after you save the .properties file. No restart is required.
accessmethod.trace.handlers.traceFile.fileDir
Specifies the path where the trace file is created. Depending on the access
method, the default is:
SAP R/3
TWS_home/methods/traces
All other access methods
TWS_home/methods
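For example, assuming that accessmethod resolves to the access method name (here
r3batch, as described above) and that the directory path is a placeholder, a
r3batch.properties fragment that raises tracing to the intermediate level and
redirects the trace files might look like this:

```text
r3batch.trace.tracers.level=DEBUG_MID
r3batch.trace.handlers.traceFile.fileDir=/opt/IBM/TWA/TWS/methods/traces
```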
Part 4. Integration with SAP
The following sections give you information about IBM Workload Scheduler for
SAP, the SAP access method and job plug-ins, and how to schedule jobs by using
the SAP Solution Manager.
Chapter 26. Introducing IBM Workload Scheduler for SAP
Improve SAP operations and enable business growth with IBM Workload
Scheduler.
Use IBM Workload Scheduler for SAP to create, schedule, and control SAP jobs
using the job scheduling features of IBM Workload Scheduler. IBM Workload
Scheduler supported agent workstations help extend the product scheduling
capabilities to SAP through the R/3 batch access method. In addition, you can
define IBM Workload Scheduler job plug-ins for SAP BusinessObjects BI and SAP
PI Channel. With the SAP Solution Manager integration, you can have the IBM
Workload Scheduler engine run job scheduling tasks available from the Solution
Manager user interface.
IBM Workload Scheduler is certified by SAP for the following SAP interfaces:
v BC-XBP 6.10 (V2.0) - Background Processing
v BC-XBP 7.00 (V3.0) - Background Processing
v BW-SCH 3.0 - Business Information Warehouse
v SAP Solution Manager
Certification Category: Background Processing, Business Information Warehouse,
Job Scheduling, Platform User Licensing Compliant, Scheduling, Solution Manager
Ready, System Management
SAP Certified - Integration with SAP NetWeaver. The R3batch process has been
updated with the latest SAP NetWeaver RFC library.
Note: For detailed information, see the SAP online product partner directory.
Features
Table 43 shows the tasks you can perform with IBM Workload Scheduler for SAP
either in a distributed or an end-to-end environment, or both.
Table 43. IBM Workload Scheduler for SAP features
                                                            Distributed
Feature                                                     environment   End-to-end
Using IBM Workload Scheduler standard job                   U             U
dependencies and controls on SAP jobs
Listing jobs, defining jobs, variants, and extended         U             U
variants using the IBM Workload Scheduler interface
Defining jobs and variants dynamically at run time          U             U
Scheduling SAP jobs to run on specified days and            U             U
times, and in a prescribed order
Scheduling SAP BusinessObjects Business Intelligence        U             U
(BI) jobs to gain greater control over your SAP
BusinessObjects Business Intelligence (BI) reports
through the IBM Workload Scheduler plug-in for SAP
BusinessObjects Business Intelligence (BI)
Scheduling SAP Process Integration (PI) Channel jobs        U             U
to control communication channels between the
Process Integrator and a backend SAP R/3 system
Scheduling and monitoring job scheduling tasks              U
available from the SAP Solution Manager user
interface
Defining the national language support options              U             U
Using the SAP Business Warehouse Support functions          U             U
Customizing job execution return codes                      U             U
Using SAP logon groups for load balancing and               U             U
fault-tolerance
Using Business Component-eXternal Interface
Background Processing (XBP 2.0 and later) interface
support to:
    Distributed environment:
        v Collect intercepted jobs
        v Track child jobs
        v Keep all job attributes when you rerun a job
        v Raise events
    End-to-end:
        v Track child jobs
        v Keep all job attributes when you rerun a job
        v Raise events
Using Business Component-eXternal Interface
Background Processing (XBP 3.0) interface support to:
    Distributed environment and end-to-end:
        v Create criteria profiles to log raised events,
          reorganize the event history, and intercept and
          relaunch jobs, according to the criteria you specify
        v SAP application log and application return code
        v Spool list request and display for jobs that have run
        v Temporary variants
Assigning an SAP job to a server group, for batch           U             U
processing
Exporting SAP factory calendars and adding their            U
definitions to the IBM Workload Scheduler database
Defining internetwork dependencies and event rules          U
for IBM Workload Scheduler based on SAP events
Defining event rules based on IDoc records                  U
Defining event rules based on CCMS Monitoring               U
Architecture alerts
Rerunning a job that submits a process chain from a         U             U
specific process, from failed processes, or as a new
instance
Displaying the details of a job that submits a              U             U
process chain
Enabling job throttling                                     U             U
Using the SAP access method, you can run and monitor SAP jobs from the IBM
Workload Scheduler environment. These jobs can be run as part of a schedule or
submitted for ad hoc job processing. SAP extended agent or dynamic agent jobs
can have all of the same dependencies and recovery options as other IBM
Workload Scheduler jobs. SAP jobs must be defined in IBM Workload Scheduler to
be run and managed in the IBM Workload Scheduler environment.
The supported agent workstations use the access method, r3batch, to pass SAP
job-specific information to predefined SAP instances. The access method uses
information provided in an options file to connect and launch jobs on an SAP
instance.
Multiple extended agent workstations can be defined to use the same host, by
using multiple options entries or multiple options files. Using the SAP extended
agent name as a key, r3batch uses the corresponding options file to determine
which instance of SAP will run the job. It makes a copy of a template job in SAP
and marks the job as "scheduled". It then monitors the job through to completion,
writing job progress and status information to a job standard list on the host
workstation.
On dynamic agent workstations, more than one options file can be associated
with the workstation.
For more information about job management, refer to the IBM Workload Scheduler:
User's Guide and Reference.
For more detailed information about configuration files on extended agents and
dynamic agents, see “Configuring the SAP R/3 access method” on page 207.
Table 44. Roles and responsibilities in IBM Workload Scheduler for SAP (continued)
User role User task
IBM Workload Scheduler developer v “Editing a standard SAP job” on page 232
v “Task string to define SAP jobs” on page 233
v “Displaying details about a standard SAP job” on
page 241
v “Verifying the status of a standard SAP job” on
page 242
v “Deleting a standard SAP job from the SAP
database” on page 243
v “Balancing SAP workload using server groups” on
page 243
v “Defining SAP jobs dynamically” on page 249
v “Managing SAP R/3 Business Warehouse
InfoPackages and process chains” on page 279
v “Defining an IBM Workload Scheduler job that
runs an SAP PI Channel job” on page 341
v See the section about prerequisite steps to create
SAP BusinessObjects BI in User's Guide and
Reference.
IBM Workload Scheduler developer v “Defining internetwork dependencies and event
rules based on SAP R/3 background events” on
page 301
v “Defining event rules based on IDoc records” on
page 309
v “Defining event rules based on CCMS Monitoring
Architecture alerts” on page 316
IBM Workload Scheduler operator v “Rerunning a standard SAP job” on page 247
v “Mapping between IBM Workload Scheduler and
SAP job states” on page 244
v “Raising an SAP event” on page 246
v “Killing an SAP job instance” on page 245
v “Displaying details about a process chain job” on
page 284
For more detailed information about the security file, security file syntax, and how
to configure the security file, see "Configuring user authorization (Security file)" in
the Administration Guide.
The following table displays the access keywords required to grant authorization to
access and work with SAP scheduling objects assigned to IBM Workload Scheduler
users.
To communicate and manage the running of jobs on SAP R/3 systems using the
access method for SAP R/3, complete the following configuration steps in the SAP
R/3 environment.
The steps require the knowledge of an SAP R/3 Basis Administrator.
Overview
About this task
Procedure
1. Create a new user ID for RFC communications in SAP R/3 for IBM Workload
Scheduler.
2. Create the authorization profile as described in “Creating the authorization
profile for the IBM Workload Scheduler user” on page 197.
3. Copy the correction and transport files from the IBM Workload Scheduler
server to the SAP R/3 server.
4. Import the correction and transport files into SAP R/3 and verify the
installation.
Results
Note: The import procedure adds new ABAP/4 function modules and several
new internal tables to the SAP R/3 system. It does not modify any of the existing
objects.
Creating the IBM Workload Scheduler RFC user
About this task
For IBM Workload Scheduler to communicate with SAP R/3, you must create a
user ID in SAP R/3 for IBM Workload Scheduler batch processing. For security
reasons, use a new user ID rather than an existing one.
1. Create a new RFC user ID.
2. Give this new RFC user ID the following attributes:
v A user type of CPIC, Communications, or DIALOG, depending on the SAP
R/3 release.
v A password at least six characters in length. IBM Workload Scheduler
requires this password to start or monitor SAP R/3 jobs. If this password
changes in SAP R/3, you must update the options file used by r3batch with
the new password.
v The appropriate security profiles, depending on your version of SAP R/3.
Object Description
S_ADMI_FCD System authorizations
S_APPL_LOG Application logs
S_BTCH_ADM Background processing: Background administrator
S_BTCH_JOB Background processing: Operations on background
jobs
S_BTCH_NAM Background processing: Background user name
S_PROGRAM ABAP: Program run checks
S_SPO_ACT Spool: Actions
S_SPO_DEV Spool: Device authorizations
S_XMI_LOG Internal access authorizations for XMI log
S_XMI_PROD Authorization for external management interfaces
(XMI)
Object Description
S_ADMI_FCD System authorizations
v System administration function: Full authorization
S_APPL_LOG Activity: Display
v Application log Object name: Full authorization
v Application log subobject: Full authorization
S_BTCH_ADM Background processing: Background administrator
v Background administrator ID: Full authorization
S_BTCH_JOB Background processing: Operations on background
jobs
v Job operations: Full authorization
v Summary of jobs for a group: Full authorization
S_BTCH_NAM Background processing: Background user name
v Background user name for authorization check: Full
authorization
S_PROGRAM ABAP: Program run checks
v User action ABAP/4 program: Full authorization
v Authorization group ABAP/4 program: Full
authorization
The setup file loads four correction and transport files into the IBM Workload
Scheduler home directory. Copy these correction and transport files to the SAP R/3
server and import them into the SAP R/3 database, as follows:
1. On your SAP R/3 database server, log on to the SAP R/3 system as an
administrator.
2. Copy the control file and data file from the TWS_home\methods directory to the
following directories on your SAP R/3 database server:
copy control_file /usr/sap/trans/cofiles/
copy data_file /usr/sap/trans/data/
The names of control_file and data_file vary from release to release. The
files are located in TWS_home\methods and have the following file names and
format:
For SAP R/3 releases earlier than 6.10:
v K000xxx.TV1 (control file) and R000xxx.TV1 (data file)
v K900xxx.TV2 (control file) and R900xxx.TV2 (data file)
For SAP R/3 releases 6.10, or later:
v K9000xx.TV1 (control file) and R9000xx.TV1 (data file)
v K9007xx.TV1 (control file) and R9007xx.TV1 (data file)
Specifically, for IBM Workload Scheduler version 9.4 the following files are used:
For SAP R/3 releases earlier than 6.10:
v K000538.TV1 (for standard jobs scheduling)
v R000538.TV1 (for standard jobs scheduling)
v K900294.TV2 (for IDoc monitoring and job throttling)
v R900294.TV2 (for IDoc monitoring and job throttling)
For SAP R/3 releases 6.10, or later:
v K900044.TV1 (for standard jobs scheduling)
v R900044.TV1 (for standard jobs scheduling)
v K900751.TV1 (for IDoc monitoring and job throttling)
v R900751.TV1 (for IDoc monitoring and job throttling)
About this task
This section describes the procedure to generate, activate, and commit new
ABAP/4 function modules and several new internal tables to your SAP R/3
system. You do not modify any existing SAP R/3 system objects. For information
about the supported SAP R/3 releases, see the System Requirements Document at
http://www-01.ibm.com/support/docview.wss?rs=672
&uid=swg27045181#accmthdplug.
The number of ABAP/4 modules that you install with the import process varies
from release to release. The modules are installed in the TWS_home\methods
directory and have the following file names and format:
v K9000xx.TV1 (function modules for standard jobs scheduling extensions)
v K9007xx.TV1 (function modules for IDoc monitoring and job throttling)
where x is a digit generated by the SAP system.
Procedure
1. Change to the following directory:
cd /usr/sap/trans/bin
2. Add the transport file to the buffer:
tp addtobuffer transport sid
where:
transport
The transport request file.
sid The SAP R/3 system ID.
Table 47 shows the contents of the ABAP modules for the IDoc records and job
throttling feature.
Table 47. ABAP/4 modules contents
/IBMTWS/
    Type = Development Namespace. For IBM Workload Scheduler.
    Used by: Internal use only
/IBMTWS/EQ_XAPPL
    Type = Lock Object. Synchronizes the job throttler instances and job
    interception collector jobs that are running against the same SAP system.
    Used by: Job throttling, Job interception
/IBMTWS/GET_XAPPL_REGISTRATION
    Type = Function Module. It is used to query for existing external
    application registration data in table /IBMTWS/XAPPL, for example the
    registration data of a job throttler instance or job interception collector.
    Used by: Job throttling, Job interception
/IBMTWS/MODIFY_JOB_CLASS
    Type = Function Module. Modifies the job class of an intercepted job that
    is controlled by the job throttler. For details, see “Step 3. Enabling job
    class inheritance” on page 294.
    Used by: Job throttling, Job interception
/IBMTWS/REGISTER_XAPPL
    Type = Function Module. Registers an external application, for example
    the job throttler.
    Used by: Job throttling, Job interception
/IBMTWS/TWS4APPS
    Type = Function group. For IBM Workload Scheduler.
    Used by: Internal use only
/IBMTWS/UNREGISTER_XAPPL
    Type = Function Module. Unregisters an external application, for example
    the job throttler.
    Used by: Job throttling, Job interception
/IBMTWS/XAPPL
    Type = Table. Stores the registration data of external applications. An
    external application can be a job throttler instance or a job interception
    collector.
    Used by: Job throttling, Job interception
J_1O1_IDOC_SELECT
    Type = Function Module. Selects IDoc records from SAP internal tables.
    For details, see “Defining event rules based on IDoc records” on page 309.
    Used by: IDoc event rules
J_1O1_TWS_EDIDC
    Type = Data structure in FM interface J_1O1_IDOC_SELECT.
    Used by: Function module J_1O1_IDOC_SELECT
J_1O1_TWS_IDOC_SELECTION
    Type = Data structure in FM interface J_1O1_IDOC_SELECT.
    Used by: Function module J_1O1_IDOC_SELECT
J_1O1_TWS_STATE_SELECTION
    Type = Data structure in FM interface J_1O1_IDOC_SELECT.
    Used by: Function module J_1O1_IDOC_SELECT
If the password of the IBM Workload Scheduler RFC user ID is modified after the
initial installation, the options file used by r3batch must be updated with this
change.
In UNIX, log on as root to the system where IBM Workload Scheduler is installed.
In Windows, log on as an administrator and start a DOS shell on the system where
IBM Workload Scheduler is installed, as follows:
Procedure
1. Generate an encrypted version of the new password using the enigma
command in TWS_home/methods. To do this in a command shell, type:
enigma newpwd
where newpwd is the new password for the IBM Workload Scheduler RFC user
ID.
The enigma command prints an encrypted version of the password.
Chapter 27. Access method for SAP 203
2. Copy the encrypted password into the options file, which is located in the
TWS_home/methods directory. The file can be edited with any text editor.
Results
Ensure that you copy the password exactly, preserving uppercase, lowercase, and
punctuation. The encrypted password looks similar to:
{3des}Hchwu6IsF5o=
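The copy step can be scripted. The following sketch is illustrative only: the
option name r3password, the option=value layout of the options file, and the
sample encrypted strings are assumptions, not product specifications.

```python
import os
import re
import tempfile
from pathlib import Path

def update_password(opts_file: Path, encrypted: str, option: str = "r3password") -> None:
    """Replace the value of the password option with the new
    enigma-encrypted string, preserving every other line."""
    lines = opts_file.read_text().splitlines()
    updated = [
        f"{option}={encrypted}" if re.match(rf"\s*{option}\s*=", line) else line
        for line in lines
    ]
    opts_file.write_text("\n".join(updated) + "\n")

# Demonstration with a temporary options file and sample values.
fd, name = tempfile.mkstemp(suffix=".opts")
os.close(fd)
path = Path(name)
path.write_text("r3user=TWSUSER\nr3password={3des}OLDVALUE=\n")
update_password(path, "{3des}Hchwu6IsF5o=")
assert "r3password={3des}Hchwu6IsF5o=" in path.read_text()
os.unlink(name)
```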
Data communication paths between the client and server components of the SAP
system that use the SAP protocols RFC or DIAG are more secure with SNC. The
security is strengthened through the use of additional security functions provided
by an external product that are otherwise not available with SAP systems.
SNC provides security at application level and also end-to-end security. IBM
Workload Scheduler is extended to read SNC configuration parameters and
forward them to the SAP RFC communication layer used when logging in to the
SAP system. IBM Workload Scheduler does not provide or ship SNC software but
instead enables the use of third-party SNC products to secure the RFC
communication.
Levels of protection
The following options in the local options file are used to configure SNC for IBM
Workload Scheduler:
v r3snclib: the path and file name of the SNC library.
v r3sncmode: enables or disables SNC between r3batch and the SAP R3 system.
v r3sncmyname: the name of the user sending the RFC for SNC.
v r3sncpartnername: the SNC name of the SAP R3 communication partner
(application server).
v r3sncqop: the SNC protection level.
See “Defining the local options” on page 210 for a description of these options in
the local options file.
These limitations are no longer true with XBP 2.0 and later.
The following is a list of print parameters supported by BAPI XBP 1.0 for SAP R/3
release 4.6x and later:
v archiving mode
v authorization
v columns
v delete after output
v lines
v number of copies
v output device
v print immediately
v recipient
v sap cover page
v selection cover page
v spool retention period
To resolve the loss of print parameters when copying a job, install the appropriate
SAP R/3 Support Package as stated in the SAP R/3 notes 399449 and 430087.
The same applies to the job class. Official SAP R/3 interfaces only allow class C
jobs. Installing the SAP R/3 Support Package also resolves this issue.
Unicode support
Access method for SAP supports the Unicode standard.
What is Unicode
Unicode was devised to address the problem caused by the profusion of code sets.
Since the early days of computer programming, hundreds of encodings have been
developed, each for small groups of languages and special purposes. As a result,
the interpretation, input, sorting, display, and storage of text depend on
knowledge of all the different types of character sets and their encodings.
Programs are written either to handle one single encoding at a time and switch
between them, or to convert between external and internal encodings.
Unicode provides a single character set that covers the languages of the world, and
a small number of machine-friendly encoding forms and schemes to fit the needs
of existing applications and protocols. It is designed for best interoperability with
both ASCII and ISO-8859-1, the most widely used character sets, to make it easier
for Unicode to be used in applications and protocols.
r3batch uses the UTF-8 code page internally. Because it communicates with SAP
R/3 at the application server layer, it uses UTF-16 when communicating with
Unicode-enabled SAP R/3 systems.
If these conditions are not met, you cannot use Unicode support and must make
sure that r3batch, the Dynamic Workload Console, and the target SAP R/3 system
code page settings are aligned. Use the options related to national language
support described in “SAP R/3 supported code pages” on page 328.
To avoid conflicts with other vendors, the IBM Workload Scheduler ABAP modules
now belong to the IBM Workload Scheduler partner namespace J_1O1_xxx and
/IBMTWS. After you have completed the imports as described in “Importing
ABAP/4 function modules into SAP R/3” on page 200, the RFC J_1O1_xxx
function modules and the /IBMTWS function modules are installed on your
system.
If you had a previous installation of IBM Workload Scheduler extended agent for
SAP R/3 on your system, you can delete the following function modules from
your SAP R/3 system:
Z_MAE2_BDC_STATUS
Z_MAE2_DATE_TIME
Z_MAE2_JOB_COPY
Z_MAE2_JOB_DELETE
Z_MAE2_JOB_FIND
Z_MAE2_JOB_FINDALL
Z_MAE2_JOB_LOG
Z_MAE2_JOB_OPEN
Z_MAE2_JOB_START
Z_MAE2_JOB_STATUS
Z_MAE2_JOB_STOP
These are old versions of the ABAP functions, which belong to the customer name
space. You can also delete the function group YMA3. It is not necessary to delete
the function modules and the function group, but delete them if you want to clean
up your system.
The files for the SAP access method are located in TWS_home/methods. If r3batch
finds the local configuration file for an extended agent or dynamic agent, it ignores
the duplicate information contained in r3batch.opts. If it does not find a local
configuration file, it uses the r3batch.opts global options file.
To successfully use the SAP R/3 access method, you must first install the SAP RFC
libraries, as described in the System Requirements Document in the SAP R/3
Access Method Requirements section.
Dynamic agents
r3batch.opts
A common configuration file for the r3batch access method, whose
settings affect all the r3batch instances. It functions as a “global”
configuration file.
DYNAMIC_AGENT_FILE_r3batch.opts
One or more configuration files that are specific to each dynamic
agent workstation within a particular installation of the r3batch
access method. DYNAMIC_AGENT_FILE_r3batch.opts is the name
of the options file, where DYNAMIC_AGENT is not necessarily the
name of the dynamic agent workstation, because the dynamic
agent can have more than one .opts file associated with it. If you
do not create a local options file, the global options file is used. Every
Defining the configuration options
This section describes the options you can configure in r3batch.opts and in
XANAME_r3batch.opts.
Modifying the default values of the semaphore options is particularly useful when
the IDs that are generated would be the same as the IDs already used by other
applications.
On UNIX and Linux, to resolve the problem of duplicated IDs, IBM Workload
Scheduler for SAP uses System V semaphores to synchronize critical ABAP function
module calls. It uses one semaphore for job-related tasks and another for tasks
related to variant maintenance.
To synchronize on the same semaphore, the communication partners must use the
same identifier. There are several ways to choose this identifier. IBM Workload
Scheduler for SAP uses two parameters: a path name and a project ID (which is a
character value). The path name parameter is the fully qualified path to the
options file. The project ID is taken from the options described in Table 48. If these
options are omitted, IBM Workload Scheduler for SAP uses default values, which
work for most installations.
Note:
1. The semaphore options must be edited directly in the global options file using
a text editor; you cannot use the options editor to modify these values.
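The identifier mechanism described above resembles the POSIX ftok(3)
convention, in which the same path plus the same project ID always yield the
same IPC key. The following sketch mimics that derivation to illustrate the
concept; it is not the exact computation that IBM Workload Scheduler for SAP
performs, and the project IDs 'j' and 'v' are hypothetical.

```python
import os

def ftok_like(path: str, proj_id: int) -> int:
    """Mimic ftok(3): combine the 8-bit project ID with the low bits of
    the file's device and inode numbers to form a System V IPC key."""
    st = os.stat(path)
    return ((proj_id & 0xFF) << 24) | ((st.st_dev & 0xFF) << 16) | (st.st_ino & 0xFFFF)

# The same options-file path and project ID always yield the same key,
# so both communication partners attach to the same semaphore; a
# different project ID (job tasks vs. variant tasks) yields a different key.
key_jobs = ftok_like(".", ord("j"))
key_variants = ftok_like(".", ord("v"))
assert key_jobs == ftok_like(".", ord("j"))
assert key_jobs != key_variants
```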
Table 49. r3batch local configuration options (continued)
Option Description
job_duration (Optional) When set to ON, the CPU time value in the
production plan report that is run from the Dynamic
Workload Console is set to the actual duration of the SAP
job. Default value is OFF.
To retrieve the job duration from the SAP system, ensure that
the authorization profile contains the following authorization
objects:
v S_DEVELOP
v S_TCODE with parameter SE38 (only for SAP 6.40 and
7.00)
Note: This option takes effect the first time you start the CCMS
alert monitoring. If you initially set it to OFF and later you want
to retrieve the alerts generated before the monitoring process
started, stop the monitoring and delete the XAname_r3xalmon.cfg
file located in TWS_home/methods/r3evmon_cfg. In the options file,
set ccms_alert_history=on and start the monitoring process
again.
Table 50. r3batch common configuration options (continued)
Option Description Default
commit_dependency (Optional) Enables (ON) or disables (OFF) committing OFF
internetwork dependencies after processing.
level_number
Process chains are logged down to the level of chain you
indicate here. For example, if you indicate 2 only the
first two levels are logged.
all All process chains are logged.
pchainlog_verbosity (Optional) Supplements the option retrieve_pchainlog.
Specifies which type of process chain logs you want to retrieve.
Allowed values are:
chains_only
Logs only the process chains.
chains_and_failed_proc
In addition to the process chains, logs all failed
processes.
complete
Logs all process chains and processes.
If you omit this option and leave retrieve_pchainlog set to ON,
the default is complete.
Note: This option affects the entire process chain; verbosity
cannot be reduced for individual processes.
pc_launch_child (Optional) Enables (ON) or disables (OFF) the product to launch OFF
child jobs that are in scheduled state.
Note: You can use this option only if you activated the
parent-child feature on the SAP system. On the XBP 2.0 or later
SAP system, you can activate this feature by using the INITXBP2
ABAP report.
placeholder_abap_step (Optional) If XBP version 2.0 is used, the name of the ABAP
report used as the dummy step in the SAP placeholder job that is
created to monitor an SAP event defined as an external dependency.
If this option is not specified, either as a global or local
option, the default BTCTEST is used.
qos_disable (Optional) Enables (ON) or disables (OFF) the creation of the
environment variable QOS_DISABLE on Microsoft Windows
systems that use the Quality of Service (QoS) feature, before
r3batch opens an RFC connection. Default: OFF.
short_interval (Optional) The minimum interval, in seconds, between status
checks. It cannot be less than 2 seconds. Setting this option to
low values makes the notification of status changes faster, but
increases the load on the hosting machine. See also long_interval.
Default: 10.
throttling_enable_job_class_inheritance (Optional) Enables (ON) or disables (OFF)
the inheritance of the priority class. Default: ON.
ON means that the intercepted job inherits the priority class of its
progenitor job, if it is higher than its own class; otherwise it
keeps its own class. OFF means that the intercepted job keeps its
own class, regardless of its progenitor's class.
Note: By setting this option, the parent-child feature is
automatically enabled on the SAP system.
throttling_enable_job_interception (Optional) Enables (ON) the job interception
feature at job throttler startup, or keeps the current setting (OFF).
Default: ON.
ON means that when the job throttler starts, it enables the job
interception feature on the SAP system. When the job throttler is
stopped, the job interception feature is automatically restored
to the setting that was previously configured on the SAP system.
OFF means that the job interception feature is kept as it is
currently set in the SAP system.
throttling_job_interception_version Specifies the BC-XBP interface version to be
used when the job throttler starts. Valid values are 2 and 3. The
default is 2 (version 2.0).
throttling_interval (Optional) The interval, in seconds, between job throttling
runs. Default: 5.
throttling_max_connections (Optional) The maximum number of connections
(connection pool size) that the job throttler can open to communicate with the
SAP system. The minimum value is 3. Default: 5.
throttling_release_all_on_exit (Optional) Enables (ON) or disables (OFF) the
release of all intercepted jobs. Default: ON.
ON means that when the job throttler is stopped, it releases all the
intercepted jobs. OFF means that when the job throttler is
stopped, it does not release the intercepted jobs; the jobs
therefore remain intercepted, in scheduled state.
throttling_send_ccms_data (Optional) Enables (ON) or disables (OFF) the sending
of data from job throttling to the SAP CCMS Monitoring Architecture.
Default: OFF.
ON means that the job throttler sends its status data to CCMS
continuously. OFF means that the job throttler does not send its
status to CCMS.
throttling_send_ccms_rate (Optional) The rate, in number of runs, at which the
job throttler sends its status data to the SAP CCMS monitoring architecture.
The minimum value is 1, meaning that the job throttler sends the
data at every run. Default: 1.
variant_delay (Optional) The time, in seconds, that r3batch allows the SAP
system to clean up the structures used for communication
between r3batch and the SAP system. This option is valid when
you launch a job that uses extended variants and requires a copy
of a job template. Set this option to a low value only when you
need to reduce the r3batch response time, because low values
increase the load on the hosting machine; higher values of
variant_delay increase the response time and decrease the load.
Default: 10.
You can include the password on the command line or enter it in response to a
prompt. The program returns an encrypted version that you can then enter in the
options file.
Specify each option on a single line, in the form option=value, with no blanks
before the option, after the value, or before or after the equals (=) character.
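For example, assuming a hypothetical SAP user name, the following line respects this format:

```
r3user=twsuser
```

A line such as r3user = twsuser, with blanks around the equals character, is not valid.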
You can put all the common information, such as the LJuser, IFuser, JobDef, and
LogFileName options in r3batch.opts, while you can put tailored data for the target
SAP system of the extended agent or dynamic agent (for example, SAP1) in a local
configuration file (for example, XA1_r3batch.opts).
You can put a local option in the global configuration file if you want to give the
same option to all the r3batch instances. For example, if the SAP user name is the
same in all your SAP systems, you can place the r3user option in the global file
without duplicating that information in all the local configuration files.
A global option, such as job_sem_proj, takes effect only in the global
configuration file. If you put global options in a local file, they have no effect.
r3batch reads the global configuration file first, and then the local file. Every
option (except the global options) contained in the local configuration file
overrides the corresponding option in the global file. For example, if both the
global and the local configuration files contain the r3user option, r3batch uses
the one in the local file.
You can put all the connection options in the local configuration file, or you can
spread them between the global and the local files. For example, you could put
r3user and r3password in the global configuration file and r3sid, r3instance,
r3client, and r3host in the local one.
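For illustration (all values are hypothetical), the global file r3batch.opts could then contain:

```
r3user=twsuser
r3password=encrypted_password
```

while the local file XA1_r3batch.opts for one SAP system could contain:

```
r3sid=TV1
r3instance=00
r3client=100
r3host=/H/hemlock.romlab.rome.abc.com
```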
The r3user option is a local option and is mandatory; it must be specified in
either the global configuration file or the local configuration file.
Note: These configuration files are not created during the installation process.
To successfully use the SAP R/3 access method, you must first install the SAP RFC
libraries, as described in the System Requirements Document in the SAP R/3
Access Method Requirements section.
Connecting to a specific application server
To connect to a specific application server, you enter a connection string that,
depending on the complexity of the network, might be more or less complex and
might contain passwords that secure the routers.
In its basic form, a connection string consists of the host name (or IP name) of an
SAP application server; for example:
/H/hemlock.romlab.rome.abc.com
This type of connection string works only in very simple network environments,
where all application servers can be reached directly through TCP/IP. Usually,
modern companies use more complex network topologies, with a number of small
subnetworks, which cannot communicate directly through TCP/IP. To support this
type of network, the SAP RFC library supports SAP routers, which are placed at
the boundaries of the subnetworks and act as proxies. For this type of network, the
connection string is a composite of basic connection strings for each SAP router,
followed by the basic connection string for the target SAP system; for example:
/H/litespeed/H/amsaix33/H/hemlock.romlab.rome.abc.com
Moreover, you can secure the SAP routers with passwords, to prevent
unauthorized access. In this case, the basic connection string for the SAP router is
followed by /P/ and the password of the router.
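For example, if the router litespeed in the previous connection string were protected with the hypothetical password secret, the composite connection string would become:

```
/H/litespeed/P/secret/H/amsaix33/H/hemlock.romlab.rome.abc.com
```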
Note: The SAP RFC library limits the length of the connection string to a
maximum of 128 characters. This is a real limitation in complex network
environments. As a workaround, use simple host names, without the domain
name, whenever possible. Alternatively, you can use the IP address, but this is
not recommended because it is difficult to maintain.
IBM Workload Scheduler for SAP supports both types of connection strings, basic
and composite, where:
r3host The connection string.
r3instance
The SAP instance number.
r3sid The SAP system ID.
For example:
r3host=/H/litespeed/H/amsaix33/H/hemlock.romlab.rome.abc.com
r3instance=00
r3sid=TV1
In large SAP installations, the application servers are usually configured in logon
groups for load balancing and fault-tolerance purposes. Load balancing is done by
a dedicated server, called the message server. The message server automatically
assigns users to the application server with the least workload of the logon group
it controls.
where SID is the SAP system ID, and system_number is the SAP system number.
For example:
r3host=pwdf0647.wdf.sap-ag.de
r3group=PUBLIC
r3sid=QB6
To be able to define event rules based on one or more SAP events, stop the IBM
Workload Scheduler WebSphere Application Server and copy the following file
(located on the system where you installed IBM Workload Scheduler):
TWS_home/methods/SAPPlugin/SapMonitorPlugIn.jar
to the following directory of the master domain manager and of its backup nodes:
TWS_home/eventPlugIn
For the changes to take effect, stop and restart the IBM Workload Scheduler
WebSphere Application Server. If the master domain manager is connected to the
Dynamic Workload Console, also stop and restart the Dynamic Workload Console
Application Server.
Command syntax
r3evman {start | stop}
Where:
start | stop
The action to perform:
start Starts monitoring SAP events.
stop Stops monitoring SAP events.
To define and manage jobs on an SAP workstation from IBM Workload Scheduler,
you must define the following:
Jobs in SAP that you want to run under IBM Workload Scheduler control
You can define these jobs using standard SAP tools or using the Dynamic
Workload Console.
Jobs in IBM Workload Scheduler that correspond to the jobs in SAP
The IBM Workload Scheduler job definitions are used in scheduling and
defining dependencies, but it is the SAP jobs that actually run.
You can define SAP job definitions from the Dynamic Workload Console and then
have IBM Workload Scheduler launch the jobs in SAP R/3 using jobs defined on
the following workstations that support the r3batch access method:
v An IBM Workload Scheduler extended agent workstation, that is, a workstation
hosted by a fault-tolerant agent or master workstation.
v A dynamic agent workstation.
v A dynamic pool.
v A z-centric workstation.
The SAP job definitions can reference the following types of SAP jobs:
v Standard R/3
v Business Warehouse Process Chains
v Business Warehouse InfoPackages
You can easily create and manage Standard R/3 jobs on a remote SAP system
entirely from the Dynamic Workload Console, and then continue to manage the
remote SAP job from IBM Workload Scheduler.
The IBM Workload Scheduler job definition, available for both distributed and
z/OS environments, maps to the newly created job on the SAP system. The SAP
job can run on extended agent workstations, dynamic agent workstations, pools,
dynamic pools, and z-centric workstations, depending on the type of job
definition you choose to create.
Note: Using this procedure to create a new IBM Workload Scheduler for z/OS
Agent SAP Standard R/3 job, you cannot manage variants. To manage variants,
use the SAP graphical user interface or use the List Jobs on SAP entry from the
navigation tree of the Dynamic Workload Console.
To create a new SAP Standard R/3 job on a remote SAP system that maps to an
IBM Workload Scheduler job definition, you associate your SAP Standard R/3
jobs with IBM Workload Scheduler jobs in either of the following ways:
v Starting from an SAP job: “Create an SAP job and associate it to an IBM
Workload Scheduler job” or
v Starting from an IBM Workload Scheduler job (“Create an IBM Workload
Scheduler job and associate it to an SAP job” on page 226)
v Alternatively, you can simply create an SAP job on a remote SAP system,
without having it managed by IBM Workload Scheduler: “Creating an SAP job
from the Dynamic Workload Console” on page 228.
Before you begin
To be able to save your SAP job on a remote SAP system, you must specify the
connection details. See “Setting the SAP data connection” on page 228.
To create a new SAP job and then associate it to a new IBM Workload Scheduler
job, perform the following steps:
Procedure
1. Click Administration > Workload Design > Manage Workload Definitions.
2. Select an engine. The Workload Designer window is displayed.
3. From the Working List pane, click New > Remote SAP R/3 Job:
4. In the Properties pane, specify the properties for the SAP job definition you are
creating using the tabs available. The tabs for each type of SAP job definition
are similar, but there are some differences depending on the type of engine you
selected and the type of workstation on which the job runs. For more detailed
information about the UI elements on each tab, see the Dynamic Workload
Console online help.
5. In the Details view, right-click the new job to add ABAP, External command,
or External program steps to it. You must add at least one job step to the
job before you can save the job.
6. Right-click the SAP job and click Create SAP Job Definition to create a new
IBM Workload Scheduler job associated with the new job on SAP. Select the job
definition in accordance with the engine and type of agent on which the job
runs.
9. Save the SAP job definition in the IBM Workload Scheduler database.
To create a new IBM Workload Scheduler job and then associate it to a new SAP
job, follow these steps:
Procedure
1. Click Administration > Workload Design > Manage Workload Definitions.
2. Select an engine. The Workload Designer window is displayed.
3. From the Working List pane,
v Distributed Click: New > Job Definition > ERP > SAP Job on..., choosing
the type of workstation on which it is going to run:
SAP Job on Dynamic Workstations
For distributed systems only. This job definition can run on
dynamic agent workstations, dynamic pools, and IBM Workload
Scheduler for z/OS Agent workstations.
SAP Job on XA Workstations
This job definition can run on extended agent workstations, which
are workstations hosted by fault-tolerant agents or master
workstations.
v z/OS Click: New > ERP > SAP
SAP For z/OS systems only. This job definition references an existing job
on the SAP system and can run on dynamic agent workstations,
dynamic pools, and IBM Workload Scheduler for z/OS Agent.
4. In the Properties pane, specify the properties for the SAP job definition you
are creating using the tabs available. The tabs for each type of SAP job
definition are similar, but there are some differences depending on the type of
engine you selected and the type of workstation on which the job runs. For
more detailed information about the UI elements on each tab, see the
Dynamic Workload Console online help.
5. In the Task tab, specify the IBM Workload Scheduler job that you want to
associate with the SAP job. If this job already exists, specify it in the
Job name field; otherwise, click New to create it and specify its properties
in the Properties pane.
6. In the Details view, right-click the new job to add ABAP, External command,
or External program steps to it. You must add at least one job step to
the job before you can save the job.
| 10. Save the SAP job definition in the IBM Workload Scheduler database.
You can also create and save SAP Standard R/3 jobs directly on the remote SAP
system from IBM Workload Scheduler, as you would from the SAP graphical user
interface. To create Standard R/3 jobs on the SAP system from the Dynamic
Workload Console, perform the following steps:
Procedure
1. Click Administration > Workload Design > Manage Jobs on SAP.
2. In the Filter, select Standard R/3 Job and specify the workstation name. This
parameter is mandatory because it identifies the remote SAP system.
3. Specify the workstation where the SAP job runs. This is the workstation with
the r3batch access method that communicates with the remote SAP system.
4. If the workstation is not an extended agent workstation, you must also specify
the options file to be used.
5. Click Display to view a list of the Standard R/3 jobs for the specified
workstation.
6. Click New to create a new Standard R/3 job and enter the required information
in the R/3 Job Definition and R/3 steps tabs.
7. Click OK to save the job on the SAP system.
What to do next
After creating the new SAP job on the SAP system from the Dynamic Workload
Console, you must reference it in an IBM Workload Scheduler SAP Standard R/3
job if you want to manage the job from within IBM Workload Scheduler, as
explained in "Create an IBM Workload Scheduler job and associate it to an SAP
job" on page 226.
There are several operations you can perform that require connection details to
establish a link to a remote SAP system. The connection is made through an IBM
Workload Scheduler workstation with the r3batch access method installed, which
is used to communicate with the SAP system. Each workstation can have one or
more options files that can be used to customize the behavior of the r3batch
access method, except for extended agent workstations, where only one options
file can be defined and therefore a selection is not required.
For example, you can use the Workload Designer to create IBM Workload
Scheduler job definitions that reference remote SAP jobs, or you can create an
SAP job on a remote SAP system. You can also search for SAP jobs on the remote
system from the Working List and Quick Open panes.
To configure a default SAP data connection to be used when creating objects with
the Workload Designer that require an SAP connection, perform the following steps:
Procedure
1. In the Workload Designer window, click the SAP data connection icon in the
toolbar of the Details view.
2. In Workstation, enter the name of the workstation that communicates with the
SAP system or use the pick tool to search for and select one.
3. In Options file, enter the options file to be used or use the pick tool to search
for options files that reside on the specified workstation and select one.
4. Click OK.
Results
A default SAP connection is now configured. It is used each time an object
that requires access to an SAP system is defined.
This section describes how to manage variants using the Dynamic Workload
Console:
Procedure
1. Click Administration > Workload Design > Manage Jobs on SAP from the
portfolio.
2. Specify an engine connection.
3. In Workstation name, type the name of the workstation where the SAP job
runs. This is the workstation with the r3batch access method that
communicates with the remote SAP system. If you do not know the name of
the workstation, click the browse button (...) to enter your filter criteria and click Search.
If you enter a string representing part of the workstation name, it must be
followed by the asterisk (*) wildcard character. Both the question mark (?) and
asterisk (*) are supported as wildcards. You can also simply use the asterisk
wildcard character (*) to display all workstations. Optionally, specify any of
the other search criteria available and click Search. From the results displayed,
select the workstation and click OK.
4. In Options file, specify an options file that resides on the specified
workstation. Each workstation can have one or more options files that can be
used to customize the behavior of the r3batch access method, except for
extended agent workstations, where only one options file can exist and
therefore does not need to be specified. For the workstation specified, enter
the file name of the options file or click the browse (...) button to search for
options files that reside on the specified workstation and select one.
5. Click Display. The list of available jobs on the remote SAP system for the
specified engine is displayed.
6. A list of SAP jobs on the remote SAP system is displayed.
10. From this panel, you can take the following actions:
Refresh
To refresh the content of the variant list with the information
contained in the SAP database.
New To create a new variant as described in “Creating or editing a
variant.”
View To display information on an existing variant.
Edit To modify information on an existing variant as described in
“Creating or editing a variant.”
Delete To delete a variant.
Set To associate the value chosen from the list with the ABAP step.
You can create or edit a variant from the Variant List panel. To display the Variant
List panel, see “Managing SAP variants using the Dynamic Workload Console” on
page 229.
Procedure
1. In the Variant List panel, click New or Edit. The Variant Information page is
displayed by default. If you are editing an existing variant, the fields and
selections are not empty.
v Variant Values:
In the Variant Values page, the fields and values are dynamically built
through r3batch depending on the characteristics of the variant or step and
are identical to the ones in the equivalent SAP panel.
You can edit SAP Standard R/3 jobs in two different ways in IBM Workload
Scheduler.
v The Dynamic Workload Console contains the Manage Jobs on SAP entry in the
portfolio for creating and editing SAP Standard R/3 jobs on remote SAP
systems.
v The Workload Designer window in the Dynamic Workload Console allows you to
create and edit remote SAP jobs. See "Creating SAP Standard R/3 jobs from the
Dynamic Workload Console" on page 224.
Procedure
1. Click Administration > Workload Design > Manage Jobs on SAP.
2. Select the name of the engine connection from which you want to work with
SAP jobs.
3. In the SAP Job Type section, leave the default setting, Standard R/3 Job.
4. In Workstation name, type the name of the workstation where the SAP job
runs. This is the workstation with the r3batch access method that
communicates with the remote SAP system. If you do not know the name of
the workstation, click the browse button (...) to enter your filter criteria and click Search. If
you enter a string representing part of the workstation name, it must be
followed by the asterisk (*) wildcard character. Both the question mark (?) and
asterisk (*) are supported as wildcards. You can also simply use the asterisk
wildcard character (*) to display all workstations. Optionally, specify any of the
other search criteria available and click Search. From the results displayed,
select the workstation and click OK.
5. In Options file, specify an options file that resides on the specified workstation.
Each workstation can have one or more options files that can be used to
customize the behavior of the r3batch access method, except for extended agent
workstations, where only one options file can exist and therefore does not need
to be specified. For the workstation specified, enter the file name of the options
file or click the browse (...) button to search for options files that reside on the
specified workstation and select one.
6. Click Display. The list of available jobs on the remote SAP system for the
specified engine is displayed.
7. Select the job you want to modify in the list and click Edit. The List Jobs on
SAP panel is displayed.
8. Edit the properties on the R/3 Job Definition and R/3 Steps pages as
appropriate. Refer to the contextual online help available for more detailed
information about the UI elements available on each page.
Note:
v On the R/3 Job Definition page, when you modify the Job Class, Target
Host, or Server Group and click OK, the Job ID is maintained and remains
synchronized with the one associated with the current job. However, when you
modify the Job Name and click OK, the Job ID is automatically replaced
with the one associated with the new job name.
v On the R/3 Steps page, for each step that you modify, the new step information
is saved in the SAP database. For each step that you add or delete, the Job ID
is maintained and remains synchronized with the one associated with the
modified step.
9. Click OK to save your changes.
where:
class_value
The priority with which the job runs in the SAP system. For details, see
Table 52 on page 236.
job_ID The unique SAP job ID. For details, see Table 52 on page 236.
job_name
The name of the SAP job to run. For details, see Table 52 on page 236.
The task string syntax is shown here in linear form; brackets indicate optional
parameters, and braces with vertical bars indicate mutually exclusive choices:

-job job_name [-i|-id job_ID] [-user user_name]
[-host|-ts host_name] [-sg server_group]
[-client source_client] [-exec_client target_client]
[-rfc_client rfc_logon_client] [-c class_value]
[-bdc_job_status_failed bdc_processing] [-nobdc|-nobdcwait]
[-bapi_sync_level {high|medium|low}] [-s starting_step_number]
[-sStep_number attribute_name[=attribute_value]]...
[-vStep_number variant_name] [-vtxtStep_number variant_description]
[-vparStep_number name=variant_value]
[-vselStep_number name={i|e}#operation#lowest[#highest]]
[-vtempStep_number] [-recipient R/3_login_name]
[-rectype recipient_type] [-flag {reccp|recbl}] [-flag recex]
[-flag recnf] [-flag {im|immed}]
[-flag {enable_applinfo|disable_applinfo}]
[-flag {enable_appl_rc|disable_appl_rc}]
[-flag {enable_joblog|disable_joblog}]
[-flag {enable_job_interceptable|disable_job_interceptable}]
[-flag {enable_spoollist|disable_spoollist}]
[-flag pc_launch] [-debug] [-tracelvl {1|2|3}] [-rfctrace]
Table 52 on page 236 describes the parameters for the task string to define SAP
jobs.
Note:
1. You can specify both -i or -id and -user in the same job definition, but the
user name is ignored.
2. When you specify the job ID, both -client and -exec_client are ignored
because the ID is unique for the entire SAP system.
3. Typically, the -debug and -trace options are for debugging the extended agent
and should not be used in standard production.
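For example, the following task string (the job name and client numbers are hypothetical) reads the job definition from SAP client 100 and runs it on client 200 with high priority; because no job ID is specified, the -client and -exec_client parameters are honored:

```
-job PAYROLL01 -client 100 -exec_client 200 -c A
```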
Table 52. Task string parameters for SAP jobs (continued). The following are the task string parameters for the SAP jobs.
Section Parameters Description GUI Support
JOB -client source_client The number that identifies the SAP client where the
job definition is to be found, regardless of the client
number defined by the r3client keyword in the
options file. This parameter has no effect if a job ID is
specified in the job definition.
-exec_client target_client The number that identifies the SAP client where the
job is to be run, regardless of the client number
defined by the r3client keyword in the options file.
This parameter has no effect if a job ID is specified in
the job definition.
-rfc_client rfc_logon_client The number that identifies the SAP client to be used
for RFC logon. This value overwrites the value
specified by the r3client keyword in the corresponding
r3batch options file.
-c class_value The priority with which the job runs in the SAP U
system. Possible values are:
A High priority
B Medium priority
C Low priority. This is the default value.
-bdc_job_status_failed bdc_processing How IBM Workload Scheduler sets the completion U
status of a job running BDC sessions, according to a
possible BDC processing failure. The allowed values
are:
n If at least n BDC sessions failed (where n is
an integer greater than 0), IBM Workload
Scheduler sets the job completion status as
failed.
all If all the BDC sessions failed, IBM Workload
Scheduler sets the job completion status as
failed.
ignore When all the BDC sessions complete,
regardless of their status, IBM Workload
Scheduler sets the job completion status as
successful. This is the default.
VARIANT -vstep_number name The variant name for the specified step number. U
-vtxtstep_number variant_description The textual description of the variant, in the IBM U
Workload Scheduler logon language (customizable
with the TWSXA_LANG option of r3batch). The
maximum length is 30 characters.
-vparstep_number name=value For ABAP modules only. The value for a variant U
parameter for the specified step number. This
parameter is mandatory when creating a new variant.
See “Defining attributes for ABAP steps” on page 258
for a complete list of the supported attributes for
ABAP steps.
-vselstep_number For ABAP modules only. The value for a variant U
name=sign#operation#lowest[#highest] selection option for the specified step number.
sign Sign of the operation. Possible values are:
I Include
E Exclude
operation
Possible values are:
EQ Equals
NE Not equal to
BT Between
NB Not between
LT Less than
LE Less than or equal to
GT Greater than
GE Greater than or equal to
CP Contains pattern
NP Does not contain pattern
lowest Low value of the selection. You can use up to
45 characters.
highest High value of the selection. You can use up
to 45 characters. This attribute is optional.
For a complete list of the supported attributes for ABAP steps, see "Defining
attributes for ABAP steps" on page 258.
-vtempstep_number For ABAP modules only. Specifies to assign a
temporary variant to the specified step number.
Temporary variants are created ad-hoc by the SAP
system and assigned to the job instance when it is
run. The lifecycle of the temporary variant is
determined by the SAP system. If the job is deleted by
SAP, then the temporary variant is deleted. See
“Examples: Dynamically defining and updating SAP
jobs” on page 262 to refer to examples that
demonstrate the behavior of temporary variants.
-flag pc_launch Specifies whether to launch child jobs that are in scheduled
state. Possible values are:
ON The product launches child jobs that are in scheduled state.
OFF The product does not launch child jobs that are in scheduled
state. This is the default value.
Note: You can use this option only if you activated the
parent-child feature on the SAP system. On an XBP 2.0 (or later)
SAP system, you can activate this feature by using the INITXBP2
ABAP report.
TRACING -debug Enables maximum trace level. U
-tracelvl 1|2|3 Specifies the trace setting for the job. Possible values U
are:
1 Only error messages are written in the trace
file. This is the default.
2 Informational messages and warnings are
also written in the trace file.
3 The most verbose debug output is written in
the trace file.
For detailed information, refer to “Configuring the
tracing utility” on page 184.
-rfctrace | -trace Enables RFC trace.
The following is an example for an SAP job named BVTTEST with ID 03102401 and
user myuser:
-job BVTTEST -i 03102401 -user myuser -debug
Perform the following steps to display details for standard jobs on specific
workstations.
For information about how to display details about a job that submits an SAP
process chain, refer to “Displaying details about a process chain job” on page 284.
Procedure
1. Click Administration > Workload Design > Manage Jobs on SAP.
2. In Engine name, select the name of the IBM Workload Scheduler engine
connection from which you want to view SAP job details.
To verify the status of a standard SAP job, perform the following steps:
Procedure
1. Click Administration > Workload Design > Manage Jobs on SAP.
2. In Engine name, select the name of the IBM Workload Scheduler engine
connection from which you want to verify the status of an SAP job.
3. In Workstation name, type the name of the workstation where the SAP job
runs. This is the workstation with the r3batch access method that
communicates with the remote SAP system. If you do not know the name of
the workstation, click the browse (...) button to enter your filter criteria and click Search. If
you enter a string representing part of the workstation name, it must be
followed by the asterisk (*) wildcard character. Both the question mark (?) and
asterisk (*) are supported as wildcards. You can also simply use the asterisk
wildcard character (*) to display all workstations. Optionally, specify any of the
other search criteria available and click Search. From the results displayed,
select the workstation and click OK.
4. In Options file, specify an options file that resides on the specified workstation.
Each workstation can have one or more options files that can be used to
customize the behavior of the r3batch access method, except for extended agent
workstations, where only one options file can exist and therefore does not need
to be specified. For the workstation specified, enter the file name of the options
file or click the browse (...) button to search for options files that reside on the
specified workstation and select one.
5. Click Display. The list of available jobs for the specified engine is displayed.
6. Select the job for which you want to verify the status and click Status. The
current status for the SAP job is displayed, as well as the database name where
the job is installed.
7. When you have finished verifying the status for the job, click OK to return to
the list of SAP jobs on the workstation specified.
To delete a standard SAP job from the SAP database, perform the following steps:
Procedure
1. Click Administration > Workload Design > Manage Jobs on SAP.
2. In Engine name, select the name of the IBM Workload Scheduler engine
connection from which you want to delete the SAP job.
3. In Workstation name, type the name of the workstation where the SAP job
runs. This is the workstation with the r3batch access method that
communicates with the remote SAP system. If you do not know the name of
the workstation, click the browse (...) button to enter your filter criteria and click Search. If
you enter a string representing part of the workstation name, it must be
followed by the asterisk (*) wildcard character. Both the question mark (?) and
asterisk (*) are supported as wildcards. You can also simply use the asterisk
wildcard character (*) to display all workstations. Optionally, specify any of the
other search criteria available and click Search. From the results displayed,
select the workstation and click OK.
4. In Options file, specify an options file that resides on the specified workstation.
Each workstation can have one or more options files that can be used to
customize the behavior of the r3batch access method, except for extended agent
workstations, where only one options file can exist and therefore does not need
to be specified. For the workstation specified, enter the file name of the options
file or click the browse (...) button to search for options files that reside on the
specified workstation and select one.
5. Click Display. The list of available jobs for the specified engine is displayed.
6. Select the job or jobs you want to delete and click Delete. A confirmation
message prompts you to confirm the delete action.
7. When the delete action is complete, click OK to return to the list of SAP jobs
on the workstation specified.
If the application servers defined in a group are modified in the SAP system, the
job defined as belonging to that server group is not affected and does not need to
be modified. The batch execution targets are reorganized in the SAP system
without having to change job definitions in IBM Workload Scheduler.
The INTRO state indicates that IBM Workload Scheduler is in the process of
introducing the job, but in SAP, the job has not yet entered the ready state. Because
it takes some time to get a job queued and into the ready column, the INTRO state
might last a few minutes if the SAP system is particularly busy.
Even if a job is finished in SAP, IBM Workload Scheduler keeps it in the EXEC
state if its BDC sessions are not complete and you have not selected the Disable
BDC Wait option. For details about this option, see “Using the BDC Wait option”
on page 267.
Managing spools
Browse spool lists on request without having to download the entire spool,
which can occupy significant space on the file system.
Spool lists can be very large, so rather than downloading them as part of a job
run, you can browse the spool list a few chunks at a time, even if you have
disabled the retrieve_spoollist option, which appends the spool list to the IBM
Workload Scheduler job log.
From the Dynamic Workload Console, you can list the spool data available for SAP
Standard R/3 jobs that have run. Each spool is identified by the following
information:
v The spool number.
v The related step number.
v The name of the spool request.
v The title of the spool request.
v The total number of pages for the spool information.
v The user who executed the SAP job related to the spool.
v The date the spool was created based on the Coordinated Universal Time (UTC)
time standard.
v The client for which the spool was created.
Browsing spool data
You can list the spool data available for SAP Standard R/3 jobs that have run and
browse the contents of the spool.
Procedure
1. In the navigation bar at the top, click System Status and Health > Workload
Monitoring > Monitor Workload.
2. In the Monitor Workload input fields enter the engine name, the plan, and any
filtering data that helps you filter the selection of jobs (you can also select Edit
for a guided selection of filtering criteria) and select Run.
3. In the output table select an SAP Standard R/3 job and click More Actions >
Show Spool List. The list of spool data available for the selected job is
displayed.
4. Select a spool and click Spool.
Results
By default, the first ten pages of the spool are made available. You can change this
default by editing the number of pages specified in Pages for screen. Use the page
functions to jump to a specific page number, jump to the last page of the spool,
jump to the first page of the spool, or move forward or back through the number
of pages indicated by Pages for screen.
This section describes how to kill an IBM Workload Scheduler job that submits
either a standard SAP job or an SAP process chain.
The IBM Workload Scheduler job status is set to ABEND. The SAP job or process
chain is set to canceled in the SAP system.
Note: If you kill a process chain job, the SAP system stops the process chain as
soon as the process that is currently running completes.
Procedure
1. Use the Monitor Workload query of the Dynamic Workload Console to display
a list of defined job instances containing the job you want to kill. In the
navigation bar at the top, click System Status and Health > Workload
Monitoring > Monitor Workload.
2. In the Monitor Workload input fields enter the engine name, the plan, and any
filtering data that helps you filter the selection of jobs (you can also select Edit
for a guided selection of filtering criteria) and select Run.
3. The Monitor Jobs panel is displayed. Select the job instance you want to kill
and click More Actions > Kill.
You can raise events on XBP 2.0 (or later) SAP jobs in the IBM Workload Scheduler
database in one of the following ways:
Using the Monitor Workload in the Dynamic Workload Console
Perform the following steps:
1. On the SAP system, create a job that has an SAP event as its start
condition. When you create this job, its status is released.
2. Check that this job was not intercepted by the interception function.
3. Log in to the Dynamic Workload Console.
4. In the navigation bar at the top, click System Status and Health >
Workload Monitoring > Monitor Workload.
5. In the Monitor Workload window select the engine, enter Workstation
in the Object Type field, and select the plan to display the list of
workstations you want to monitor. Click Run.
A list of workstations is displayed.
6. Select a workstation that has been defined to connect to a remote SAP
system.
7. From the toolbar, select More Actions > Raise Event. The Raise Event
panel opens.
3. In the Working List pane, select New > Job Definition.
4. Select the Native category and then either Windows or UNIX.
5. Use the General page to provide general information about the new job
definition.
6. Use the Task page to provide task information for the job.
7. In the Task page, select Command and in the command string type the
following command that raises the event:
TWS_home/methods/r3event -c workstation_name -u user_name
-e SAP_event_ID -p parameter
where:
workstation_name
The name of the workstation where the SAP R/3 job is defined.
user_name
The name of the SAP user with which the access method
connects to the SAP system. This is the name specified in the
r3user option.
SAP_event_ID
The identifier of the event.
parameter
The parameter defined for the event.
8. Save the job definition.
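As a concrete sketch, the command entered in the Task page might be composed as follows. The workstation name, SAP user, event identifier, and parameter below are hypothetical values introduced for illustration; substitute your own, and replace TWS_home with your installation directory:

```shell
# Hypothetical values -- replace with your workstation, the SAP user from
# the r3user option, and your event identifier and parameter.
WORKSTATION=SAPWS01
R3_USER=R3USER
EVENT_ID=SAP_TEST_EVENT
EVENT_PARM=150

# Compose the r3event command line as it would appear in the Task page.
CMD="TWS_home/methods/r3event -c $WORKSTATION -u $R3_USER -e $EVENT_ID -p $EVENT_PARM"
echo "$CMD"
```

The composed string is what you would paste into the command field of the job definition.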
What to do next
See “Defining conditions and criteria” on page 263 for information about how to
define criteria that manage which raised events to log.
To rerun a standard SAP job, you can use one of the following user interfaces:
conman
For details, refer to the IBM Workload Scheduler: User's Guide and Reference.
Dynamic Workload Console
For details about how to rerun a job that submits an SAP process chain, refer to
“Rerunning a process chain job” on page 286.
For an SAP extended agent, a step is the numeric step of the SAP instruction from
which a job can be restarted. Before you rerun an SAP job with IBM Workload
Scheduler, you have the option of providing a step name for the job. This affects
r3batch in the following ways:
v If you use a step name that is up to 9 digits (or 8 digits preceded by a character)
in length, this name is used as the starting step number for the rerunning job.
v If you use any different format, the name is ignored and the job is rerun starting
from the first step.
z/OS In z/OS environments, you need to set the status of the job to Ready
before you can rerun the job.
Note: By default, if you specify a job step to rerun, the new job is assigned the
name of the step you indicated. To keep the original job name, set the IBM
Workload Scheduler global option enRetainNameOnRerunFrom to yes. This
option works only when used with the following arguments: rr
jobselect;from=[wkstat#]job. For details about these arguments, see IBM Workload
Scheduler: User's Guide and Reference, Managing objects in the plan - conman, Conman
commands, rerun. For details about this option, see IBM Workload Scheduler: Planning
and Installation.
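As a hedged sketch of these arguments (the workstation, job stream, and job names below are hypothetical, and the exact jobselect syntax is documented in the User's Guide and Reference), a conman rerun that restarts an SAP job from step 120 by passing a 9-digit step name through the from argument might be composed like this:

```shell
# Hypothetical names: ACCT (workstation), DAILY (job stream), SAPJOB1 (job).
# A 9-digit name in the from argument is used by r3batch as the starting
# step number for the rerun (step 120 in this sketch).
STEP=000000120
RERUN_ARGS="rr ACCT#DAILY.SAPJOB1;from=ACCT#${STEP}"
echo conman "\"$RERUN_ARGS\""
```

With enRetainNameOnRerunFrom set to yes, the rerun job would keep its original name instead of taking the step name.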
When r3batch reruns a job from its first step, either because you specified it as the
starting step or because no starting step was specified, it uses the new copy feature,
if applicable. If the starting step is greater than one, r3batch uses the old copy to
rerun the job. For a description about the difference between the new and old copy
of a rerunning job, refer to “Old copy and new copy of a rerunning job” on page
249.
To rerun an SAP Standard R/3 job from the Dynamic Workload Console, perform
the following steps:
Procedure
1. In the Dynamic Workload Console select System Status and Health >
Workload Monitoring > Monitor Workload.
2. In the Monitor Workload input fields select Job as the Object Type, the engine
name, the plan, and any filtering data that helps you filter the selection of jobs
(you can also select Edit for a guided selection of filtering criteria) and select
Run.
3. A list of jobs is displayed. Select an SAP Standard R/3 job.
4. Rerun the job.
Distributed Distributed environment
a. Click Rerun.... The General properties for the rerun operation are
displayed.
b. Optionally, you can choose not to rerun the same job but instead
substitute the selected SAP job with a different job definition and
run it. Type the job definition name in the From Job Definition
field, or use the browse button to search for it and select it.
c. Optionally, in the Workstation Name field, type the name of the
workstation on which you want to rerun the job.
d. Optionally, in Step, enter a specific numeric step of the SAP
instruction from which you want to rerun the job rather than
rerunning the whole job.
e. Optionally, specify the start and finish time for the job.
f. Click Rerun.
The job reruns immediately or at the specified start time.
z/OS z/OS environment
In a z/OS environment, an alias for the job name is not required so the
job reruns with the same name. The list of jobs always reports the latest
action performed on the job.
a. Before you can rerun a job, you must change the status of the job to
Ready. Select a job and click Set Status.
b. In Change Status, select Ready.
c. Click OK to return to the list of jobs.
The job reruns immediately and the internal status reports Started.
The new copy feature is available for SAP versions 3.1i and later. It copies an entire
job, preserving steps, job class, and all print and archive parameters. It is
performed by using a new SAP function module that is part of the SAP Support
Package as stated in the SAP Notes 399449 and 430087.
The old copy feature, instead, is based on standard SAP function modules, and
creates a new SAP job and adds the steps with a loop that starts from the step
name or number you specified. Be aware that, unless you have XBP 2.0 or later:
v The old copy does not preserve all the print and archive parameters.
v The job class of the copy is always set to class C.
Refer to “Print parameter and job class issues” on page 205 to learn how to resolve
the problem of lost job class and print and archive parameters.
SAP Note 758829 is required to ensure correct operation of the new copy and old
copy features. See also Table 77 on page 329.
When you launch a job created as described in “Creating SAP Standard R/3 jobs
from the Dynamic Workload Console” on page 224 and “Task string to define SAP
jobs” on page 233, IBM Workload Scheduler makes a copy of the predefined job
(also known as a template job) and runs the copy. If you want to run the job on
several SAP systems, you must manually create the template job on each system.
To take full advantage of this feature, make sure that you have XBP version 2.0 or
later installed, because earlier versions of XBP do not support the full set of print
and archive parameters, or provide a way to set the job class or the spool list
recipient.
You can specify these parameters in the following places when you define their
associated IBM Workload Scheduler jobs:
v In the R/3 Command Line section of the Task page of the Submit Ad Hoc Jobs
action from the Dynamic Workload Console.
v In the R3 Command Line field of the More Options page of the SAP job
definition, if you use the Dynamic Workload Console and select an SAP job
definition.
v As arguments of the scriptname keyword in the job definition statement, if you
use the IBM Workload Scheduler command line.
v As arguments of the JOBCMD keyword in the JOBREC statement in the SCRIPTLIB of
IBM Workload Scheduler for z/OS, if you are scheduling in an end-to-end
environment. The following is an example of a JOBREC statement:
JOBREC
JOBCMD('/-job job_name -user user_name -i job_ID -c class_value')
JOBUSR(TWS_user_name)
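For the IBM Workload Scheduler command-line case, the following is a hedged sketch of a composer job definition that carries the task string as the scriptname argument. The workstation, job, and user names are hypothetical, and the task string mirrors the JOBREC sample above; refer to the User's Guide and Reference for the authoritative job definition syntax:

```
SAPCPU#SAPJOB1
 SCRIPTNAME "/-job BVTTEST -user myuser -i 03102401 -c A"
 STREAMLOGON twsuser
 DESCRIPTION "Submits the predefined SAP job BVTTEST"
 RECOVERY STOP
```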
To define and submit an SAP R/3 job dynamically, use the following syntax:

-job job_name -user user_name [-c class_value]
  [-sg server_group] [-client source_client] [-exec_client target_client]
  [-rfc_client rfc_logon_client]
  [-bdc_job_status_failed bdc_processing] [-nobdc | -nobdcwait]
  [-bapi_sync_level high|medium|low] [-s starting_step_number]
  [-sstep_number attribute_name[=attribute_value]] ...
  [-vstep_number variant_name] [-vtxtstep_number variant_description]
  [-vparstep_number name=variant_value]
  [-vselstep_number name=i|e#operation#lowest[#highest]] [-vtempstep_number]
  [-recipient R/3_login_name] [-rectype recipient_type]
  [-flag reccp|recbl] [-flag recex] [-flag recnf] [-flag im|immed]
  [-flag enable_applinfo|disable_applinfo]
  [-flag enable_appl_rc|disable_appl_rc]
  [-flag enable_joblog|disable_joblog]
  [-flag enable_job_interceptable|disable_job_interceptable]
  [-flag enable_spoollist|disable_spoollist] [-flag pc_launch]
  [-debug] [-tracelvl 1|2|3] [-rfctrace]
Table 54 on page 252 describes the parameters for the task string to define SAP
jobs dynamically.
Table 54. Task string parameters for SAP jobs (dynamic definition) (continued). This table shows task string
parameters for SAP jobs
Section Parameters Description
JOB -bdc_job_status_failed bdc_processing How IBM Workload Scheduler sets the completion status
of a job running BDC sessions, according to a possible
BDC processing failure. The allowed values are:
n If at least n BDC sessions failed (where n is an
integer greater than 0), IBM Workload Scheduler
sets the job completion status as failed.
all If all the BDC sessions failed, IBM Workload
Scheduler sets the job completion status as
failed.
ignore When all the BDC sessions complete, regardless
of their status, IBM Workload Scheduler sets the
job completion status as successful. This is the
default value.
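As an illustration of this parameter (the job name, user, and ID below are hypothetical), a task string that marks the IBM Workload Scheduler job as failed only when every BDC session started by the SAP job fails might look like:

```
-job BDCJOB1 -user R3USER -i 07250001 -bdc_job_status_failed all
```

Replacing all with an integer such as 2 would instead fail the job as soon as at least two BDC sessions fail.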
Table 54. Task string parameters for SAP jobs (dynamic definition) (continued). This table shows task string
parameters for SAP jobs
Section Parameters Description
VARIANT -vstep_number name The variant name for the specified step number.
-vtxtstep_number variant_description The textual description of the variant, in the IBM
Workload Scheduler logon language (customizable with
the TWSXA_LANG option of r3batch). The maximum length
is 30 characters. Not valid for temporary variants.
-vparstep_number name=value For ABAP modules only. The value for a variant
parameter for the specified step number. This parameter
is mandatory when creating a new variant. For a
complete list of the supported attributes for ABAP steps,
see “Defining attributes for ABAP steps” on page 258.
-vselstep_number For ABAP modules only. The value for a variant
name=sign#operation#lowest[#highest] selection option for the specified step number.
sign Sign of the operation. Possible values are:
I Include
E Exclude
operation
Possible values are:
EQ Equals
NE Not equal to
BT Between
NB Not between
LT Less than
LE Less than or equal to
GT Greater than
GE Greater than or equal to
CP Contains pattern
NP Does not contain pattern
lowest Low value of the selection. You can use up to 45
characters.
highest High value of the selection. You can use up to
45 characters. This attribute is optional.
For a complete list of the supported attributes for ABAP
steps, see “Defining attributes for ABAP steps” on page
258.
-vtempstep_number For ABAP modules only. Specifies to assign a temporary
variant to the specified step number. Temporary variants
are created ad-hoc by the SAP system and assigned to
the job instance when it is run. The lifecycle of the
temporary variant is determined by the SAP system. If
the job is deleted by SAP, then the temporary variant is
deleted. See “Examples: Dynamically defining and
updating SAP jobs” on page 262 for examples
that demonstrate the behavior of temporary variants.
Table 54. Task string parameters for SAP jobs (dynamic definition) (continued). This table shows task string
parameters for SAP jobs
Section Parameters Description
-flag pc_launch Specifies whether to launch child jobs that are
in scheduled state.
ON The product launches child jobs
that are in scheduled state.
OFF The product does not launch child
jobs that are in scheduled state.
This is the default value.
Note: You can use this option only if you
activated the parent-child feature on the
SAP system. On the XBP 2.0 (or later) SAP
system, you activate this feature by using
the INITXBP2 ABAP report.
TRACING -debug Enables maximum trace level.
-tracelvl 1|2|3 Specifies the trace setting for the job. Possible values are:
1 Only error messages are written in the trace file.
This is the default.
2 Informational messages and warnings are also
written in the trace file.
3 The most verbose debug output is written in the
trace file.
For more details, refer to “Configuring the tracing
utility” on page 184.
-rfctrace Enables RFC trace.
-trace
Note: See “Examples: Dynamically defining and updating SAP jobs” on page 262
for examples that demonstrate the behavior of variants and temporary
variants.
1. The following rules apply when you create or update SAP jobs dynamically:
v To create or reference a variant within an ABAP step, you can use one of the
following equivalent syntaxes:
– -s1 Variant=Var1
– -s1 Parameter=Var1
– -v1 Var1
v If a variant does not exist, it is created with the parameters specified in the
job definition statement. In this case, all the required attributes of the variant
must be given a value. You cannot create empty variants. For example, if you
specify -vtemp1 with no value assigned, an empty temporary variant is
erroneously created.
v If a variant is already present in the SAP system, its values are modified
according to the command line parameters. If the existing variant is an
extended one, a new instance of it is created with resolved placeholders and
updated counters. This new variant instance is then updated using the
values from the command line. Finally, the job step is run using this variant
instance.
v All changes to the variant values are permanent. That is, IBM Workload
Scheduler neither restores the old values of the variants, nor deletes the
variants created after the job is run. IBM Workload Scheduler does not
change the case of the variant values.
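To illustrate these rules with a hedged sketch (the job, program, and variant names are hypothetical, following the pattern of the examples on page 262), the following task string references variant VAR1 on step 1. If VAR1 does not exist, it is created with the parameter value given; if it exists, that parameter value is permanently updated before the step runs:

```
-job TESTJOB01 -C A -flag type=exec -user R3USER
-s1 type=A -s1 program=MYPROG1
-v1 VAR1 -vpar1 TESTNAME=TST
```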
Table 55 shows a complete list of the supported attributes for ABAP step module
definition:
Table 55. Supported attributes for ABAP step definition
Attribute
name Synonym Description Required
type typ Specify the step type. Possible values are: U
v A
v ABAP
The product performs a check for correct
attribute values prior to launching the job.
program Specify the ABAP program name. U
parameter Specify the ABAP variant name. U
user authcknam Specify the user of the step. U
language lang Specify the step language. U
Table 55. Supported attributes for ABAP step definition (continued)
Attribute
name Synonym Description Required
pr_exp pexpi
Print Parameter: Spool retention period
Note:
1. This attribute is available for BC-XBP 2.0 and later.
2. This attribute is a flag, that is, it does not have a value, for example: -s2
pr_release.
Validation is performed before the job is created in the SAP system. If the
validation fails, the IBM Workload Scheduler job goes into the ABEND state.
Table 56. Supported attributes for external programs and external commands step
definition (continued)
Attribute
name Synonym Description Required
language lang
Step language.
Note:
1. This attribute is available for BC-XBP 2.0 and later.
2. This attribute is a flag, that is, it does not have a value, for example: -s2
pr_release.
Validation is performed before the job is created in the SAP system. If the
validation fails, the IBM Workload Scheduler job goes into the ABEND state.
The variable substitution process occurs while IBM Workload Scheduler is creating
the Symphony file.
A temporary variant is created using the information indicated in the expression.
The following is the syntax to be used:
-vpar1 <parameter_name>=<parameter_value> ...
-vsel1 <selection_option_name>
... -vtemp1
The following example shows how you can submit a job that creates a
temporary variant that is assigned to step number 1, and assigns a value to
a variant parameter for step number 1:
-job TESTJOB01 -C A -flag type=exec -user R3USER
-s1 type=A -s1 program=MYPROG1
-vtemp1 -vpar1 TESTNAME=TST
The following example shows how you can submit a job that creates a
temporary variant that is assigned to step number 1, assigns a value to a
variant parameter for step number 1, and assigns a value to a variant
selection option (date) for step number 1:
-job TESTJOB01 -C A -flag type=exec -user R3USER
-s1 type=A -s1 program=MYPROG1
-vtemp1 -vpar1 FILENAME=FLN
-vsel1 date=E#BT#20110101#20110412
Assign a temporary variant to the specified step number
The following is the syntax to be used:
-v1 <temporary_variant_name> -vtemp1
The following example shows how you can submit a job that substitutes the
value of an existing temporary variant with a new value. If the temporary
variant does not exist, the expression returns an error.
-job TESTJOB01 -C A -flag type=exec -user R3USER
-s1 type=A -s1 program=MYPROG1
-vtemp1 -v1 &000000000001 -vpar1 TESTNAME=TST2
IBM Workload Scheduler supports the BC-XBP 3.0 interface, which provides
functions to control R/3 batch jobs.
The Criteria Manager enables you to define a criteria profile, which is a container
for a combination of criteria. The criteria profile can be of various types, and each
criteria type has a standard set of selection criteria. For each criterion, you can
specify a single value, a range of values by indicating a lower and upper limit, or
multiple values. The following is the standard set of selection criteria for each
criteria profile type. In addition to these, you can also see any other types of
criteria profiles you have defined on your SAP system:
Event History
EVENTID
The identifier of the event defined in the SAP system.
EVENTPARM
The parameter of the event defined in the SAP system.
PARMID
The identifier of the parameter of the event defined in the SAP
system.
Event History Reorg
Event State
The state of the event.
Event Timestamp
The timestamp for the event.
Interception
Job Name
A name identifying the job.
Job Class
The class assigned to the job that represents the priority with
which the job runs in the SAP system.
You create and combine criteria in a criteria hierarchy. The criteria hierarchy is a
set of all the criteria that must be fulfilled for a specific action to take place in the
specific context. For example, you can define a criteria hierarchy to log all raised
events in the SAP event history with an event name that begins with
"CRITICAL_EVENT" and with an event argument equal to 150.
The criteria in the hierarchy are grouped in nodes, and the relationships between
the nodes are determined by the logical operators AND and OR. You can nest
nodes in other nodes.
To have the criteria profile begin processing, you must activate it. Only one
criteria profile of the same type can be active at one time.
An example
See “Example: Defining which raised events to log” for an example that
demonstrates how to build a criteria hierarchy to manage the logging of raised
events in the SAP event history.
The event history enables IBM Workload Scheduler to consume events that are
raised by the SAP system.
Checking the log of raised events enables you to:
v Verify that an event was raised in the system.
v Verify whether the event was processed.
In the example that follows, an event history criteria profile is created that contains
the definition of the criteria, the criteria hierarchy, that events must fulfill to be
logged in the event history. The criteria profile must then be activated so that it
can begin processing events according to the criteria.
The criteria profile, Event profile 1, contains a criteria hierarchy that logs only
those events in the event history with an event name that begins with
CRITICAL_EVENT and an event argument equal to "789".
Create a criteria profile, Event profile 1, of type, Event History, to contain the
criteria hierarchy.
Procedure
1. In the portfolio, click Administration > Workload Design > Manage SAP
Criteria Profiles.
2. In Engine name, select the name of the IBM Workload Scheduler engine
connection from which you want to work with SAP jobs.
3. In Workstation name, type the name of the workstation where the SAP job
runs. This is the workstation with the r3batch access method that
communicates with the remote SAP system. If you do not know the name of
the workstation, click the Lookup Workstations icon to enter your filter criteria
and click Search. If you enter a string representing part of the workstation
name, it must be followed by the asterisk (*) wildcard character. Both the
question mark (?) and asterisk (*) are supported as wildcards. You can also
simply use the asterisk wildcard character (*) to display all workstations.
Optionally, specify any of the other search criteria available and click Search.
From the results displayed, select the workstation and click OK.
Results
The criteria profile is displayed in the list of criteria profiles and it is not yet active.
What to do next
Next, begin building the criteria hierarchy. The criteria profile is the container for
the criteria hierarchy.
In this example, define a criterion that logs all events whose name begins with
CRITICAL_EVENT and with event argument equal to 789.
Procedure
5. In Options, select Pattern and in Single Value or Lower Limit, type
CRITICAL_EVENT*. This sets the condition for the event name.
Results
The criteria profile now contains a criterion that specifies which raised events must
be logged. You can continue to create another criterion in the same parent node, or
you can nest either an AND or an OR node in the parent node to determine the
logical relation between the criteria that the nested node will contain. Add an AND
node within which you can create one or more criteria where all the criteria
specified in the node must be fulfilled, or add an OR node within which you can
create one or more criteria where at least one of the criteria specified must be
fulfilled.
What to do next
To apply this criteria profile so that it begins processing events according to the
criteria defined, you must activate the criteria profile.
A criteria profile can either be active or not active. For a criteria profile to take
effect, the profile must be activated. Only one criteria profile of the same type can
be active at one time. Criteria profiles cannot be edited if they are in the active
state. Follow the procedure to activate the Event profile 1 criteria profile.
Procedure
1. Select the Event profile 1 criteria profile from the table of criteria profiles.
Results
The status of the criteria profile is updated to show that it is now active. The
criteria profile can now begin to process raised events according to the
specifications of the criteria hierarchy and log them to the event history. If another
criteria profile of the same criteria type was active, its status changes to inactive.
The Batch Data Collector (BDC) Wait option prevents other IBM Workload
Scheduler jobs that are dependent on the R/3 job from being launched until all of
the related BDC sessions for the R/3 job have ended.
To use the option, an R/3 job must write informational messages in its job log.
This can be done by modifying the SAP function module BDC_OPEN_GROUP as
follows:
FUNCTION BDC_OPEN_GROUP.
...
CALL 'BDC_OPEN_GROUP' ID 'CLIENT' FIELD CLIENT
ID 'GROUP' FIELD GROUP
ID 'USER' FIELD USER
ID 'KEEP' FIELD KEEP
ID 'HOLDDATE' FIELD HOLDDATE
ID 'DESTINATION' FIELD DEST
ID 'QID' FIELD QID
ID 'RECORD' FIELD RECORD
ID 'PROG' FIELD PROG.
*
IF SY-SUBRC EQ 0.
BQID = QID.
BUSER = SY-MSGV1.
BGROUP = GROUP.
* CALL FUNCTION 'DB_COMMIT'.
CALL FUNCTION 'ENQUEUE_BDC_QID'
EXPORTING DATATYP = 'BDC '
GROUPID = BGROUP
QID = BQID
EXCEPTIONS FOREIGN_LOCK = 98
SYSTEM_FAILURE = 99.
IF SY-SUBRC EQ 0.
message i368(00) with 'BDCWAIT: ' qid.
ENDIF.
ENDIF.
*
PERFORM FEHLER_BEHANDLUNG USING SY-SUBRC.
*
*
ENDFUNCTION.
Note: The actual parameters of the call of the C function (CALL 'BDC_OPEN_GROUP'
ID ...) might vary depending on the SAP release. With this approach, you obtain
a global change in your R/3 system.
The completion status of an R/3 job launched by IBM Workload Scheduler is based
on the value you set for the bdc_job_status_failed option. By default, this option is
set to ignore, meaning that the job is considered successfully completed when the
BDC sessions are finished, regardless of their success or failure. For details about
the bdc_job_status_failed option, refer to Table 52 on page 236.
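For example, to change the default behavior so that the job does not simply
ignore the outcome of the BDC sessions, you might set the option in the r3batch
common options file. This is only a sketch; the file location depends on your
installation, and the supported values are listed in Table 52 on page 236:

```
# r3batch common options file entry (sketch; see Table 52 on page 236
# for the supported values of this option)
bdc_job_status_failed=all
```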
Note: Distributed The process of defining relaunch criteria and collecting and
relaunching intercepted jobs is supported only in distributed environments and not
in z/OS environments.
Job interception is a feature of both the BC-XBP 2.0 and BC-XBP 3.0 interfaces. It
enables IBM Workload Scheduler to have a very sophisticated control over the jobs
launched by SAP R/3 users from the SAP graphical interface.
The job interception mechanism becomes active when the SAP R/3 job scheduler is
about to start an SAP R/3 job (that is, when the start conditions of an SAP R/3
job are fulfilled). It checks the job parameters (job name, creator, client) against
the entries in the SAP R/3 table TBCICPT1, and when the job parameters match the
criteria, the SAP R/3 job is set back to the scheduled status and is marked with a
special flag, denoting that the job has been intercepted. The criteria defined in the
criteria table establish which jobs are intercepted.
Job interception with the BC-XBP 2.0 interface is based on the single extended
agent workstation, whereas with the BC-XBP 3.0 interface, job interception is based
on the currently active job interception criteria profile.
Note:
v Jobs launched by IBM Workload Scheduler, or by any other external scheduler
using the BC-XBP interface, can be intercepted provided the job_interceptable
option in the common options file is set to ON, and the -flag
enable_job_interceptable keyword is included in the job definition.
v Ensure that the job interception and job throttling features are not running at the
same time. The interception collector jobs fail if a job throttler instance is
running. To stop the job throttler, refer to “Step 5. Starting and stopping the job
throttling feature” on page 295.
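As an illustration only, allowing jobs launched by the scheduler itself to be
intercepted involves both settings mentioned above. In this sketch, the SAP job
name is hypothetical:

```
# In the r3batch common options file (sketch):
job_interceptable=ON

# In the task string of the IBM Workload Scheduler job definition
# (the job name MYSAPJOB is hypothetical):
-job MYSAPJOB -flag enable_job_interceptable
```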
The following are the high-level steps required to implement job interception for
both the BC-XBP 2.0 and 3.0 interfaces.
Procedure
1. Install the BC-XBP 2.0 interface. Refer to SAP Note 604496 to know if your SAP
R/3 system already has the BC-XBP 2.0 interface, or which SAP R/3 support
package you need to install to enable it.
2. Define an IBM Workload Scheduler job to periodically collect the intercepted
SAP R/3 jobs.
Procedure
1. Verify if the BC-XBP 3.0 interface is already installed on the SAP R/3 system.
2. Define an IBM Workload Scheduler job to periodically collect the intercepted
SAP R/3 jobs.
3. Specify interception criteria in the SAP R/3 system.
4. Specify interception criteria in IBM Workload Scheduler from the Manage SAP
Criteria Profiles portlet on the Dynamic Workload Console.
5. Activate the job interception feature of the BC-XBP 3.0 interface.
Define an IBM Workload Scheduler job that uses the SAP R/3 interception collector
task to collect intercepted jobs and restart them.
To define an IBM Workload Scheduler job that collects intercepted jobs and
relaunches them, use the following syntax:
XANAME#JOBNAME
SCRIPTNAME "TWS_home/methods/r3batch -t HIJ -c XANAME"
DESCRIPTION "Collects intercepted jobs on SAP XA XANAME"
STREAMLOGON TWSuser
RECOVERY STOP
Where:
XANAME
Name of the extended agent workstation.
JOBNAME
Name of the IBM Workload Scheduler job.
TWS_home
Fully qualified path to your IBM Workload Scheduler installation.
-t HIJ This is the SAP R/3 task type to run the job interception collector. HIJ
stands for Handle Intercepted Jobs.
TWSuser
Name of the IBM Workload Scheduler user that launches the access
method.
The interception collector job runs at periodic intervals; for example, every 10
minutes. It retrieves all the jobs that have been intercepted since the last run of the
interception collector, and launches them again according to a template.
Because intercepted jobs remain in the Released and then Intercepted status until
they are relaunched, you need to use the SAP R/3 interception collector task to
collect and relaunch them.
To define an IBM Workload Scheduler job that collects and relaunches jobs use the
following syntax:
ENGINE_NAME_HOSTING_XA#JOBNAME
DOCOMMAND "TWS_home/methods/r3batch -t HIJ -c ENGINE_NAME_HOSTING_XA -- \
"-profile_id <profile_ID_number>\""
STREAMLOGON TWSuser
DESCRIPTION "Collects intercepted jobs on SAP ENGINE_NAME_HOSTING_XA"
TASKTYPE UNIX
RECOVERY STOP
Where:
ENGINE_NAME_HOSTING_XA
The name of the engine workstation hosting the XA workstation with the
r3batch access method that communicates with the SAP system.
JOBNAME
Name of the IBM Workload Scheduler job.
TWS_home
Fully qualified path to your IBM Workload Scheduler installation.
-t HIJ This is the SAP R/3 task type to run the job interception collector. HIJ
stands for Handle Intercepted Jobs.
-profile_id <profile_ID_number>
Specifies the identification number of the interception criteria profile on the
SAP system for XBP 3.0.
TWSuser
Name of the IBM Workload Scheduler user that launches the access
method.
The interception collector job runs at periodic intervals; for example, every 10
minutes. It retrieves all the jobs that have been intercepted since the last run of the
interception collector, and launches them again according to a template.
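For example, you might wrap the collector job in a job stream that runs it every
10 minutes. The following composer sketch reuses the names from the definition
above; the job stream name, workstation, and run cycle are illustrative:

```
SCHEDULE MASTER#ICP_COLLECT
ON RUNCYCLE RC1 "FREQ=DAILY;"
:
ENGINE_NAME_HOSTING_XA#JOBNAME
  EVERY 0010
END
```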
Note: If the interception collector is configured for XBP 3.0 job interception, but
the XBP 2.0 interface is configured on the SAP system, the collector fails. Ensure
the XBP interface versions are synchronized.
In SAP R/3, the interception criteria are held in table TBCICPT1. Only jobs that
match the criteria of this table are intercepted, when their start conditions are
fulfilled. All the other jobs are run normally.
You can maintain the entries in this table by using transaction se16 and setting the
following:
v Client number
v Job mask
v User mask
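For example, an entry with the following values (all illustrative) intercepts
every job whose name begins with ICP, regardless of the submitting user, on
client 100:

```
Client number: 100
Job mask:      ICP*
User mask:     *
```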
In IBM Workload Scheduler, interception criteria are defined and used by setting:
Table criteria
For BC-XBP 2.0
You use the Monitor Workload view of the Dynamic Workload
Console to set table criteria.
For details about how you set table criteria, see “Setting SAP R/3
table criteria on the extended agent workstation.”
For BC-XBP 3.0
You set table criteria from the Administration > Workload Design
> Manage SAP Criteria Profiles portlet from the Dynamic
Workload Console.
For details about how you set table criteria, see “Setting SAP R/3
criteria in the job interception criteria profile” on page 273.
Template files (optional)
For details about how you create template files, see “Using template files”
on page 275.
To set table criteria with the BC-XBP 2.0 interface on an SAP R/3 job using the
Monitor Workload of the Dynamic Workload Console, follow these steps:
Procedure
1. Log in to the Dynamic Workload Console.
2. In the navigation bar at the top, click System Status and Health > Workload
Monitoring > Monitor Workload.
3. In the Monitor Workload window select the engine, enter Workstation in the
Object Type field, and select the plan to display the list of workstations you
want to monitor. Click Run.
4. Select an extended agent workstation in the table of displayed workstations,
and click More Actions > Table Criteria... from the toolbar.
5. The Table Criteria panel is displayed. From this panel you can add, delete,
edit, or refresh criteria.
Figure 7. The Table Criteria panel
To set the criteria that define which SAP R/3 jobs to intercept and relaunch with
the BC-XBP 3.0 interface using the Dynamic Workload Console, perform the
following steps:
c. In JOB NAME, click to specify the value for the JOB NAME field.
d. Leave the default value Select to indicate that the specified selection
criterion is used when intercepting jobs.
e. In Options, select Pattern and in Single Value or Lower Limit, type ICP*.
This sets the condition for the job name.
f. Click Save to save the criterion definition.
8. Define the criteria that must be matched to relaunch intercepted jobs. Click
the Job Relaunch Criteria tab.
10. You can continue to define more criteria.
11. When you are done defining the criteria, save the criteria profile.
12. Select the criteria profile and then click Activate from the toolbar.
Results
The status of the criteria profile is updated to show that it is now active. The
criteria profile can now begin to intercept jobs according to the specifications of the
criteria hierarchy and relaunch them as defined in the IBM Workload Scheduler
job. If another criteria profile of the same criteria type was active, its status
changes to inactive.
A template is a file with extension .jdf located in the same directory as the
interception criteria file (TWS_home/methods/r3batch_icp). The template file contains
instructions for the interception collector about how to run the intercepted SAP
R/3 job under control of IBM Workload Scheduler. Its syntax corresponds to the
syntax of docommand in conman. You can use any text editor to maintain this file.
Ensure that the user, LJUser, is able to read and write to this file.
If the user template file is empty, a template file named default.jdf is used. If
default.jdf does not exist, the following instructions are used:
alias=SAP_$RUN_$JOBNAME_$JOBCOUNT
This means that the intercepted SAP R/3 jobs are to be restarted immediately,
because of the absence of the at= job option. Their IBM Workload Scheduler names
are composed of the string SAP_, the current run number of the interception
collector, and the name and ID of the SAP R/3 job.
The instruction set for restarting an intercepted SAP R/3 job is retrieved in the
following order:
1. From the template file, if an existing template is specified in the interception
criteria file.
2. From the default template file, if the template is specified in the interception
criteria file but does not exist, or if the template is not specified in the
interception criteria file.
3. From the default instruction set, if the default template file does not exist.
Using placeholders: In the template files you can use a number of placeholders
that are replaced by the interception collector at run time. They are listed in
Table 57.
Table 57. Placeholders for job interception template files
Placeholder Description
$CPU Name of the extended agent workstation where the
interception collector runs.
$CLIENT Client number of the intercepted SAP R/3 job.
$JOBNAME Name of the intercepted SAP R/3 job.
$JOBCOUNT Job ID of the intercepted SAP R/3 job.
$USER Name of the user who launched the SAP R/3 job.
$JOBNUM Job number of the interception collector.
$RUN Current run number of the interception collector.
$SCHED Schedule name of the interception collector.
$RAND Random number.
The template:
alias=ICP_$RAND_$JOBNAME_$JOBCOUNT_$CLIENT;at=1000
instructs the interception collector to restart the SAP R/3 job named DEMO_JOB with
job ID 12345678 on client 100 at 10:00 as IBM Workload Scheduler job
ICP_1432_DEMO_JOB_12345678_100.
Procedure
1. Run ABAP report INITXBP2. This report shows you the current status of the job
interception and parent-child features, and allows you to toggle the status of
both features.
2. Select the BC-XBP interface version as appropriate:
v Activate 3.0
v Activate 2.0
3. Save the changes.
The BC-XBP 2.0 interface allows you to determine if a job has launched subjobs,
together with their names and IDs, and so it is now possible to track them.
To activate this feature, use the INITXBP2 ABAP report, which you can also use to
toggle the status of job interception.
When the parent-child feature is active, IBM Workload Scheduler considers an SAP
R/3 job as finished only after all its child jobs have ended. The status of the IBM
Workload Scheduler job remains as EXEC while the parent job or any of its child
jobs are running.
The status of the IBM Workload Scheduler job becomes SUCC if the parent job and
all child jobs end successfully. If any of the jobs ended with an error, the status of
the IBM Workload Scheduler job becomes ABEND.
Note: The parent-child feature can interfere with job interception because,
although the parent job cannot be intercepted, any of its child jobs can be
intercepted if they match the interception criteria. In this case, the IBM Workload
Scheduler job remains in the EXEC status until the intercepted child job has been
relaunched and has ended.
The joblogs of the child jobs are appended in the IBM Workload Scheduler stdlist
after the joblog of the parent job.
To use the InfoPackages component, you must have the SAP Business Warehouse
Systems, version 2.0B or later installed.
To use the Process Chains component, you must have the SAP Business Warehouse
Systems, version 3.0B or later installed.
The Support Package 9 (SAPKW31009) for SAP Business Warehouse version 3.1 is
required so that r3batch can launch process chains.
An InfoPackage is the entry point for the loading process from a specific
InfoSource (a logical container of data sources, generically named InfoObject).
Technically, an InfoPackage is an SAP R/3 job whose aim is to load data. Like any
other SAP R/3 job, it contains job-specific parameters such as start time and
dependencies.
Access method for SAP can manage SAP R/3 Business Warehouse InfoPackages
and process chains. To use the SAP R/3 Business Warehouse functions, you must
define an IBM Workload Scheduler user within SAP R/3 with full authorization for
the ABAP Workbench object S_DEVELOP.
Managing SAP R/3 Business Warehouse InfoPackages and
process chains
You can manage existing InfoPackages and process chains on SAP systems from
IBM Workload Scheduler.
Business Warehouse InfoPackages and process chains can only be created from the
SAP R/3 environment. However, the Dynamic Workload Console supports pick
lists of InfoPackages and process chains, so that you can also define IBM Workload
Scheduler jobs for these existing objects.
You can create IBM Workload Scheduler job definitions that map to SAP jobs that
already exist on SAP systems in the following environments:
v Distributed
v z/OS
The SAP jobs can run on extended agent workstations, dynamic agent
workstations, dynamic pools, and z-centric workstations depending on the type of
job definition you choose to create.
This section describes how to perform tasks such as creating the IBM Workload
Scheduler job definitions that map to SAP jobs, how to display the details of these
jobs, and how to rerun a process chain job.
This section describes how to create an IBM Workload Scheduler SAP job
definition that references a Business Warehouse InfoPackage or Process Chain SAP
job.
SAP job definitions can be created using either a distributed or a z/OS engine and
they can be scheduled to run on the following workstations with the r3batch access
method:
v An IBM Workload Scheduler extended agent workstation, that is, a workstation
hosted by a fault-tolerant agent or master workstation.
v A dynamic agent workstation.
v A dynamic pool.
v A z-centric workstation.
Refer to the Dynamic Workload Console online help for a complete description of
all UI elements for both engine types and all supported workstation types.
You can create a SAP job definition to reference an InfoPackage or process chain
using the Dynamic Workload Console.
The following procedure creates an IBM Workload Scheduler SAP job definition
and references an InfoPackage or process chain in the IBM Workload Scheduler
database:
Procedure
1. Click IBM Workload Scheduler > Workload > Design > Create Workload
Definitions.
2. Select an engine. The Workload Designer is displayed.
3. From the Working List pane, click:
v z/OS engine: New > ERP
v Distributed engine: New > Job Definition > ERP
4. Select the SAP job definition in accordance with the engine and type of agent
on which the job runs.
z/OS engine
SAP This job definition references an existing job on the SAP system
and can run on dynamic agent workstations, dynamic pools,
and z-centric workstations.
Distributed engine
SAP Job on Dynamic Workstations
This job definition can run on dynamic agent workstations,
dynamic pools, and z-centric workstations.
SAP Job on XA Workstations
This job definition can run on extended agent workstations, that
is, workstations hosted by a fault-tolerant agent or master
workstation.
5. In the Workspace pane, specify the properties for the job definition you are
creating using the tabs available. The tabs for each type of SAP job definition
are similar, but there are some differences depending on the type of engine you
selected and the type of workstation on which the job runs. For more detailed
information about the UI elements on each tab, see the Dynamic Workload
Console online help.
The General page requires information about the workstation that connects
to the remote SAP system. If a default SAP connection is already configured,
these fields are already prefilled. Otherwise, you can specify the required
information on the General page, or you can configure a default connection to
be used each time it is required in a definition; see “Setting the SAP data
connection” on page 228 for more information.
On the Task page, in Subtype, specify either BW Process Chain or BW
InfoPackage.
6. Click Save to add the SAP job definition to the IBM Workload Scheduler
database.
The following -flag options are all optional:
-flag imm | immed
-flag enable_pchainlog | disable_pchainlog
-flag enable_ipaklog | disable_ipaklog
-flag level_all_pchainlog | level_n_pchainlog
-flag pchainlog_chains_only | pchainlog_chains_and_failed_proc | pchainlog_complete
-flag enable_pchainlog_bapi_msg | disable_pchainlog_bapi_msg
-flag enable_pchain_details | disable_pchain_details
-flag pchain_rerun | pchain_restart | pchain_refresh
Table 58. Task string parameters for SAP R/3 jobs (continued)
Note: Typically, the -debug and -trace options are for debugging the extended
agent and should not be used in standard production.
The following is an example for an InfoPackage job whose technical field name is
ZPAK_3LZ3JRF29AJDQM65ZJBJF5OMY:
-job ZPAK_3LZ3JRF29AJDQM65ZJBJF5OMY -i ipak_
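A process chain job might use a similar task string. In this sketch, the
technical name and the ID prefix are hypothetical; verify the actual values on
your SAP system:

```
-job ZPCHAIN_SALES_LOAD -i pchain_
```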
Procedure
1. Open the Workload Designer: from the portfolio, click IBM Workload
Scheduler > Workload > Design > List Jobs on SAP.
2. In Engine name, select the name of the IBM Workload Scheduler engine
connection from which you want to view SAP job details.
3. In SAP Job Type, select Business Warehouse InfoPackage.
4. In Workstation name, specify the workstation where the SAP job runs. If you
do not know the object name, click the ... (Browse) button. In the Name and
Location panel, enter some characters of the object name (asterisk is supported
as a wildcard) and click Start. From the displayed list, select the workstation
you want to use, and click OK.
5. Click Display. The list of available jobs of type Business Warehouse
InfoPackage for the specified engine is displayed.
6. Select the job for which you want to display the details and click Details.
7. When you have finished viewing the details for the job, click OK to return to
the list of SAP jobs on the workstation specified.
Ensure you have performed the following steps before running this procedure:
v Set the pchain_details option to ON in the common options file. For more
information about this option, refer to “Defining the common options” on page
212.
v Distributed In a distributed environment, customize the Browse Jobs tasks that
you created before installing IBM Workload Scheduler 8.4 Fix Pack 1 to show the
Job Type column. For details about how to customize the task properties, refer
to the Dynamic Workload Console online help.
v In a z/OS environment, you must customize the task properties to display the
Advanced Job Type column that indicates the job type. For details about how to
customize the task properties, refer to the Dynamic Workload Console online
help.
To display details about an SAP Process Chain that you scheduled as an IBM
Workload Scheduler job, perform the following steps from the Dynamic Workload
Console.
Procedure
1. Click Workload > Monitor Jobs.
2. The list of defined tasks or queries for monitoring jobs is displayed. Select the
hyperlink of a task to display the related list of jobs. If you have a predefined
task to display SAP jobs or process chain jobs, click that task.
3. In Engine name, select the name of the IBM Workload Scheduler engine
connection from where you want to work with SAP jobs and click OK.
4. The table of results for the task is displayed.
5. Select a process chain job. For each process chain job, a hyperlink named SAP
Process Chain is displayed. Distributed
Distributed environment
The Job Type column displays SAP Process Chain to help you identify
SAP process chain jobs.
z/OS
z/OS environment
The Advanced Job Type column displays SAP Process Chain to help
you identify SAP process chain jobs.
Click the hyperlink for the job whose details you want to display.
6. The details for the process chain are displayed:
IBM Workload Scheduler monitors the process chain job until the job
completes. The details shown reflect the last monitoring process performed.
To synchronize the details with those on the remote SAP system and obtain
the most up-to-date information, restart the process chain specifying a
refresh operation. If the process chain contains local subchains, a
hyperlink is displayed for each one. Click the hyperlink you want, to display
details about the corresponding subchain job. Alternatively, you can display the
process chain details by clicking the hyperlink for the job and display the job
properties panel. Click the hyperlink shown under SAP Job Details. The details
for the process chain are displayed.
To rerun an SAP job that submits a process chain, you can use one of the following
user interfaces:
conman
For details, refer to the IBM Workload Scheduler User's Guide and Reference.
Dynamic Workload Console
See “Procedure for rerunning a process chain job” on page 290 for
information about performing this task from the console.
For information about rerunning an SAP Standard R/3 job, see “Rerunning a
standard SAP job” on page 247.
In general, when you rerun a process chain job, the new job is assigned the name
of the alias you specify. To keep the original job name, set the IBM Workload
Scheduler global option enRetainNameOnRerunFrom to yes. For details about this
option, see IBM Workload Scheduler Administration Guide.
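For example, you might change this global option with the optman command-line
utility from the master domain manager. This is a sketch; check the IBM Workload
Scheduler Administration Guide for the exact handling on your version:

```
optman chg enRetainNameOnRerunFrom=yes
```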
On extended agents, an alias is mandatory for each action you perform on the
process chain job, and the action itself is the prefix of the alias name. For example,
if you choose to restart a process chain from the failed processes, and assign
PCHAIN1 as the alias for the process chain job, then the new job name is
Restart_PCHAIN1.
z/OS In a z/OS environment, the process chain job maintains the same name
and the Monitor Jobs view always displays the status for the last action performed
on the job. Every time a rerun is performed on a process chain job, a new instance
is generated, each with a different ID.
Note:
1. By default, if you do not specify any setting, rerunning a process chain job
corresponds to submitting a new process chain instance.
2. If you kill an IBM Workload Scheduler job that submits a process chain, the
process chain is removed from schedule in the SAP Business Information
Warehouse system. To restart the same process chain instance with r3batch, you
require at least the following SAP Business Information Warehouse versions:
v 3.0 with SP25
v 3.1 with SP19
v 3.5 with SP10
v 7.0
If your version of SAP Business Information Warehouse is earlier, you can
restart the process chain only manually, through the SAP graphical interface.
Table 59 shows the action performed when you rerun an IBM Workload
Scheduler job that submits a process chain, depending on the settings you
specify. These are the actions performed when you submit the rerun operation
using the Rerun button from the Monitor Jobs view.
Table 59. Actions performed when you rerun a process chain job

A new process chain instance is submitted
	IBM Workload Scheduler creates another process chain instance and
	submits it to be run again. This action occurs when:
	v On extended agents, you specify RERUNvalue as the step to rerun, where
	  value is any value you want. This setting overrides the settings in the
	  job definition and options file, if any.
	  In an end-to-end environment, you can perform this action on a
	  centralized job by adding the following parameter to the script file:
	  -flag pchain_rerun
	v In the job definition, you set -flag pchain_rerun. This setting overrides
	  the setting in the options file, if any. For a description of this
	  parameter, see Table 58 on page 281.
	v In the options file, you set the pchain_recover option to rerun. For a
	  description of this option, refer to Table 50 on page 212.

The process chain is restarted from the failed processes
	This action is performed only if at least one process in the process chain
	did not complete successfully. It occurs when:
	v On extended agents, you specify RESTARTvalue as the step to rerun, where
	  value is any value you want. This setting overrides the settings in the
	  job definition and options file, if any.
	  In an end-to-end environment, you can perform this action on a
	  centralized job by adding the following parameter to the script file:
	  -flag pchain_restart
	v In the job definition, you set -flag pchain_restart. This setting
	  overrides the setting in the options file, if any. For a description of
	  this parameter, see Table 58 on page 281.
	v In the options file, you set the pchain_recover option to restart. For a
	  description of this option, refer to Table 50 on page 212.
The process that you specify is restarted
	IBM Workload Scheduler restarts the process of the original process chain
	that you specify, and monitors the process chain run until its final state.
	On extended agents, this action occurs when you specify PROCESSprocessID
	as the step to rerun, where processID is the identifier of the process you
	want. For example, if the process ID is 3, you must specify PROCESS3 as
	the step.
The following list shows the meaning of the alphabetic value used as the actual state
in the job log:
Actual state
Meaning
A Active
F Completed
G Successfully completed
P Planned
Q Released
R Ended with errors
S Skipped
X Canceled
Y Ready
blank Undefined
You can rerun all of the processes in the process chain from the Dynamic Workload
Console or you can rerun at a process level.
z/OS In z/OS environments, you need to set the status of the job to Ready
before you can rerun the job.
1. Select a job and click Set Status.
2. In Change Status, select Ready.
3. Click OK to return to the list of jobs.
Procedure
1. Click Workload > Monitor Jobs.
2. The list of defined tasks or queries for monitoring jobs is displayed. Select the
hyperlink of a task to display the related list of jobs. If you have a predefined
task to display SAP jobs or process chain jobs, click that task.
3. In Engine name, select the name of the IBM Workload Scheduler engine
connection from where you want to work with SAP jobs and click OK.
4. A list of jobs is displayed. Select a process chain job. Distributed
z/OS
z/OS environment
The Advanced Job Type column displays SAP Process Chain to help
you identify SAP process chain jobs. To display the Advanced Job Type
column in the table, edit the Task Properties and in Column
Definition, add the Advanced Job Type column to the Selected
Columns list. Move the column up to define the order of the column in
the table and make it more visible.
5. Rerun the job.
a. Click More Actions > Restart Process Chain.
b. Select the action you want to perform on the selected process chain:
Rerun Reruns the entire process chain. The process chain ID on the SAP
system remains the same, as well as the job identifier on z/OS
systems.
Distributed Specify an alias to identify the new job. In distributed
systems the rerun process chain is identified with this alias name
prefixed by RERUN.
Refresh
Refreshes the Dynamic Workload Console view with the latest
updates on the remote SAP system so that the two views are
synchronized.
Distributed Specify an alias to identify the new job. In distributed
systems the refreshed process chain is identified with this alias
name prefixed by REFRESH.
Restart from the failed processes
Action available only for process chains in error state. Rerun only
some steps of the process chain, starting from the failed processes.
Distributed Specify an alias to identify the new job. In distributed
systems the restarted process chain is identified with this alias name
prefixed by RESTART.
Restart from a specific process
Action available only for process chains in error state. Rerun only
some steps of the process chain, starting from the process specified
in the SAP Process ID field. You can find the process ID by
opening the job log or viewing the job type details from the table of
results of your monitor job task.
Distributed In distributed systems the restarted process chain is
identified with this alias prefixed by PROCESS.
6. Click OK to perform the selected action on the process chain.
Results
Business scenario: rerunning the original process chain job from the failed
process: As a scheduling administrator, you are responsible for managing batch
jobs in both SAP and non-SAP systems. The workflow is one or more job streams
in IBM Workload Scheduler. A job stream contains jobs that collect and prepare
data for month-end closing over all sales channels. The month-end closing report
requires data to be collected from several sales and distribution systems. Data is
collected using local and remote process chains in the SAP Business Intelligence
system. The process chains include a set of Infopackages, ABAP reports, and
operating system jobs to sort the report data by a logical hierarchy.
To administer from a single point of control, you link the SAP process chains to
IBM Workload Scheduler.
Alternatively, you can select the process chain job from the Monitor Jobs view on
the Dynamic Workload Console and then select More Actions > Restart Process
Chain and then select the Restart from the failed processes option.
Business scenario: restarting a specific process of the process chain: You might
decide to restart a single process as a preparation step before restarting the failed
processes of a process chain. A failed process might have corrupted some data, so
you run the single process to restore the data and set up the required system state
before you rerun the other processes in the process chain.
Suppose you are using InfoPackages and process chains to extract data from one or
several sources and you want to transform this data into managerial reports, for
example by using aggregate functions. If the process that transforms this data fails,
it might corrupt the data that the preceding InfoPackage process had successfully
extracted. After fixing the problem with the transformation process, you must
restart the InfoPackage extraction process to reload the data, even though this
extraction process had completed successfully before. Restart the failed
transformation process only after the data has been reloaded, either by restarting
the failed processes of the process chain or by restarting just the failed
transformation process.
On an extended agent, from the Monitor Jobs view on the Dynamic Workload
Console, select the process chain and click Rerun, then specify PROCESSprocessID as
the step to rerun, where processID is the identifier of the process you want to
restart.
To restart a specific process of the process chain, from the Monitor Jobs view on
the Dynamic Workload Console, select the process chain and click More Actions >
Restart Process Chain and then select the Restart from a specific process option,
specifying the process ID in the SAP Process ID field.
Using advanced XBP 2.0 and 3.0 functions, such as job interception and
parent-child, the job throttler ensures that the SAP system is not overloaded and
the number of released jobs does not exceed the total number of SAP background
work processes in the system.
You can also configure the job throttler to send data related to its activity to the
SAP Computing Center Monitoring System (CCMS) for monitoring purposes.
Business scenario
You manage your Internet sales through an application that verifies that
data is correct, checks the availability of the item, and validates the order. To
process all the orders received, you scheduled an IBM Workload Scheduler job to
run every 12 hours, connect to SAP, and generate a child job for every order to
process. Child jobs are in charge of creating shipping bills, checking destination
address, and forwarding the orders to the appropriate carrier, thus optimizing the
delivery process. A potential overload of the system might occur during peak
times, for example over Christmas, and could risk the late delivery of orders,
damaging your business. To manage the submission of jobs and activate an
advanced management of their priority class (for both parent and child jobs),
enable the job throttling feature.
Additionally, you might want to set a policy so that an SAP CCMS alert is raised
each time the number of jobs to be released under the control of the job throttler
exceeds a certain threshold. To do this, you enable the job throttler to send data to
the SAP CCMS monitoring architecture. At job throttler startup, an MTE that
monitors the number of jobs to be released by the job throttler is created. By
including the MTE in a monitoring set and specifying the related threshold, you
are alerted each time the threshold is exceeded.
Software prerequisites
To use job throttling, you must have the SAP JCo 3.0.2 libraries or later (dll and
jar files) installed in the TWS_home/methods/throttling/lib directory. To download
JCo 3.0.x, visit the SAP Service Marketplace web site.
To define the behavior of the job throttling feature, set the following options in the
options file. For detailed information about the options, see Table 50 on page 212.
v throttling_enable_job_class_inheritance
v throttling_enable_job_interception
v throttling_interval
v throttling_max_connections
v throttling_release_all_on_exit
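For example, an options file that enables throttling with job interception and class inheritance might contain entries like the following. The option names are those listed above; the values shown are only illustrative, not recommended defaults (see Table 50 on page 212 for the supported values):

```
throttling_enable_job_class_inheritance=on
throttling_enable_job_interception=on
throttling_interval=5
throttling_max_connections=5
throttling_release_all_on_exit=on
```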
As a prerequisite, the job throttler requires that the job interception feature is
enabled in the SAP system. To enable and configure job interception, follow these
steps.
Note: Ensure that the job throttling and job interception features are not running at
the same time. The job throttler cannot start if interception collector jobs are
running.
1. Enable job interception, either automatically or manually, as follows:
Note: When you stop the job throttler, the setting for the job interception
feature that was previously configured on the SAP system is restored.
2. In the SAP system, configure the job interception criteria as follows:
a. Launch the transaction se16 to access the table TBCICPT1, where the
interception settings are maintained.
b. Set the job name, creator, and client related to the jobs you want to
intercept. To intercept all SAP jobs, specify the wildcard * (asterisk) for the
job name, creator, and client.
c. Save your settings and close the dialog.
SAP will intercept all the jobs matching the selection criteria, and the job throttling
will release all the jobs that were intercepted.
You can configure the job throttler to have the intercepted job inherit the priority
class from its progenitor (the top-level job in the hierarchy), if the progenitor class
is higher than the intercepted job class. To do this, in the options file set
throttling_enable_job_class_inheritance=on; this setting automatically enables
the parent-child feature on the SAP system.
Note: When you stop the job throttler, the setting for the parent-child feature that
was previously configured on the SAP system is restored.
You can configure the trace properties of the job throttler by editing the logging
configuration file jobthrottling.properties located in TWS_home/methods/
throttling/properties.
Procedure
1. Set the trace level property. The supported trace levels are DEBUG_MIN,
DEBUG_MID, and DEBUG_MAX, where DEBUG_MAX is the most verbose trace level.
2. Save the changes.
Results
Changes to the trace level setting take effect immediately after you save the
.properties file. Other changes might require a restart to take effect.
What to do next
You can also configure the name, number, and size of the trace file. By default, the
job throttler generates a maximum of 3 files of 5 MB in the TWS_home/methods/
traces directory.
To start job throttling, run the jobthrottling executable file related to the
operating system you are using. Optionally, you can create an IBM Workload
Scheduler job that starts the job throttler.
Where:
XAname
The name of the extended agent you are using.
base_options_filename
For dynamic and z-centric agents, the file name of the options file without
the extension, defined on the engine workstation hosting the workstation
with the r3batch access method.
-scratch
If you enabled the job throttler to send data to CCMS (for details, see
“Sending data from job throttling to the CCMS Monitoring Architecture”
on page 296), the job throttler starts and resets the attribute MTE named JT
total released jobs to 0. If you do not specify -scratch, the job throttler
starts and increments the JT total released jobs.
This parameter is optional, and has effect only if the job throttler sent its
data to CCMS at least once before.
To display the syntax of the jobthrottling command, run the command as follows:
To stop the job throttler, enter the following command (optionally, you can create
an IBM Workload Scheduler job that stops the job throttler):
UNIX operating systems
TWS_home/methods/stop-jobthrottling.sh
{XAname|base_options_filename}
Windows operating systems
TWS_home\methods\stop-jobthrottling.bat
{XAname|base_options_filename}
Alternatively, you can enter the following command (you must be connected as
TWSUser and have read and write permissions on the txt file):
echo shutdown > TWS_home/methods/{XAname|base_options_filename}_jobthrottling_cmd.txt
You can configure the job throttler to send data related to its activity to the SAP
Computing Center Monitoring System (CCMS) for monitoring purposes. Sending
data from the job throttler to CCMS is supported if you have at least the SAP Web
Application Server 6.20, Support Package 12 installed.
In the options file, set the following options (for details, see Table 50 on page 212):
throttling_send_ccms_data
throttling_send_ccms_rate
In this way, at job throttler startup the following monitoring tree elements (MTE)
are created:
v A context MTE named ITWS for Apps.
v An object MTE with the same name as the IBM Workload Scheduler extended
agent where the job throttler is running. This object MTE belongs to the context
MTE ITWS for Apps.
v The following attribute MTEs:
JT total released jobs
The total number of jobs that the job throttler has released since startup.
This value depends on the -scratch option you set at job throttler
startup; for details, see “Step 5. Starting and stopping the job throttling
feature” on page 295.
JT queue
The number of enqueued intercepted jobs to be released.
JT released jobs per cycle
The number of released jobs in the latest run. This value depends on the
throttling_send_ccms_rate setting; for details, see Table 50 on page 212.
Note: By default throttling_release_all_on_exit is set to ON, meaning that when
you stop the job throttler, all the intercepted jobs are released. However, these jobs
are not considered when updating the JT total released jobs, JT queue, and JT
released jobs per cycle MTEs.
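The counter behavior described above can be summarized in a small model: JT total released jobs accumulates across runs unless -scratch resets it at startup, JT released jobs per cycle reflects only the latest run, and jobs released on exit because of throttling_release_all_on_exit update no counter. The following Python sketch is a hypothetical illustration of these rules, not product code:

```python
class ThrottlerCounters:
    """Model of the three attribute MTEs maintained by the job throttler."""

    def __init__(self, previous_total=0, scratch=False):
        # With -scratch, JT total released jobs restarts from 0;
        # otherwise it continues incrementing the previously reported value.
        self.total_released = 0 if scratch else previous_total
        self.queue = 0               # JT queue: enqueued intercepted jobs
        self.released_per_cycle = 0  # JT released jobs per cycle

    def intercept(self, n=1):
        # Jobs intercepted by SAP enter the throttler queue.
        self.queue += n

    def release_cycle(self, n):
        """Release n intercepted jobs in one throttling cycle."""
        n = min(n, self.queue)
        self.queue -= n
        self.released_per_cycle = n  # reflects the latest run only
        self.total_released += n
        return n

    def stop(self, release_all_on_exit=True):
        # Jobs released on exit are NOT counted in any MTE.
        if release_all_on_exit:
            self.queue = 0

c = ThrottlerCounters(previous_total=40, scratch=False)
c.intercept(10)
c.release_cycle(6)   # total becomes 46, per-cycle 6, queue 4
c.stop()             # remaining jobs released, counters unchanged
```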
To begin monitoring, include the MTEs in the monitoring set you want, and set the
thresholds to generate an alert.
You can define an IBM Workload Scheduler event rule based on the CCMS alerts;
for detailed information, refer to “Defining event rules based on CCMS Monitoring
Architecture alerts” on page 316.
For example, to define an event that monitors the attribute MTE JT total released
jobs, on the extended agent workstation named SAP_XA, connected to the SAP
system ID T01, specify the following information:
XA Workstation
SAP_XA
MTE SAP System ID
T01
MTE Monitoring Context Name
ITWS for Apps
MTE Monitoring Object Name
SAP_XA
MTE Monitoring Attribute Name:
JT total released jobs
After you stopped the job throttling feature, if you configured it to send its status
data to CCMS, you can delete one or more MTEs that were created. To do this:
1. From the SAP GUI, invoke the transaction rz20 to display a list of monitor sets.
2. Locate the monitor set named SAP CCMS Technical Expert Monitors, and
expand it.
3. Locate the monitor named All Monitor Contexts, and double-click it to open it.
4. From the action menu, select Extras -> Activate Maintenance Functions.
5. Locate the MTE named ITWS for Apps and select it.
6. Right-click the MTE and select Delete. You are prompted to choose one of the
delete options.
7. Select the option you want. The MTE is deleted accordingly.
Note: Deleting ITWS for Apps from the All Monitor Contexts monitor also deletes
all the copies that you might have created in other monitors.
You might want to configure your IBM Workload Scheduler scheduling activities
based on the schedule calendar in your SAP R/3 system. To do this, use the
r3batch export function to export the SAP R/3 calendar definitions into a file
whose format is compatible with the IBM Workload Scheduler composer command
line. Based on the parameters you specify, you create a file that contains only the
SAP R/3 calendar definitions that meet your scheduling requirements. Use this file
as input for the composer add command, to import the calendar definitions into
the IBM Workload Scheduler database. Your IBM Workload Scheduler and SAP
R/3 calendars are now synchronized.
To keep the IBM Workload Scheduler and SAP R/3 calendar definitions
synchronized and avoid duplicating data maintenance in the two environments,
you can schedule to export the calendar definitions from SAP R/3 and import
them to IBM Workload Scheduler on a regular basis using a dedicated job.
For details about the IBM Workload Scheduler calendar definitions, see User's
Guide and Reference.
Command syntax
r3batch -t RSC -c XAname -- "-calendar_id calendarID
       -year_from yyyy -year_to yyyy
       [-getworkdays | -getfreedays]
       [-tws_name tws_cal_name] [-tws_description tws_cal_desc]
       [-filename output_filename]"
Where:
-t RSC
The identifier of the task to be performed, in this case RSC (Retrieve SAP
R/3 Calendars). This parameter is required.
-c XAname
The extended agent workstation connected to the SAP R/3 system where
the calendar data to export is located. The SAP R/3 system must be
configured as a workstation to IBM Workload Scheduler. This parameter is
required.
-calendar_id calendarID
The identifier of the SAP R/3 calendar to be exported, which consists of
two alphanumeric characters. This parameter is required.
-year_from yyyy
The first year for which calendar dates are exported, in the format
yyyy. This parameter is required.
-year_to yyyy
The last year for which calendar dates are exported, in the format yyyy.
This parameter is required.
-getworkdays | -getfreedays
Specify getworkdays to create the IBM Workload Scheduler calendar
definition based on the working days of the SAP R/3 calendar. In this way,
each date of a working day is stored in the output file.
Specify getfreedays to create the IBM Workload Scheduler calendar
definition based on the holidays of the SAP R/3 calendar. Each date of a
non-working day is stored in the output file.
These parameters are optional and mutually exclusive. If you do not
specify either, the default is getworkdays.
-tws_name tws_cal_name
The IBM Workload Scheduler name for the exported SAP R/3 factory
calendar. It is stored in the output file.
You can specify up to eight alphanumeric characters. This parameter is
optional, the default is SAPXX_calendarID, where:
XX Corresponds to WK if the calendar includes only working days or
FR if the calendar includes only non-working days.
calendarID
The identifier of the SAP R/3 calendar.
For example, the default IBM Workload Scheduler name for an exported
calendar, whose identifier is 04, that includes only working days, is
SAPWK_04.
-tws_description tws_cal_desc
The description of the IBM Workload Scheduler calendar. It is stored in the
output file. You can specify up to 120 alphanumeric characters. If the
description contains blanks, it must be enclosed between single quotes.
This parameter is optional.
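The default-name rule described for -tws_name can be sketched as a small helper, for example in Python. This is an illustrative function only, not part of r3batch:

```python
def default_tws_name(calendar_id, getfreedays=False):
    """Build the default IBM Workload Scheduler calendar name:
    SAPWK_<id> when exporting working days, SAPFR_<id> when
    exporting non-working days (holidays)."""
    prefix = "SAPFR" if getfreedays else "SAPWK"
    return f"{prefix}_{calendar_id}"

# The example from the text: calendar 04, working days only.
print(default_tws_name("04"))
```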
This command exports the SAP R/3 calendar named 01, located on the
SAP R/3 system named horse10. The dates exported begin from year 2007,
until year 2010, considering only working days. The IBM Workload
Scheduler name used for the calendar is CAL1, and the description written
in the output file is SAP Calendar 01. The output file is named
calendar_01.dat, stored in TWS_home/methods/my dir, and its content looks
like the following:
$CALENDAR
CAL1
"SAP Calendar 01"
01/02/2007 01/03/2007 01/04/2007 01/05/2007 01/08/2007 01/09/2007 01/10/2007
01/11/2007 01/12/2007 01/15/2007 01/16/2007 01/17/2007 01/18/2007 01/19/2007
01/22/2007 01/23/2007 01/24/2007 01/25/2007 01/26/2007 01/29/2007 01/30/2007
01/31/2007 02/01/2007 02/02/2007 02/05/2007 02/06/2007 02/07/2007 02/08/2007
.......
11/24/2010 11/25/2010 11/26/2010 11/29/2010 11/30/2010 12/01/2010 12/02/2010
12/03/2010 12/06/2010 12/07/2010 12/08/2010 12/09/2010 12/10/2010 12/13/2010
12/14/2010 12/15/2010 12/16/2010 12/17/2010 12/20/2010 12/21/2010 12/22/2010
12/23/2010 12/24/2010 12/27/2010 12/28/2010 12/29/2010 12/30/2010 12/31/2010
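The exported file has a simple layout: a $CALENDAR header, the calendar name, the quoted description, then the dates in MM/DD/YYYY format. A minimal reader for this layout could look like the following Python sketch (an illustration under the format shown above, not a supported tool):

```python
def read_exported_calendar(text):
    """Parse an r3batch RSC export: returns (name, description, dates)."""
    lines = [ln.strip() for ln in text.strip().splitlines() if ln.strip()]
    assert lines[0] == "$CALENDAR", "not an exported calendar file"
    name = lines[1]
    description = lines[2].strip('"')
    dates = []
    for line in lines[3:]:
        dates.extend(line.split())  # dates are space-separated
    return name, description, dates

sample = '''$CALENDAR
CAL1
"SAP Calendar 01"
01/02/2007 01/03/2007 01/04/2007
'''
name, desc, dates = read_exported_calendar(sample)
```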
To import the exported calendar definitions into the IBM Workload Scheduler
database, copy the output file from the extended agent for SAP R/3 to the master
workstation. Then, from the composer command line on the master workstation,
enter the following command:
-add output_filename
where output_filename is the name of the exported file, with its complete path.
-add TWS_home/methods/my dir/tws_calendar_01.dat
where TWS_home is the complete path where you installed IBM Workload Scheduler.
Note: To be able to define and monitor event rules, you must configure your
environment as described in “Configuring SAP event monitoring” on page 222.
For more details about internetwork dependencies, refer to the IBM Workload
Scheduler: User's Guide and Reference. For more details about how to raise SAP
events, see “Raising an SAP event” on page 246.
Note: Because an SAP event history is ignored, for each SAP background
event to be checked, a placeholder SAP job is created. This is a dummy job
whose running depends on the SAP background event, therefore an SAP
event is considered raised as soon as the corresponding placeholder job has
completed.
XBP version 3.0 (supported by SAP NetWeaver 7.0 with SP 9, or later)
Only the SAP background events stored in the SAP event history table are
considered by IBM Workload Scheduler to check for internetwork
dependencies resolution. As a prerequisite, the SAP administrator must
create the appropriate event history profiles and criteria on the target SAP
system.
To avoid performance reduction, run reorganization tasks against the SAP
event history.
Note: Some SAP systems providing XBP version 3.0 still return XBP
version as 2.0. To check if your SAP system provides XBP 3.0, invoke the
transaction se37 and search for the function module
BAPI_XBP_BTC_EVTHISTORY_GET. If your system contains the module, set the
The resulting internetwork dependency looks like the following, where SAPWS is the
name of the extended agent workstation that connects to the SAP background
processing system where the event runs:
follows SAPWS::"-evtid SAP_TEST -evtpar 12345678"
-evtid SAP_TEST -commit
Table 61 shows the correspondence between the definition and possible resolution
of an internetwork dependency that depends on an SAP event, with or without
parameters assigned. In this table, SAP_TEST is used as the event name and
12345678 or ABCDEFG as the event parameter.
Table 61. Internetwork dependency definition and possible resolution

IBM Workload Scheduler             SAP event raised   SAP event   IBM Workload Scheduler
internetwork dependency            in SAP system      parameter   internetwork dependency resolved
-evtid SAP_TEST                    none               none        No
-evtid SAP_TEST                    END_OF_JOB         none        No
-evtid SAP_TEST                    SAP_TEST           none        Yes
-evtid SAP_TEST                    SAP_TEST           12345678    Yes
-evtid SAP_TEST -evtpar 12345678   SAP_TEST           none        No
-evtid SAP_TEST -evtpar 12345678   SAP_TEST           12345678    Yes
-evtid SAP_TEST -evtpar 12345678   SAP_TEST           ABCDEFG     No
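The rules in Table 61 reduce to two checks: the raised event ID must equal the one in the dependency, and, if the dependency specifies -evtpar, the raised event parameter must match it exactly (a dependency without -evtpar matches any parameter, or none). The following Python sketch illustrates that rule against the seven table rows; it is a summary of the table, not the product's matching code:

```python
def dependency_resolved(dep_evtid, dep_evtpar, raised_evtid, raised_parm):
    """Return True if a raised SAP event resolves the internetwork dependency."""
    if raised_evtid != dep_evtid:
        return False          # no event, or a different event, was raised
    if dep_evtpar is None:
        return True           # no -evtpar: any parameter (or none) matches
    return raised_parm == dep_evtpar

# The seven rows of Table 61: (dependency, raised event, expected result)
rows = [
    (("SAP_TEST", None),       (None, None),            False),
    (("SAP_TEST", None),       ("END_OF_JOB", None),    False),
    (("SAP_TEST", None),       ("SAP_TEST", None),      True),
    (("SAP_TEST", None),       ("SAP_TEST", "12345678"), True),
    (("SAP_TEST", "12345678"), ("SAP_TEST", None),      False),
    (("SAP_TEST", "12345678"), ("SAP_TEST", "12345678"), True),
    (("SAP_TEST", "12345678"), ("SAP_TEST", "ABCDEFG"), False),
]
results = [dependency_resolved(d[0], d[1], r[0], r[1]) for d, r, _ in rows]
```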
The PI task commits all the processed events that meet the given criteria. For this
reason, it is recommended that you run this task at the end of the working day. By
doing so, internetwork dependencies that are already resolved are not reset and the
objects depending on them are not blocked until they are resolved again.
Command syntax
r3batch -t PI -c XAname -- "-t CE -evtid sap_event_id [-evtpar sap_event_parm]"
Where:
The following is an example of how to commit the SAP event named SAP_TEST,
with parameter 1234567, running on the background processing system named
horse10:
r3batch -t PI -c horse10 -- " -t CE -evtid SAP_TEST -evtpar 1234567"
Procedure
1. Launch Workload Designer from the Dynamic Workload Console portfolio.
Click Workload > Design > Create Workload Definitions.
2. Search for and open the job stream you want to manage.
a. In the Working List pane, click Search > Job Stream.
b. Type the job stream name or simply click Search to display all job streams.
c. Select the job stream and click Edit. The job stream and its contents are
displayed in the Details view.
3. From the Details view, select the job or the job stream to which you want to
add the dependency.
4. From the toolbar, click Select an Action > Add Dependencies > Internetwork.
5. Specify the properties for the internetwork dependency.
a. In the Network Agent field, enter the name of the agent workstation
connected to the SAP background processing system where the event runs.
b. In the Dependency field, enter the parameters to define the internetwork
dependency. For a description of the parameters allowed, refer to Table 60
on page 302.
6. Click Save to save the changes to the job stream.
Results
The local job or job stream now has a dependency on an SAP background event.
You can also perform this procedure from the graphical view available from the
Workload Designer. For more information about adding dependencies and editing
objects in the Graphical View, refer to the Dynamic Workload Console User's
Guide.
An event rule is identified by a rule name and by a set of attributes that specify if
the rule is active, the time frame of its validity, and other information required to
decide when actions are triggered. It includes information related to the specific
events (eventCondition) that the rule must detect and the specific actions it is to
trigger upon their detection or timeout (ruleAction). Complex rules might include
multiple events and multiple actions.
If you are using XBP 3.0, only the SAP background events that are stored in the
event history table are considered by IBM Workload Scheduler.
| Note:
| 1. If you specify a pattern with the wildcard asterisk (*), all the agents
| whose name matches the pattern will monitor the specified event.
| 2. As a best practice, define that an event belonging to an SAP system is
| monitored by one agent workstation only. If the same SAP event is
| monitored by more than one agent, you might either be notified
| multiple times for the same event occurrence or the first agent that
| notifies the event occurrence makes that event unavailable to the other
| agents.
| 3. If you modify the extended agent configuration in the r3batch option
| files, to make the changes effective you must stop and restart the agent.
| 4. For dynamic agents you can specify the name of a local options file. In
| the Properties section of the Create Event Rules window of the
| Dynamic Workload Console a lookup button provides a list of all the
| local options files associated with that agent. If you do not specify the
| name of a local options file, the global options file is used by default in
| the rule definition.
Monitoring of the SAP_TEST event with parameter ABCDEF is automatically
started. Suppose that the following SAP events were raised on the SAP
system:
Table 62. History table of the SAP events raised

EVENT GUID  SAP EVENT ID  EVENT PARM  EVENT SERVER  EVENT TIMESTAMP  EVENT STATE  PROCESS STATE  COUNT OF JOBS
1234        SAP_TEST      ABC123      ...           20070925 13:00   C            OK             1
2345        SAP_TEST      ABCD        ...           20070925 14:00   N            OK             2
3456        SAP_TEST                  ...           20070925 15:00   N            OK             3
4567        SAP_TEST      ABCDEF      ...           20070925 16:00   N            OK             4
Only the following SAP events are notified to IBM Workload Scheduler:
where:
PROVIDER=@
Specifies that the user can use the events coming from any provider.
+CUSTOM=SAP_USA@
Specifies that the user can use only the SAP events whose ID begins with
SAP_USA.
This keyword applies only to the SAP provider (SapMonitor).
ACCESS=USE
Sets the user access to the object to USE.
In the security file that defines the user access for the Italy department, define the
CUSTOM keyword for the EVENT object as follows:
EVENT PROVIDER=@ ~CUSTOM=SAP_USA@ ACCESS=USE
where:
PROVIDER=@
Specifies that the user can use the events coming from any provider.
~CUSTOM=SAP_USA@
Specifies that the user can use all SAP events, except those whose ID
begins with SAP_USA.
This keyword applies only to the SAP provider (SapMonitor).
ACCESS=USE
Sets the user access to the object to USE.
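In these examples, @ acts as a wildcard, + restricts the user to events whose ID matches the pattern, and ~ excludes matching events. The filtering semantics can be sketched as follows; this is a hypothetical illustration of the CUSTOM keyword's matching behavior, not the actual security-file parser:

```python
import fnmatch

def event_allowed(event_id, custom_spec):
    """Evaluate a CUSTOM specification against an SAP event ID.
    custom_spec examples: '+SAP_USA@' (only matching IDs allowed),
    '~SAP_USA@' (all IDs allowed except matching ones)."""
    mode, pattern = custom_spec[0], custom_spec[1:]
    # '@' is the wildcard in the security file; map it to fnmatch's '*'.
    matches = fnmatch.fnmatch(event_id, pattern.replace("@", "*"))
    return matches if mode == "+" else not matches

# USA department: only events beginning with SAP_USA.
# Italy department: every event except those beginning with SAP_USA.
```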
For more details about the security file and how to set up user authorizations, see
the IBM Workload Scheduler: Administration Guide.
Defining event rules based on IDoc records
You can use IBM Workload Scheduler to monitor Intermediate Document (IDoc)
records in SAP systems and forward events to the IBM Workload Scheduler event
integration framework.
To do this, you define an event condition that contains the criteria that the IDocs
must match to be forwarded to IBM Workload Scheduler. When the event
condition occurs, the action that you associated with it (for example, running a job)
is performed.
Business scenario
You connected your Internet sales application to your SAP Customer Relationship
Management (CRM) system, which receives the orders as incoming IDocs. The
orders are classified as emergency and ordinary, and therefore have different IDoc
message types. You want the emergency orders to be imported into the CRM
system directly, and the ordinary orders to be processed in batch mode. To do this,
in IBM Workload Scheduler, you define an event rule that monitors the IDoc
message types corresponding to emergency orders and sends an event to IBM
Workload Scheduler. In IBM Workload Scheduler, you define a job to be released
when this type of event is received and is linked to an SAP job that runs an import
ABAP report for these specific types of IDocs.
Internet
Order
System
order
IDocs
Monitors IDocs
SAP IBM
Workload
CRM Triggers immediate import Scheduler
for high priority orders
IDoc
tables
To define event rules based on IDocs, specify the fields to be used as matching
criteria during IDoc monitoring. For details about these fields, refer to “Events
matching criteria” on page 310. To create the event rules, you can use either of the
following:
The composer command line
You edit the rules with an XML editor of your choice. For a general
Note:
1. To be able to define and monitor event rules, ensure that you configured your
environment as described in “Configuring SAP event monitoring” on page 222.
2. To configure how IBM Workload Scheduler retrieves the IDoc monitors, set
idoc_no_history and idoc_shallow_result in the options file. For details about
these options, refer to “Defining the common options” on page 212.
Optionally, you can define also correlation rules by using the fields listed in
Table 67. Date and time values are specified in GMT time zone.
Table 67. IBM Workload Scheduler fields used to define correlation rules for IDoc events

Composer property                         Console property                   IDoc field
SAPIDocNumber                             IDoc number                        DOCNUM
SAPReleaseForIDoc                         IDoc SAP release                   DOCREL
SAPIDocType                               IDoc type                          DOCTYP
SAPReceiverAddress                        Receiver SADR address              RCVSAD
SAPReceiverSADRClient                     Receiver SADR client               RCVSMN
SAPFlagForInternationalReceiverAddress    Receiver SADR flag                 RCVSNA
SAPReceiverCommunicationType              Receiver SADR communication type   RCVSCA
SAPDefaultFlagForReceiverAddress          Receiver SADR default flag         RCVSDF
SAPReceiverAddressSequentialNumber        Receiver SADR sequential number    RCVSLF
SAPReceiverLogicalAddress                 Receiver logical address           RCVLAD
SAPEDIStandard                            EDI standard                       STD
SAPEDIStandardVersion                     EDI standard version               STDVRS
SAPEDIMessageType                         EDI message type                   STDMES
SAPSenderAddress                          Sender SADR address                SNDSAD
SAPSenderSADRClient                       Sender SADR client                 SNDSMN
SAPFlagForInternationalSenderAddress      Sender SADR flag                   SNDSNA
SAPSenderCommunicationType                Sender SADR communication type     SNDSCA
SAPDefaultFlagForSenderAddress            Sender SADR default flag           SNDSDF
SAPSenderAddressSequentialNumber          Sender SADR sequential number      SNDSLF
SAPSenderLogicalAddress                   Sender logical address             SNDLAD
SAPReferenceToInterchangeFile             Interchange file reference         REFINT
SAPReferenceToMessageGroup                Message group reference            REFGRP
SAPReferenceToMessage                     Message reference                  REFMES
SAPEDIArchiveKey                          EDI archive key                    ARCKEY
SAPIDocCreationDate                       IDoc creation date                 CREDAT
SAPIDocCreationTime                       IDoc creation time                 CRETIM
SAPExtension                              Extension                          CIMTYP
SAPEDIALESerializationField               EDI/ALE serialization field        SERIAL
SAPOverridingInInboundProcessing          Overriding in inbound processing   EXPRSS
SAPIDocChangeDate                         IDoc last update date              UPDDAT
SAPIDocChangeTime                         IDoc last update time              UPDTIM
Based on the defined rule, the r3evmon process of IBM Workload Scheduler
monitors the events related to IDoc records according to a polling rate.
Table 68 lists the values that you can specify as attribute filter name when defining
the event condition.
Table 68. Parameters of IDOCEventGenerated event type

SAPClient
       SAP client number. Type: numeric (0-9). Filtering, multiple
       values, and wildcards are allowed. Length: 1 to 3 characters.
SAPOutputMode
       Output mode of the IDoc status information. The value can be 2
       (immediate sending) or 4 (collected sending). Filtering is
       allowed. Length: 1 character.
Table 69 on page 313 lists the standard outbound IDoc statuses and Table 70 on
page 314 lists the standard inbound IDoc statuses. Optionally, you can activate a
check to prevent event rule definitions with inconsistent IDoc status list and
direction. If you activate the check and specify inconsistent values when defining a
rule (for example, 02 as status and 2 as direction), you receive an error message
and you cannot save the rule definition. To activate the check, perform the
following steps:
1. In the TWS_home/eventPlugIn directory, create the SapMonitorPlugIn.properties
file.
2. Edit SapMonitorPlugIn.properties to set the following configuration property:
TWSPlugIn.event.idoc.consistency.check = true
3. From conman, stop and restart the event processing server by using,
respectively, the stopeventprocessor and starteventprocessor commands.
The default value of TWSPlugIn.event.idoc.consistency.check is false.
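The check rejects rules whose status list and direction disagree, as in the example above (status 02 with inbound direction 2). The following Python sketch illustrates that validation. The outbound set is taken from the 01-33 codes of Table 69; the inbound set is assumed here to be the 50-75 range of Table 70, which is not reproduced in full, so treat both ranges as an approximation of the exact tables the product uses:

```python
# Outbound statuses: the 01-33 codes listed in Table 69 (approximation).
OUTBOUND = {f"{n:02d}" for n in range(1, 34)}
# Inbound statuses: assumed to be the 50-75 codes of Table 70.
INBOUND = {f"{n:02d}" for n in range(50, 76)}

def check_rule(status_list, direction):
    """direction: 1 = outbound, 2 = inbound.
    Raise ValueError when the status list and direction are inconsistent."""
    expected = OUTBOUND if direction == 1 else INBOUND
    bad = [s for s in status_list if s not in expected]
    if bad:
        raise ValueError(
            f"statuses {bad} are inconsistent with direction {direction}")
    return True

check_rule(["56", "60"], 2)   # consistent: inbound statuses, inbound direction
```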
To have predictable event action results, when defining event rules consider using
only non-transitory statuses that allow user checks.
Table 69. Standard outbound IDoc statuses
Status Description
01 IDoc generated
02 Error passing data to port
03 Data passed to port
04 Error within control information of EDI subsystem
05 Error during translation
06 Translation
07 Error during syntax check
08 Syntax check
09 Error during interchange
10 Interchange handling
11 Error during dispatch
12 Dispatch OK
13 Retransmission OK
14 Interchange acknowledgement positive
15 Interchange acknowledgement negative
16 Functional acknowledgement positive
17 Functional acknowledgement negative
18 Triggering EDI subsystem OK
19 Data transfer for test OK
20 Error triggering EDI subsystem
22 Dispatch OK, acknowledgement still due
23 Error during retransmission
24 Control information of EDI subsystem OK
25 Processing despite syntax error
26 Error during syntax check of IDoc
27 Error in dispatch level (ALE service)
29 Error in ALE service
30 IDoc ready for dispatch (ALE service)
31 Error no further processing
32 IDoc was edited
33 Original of an IDoc which was edited
IDoc status list
56,60
IDoc direction
2 (inbound)
After saving the rule according to these settings, when the rule becomes active a
file named SAPCPU_r3evmon.cfg is created on SAPCPU. It contains the following
!IDOC keyword:
!IDOC 0003001000556,600001200000000000000000000000000000000000000000000000000000000
IDoc monitoring is automatically started. When the event condition is verified, the
action defined in the rule is triggered.
For an explanation of the !IDOC keyword format, refer to Table 77 on page 329.
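The length-prefixed layout of the !IDOC keyword can be decoded with a short routine like the following sketch. It is illustrative only; the four-digit length prefix and the field order follow the format description given for the !IDOC keyword in Table 77.

```python
def parse_idoc_record(record: str) -> list:
    """Decode the length-prefixed fields of an !IDOC line.

    Each field is encoded as a four-digit length followed by the value
    itself; a length of 0000 means the field is empty.
    """
    prefix = "!IDOC "
    if not record.startswith(prefix):
        raise ValueError("not an !IDOC record")
    data = record[len(prefix):]
    fields, pos = [], 0
    while pos < len(data):
        length = int(data[pos:pos + 4])
        pos += 4
        fields.append(data[pos:pos + length])
        pos += length
    return fields

# Client 001, IDoc status list 56,60, direction 2, then empty fields.
fields = parse_idoc_record("!IDOC 0003001000556,6000012" + "0000" * 3)
print(fields[:3])  # ['001', '56,60', '2']
```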
With IBM Workload Scheduler, you can integrate the CCMS monitoring functions
into your management infrastructure by defining event rules based on the alerts
raised in the SAP system.
Business scenarios
The following sections describe:
v “Business scenario: defining an event rule to process alerts related to IDocs”
v “Business scenario: defining an event rule to process alerts related to operating
system”
SAP systems are shipped with a predefined set of monitors, grouped in monitor
sets. A monitor set contains a list of monitors; each monitor contains a set of
monitoring trees. A monitor is a set of monitoring tree elements (MTEs)
arranged in a hierarchical structure, named the alert monitoring tree. You can
define event rules based on the alert generated for a specific MTE.
Figure 12 shows the monitor named BW Monitor (belonging to the monitor set
SAP BW Monitor) and its associated monitor tree elements (MTEs).
Figure 12. A monitor and its MTEs - © SAP AG 2009. All rights reserved.
To configure how IBM Workload Scheduler retrieves the CCMS alerts, set
ccms_alert_history in the options file. For details about this option, refer to
“Defining the common options” on page 212.
To define the CCMS event for your rule, specify the following information. For
more details about how you separate the MTE name into the individual IBM
Workload Scheduler fields, see “Mapping between the MTE name and IBM
Workload Scheduler fields” on page 320.
| Extended or dynamic agent workstation
| The name of the extended or dynamic agent workstation running event
| monitoring.
| Note:
| 1. If you specify a pattern with the wildcard asterisk (*), all the agents
| whose name matches the pattern will monitor the specified event.
| 2. As a best practice, define that an event belonging to an SAP system is
| monitored by one agent workstation only. If the same SAP event is
| monitored by more than one agent, you might either be notified
| multiple times for the same event occurrence or the first agent that
| notifies the event occurrence makes that event unavailable to the other
| agents.
| 3. If you modify the extended agent configuration in the r3batch option
| files, to make the changes effective you must stop and restart the agent.
| 4. For dynamic agents you can specify the name of a local options file. In
| the Properties section of the Create Event Rules window of the
| Dynamic Workload Console a lookup button provides a list of all the
| local options files associated with that agent. If you do not specify the
| name of a local options file, the global options file is used by default in
| the rule definition.
| MTE SAP System ID
Name of the SAP system where the MTE is located (for example, GS0 in
Figure 12 on page 318). This field is required. Wildcards are not allowed;
you can specify up to eight characters.
MTE Monitoring Context Name
Name of the monitoring context to which the MTE belongs. This field is
required. A monitoring context is a logically connected group of
monitoring objects that are ordered together under one summary in the
monitoring tree (for example, Background in Figure 12 on page 318).
Wildcards are not allowed; you can specify up to 40 characters.
MTE Monitoring Object Name
Name of the monitoring object in the alert monitor. This field is required.
A monitoring object is a component or property of the system that is to be
monitored (for example, BackgroundService in Figure 12 on page 318). If
you choose not to specify a value, you must leave the value NULL, which
is the default.
Wildcards are not allowed; you can specify up to 40 characters.
MTE Monitoring Attribute Name
Name of the monitoring attribute in the alert monitor. In the monitoring
tree, a monitoring attribute is always an end node in the hierarchy (for
example, SystemWideFreeBPWP in Figure 12 on page 318). This field is
required. If you choose not to specify a value, you must leave the value
NULL, which is the default.
Wildcards are not allowed; you can specify up to 40 characters.
Alert Value
Numeric value that indicates the color of the alert generated for the MTE.
This field is optional. You can specify one or a combination of the
following values:
1 Green, meaning Everything OK.
2 Yellow, meaning Warning.
3 Red, meaning Problem or error.
If you do not specify any value, all the alerts generated for the MTE are
considered.
Within SAP, MTEs are identified by a name made up of several tokens, separated
by backslashes (\). To display the complete MTE name, select the MTE and click
Properties or press F1:
Figure 13. Name and description of an MTE - © SAP AG 2009. All rights reserved.
According to the type of MTE that you want to monitor, you must fill in each IBM
Workload Scheduler field with a specific token of the MTE name (to know your
MTE type, select the MTE and click Legend):
If you are using Dynamic Workload Console V8.5.1, or later
1. In the Event Rule Editor panel, from the Properties section, click
Autofill MTE Tokens. The MTE Name window opens.
2. In the MTE Name field, write the name of the MTE to monitor and
click OK. You are returned to the Event Rule Editor panel, where the
IBM Workload Scheduler fields are filled in accordingly.
If you are using Dynamic Workload Console prior to V8.5.1
Refer to the instructions provided in the following sections:
v “Context MTE”
v “Object MTE” on page 321
v “Attribute MTE” on page 322
For example:
T10\SystemConfiguration\...
Refer to Table 71 for an explanation about how you report this type of
MTE in the IBM Workload Scheduler fields:
Table 71. Mapping between root context MTE name and IBM Workload Scheduler fields
IBM Workload Scheduler field Token of MTE name In this example...
MTE SAP System ID tokenA T10
MTE Monitoring Context Name tokenB SystemConfiguration
MTE Monitoring Object Name N/A NULL
MTE Monitoring Attribute Name N/A NULL
Summary
According to the SAP version you are using, a summary context MTE
name can have either of the following formats:
tokenA\tokenB\...\tokenC\...
- OR -
tokenA\tokenB\tokenC
For example:
T10\SystemConfiguration\...\InstalledSupportPackages\...
Refer to Table 72 for an explanation about how you report this type of
MTE in the IBM Workload Scheduler fields:
Table 72. Mapping between summary context MTE name and IBM Workload Scheduler
fields
IBM Workload Scheduler field Token of MTE name In this example...
MTE SAP System ID tokenA T10
MTE Monitoring Context Name tokenB SystemConfiguration
MTE Monitoring Object Name tokenC InstalledSupportPackages
MTE Monitoring Attribute Name N/A NULL
Object MTE: According to the SAP version you are using, an object MTE name
can have either of the following formats:
tokenA\tokenB\tokenC\tokenD
- OR -
tokenA\tokenB\...\tokenD
For example:
PR0\amsp53_PR0_11\R3Services\Background\
Refer to Table 73 for an explanation about how you report this type of MTE in the
IBM Workload Scheduler fields:
Table 73. Mapping between object MTE name and IBM Workload Scheduler fields
IBM Workload Scheduler field Token of MTE name In this example...
MTE SAP System ID tokenA PR0
MTE Monitoring Context Name tokenB amsp53_PR0_11
MTE Monitoring Object Name tokenD Background
MTE Monitoring Attribute Name N/A NULL
For example:
PR0\amsp53_PR0_11\R3Services\Background\AbortedJobs
Refer to Table 74 for an explanation about how you report this type of MTE in the
IBM Workload Scheduler fields:
Table 74. Mapping between attribute MTE name and IBM Workload Scheduler fields
IBM Workload Scheduler field Token of MTE name In this example...
MTE SAP System ID tokenA PR0
MTE Monitoring Context Name tokenB amsp53_PR0_11
MTE Monitoring Object Name tokenD Background
MTE Monitoring Attribute Name tokenE AbortedJobs
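For an attribute MTE, the token-to-field mappings in Tables 71 through 74 amount to splitting the name on backslashes and picking the first, second, and last two tokens. The helper below is an illustrative sketch, not part of the product; other MTE types need the type-specific rules shown in the tables above.

```python
def split_attribute_mte_name(mte_name: str) -> dict:
    """Map a full attribute-MTE name onto the IBM Workload Scheduler
    fields: first token = system ID, second token = monitoring context,
    last two tokens = monitoring object and monitoring attribute."""
    tokens = mte_name.split("\\")
    return {
        "MTE SAP System ID": tokens[0],
        "MTE Monitoring Context Name": tokens[1],
        "MTE Monitoring Object Name": tokens[-2],
        "MTE Monitoring Attribute Name": tokens[-1],
    }

# The example from Table 74.
fields = split_attribute_mte_name(
    "PR0\\amsp53_PR0_11\\R3Services\\Background\\AbortedJobs")
print(fields["MTE Monitoring Attribute Name"])  # AbortedJobs
```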
Table 75. Alert properties for correlations (continued)
CCMS alert property Console property Composer property
MSGCLASS XMI Ext Company Name XMIExtCompanyName
MSGID XMI Log Msg ID XMILogMsgID
MTCLASS Alert MT Class AlertMTClass
MTINDEX Alert MT Index AlertMTIndex
MTMCNAME Alert Monitoring Context AlertMTEContext
Name
MTNUMRANGE Alert MTE Range AlertMTERange
MTSYSID Alert MTE System AlertMTESys
MTUID Alert MT Type ID AlertMTTypeID
OBJECTNAME Alert Monitoring Object Name AlertMonObjName
RC Alert Return Code AlertReturnCode
REPORTEDBY Alert Reported By AlertReportedBy
SEVERITY Alert Severity AlertSeverity
STATCHGBY Alert Changed By AlertChangedBy
STATCHGDAT Alert Change Date AlertChangeDate
STATCHGTIM Alert Change Time AlertChangeTime
STATUS Alert Status AlertStatus
USERID User ID UserID
VALUE Alert Value AlertValue
To get the current status of a CCMS alert from IBM Workload Scheduler, use the
external task Get Information (GI). To replace the command arguments with the
actual values, refer to the output returned by the event rule you defined. For
details about the correspondence between the CCMS properties and the Console
and composer properties, see Table 75 on page 322.
Command syntax
r3batch -t GI -c Agent_name -- " -t GAS -alsysid sap_system_ID
-msegname alert_monitoring_segment -aluniqnum alert_UID -alindex alert_index
-alertdate alert_date -alerttime alert_time"
Where:
-t GI Identifier of the task to be performed, in this case GI (Get Information).
| -c Agent_name
| The name of the dynamic or extended agent workstation connected to the
| SAP system where the MTE for which the alert was raised is located.
-t GAS
Identifier of the task to be performed, in this case GAS (Get Alert Status).
-alsysid sap_system_ID
Identifier of the SAP system where the MTE for which the alert was raised
is located. If the name contains blanks, enclose it between single quotes.
-msegname alert_monitoring_segment
Name of the alert monitoring segment. You can specify from 1 to 40
characters.
-aluniqnum alert_UID
Unique identifier of the alert, made up of 10 characters.
-alindex alert_index
Alert index, made up of 10 characters.
-alertdate alert_date
Date of the alert, in the format yyyymmdd.
-alerttime alert_time
Time of the alert, in the format hhmmss.
The following is an example of how to retrieve the current status of a CCMS alert:
r3batch -t GI -c horse10 -- " -t GAS -alsysid T10
-msegname SAP_CCMS_horse10_T10_00 -aluniqnum 0017780869
-alindex 0000000104 -alertdate 20081007 -alerttime 040356"
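If you build such command lines programmatically, the argument layout of the example above can be captured as in the following sketch. It is purely illustrative: running the resulting command still requires the r3batch executable and a configured agent, and when the command is typed on a shell the inner arguments must be wrapped in double quotes as shown in the example.

```python
def get_alert_status_args(agent, alsysid, msegname, aluniqnum,
                          alindex, alertdate, alerttime):
    """Assemble the r3batch -t GI / -t GAS invocation as an argument
    list, mirroring the example shown above."""
    inner = (f" -t GAS -alsysid {alsysid}"
             f" -msegname {msegname} -aluniqnum {aluniqnum}"
             f" -alindex {alindex} -alertdate {alertdate}"
             f" -alerttime {alerttime}")
    return ["r3batch", "-t", "GI", "-c", agent, "--", inner]

args = get_alert_status_args("horse10", "T10", "SAP_CCMS_horse10_T10_00",
                             "0017780869", "0000000104",
                             "20081007", "040356")
```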
The CCMS alerts that you defined as IBM Workload Scheduler events are not
automatically committed after their processing. To commit an alert that was
processed by IBM Workload Scheduler, use the external task Put Information (PI).
To replace the command arguments with the actual values, refer to the output
returned by the event rule you defined. For details about the correspondence
between the CCMS properties and the Console and composer properties, see
Table 75 on page 322.
Command syntax
r3batch -t PI -c Agent_name -- " -t CA -alsysid sap_system_ID
-msegname alert_monitoring_segment -aluniqnum alert_UID -alindex alert_index
-alertdate alert_date -alerttime alert_time"
Where:
-t PI Identifier of the task to be performed, in this case PI (Put Information).
| -c Agent_name
| Name of the dynamic or extended agent workstation connected to the SAP
| system where the MTE for which the alert was raised is located.
-t CA Identifier of the task to be performed, in this case CA (Commit Alert).
-alsysid sap_system_ID
Identifier of the SAP system where the MTE for which the alert was raised
is located. If the name contains blanks, enclose it between single quotes.
-msegname alert_monitoring_segment
Name of the alert monitoring segment. You can specify from 1 to 40
characters.
-aluniqnum alert_UID
Unique identifier of the alert, made up of 10 characters.
-alindex alert_index
Alert index, made up of 10 characters.
-alertdate alert_date
Date of the alert, in the format yyyymmdd.
-alerttime alert_time
Time of the alert, in the format hhmmss.
If the command succeeds, the following message is returned: The CCMS alert was
successfully confirmed.
National Language support
The National Language support feature allows you to install r3batch on a localized
IBM Workload Scheduler workstation and use localized characters for IBM
Workload Scheduler job names, job streams, and R/3 variants.
Using the local and global configuration files, you can set up r3batch to use
different code pages and languages for both its output and its connection with a
remote R/3 system.
As described in “Unicode support” on page 205, this version of Access method for
SAP features Unicode, which is widely supported by R/3 systems since version
4.7. However, if either the workstation running r3batch or the target R/3 systems
do not support Unicode, this section describes how you configure code pages and
national languages for r3batch.
Troubleshooting
Learn what to do if you get any problems while installing or using IBM Workload
Scheduler access methods or plug-ins.
where systemname is the system name or IP address of the SAP R/3 server and
xx is the SAP R/3 instance.
If the command fails to complete, this means that communication between
r3batch and the SAP R/3 application server is down.
v Log on to the SAP R/3 system as an administrator and verify that the IBM
Workload Scheduler RFC user (created as described in “Creating the IBM
Workload Scheduler RFC user” on page 197) exists.
v If the SAP R/3 gateway truncates the connection string, replace the host name
with the IP address.
v If r3batch runs on an AIX system that does not use U.S. English, make sure that
the U.S. Language Environment® is installed on both the IBM Workload
Scheduler workstation and the SAP R/3 database workstation. Otherwise the
error BAD TEXTENV (or a similar error message) might appear in the dev_rfc trace
file and connections to SAP R/3 fail.
Other known problems
Table 77 lists miscellaneous troubleshooting problems.
Table 77. Miscellaneous troubleshooting items
r3batch and r3event: output contains unreadable characters
Symptom: When you enter the r3batch and r3event commands interactively (for
example, to export an SAP calendar), the output is returned in UTF-8 format.
Solution: Either use a shell that supports the UTF-8 code page, or redirect the
output to a file and open it with a text editor that supports the UTF-8 format.
r3batch: SAP job names contain quotation marks (") or reverse quotes (`)
Symptoms: SAP jobs whose names contain quotation marks or reverse quotes are
not displayed in the pick list of the Dynamic Workload Console.
-OR-
You have an IBM Workload Scheduler job that tries to submit an SAP job whose
name contains quotation marks, but it abends with an error. The following
message might be displayed:
EEWO0439E The required options are not specified either in the global
or in the local options file.
Solution: In your SAP system, make a copy of the SAP job and assign it a name
that does not contain quotation marks or reverse quotes.
r3batch: SAP job containing Arabic characters
Symptom: An SAP job abends when the job contains Arabic characters.
Solution: If you run an SAP job that contains Arabic characters, you must set
the local codepage of the agent workstation hosting the r3batch access method
to the Arabic codepage. Refer to the twsmeth_cp keyword in the common options
file, “Defining the common options” on page 212.
r3batch: error messages submitting a job on dynamic agents
Symptom: When working with dynamic workstations and performing actions such as
displaying a process chain, restarting a process chain, or retrieving the spool
list, the following messages might be displayed from the Dynamic Workload
Console:
EEWO0439E The required options are not specified either in the global
or in the local options file.
EEWO1065W The environment variable UNISON_JOB is not set. The process chain
cannot be restarted.
Solution: These messages might indicate that the requested action is not
supported on dynamic workstations. Refer to the IBM Workload Scheduler Release
Notes for more information about IBM Workload Scheduler features and minimum
required versions for compatibility.
r3batch: r3batch hangs when performing actions from the Dynamic Workload Console
Symptoms: r3batch hangs when performing actions from the Dynamic Workload
Console such as selecting from a pick list, submitting a job, or similar actions
that require a connection to the SAP system. The IBM Workload Scheduler job log
might also contain multiple "Timer expired" messages.
Solution: This problem is caused by the IBM Workload Scheduler logging and
tracing component.
r3batch: monitoring SAP events is not performed
Symptom: The SAP events on which the event rule is based are not monitored nor
committed.
Solution: The SAP events being monitored are listed in the following file:
TWS_home/monconf/XAname_r3evmon.cfg
Check that the file is updated and contains the current monitoring plan. The SAP events are
indicated by the following keyword (one for each SAP event on the same extended agent):
!R3EVENT SAP_event_name_lengthSAP_event_name[SAP_event_parm_lengthSAP_event_parm]
where:
SAP_event_name_length
The length of the SAP event name to monitor, in the format nnnn. For example, 0008,
if the event name is SAP_TEST.
SAP_event_name
The name of the SAP event to monitor.
SAP_event_parm_length
The length of the parameter associated with the SAP event to monitor, if any. The
format is nnnn. For example, 0007, if the event name is SAP_PAR.
SAP_event_parm
The parameter associated with the SAP event to monitor, if any. This value is
optional, but omitting it identifies an SAP event with no parameter associated. For
details about how the events are matched between r3evmon.cfg and the SAP system,
see “SAP events matching criteria” on page 306.
For each configuration file, an r3evmon process is started to monitor the SAP events listed. To
start an r3evmon monitoring process for a specific extended agent workstation, enter the
following command.
Note:
1. For UNIX only, the r3evmon command must be entered by the owner of the IBM
Workload Scheduler installation.
2. If you run r3evmon from a Windows DOS shell, the command prompt is not returned until
the process completes.
r3evmon -t SEM -c XAname -- "[-EIFSRV EIF_server -EIFPORT EIF_port]"
where:
XAname
The name of the extended agent workstation.
EIF_server
The host name or IP address of the master domain manager.
EIF_port
The port that the master domain manager uses to receive the event notification.
The events corresponding to the IDOCEventGenerated event type are listed in
TWS_home/monconf/XAname_r3evmon.cfg, where XAname is the name of the SAP
extended agent workstation. It is the same file that is used to monitor SAP
events in general.
Check that the file is updated and contains the current monitoring plan. The events
corresponding to the IDOCEventGenerated event type are indicated by the following keyword
(one for each event on the same extended agent):
!IDOC nnnn<Client Number>nnnn<IDoc Status List>nnnn<Direction>nnnn<Receiver Port>
nnnn<Receiver Partner Type>nnnn<Partner Function of Receiver>
nnnn<Partner Number of Receiver>nnnn<Sender Port>nnnn<Sender Partner Type>
nnnn<Partner Function of Sender>nnnn<Partner Number of Sender>
nnnn<Message Type>nnnn<IDoc Type>nnnn<Logical Message Variant>
nnnn<Logical Message Function>nnnn<Test Flag>nnnn<Output Mode>
where:
nnnn The length of the IDoc field, in four digits. For example, 0005 is the
length of an IDoc status list whose value is 56,60.
<> Contains the value of the field associated with the IDoc to be monitored. For a list of
the supported IDoc fields, refer to Table 66 on page 310.
For each configuration file, an r3evmon process is started to monitor the events listed. Make
sure that an r3evmon monitoring process is started for the involved extended agent
workstation.
r3evmon: monitoring SAP and IDoc events increases memory consumption
Symptom: Memory consumption increases continuously during monitoring of IDoc
and standard SAP events.
Solution: Refer to SAP Notes 1021071 and 1109413.
r3batch: duplicated events generated during IDoc monitoring
Symptom: The action defined in an event rule with the IDOCEventGenerated event
type is unexpectedly repeated.
Solution: Reset the start date and time for the next monitoring loop. These
values are stored in the following file:
TWS_home/methods/r3evmon_cfg/XAname_r3idocmon.cfg
where XAname is the name of the SAP extended agent workstation. You can either:
v Stop r3evmon, delete the XAname_r3idocmon.cfg file, and then start r3evmon
again.
- OR -
v Stop r3evmon, set the date and time in the XAname_r3idocmon.cfg file to the
values you want, and start r3evmon again.
Use the following format for the start date and time:
start_date=YYYYMMDD
start_time=HHMMSS
For example:
start_date=20080307
start_time=115749
@longlink file present in installation directory
Symptom: After installing IBM Workload Scheduler on a computer with an AIX
operating system where a master domain manager is already installed, a
@longlink file containing the following is present in the installation
directory:
methods/_tools/_jvm_64/lib/desktop/icons/HighContrastInverse/48x48/mimetypes/
gnome-mime-text-x-java.png
Solution: The file can be ignored. It does not present any problems for the
proper functioning of the product.
Job throttling does not start on HP-UX
Symptom: You try to start the job throttler and the following error message is
displayed:
Error occurred during initialization of VM
java.lang.NoSuchMethodError: java.lang.Thread.start()V Abort
Cause and Solution: Java version 1.5 does not start because there are symbolic
links to Java version 1.4 libraries used by third-party products. For example,
you might have /usr/lib/pa20_64/libjava.sl linked to
/opt/java1.4/jre/lib/PA_RISC2.0W/libjava.sl.
Before starting the job throttling again, change the PATH and SHLIB_PATH environment
variables as follows:
PATH=TWS_home/methods/_tools/_jvm_64/lib/PA_RISC2.0W:$PATH
export PATH
SHLIB_PATH=TWS_home/methods/_tools/_jvm_64/lib/PA_RISC2.0W:$SHLIB_PATH
export SHLIB_PATH
To apply this change definitively, edit the jobthrottling.sh file by adding the environment
settings after the following line:
#!/bin/sh
Job throttling does not start
Symptom: When you start the job throttling feature, nothing happens and the
following error message is displayed:
EEWOJTR0207E Error, another job throttler instance is already running
against the same SAP system. Foreign job throttler registration is:
Client ID="clientID", Name="TWS4APPS_JOBTHROTTLER", Host="hostname",
UID "UniqueID"
Cause and Solution: Possible causes are:
v You are running job interception collector jobs, but the job interception and job throttling
features cannot run at the same time. Choose which feature to start. For detailed
information, refer to “Job interception and parent-child features” on page 268 and “Job
throttling feature” on page 292.
v Another job throttler instance is running against the same SAP system. You can start only
one job throttler instance.
v A previous job throttler instance created an exclusive lock object on the SAP system that
could have become permanent. To verify it, use transaction sm12 and query for the lock
object named TWS4APPS_JOBTHROTTLER. If the lock object exists, and you are not running any
job throttler or job interception instance, remove the lock manually and restart the job
throttler.
Job throttling does not start
Symptom: When you start the job throttling feature, nothing happens and the
following error message is displayed:
EEWOJT0209E Error, the password format is not valid.
Cause and Solution: Your password is encrypted in old format. To encrypt the password with
the correct encryption version, use the enigma or pwdcrypt programs, or the Option Editor.
For details about how to encrypt the password, see “Encrypting SAP R/3 user passwords” on
page 219.
Job throttling: saving MTE properties generates an informational message
Symptom: When you edit and save the properties of MTEs generated by the job
throttler, the following informational message is displayed:
Message does not exist.
Cause and Solution: In the pop-up window that displays the message, click
Continue and close the Properties window. Your settings are saved. To solve
this problem, edit the MTE properties and save them again, even if you do not
change anything.
Job throttling: error message displayed when creating trace file on HP
operating systems
Symptom: While the job throttler is stopping, there are intercepted jobs to
release on exit. The following error message is displayed:
CJL0006E Handler jobthrottling.trace.handlers.file
is unable to write a log event.
Cause and Solution: The message does not report any real error, and can be
ignored.
The system cannot intercept jobs
Symptom: Although the job interception feature is active on the SAP system, the
intercepted jobs are kept in scheduled state.
Cause and Solution: The job throttler feature or the Java Virtual Machine used
by the job throttler might still be active.
On each extended agent where the job throttler was started at least once, ensure that:
1. You stopped the feature. For details, see “Step 5. Starting and stopping the job throttling
feature” on page 295.
2. The Java Virtual Machine used by the job throttler was stopped by the process. To search
for Java processes, use:
On Windows
The Process Explorer
On UNIX
The command ps -ef | grep throttling
If a Java Virtual Machine instance related to the job throttler is found, kill it.
access method executables r3batch, r3event, psagent: permission denied messages
in the job log
Symptom: The job log reports multiple "Permission denied" messages.
Cause and Solution: The root cause might be that the access method executable,
for example r3batch, is submitted by the root user and not the twsuser. This
creates directories and files with the wrong ownership and file permissions. If
you are running the product on UNIX platforms, verify the ownership of the
following directories and files. Ensure that the twsuser is the owner of the
files and that the user has both read and write permissions on the files, and
execute permission on the directories.
<TWShome>/methods/traces
<TWShome>/methods/traces/*.log
psagent: misleading message displayed if the local options file does not have
the right permissions
Symptom: The job log shows the following message, even though all the mandatory
options were correctly set in the options file:
EEW00439E You did not specify the required options either in the global
or in the local options file.
Solution: Check that the options file has read and write permissions for the
user who is trying to launch the job.
No messages written in the job log
Symptom: IBM Workload Scheduler does not write any messages in the job log if
the file system for tracing is full or the ljuser does not have the correct
permission to write in the trace directory.
The submission of a PeopleSoft job fails
Symptom: The submission of a PeopleSoft job fails and the IBM Workload
Scheduler job log contains a Java exception similar to the following:
Exception in thread "3194" java.lang.ExceptionInInitializerError
at bea.jolt.JoltSessionAttributes.<clinit>(JoltSessionAttributes.java:183)
at psft.pt8.net.JoltSessionPool.createConnection(JoltSessionPool.java:363)
at psft.pt8.net.JoltSessionPool.getJoltSession(JoltSessionPool.java:220)
Cause and Solution: The psjoa.jar path contains special characters.
IBM Workload Scheduler provides the following job plug-ins to extend its job
scheduling capabilities to your SAP business:
v “SAP Process Integration (PI) Channel jobs”
v “SAP BusinessObjects BI jobs” on page 344
For information about the supported versions of the job plug-ins, run the Data
Integration report on the IBM Software Product Compatibility Reports site, and
select the Supported Software tab.
You then schedule SAP PI Channel jobs with IBM Workload Scheduler to take full
advantage of the IBM Workload Scheduler scheduling and automation functions to
manage jobs.
You can manage these jobs both in a distributed and in a z/OS environment, by
selecting the appropriate engine.
For information about the supported versions of the job plug-ins, generate a
dynamic Data Integration report from the IBM Software Product Compatibility
Reports web site, and select the Supported Software tab.
Business scenario
A practical business scenario demonstrating how SAP PI Channel jobs can be used
to manage communication channels.
interactions between these parties include secure data exchange to transmit critical
business information. Given the differences in hardware and operating systems, the
information is converted to the SAP Intermediate Document (IDoc) format for
transferring data related to business transactions. The IDocs are generated in
the sending database and are transmitted to the receiver, which in this case is
an SAP R/3 system.
In this scenario, the process integrator must transform the files into a single
IDoc only if they arrive with specific file names, such as ABC.txt, DEF.txt,
and GHI.txt. To achieve this, the company leverages Event Driven Workload
Automation: an event rule is created that monitors the arrival of the files,
and the action associated with the event rule is to open the sender
communication channel.
After the conversion to IDoc is complete, the process integrator removes the
original source files. The removal of the files is detected by IBM Workload
Scheduler and a job is submitted to close the sender channel until the next iteration
of this same process.
There are several advantages to using the IBM Workload Scheduler plug-in for
SAP PI Channel to control communication channels. For example, the channels can
also be controlled while the backend SAP R/3 system is under maintenance: a job
can be submitted into a job stream to stop the receiving channel and then
reactivate it once the maintenance is complete.
Prerequisites
Prerequisites for using the IBM Workload Scheduler plug-in for SAP PI Channel.
For information about the supported versions of the job plug-ins, generate a
dynamic Data Integration report from the IBM Software Product Compatibility
Reports web site, and select the Supported Software tab.
Defining an IBM Workload Scheduler job that runs an SAP PI
Channel job
Define IBM Workload Scheduler jobs that run SAP PI Channel jobs by using either
the Dynamic Workload Console, the composer command line, or Application Lab.
In a distributed environment, you define an IBM Workload Scheduler job that runs
an SAP PI Channel job by using the Dynamic Workload Console connected to a
distributed engine, Application Lab or the composer command line. In a z/OS
environment, you define jobs by using the Dynamic Workload Console connected
to a z/OS engine.
See Chapter 2, “Defining a job,” on page 7 for more information about creating
jobs using the various interfaces available. Some samples of SAP PI Channel
job definitions are contained in the sections that follow.
A description of the job properties and valid values is detailed in the
context-sensitive help in the Dynamic Workload Console by clicking the question
mark (?) icon in the top-right corner of the properties pane.
Table 78 describes the required and optional attributes for SAP PI Channel jobs,
together with a description of each attribute.
Table 78. Required and optional attributes for the job definition of SAP PI Channel jobs.

Server information
    Use this section to specify the options for the SAP Process
    Integration server.
Host name (required)
    The host name of the SAP Process Integration server.
Server connection port (required)
    The port number of the SAP Process Integration instance.
Service
    Identifies the service of the channel to be administered. You can
    specify an asterisk (*) to administer more than one channel
    simultaneously.
Party
    Identifies the party of the channel to be administered. You can
    specify an asterisk (*) to administer more than one channel
    simultaneously.
Channel (required)
    Identifies the name of the channel to be administered. You can
    specify an asterisk (*) to administer more than one channel
    simultaneously.
User name (required)
    The user that controls the channels. This user requires the
    xi_af_channel_admin_display and xi_af_channel_admin_modify roles.
Chapter 28. SAP job plug-ins to extend workload scheduling capabilities 341
Table 78. Required and optional attributes for the job definition of SAP PI Channel
jobs. (continued)

Password (required)
    The password of the user. The password is encrypted when the job is
    created.
Action
    The action to perform on the channel: start, stop, or verify status.
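As Table 78 shows, the host name, server connection port, channel, user name, and password attributes are mandatory. A minimal sketch of a pre-submission check for these attributes follows; the dictionary keys are informal labels chosen for illustration, not official product attribute names:

```python
# Illustrative sketch only: a pre-submission check that the attributes
# marked as required in Table 78 are present. The dictionary keys here
# are informal labels, not official product attribute names.
REQUIRED = ("host_name", "server_connection_port", "channel",
            "user_name", "password")

def validate_pi_channel_attrs(attrs: dict) -> list:
    """Return the required attributes that are missing or empty."""
    return [name for name in REQUIRED if not attrs.get(name)]

attrs = {
    "host_name": "pihost",
    "server_connection_port": 50000,
    "service": "*",   # optional: * administers all services
    "party": "*",     # optional: * administers all parties
    "channel": "TESTCHANNEL1",
    "user_name": "TWSADMIN",
    "password": "secret",
    "action": "start",
}
print(validate_pi_channel_attrs(attrs))  # → []
```

Such a check catches an incomplete definition before the job is ever submitted to the agent.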
The following example shows the job definition of an SAP PI Channel job with all
the attributes specified:
TWSAGENT#CHANNELTEST
TASK
<?xml version="1.0" encoding="UTF-8"?>
<jsdl:jobDefinition xmlns:jsdl="http://www.abc.com/xmlns/prod/scheduling/1.0/
jsdl" xmlns:jsdlpichannel="http://www.abc.com/xmlns/prod/scheduling/1.0/
jsdlpichannel" name="PICHANNEL">
<jsdl:application name="pichannel">
<jsdlpichannel:pichannel>
<jsdlpichannel:PIChannelParameters>
<jsdlpichannel:PIChannelParms>
<jsdlpichannel:ServerInfo>
<jsdlpichannel:HostName>pihost</jsdlpichannel:HostName>
<jsdlpichannel:PortNumber>50000</jsdlpichannel:PortNumber>
</jsdlpichannel:ServerInfo>
<jsdlpichannel:ChannelInfo>
<jsdlpichannel:ServiceName>*</jsdlpichannel:ServiceName>
<jsdlpichannel:PartyName>*</jsdlpichannel:PartyName>
<jsdlpichannel:ChannelName>TESTCHANNEL1</jsdlpichannel:ChannelName>
</jsdlpichannel:ChannelInfo>
<jsdlpichannel:UserInfo>
<jsdlpichannel:UserName>TWSADMIN</jsdlpichannel:UserName>
<jsdlpichannel:password>
{aes}VlHkyc5ufaC6nMRepctNUbZ1exnDF5zUl+9baDGWgos=</jsdlpichannel:password>
</jsdlpichannel:UserInfo>
<jsdlpichannel:ActionInfo>
<jsdlpichannel:StartAction/>
</jsdlpichannel:ActionInfo>
</jsdlpichannel:PIChannelParms>
</jsdlpichannel:PIChannelParameters>
</jsdlpichannel:pichannel>
</jsdl:application>
</jsdl:jobDefinition>
RECOVERY STOP
See Chapter 2, “Defining a job,” on page 7 for information about creating job
definitions using other supported product interfaces. To define a job that runs an
SAP PI Channel job by using the Dynamic Workload Console, perform the
following procedure.
Procedure
1. In the console navigation tree, expand Administration > Workload Design and
click Manage Workload Definitions.
2. Specify an engine name, either distributed or z/OS. The Workload Designer
opens.
3. In the Working List panel, select the SAP PI Channel job definition.
In a z/OS environment, select:
New > ERP > SAP PI Channel
In a distributed environment, select:
New > Job Definition > ERP > SAP PI Channel
The properties of the job are displayed in the right-hand panel for editing.
4. In the properties panel, specify the attributes for the job definition you are
creating. You can find detailed information about all the attributes in the
contextual help available on each panel.
5. Click Save to save the job definition in the database.
What to do next
You can now proceed to adding the job to a job stream and submitting it to run.
After submission, you can kill the IBM Workload Scheduler for SAP PI Channel job
if necessary; this action is converted into a Stop action for the SAP PI
Channel job.
If the IBM Workload Scheduler agent is not available when you submit the IBM
Workload Scheduler for SAP PI Channel job or when the job is running, IBM
Workload Scheduler collects the job log when the agent restarts and assigns the
Error or ABEND status to the IBM Workload Scheduler for SAP PI Channel job
independently of the status of the SAP PI Channel job.
The table shows how you can map the IBM Workload Scheduler job status to the
SAP PI Channel job status based on the return code you find in the job log output.
Table 79. Mapping between IBM Workload Scheduler and SAP PI Channel job statuses

SAP PI Channel        Dynamic Workload    IBM Workload          IBM Workload Scheduler
Communication Status  Console Job Status  Scheduler Job Status  for z/OS Job Status
Green                 Running             EXEC                  Executing
Green                 Successful          SUCC                  Completed
Red                   Error               ABEND                 Error
Yellow                Error               ABEND                 Error
Grey                  Error               ABEND                 Error
Not Available         Error               FAILED                Error
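The mapping in Table 79 can be sketched as a lookup table. Note one assumption: the Green communication status maps to two different job statuses, so the sketch below adds a hypothetical `running` flag that is not part of the product interface:

```python
# Sketch of Table 79 as a lookup table. The "running" flag is a
# hypothetical addition: the Green communication status maps to two
# different job statuses depending on whether the job is still running.
STATUS_MAP = {
    ("Green", True):          ("Running", "EXEC", "Executing"),
    ("Green", False):         ("Successful", "SUCC", "Completed"),
    ("Red", False):           ("Error", "ABEND", "Error"),
    ("Yellow", False):        ("Error", "ABEND", "Error"),
    ("Grey", False):          ("Error", "ABEND", "Error"),
    ("Not Available", False): ("Error", "FAILED", "Error"),
}

def map_status(comm_status: str, running: bool = False) -> tuple:
    """Return (Console status, IWS status, IWS for z/OS status)."""
    return STATUS_MAP[(comm_status, running)]

print(map_status("Green"))  # → ('Successful', 'SUCC', 'Completed')
```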
Purpose
More information about how to analyze the job log using the supported product
interfaces can be found in Chapter 5, “Analyzing the job log,” on page 13. The
output of an IBM Workload Scheduler for SAP PI Channel job provides valuable
information for problem determination, such as whether:
- The user has the required roles.
- The communication channels are enabled for external control.
- The user name and password are valid.
- The hostname is resolvable.
- The hostname is resolvable but not listening on the indicated port.
- The channel exists.
- The hostname and port are working correctly, but there is no process
integration running.
Sample
This example shows the output of a job where the hostname was resolved
successfully, the port is listening, the process integration is running, the channel
exists and is enabled for external control, and the start action is being performed:
<?xml version="1.0" encoding="UTF-8" ?>
<ChannelAdminResult xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://pihost:50000/AdapterFramework/channelAdmin/
ChannelAdmin.xsd">
<Channels>
<Channel>
<Party></Party>
<Service></Service>
<ChannelName>TESTCHANNEL1</ChannelName>
<ChannelID>f750195443af39b2be83dd5c3686983d</ChannelID>
<ActivationState>STARTED</ActivationState>
<ChannelState>OK</ChannelState>
</Channel>
</Channels>
</ChannelAdminResult>
Exit Status : 0
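A ChannelAdminResult document like the one above can also be checked programmatically. The following sketch uses only the Python standard library; the embedded sample is a simplified copy of the output above (the XML prolog and schema attributes are omitted), and the function name is illustrative:

```python
# Sketch (not a product tool): extracting channel states from a
# ChannelAdminResult document like the sample above, using only the
# Python standard library.
import xml.etree.ElementTree as ET

SAMPLE = """<ChannelAdminResult>
  <Channels>
    <Channel>
      <Party></Party>
      <Service></Service>
      <ChannelName>TESTCHANNEL1</ChannelName>
      <ActivationState>STARTED</ActivationState>
      <ChannelState>OK</ChannelState>
    </Channel>
  </Channels>
</ChannelAdminResult>"""

def channel_states(xml_text: str) -> dict:
    """Map each channel name to its (ActivationState, ChannelState) pair."""
    root = ET.fromstring(xml_text)
    return {
        ch.findtext("ChannelName"): (ch.findtext("ActivationState"),
                                     ch.findtext("ChannelState"))
        for ch in root.iter("Channel")
    }

print(channel_states(SAMPLE))  # → {'TESTCHANNEL1': ('STARTED', 'OK')}
```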
Prerequisites
For information about the supported versions of the job plug-ins, generate a
dynamic Data Integration report from the IBM Software Product Compatibility
Reports web site, and select the Supported Software tab: Data Integration.
Note: Only reports scheduled for the single user who is specified in the SAP
BusinessObjects BI server login are supported. The Schedule for field must be
left at its default setting, Schedule only for myself. Reports with the
Schedule for field set to Schedule for specified users and user groups are not
supported.
2. Verify the content of the file
<TWA_HOME>\TWS\javaExt\cfg\<plug-in_name>.properties
This file contains the plug-in properties that were set at installation time. The
default content of the properties file for SAP BusinessObjects BI plug-in is the
following:
server=
username=
password=
authType=SecEnterprise
pollingPeriod=10
pollingTimeout=7200
csvTextQualifier="
csvColumnDelimiter=,
csvCharset=UTF-8
csvOnePerDataProvider=false
where
server The SAP BusinessObjects BI access URL defined in the SAP
BusinessObjects BI RESTful Web Service application.
username
The name of the user authorized to access the SAP BusinessObjects BI
server.
password
The password that is associated with the user authorized to access the
SAP BusinessObjects BI server.
authType
The type of authentication that is supported by SAP BusinessObjects BI.
Can be:
- secEnterprise (default)
- secLDAP
- secWinAD
- secSAPR3
pollingPeriod
The monitoring frequency. It determines how often the job is
monitored. It is expressed in seconds.
pollingTimeout
The monitoring time. It determines for how long the job is monitored.
At the end of the timeout interval the job fails. It is expressed in
seconds.
csvTextQualifier
Text delimiter in case of .CSV report format type. It can be one of the
following: " '
csvColumnDelimiter
Column delimiter in case of .CSV report format type. It can be one of
the following: , ; tab
csvCharset
Report character set in case of .CSV report format type.
csvOnePerDataProvider
It indicates that a different .CSV report is generated for each of the data
providers present in the report. It can be true or false.
You can choose to override any of the default values set at installation time.
The values that you specify in the properties file are the values that are used at
job definition time.
The properties file is automatically generated either when you perform a "Test
Connection" from the Dynamic Workload Console in the job definition panels, or
when you submit the job to run for the first time. Once the file has been
created, you can customize it. This is especially useful when you need to
schedule several jobs of the same type: you can specify the values in the
properties file once and avoid providing information, such as credentials, for
each job. You can override the values in the properties file by defining
different values at job definition time.
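The precedence just described can be sketched as a simple dictionary merge, where the plug-in properties file supplies defaults and values given at job definition time override them (the keys follow the properties file shown above; the values are illustrative):

```python
# Sketch of the precedence described above: values from the plug-in
# properties file serve as defaults, and values specified at job
# definition time override them.
defaults = {
    "authType": "secEnterprise",
    "pollingPeriod": "10",
    "pollingTimeout": "7200",
}
# Hypothetical values entered in the job definition panel:
job_definition = {"server": "http://hostname:6405/biprws",
                  "pollingPeriod": "30"}

effective = {**defaults, **job_definition}  # job definition wins
print(effective["pollingPeriod"])  # → 30
```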
A description of the job properties and valid values is detailed in the
context-sensitive help in the Dynamic Workload Console by clicking the question
mark (?) icon in the top-right corner of the properties pane.
For more information about creating jobs by using the various supported product
interfaces, see Chapter 2, “Defining a job,” on page 7.
The following table lists the required and optional attributes for SAP
BusinessObjects BI jobs:
Table 80. Required and optional attributes for the definition of a SAP BusinessObjects BI job

server (required if not specified in the plug-in properties file)
    The SAP BusinessObjects BI access URL defined in the SAP
    BusinessObjects BI RESTful Web Service application.
username (required if not specified in the plug-in properties file)
    The name of the user authorized to access the SAP BusinessObjects BI
    server.
password (required if not specified in the plug-in properties file)
    The password that is associated with the user authorized to access
    the SAP BusinessObjects BI server.
authType (required if not specified in the plug-in properties file)
    The type of authentication that is supported by SAP BusinessObjects
    BI. Can be: Enterprise, LDAP, Windows AD, SAP. You can use asterisks
    (*) or question marks (?) as wildcard characters.
timeout
    The monitoring time. It determines for how long the job is monitored.
    At the end of the timeout interval the job fails. It is expressed in
    seconds. The default value set in the plug-in properties file is 7200.
pollingPeriod
    The monitoring frequency. It determines how often the job is
    monitored. It is expressed in seconds. The default value set in the
    plug-in properties file is 10.
formatType
    The type of your SAP BusinessObjects BI report. Can be: Web
    Intelligence, Adobe Acrobat, Microsoft Excel. The default value is
    the default format type that is defined on your SAP BusinessObjects
    BI server.
emailFrom (required if you choose to send your report by email)
    Email From field. If the email parameters are not specified, the
    report is sent to the SAP BusinessObjects BI server (default
    destination).
emailTo (required if you choose to send your report by email)
    Email To field.
emailCc (required if you choose to send your report by email)
    Email Cc field.
emailBcc (required if you choose to send your report by email)
    Email Bcc field.
emailSubject (required if you choose to send your report by email)
    Email Subject field.
emailMessage (required if you choose to send your report by email)
    Email Message text.
emailUseSpecificFileName (required if you choose to send your report by email)
    The name of the file to be attached to your email.
serverGroupId
    The identification of a group of servers created on your SAP
    BusinessObjects BI server. The group of servers includes the server
    to be used to create your report. If the parameter is not specified,
    the first available server is used (default value).
Custom parameters for Webi Reports
    Custom parameters for the SAP BusinessObjects BI Webi Report that you
    want to run.
| You can submit jobs by using the Dynamic Workload Console, Application Lab or
| the conman command line. See Chapter 3, “Scheduling and submitting jobs and job
| streams,” on page 9 for information about how to schedule and submit jobs and
| job streams by using the various interfaces.
| After submission, when the job is running and is reported in EXEC status in IBM
| Workload Scheduler, you can stop it if necessary, by using the kill command.
| However, this action is effective only for the Request/Response scenario;
| after the kill, the IBM Workload Scheduler processes do not wait to receive a
| response from the SAP BusinessObjects BI job.
| If the IBM Workload Scheduler agent stops when you submit the IBM Workload
| Scheduler SAP BusinessObjects BI job or while the job is running, as soon as the
| agent restarts in the Request/Response scenario, IBM Workload Scheduler begins
| monitoring the job from where it stopped and waits for the Response phase. For
| information about how to monitor jobs by using the different product interfaces
| available, see Chapter 4, “Monitoring IBM Workload Scheduler jobs,” on page 11.
Job properties
While the job is running, you can track the status of the job and analyze the
properties of the job. In particular, in the Extra Information section, if the job
contains variables, you can verify the value that is passed to the variable from the
remote system. Some job streams use the variable passing feature; for example,
job 2 in job stream A requires the value of a variable specified in job 1 of
the same job stream in order to run.
For information about how to display the job properties from the various
supported interfaces, see Chapter 5, “Analyzing the job log,” on page 13.
For example, from the conman command line, you can see the job properties by
running:
conman sj <SAP BusinessObjects_job_name>;props
For a SAP BusinessObjects BI job, in the Extra Information section of the
command output, you see the following properties:
Extra Information
Authorization Type = secEnterprise
SAP BusinessObjects resource instance ID = 9547
SAP BusinessObjects resource instance status = Completed(1)
SAP BusinessObjects resource = World Sales Report (rid:5376)
Server address = http://hostname:6405/biprws
User name = [email protected]
where
Authorization Type
The type of authentication that is supported by SAP BusinessObjects BI
Server.
SAP BusinessObjects resource instance ID
The ID of the report instance that is created by the SAP BusinessObjects BI
job.
348 IBM Workload Automation: Scheduling Applications with IBM Workload Automation
SAP BusinessObjects resource instance status
The status of the report instance that is created by the SAP BusinessObjects
BI job.
SAP BusinessObjects resource
The name and the ID of the report that is scheduled by the SAP
BusinessObjects BI job.
Server address
The SAP BusinessObjects BI server that you specify in the Server field.
User name
The name of the user authorized to access the SAP BusinessObjects BI
server that you specify in the User name field.
You can export the SAP BusinessObjects BI job properties that you can see in the
Extra Information section, to a successive job in the same job stream instance. For
more information about the list of job properties that you can export, see the table
about properties for SAP BusinessObjects BI in User's Guide and Reference.
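Because the Extra Information section is plain "key = value" text, it can be parsed with a short sketch like the following (the sample lines repeat the output shown earlier; the parsing approach is illustrative, not a product utility):

```python
# Sketch: turning the "Extra Information" lines shown by
# `conman sj <job>;props` into a dictionary, splitting each line
# on the first "=" character.
EXTRA = """Authorization Type = secEnterprise
SAP BusinessObjects resource instance ID = 9547
SAP BusinessObjects resource instance status = Completed(1)
SAP BusinessObjects resource = World Sales Report (rid:5376)"""

props = {}
for line in EXTRA.splitlines():
    key, _, value = line.partition("=")
    props[key.strip()] = value.strip()

print(props["SAP BusinessObjects resource instance ID"])  # → 9547
```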
The following example shows the job definition for a SAP BusinessObjects BI job:
$JOBS
PHOENIX_1#REPORT1
TASK
<?xml version="1.0" encoding="UTF-8"?>
<jsdl:jobDefinition xmlns:jsdl="http://www.abc.com/xmlns/prod/scheduling/1.0/jsdl"
xmlns:jsdlsapbusinessobjects=
"http://www.abc.com/xmlns/prod/scheduling/1.0/jsdlsapbusinessobjects"
name="SAPBUSINESSOBJECTS">
<jsdl:application name="sapbusinessobjects">
<jsdlsapbusinessobjects:sapbusinessobjects>
<jsdlsapbusinessobjects:SAPBusinessObjectsParameters>
<jsdlsapbusinessobjects:Webi>
<jsdlsapbusinessobjects:PromptsTableValues>
<jsdlsapbusinessobjects:PromptsTableValue key=
"Compare 2005 data with the following Year:(dpId:DP1, type:Text)">2004
</jsdlsapbusinessobjects:PromptsTableValue>
</jsdlsapbusinessobjects:PromptsTableValues>
<jsdlsapbusinessobjects:formatType>webi</jsdlsapbusinessobjects:formatType>
<jsdlsapbusinessobjects:destinationRadioGroup>
<jsdlsapbusinessobjects:emailDestinationRadioButton>
<jsdlsapbusinessobjects:emailFrom>[email protected]
</jsdlsapbusinessobjects:emailFrom>
<jsdlsapbusinessobjects:emailTo>[email protected]
</jsdlsapbusinessobjects:emailTo>
<jsdlsapbusinessobjects:emailCc/>
<jsdlsapbusinessobjects:emailBcc/>
<jsdlsapbusinessobjects:emailSubject>attach</jsdlsapbusinessobjects:emailSubject>
<jsdlsapbusinessobjects:emailMessage>abcdefgh</jsdlsapbusinessobjects:emailMessage>
<jsdlsapbusinessobjects:emailAttachment/>
<jsdlsapbusinessobjects:emailUseSpecificFileName>myname
</jsdlsapbusinessobjects:emailUseSpecificFileName>
<jsdlsapbusinessobjects:emailAddFileExtensionCheckbox/>
</jsdlsapbusinessobjects:emailDestinationRadioButton>
</jsdlsapbusinessobjects:destinationRadioGroup>
<jsdlsapbusinessobjects:serverGroupRadioGroup>
<jsdlsapbusinessobjects:serverGroupRadioButton>
<jsdlsapbusinessobjects:serverGroupId>185084</jsdlsapbusinessobjects:serverGroupId>
<jsdlsapbusinessobjects:onlyUseGroupCheckbox/>
</jsdlsapbusinessobjects:serverGroupRadioButton>
</jsdlsapbusinessobjects:serverGroupRadioGroup>
</jsdlsapbusinessobjects:Webi>
<jsdlsapbusinessobjects:SAPBusinessObjectsParms>
<jsdlsapbusinessobjects:serverConnection>
<jsdlsapbusinessobjects:server>
http://nc005090:6405/biprws</jsdlsapbusinessobjects:server>
<jsdlsapbusinessobjects:username>administrator</jsdlsapbusinessobjects:username>
<jsdlsapbusinessobjects:password>{aes}gphpOW4YKMVFXpmFpm7gGymVVBrEUIWydZeQ6x0uLHA=
</jsdlsapbusinessobjects:password>
<jsdlsapbusinessobjects:authType>secEnterprise</jsdlsapbusinessobjects:authType>
</jsdlsapbusinessobjects:serverConnection>
<jsdlsapbusinessobjects:resourceDetails>
<jsdlsapbusinessobjects:BOObject>Track Data Changes (rid:12248)
</jsdlsapbusinessobjects:BOObject>
<jsdlsapbusinessobjects:timeout>7200</jsdlsapbusinessobjects:timeout>
<jsdlsapbusinessobjects:pollingPeriod>10</jsdlsapbusinessobjects:pollingPeriod>
</jsdlsapbusinessobjects:resourceDetails>
</jsdlsapbusinessobjects:SAPBusinessObjectsParms>
</jsdlsapbusinessobjects:SAPBusinessObjectsParameters>
</jsdlsapbusinessobjects:sapbusinessobjects>
</jsdl:application>
</jsdl:jobDefinition>
RECOVERY STOP
You can see the job log content by running the command conman sj <SAP
BusinessObjects_job_name>;stdlist, where <SAP BusinessObjects_job_name> is the
SAP BusinessObjects BI job name.
For a SAP BusinessObjects BI job log, you see the following information:
===============================================================
= JOB : PHOENIX_1#JOBS[(0000 06/12/14),(JOBS)].SAP1456274126
= TASK :
<jsdl:jobDefinition xmlns:jsdl="http://www.abc.com/xmlns/prod/scheduling/1.0/jsdl"
xmlns:jsdlsapbusinessobjects=
"http://www.abc.com/xmlns/prod/scheduling/1.0/jsdlsapbusinessobjects"
name="SAPBUSINESSOBJECTS">
<jsdl:variables>
<jsdl:stringVariable name="tws.jobstream.name">JOBS
</jsdl:stringVariable>
<jsdl:stringVariable name="tws.jobstream.id">JOBS
</jsdl:stringVariable>
<jsdl:stringVariable name="tws.job.name">SAP1456274126
</jsdl:stringVariable>
<jsdl:stringVariable name="tws.job.workstation">PHOENIX_1
</jsdl:stringVariable>
<jsdl:stringVariable name="tws.job.iawstz">201406120000
</jsdl:stringVariable>
<jsdl:stringVariable name="tws.job.promoted">NO
</jsdl:stringVariable>
<jsdl:stringVariable name="tws.job.resourcesForPromoted">10
</jsdl:stringVariable>
<jsdl:stringVariable name="tws.job.num">262401960
</jsdl:stringVariable>
</jsdl:variables>
<jsdl:application name="sapbusinessobjects">
<jsdlsapbusinessobjects:sapbusinessobjects>
<jsdlsapbusinessobjects:SAPBusinessObjectsParameters>
<jsdlsapbusinessobjects:SAPBusinessObjectsParms>
<jsdlsapbusinessobjects:serverConnection>
<jsdlsapbusinessobjects:server>http://hostname:6405/biprws
</jsdlsapbusinessobjects:server>
<jsdlsapbusinessobjects:username>twsuser
</jsdlsapbusinessobjects:username>
<jsdlsapbusinessobjects:password>{aes}Vd6bVhRQZp0J6J7iyWpI+rWVFzejU0aU5Tr+FyH55dE=
</jsdlsapbusinessobjects:password>
<jsdlsapbusinessobjects:authType>secEnterprise
</jsdlsapbusinessobjects:authType>
</jsdlsapbusinessobjects:serverConnection>
<jsdlsapbusinessobjects:resourceDetails>
<jsdlsapbusinessobjects:BOObject>World Sales Report (rid:5376)
</jsdlsapbusinessobjects:BOObject>
</jsdlsapbusinessobjects:resourceDetails>
</jsdlsapbusinessobjects:SAPBusinessObjectsParms>
</jsdlsapbusinessobjects:SAPBusinessObjectsParameters>
</jsdlsapbusinessobjects:sapbusinessobjects>
</jsdl:application>
<jsdl:resources>
<jsdl:orderedCandidatedWorkstations>
<jsdl:workstation>D84867CC01834629A3C23CCDCF2B5014
</jsdl:workstation>
</jsdl:orderedCandidatedWorkstations>
</jsdl:resources>
</jsdl:jobDefinition>
= TWSRCMAP :
= AGENT : PHOENIX_1
= Job Number: 262401960
= Thu Jun 12 14:56:46 CEST 2014
===============================================================
Scheduling BO object: World Sales Report (rid:5376)
BO object instance created with ID: 9547
Monitoring BO object World Sales Report (rid:5376) - Instance id:9547
BO monitoring iteration 0 Status: Pending
BO monitoring iteration 1 Status: Pending
BO monitoring iteration 2 Status: Running
BO monitoring iteration 3 Status: Running
BO monitoring iteration 4 Status: Running
BO monitoring iteration 5 Status: Completed
BO Report completed with success
===============================================================
= Exit Status : 0
= Elapsed Time (Minutes) : 2
= Thu Jun 12 14:57:47 CEST 2014
===============================================================
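The iteration count in a log like this is bounded by the polling settings described earlier: with the default pollingPeriod of 10 seconds and pollingTimeout of 7200 seconds, at most 720 monitoring iterations can occur before the job fails. A quick check of that arithmetic:

```python
# Defaults from the plug-in properties file shown earlier:
# pollingPeriod=10, pollingTimeout=7200 (both in seconds).
polling_period = 10
polling_timeout = 7200

# Maximum number of monitoring iterations before the job fails.
max_iterations = polling_timeout // polling_period
print(max_iterations)  # → 720
```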
See also
From the Dynamic Workload Console you can perform the same task as described
in
the Dynamic Workload Console User's Guide, section about Creating job definitions.
For more information about how to create and edit scheduling objects, see
the Dynamic Workload Console User's Guide, section about Designing your Workload.
|
| The integration is provided by the SMSE Adapter, which runs on the master
| domain manager. The SMSE Adapter uses the SAP Solution Manager Scheduling
| Enabler (SMSE) interface, which SAP provides to enable external schedulers to
| perform scheduling for Solution Manager.
| With this integration, when you schedule a job from the Scheduling panel of
| Solution Manager, IBM Workload Scheduler takes charge of the job scheduling,
| monitoring, and management tasks, as well as of job triggering and notification.
| The jobs scheduled from Solution Manager on IBM Workload Scheduler must have
| been previously defined in the IBM Workload Scheduler database.
| A job scheduled from the Schedule Jobs or Job Documentation panels in Solution
| Manager to be run by IBM Workload Scheduler is automatically mapped to a job
| stream that is created expressly to include the job.
|
| Registering the master domain manager on SAP Solution Manager
| The first step to run the integration is to register the master domain manager on
| the SAP Solution Manager system.
| To register the master domain manager on the SAP Solution Manager system, you
| must:
| 1. Have established a connection based on RFC or Web Services between the
| master and the Solution Manager system.
| 2. Have the SAP JCo 3.0.2 libraries (dll and jar files) installed in the
| TWS_home/methods/smseadapter/lib directory on the master. To download JCo
| 3.0.2, visit the SAP Service Marketplace.
| Attention: The libraries require the Microsoft Visual C++ Redistributable
| Package (vcredist) to be installed.
| 3. Configure the smseadapter.properties file located in the TWS_home/methods/
| smseadapter/lib directory on the master.
| The file contains a SMSE_ADAPTER_CONNECTION_n section that can be duplicated
| depending on the number of connections that you want to define. You can, in
| fact, set multiple connection definitions for the same master, where, for
| example, the following can vary:
| - The SAP Solution Manager system.
| - The agent that is to run the workload.
| - The SAP user name.
| Note, however, that a master domain manager can have only one active
| connection at a time through the smseadapter. If the adapter finds more than
| one section with the startAdapter property set to true (or not set to false),
| it uses the first section of properties and ignores the others.
| [SMSE_ADAPTER_CONNECTION_1]
| startAdapter =
| ashost =
| sysnr =
| client =
| sid =
| user =
| passwd =
| lang =
| destination =
| setDestinationAsDefault =
| jobStreamNamePrefix =
| agentName =
| notificationThreadCheckInterval =
| adminConsoleHost =
| adminConsolePort =
| adminConsoleUser =
| adminConsoleUserPassword =
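The selection rule described above (the first section whose startAdapter property is true or unset wins) can be sketched with the Python standard library; the section contents below are illustrative, not a real configuration:

```python
# Sketch of the selection rule described above: the adapter uses the
# first SMSE_ADAPTER_CONNECTION_n section whose startAdapter property
# is not set to false, and ignores the others. configparser handles
# the INI-like layout of smseadapter.properties.
import configparser

SAMPLE = """
[SMSE_ADAPTER_CONNECTION_1]
startAdapter = false
ashost = solman1

[SMSE_ADAPTER_CONNECTION_2]
startAdapter = true
ashost = solman2

[SMSE_ADAPTER_CONNECTION_3]
ashost = solman3
"""

def active_connection(text: str):
    """Return the name of the first section with startAdapter != false."""
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    for name in cfg.sections():
        if cfg.get(name, "startAdapter", fallback="true").lower() != "false":
            return name
    return None

print(active_connection(SAMPLE))  # → SMSE_ADAPTER_CONNECTION_2
```

Here section 1 is skipped because startAdapter is false, so section 2 is the active connection and section 3 is ignored even though its startAdapter is unset.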
| Table 81. Properties for the smseadapter.properties file. (continued)
|
| sysnr (required)
|     The SAP system number of the system that the master registers on.
|     This value must have two digits. For example, 00.
| client (required)
|     The SAP client number. For example, 001.
| sid (required)
|     The SAP system identifier (SID) that the master registers on. For
|     example, SM1.
| user (required)
|     The SAP user name that will be used during the notification process
|     to log into SAP Solution Manager. For example, twsadmin.
| passwd (required)
|     The SAP password that will be used during the notification process
|     to log into SAP Solution Manager. You can enter it in clear or
|     encrypted form. To encrypt the password, use the enigma program
|     located in the methods folder on the master.
| lang (required)
|     The SAP logon language. For example, EN.
| destination (required)
|     A name entered here to identify the RFC Destination that will be
|     used to connect to SAP Solution Manager. For example, IWSM2. This
|     name defines the logical connection between the Solution Manager
|     system and the master domain manager, referred to in Solution
|     Manager as the external scheduler. The complete destination name
|     will then be formed by destination@mdm_name, for example,
|     IWSM2@MAS93WIN.
| setDestinationAsDefault
|     Set to true to make this destination the default one. The default
|     is false. Use this property in a context where a Solution Manager
|     system has more than one active destination defined (that is, more
|     registered masters), to set the default external scheduler. If you
|     do not set a default, and you have more external schedulers
|     registered on an SM system, you will have to specify the
|     destination at scheduling time.
| jobStreamNamePrefix
|     A prefix of at least four letters that is to be added to the names
|     of the job streams created when jobs are submitted. The first
|     character must be a letter while the remaining characters can be
|     alphanumeric. The default prefix is SOLMAN.
| agentName
|     The name of the IWS agent that will run the jobs. When you search
|     for the job definition in the Scheduling dialog, the Search utility
|     returns the names of the jobs defined to run on this agent. If no
|     agent name is specified, the Search utility returns the names of
|     the jobs defined to run on all the agents attached to the master
|     domain manager (unless you use filtering).
Chapter 29. Scheduling jobs on IBM Workload Scheduler from SAP Solution Manager 355
| Table 81. Properties for the smseadapter.properties file. (continued)
|
| notificationThreadCheckInterval
|     The time interval, in seconds, between checks made by the
|     notification thread on the status changes of a job. The default is
|     5 seconds. The thread notifies Solution Manager with the status
|     changes of a job.
| adminConsoleURL
|     The protocol used (http or https) and the host name and port of the
|     Dynamic Workload Console attached to the master. For example,
|     https://mydwc:port_number/abc/console. The next four properties,
|     all related to the Dynamic Workload Console, are optional, but if
|     you specify one, you must specify all.
| adminConsoleUser
|     The username that logs onto the Dynamic Workload Console attached
|     to the master.
| adminConsoleUserPassword
|     The password of the username that logs onto the Dynamic Workload
|     Console attached to the master.
|
| Note that if the language configured for the master domain manager is different
| from the language configured for the Solution Manager system, the messages
| issued in the Solution Manager user interface may be displayed in mixed
| languages.
|
| Scheduling
| The Job Management Work Center panel of Solution Manager has two entry points
| for scheduling jobs:
| - The Schedule Jobs item in Common Tasks, a direct way of scheduling, where
|   you pick the job from the IBM Workload Scheduler database and you set the
|   scheduling options and time definitions.
| - The Job Documentation object, where you can create and edit job
|   documentation, schedule, monitor, and manage jobs.
| The jobs scheduled from Solution Manager on IBM Workload Scheduler must have
| been previously defined in the IBM Workload Scheduler database.
| A job scheduled from the Schedule Jobs or Job Documentation panels in Solution
| Manager to be run by IBM Workload Scheduler is automatically mapped to a job
| stream that is created expressly to include the job. The job stream is
| automatically defined in the IBM Workload Scheduler database with a specific
| prefix defined in the smseadapter.properties file.
| Select the Status message check box to enable monitoring tasks for the job.
| In the Start Conditions section select when and how frequently the job will run
| and optionally make selections in the Repeat every and Time Period groups. Your
| selections are then mapped to matching run cycles, valid from and valid to dates,
| working days or non-working days, and time dependencies on IBM Workload
| Scheduler.
| Note: The Extended Window start condition is not supported. All other start
| conditions are supported.
| Use the tabs available on the uppermost part of the panel to manage jobs; for
| example, to copy, reschedule, hold, release, kill, or cancel a job, and to subscribe or
| unsubscribe for status/change notification.
| If a scheduled job has not been started, you can change its start conditions or
| parameters and click Schedule/Change Externally again. Alternatively, you can
| change the start conditions and select Reschedule to reset the job to a new start
| time. In either case, IBM Workload Scheduler deletes the current job instance (that
| has not been started) and creates another one with the new characteristics.
| You can instead click Cancel on a job that has already started but not yet
| completed. In this case, IBM Workload Scheduler deletes the running instance.
| As soon as the job is successfully scheduled, the external job ID and the status are
| updated and you can view the job instance on the Dynamic Workload Console.
Chapter 29. Scheduling jobs on IBM Workload Scheduler from SAP Solution Manager 357
|
| Monitoring
| Job status retrieval and job monitoring tasks are run by IBM Workload Scheduler,
| but you can configure and view them from the Solution Manager Job
| Documentation and Monitoring views. To monitor a job in Solution Manager, you
| must configure a Business Process Monitoring object (BPMon). When monitoring
| data is requested for a job, Solution Manager, through the SMSE adapter, requests
| updates from IBM Workload Scheduler on the job status, logs, and alerts.
| To view the status of a job in the Solution Manager Job Documentation view,
| provided you selected the Status message check box in the Scheduling page,
| follow these steps:
| 1. Open the job documentation view for the job.
| 2. Select the Systems and the Scheduling tabs.
| 3. In the Scheduling page select the External Log button.
| The job log is displayed in a pop-up window.
| 4. Select the Refresh button of the External Job Status field in the Scheduling
| page.
| The current status of the job is displayed in the field.
| To configure monitoring for a scheduled job with the Status message check box
| selected, go to the Job Management Work Center panel of Solution Manager and
| open the Job Documentation view related to the job. There, select the Systems tab
| and in the ensuing page select Monitoring.
| 1. In the Job Identification section of the Monitoring Configuration window, fill
| in all mandatory fields and select the SMSE Push option.
| 2. Select the Alert Configuration tab and configure alerts as needed.
| 3. Fill in the mandatory fields in the Notification and Monitoring Activities
| tabs.
| 4. Select the Generate and Activate buttons on top to save and activate the
| monitoring object.
| With the Push mechanism IBM Workload Scheduler forwards to Solution Manager
| the status changes that a job instance undergoes until it reaches a final status such
| as complete, canceled, error, or killed. IBM Workload Scheduler also forwards
| the following scheduling time information for the job instance:
| v Estimated duration
| v Actual start
| v Actual completion
| On the basis of this information, and according to the alert configuration you
| specified in the Alert Configuration pane, Solution Manager triggers alerts when
| any of the thresholds you specified are reached or exceeded, giving you the means
| to keep the execution of your workload under control.
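The pushed data lends itself to simple threshold checks. The following Python sketch (not product code) shows how the estimated duration, actual start, and actual completion of a job instance can be combined into a duration alert of the kind Solution Manager raises; the percentage-threshold semantics are an assumption for illustration:

```python
from datetime import datetime, timedelta

# Illustrative only: derive a duration alert from the scheduling time
# information that IBM Workload Scheduler pushes for a job instance.
def duration_alert(estimated: timedelta,
                   actual_start: datetime,
                   actual_completion: datetime,
                   threshold_pct: float) -> bool:
    """Return True when the actual run time exceeds the estimated
    duration by more than threshold_pct percent."""
    actual = actual_completion - actual_start
    return actual > estimated * (1 + threshold_pct / 100)

# A 30-minute estimate with a 20% threshold tolerates up to 36 minutes.
start = datetime(2018, 5, 1, 8, 0)
end = datetime(2018, 5, 1, 8, 45)            # the job ran 45 minutes
print(duration_alert(timedelta(minutes=30), start, end, 20))  # True
```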
| To view the alerts for a monitored job, select the Monitoring view in the Job
| Management Work Center:
| 1. Select the monitoring object for the job in the Job Monitoring Standard view.
| 2. Refresh the alert list table after some monitoring period.
|
| Setting up to log traces on WebSphere Application Server
| By default, the SMSE adapter is traced only at start up, and only errors due to the
| incorrect configuration of the adapter (in the smseadapter.properties file) or that
| happen during the startup process are logged in the SystemOut.log file of
| WebSphere Application Server (located in <WAS_profile_path>/logs/server1/
| SystemOut.log or <WAS_profile_path>\logs\server1\SystemOut.log in the master
| domain manager).
| To enable complete logging of the SMSE adapter, you must activate the
| tws_smseadapter trace on WebSphere Application Server.
| where:
| [-user <TWS_user> -password <TWS_user_password>]
| The user and password are optional. By default, the script looks
| for the credentials in the soap.client.props file located in the
| properties directory of the WebSphere Application Server
| profile.
| -mode <trace_mode>
| <trace_mode> is one of the following values:
| active_correlation
| All communications involving the event correlator are
| traced.
| tws_all_jni
| All communications involving the jni code are traced.
| The jni code refers to code in shared C libraries
| invoked from Java™. This option is used by, or under
| the guidance of, IBM Software Support.
| tws_all
| All IBM Workload Scheduler communications are
| traced.
| tws_alldefault
| Resets the trace level to the default level imposed at
| installation.
| tws_bridge
| Only the messages issued by the workload broker
| workstation are traced.
| tws_broker_all
| All dynamic workload broker communications are
| traced.
| tws_broker_rest
| Only the communication between dynamic workload
| broker and the agents is traced.
| tws_cli
| All IBM Workload Scheduler command line
| communications are traced.
| tws_conn
| All IBM Workload Scheduler connector communications
| are traced.
| tws_db
| All IBM Workload Scheduler database communications
| are traced.
| tws_info
| Only information messages are traced. The default
| value.
| tws_planner
| All IBM Workload Scheduler planner communications
| are traced.
| tws_secjni
| All IBM Workload Scheduler jni code auditing and
| security communications are traced. The jni code refers
| to code in shared C libraries invoked from Java. Only
| use this option under the guidance of HCL Software
| Support.
| tws_smseadapter
| All the activities of the Solution Manager External
| (SMSE) adapter on the master domain manager are
| logged in the trace.log file. The only exceptions are
| errors due to missing libraries or errors incurred
| during the startup process, which are recorded in the
| SystemOut.log file.
| tws_utils
| All IBM Workload Scheduler utility communications are
| traced.
| 3. Stop and restart the application server.
| All the SMSE adapter actions will now be logged in the <WAS_profile_path>/logs/
| server1/trace.log or <WAS_profile_path>\logs\server1\trace.log file.
| To disable complete logging, run the command again to change the tracing level
| from all to info.
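The command itself is not shown in this excerpt. In recent releases the utility that changes the WebSphere Application Server trace level is the changeTraceProperties script in the wastools directory of the application server tools; treat the script name, its location, and the sample path below as assumptions and check the Troubleshooting Guide for your release. The -user, -password, and -mode parameters are those described above:

```
# Assumption: changeTraceProperties.sh is the trace utility in wastools;
# the installation path shown is an example.
cd /opt/IBM/TWA/wastools
./changeTraceProperties.sh -user twsuser -password twspass -mode tws_smseadapter

# Later, to return to the default information-only tracing:
./changeTraceProperties.sh -user twsuser -password twspass -mode tws_info
```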
| See the section about setting the traces on the application server for the major IBM
| Workload Scheduler processes in Troubleshooting Guide for more information.
Part 5. Appendixes
Notices
This information was developed for products and services offered in the US. This
material might be available from IBM in other languages. However, you may be
required to own a copy of the product or product version in that language in order
to access it.
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user's responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not grant you
any license to these patents. You can send license inquiries, in writing, to:
websites. The materials at those websites are not part of the materials for this IBM
product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it
believes appropriate without incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information which has been exchanged, should contact:
The licensed program described in this document and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement or any equivalent agreement
between us.
This information is for planning purposes only. The information herein is subject to
change before the products described become available.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
© (your company name) (year).
Portions of this code are derived from IBM Corp. Sample Programs.
© Copyright IBM Corp. _enter the year or years_.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of
International Business Machines Corp., registered in many jurisdictions worldwide.
Other product and service names might be trademarks of IBM or other companies.
A current list of IBM trademarks is available on the web at "Copyright and
trademark information" at www.ibm.com/legal/copytrade.shtml.
Adobe, the Adobe logo, PostScript, and the PostScript logo are either registered
trademarks or trademarks of Adobe Systems Incorporated in the United States,
and/or other countries.
Linear Tape-Open, LTO, the LTO Logo, Ultrium, and the Ultrium logo are
trademarks of HP, IBM Corp. and Quantum in the U.S. and other countries.
Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo,
Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or
registered trademarks of Intel Corporation or its subsidiaries in the United States
and other countries.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of
Microsoft Corporation in the United States, other countries, or both.
Java and all Java-based trademarks and logos are trademarks or registered
trademarks of Oracle and/or its affiliates.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Applicability
These terms and conditions are in addition to any terms of use for the IBM
website.
Personal use
You may reproduce these publications for your personal, noncommercial use
provided that all proprietary notices are preserved. You may not distribute, display
or make derivative work of these publications, or any portion thereof, without the
express consent of IBM.
Commercial use
You may reproduce, distribute and display these publications solely within your
enterprise provided that all proprietary notices are preserved. You may not make
derivative works of these publications, or reproduce, distribute or display these
publications or any portion thereof outside your enterprise, without the express
consent of IBM.
Rights
IBM reserves the right to withdraw the permissions granted herein whenever, in its
discretion, the use of the publications is detrimental to its interest or, as
determined by IBM, the above instructions are not being properly followed.
You may not download, export or re-export this information except in full
compliance with all applicable laws and regulations, including all United States
export laws and regulations.
Notwithstanding the terms and conditions of any other agreement you may have
with IBM or any of its related or affiliated entities (collectively "IBM"), the third
party software code identified below are "Excluded Components" and are subject
to the terms and conditions of the License Information document accompanying
the Program and not the license terms that may be contained in the notices below.
The notices are provided for informational purposes.
v libmsg
v Jakarta ORO
v ISMP Installer
v HSQLDB
v Quick
v Infozip
Libmsg
For the code entitled Libmsg
Permission to use, copy, modify, and distribute this software and its documentation
for any purpose and without fee is hereby granted, provided that the above
copyright notice appear in all copies and that both that copyright notice and this
permission notice appear in supporting documentation, and that Alfalfa's name not
be used in advertising or publicity pertaining to distribution of the software
without specific, written prior permission.
Copyright (c) 2000-2002 The Apache Software Foundation. All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list
of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this
list of conditions and the following disclaimer in the documentation and/or
other materials provided with the distribution.
3. The end-user documentation included with the redistribution, if any, must
include the following acknowledgment: "This product includes software
developed by the Apache Software Foundation (http://www.apache.org/)."
Alternately, this acknowledgment may appear in the software itself, if and
wherever such third-party acknowledgments normally appear.
4. The names "Apache" and "Apache Software Foundation", "Jakarta-Oro" must
not be used to endorse or promote products derived from this software without
prior written permission. For written permission, please contact
[email protected].
5. Products derived from this software may not be called "Apache" or
"Jakarta-Oro", nor may "Apache" or "Jakarta-Oro" appear in their name, without
prior written permission of the Apache Software Foundation.
THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESSED OR IMPLIED
WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE APACHE SOFTWARE
FOUNDATION OR ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
SUCH DAMAGE.
Portions of this software are based upon software originally written by Daniel F.
Savarese. We appreciate his contributions.
JXML CODE
For the code entitled Quick
IBM is required to provide you, as the recipient of such software, with a copy of
the following license from JXML:
Copyright (c) 1998, 1999, JXML, Inc.
All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
v Redistributions of source code must retain the above copyright notice, this list of
conditions and the following disclaimer.
v Redistributions in binary form must reproduce the above copyright notice, this
list of conditions and the following disclaimer in the documentation and/or
other materials provided with the distribution.
All product materials mentioning features or use of this software must display the
following acknowledgement:
v This product includes software developed by JXML, Inc. and its contributors:
http://www.jxml.com/mdsax/contributers.html
Neither name of JXML nor the names of its contributors may be used to endorse or
promote products derived from this software without specific prior written
permission.
InfoZip CODE
For the code entitled InfoZip
IBM is required to provide you, as the recipient of such software, with a copy of
the following license from InfoZip:
v This is version 2000-Apr-09 of the Info-ZIP copyright and license.
The definitive version of this document should be available at
ftp://ftp.info-zip.org/pub/infozip/license.html indefinitely.
For the purposes of this copyright and license, "Info-ZIP" is defined as the
following set of individuals:
v Mark Adler, John Bush, Karl Davis, Harald Denker, Jean-Michel Dubois,
Jean-loup Gailly, Hunter Goatley, Ian Gorman, Chris Herborth, Dirk Haase, Greg
Hartwig, Robert Heath, Jonathan Hudson, Paul Kienitz, David Kirschbaum,
Johnny Lee, Onno van der Linden, Igor Mandrichenko, Steve P. Miller, Sergio
Monesi, Keith Owens, George Petrov, Greg Roelofs, Kai Uwe Rommel, Steve
Salisbury, Dave Smith, Christian Spieler, Antoine Verheijen, Paul von Behren,
Rich Wales, Mike White
This software is provided "as is," without warranty of any kind, express or
implied. In no event shall Info-ZIP or its contributors be held liable for any direct,
indirect, incidental, special or consequential damages arising out of the use of or
inability to use this software.
Permission is granted to anyone to use this software for any purpose, including
commercial applications, and to alter it and redistribute it freely, subject to the
following restrictions:
1. Redistributions of source code must retain the above copyright notice,
definition, disclaimer, and this list of conditions.
2. Redistributions in binary form must reproduce the above copyright notice,
definition, disclaimer, and this list of conditions in documentation and/or other
materials provided with the distribution.
3. Altered versions--including, but not limited to, ports to new operating systems,
existing ports with new graphical interfaces, and dynamic, shared, or static
library versions--must be plainly marked as such and must not be
misrepresented as being the original source. Such altered versions also must not
be misrepresented as being Info-ZIP releases--including, but not limited to,
labeling of the altered versions with the names "Info-ZIP" (or any variation
thereof, including, but not limited to, different capitalizations), "Pocket UnZip,"
"WiZ" or "MacZip" without the explicit permission of Info-ZIP. Such altered
versions are further prohibited from misrepresentative use of the Zip-Bugs or
Info-ZIP e-mail addresses or of the Info-ZIP URL(s).
4. Info-ZIP retains the right to use the names "Info-ZIP," "Zip," "UnZip," "WiZ,"
"Pocket UnZip," "Pocket Zip," and "MacZip" for its own source and binary
releases.
HSQL Code
For the code entitled HSQLDB
IBM is required to provide you, as the recipient of such software, with a copy of
the following license from the HSQL Development Group:
Copyright (c) 2001-2002, The HSQL Development Group All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice, this list of
conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice, this list
of conditions and the following disclaimer in the documentation and/or other
materials provided with the distribution.
Neither the name of the HSQL Development Group nor the names of its
contributors may be used to endorse or promote products derived from this
software without specific prior written permission.
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
DAMAGE.
This Program includes HP Runtime Environment for J2SE HP-UX 11i platform
software as a third party component, which is licensed to you under the terms of
the following HP-UX license agreement and not those of this Agreement
The following terms govern your use of the Software unless you have a separate
written agreement with HP. HP has the right to change these terms and conditions
at any time, with or without notice.
License Grant
HP grants you a license to Use one copy of the Software. "Use" means storing,
loading, installing, executing or displaying the Software. You may not modify the
Software or disable any licensing or control features of the Software. If the
Software is licensed for "concurrent use", you may not allow more than the
maximum number of authorized users to Use the Software concurrently.
Ownership
The Software is owned and copyrighted by HP or its third party suppliers. Your
license confers no title or ownership in the Software and is not a sale of any rights
in the Software. HP's third party suppliers may protect their rights in the event of
any violation of these License Terms.
Some third-party code embedded or bundled with the Software is licensed to you
under different terms and conditions as set forth in the
THIRDPARTYLICENSEREADME.txt file. In addition to any terms and conditions
of any third party license identified in the THIRDPARTYLICENSEREADME.txt file,
the disclaimer of warranty and limitation of liability provisions in this license shall
apply to all code distributed as part of or bundled with the Software.
Source Code
Software may contain source code that, unless expressly licensed for other
purposes, is provided solely for reference purposes pursuant to the terms of this
license. Source code may not be redistributed unless expressly provided for in
these License Terms.
You may only make copies or adaptations of the Software for archival purposes or
when copying or adaptation is an essential step in the authorized Use of the
Software. You must reproduce all copyright notices in the original Software on all
copies or adaptations. You may not copy the Software onto any bulletin board or
similar system.
No Disassembly or Decryption
You may not disassemble or decompile the Software unless HP's prior written
consent is obtained. In some jurisdictions, HP's consent may not be required for
disassembly or decompilation. Upon request, you will provide HP with reasonably
detailed information regarding any disassembly or decompilation. You may not
decrypt the Software unless decryption is a necessary part of the operation of the
Software.
Transfer
Your license will automatically terminate upon any transfer of the Software. Upon
transfer, you must deliver the Software, including any copies and related
documentation, to the transferee. The transferee must accept these License Terms
as a condition to the transfer.
Termination
HP may terminate your license upon notice for failure to comply with any of these
License Terms. Upon termination, you must immediately destroy the Software,
together with all copies, adaptations and merged portions in any form.
Export Requirements
You may not export or re-export the Software or any copy or adaptation in
violation of any applicable laws or regulations.
If the Software is licensed for use in the performance of a U.S. government prime
contract or subcontract, you agree that, consistent with FAR 12.211 and 12.212,
commercial computer Software, computer Software documentation and technical
data for commercial items are licensed under HP's standard commercial license.
SUPPLEMENTAL RESTRICTIONS
You acknowledge the Software is not designed or intended for use in on-line
control of aircraft, air traffic, aircraft navigation, or aircraft communications; or in
the design, construction, operation or maintenance of any nuclear facility. HP
disclaims any express or implied warranty of fitness for such uses.
HP WARRANTY STATEMENT
HP warrants to you, the end customer, that HP hardware, accessories, and supplies
will be free from defects in materials and workmanship after the date of purchase
for the period specified above. If HP receives notice of such defects during the
warranty period, HP will, at its option, either repair or replace products which
prove to be defective. Replacement products may be either new or equivalent in
performance to new.
HP warrants to you that HP Software will not fail to execute its programming
instructions after the date of purchase, for the period specified above, due to
defects in materials and workmanship when properly installed and used. If HP
receives notice of such defects during the warranty period, HP will replace
Software which does not execute its programming instructions due to such defects.
HP does not warrant that the operation of HP products will be uninterrupted or
error free. If HP is unable, within a reasonable time, to repair or replace any
product to a condition warranted, you will be entitled to a refund of the purchase
price upon prompt return of the product. Alternatively, in the case of HP Software,
you will be entitled to a refund of the purchase price upon prompt delivery to HP
of written notice from you confirming destruction of the HP Software, together
with all copies, adaptations, and merged portions in any form.
Warranty does not apply to defects resulting from: (a) improper or inadequate
maintenance or calibration; (b) software, interfacing, parts or supplies not supplied
by HP, (c) unauthorized modification or misuse; (d) operation outside of the
published environmental specifications for the product, (e) improper site
preparation or maintenance, or (f) the presence of code from HP suppliers
embedded in or bundled with any HP product.
Index
A Business Warehouse components
InfoPackage 278
command line (continued)
getting CCMS alert status 323
ABAP step definition process chain 278 monitoring SAP event 223
attribute 258 SAP R/3 278 common options, SAP 212
ABAP/4 modules Business Warehouse InfoPackage completion time not displayed 108
SAP R/3 importing 201 displaying details 284 COMPLETIONCODE, SYSTSIN
access method Business Warehouse InfoPackage and variable 161
PeopleSoft options 145 process chain composer program
SAP 193 managing 279 defining extended agent job 139
SAP common options 212 defining extended agent
SAP global configuration options 209 workstation 135
SAP local configuration options 210
z/OS options 166 C configuration file
example, R/3 186
accessibility xii CCMS
configuration options
activating sending data from job throttler 296
PeopleSoft 145
criteria profile 267 CCMS event
SAP, common 212
job interception 277 committing MTE alert 324
SAP, global 209
agents, supported 3 defining event rule, business
SAP, local 210
Amazon EC2 scenario 317
SAP, usage 219
Amazon EC2 job 85 getting CCMS alert status 323
z/OS 166
Apache Spark CFUSER, z/OS option 167
configuring
MapReduce 81 changing password, RFC user
agent to connect toIBM Cognos in
APARs SAP R/3 203
SSL 35
IY92806 218, 280, 327 checking
IBM Workload Scheduler for IBM
IY97424 275 for files on z/OS 174
InfoSphere DataStage jobs 47
IZ03505 209, 211 IBM Workload Scheduler for z/OS
IBM Workload Scheduler for Oracle
IZ12321 186 job 174
E-Business Suite jobs 116
IZ26839 329 JES job 173
IBM Workload Scheduler to run IBM
IZ33555 217 CHECKINTERVAL, PeopleSoft
Cognos reports 33
IZ37273 240, 256 option 145
job class inheritance for job
IZ42262 241, 253, 257, 260 CHECKINTERVAL, z/OS option 167
throttling 294
APF authorizations, setting 159 cloud
job interception for job throttling 293
application server Hadoop 77, 81
parent-child for job throttling 277
SAP R/3 221 Salesforce jobs 119
SAP access method 207
application servers SAP BusinessObjects BI jobs 344
SAP environment 196
multiple, PeopleSoft 147 Sterling Connect: Direct jobs 15, 53,
SAP R/3 196
application state 73
tracing utility 184
IBM Workload Scheduler for WebSphere MQ jobs 63
z/OS 166
z/OS 174 Cloud & Smarter Infrastructure technical
z/OS gateway 158
authorization profile training xii
connecting
SAP R/3 197 cloud automation
SAP 220
transaction PFCG 198 Amazon EC2 85
connecting to SAP 228
transaction su02 197 IBM SoftLayer 89
connection to SAP R/3
authorizations Microsoft Azure 93
troubleshooting 328
APF, setting 159 Cloudantjob 19
considerations about return code
RACF, setting 159 Cognos procedure for
mapping 180
parameterized filter 32
control file
Cognos prompt type
SAP R/3 200
B date syntax 31
time stamp syntax 31
correction and transport files
balancing SAP R/3 workload using SAP R/3 200
time syntax 31
server groups 243 CPUREC statement
CognosJobExecutor.properties
batch processing ID creating 137
for IBM Cognos reprts 33
PeopleSoft 147 creating
collecting job interception 270, 271
BDC wait action parameters for CCMS event
command
R/3 268 rules 322
opted.sh 131
BLOCKTIME, z/OS option 167 correlation rules for CCMS
SETPROG 159
bm check status 174 events 322
command line
building CPUREC statement 137
committing SAP event 303, 324
criteria hierarchy 266 criteria profile 265
defining extended agent job 139
Business Information Warehouse support DOMREC statement 137
defining extended agent
SAP R/3 278 event rule based on CCMS alerts 318
workstation 135
375
creating (continued) database object for (continued) defining IBM Workload Scheduler jobs
event rule based on CCMS alerts, Oracle E-Business Suite jobs 112 that run (continued)
business scenario 317 PowerCenter jobs 98 SAP PI Channel job by using the
event rule based on IDocs 309 SAP PI Channel jobs 341 Dynamic Workload Console 342
event rule based on IDocs, business DataStageJobExecutor.properties defining IBM Workload Schedulerjobs
scenario 309 IBM Workload Scheduler for IBM that run
event rule based on SAP event 305 InfoSphere DataStage jobs 47 Oracle E-Business Suite job by using
IBM Workload Scheduler for IBM date syntax Cognos the Dynamic Workload
InfoSphere DataStage job 41 prompt type 31 Console 114
IBM Workload Scheduler for DEBUG, SYSTSIN variable 161 defining job
Informatica PowerCenter job 98 defining SAP 233
IBM Workload Scheduler for Oracle ABAP step attribute 258 SAP, dynamically 249
E-Business Suite job 112 action parameters for CCMS event z/OS 167
IBM Workload Scheduler forSAP PI rules 322 defining jobs for
Channel job 341 correlation rules for CCMS IBM Cognos reports by using the
IBM Workload Scheduler jobs for events 322 Dynamic Workload Console 24
running IBM Cognos report 23 database object for deleting
internetwork dependency based on IBM Cognos reports 25 ITWS for Apps 297
SAP event 302, 304 IBM InfoSphere DataStage jobs 42 SAP job 243
job containing InfoPackage 279 Oracle E-Business Suite jobs 112 SAP variant 229
job containing process chains 279 PowerCenter jobs 98 dependency
jobs 7 SAP PI Channel jobs 341 based on SAP event, defining 302,
local options file 133 event rule based on CCMS alerts 318 304
PeopleSoft job 151 event rule based on CCMS alerts, based on SAP event, limitation with
RFC user, SAP R/3 197 business scenario 317 XBP 2.0 301
SAP jobs 224 event rule based on IDocs 309 committing SAP event by external
creating job event rule based on IDocs, business task 303
for IBM Cognos reports by using the scenario 309 mapping between definition and
Dynamic Workload Console 24 event rule based on SAP event 305 resolution, SAP 303
creating jobs external command step attribute 260 dependency on jobs
for IBM InfoSphere DataStage jobs external program step attribute 260 z/OS 169
using the Dynamic Workload global options file 127 diagnostic information
Console 45 Hadoop Distributed File System z/OS job 175
for Oracle E-Business Suite jobs using jobs 69 displaying details
the Dynamic Workload Hadoop Map Reduce jobs 73 Business Warehouse InfoPackage 284
Console 114 IBM BigInsights jobs 15 Business Warehouse InfoPackage
for SAP PI Channel jobs using the IBM Workload Scheduler jobs for IBM job 284
Dynamic Workload Console 342 InfoSphere DataStage jobs 41 process chain job 284
criteria hierarchy IBM Workload Scheduler jobs for SAP job 241
building 266 Informatica PowerCenter jobs 98
description 263 IBM Workload Scheduler jobs for creating 137
Criteria Manager 263 Oracle E-Business Suite jobs 112 downloading z/OS gateway fix pack files
criteria profile IBM Workload Scheduler jobs for by FTP 163
activating 267 running IBM Cognos reports 23 dynamic agent for IBM Cognos in SSL
creating 265 IBM Workload Scheduler jobs for SAP configuring 35
description 263 PI Channel jobs 341 dynamic job definition parameter
CUSTOM keyword, to filter SAP jobs 7 description
events 308 local options file 127 SAP R/3 251
customization procedure PeopleSoft job 151 dynamic job definition syntax
SAP R/3 196 Salesforce jobs 119 ABAP step definition attribute 258
customizing SAP BusinessObjects BI jobs 344 external command step definition
IBM Workload Scheduler for IBM SAP event as internetwork attribute 260
InfoSphere DataStage jobs 47 dependency 302, 304 external program step definition
IBM Workload Scheduler for Oracle SAP job 233 attribute 260
E-Business Suite jobs 116 SAP job dynamically 249 SAP 250
IBM Workload Scheduler to run IBM SAP jobs 224 dynamic job for
Cognos reports 33 SAP R/3 job 281 IBM InfoSphere DataStage job 42
properties file 185 SAP variant 229 Oracle E-Business Suite job 112
Sterling Connect: Direct jobs 53 PowerCenter job 98
supported agents job 138 SAP PI Channel job 341
D supported agents workstation 134
WebSphere MQ jobs 63
dynamic jobs
Amazon EC2 Amazon EC2 85
data file
defining IBM Workload Scheduler jobs Apache Oozie Sqoop 77
SAP R/3 200
that run Apache Spark 81
database object
IBM InfoSphere DataStage job by Hadoop Distributed File System
IBM Cognos 25
using the Dynamic Workload job 69
database object for
Console 45 Hadoop Map Reduce job 73
IBM InfoSphere DataStage jobs 42
dynamic jobs (continued)
IBM BigInsights job 15
executor
IBM Cognos job 25
G
IBM Cloudant 19 executor for gateway messages 178
IBM SoftLayer IBM SoftLayer 89 IBM InfoSphere DataStage job 42 global options file
Microsoft Azure Microsoft Azure 93 Oracle E-Business Suite job 112 defining 127
Salesforce job 119 PowerCenter job 98 editing 130
SAP BusinessObjects BI job 344 SAP PI Channel job 341 modifying with Option Editor 131,
Sterling Connect: Direct job 53 exporting SAP R/3 calendars 133
WebSphere MQ job 63 business scenario 298 mvsjes.opts 127, 129
dynamic jobs for r3batch export function 298 mvsopc.opts 127, 129
IBM Cognos reports 25 extended agent job name 127, 129
Dynamic Workload Console SAP 223 psagent.opts 127, 129
accessibility xii extended agent job defining with r3batch.opts 127, 129
defining supported agents job 139 command line 139 global options, SAP 209
defining workstation for agent with extended agent workstation, defining GSUSER, z/OS option 167
access method 134 with
local options file 130 command line 135
dynamically ISPF 138 H
defining SAP jobs 250 extended and dynamic agent workstation Hive
dynamically, defining SAP job 249 defining with OozieMapReduce 77
end-to-end scheduling 136
external command step definition
E attribute 260
external program step definition
I
editing IBM Cloudant job 19
attribute 260
global options file 130 IBM Cloudant jobs 19
local options file 130 IBM Cognos
education xii introduction 23
EEWTCP00 F report status to IBM Workload
z/OS program component 160 failing submission 109 Scheduler job status 37
EEWTCP02 feature report to run using IBM Workload
z/OS program component 160 job interception, setting SAP R/3 272 Scheduler 23
Employee Training by Year features reports customizing IBM Workload
sample Cognos job interception and parent-child, SAP
procedure for parameterized filter 32 R/3 269 scenario 23
encrypting user password job interception, activating SAP IBM Cognos in SSL
PeopleSoft 147 R/3 277 configuring the agent 35
SAP 219 job interception, collecting SAP IBM Cognos reports
end-to-end scheduling R/3 270, 271 defining jobs for by using the
defining extended agent job 140 job interception, implementing SAP Dynamic Workload Console 24
defining supported agent R/3 269 IBM InfoSphere DataStage
workstation 136 job interception, R/3 269 job for using with IBM Workload
enigma program job interception, setting SAP R/3 272 Scheduler 41
encrypting user password 219 job status to IBM Workload Scheduler
error messages 178 parent-child R/3 277 job status 48
event rule PeopleSoft 143 jobs to run using IBM Workload
action parameters for CCMS event return code mapping 179 Scheduler 47
rules 322 SAP R/3 190 plug-in 41
based on CCMS alerts 317 z/OS 155 scenario 41
based on CCMS alerts, business file IBM SoftLayer
scenario 317 configuration for R/3 186 IBM SoftLayer job 89
based on IDocs 309 mvsjes.properties 185 IBM Workload Scheduler customizing for
based on IDocs, business mvsopc.properties 185 IBM InfoSphere DataStage jobs 47
scenario 309 psagent.properties 185 IBM Workload Scheduler customizing for
based on IDocs, matching r3batch.properties 185 Oracle E-Business Suite jobs 116
criteria 310 return code mapping 179 IBM Workload Scheduler customizing to
correlation rules for CCMS alerts 322 file name run IBM Cognos reports 33
definition 305 return code mapping 183 IBM Workload Scheduler for z/OS
monitoring SAP event 222 filter parameterized Cognos application state 174
SAP, defining 305 procedure for 32 occurrence state 174
SAP, filtering events 308 filtering SAP events in security file 308 operation overview 173
SAP, prerequisite to define 222 fix pack files, z/OS gateway, operation state 173
events, logging 265 downloading by FTP 163 IBM Workload Scheduler for z/OS job
events, raising SAP 246 FTP checking 174
example downloading fix pack files, z/OS launching 173
dynamic SAP job definition 262 gateway installation 163 managing 173
return code mapping 180 task definition, z/OS 168
IBM Workload Scheduler jobs that run introduction (continued) job definition
IBM InfoSphere DataStage job IBM Workload Scheduler for Amazon EC2 jobs 85
defining using the Dynamic Workload SAP 189 Apache Oozie jobs 77
Console 45 Oracle E-Business Suite 111 Apache Spark jobs 81
IBM Workload Scheduler jobs that run PeopleSoft 143 Hadoop Distributed File System
SAP PI Channel job SAP 193 jobs 69
defining using the Dynamic Workload SAP PI Channel 339 Hadoop Map Reduce jobs 73
Console 342 z/OS 155 IBM BigInsights jobs 15
IBM Workload Scheduler jobs that run ISPF IBM SoftLayer jobs 89
Oracle E-Business Suite job defining extended agent Microsoft Azure jobs 93
defining using the Dynamic Workload workstation 138 PeopleSoft 151
Console 114 ITWS_PSXA project Salesforce jobs 119
IDoc PeopleSoft 148 SAP 233
defining event rule 309 SAP BusinessObjects BI jobs 344
defining event rule, business SAP, dynamically 249
scenario 309
defining event rule, matching
J Sterling Connect: Direct jobs 53
WebSphere MQ jobs 63
JCL to unload the tape
criteria 310 job definition for
z/OS gateway installation 157
IEFU84 exit 160 IBM Cognos reports 25
JES job
implementing IBM InfoSphere DataStage jobs 42
checking 173
job interception 269 Oracle E-Business Suite jobs 112
launching 172
InfoPackage PowerCenter jobs 98
managing 171
managing 279 SAP PI Channel jobs 341
monitoring 172
user authorization 278 job definition parameter description
state 172
InfoPackage schedule options SAP R/3 dynamic 251
task definition, z/OS 168
SAP R/3 279 job fails to submit 109
JES operation overview 171, 173
Informatica PowerCenter job IBM Workload Scheduler
JESCMDCHR, SYSTSIN variable 162
job for using with IBM Workload creating job containing
JESINTERFACE, SYSTSIN variable 162
Scheduler 98 InfoPackage 279
job
software requirements for using with creating job containing process
assigning a server group 243
IBM Workload Scheduler 97 chain 279
defining 7
Informatica PowerCenter version 97 job interception
for IBM Cognos reports defining by
Informatica PowerCenter Workflow activating, SAP R/3 feature 277
using the Dynamic Workload
restarting 107 collecting, SAP R/3 feature 270, 271
Console 24
restarting from the failed task 107 enabling and configuring for job
for IBM InfoSphere DataStage job
inheritance r3batch throttling 293
defining using the Dynamic
definition 130 implementing, SAP R/3 feature 269
Workload Console 45
installation 163 SAP R/3 feature 269
for Oracle E-Business Suite job
installation overview setting placeholders in template
defining using the Dynamic
z/OS 156 file 276
Workload Console 114
installing setting, SAP R/3 feature 272
for SAP PI Channel job defining using
ABAP modules, SAP R/3 196 job log 108
the Dynamic Workload
z/OS gateway 156 IBM Workload Scheduler for IBM
Console 342
intercepted job InfoSphere DataStage job 49
IBM Workload Scheduler for IBM
return code mapping 184 IBM Workload Scheduler for Oracle
Cognos reports scheduling 33
interception criteria E-Business Suite job 117
IBM Workload Scheduler for IBM
setting, SAP R/3 feature 272 IBM Workload Scheduler for SAP PI
InfoSphere DataStage defining 41
INTERLINKSUBSYSTEM, SYSTSIN Channel job 344
IBM Workload Scheduler for IBM
variable IBM Workload Scheduler or IBM
InfoSphere DataStage
SYSTSIN variable Cognos reports 37
submitting 47
INTERLINKSUBSYSTEM 161 job plug-in
IBM Workload Scheduler for
internetwork dependency IBM Cognos 23
Informatica PowerCenter
based on SAP event, defining 302, job state
defining 98
304 SAP 244
IBM Workload Scheduler for Oracle
based on SAP event, limitation with job status mapping
E-Business Suite defining 112
XBP 2.0 301 PeopleSoft 153
IBM Workload Scheduler for Oracle
based on SAP event, limitation with job status to
E-Business Suite submitting 115
XBP 3.0 301 IBM Cognos report status 37
IBM Workload Scheduler for
based on SAP R/3 event, IBM InfoSphere DataStage job
PowerCenter submitting 105
prerequisites 301 status 48
IBM Workload Scheduler for SAP PI
committing SAP event by external Oracle E-Business Suite application
Channel submitting 343
task 303 status 117
IBM Workload Scheduler for SAP PI
mapping between definition and PowerCenter workflow status 108
Channel defining 341
resolution, SAP 303 job throttling
jobs monitoring 11
placeholder SAP job 301 business scenario 292
PeopleSoft 151
introduction configuring logging properties 294
SAP job state 244
IBM Cognos 23 deleting ITWS for Apps 297
submitting for supported agent 141
job throttling (continued)
enabling and configuring job
M mvsopc.opts file
definition 127, 129
interception 293 managing mvsopc.properties 185
enabling job class inheritance 294 Business Warehouse InfoPackage and
options in options file 293 process chain 279
IBM Workload Scheduler for z/OS
SAP R/3 feature 292
sending data to CCMS 296 job 173 N
JES job 171 name
starting 295
SAP extended agent job running 223 global options file 127, 129
stopping 296
mapping local options file 127, 129
throttling_send_ccms_data 296
IBM Workload Scheduler and SAP job National Language support
throttling_send_ccms_rate 296
states 244 R/3 327
job tracking
IBM Workload Scheduler job status to new copy
PeopleSoft 144
IBM Cognos report status 37 re-running jobs SAP 249
JobManager.ini 105
IBM Workload Scheduler job status to new executors
jobs
IBM InfoSphere DataStage job Hadoop Distributed File System
IBM Workload Scheduler jobs for IBM
status 48 job 69
Cognos reports defining 23
IBM Workload Scheduler job status to Hadoop Map Reduce job 73
new SAP 224
Oracle E-Business Suite job IBM BigInsights job 15
jobs dynamic definition
status 117 Salesforce job 119
SAP 250
IBM Workload Scheduler job status to SAP BusinessObjects BI job 344
jobthrottling.bat, usage parameter 295
PowerCenter workflow status 108 Sterling Connect: Direct job 53
jobthrottling.sh, usage parameter 295
mapping job status WebSphere MQ job 63
JVMOption 105
PeopleSoft 153 new plug-ins
MAXWAIT, SYSTSIN variable 162 Amazon EC2 85
MCSSTORAGE, SYSTSIN variable 162 Apache Oozie 77
K messages 178 Apache Spark 81
killing Microsoft Azure Hadoop Distributed File System 69
IBM Workload Scheduler job streams Microsoft Azure job 93 Hadoop Map Reduce 73
for IBM Cognos reports 33 modifying IBM BigInsights 15
IBM Workload Scheduler job streams global options file 133 IBM SoftLayer 89
for IBM InfoSphere DataStage local options file 133 Microsoft Azure 93
jobs 47 monitoring NoSQL database 19
IBM Workload Scheduler job streams IBM Workload Scheduler for IBM Salesforce 119
for Oracle E-Business Suite jobs 115 Cognos report job log 37 SAP BusinessObjects BI 344
IBM Workload Scheduler job streams IBM Workload Scheduler for IBM Sterling Connect: Direct 53
for PowerCenter jobs 105 InfoSphere DataStage job log 49 WebSphere MQ 63
IBM Workload Scheduler job streams IBM Workload Scheduler for Oracle
for SAP PI Channel jobs 343 E-Business Suite job log 117
jobs 9
killing SAP job 245
IBM Workload Scheduler for SAP PI O
Channel job log 344 occurrence state
JES job 172 IBM Workload Scheduler for
jobs 11 z/OS 174
L MTEs 317 old copy
launching SAP event defined as event rule 222 re-running jobs SAP 249
IBM Workload Scheduler for z/OS SAP event defined as internetwork OPCINTERFACE, SYSTSIN variable 162
job 173 dependency, XBP 2.0 301 OPCMSGCLASS, SYSTSIN variable 162
JES job 172 MTE alert OPCSUBSYSTEM, SYSTSIN variable 162
LJUSER, PeopleSoft option 145 action parameters for CCMS event operation overview
LJUSER, z/OS option 167 rules 322 IBM Workload Scheduler for
local options file committing by external task 324 z/OS 173
creating with Option Editor 133 correlation rules 322 operation state
defining 127 defining event rule 318 IBM Workload Scheduler for
Dynamic Workload Console 130 defining event rule, business z/OS 173
editing 130 scenario 317 operator password
modifying with Option Editor 131, getting CCMS alert status 323 encrypting on PeopleSoft 147
133 mapping opted.sh
name 127, 129 attribute MTE name and IBM command 131
local options, SAP R/3 210 Workload Scheduler fields 322 Option Editor
logging raised events 265 context MTE name and IBM global options file modifying 131
logon group Workload Scheduler fields 320 local options file modifying 131
SAP R/3 221 object MTE name and IBM opted.sh command 131
Workload Scheduler fields 321 Simple view 132
multiple application servers, Table view 132
PeopleSoft 147 Text view 132
mvsjes.opts file option inheritance r3batch
definition 127, 129 definition 130
mvsjes.properties 185
options PeopleSoft prompt type Cognos (continued)
R/3 National Language support 327 access method options 145 time syntax 31
options file 144 batch processing ID 147 properties file 103
global 127, 130 configuration tasks 144 customizing 185
local 127, 130 connecting to multiple application DEBUG_MAX 185
PeopleSoft 144, 146 servers 147 DEBUG_MID 185
SAP 193, 207 creating job 151 DEBUG_MIN 185
SAP example 219 defining job 151 parameter 185
setting job throttling options, SAP encrypting operator password 147 trace file path 185
R/3 293, 294 functional overview 144 trace level 185
z/OS 166 introduction 143 property file
Oracle E-Business Suite ITWS_PSXA project 148 IBM Workload Scheduler for IBM
introduction 111 job definition 151 Cognos reports 33
job for using with IBM Workload job status mapping 153 IBM Workload Scheduler for IBM
Scheduler 112 job tracking 144 InfoSphere DataStage jobs 47
job status to IBM Workload Scheduler
job status 117 overview 143 E-Business Suite jobs 116
jobs to run using IBM Workload parameters to define job 152 IBM Workload Scheduler for z/OS
Scheduler 116 return code mapping 181 agent IBM Cognos reports 35
Oracle E-Business Suite roles and responsibilities 143 PS_DISTSTATUS, PeopleSoft option 145
security 144 psagent 144, 147
scenario 111 task string parameters 152 psagent.opts file
OracleEBusinessJobExecutor.properties Pig definition 127, 129
IBM Workload Scheduler for Oracle Apache Oozie job 77 psagent.properties
E-Business Suite jobs 116 placeholder file 185
other z/OS jobs for job interception in template PSFT_DOMAIN_PWD, PeopleSoft
task definition, z/OS 169 file 276 option 145
output job log SAP job 301 PSFT_OPERATOR_ID, PeopleSoft
IBM Workload Scheduler for IBM plug-in option 145
Cognos reports 37 for running IBM Cognos reports 25 PSFT_OPERATOR_PWD, PeopleSoft
IBM Workload Scheduler for IBM IBM Cognos 23 option 145
InfoSphere DataStage job 49 plug-in for PSJOAPATH, PeopleSoft option 145
IBM Workload Scheduler for SAP PI IBM InfoSphere DataStage job 42 PUTLINE, SYSTSIN variable 162
Channel job 344 Oracle E-Business Suite job 112 pwdcrypt program
IBM Workload Scheduler for Oracle
E-Business Suite job 117 SAP PI Channel job 341
overview plug-ins
IBM Cognos plug-in 23
IBM InfoSphere DataStage 41
IBM InfoSphere DataStage 41
overview 1
Q
QLIMIT, SYSTSIN variable 162
IBM Workload Scheduler access PORT, SYSTSIN variable 162
methods 1 PowerCenter
IBM Workload Scheduler for workflow status to IBM Workload
SAP 189 Scheduler job status 108 R
IBM Workload Scheduler plug-ins 1 PowerCenterJobExecutor.properties 103 R/3
Oracle E-Business Suite 111 procedure for Cognos BDC wait 268
PeopleSoft 143 parameterized filter 32 configuration file 186
SAP 193 process chain National Language support 327
SAP PI Channel 339 creating IBM Workload Scheduler parent-child feature 277
z/OS 155 job 279 return code mapping 181
managing 279 Unicode support 205
rerunning job 290 r3batch
P restarting 290
schedule options 279
export function 298
option inheritance 130
parameter
user authorization 278 r3batch.opts
enable trace utility 186
process chain job definition 127, 129
max trace files 186
displaying details 284 options file 207
properties file 185
rerunning 286 SAP R/3 207
return code mapping 179
product support, locating 166 r3batch.properties 185
SAP job definition 233
program r3evman command 223
SAP R/3 dynamic job definition 251
composer 135, 139 r3evmon event configuration file 331
trace file path 185
program component RACF authorization, setting 159
trace file size 186
z/OS 160 raised events 265
trace level 185
project raising SAP event 246
parameterized filter Cognos
ITWS_PSXA PeopleSoft 148 re-running job
procedure for 32
prompt type Cognos new copy, SAP 249
parent-child feature
date syntax 31 old copy, SAP 249
R/3 277
time stamp syntax 31
PEERADDRESS, SYSTSIN variable 162
refreshing SAP event (continued) SAP R/3 (continued)
SAP variant 229 raising 246 event defined as internetwork
renamed worklet 108 SAP job dependency 302, 304
report defining dynamically 249 event rule based on CCMS alerts,
IBM Cognos status to IBM Workload deleting 243 business scenario 317
Scheduler job status 37 displaying details 241 event rule based on IDocs, business
requirements, software editing 232 scenario 309
IBM Workload Scheduler for example of defining dynamically 262 event rule prerequisite 222
Informatica PowerCenter 97 killing 245 exporting calendars, business
rerunning job placeholder 301 scenario 298
process chain 286, 290 task string 233 exporting factory calendars,
SAP 247, 286 variable substitution 262 command 298
restarting verifying status 242 extended agent job 223
Informatica PowerCenter SAP PI Channel external command step definition
Workflow 107 introduction 339 attribute 260
process chain 290 job for using with IBM Workload external program step definition
RETRYCOUNT, z/OS option 167 Scheduler 341 attribute 260
return code mapping 181 roles and responsibilities 340 features 190
considerations 180 scenario 339 filtering events in security file 308
example 180 SAP R/3 getting CCMS alert status by external
feature 179 ABAP step definition attribute 258 task 323
file name 183 ABAP/4 modules, importing 201 global options 209
file, creating 179 access method 193 IDoc record used in event rule 309
intercepted job 184 application server 221 InfoPackage schedule options 279
parameter 179 authorization profile 197 installing ABAP modules 196
PeopleSoft 181 Business Information Warehouse introduction 189
R/3 181 support 278 job interception 269, 270, 271, 277
syntax 180 Business Warehouse components 278 job interception and parent-child 269
RFC profile calendars, exporting 298 job throttling 292
SAP R/3 197 CCMS alerts used in event rule 317 jobs dynamic definition 250
RFC user changing password, RFC user 203 local options 210
SAP R/3 328 committing MTE alert by external logon group 221
RFC user password task 324 mapping between definition and
SAP R/3 203 committing SAP event by external resolution of internetwork
roles and tasks task 303 dependency 303
PeopleSoft 143 common options 212 monitoring event 222
SAP 194 configuration 196 options file 193, 207, 219
SAP PI Channel 340 configuration options 209 options National Language
z/OS 155 configuration options usage 219 support 327
RUNLOCATION, PeopleSoft option 146 connecting 220 parameters to define job 233
connection 328 placeholder job for internetwork
control file 200 dependencies 301
S correction and transport files 200
creating job containing
prerequisite to define event rule 222
process chain schedule options 279
sample Employee Training by Year
InfoPackage 279 r3batch export function 298
Cognos
creating job containing process r3evman command 223
procedure for parameterized filter 32
chain 279 r3evmon configuration file 331
SAP
creating RFC user 197 refreshing variant 229
creating jobs 224
customization procedure 196 RFC user password 203
defining jobs 224
data file 200 roles and responsibilities 194
introduction 193
defining event as event rule 305 security file, filtering events 308
re-running jobs 249
defining event as internetwork setting a filter in security file 308
rerunning job 247, 286
dependency 302, 304 setting interception criteria 272
SAP data connection 228
defining event rule based on CCMS setting variant 229
SAP event
alerts 318 supported code pages 328
committing by external task 303
defining event rule based on task string 233
defining as event rule 305
IDocs 309 task string parameter 281
defining as internetwork
defining job 233 task string parameters 233
dependency 302, 304
defining job dynamically 249 transaction PFCG 198
filtering in security file 308
defining variant 229 transaction se38 294
monitoring 222
deleting variant 229 transaction su02 197
placeholder job for internetwork
dynamic job definition example 262 updating variant 229
dependencies 301
dynamic job definition parameter variable substitution 262
prerequisite to define a rule 222
description 251 viewing variant 229
prerequisites for defining as
dynamic job definition syntax 250 SAP R/3 table criteria
internetwork dependency 301
encrypting user password 219 setting template file 275
r3evman command 223
event defined as event rule 305
r3evmon configuration file 331
SAP R/3 table criteria (continued)
setting using the Dynamic Workload
state (continued)
SAP job 244
T
Console, BC-XBP 2.0 272 status mapping table criteria SAP R/3
setting using the Dynamic Workload PeopleSoft job 153 setting template file 275
Console, BC-XBP 3.0 273 status UNKNOWN 108 setting using the Dynamic Workload
scenario submitting Console 272, 273
IBM Cognos 23 IBM Workload Scheduler for IBM tape
IBM InfoSphere DataStage 41 Cognos reports 33 unloading files, z/OS gateway
Oracle E-Business Suite 111 IBM Workload Scheduler for IBM installation 157
SAP PI Channel 339 InfoSphere DataStage job 47 task definition syntax
scheduling IBM Workload Scheduler for Oracle IBM Workload Scheduler for z/OS
IBM Workload Scheduler for IBM E-Business Suite job 115 job 168
Cognos reports 33 IBM Workload Scheduler for JES job, z/OS 168
IBM Workload Scheduler for IBM PowerCenter job 105 other z/OS jobs 169
InfoSphere DataStage job 47 IBM Workload Scheduler for SAP PI z/OS 168
IBM Workload Scheduler for Oracle Channel job 343 task string parameters
E-Business Suite job 115 jobs 9 PeopleSoft job 152
IBM Workload Scheduler for supported agent job 141 SAP job 233
PowerCenter job 105 SUBSYS, SYSTSIN variable 163 SAP R/3 job 233, 281
jobs 9 support, product, locating 166 TCP/IP: stack 163
script file 169 supported TCPIPSTACK, SYSTSIN variable 163
secure network communications 204 agents 3 TCPNAME, SYSTSIN variable 163
security 160 code pages, SAP R/3 328 technical overview
PeopleSoft 144 supported agent job z/OS 171
security file, filtering SAP events 308 submitting 141 technical training xii
security, SAP end-to-end scheduling 140 template file
SNC 204 end-to-end scheduling 140 creating 275
server group supported agents job defining with description 275
balancing SAP R/3 workload 243 Dynamic Workload Console 139 setting placeholders for job
SERVER_NAME_LIST, PeopleSoft supported, agents 3 interception 276
option 146 SVCDUMP, SYSTSIN variable 163 temporary variants, examples 262
SETPROG syntax TERMINATOR, SYSTSIN variable 163
command 159 defining SAP jobs dynamically 250 throttling, job 292
setting return code mapping 180 time stamp syntax Cognos
job interception 272 task definition, z/OS 168 prompt type 31
job interception using interception syntax diagrams, how to read xiii time syntax Cognos
criteria and template files 272 syntax for date Cognos prompt type 31
job throttling 293 prompt type 31 timing consideration
SAP R/3 table criteria on the syntax for parameterized filter Cognos z/OS job 175
workstation 272 parameterized filter 31 trace file
SAP R/3 table criteria using the prompt type 31 trace-mvsjes.log 185
Dynamic Workload Console 273 syntax for time Cognos trace-mvsopc.log 185
SAP variant 229 parameterized filter 31 trace-psagent.log 185
template file 275 prompt type 31 trace-r3batch.log 185
SNC 204 syntax for time stamp Cognos tracing utility
software requirements prompt type 31 configuring 184
IBM Workload Scheduler for SYSTSIN variable tracking
Informatica PowerCenter 97 COMPLETIONCODE 161 PeopleSoft job 144
Solution Manager DEBUG 161 training
job scheduling 353, 356 JESCMDCHR 162 technical xii
direct 356 JESINTERFACE 162 transaction PFCG
from job documentation 357 MAXWAIT 162 SAP R/3 198
monitoring jobs 358 MCSSTORAGE 162 transactions, SAP R/3
registering 353 OPCINTERFACE 162 PFCG 198
SMSE adapter 353, 356, 357, 358 OPCMSGCLASS 162 sa38 333
tracing the SMSE adapter 359 OPCSUBSYSTEM 162 se16 272, 294
tws_smseadapter 359 PEERADDRESS 162 se37 301
spool data PORT 162 se38 294, 333
browsing 245 PUTLINE 162 sm69 261
introduction 244 QLIMIT 162 su02 197
SSL SUBSYS 163 troubleshooting
for IBM Cognos configuring agent 35 SVCDUMP 163 EEO0778E 333
SSL configuration 105 TCPIPSTACK 163 extended agent
start time not displayed 108 TCPNAME 163 job log 337
start up 160 TERMINATOR 163 PeopleSoft, job submission
state WTP 163 fails 337
JES job 172 ZOSV1R2 163 PeopleSoft, local options file
rights 337
troubleshooting (continued) user authorizations z/OS (continued)
extended agent Business Warehouse InfoPackage 278 setting APF authorizations 159
Oracle, job submission fails 337 Business Warehouse process setting RACF authorizations 159
job throttling chain 278 task definition syntax 168, 169
alerts are not generated according user password technical overview 171
to threshold 336 encrypting on PeopleSoft 147 troubleshooting 176
does not start 335 encrypting on SAP 219 uninstalling gateway 159
does not start on HP-UX 335 user security z/OS gateway
does not stop 336 setting 195 downloading fix pack files by
error message when creating trace FTP 163
file on HP 336 unloading files from CD 156
saving MTE properties generates
message 336
V unloading files from tape 157
z/OS job
variable substitution
longlink file 335 diagnostic information 175
SAP R/3 262
mvsjes, S047 abend 337 timing consideration 175
variant SAP R/3
permission denied z/OS program component
defining 229
job log 336 EEWTCP00 160
deleting 229
r3batch 329 EEWTCP02 160
refreshing 229
does not start 334 z/OS job dependencies 169
setting 229
environment variables 334 ZOSV1R2, SYSTSIN variable 163
updating 229
modifying job step error 334
viewing 229
monitoring IDoc events 332, 333
variants
monitoring SAP events 329, 330,
temporary 262
331
verifying status
output with unreadable
SAP job 242
characters 329
version 97
scheduling SAP jobs 334
viewing
system cannot intercept jobs 336
SAP variant 229
r3event
spool data 245
output with unreadable
characters 329
r3evmon
monitoring events 330 W
monitoring events increases Web Services Hub restart 109
memory consumption 332 workflow
restarting process of subchain 330 PowerCenter status to IBM Workload
SAP R/3 Scheduler job status 108
connection 328 worklet status 108
error defining internetwork workload, balancing SAP R/3 243
dependency 333 workstation for extended agent or
z/OS 176 dynamic agent, defining with
TWS_MAX_WAIT_TIME, PeopleSoft Dynamic Workload Console 134
option 146 workstation for extended agent, defining
TWS_MIN_WAIT_TIME, PeopleSoft with
option 146 ISPF 138
TWS_RETRY, PeopleSoft option 146 workstation for supported agent, defining
TWSA_SCHED_METH, PeopleSoft with
option 146 command line 135
TWSXA_INLINE_CI, PeopleSoft workstation, defining with
option 146 Dynamic Workload Console 134
WTP, SYSTSIN variable 163
U
u Z
jobthrottling parameter 295 z/OS
Unicode support access method options 166
R/3 205 configuring 166
uninstalling configuring gateway 158
z/OS gateway 159 defining job 167
unloading dependency on jobs 169
files from the CD, z/OS gateway features 155
installation 156 installing 156
files from the tape, z/OS gateway installing gateway 156
installation 157 introduction 155
updating JCL to unload the tape 157
SAP variant 229 program component 160
roles and responsibilities 155