PC_105_WorkflowBasicsGuide_en
10.5
This software and documentation are provided only under a separate license agreement containing restrictions on use and disclosure. No part of this document may be
reproduced or transmitted in any form, by any means (electronic, photocopying, recording or otherwise) without prior consent of Informatica LLC.
U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial
computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such,
the use, duplication, disclosure, modification, and adaptation is subject to the restrictions and license terms set forth in the applicable Government contract, and, to the
extent applicable by the terms of the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software License.
Informatica, the Informatica logo, PowerCenter, and PowerExchange are trademarks or registered trademarks of Informatica LLC in the United States and many
jurisdictions throughout the world. A current list of Informatica trademarks is available on the web at https://www.informatica.com/trademarks.html. Other company
and product names may be trade names or trademarks of their respective owners.
Portions of this software and/or documentation are subject to copyright held by third parties. Required third party notices are included with the product.
The information in this documentation is subject to change without notice. If you find any problems in this documentation, report them to us at
[email protected].
Informatica products are warranted according to the terms and conditions of the agreements under which they are provided. INFORMATICA PROVIDES THE
INFORMATION IN THIS DOCUMENT "AS IS" WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION ANY WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND ANY WARRANTY OR CONDITION OF NON-INFRINGEMENT.
Copying Sessions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Copying Workflow Segments. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Comparing Repository Objects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Comparing Objects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Metadata Extensions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Creating a Metadata Extension. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Editing a Metadata Extension. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Deleting a Metadata Extension. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Expression Editor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Adding Comments. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Validating Expressions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Expression Editor Display. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Keyboard Shortcuts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Viewing Links in a Workflow or Worklet. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Deleting Links in a Workflow or Worklet. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Chapter 3: Sessions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Sessions Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Session Task. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Creating a Session Task. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Editing a Session. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Applying Attributes to All Instances. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Performance Details. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Configuring Performance Details. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Pre- and Post-Session Commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Pre- and Post-Session SQL Commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Using Pre- and Post-Session Shell Commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Chapter 5: Tasks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Tasks Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Creating a Task. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Creating a Task in the Task Developer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Creating a Task in the Workflow or Worklet Designer. . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Configuring Tasks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Reusable Workflow Tasks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
AND or OR Input Links. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Disabling Tasks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Failing Parent Workflow or Worklet. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Working with the Assignment Task. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Command Task. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Using Parameters and Variables. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Assigning Resources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Creating a Command Task. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Executing Commands in the Command Task. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Log Files and Command Tasks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Control Task. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Creating a Control Task. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Working with the Decision Task. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Working with the Event Task. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Example of User-Defined Events. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Event-Raise Tasks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Event-Wait Tasks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Timer Task. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Creating a Timer Task. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Chapter 6: Sources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Sources Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Globalization Features. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Source Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Allocating Buffer Memory. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Partitioning Sources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Configuring Sources in a Session. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Configuring Readers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Configuring Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Configuring Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Working with Relational Sources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Selecting the Source Database Connection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Defining the Treat Source Rows As Property. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
SQL Query Override. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Configuring the Table Owner Name. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Overriding the Source Table Name. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Working with File Sources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Configuring Source Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Configuring Commands for File Sources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Configuring Fixed-Width File Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Configuring Delimited File Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Configuring Line Sequential Buffer Length. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Integration Service Handling for File Sources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Character Set. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Multibyte Character Error Handling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Null Character Handling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Row Length Handling for Fixed-Width Flat Files. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Numeric Data Handling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Working with XML Sources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Server Handling for XML Sources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Using a File List. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Creating the File List. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Configuring a Session to Use a File List. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Chapter 7: Targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Targets Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Globalization Features. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Target Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Partitioning Targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Configuring Targets in a Session. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Configuring Writers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Configuring Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Configuring Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Performing a Test Load. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Configuring a Test Load. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Working with Relational Targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Target Database Connection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Target Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Target Table Truncation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Truncating a Target Table. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Deadlock Retry. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Dropping and Recreating Indexes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Constraint-Based Loading. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Bulk Loading. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Table Name Prefix. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Target Table Name. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Reserved Words. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Teradata Array Insert. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
Working with Target Connection Groups. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
Working with Active Sources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Working with File Targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Configuring Target Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Configuring Commands for File Targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Configuring Fixed-Width Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Configuring Delimited Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Integration Service Handling for File Targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Writing to Fixed-Width Flat Files with Relational Target Definitions. . . . . . . . . . . . . . . . . . 110
Writing to Fixed-Width Files with Flat File Target Definitions. . . . . . . . . . . . . . . . . . . . . . 111
Generating Flat File Targets By Transaction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Writing Empty Fields for Unconnected Ports in Fixed-Width File Definitions. . . . . . . . . . . . . 112
Writing Multibyte Data to Fixed-Width Flat Files. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Null Characters in Fixed-Width Files. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
Character Set. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
Writing Metadata to Flat File Targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
Working with XML Targets in a Session. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Integration Service Handling for XML Targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Character Set. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Special Characters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Null and Empty Strings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Handling Duplicate Group Rows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
DTD and Schema Reference. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Flushing XML on Commits. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
XML Caching Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Session Logs for XML Targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Multiple XML Document Output. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Working with Heterogeneous Targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Reject Files. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Locating Reject Files. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Reading Reject Files. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
PowerExchange for Essbase Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
PowerExchange for Greenplum Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
PowerExchange for Google Analytics Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
PowerExchange for Google BigQuery Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
PowerExchange for Google Cloud Spanner Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
PowerExchange for Google Cloud Storage Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
PowerExchange for Hadoop Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
PowerExchange for HANA Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
PowerExchange for JD Edwards EnterpriseOne Connections. . . . . . . . . . . . . . . . . . . . . . . . . 158
PowerExchange for JMS Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
JNDI Application Connection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
JMS Application Connection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
PowerExchange for Kafka Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
PowerExchange for LDAP Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
Microsoft Azure Blob Storage Connection Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
PowerExchange for Microsoft Azure SQL Data Warehouse V3 Connections. . . . . . . . . . . . . . . . 162
Microsoft Dynamics 365 for Sales Connection Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . 163
PowerExchange for MongoDB JDBC Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
PowerExchange for MSMQ Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
PowerExchange for Netezza Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
PowerExchange for Oracle E-Business Suite Connection Properties. . . . . . . . . . . . . . . . . . . . . 167
PowerExchange for PeopleSoft Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
PowerExchange for PostgreSQL Connection Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
PowerExchange for Salesforce Analytics Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
PowerExchange for Salesforce Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
PowerExchange for SAP NetWeaver Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
SAP R/3 Application Connection for ABAP Integration. . . . . . . . . . . . . . . . . . . . . . . . . . 172
Application Connection for HTTP Stream Mode Sessions. . . . . . . . . . . . . . . . . . . . . . . . 173
Application Connections for ALE Integration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Application Connection for BAPI/RFC Integration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
PowerExchange for SAP NetWeaver BI Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
SAP BW OHS Application Connection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
SAP BW Application Connection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
PowerExchange for Siebel Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Siebel Application Connections for Sources, Targets, and EIM Invoker Transformations. . . . . 177
Siebel Application Connection for EIM Read and Load Transformations. . . . . . . . . . . . . . . 178
PowerExchange for Tableau Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
PowerExchange for Tableau V3 Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
PowerExchange for Teradata Parallel Transporter Connections. . . . . . . . . . . . . . . . . . . . . . . 181
PowerExchange for TIBCO Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
Connection Properties for TIB/Rendezvous Application Connections. . . . . . . . . . . . . . . . . 183
Connection Properties for TIB/Adapter SDK Connections. . . . . . . . . . . . . . . . . . . . . . . . 184
PowerExchange for Web Services Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
PowerExchange for webMethods Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
webMethods Broker Connection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
webMethods Integration Server Connection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
PowerExchange for WebSphere MQ Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
Testing a Queue Connection on Windows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
Testing a Queue Connection on UNIX. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
Connection Object Management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
Creating a Connection Object. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
Editing a Connection Object. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
Deleting a Connection Object. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
Verifying rmail on AIX. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
Verifying sendmail on Linux. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
Configuring MAPI on Windows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Step 1. Configure a Microsoft Outlook User. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Step 2. Configure Logon Network Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
Step 3. Create Distribution Lists. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
Step 4. Verify the Integration Service Settings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
Configuring SMTP on Windows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
Working with Email Tasks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
Using Email Tasks in a Workflow or Worklet. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
Email Address Tips and Guidelines. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
Creating an Email Task. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
Working with Post-Session Email. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
Email Variables and Format Tags. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
Post-Session Email. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
Sample Email. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Suspension Email. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Configuring Suspension Email. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Using Service Variables to Address Email. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
Tips for Sending Email. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
Stopping or Aborting Tasks and Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
Scheduling Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
Unscheduling Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
Session and Workflow Logs in the Workflow Monitor. . . . . . . . . . . . . . . . . . . . . . . . . . . 225
Viewing History Names. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
Workflow and Task Status. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
Using the Gantt Chart View. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
Listing Tasks and Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
Navigating the Time Window in Gantt Chart View. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
Zooming the Gantt Chart View. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
Performing a Search. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
Opening All Folders. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
Using the Task View. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
Filtering in Task View. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
Opening All Folders. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
Tips for Monitoring Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
Log Codes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
Message Severity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
Writing Logs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
Passing Session Events to an External Library. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
Log Events Window. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
Searching for Log Events. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
Working with Log Files. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
Writing to Log Files. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
Archiving Log Files. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
Session Log Rollover. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
Configuring Workflow Log File Information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
Configuring Session Log File Information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
Workflow Logs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
Workflow Log Events Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Workflow Log Sample. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Session Logs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Session Log Events Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
Session Log File Sample. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
Tracing Levels. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
Viewing Log Events. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
Viewing the Log Events Window. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
Viewing an Archived Binary Log File. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
Viewing a Text Log File. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
Events Tab. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
Index. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
Preface
Use the PowerCenter® Workflow Basics Guide to learn how to create, run, schedule, and monitor workflows and sessions.
You can create and configure a workflow in the Workflow Manager. A workflow is a set of instructions that runs the mappings you build in the Designer. Workflows can contain a session and other tasks, such as an email notification. You can also schedule a workflow, and you can create worklets, objects that group a set of tasks so that you can reuse workflow logic.
You can monitor the workflows and sessions in the Workflow Monitor. You can view the status of the
workflows and sessions in the log and you can review detailed log events from each service in the domain.
Informatica Resources
Informatica provides you with a range of product resources through the Informatica Network and other online
portals. Use the resources to get the most from your Informatica products and solutions and to learn from
other Informatica users and subject matter experts.
Informatica Network
The Informatica Network is the gateway to many resources, including the Informatica Knowledge Base and
Informatica Global Customer Support. To enter the Informatica Network, visit
https://network.informatica.com.
To search the Knowledge Base, visit https://search.informatica.com. If you have questions, comments, or
ideas about the Knowledge Base, contact the Informatica Knowledge Base team at
[email protected].
Informatica Documentation
Use the Informatica Documentation Portal to explore an extensive library of documentation for current and
recent product releases. To explore the Documentation Portal, visit https://docs.informatica.com.
If you have questions, comments, or ideas about the product documentation, contact the Informatica
Documentation team at [email protected].
Informatica Velocity
Informatica Velocity is a collection of tips and best practices developed by Informatica Professional Services
and based on real-world experiences from hundreds of data management projects. Informatica Velocity
represents the collective knowledge of Informatica consultants who work with organizations around the
world to plan, develop, deploy, and maintain successful data management solutions.
You can find Informatica Velocity resources at http://velocity.informatica.com. If you have questions,
comments, or ideas about Informatica Velocity, contact Informatica Professional Services at
[email protected].
Informatica Marketplace
The Informatica Marketplace is a forum where you can find solutions that extend and enhance your
Informatica implementations. Leverage any of the hundreds of solutions from Informatica developers and
partners on the Marketplace to improve your productivity and speed up time to implementation on your
projects. You can find the Informatica Marketplace at https://marketplace.informatica.com.
Informatica Global Customer Support
To find your local Informatica Global Customer Support telephone number, visit the Informatica website at
the following link:
https://www.informatica.com/services-and-training/customer-success-services/contact-us.html.
To find online support resources on the Informatica Network, visit https://network.informatica.com and
select the eSupport option.
Chapter 1
Workflow Manager
You can create worklets in the Workflow Manager. A worklet is an object that groups a set of tasks. A worklet is similar to a workflow, but without scheduling information. You can run a batch of worklets inside a workflow.
After you create a workflow, you run the workflow in the Workflow Manager and monitor it in the Workflow
Monitor.
Workflow Manager Tools
To create a workflow, you first create tasks such as a session, which contains the mapping you build in the
Designer. You then connect tasks with conditional links to specify the order of execution for the tasks you
created. The Workflow Manager consists of three tools to help you develop a workflow:
• Task Developer. Use the Task Developer to create tasks you want to run in the workflow.
• Workflow Designer. Use the Workflow Designer to create a workflow by connecting tasks with links. You
can also create tasks in the Workflow Designer as you develop the workflow.
• Worklet Designer. Use the Worklet Designer to create a worklet.
Workflow Manager Windows
The Workflow Manager displays the following windows to help you create and organize workflows:
• Navigator. You can connect to and work in multiple repositories and folders. In the Navigator, the
Workflow Manager displays a red icon over invalid objects.
• Workspace. You can create, edit, and view tasks, workflows, and worklets.
• Output. Contains tabs to display different types of output messages. The Output window contains the
following tabs:
- Save. Displays messages when you save a workflow, worklet, or task. The Save tab displays a validation
summary when you save a workflow or a worklet.
- Fetch Log. Displays messages when the Workflow Manager fetches objects from the repository.
The Workflow Manager also displays a status bar that shows the status of the operation you perform.
Note: For the Timer task and schedule settings, the Workflow Manager displays the date in short date format and the time in 24-hour format (HH:mm).
To configure Workflow Manager options, click Tools > Options. You can configure the following options:
• General. You can configure workspace options, display options, and other general options on the General
tab.
• Format. You can configure font, color, and other format options on the Format tab.
• Miscellaneous. You can configure Copy Wizard and Versioning options on the Miscellaneous tab.
• Advanced. You can configure enhanced security for connection objects on the Advanced tab.
General Options
General options control tool behavior, such as whether or not a tool retains its view when you close it, how
the Overview window behaves, and where the Workflow Manager stores workspace files.
You can configure the following general options in the Workflow Manager:
• Reload Tasks/Workflows When Opening a Folder. Reloads the last view of a tool when you open it. For example, if you have a workflow open when you disconnect from a repository, select this option so that the same workflow appears the next time you open the folder and Workflow Designer. Default is enabled.
• Ask Whether to Reload the Tasks/Workflows. Appears when you select Reload Tasks/Workflows When Opening a Folder. Select this option if you want the Workflow Manager to prompt you to reload tasks, workflows, and worklets each time you open a folder. Default is disabled.
• Delay Overview Window Pans. By default, when you drag the focus of the Overview window, the focus of the workbook moves concurrently. When you select this option, the focus of the workspace does not change until you release the mouse button. Default is disabled.
• Allow Invoking In-Place Editing Using the Mouse. By default, you can press F2 to edit objects directly in the workspace instead of opening the Edit Task dialog box. Select this option so you can also click the object name in the workspace to edit the object. Default is disabled.
• Open Editor When a Task Is Created. Opens the Edit Task dialog box when you create a task. By default, the Workflow Manager creates the task in the workspace. If you do not enable this option, double-click the task to open the Edit Task dialog box. Default is disabled.
• Workspace File Directory. Directory for workspace files created by the Workflow Manager. Workspace files maintain the last task or workflow you saved. This directory should be local to the PowerCenter Client to prevent file corruption or overwrites by multiple users. By default, the Workflow Manager creates files in the PowerCenter Client installation directory.
• Display Tool Names on Views. Displays the name of the tool in the upper left corner of the workspace or workbook. Default is enabled.
• Always Show the Full Name of Tasks. Shows the full name of a task when you select it. By default, the Workflow Manager abbreviates the task name in the workspace. Default is disabled.
• Show the Expression on a Link. Shows the link condition in the workspace. If you do not enable this option, the Workflow Manager abbreviates the link condition in the workspace. Default is enabled.
• Show Background in Partition Editor and Pushdown Optimization. Displays background color for objects in iconic view. Disable this option to remove background color from objects in iconic view. Default is disabled.
• Launch Workflow Monitor when Workflow Is Started. Launches the Workflow Monitor when you start a workflow or a task. Default is enabled.
• Receive Notifications from Repository Service. You can receive notification messages in the Workflow Manager and view them in the Output window. Notification messages include information about objects that another user creates, modifies, or deletes. You receive notifications about sessions, tasks, workflows, and worklets. The Repository Service notifies you of the changes so you know that objects you are working with may be out of date. For the Workflow Manager to receive a notification, the folder containing the object must be open in the Navigator, and the object must be open in the workspace. You also receive user-created notifications posted by the user who manages the Repository Service. Default is enabled.
Format Options
Format options control workspace colors and fonts. You can configure format options for each Workflow
Manager tool.
You can configure the following format options for the Workflow Manager:
• Current Theme. Currently selected color theme for the Workflow Manager tools. This field is display-only.
• Tools. Workflow Manager tool that you want to configure. When you select a tool, the configurable workspace elements appear in the list below the Tools menu.
• Orthogonal Links. Link lines run horizontally and vertically, but not diagonally, in the workspace.
• Solid Lines for Links. Links appear as solid lines. By default, the Workflow Manager displays orthogonal links as dotted lines.
• Change. Change the display font and language script for the selected category. Note: You cannot set the font size for any category. Some font scripts might appear larger than others; for instance, an Arial font might appear larger than a Calibri font.
• Current Font. Font of the Workflow Manager component that is currently selected in the Categories menu. This field is display-only.
Miscellaneous Options
Miscellaneous options control the display settings and available functions of the Copy Wizard, versioning,
and target load options. Target options control how the Integration Service loads targets. To configure the
Copy Wizard, Versioning, and Target Load Type options, click Tools > Options and select the Miscellaneous
tab.
You can configure the following miscellaneous options:
• Generate Unique Name When Resolved to “Rename”. Generates unique names for copied objects if you select the Rename option. For example, if the workflow wf_Sales has the same name as a workflow in the destination folder, the Rename option generates the unique name wf_Sales1. Default is enabled.
• Get Default Object When Resolved to “Choose”. Uses the object with the same name in the destination folder if you select the Choose option. Default is disabled.
• Show Check Out Image in Navigator. Displays the Check Out icon when an object has been checked out. Default is enabled.
• Allow Delete Without Checkout. You can delete versioned repository objects without first checking them out. You cannot, however, delete an object that another user has checked out. When you select this option, the Repository Service checks out an object to you when you delete it. Default is disabled.
• Check In Deleted Objects Automatically After They Are Saved. Checks in deleted objects after you save the changes to the repository. When you clear this option, the deleted object remains checked out and you must check it in from the results view. Default is disabled.
• Target Load Type. Sets the default load type for sessions. You can choose normal or bulk loading. Any change you make takes effect after you restart the Workflow Manager. You can override this setting in the session properties. Default is Bulk.
Advanced Options
Advanced options control enhanced security for connection objects.
When you disable enhanced security, the Workflow Manager assigns read, write, and execute permissions to all users that would otherwise receive permissions of the default group. If you delete the owner from the repository, the Workflow Manager assigns ownership of the object to the administrator.
You can configure the following options for printing the workspace:
• Header and Footer. Displays the window title, page number, number of pages, current date, and current time in the printout of the workspace. You can also indicate the alignment of the header and footer.
• Options. Adds a frame or corner to the page and shows the full names of tasks. You can also choose to print in color or black and white.
You can perform the following operations to navigate the workspace:
• Customize windows.
• Customize toolbars.
• Search for tasks, links, events, and variables.
• Arrange objects in the workspace.
• Zoom and pan the workspace.
You can perform the following operations with windows:
• Display a window. From the menu, select View. Then select the window you want to open.
• Close a window. Click the small x in the upper right corner of the window.
• Dock or undock a window. Double-click the title bar, or drag the title bar toward or away from the workspace.
The Workflow Manager can display the following toolbars to help you select tools and perform operations quickly:
• Standard. Contains buttons to connect to and disconnect from repositories and folders, toggle windows, zoom in and out, pan the workspace, and find objects.
• Connections. Contains buttons to create and edit connections, and assign Integration Services.
• Repository. Contains buttons to connect to and disconnect from repositories and folders, export and
import objects, save changes, and print the workspace.
• View. Contains buttons to customize toolbars, toggle the status bar and windows, toggle full-screen view,
create a new workbook, and view the properties of objects.
• Layout. Contains buttons to arrange and restore objects in the workspace, find objects, zoom in and out,
and pan the workspace.
• Tasks. Contains buttons to create tasks.
• Workflow. Contains buttons to edit workflow properties.
• Run. Contains buttons to schedule the workflow, start the workflow, or start a task.
• Versioning. Contains buttons to check in objects, undo checkouts, compare versions, list checked-out
objects, and list repository queries.
• Tools. Contains buttons to connect to the other PowerCenter Client applications. When you use a Tools
button to open another PowerCenter Client application, PowerCenter uses the same repository connection
to connect to the repository and opens the same folders.
The Workflow Manager includes search features to help you find tasks, links, variables, and events in the workspace, and text in the Output window. You can perform the following types of searches:
• Find in Workspace.
• Find Next.
To find items in the workspace:
1. In any Workflow Manager tool, click the Find in Workspace toolbar button or click Edit > Find in Workspace.
The Find in Workspace dialog box appears.
2. Choose whether to search for tasks, links, variables, or events.
3. Enter a search string, or select a string from the list.
The Workflow Manager saves the last 10 search strings in the list.
To find the next object or text string that matches a search string:
1. To search for a task, link, event, or variable, open the appropriate Workflow Manager tool and click a task, link, or event. To search for text in the Output window, click the appropriate tab in the Output window.
2. Enter a search string in the Find field on the standard toolbar.
The search is not case sensitive.
3. Click Edit > Find Next, click the Find Next button on the toolbar, or press Enter or F3 to search for the
string.
The Workflow Manager highlights the first task name, link condition, event name, or variable name that
contains the search string, or the first string in the Output window that matches the search string.
4. To search for the next item, press Enter or F3 again.
The Workflow Manager alerts you when you have searched through all items in the workspace or Output
window before it highlights the same objects a second time.
To pan the workspace, click Layout > Pan or click the Pan button on the toolbar. Drag the focus of the
workspace window and release the mouse button when it is in the appropriate position. Double-click the
workspace to stop panning.
You can view properties of a folder, task, worklet, or workflow. For folders, the Workflow Manager displays
folder name and whether the folder is shared. Object properties are read-only.
To refresh a folder, right-click the open folder, and then select Refresh.
To refresh the repository folder list, right-click the repository, and then select Refresh Folder List.
To check in an object from the Workflow Manager workspace, select the object or objects and click Versioning > Check in. If you are checking in multiple objects, you can choose to apply the comment to all objects.
If you want to check out or check in scheduler objects in the Workflow Manager, you can run an object query
to search for them. You can also check out a scheduler object in the Scheduler Browser window when you
edit the object. However, you must run an object query to check in the object.
If you want to check out or check in session configuration objects in the Workflow Manager, you can run an
object query to search for them. You can also check out objects from the Session Config Browser window
when you edit them.
You also can check out and check in session configuration and scheduler objects from the Repository
Manager.
Use the following rules and guidelines when you view older versions of objects in the workspace:
• You cannot simultaneously view multiple versions of composite objects, such as workflows and worklets.
• Older versions of a composite object might not include the child objects that were used when the
composite object was checked in. If you open a composite object that includes a child object version that
is purged from the repository, the preceding version of the child object appears in the workspace as part
of the composite object. For example, you might want to view version 5 of a workflow that originally
included version 3 of a session, but version 3 of the session is purged from the repository. When you view
version 5 of the workflow, version 2 of the session appears as part of the workflow.
• You cannot view older versions of sessions if they reference deleted or invalid mappings, or if they do not
have a session configuration.
To open an older version of an object in the workspace:
1. In the workspace or Navigator, select the object and click Versioning > View History.
2. Select the version you want to view in the workspace and click Tools > Open in Workspace.
To compare two versions of an object:
1. In the workspace or Navigator, select an object and click Versioning > View History.
Use an object query to search for versioned objects in the repository that meet specified conditions. You can complete the following tasks with object queries:
• Track repository objects during development. You can add Label, User, Last saved, or Comments parameters to queries to track objects during development.
• Associate a query with a deployment group. When you create a dynamic deployment group, you can
associate an object query with it.
To create an object query, click Tools > Queries to open the Query Browser.
From the Query Browser, you can create, edit, and delete queries. You can also configure permissions for
each query from the Query Browser. You can run any queries for which you have read permissions from the
Query Browser.
Use the Copy Wizard in the Workflow Manager to copy objects. When you copy a workflow or a worklet, the
Copy Wizard copies all of the worklets, sessions, and tasks in the workflow. You must resolve all conflicts
that occur. Conflicts occur when the Copy Wizard finds a workflow or worklet with the same name in the
target folder or when the connection object does not exist in the target repository. If a connection object
does not exist, you can skip the conflict and choose a connection object after you copy the workflow. You
cannot copy connection objects. Conflicts may also occur when you copy Session tasks.
You can configure display settings and functions of the Copy Wizard by choosing Tools > Options.
Note: Use the Import Wizard in the Workflow Manager to import objects from an XML file. The Import Wizard
provides the same options to resolve conflicts as the Copy Wizard.
Copying Sessions
When you copy a Session task, the Copy Wizard looks for the database connection and associated mapping
in the destination folder. If the mapping or connection does not exist in the destination folder, you can select
a new mapping or connection. If the destination folder does not contain any mapping, you must first copy a
mapping to the destination folder in the Designer before you can copy the session.
You can compare objects across folders and repositories. You must open both folders to compare the
objects. You can compare a reusable object with a non-reusable object. You can also compare two versions
of the same object.
You can compare the following types of objects:
• Tasks
• Sessions
• Worklets
• Workflows
You can also compare instances of the same type. For example, if the workflows you compare contain
worklet instances with the same name, you can compare the instances to see if they differ. Use the Workflow
Manager to compare the following instances and attributes:
• Instances of sessions and tasks in a workflow or worklet comparison. For example, when you compare
workflows, you can compare task instances that have the same name.
• Instances of mappings and transformations in a session comparison. For example, when you compare
sessions, you can compare mapping instances.
When you compare objects, the Workflow Manager displays the results in the Diff Tool window. The Diff Tool
output contains different nodes for different types of objects.
When you import Workflow Manager objects, you can compare object conflicts.
Comparing Objects
Use the following procedure to compare objects:
1. Open the folders that contain the objects you want to compare.
2. Open the appropriate Workflow Manager tool.
3. Click Tasks > Compare.
-or-
Click Worklets > Compare.
-or-
Click Workflow > Compare.
4. In the dialog box that appears, select the objects that you want to compare.
5. Click Compare.
Tip: You can also compare objects from the Navigator or workspace. In the Navigator, select the objects,
right-click and select Compare Objects. In the workspace, select the objects, right-click and select
Compare Objects.
6. To view more differences between object properties, click the Compare Further icon or right-click the
differences.
7. If you want to save the comparison as a text or HTML file, click File > Save to File.
Metadata Extensions
You can extend the metadata stored in the repository by associating information with individual repository
objects. For example, you may want to store your name with the worklets you create. If you create a session,
you can store your telephone extension with that session. You associate information with repository objects
using metadata extensions. You can create and promote metadata extensions on the Metadata Extensions
tab.
The Metadata Extensions tab includes the following options:
• Extension Name. Name of the metadata extension. Metadata extension names must be unique for each type of object in a domain. Metadata extension names cannot contain any special characters except underscores and cannot begin with numbers.
• Reusable. Makes the metadata extension reusable or non-reusable. Check to apply the metadata extension to all objects of this type (reusable). Clear to make the metadata extension apply to this object only (non-reusable). Note: If you make a metadata extension reusable, you cannot change it back to non-reusable. The Workflow Manager makes the extension reusable as soon as you confirm the action.
• UnOverride. This column appears only if the value of one of the metadata extensions was changed. To restore the default value, click Revert.
Tip: To create multiple reusable metadata extensions, use the Repository Manager.
Editing a Metadata Extension
You can edit user-defined, reusable, and non-reusable metadata extensions for repository objects using the
Workflow Manager. To edit a metadata extension, you edit the repository object, and then make changes to
the Metadata Extensions tab.
What you can edit depends on whether the metadata extension is reusable or non-reusable. You can promote
a non-reusable metadata extension to reusable, but you cannot change a reusable metadata extension to
non-reusable.
To edit the value of a reusable metadata extension, click the Metadata Extensions tab and modify the Value
field. To restore the default value for a metadata extension, click Revert in the UnOverride column.
To edit a non-reusable metadata extension, click the Metadata Extensions tab. You can update the Datatype,
Value, Precision, and Description fields.
To make the metadata extension reusable, select Reusable. If you make a metadata extension reusable, you
cannot change it back to non-reusable. The Workflow Manager makes the extension reusable as soon as you
confirm the action.
To restore the default value for a metadata extension, click Revert in the UnOverride column.
Expression Editor
The Workflow Manager provides an Expression Editor for any expression in the workflow. You can enter
expressions using the Expression Editor for link conditions, Decision tasks, and Assignment tasks.
The Expression Editor displays built-in variables, user-defined workflow variables, and predefined workflow
variables such as $Session.status.
Adding Comments
You can add comments using -- or // comment indicators with the Expression Editor. Use comments to give
descriptive information about the expression, or you can specify a valid URL to access business
documentation about the expression.
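For example, the following link condition includes a comment after the expression. The session name
s_m_LoadOrders is illustrative:

$s_m_LoadOrders.Status = SUCCEEDED -- run the next task only after the nightly load succeeds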
Validating Expressions
Use the Validate button to validate an expression. If you do not validate an expression, the Workflow Manager
validates it when you close the Expression Editor. You cannot run a workflow with invalid expressions.
Expressions in link conditions and Decision task conditions must evaluate to a numeric value. Workflow
variables used in expressions must exist in the workflow.
You can resize the Expression Editor. Expand the dialog box by dragging from the borders. The Workflow
Manager saves the new size for the dialog box as a client setting.
Keyboard Shortcuts
When you edit a repository object or navigate the Workflow Manager, use the following keyboard shortcuts to
complete different operations quickly.
The following table lists the Workflow Manager keyboard shortcuts for editing a repository object:
Task Shortcut
Find all combination and list boxes. Type the first letter on the list.
Paste copied or cut text from the clipboard into a cell. Ctrl+V
The following table lists the Workflow Manager keyboard shortcuts for navigating in the workspace:
Task Shortcut
Create links. Ctrl+F2. Press Ctrl+F2 to select the first task you want to link. Press Tab to select the
rest of the tasks you want to link. Press Ctrl+F2 again to link all the tasks you selected.
Expand selected node and all its children. SHIFT+* (asterisk on the numeric keypad)
Workflows
This chapter includes the following topics:
• Workflows Overview, 35
• Creating a Workflow, 36
• Using the Workflow Wizard, 37
• Assigning an Integration Service, 39
• Workflow Reports (Deprecated), 40
• Working with Worklets, 40
• Workflow Links, 43
Workflows Overview
A workflow is a set of instructions that tells the Integration Service how to run tasks such as sessions, email
notifications, and shell commands. After you create tasks in the Task Developer and Workflow Designer, you
connect the tasks with links to create a workflow.
In the Workflow Designer, you can specify conditional links and use workflow variables to create branches in
the workflow. The Workflow Manager also provides Event-Wait and Event-Raise tasks to control the sequence
of task execution in the workflow. You can also create worklets and nest them inside the workflow.
Every workflow contains a Start task, which represents the beginning of the workflow.
When you create a workflow, select an Integration Service to run the workflow. You can start the workflow
using the Workflow Manager, Workflow Monitor, or pmcmd.
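For example, the following pmcmd command starts a workflow from the command line. The service, domain,
user, folder, and workflow names are placeholders:

pmcmd startworkflow -sv IS_Prod -d Domain_Main -u Administrator -p <password> -f SalesFolder wf_LoadOrders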
Use the Workflow Monitor to see the progress of a workflow during its run. The Workflow Monitor can also
show the history of a workflow.
Use the following process to develop a workflow:
1. Create a workflow. Create a workflow in the Workflow Designer or by using the Workflow Generation
Wizard in the PowerCenter Designer.
2. Add tasks to the workflow. You might have already created tasks in the Task Developer. Or, you can add
tasks to the workflow as you develop the workflow in the Workflow Designer.
3. Connect tasks with links. After you add tasks to the workflow, connect them with links to specify the
order of execution in the workflow.
4. Specify conditions for each link. You can specify conditions on the links to create branches and
dependencies.
5. Validate workflow. Validate the workflow in the Workflow Designer to identify errors.
6. Save workflow. When you save the workflow, the Workflow Manager validates the workflow and updates
the repository.
7. Run workflow. In the workflow properties, select an Integration Service to run the workflow. Run the
workflow from the Workflow Manager, Workflow Monitor, or pmcmd. You can monitor the workflow in
the Workflow Monitor.
Related Topics:
• “Manual Workflow Runs” on page 203
• “Workflow Monitor” on page 216
• “Workflow Properties Reference” on page 280
Creating a Workflow
A workflow must contain a Start task. The Start task represents the beginning of a workflow. When you
create a workflow, the Workflow Designer creates a Start task and adds it to the workflow. You cannot delete
the Start task.
After you create a workflow, you can add tasks to the workflow. The Workflow Manager includes tasks such
as the Session, Command, and Email tasks.
Finally, you connect workflow tasks with links to specify the order of execution in the workflow. You can add
conditions to links.
When you edit a workflow, the Repository Service updates the workflow information when you save the
workflow. If a workflow is running when you make edits, the Integration Service uses the updated information
the next time you run the workflow.
You can also create a workflow through the Workflow Wizard in the Workflow Manager or the Workflow
Generation Wizard in the PowerCenter Designer.
If you have already created tasks in the Task Developer, add them to the workflow by dragging the tasks from
the Navigator to the Workflow Designer workspace.
To create and add tasks as you develop the workflow, click Tasks > Create in the Workflow Designer. Or, use
the Tasks toolbar to create and add tasks to the workflow. Click the button on the Tasks toolbar for the task
you want to create. Click again in the Workflow Designer workspace to create and add the task.
Tasks you create in the Workflow Designer are non-reusable. Tasks you create in the Task Developer are
reusable.
Deleting a Workflow
You may decide to delete a workflow that you no longer use. When you delete a workflow, you delete all non-
reusable tasks and reusable task instances associated with the workflow. Reusable tasks used in the
workflow remain in the folder when you delete the workflow.
If you delete a workflow that is running, the Integration Service aborts the workflow. If you delete a workflow
that is scheduled to run, the Integration Service removes the workflow from the schedule.
You can delete a workflow in the Navigator window, or you can delete the workflow currently displayed in the
Workflow Designer workspace:
• To delete a workflow from the Navigator window, open the folder, select the workflow and press the
Delete key.
• To delete a workflow currently displayed in the Workflow Designer workspace, click Workflows > Delete.
Using the Workflow Wizard
Before you create a workflow with the Workflow Wizard, verify that the folder contains a valid mapping for
the Session task.
Complete the following steps to build a workflow using the Workflow Wizard:
1. In the Workflow Manager, open the folder containing the mapping you want to use in the workflow.
2. Open the Workflow Designer.
3. Click Workflows > Wizard.
The Workflow Wizard appears.
4. Enter a name for the workflow.
The convention for naming workflows is wf_WorkflowName.
5. Enter a description for the workflow.
6. Select the Integration Service to run the workflow and click Next.
1. In the second step of the Workflow Wizard, select a valid mapping and click the right arrow button.
The Workflow Wizard creates a Session task in the right pane using the selected mapping and names it
s_MappingName by default.
2. You can select additional mappings to create more Session tasks in the workflow.
When you add multiple mappings to the list, the Workflow Wizard creates sequential sessions in the
order you add them.
3. Use the arrow buttons to change the session order.
4. Specify whether the session should be reusable.
When you create a reusable session, you can use the session in other workflows.
5. Specify how you want the Integration Service to run the workflow.
You can specify that the Integration Service runs sessions only if previous sessions complete, or you can
specify that the Integration Service always runs each session. When you select this option, it applies to
all sessions you create using the Workflow Wizard.
When you configure a task, you can configure the workflow to fail if the task fails. If you configure the
workflow to fail when a task fails, the Integration Service removes the workflow from the schedule, and you
must reschedule it. You can reschedule the workflow through the Workflow Manager or through pmcmd. If
you do not configure the workflow to fail when a task fails, the Integration Service reschedules the workflow.
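If the Integration Service removed a failed workflow from the schedule, you can reschedule it from the
command line with the pmcmd scheduleworkflow command. The names below are placeholders:

pmcmd scheduleworkflow -sv IS_Prod -d Domain_Main -u Administrator -p <password> -f SalesFolder wf_LoadOrders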
1. In the third step of the Workflow Wizard, configure the scheduling and run options.
2. Click Next.
The Workflow Wizard displays the settings for the workflow.
3. Verify the workflow settings, then click Finish. To edit settings, click Back.
The completed workflow opens in the Workflow Designer workspace. From the workspace, you can add
tasks, create concurrent sessions, add conditions to links, or change properties.
Workflow Reports (Deprecated)
An administrator uses the Administrator tool to create a Reporting and Dashboards Service and adds a
reporting source for the service. The reporting source must be the PowerCenter repository that contains the
workflows that you want to report on.
The Workflow Composite Report includes information about the components in a workflow.
Working with Worklets
A worklet is an object that represents a set of tasks you create to reuse a set of workflow logic in
multiple workflows. To run a worklet, include the worklet in a workflow. The workflow that contains the worklet is called the
parent workflow. When the Integration Service runs a worklet, it expands the worklet to run tasks and
evaluate links within the worklet. It writes information about worklet execution in the workflow log.
Suspending Worklets
When you choose Suspend on Error for the parent workflow, the Integration Service also suspends the
worklet if a task in the worklet fails. When a task in the worklet fails, the Integration Service stops executing
the failed task and other tasks in its path. If no other task is running in the worklet, the worklet status is
“Suspended.” If one or more tasks are still running in the worklet, the worklet status is “Suspending.”
Developing a Worklet
To develop a worklet, you must first create a worklet. After you create a worklet, configure worklet properties
and add tasks to the worklet. You can create reusable worklets in the Worklet Designer. You can also create
non-reusable worklets in the Workflow Designer as you develop the workflow.
Note: You can promote a non-reusable worklet to a reusable worklet by selecting the Make Reusable option in
the worklet properties in a non-versioned repository. In a versioned repository, the Make Reusable option is
unavailable. To rename a non-reusable worklet, open the worklet properties in the Workflow Designer.
In addition to general task settings, you can configure the following worklet properties:
• Worklet variables. Use worklet variables to reference values and record information. You use worklet
variables the same way you use workflow variables. You can assign a workflow variable to a worklet
variable to override its initial value.
Related Topics:
• “Metadata Extensions” on page 30
• “Working with the Event Task” on page 69
Nesting Worklets
You can nest a worklet within another worklet. When you run a workflow containing nested worklets, the
Integration Service runs the nested worklet from within the parent worklet. You can group several worklets
together by function or simplify the design of a complex workflow when you nest worklets.
You might choose to nest worklets to load data to fact and dimension tables. Create a nested worklet to load
fact and dimension data into a staging area. Then, create a nested worklet to load the fact and dimension
data from the staging area to the data warehouse.
You might choose to nest worklets to simplify the design of a complex workflow. Nest worklets that can be
grouped together within one worklet. To nest an existing reusable worklet, click Tasks > Insert Worklet. To
create a non-reusable nested worklet, click Tasks > Create, and select worklet.
Workflow Links
After you create links between tasks, you can create conditions for each link to determine the order of
operation in the workflow. If you do not specify conditions for each link, the Integration Service runs the next
task in the workflow or worklet by default.
Use predefined or user-defined workflow and worklet variables in the link condition. If the link condition
evaluates to True, the Integration Service runs the next task in the workflow or worklet. If the link condition
evaluates to False, the Integration Service does not run the next task.
You can view results of link evaluation during workflow runs in the workflow log file.
To specify a link condition:
1. In the Workflow Designer or Worklet Designer workspace, double-click the link you want to specify.
The Expression Editor appears.
2. In the Expression Editor, enter the link condition.
The Expression Editor provides predefined workflow and worklet variables, user-defined workflow and
worklet variables, variable functions, and boolean and arithmetic operators.
3. Validate the expression using the Validate button.
The Workflow Manager displays validation results in the Output window.
Tip: Drag the end point of a link to move it from one task to another without losing the link condition.
For example, suppose a workflow contains two sessions, s_STORES_CA and s_STORES_AZ, and you want
s_STORES_AZ to run only after s_STORES_CA loads successfully. To accomplish this, you can set the following
link condition between the sessions so that s_STORES_AZ runs only if the number of failed target rows for
s_STORES_CA is zero:
$s_STORES_CA.TgtFailedRows = 0
After you specify the link condition in the Expression Editor, the Workflow Manager validates the link
condition and displays it next to the link in the workflow or worklet.
To highlight a path in the workflow:
1. In the Workflow Designer or Worklet Designer workspace, right-click a task and choose Highlight Path.
2. Select Forward Path, Backward Path, or Both.
The Workflow Manager highlights all links in the branch you select.
To delete links:
1. In the Workflow Designer or Worklet Designer workspace, select all links you want to delete.
Tip: Drag the mouse to select multiple links, or Ctrl-click the tasks and links.
2. Click Edit > Delete Links.
The Workflow Manager removes all selected links.
Sessions
This chapter includes the following topics:
• Sessions Overview, 45
• Session Task, 45
• Editing a Session, 46
• Performance Details, 48
• Pre- and Post-Session Commands, 49
Sessions Overview
A session is a set of instructions that tells the Integration Service how and when to move data from sources
to targets. A session is a type of task, similar to other tasks available in the Workflow Manager. In the
Workflow Manager, you configure a session by creating a Session task. To run a session, you must first
create a workflow to contain the Session task.
When you create a Session task, enter general information such as the session name, session schedule, and
the Integration Service to run the session. You can select options to run pre-session shell commands, send
On-Success or On-Failure email, and use FTP to transfer source and target files.
Configure the session to override parameters established in the mapping, such as source and target location,
source and target type, error tracing levels, and transformation attributes. You can also configure the session
to collect performance details for the session and store them in the PowerCenter repository. You might view
performance details for a session to tune the session.
You can run as many sessions in a workflow as you need. You can run the Session tasks sequentially or
concurrently, depending on your requirements.
The Integration Service creates several files and in-memory caches depending on the transformations and
options used in the session.
Session Task
You create a Session task for each mapping that you want the Integration Service to run. The Integration
Service uses the instructions configured in the session to move data from sources to targets.
You can create a reusable Session task in the Task Developer. You can also create non-reusable Session
tasks in the Workflow Designer as you develop the workflow. After you create the session, you can edit the
session properties at any time.
Note: Before you create a Session task, you must configure the Workflow Manager to communicate with
databases and the Integration Service. You must assign appropriate permissions for any database, FTP, or
external loader connections you configure.
Editing a Session
After you create a session, you can edit it. For example, you might need to adjust the buffer and cache sizes,
modify the update strategy, or clear a variable value saved in the repository.
Double-click the Session task to open the session properties. The session has the following tabs, and each of
those tabs has multiple settings:
• General tab. Enter session name, mapping name, and description for the Session task, assign resources,
and configure additional task options.
• Properties tab. Enter session log information, test load settings, and performance configuration.
• Config Object tab. Enter advanced settings, log options, and error handling configuration.
• Mapping tab. Enter source and target information, override transformation properties, and configure the
session for partitioning.
• Components tab. Configure pre- or post-session shell commands and emails.
• Metadata Extensions tab. Configure metadata extension options.
You can edit session properties at any time. The repository updates the session properties immediately.
If the session is running when you edit the session, the repository updates the session when the session
completes. If the mapping changes, the Workflow Manager might issue a warning that the session is invalid.
The Workflow Manager then lets you continue editing the session properties. After you edit the session
properties, the Integration Service validates the session and reschedules the session.
Related Topics:
• “Session Validation” on page 195
• “Session Properties Reference” on page 259
Applying Attributes to All Instances
When you edit the session properties, you can apply source, target, and transformation settings to all
instances of the same type in the session. You can also apply settings to all partitions in a pipeline. You can
apply reader or writer settings, connection settings, and properties settings.
For example, you might need to change a relational connection from a test to a production database for all
the target instances in a session. On the Mapping tab, you can change the connection value for one target in
a session and apply the connection to the other relational target objects.
The following table shows the options you can use to apply attributes to objects in a session. The
available options depend on whether the setting is a reader or writer setting, a connection setting, or an
object property:

Reader/Writer - Apply Type to All Instances. Applies a reader or writer type to all instances of the same
object type in the session. For example, you can apply a relational reader type to all the other readers in
the session.

Reader/Writer - Apply Type to All Partitions. Applies a reader or writer type to all the partitions in a
pipeline. For example, if you have four partitions, you can change the writer type in one partition for a
target instance. Use this option to apply the change to the other three partitions.

Connections - Apply Connection Type. Applies the same type of connection to all instances. Connection
types are relational, FTP, queue, application, and external loader.

Connections - Apply Connection Value. Applies a connection value to all instances or partitions. The
connection value defines a specific connection that you can view in the connection browser. You can apply a
connection value that is valid for the existing connection type.

Connections - Apply Connection Attributes. Applies only the connection attribute values to all instances
or partitions. Each type of connection has different attributes. You can apply connection attributes
separately from connection values.

Connections - Apply Connection Data. Applies the connection value and its connection attributes to all the
other instances that have the same connection type. This option combines the Apply Connection Value and
Apply Connection Attributes options.

Connections - Apply All Connection Information. Applies the connection value and its attributes to all the
other instances, even if they do not have the same connection type. This option is similar to Apply
Connection Data, but it also lets you change the connection type.

Properties - Apply Attribute to all Instances. Applies an attribute value to all instances of the same
object type in the session. For example, if you have a relational target, you can choose to truncate a table
before you load data. You can apply the attribute value to all the relational targets in the session.

Properties - Apply Attribute to all Partitions. Applies an attribute value to all partitions in a pipeline.
For example, you can change the reject file name in one partition for a target instance, then apply the file
name change to the other partitions.
Applying Connection Settings
When you apply connection settings you can apply the connection type, connection value, and connection
attributes. You can only apply a connection value that is valid for a connection type unless you choose the
Apply All Connection Information option. For example, if a target instance uses an FTP connection, you can
only choose an FTP connection value to apply to it. The Apply All Connection Information option lets you
apply a new connection type, connection value, and connection attributes.
Performance Details
You can configure a session to collect performance details and store them in the PowerCenter repository.
Collect performance data for a session to view performance details while the session runs. Write
performance data for a session in the PowerCenter repository to store and view performance details for
previous session runs.
If you want to write performance data to the repository, complete the following tasks:
1. In the Workflow Manager, open the session properties and select the Properties tab.
2. Select Collect performance data to view performance details while the session runs.
3. Select Write Performance Data to Repository to store and view performance details for previous session
runs.
You must also configure the Integration Service to store the run-time information at the verbose level.
4. Click OK.
Using Pre- and Post-Session SQL Commands
The Integration Service runs pre-session SQL commands before it reads the source. It runs post-session SQL
commands after it writes to the target.
You can use parameters and variables in SQL executed against the source and target. Use any parameter or
variable type that you can define in the parameter file. You can enter a parameter or variable within the SQL
statement, or you can use a parameter or variable as the command. For example, you can use a session
parameter, $ParamMyPreSQL, as the source pre-session SQL command, and set $ParamMyPreSQL to the
SQL statement in the parameter file.
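For example, a parameter file might define $ParamMyPreSQL for the session as follows. The folder, workflow,
session, and table names are placeholders:

[SalesFolder.WF:wf_LoadOrders.ST:s_m_LoadOrders]
$ParamMyPreSQL=DELETE FROM ORDERS_STG WHERE LOAD_DATE < SYSDATE - 7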
Use the following guidelines when you enter pre- and post-session SQL commands:
• Use any command that is valid for the database type. However, the Integration Service does not allow
nested comments, even though the database might.
• Use a semicolon (;) to separate multiple statements. The Integration Service issues a commit after each
statement.
• The Integration Service ignores semicolons within /* ... */ comments.
• If you need to use a semicolon outside of comments, you can escape it with a backslash (\), as shown in
the example after this list.
• The Workflow Manager does not validate the SQL.
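For example, the following pre-session SQL command contains two statements separated by a semicolon, plus
an escaped semicolon inside a string literal. The staging and audit table names are illustrative:

TRUNCATE TABLE ORDERS_STG; INSERT INTO LOAD_AUDIT (NOTE) VALUES ('pre-SQL ran\; staging cleared')

The Integration Service issues a commit after the TRUNCATE statement and again after the INSERT statement,
but treats the escaped semicolon as literal text.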
Error Handling
You can configure error handling on the Config Object tab. You can choose to stop or continue the session if
the Integration Service encounters an error issuing the pre- or post- session SQL command.
Using Pre- and Post-Session Shell Commands
The Integration Service can perform shell commands at the beginning of the session or at the end of the
session:
• Pre-session command. The Integration Service performs pre-session shell commands at the beginning of
a session. You can configure a session to stop or continue if a pre-session shell command fails.
• Post-session success command. The Integration Service performs post-session success commands only
if the session completed successfully.
• Post-session failure command. The Integration Service performs post-session failure commands only if
the session failed to complete.
Use the following guidelines to call a shell command:
• Use any valid UNIX command or shell script for UNIX nodes, or any valid DOS or batch file for Windows
nodes.
• Configure the session to run the pre- or post-session shell commands.
The Workflow Manager provides a task called the Command task that lets you configure shell commands
anywhere in the workflow. You can choose a reusable Command task for the pre- or post-session shell
command. Or, you can create non-reusable shell commands for the pre- or post-session shell commands.
If you create a non-reusable pre- or post-session shell command, you can make it into a reusable Command
task.
The Workflow Manager lets you choose from the following options when you configure shell commands:
• Create non-reusable shell commands. Create a non-reusable set of shell commands for the session.
Other sessions in the folder cannot use this set of shell commands.
• Use an existing reusable Command task. Select an existing Command task to run as the pre- or post-
session shell command.
Configure pre- and post-session shell commands in the Components tab of the session properties.
1. In the Components tab of the session properties, select Non-reusable for pre- or post-session shell
command.
2. Click the Edit button in the Value field to open the Edit Pre- or Post-Session Command dialog box.
3. Enter a name for the command in the General tab.
4. If you want the Integration Service to perform the next command only if the previous command
completed successfully, select Fail Task if Any Command Fails in the Properties tab.
5. In the Commands tab, click the Add button to add shell commands.
Enter one command for each line.
6. Click OK.
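For example, in the Commands tab you might enter commands such as the following, one per line. The archive
directory is illustrative; $PMTargetFileDir is a service process variable that resolves to the target file
directory:

mkdir -p /data/archive/orders
cp $PMTargetFileDir/orders.out /data/archive/orders/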
To create a Command Task from non-reusable pre- or post-session shell commands, click the Edit button to
open the Edit dialog box for the shell commands. In the General tab, select the Make Reusable check box.
After you select the Make Reusable check box and click OK, a new Command task appears in the Tasks
folder in the Navigator window. Use this Command task in other workflows, just as you do with any other
reusable workflow tasks.
To use an existing reusable Command task as the pre- or post-session shell command:
1. In the Components tab of the session properties, click Reusable for the pre- or post-session shell
command.
2. Click the Edit button in the Value field to open the Task Browser dialog box.
3. Select the Command task you want to run as the pre- or post-session shell command.
4. Click the Override button in the Task Browser dialog box if you want to change the order of the
commands, or if you want to specify whether to run the next command when the previous command
fails.
Changes you make to the Command task from the session properties only apply to the session. In the
session properties, you cannot edit the commands in the Command task.
5. Click OK to select the Command task for the pre- or post-session shell command.
The name of the Command task you select appears in the Value field for the shell command.
Configure the session to stop or continue if a pre-session shell command fails in the Error Handling settings
on the Config Object tab.
Session Configuration Object
When you create a session, the Workflow Manager applies the default configuration object settings to the
Config Object tab of the session. You can also choose a configuration object to use for the session.
When you edit a session configuration object, each session that uses the session configuration object
inherits the changes. When you override the configuration object settings in the Session task, the session
configuration object does not inherit changes.
You can configure the following settings in a session configuration object:
• Advanced. Advanced settings allow you to configure constraint-based loading, lookup caches, and buffer
sizes.
• Log options. Log options allow you to configure how you want to save the session log. By default, the Log
Manager saves only the current session log.
• Error handling. Error Handling settings allow you to determine if the session fails or continues when it
encounters pre-session command errors, stored procedure errors, or a specified number of session
errors.
• Partitioning options. Partitioning options allow the Integration Service to determine the number of
partitions to create at run time.
• Session on grid. When Session on Grid is enabled, the Integration Service distributes session threads to
the nodes in a grid to increase performance and scalability.
Advanced Settings
Advanced settings allow you to configure constraint-based loading, lookup caches, and buffer sizes.
The following table describes the Advanced settings of the Config Object tab:

Constraint Based Load Ordering. The Integration Service loads targets based on primary key-foreign key
constraints where possible.

Cache Lookup() Function. If selected, the Integration Service caches PowerMart 3.5 LOOKUP functions in the
mapping, overriding mapping-level LOOKUP configurations. If not selected, the Integration Service performs
lookups on a row-by-row basis, unless otherwise specified in the mapping.

Default Buffer Block Size. Size of buffer blocks used to move data from sources to targets. By default,
this value is set to auto. You can specify auto or a numeric value. The default unit is bytes. Append KB,
MB, or GB to the value to specify other units. For example, 1048576, 1024KB, or 1MB.

Line Sequential Buffer Length. Number of bytes that the PowerCenter Integration Service reads for each
line. Increase this setting from the default of 1024 bytes if source flat file records are larger than 1024
bytes.

Maximum Partial Session Log Files. The maximum number of partial log files to save. Configure this option
with Session Log File Max Size or Session Log File Max Time Period. Default is one.

Maximum Memory Allowed for Auto Memory Attributes. Maximum memory allocated for automatic cache when you
configure the Integration Service to determine session cache size at run time. You enable automatic memory
settings by configuring a value for this attribute. The default unit is bytes. Append KB, MB, or GB to the
value to specify other units. For example, 1048576, 1024KB, or 1MB.

Maximum Percentage of Total Memory Allowed for Auto Memory Attributes. Maximum percentage of memory
allocated for automatic cache when you configure the Integration Service to determine session cache size at
run time.

Additional Concurrent Pipelines for Lookup Cache Creation. Restricts the number of pipelines that the
Integration Service can create concurrently to pre-build lookup caches. Configure this property when the
Pre-build Lookup Cache property is enabled for a session or transformation. When the Pre-build Lookup Cache
property is enabled, the Integration Service creates a lookup cache before the Lookup transformation
receives the data. If the session has multiple Lookup transformations, the Integration Service creates an
additional pipeline for each lookup cache that it builds. To configure the number of pipelines that the
Integration Service can create concurrently, select Auto or enter a numeric value:
- Auto. The Integration Service determines the number of pipelines it can create at run time.
- Numeric value. The Integration Service can create the specified number of pipelines to create lookup
caches.

Custom Properties. Configure custom properties of the Integration Service for the session. You can
override custom properties that the Integration Service uses after the DTM process has started. The
Integration Service also writes the override value of the property to the session log.

Pre-build Lookup Cache. Allows the Integration Service to build the lookup cache before the Lookup
transformation receives the data. The Integration Service can build multiple lookup cache files at the same
time to improve performance. You can configure this option in the mapping or the session. The Integration
Service uses the session-level setting if you configure the Lookup transformation option as Auto. Configure
one of the following options:
- Auto. The Integration Service uses the value configured in the session.
- Always allowed. The Integration Service can build the lookup cache before the Lookup transformation
receives the first source row. The Integration Service creates an additional pipeline to build the cache.
- Always disallowed. The Integration Service cannot build the lookup cache before the Lookup transformation
receives the first row.
You must also configure the number of pipelines that the Integration Service can build concurrently.
Configure the Additional Concurrent Pipelines for Lookup Cache Creation session property. The Integration
Service can pre-build the lookup cache if this property is greater than zero.

DateTime Format String. Date/time format defined in the session configuration object. The default format
specifies microseconds: MM/DD/YYYY HH24:MI:SS.US. You can specify seconds, milliseconds, microseconds, or
nanoseconds:
- MM/DD/YYYY HH24:MI:SS specifies seconds.
- MM/DD/YYYY HH24:MI:SS.MS specifies milliseconds.
- MM/DD/YYYY HH24:MI:SS.US specifies microseconds.
- MM/DD/YYYY HH24:MI:SS.NS specifies nanoseconds.

Pre 85 Timestamp Compatibility. Trims subseconds to maintain compatibility with versions prior to 8.5. The
Integration Service converts the Oracle Timestamp datatype to the Oracle Date datatype and trims subsecond
data for the following sources, targets, and transformations:
- Relational sources and targets
- XML sources and targets
- SQL transformation
- XML Generator transformation
- XML Parser transformation
Default is disabled.
The following table shows the Log Options settings of the Config Object tab:

Save Session Log By. Configures how the Log Manager saves session log files. If you select Save Session
Log by Timestamp, the Log Manager saves all session logs, appending a time stamp to each log. If you select
Save Session Log by Runs, the Log Manager saves a designated number of session logs. Configure the number of
sessions in the Save Session Log for These Runs option. You can also use the $PMSessionLogCount service
variable to save the configured number of session logs for the Integration Service.

Save Session Log for These Runs. Number of historical session logs you want the Log Manager to save. The
Log Manager saves the number of historical logs you specify, plus the most recent session log. When you
configure five runs, the Log Manager saves the most recent session log, plus historical logs 0-4. You can
configure up to 2,147,483,647 historical logs. If you configure zero logs, the Log Manager saves only the
most recent session log.

Session Log File Max Size. Maximum number of megabytes for a session log file. Configure a maximum size to
enable log file rollover. When the log file reaches the maximum size, the Integration Service creates
another log file. If you set the size to zero, the session log file size has no limit. Configure this option
for real-time sessions that generate large session logs. The Integration Service writes the session logs to
multiple files. Each file is a partial log file. Default is zero.

Session Log File Max Time Period. Maximum number of hours that the Integration Service writes to a session
log file. Configure the maximum period to enable log file rollover by time. When the period is over, the
Integration Service creates another log file. Configure this option for real-time sessions that might
generate large session logs. The Integration Service writes the session logs to multiple files. Each file is
a partial log file. Default is zero.

Maximum Partial Session Log Files. Maximum number of session log files to save. The Integration Service
overwrites the oldest partial log file if the number of log files has reached the limit. Configure this
option in conjunction with the maximum time period or maximum file size option. You must configure one of
these options to enable session log rollover. If you set the maximum number to 0, the number of session log
files is unlimited. Default is 1.

Writer Commit Statistics Log Frequency. Frequency with which the Integration Service writes commit
statistics in the session log. The Integration Service writes commit statistics to the session log after the
specified number of commits occurs. With the default of 1, the Integration Service writes commit statistics
after each commit.

Writer Commit Statistics Log Interval. Time interval, in minutes, at which the Integration Service writes
commit statistics to the session log.
Related Topics:
• “Session Logs” on page 255
The following table describes the Error Handling settings of the Config Object tab:

Stop On Errors. Indicates how many non-fatal errors the Integration Service can encounter before it stops
the session. Non-fatal errors include reader, writer, and DTM errors. Enter the number of non-fatal errors
you want to allow before stopping the session. The Integration Service maintains an independent error count
for each source, target, and transformation. If you specify 0, non-fatal errors do not cause the session to
stop. Optionally, use the $PMSessionErrorThreshold service variable to stop on the configured number of
errors for the Integration Service.

Override Tracing. Overrides tracing levels set at the transformation level. Selecting this option enables a
menu from which you choose a tracing level: None, Terse, Normal, Verbose Initialization, or Verbose Data.

On Stored Procedure Error. Required if the session uses pre- or post-session stored procedures. If you
select Stop Session, the Integration Service stops the session on errors executing a pre-session or
post-session stored procedure. If you select Continue Session, the Integration Service continues the session
regardless of errors executing pre-session or post-session stored procedures. By default, the Integration
Service stops the session on a stored procedure error and marks the session failed.

On Pre-Post SQL Error. Required if the session uses pre- or post-session SQL. If you select Stop Session,
the Integration Service stops the session on errors executing pre-session or post-session SQL. If you select
Continue, the Integration Service continues the session regardless of errors executing pre-session or
post-session SQL. By default, the Integration Service stops the session on a pre- or post-session SQL error
and marks the session failed.

Error Log Type. Specifies the type of error log to create. You can specify relational, file, or no log.
Default is none.
Note: You cannot log row errors from XML file sources. You can view the XML source errors in the session
log.

Error Log DB Connection. Specifies the database connection for a relational error log.

Error Log Table Name Prefix. Specifies the table name prefix for a relational error log. Oracle and Sybase
have a 30-character limit for table names. If a table name exceeds 30 characters, the session fails.

Error Log File Directory. Specifies the directory where errors are logged. By default, the error log file
directory is $PMBadFilesDir\.

Error Log File Name. Specifies the error log file name. By default, the error log file name is PMError.log.

Log Row Data. Specifies whether or not to log transformation row data. When you enable error logging, the
Integration Service logs transformation row data by default. If you disable this property, n/a or -1 appears
in transformation row data fields.

Log Source Row Data. Specifies whether or not to log source row data. By default, the check box is clear
and source row data is not logged.

Data Column Delimiter. Delimiter for string type source row data and transformation group row data. By
default, the Integration Service uses a pipe ( | ) delimiter. Verify that you do not use the same delimiter
for the row data as the error logging columns. If you use the same delimiter, you may find it difficult to
read the error log file.
The following table describes the Partitioning Options settings on the Config Object tab:
Dynamic Partitioning Configure dynamic partitioning using one of the following methods:
- Disabled. Do not use dynamic partitioning. Define the number of partitions on the Mapping
tab.
- Based on number of partitions. Sets the partitions to a number that you define in the
Number of Partitions attribute. Use the $DynamicPartitionCount session parameter, or
enter a number greater than 1.
- Based on number of nodes in grid. Sets the partitions to the number of nodes in the grid
running the session. If you configure this option for sessions that do not run on a grid, the
session runs in one partition and logs a message in the session log.
- Based on source partitioning. Determines the number of partitions using database
partition information. The number of partitions is the maximum of the number of partitions
at the source.
- Based on number of CPUs. Sets the number of partitions equal to the number of CPUs on
the node that prepares the session. If the session is configured to run on a grid, dynamic
partitioning sets the number of partitions equal to the number of CPUs on the node that
prepares the session multiplied by the number of nodes in the grid.
Default is disabled.
Number of Partitions Determines the number of partitions that the Integration Service creates when you configure
dynamic partitioning based on the number of partitions. Enter a value greater than 1 or use
the $DynamicPartitionCount session parameter.
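For example, if you configure dynamic partitioning based on the number of partitions, you can set the
partition count in a parameter file through the $DynamicPartitionCount session parameter. The folder,
workflow, and session names below are placeholders:

[SalesFolder.WF:wf_LoadOrders.ST:s_m_LoadOrders]
$DynamicPartitionCount=4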
The following table describes the Session on Grid setting on the Config Object tab:

Session on Grid. When enabled, the Integration Service distributes session threads to the nodes in a grid
to increase performance and scalability.

To create a session configuration object:
1. In the Workflow Manager, open a folder and click Tasks > Session Configuration.
The Session Configuration Browser appears.
2. Click New to create a new session configuration object.
3. Enter a name for the session configuration object.
4. On the Properties tab, configure the settings.
5. Click OK.
To use a session configuration object in a session:
1. In the Workflow Manager, open the session properties and click the Config Object tab.
2. Click the Open button in the Config Name field.
A list of session configuration objects appears.
3. Select the configuration object you want to use and click OK.
The settings associated with the configuration object appear on the Config Object tab.
4. Click OK.
Tasks
This chapter includes the following topics:
• Tasks Overview, 60
• Creating a Task, 61
• Configuring Tasks, 62
• Working with the Assignment Task, 64
• Command Task, 65
• Control Task, 66
• Working with the Event Task, 69
• Timer Task, 72
Tasks Overview
The Workflow Manager contains many types of tasks to help you build workflows and worklets. You can
create reusable tasks in the Task Developer. Or, create and add tasks in the Workflow or Worklet Designer as
you develop the workflow.
The following table describes some of the tasks you can create, the tools you can use to create them, and
whether they are reusable:

Command - Task Developer, Workflow Designer, Worklet Designer - Reusable - Specifies shell commands to run
during the workflow. You can choose to run the Command task if the previous task in the workflow completes.

Decision - Workflow Designer, Worklet Designer - Non-reusable - Specifies a condition to evaluate in the
workflow. Use the Decision task to create branches in a workflow.

Event-Raise - Workflow Designer, Worklet Designer - Non-reusable - Represents the location of a
user-defined event. The Event-Raise task triggers the user-defined event when the Integration Service runs
the Event-Raise task.

Timer - Workflow Designer, Worklet Designer - Non-reusable - Waits for a specified period of time to run
the next task.
The Workflow Manager validates task attributes and links. If a task is invalid, the workflow becomes
invalid. Workflows containing invalid sessions may still be valid.
Creating a Task
You can create tasks in the Task Developer, or you can create them in the Workflow Designer or the Worklet
Designer as you develop the workflow or worklet. Tasks you create in the Task Developer are reusable. Tasks
you create in the Workflow Designer and Worklet Designer are non-reusable by default.
Tasks you create in the Workflow Designer or Worklet Designer are non-reusable. Edit the General tab of the
task properties to promote a non-reusable task to a reusable task.
Configuring Tasks
After you create the task, you can configure general task options on the General tab. For each task instance
in the workflow, you can configure how the Integration Service runs the task and the other objects associated
with the selected task. You can also disable the task so that you can run the rest of the workflow without
the selected task.
When you use a task in the workflow, you can edit the task in the Workflow Designer and configure the
following task options in the General tab:
• Fail parent if this task fails. Choose to fail the workflow or worklet containing the task if the task fails.
• Fail parent if this task does not run. Choose to fail the workflow or worklet containing the task if the task
does not run.
• Disable this task. Choose to disable the task so you can run the rest of the workflow without the task.
• Treat input link as AND or OR. Choose to have the Integration Service run the task when all or one of the
input link conditions evaluates to True.
You can create any task as non-reusable or reusable. Tasks you create in the Task Developer are reusable.
Tasks you create in the Workflow Designer are non-reusable by default. However, you can edit the general
properties of a task to promote it to a reusable task.
The Workflow Manager stores each reusable task separate from the workflows that use the task. You can
view a list of reusable tasks in the Tasks node in the Navigator window. You can see a list of all reusable
Session tasks in the Sessions node in the Navigator window.
To promote a non-reusable workflow task:
1. In the Workflow Designer, double-click the task you want to make reusable.
2. In the General tab of the Edit Task dialog box, select the Make Reusable option.
3. When prompted whether you are sure you want to promote the task, click Yes.
4. Click OK.
The newly promoted task appears in the list of reusable tasks in the Tasks node in the Navigator
window.
You can edit the task instance in the Workflow Designer. Changes you make in the task instance exist only in
the workflow. The task definition remains unchanged in the Task Developer.
When you make changes to a reusable task definition in the Task Developer, the changes are reflected in the
instance of the task in the workflow if you have not edited the instance.
To set the type of input links, double-click the task to open the Edit Tasks dialog box. Select AND or OR for
the input link type.
Disabling Tasks
In the Workflow Designer, you can disable a workflow task so that the Integration Service runs the workflow
without the disabled task. The status of a disabled task is DISABLED. Disable a task in the workflow by
selecting the Disable This Task option in the Edit Tasks dialog box.
To fail the parent workflow or worklet if the task fails, double-click the task and select the Fail Parent If This
Task Fails option in the General tab. When you select this option and a task fails, it does not prevent the other
tasks in the workflow or worklet from running. Instead, the Integration Service marks the status of the
workflow or worklet as failed. If you have a session nested within multiple worklets, you must select the Fail
Parent If This Task Fails option for each worklet instance to see the failure at the workflow level.
To fail the parent workflow or worklet if the task does not run, double-click the task and select the Fail Parent
If This Task Does Not Run option in the General tab. When you choose this option, the Integration Service
fails the parent workflow if a task did not run.
Note: The Integration Service does not fail the parent workflow if you disable a task.
Command Task
You can specify one or more shell commands to run during the workflow with the Command task. For
example, you can specify shell commands in the Command task to delete reject files, copy a file, or archive
target files.
• Standalone Command task. Use a Command task anywhere in the workflow or worklet to run shell
commands.
• Pre- and post-session shell command. You can call a Command task as the pre- or post-session shell
command for a Session task.
Use any valid UNIX command or shell script for UNIX servers, or any valid DOS or batch file for Windows
servers. For example, you might use a shell command to copy a file from one directory to another. For a
Windows server, you would use the following shell command to copy the sales_adj file from the source
directory, L, to the target, H:
copy L:\sales\sales_adj H:\marketing\
For a UNIX server, you would use the following command to perform a similar operation:
cp sales/sales_adj marketing/
Each shell command runs in the same environment as the Integration Service. Environment settings in one
shell command script do not carry over to other scripts. To run all shell commands in the same environment,
call a single shell script that invokes other scripts.
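For example, the following minimal wrapper script, assuming the two scripts exist at the paths shown, sets
the environment once and then runs each script in that environment:

#!/bin/sh
# Environment settings made here apply to every script this wrapper invokes.
export TMPDIR=/data/tmp
/scripts/extract_cleanup.sh
/scripts/archive_targets.sh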
Use parameters and variables in standalone Command tasks and pre- and post-session shell commands as
follows:
• Standalone Command tasks. You can use service, service process, workflow, and worklet variables in
standalone Command tasks. You cannot use session parameters, mapping parameters, or mapping
variables in standalone Command tasks. The Integration Service does not expand these types of
parameters and variables in standalone Command tasks.
• Pre- and post-session shell commands. You can use any parameter or variable type that you can define in
the parameter file.
Assigning Resources
You can assign resources to Command task instances in the Worklet or Workflow Designer. You might want
to assign resources to a Command task if you assign the workflow to an Integration Service associated with
a grid. When you assign a resource to a Command task and the Integration Service is configured to check
resources, the Load Balancer dispatches the task to a node that has the resource available. A task fails if the
Load Balancer cannot find a node where the required resource is available.
To create a Command task:
1. In the Workflow Designer or the Task Developer, click Tasks > Create.
2. Select Command Task for the task type.
3. Enter a name for the Command task. Click Create. Then click Done.
4. Double-click the Command task in the workspace to open the Edit Tasks dialog box.
5. In the Commands tab, click the Add button to add a command.
6. In the Name field, enter a name for the new command.
7. In the Command field, click the Edit button to open the Command Editor.
8. Enter the command you want to run. Enter one command in the Command Editor. You can use service,
service process, workflow, and worklet variables in the command.
9. Click OK to close the Command Editor.
10. Repeat steps 5 to 9 to add more commands to the task.
11. Optionally, click the General tab in the Edit Tasks dialog to assign resources to the Command task.
12. Click OK.
If you specify non-reusable shell commands for a session, you can promote the non-reusable shell
commands to a reusable Command task.
You can choose to run a command only if the previous command completed successfully. Or, you can
choose to run all commands in the Command task, regardless of the result of the previous command. If you
configure multiple commands in a Command task to run on UNIX, each command runs in a separate shell.
If you choose to run a command only if the previous command completes successfully, the Integration
Service stops running the rest of the commands and fails the task when one of the commands in the
Command task fails. If you do not choose this option, the Integration Service runs all the commands in the
Command task and treats the task as completed, even if a command fails. If you want the Integration Service
to perform the next command only if the previous command completes successfully, select Fail Task if Any
Command Fails in the Properties tab of the Command task.
You can choose a recovery strategy for the task. The recovery strategy determines how the Integration
Service recovers the task when you configure workflow recovery and the task fails. You can configure the
task to restart or you can configure the task to fail and continue running the workflow.
Control Task
Use the Control task to stop, abort, or fail the top-level workflow or the parent workflow based on an input link
condition. A parent workflow or worklet is the workflow or worklet that contains the Control task.
The following table describes the options you can configure in the Control task:
Fail Me Marks the Control task as “Failed.” The Integration Service fails the Control task if
you choose this option. If you choose Fail Me in the Properties tab and choose Fail
Parent If This Task Fails in the General tab, the Integration Service fails the parent
workflow.
Fail Parent Marks the status of the workflow or worklet that contains the Control task as failed
after the workflow or worklet completes.
Stop Parent Stops the workflow or worklet that contains the Control task.
Abort Parent Aborts the workflow or worklet that contains the Control task.
Working with the Decision Task
You can specify one decision condition per Decision task. After the Integration Service evaluates the Decision
task, use the predefined condition variable in other expressions in the workflow to help you develop the
workflow.
Depending on the workflow, you might use link conditions instead of a Decision task. However, the Decision
task simplifies the workflow. If you do not specify a condition in the Decision task, the Integration Service
evaluates the Decision task to True.
Example
For example, you have a Command task that depends on the status of the three sessions in the workflow.
You want the Integration Service to run the Command task when any of the three sessions fails. To
accomplish this, use a Decision task with the following decision condition:
$Q1_session.status = FAILED OR $Q2_session.status = FAILED OR $Q3_session.status = FAILED
You can then use the predefined condition variable in the input link condition of the Command task.
Configure the input link with the following link condition:
$Decision.condition = True
The following figure shows a sample workflow using a Decision task:
You can configure the same logic in the workflow without the Decision task. Without the Decision task, you
need to use three link conditions and treat the input links to the Command task as OR links.
You can further expand the workflow. The Integration Service runs the Command task if any of the three
Session tasks fails. Suppose now you want the Integration Service to also run an Email task if all three
Session tasks succeed. To do this, add an Email task and use the decision condition variable in the link
condition.
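For example, with the decision condition shown above, the link to the Email task can use the negation of
the predefined condition variable, so the email is sent only when none of the sessions failed:

$Decision.condition = False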
The following figure shows the expanded sample workflow using a Decision task:
5. Click the Open button in the Value field to open the Expression Editor.
6. In the Expression Editor, enter the condition you want the Integration Service to evaluate.
Validate the expression before you close the Expression Editor.
7. Click OK.
Working with the Event Task
You can use the Event-Raise and Event-Wait tasks to define events in the workflow:
• Event-Raise task. The Event-Raise task represents a user-defined event. When the Integration Service runs the
Event-Raise task, the Event-Raise task triggers the event. Use the Event-Raise task with the Event-Wait
task to define events.
• Event-Wait task. The Event-Wait task waits for an event to occur. Once the event triggers, the Integration
Service continues executing the rest of the workflow.
To coordinate the execution of the workflow, you may specify the following types of events for the Event-Wait
and Event-Raise tasks:
• Predefined event. A predefined event is a file-watch event. For predefined events, use an Event-Wait task
to instruct the Integration Service to wait for the specified indicator file to appear before continuing with
the rest of the workflow. When the Integration Service locates the indicator file, it starts the next task in
the workflow.
• User-defined event. A user-defined event is a sequence of tasks in the workflow. Use an Event-Raise task
to specify the location of the user-defined event in the workflow. A user-defined event is the sequence of
tasks in the branch from the Start task leading to the Event-Raise task.
When all the tasks in the branch from the Start task to the Event-Raise task complete, the Event-Raise task
triggers the event. The Event-Wait task waits for the Event-Raise task to trigger the event before
continuing with the rest of the tasks in its branch.
Related Topics:
• “Configuring Worklet Properties” on page 41
• “Metadata Extensions” on page 30
Event-Raise Tasks
The Event-Raise task represents the location of a user-defined event. A user-defined event is the sequence of
tasks in the branch from the Start task to the Event-Raise task. When the Integration Service runs the Event-
Raise task, the Event-Raise task triggers the user-defined event.
To use an Event-Raise task, you must first declare the user-defined event. Then, create an Event-Raise task in
the workflow to represent the location of the user-defined event you just declared. In the Event-Raise task
properties, specify the name of a user-defined event.
To declare a user-defined event:
1. In the Workflow Designer, click Workflows > Edit to open the workflow properties.
2. Select the Events tab in the Edit Workflow dialog box.
3. Click Add to add an event name.
Event names are not case sensitive.
4. Click OK.
To create an Event-Raise task and trigger the user-defined event:
1. In the Workflow Designer workspace, create an Event-Raise task and place it in the workflow to represent
the user-defined event you want to trigger.
A user-defined event is the sequence of tasks in the branch from the Start task to the Event-Raise task.
2. Double-click the Event-Raise task to open it.
3. On the Properties tab, click the Open button in the Value field to open the Events Browser for user-
defined events.
4. Choose an event in the Events Browser.
5. Click OK twice.
Event-Wait Tasks
The Event-Wait task waits for a predefined event or a user-defined event. A predefined event is a file-watch
event. When you use the Event-Wait task to wait for a predefined event, you specify an indicator file for the
Integration Service to watch. The Integration Service waits for the indicator file to appear. Once the indicator
file appears, the Integration Service continues running tasks after the Event-Wait task.
You can assign resources to Event-Wait tasks that wait for predefined events. You may want to assign a
resource to a predefined Event-Wait task if you are running on a grid and the indicator file appears on a
specific node or in a specific directory. When you assign a resource to a predefined Event-Wait task and the
Integration Service is configured to check resources, the Load Balancer distributes the task to a node where
the required resource is available.
Note: If you use the Event-Raise task to trigger the event when you wait for a predefined event, you may not
be able to successfully recover the workflow.
You can also use the Event-Wait task to wait for a user-defined event. To use the Event-Wait task for a user-
defined event, specify the name of the user-defined event in the Event-Wait task properties. The Integration
Service waits for the Event-Raise task to trigger the user-defined event. Once the user-defined event is
triggered, the Integration Service continues running tasks after the Event-Wait task.
To wait for a user-defined event:
1. In the workflow, create an Event-Wait task and double-click the Event-Wait task to open it.
2. In the Events tab of the task, select User-Defined.
3. Click the Event button to open the Events Browser dialog box.
4. Select a user-defined event for the Integration Service to wait for.
5. Click OK twice.
When you specify the indicator file in the Event-Wait task, enter the directory in which the file appears and the
name of the indicator file. You must provide the absolute path for the file. If you specify the file name and not
the directory, the Integration Service looks for the indicator file in the following directory:
• On Windows, the Integration Service looks for the file in the system directory. For example, on Windows
2000, the system directory is c:\winnt\system32.
• On UNIX, the Integration Service looks for the indicator file in the current working directory for the
Integration Service process. On UNIX this directory is /server/bin.
You can enter the actual name of the file or use process variables to specify the location of the file. You can also use user-defined workflow and worklet variables to specify the file name and location. For example, create a workflow variable, $$MyFileWatchFile, for the indicator file name and location, and set $$MyFileWatchFile to the file name and location in the parameter file.
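For example, a parameter file entry for this variable might look like the following sketch, where the folder name, workflow name, and file path are hypothetical:
[Production.WF:wf_FileWatch]
$$MyFileWatchFile=/data/indicators/orders_loaded.done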
The Integration Service writes the time the file appears in the workflow log.
Note: Do not use a source or target file name as the indicator file name because you may accidentally delete
a source or target file. Or, the Integration Service may try to delete the file before the session finishes writing
to the target.
When you select Enable Past Events, the Integration Service continues executing the next tasks if the event
already occurred.
Select the Enable Past Events option in the Properties tab of the Event-Wait task.
Timer Task
You can specify the period of time to wait before the Integration Service runs the next task in the workflow
with the Timer task. You can choose to start the next task in the workflow at a specified time and date. You
can also choose to wait a period of time after the start time of another task, workflow, or worklet before
starting the next task.
• Absolute time. You specify the time that the Integration Service starts running the next task in the
workflow. You may specify the date and time, or you can choose a user-defined workflow variable to
specify the time.
• Relative time. You instruct the Integration Service to wait for a specified period of time after the Timer
task, the parent workflow, or the top-level workflow starts.
For example, a workflow contains two sessions. You want the Integration Service to wait 10 minutes after the
first session completes before it runs the second session. Use a Timer task after the first session. In the
Relative Time setting of the Timer task, specify ten minutes from the start time of the Timer task. Use a
Timer task anywhere in the workflow after the Start task.
The following table describes the attributes you configure in the Timer task:
• Absolute Time: Specify the exact time to start. The Integration Service starts the next task in the workflow at the date and time you specify.
• Absolute Time: Use this workflow date-time variable to calculate the wait. Specify a user-defined date-time workflow variable. The Integration Service starts the next task in the workflow at the time you choose. The Workflow Manager verifies that the variable you specify has the Date/Time datatype. If the variable precision includes subseconds, the Integration Service ignores the subsecond portion of the time value. The Timer task fails if the date-time workflow variable evaluates to NULL.
• Relative time: Start after. Specify the period of time the Integration Service waits to start executing the next task in the workflow.
• Relative time: from the start time of this task. Select this option to wait a specified period of time after the start time of the Timer task to run the next task.
• Relative time: from the start time of the parent workflow/worklet. Select this option to wait a specified period of time after the start time of the parent workflow or worklet to run the next task.
• Relative time: from the start time of the top-level workflow. Select this option to wait a specified period of time after the start time of the top-level workflow to run the next task.
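For example, to supply the date-time variable for the Absolute Time setting through a parameter file, an entry might look like the following sketch. The folder, workflow, and variable names are hypothetical, and the value assumes the default date format of MM/DD/YYYY HH24:MI:SS:
[Production.WF:wf_Nightly]
$$StartDateTime=01/15/2024 02:00:00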
Chapter 6
Sources
This chapter includes the following topics:
• Sources Overview, 74
• Configuring Sources in a Session, 75
• Working with Relational Sources, 76
• Working with File Sources, 78
• Integration Service Handling for File Sources, 83
• Working with XML Sources, 85
• Using a File List, 87
Sources Overview
In the Workflow Manager, you can create sessions with the following sources:
• Relational. You can extract data from any relational database that the Integration Service can connect to.
When extracting data from relational sources and Application sources, you must configure the database
connection to the data source prior to configuring the session.
• File. You can create a session to extract data from a flat file, COBOL, or XML source. Use an operating
system command to generate source data for a flat file or COBOL source or generate a file list.
If you use a flat file or XML source, the Integration Service can extract data from any local directory or FTP
connection for the source file. If the file source requires an FTP connection, you need to configure the FTP
connection to the host machine before you create the session.
• Heterogeneous. You can extract data from multiple sources in the same session. You can extract from
multiple relational sources, such as Oracle and Microsoft SQL Server. Or, you can extract from multiple
source types, such as relational and flat file. When you configure a session with heterogeneous sources,
configure each source instance separately.
Globalization Features
You can choose a code page that you want the Integration Service to use for relational sources and flat files.
You specify code pages for relational sources when you configure database connections in the Workflow
Manager. You can set the code page for file sources in the session properties.
Source Connections
Before you can extract data from a source, you must configure the connection properties the Integration
Service uses to connect to the source file or database. You can configure source database and FTP
connections in the Workflow Manager.
Partitioning Sources
You can create multiple partitions for relational, Application, and file sources. For relational or Application
sources, the Integration Service creates a separate connection to the source database for each partition you
set in the session properties. For file sources, you can configure the session to read the source with one
thread or multiple threads.
The Sources node lists the sources used in the session and displays their settings. To view and configure
settings for a source, select the source from the list. You can configure the following settings for a source:
• Readers
• Connections
• Properties
Configuring Readers
Click the Readers settings on the Sources node to view the reader that the Integration Service uses with each source instance. The Workflow Manager specifies the necessary reader for each source instance.
Configuring Connections
Click the Connections settings on the Sources node to define source connection information. For relational
sources, choose a configured database connection in the Value column for each relational source instance.
By default, the Workflow Manager displays the source type for relational sources.
For flat file and XML sources, choose one of the following source connection types for each source instance:
• FTP. To read data from a flat file or XML source using FTP, you must specify an FTP connection when you configure source options. You must define the FTP connection in the Workflow Manager prior to configuring the session.
• None. Choose None to read from a local flat file or XML file.
Configuring Properties
Click the Properties settings in the Sources node to define source property information. The Workflow
Manager displays properties, such as source file name and location for flat file, COBOL, and XML source file
types. You do not need to define any properties on the Properties settings for relational sources.
Working with Relational Sources

When you configure a session to read data from a relational source, you can configure the following properties for the source:
• Source database connection. Select the database connection for each relational source.
• Treat source rows as. Define how the Integration Service treats each source row as it reads it from the source table.
• Override SQL query. You can override the default SQL query to extract source data.
• Table owner name. Define the table owner name for each relational source.
• Source table name. You can override the source table name for each relational source.
On the Connections settings in the Sources node, choose the database connection. You can select a
connection object, use a connection variable, or use a session parameter to define the connection value in a
parameter file.
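For example, you might enter the session parameter $DBConnectionSource as the connection value and define it in the parameter file. The folder, workflow, session, and connection names in this sketch are hypothetical:
[Production.WF:wf_Load.ST:s_m_Load_Customers]
$DBConnectionSource=Oracle_DW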
The following table describes the options you can choose for the Treat Source Rows As property:
• Insert. Integration Service marks all rows to insert into the target.
• Delete. Integration Service marks all rows to delete from the target.
• Update. Integration Service marks all rows to update the target. You can further define the update operation in the target options.
• Data Driven. Integration Service uses the Update Strategy transformations in the mapping to determine the operation on a row-by-row basis. You define the update operation in the target options. If the mapping contains an Update Strategy transformation, this option defaults to Data Driven. You can also use this option when the mapping contains Custom transformations configured to set the update strategy.
After you determine how to treat all rows in the session, you also need to set update strategy options for
individual targets.
The Workflow Manager does not validate the SQL override. Errors in the override, such as fields with incompatible datatypes, unknown fields, or typing mistakes, can cause data errors and session failure.
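For example, the following override, with hypothetical table and column names, would be accepted by the Workflow Manager even if CUSTOMER_NAME were misspelled; the error would surface only when the session runs:
SELECT CUSTOMER_ID, CUSTOMER_NAME, REGION FROM CUSTOMERS WHERE REGION = 'WEST'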
Specify the table owner name in the Owner Name field in the Properties settings on the Mapping tab.
You can use a parameter or variable as the table owner name. Use any parameter or variable type that you can define in the parameter file. For example, you can use a session parameter, $ParamMyTableOwner, as the table owner name, and set $ParamMyTableOwner to the table owner name in the parameter file.
Note: If you override the source table name on the Properties tab of the source instance, and you override the
source table name using an SQL query, the Integration Service uses the source table name defined in the SQL
query.
Working with File Sources

You can create a session to extract data from flat file, COBOL, or XML sources. When you create a session to read data from a file, you can configure the following information in the session properties:
• Source properties. You can define source properties on the Properties settings in the Sources node, such as source file options.
• Flat file properties. You can edit fixed-width and delimited source file properties.
• Line sequential buffer length. You can change the buffer length for flat files on the Advanced settings on the Config Object tab.
• Treat source rows as. You can define how the Integration Service treats each source row as it reads it from the source.
The following table describes the properties you define for flat file source definitions:
• Input Type. Type of source input. You can choose the following types of source input:
- File. For flat file, COBOL, or XML sources.
- Command. For source data or a file list generated by a command. You cannot use a command to generate XML source data.
• Source File Directory. Directory name of the flat file source. By default, the Integration Service looks in the service process variable directory, $PMSourceFileDir, for file sources. If you specify both the directory and file name in the Source Filename field, clear this field. The Integration Service concatenates this field with the Source Filename field when it runs the session. You can also use the $InputFileName session parameter to specify the file location.
• Source Filename. File name, or file name and path of the flat file source. Optionally, use the $InputFileName session parameter for the file name. The Integration Service concatenates this field with the Source File Directory field when it runs the session. For example, if you have “C:\data\” in the Source File Directory field, then enter “filename.dat” in the Source Filename field. When the Integration Service begins the session, it looks for “C:\data\filename.dat”. By default, the Workflow Manager enters the file name configured in the source definition.
• Source File Type. Indicates whether the source file contains the source data, or whether it contains a list of files with the same file properties. You can choose the following source file types:
- Direct. For source files that contain the source data.
- Indirect. For source files that contain a list of files. When you select Indirect, the Integration Service finds the file list and reads each listed file when it runs the session.
• Command Type. Type of source data the command generates. You can choose the following command types:
- Command generating data. For commands that generate source data input rows.
- Command generating file list. For commands that generate a file list.
• Set File Properties link. Overrides source file properties. By default, the Workflow Manager displays file properties as configured in the source definition.
• Truncate string null. Strips the first null character and all characters after the first null character from string values. Enable this option for delimited flat files that contain null characters in strings. If you do not enable this option, the PowerCenter Integration Service generates a row error for any row that contains null characters in a string. Default is disabled.
For example, to uncompress a data file and use the uncompressed data as the source data input rows, use
the following command:
uncompress -c $PMSourceFileDir/myCompressedFile.Z
The command uncompresses the file and sends the standard output of the command to the flat file reader.
The flat file reader reads the standard output of the command as the flat file source data.
For example, to use a directory listing as a file list, use the following command:
cd $PMSourceFileDir; ls -1 sales-records-Sep-*-2005.dat
The command generates a file list from the source file directory listing. When the session runs, the flat file
reader reads each file as it reads the file names from the command.
To use the output of a command as a file list, select Command as the Input Type, Command generating file
list as the Command Type, and enter a command for the Command property.
Click Set File Properties to open the Flat Files dialog box. To edit the fixed-width properties, select Fixed
Width and click Advanced. The Fixed Width Properties dialog box appears. By default, the Workflow Manager
displays file properties as configured in the mapping. Edit these settings to override those configured in the
source definition.
The following table describes options you can define in the Fixed Width Properties dialog box for file sources:
• Null Character (Text/Binary). Indicates the character representing a null value in the file. This can be any valid character in the file code page, or any binary value from 0 to 255.
• Repeat Null Character. If selected, the Integration Service reads repeat null characters in a single field as a single null value. If you do not select this option, the Integration Service reads a single null character at the beginning of a field as a null field. Important: For multibyte code pages, specify a single-byte null character if you use repeating non-binary null characters. This ensures that repeating null characters fit into the column.
• Code Page. Code page of the fixed-width file. Select a code page or a variable:
- Code page. Select the code page.
- Use Variable. Enter a user-defined workflow or worklet variable or the session parameter $ParamName, and define the code page in the parameter file. Use the code page name.
Default is the PowerCenter Client code page.
• Number of Initial Rows to Skip. Integration Service skips the specified number of rows before reading the file. Use this option to skip header rows. One row may contain multiple records. If you select the Line Sequential File Format option, the Integration Service ignores this option.
• Number of Bytes to Skip Between Records. Integration Service skips the specified number of bytes between records. For example, you have an ASCII file on Windows with one record on each line, and a carriage return and line feed appear at the end of each line. If you want the Integration Service to skip these two single-byte characters, enter 2. If you have an ASCII file on UNIX with one record for each line, ending in a carriage return, skip the single character by entering 1.
• Strip Trailing Blanks. If selected, the Integration Service strips trailing blanks from string values.
• Line Sequential File Format. Select this option if the file uses a carriage return at the end of each record, shortening the final column.
To edit the delimited properties, select Delimited and click Advanced. The Delimited File Properties dialog
box appears. By default, the Workflow Manager displays file properties as configured in the mapping. Edit
these settings to override those configured in the source definition.
The following table describes options you can define in the Delimited File Properties dialog box for file sources:
• Column Delimiters. One or more characters used to separate columns of data. Delimiters can be either printable or single-byte unprintable characters and must be different from the escape character and the quote character. You can enter a single-byte unprintable character by browsing the delimiter list in the Delimiters dialog box. You cannot select unprintable multibyte characters as delimiters. You cannot select the NULL character as the column delimiter for a flat file source. Maximum number of delimiters is 80.
• Treat Consecutive Delimiters as One. By default, the Integration Service treats multiple delimiters separately. If selected, the Integration Service reads any number of consecutive delimiter characters as one. For example, a source file uses a comma as the delimiter character and contains the following record: 56, , , Jane Doe. By default, the Integration Service reads that record as four columns separated by three delimiters: 56, NULL, NULL, Jane Doe. If you select this option, the Integration Service reads the record as two columns separated by one delimiter: 56, Jane Doe.
• Treat Multiple Delimiters as AND. If selected, the Integration Service treats a specified set of delimiters as one. For example, a source file contains the following record: abc~def|ghi~|~|jkl|~mno. By default, the Integration Service reads the record as nine columns separated by eight delimiters: abc, def, ghi, NULL, NULL, NULL, jkl, NULL, mno. If you select this option and specify the delimiter as ( ~ | ), the Integration Service reads the record as three columns separated by two delimiters: abc~def|ghi, NULL, jkl|~mno.
• Optional Quotes. Select No Quotes, Single Quote, or Double Quotes. If you select a quote character, the Integration Service ignores delimiter characters within the quote characters. Therefore, the Integration Service uses quote characters to escape the delimiter. For example, a source file uses a comma as a delimiter and contains the following row: 342-3849, ‘Smith, Jenna’, ‘Rockville, MD’, 6. If you select the optional single quote character, the Integration Service ignores the commas within the quotes and reads the row as four fields. If you do not select the optional single quote, the Integration Service reads six separate fields. When the Integration Service reads two optional quote characters within a quoted string, it treats them as one quote character. For example, the Integration Service reads the following quoted string as I’m going tomorrow: 2353, ‘I’’m going tomorrow’, MD. Additionally, if you select an optional quote character, the Integration Service reads a string as a quoted string if the quote character is the first character of the field. Note: You can improve session performance if the source file does not contain quotes or escape characters.
• Code Page. Code page of the delimited file. Select a code page or a variable:
- Code page. Select the code page.
- Use Variable. Enter a user-defined workflow or worklet variable or the session parameter $ParamName, and define the code page in the parameter file. Use the code page name.
Default is the PowerCenter Client code page.
• Row Delimiter. Specify a line break character. Select from the list or enter a character. Preface an octal code with a backslash (\). To use a single character, enter the character. The Integration Service uses only the first character when the entry is not preceded by a backslash. The character must be a single-byte character, and no other character in the code page can contain that byte. Default is line-feed, \012 LF (\n).
• Escape Character. Character immediately preceding a delimiter character embedded in an unquoted string, or immediately preceding the quote character in a quoted string. When you specify an escape character, the Integration Service reads the delimiter character as a regular character (called escaping the delimiter or quote character). Note: You can improve session performance for mappings containing Sequence Generator transformations if the source file does not contain quotes or escape characters.
• Remove Escape Character From Data. This option is selected by default. Clear this option to include the escape character in the output string.
• Number of Initial Rows to Skip. Integration Service skips the specified number of rows before reading the file. Use this option to skip title or header rows in the file.
Integration Service Handling for File Sources
When you configure a session with file sources, take the following features into account when you create the mapping:
• Character set
• Multibyte character error handling
• Null character handling
• Row length handling for fixed-width flat files
• Numeric data handling
• Tab handling
Character Set
You can configure the Integration Service to run sessions in either ASCII or Unicode data movement mode.
The following table describes source file formats supported by each data movement mode in PowerCenter:
• EBCDIC-based SBCS. Supported in Unicode mode. Not supported in ASCII mode; the Integration Service terminates the session.
• EBCDIC-based MBCS. Supported in Unicode mode. Not supported in ASCII mode; the Integration Service terminates the session.
If you configure a session to run in ASCII data movement mode, delimiters, escape characters, and null
characters must be valid in the ISO Western European Latin 1 code page. Any 8-bit characters you specified
in previous versions of PowerCenter are still valid. In Unicode data movement mode, delimiters, escape
characters, and null characters must be valid in the specified code page of the flat file.
Multibyte Character Error Handling

When you import a fixed-width flat file, you can create, move, or delete column breaks using the Flat File Wizard. Incorrect positioning of column breaks can create alignment errors when you run a session containing multibyte characters. The Integration Service handles alignment errors in fixed-width flat files according to the following guidelines:
• Non-line sequential file. The Integration Service skips rows containing misaligned data and resumes
reading the next row. The skipped row appears in the session log with a corresponding error message. If
an alignment error occurs at the end of a row, the Integration Service skips both the current row and the
next row, and writes them to the session log.
• Line sequential file. The Integration Service skips rows containing misaligned data and resumes reading
the next row. The skipped row appears in the session log with a corresponding error message.
• Reader error threshold. You can configure a session to stop after a specified number of non-fatal errors.
A row containing an alignment error increases the error count by 1. The session stops if the number of
rows containing errors reaches the threshold set in the session properties. Errors and corresponding error
messages appear in the session log file.
Fixed-width COBOL sources are always byte-oriented and can be line sequential. The Integration Service
handles COBOL files according to the following guidelines:
• Line sequential files. The Integration Service skips rows containing misaligned data and writes the
skipped rows to the session log. The session stops if the number of error rows reaches the error
threshold.
• Non-line sequential files. The session stops at the first row containing misaligned data.
Null Character Handling

The following table describes how the Integration Service uses the Null Character and Repeat Null Character properties to determine if a column is null:
• Binary null character, Repeat Null Character disabled. A column is null if the first byte in the column is the binary null character. The Integration Service reads the rest of the column as text data to determine the column alignment and track the shift state for shift sensitive code pages. If data in the column is misaligned, the Integration Service skips the row and writes the skipped row and a corresponding error message to the session log.
• Non-binary null character, Repeat Null Character disabled. A column is null if the first character in the column is the null character. The Integration Service reads the rest of the column to determine the column alignment and track the shift state for shift sensitive code pages. If data in the column is misaligned, the Integration Service skips the row and writes the skipped row and a corresponding error message to the session log.
• Binary null character, Repeat Null Character enabled. A column is null if it contains the specified binary null character. The next column inherits the initial shift state of the code page.
• Non-binary null character, Repeat Null Character enabled. A column is null if the repeating null character fits into the column with no bytes leftover. For example, a five-byte column is not null if you specify a two-byte repeating null character. In shift-sensitive code pages, shift bytes do not affect the null value of a column. A column is still null if it contains a shift byte at the beginning or end of the column. Specify a single-byte null character if you use repeating non-binary null characters. This ensures that repeating null characters fit into a column.
Row Length Handling for Fixed-Width Flat Files
For fixed-width flat files, data in a row can be shorter than the row length in the following situations:
• The file is fixed-width line-sequential with a carriage return or line feed that appears sooner than
expected.
• The file is fixed-width non-line sequential, and the last line in the file is shorter than expected.
In these cases, the Integration Service reads the data but does not append any blanks to fill the remaining
bytes. The Integration Service reads subsequent fields as NULL. Fields containing repeating null characters
that do not fill the entire field length are not considered NULL.
Working with XML Sources

The following table describes the properties you can override for XML readers in a session:
• Treat Empty Content as Null. Treat empty XML components as null. By default, the Integration Service does not output element tags for null values. The Integration Service outputs tags for empty content.
• Source File Directory. Location of the source XML file. By default, the Integration Service looks in the service process variable directory, $PMSourceFileDir. You can enter the full path and file name. If you specify both the directory and file name in the Source Filename field, clear the Source File Directory field. The Integration Service concatenates this field with the Source Filename field. You can also use the $InputFileName session parameter to specify the file directory.
• Source Filename. Enter the file name or file name and path. Optionally, use the $InputFileName session parameter for the file name. If you specify both the directory and file name in the Source File Directory field, clear this field. The Integration Service concatenates this field with the Source File Directory field when it runs the session. For example, if you have “C:\XMLdata\” in the Source File Directory field, then enter “filename.xml” in the Source Filename field. When the Integration Service begins the session, it looks for “C:\XMLdata\filename.xml”.
• Source Filetype. Use to configure multiple file sources with a file list. Choose Direct or Indirect. The option indicates whether the source file contains the source data, or whether the source file contains a list of files with the same file properties. Choose Direct if the source file contains the source data. Choose Indirect if the source file contains a list of files. When you select Indirect, the Integration Service finds the file list and reads each listed file when it runs the session.
The following table describes the properties you can override for an XML Source Qualifier in a session:
• Validate XML Source. Provides flexibility for validating an XML source against a schema or DTD file. Select Do Not Validate to skip validation, even if the instance document has an associated DTD or schema reference. Select Validate Only if DTD is Present to validate when the XML source has a corresponding DTD or schema file; the session fails if the instance document specifies a DTD or schema and one is not present. Select Always Validate to always validate the XML file; the session fails if the DTD or schema does not exist or the data is invalid.
• Partitionable. You can create multiple partitions for the source pipeline.
You can choose to omit fixed elements from the XML source definition. If the DTD or XML schema specifies
a fixed or default value for an element, the value appears in the XML source definition.
You can define attributes as required, optional, or prohibited in an element tag. You can also specify fixed or
default values for attributes. When a DTD or XML schema contains an attribute with a fixed or default value,
the Integration Service passes the value into the pipeline even if the element tag in the instance document
does not contain the attribute. If the attribute does not have a fixed or default value, the Integration Service
passes a null value for the attribute. A parser error occurs when a required attribute is not present in an
element or a prohibited attribute appears in the element tag. The Integration Service writes this error to the
session log.
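As an illustration, consider a DTD that declares an attribute with a default value. The element and attribute names here are hypothetical:
<!ATTLIST price currency CDATA "USD">
If a price element in the instance document omits the currency attribute, the Integration Service passes USD into the pipeline. If the declaration had no fixed or default value, the Integration Service would pass a null value instead.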
Using a File List
You can create a session to run multiple source files for one source instance in the mapping. You might use this feature if, for example, the organization collects data at several locations that you want to move through the same session. When you create a mapping to use multiple source files for one source instance, the properties of all files must match the source definition.
To use multiple source files, you create a file containing the names and directories of each source file you
want the Integration Service to use. This file is referred to as a file list.
When you configure the session properties, enter the file name of the file list in the Source Filename field and
enter the location of the file list in the Source File Directory field. When the session starts, the Integration
Service reads the file list, then locates and reads the first file source in the list. After the Integration Service
reads the first file, it locates and reads the next file in the list.
The Integration Service writes the path and name of the file list to the session log. If the Integration Service
encounters an error while accessing a source file, it logs the error in the session log and stops the session.
Note: When you use a file list and the session performs incremental aggregation, the Integration Service
performs incremental aggregation across all listed source files.
The Integration Service interprets the file list using the Integration Service code page. Map the drives on an
Integration Service on Windows or mount the drives on an Integration Service on UNIX. The Integration
Service skips blank lines and ignores leading blank spaces. Any characters indicating a new line, such as \n
in ASCII files, must be valid in the code page of the Integration Service.
Use the following rules and guidelines when you create the file list:
• Each file in the list must use the user-defined code page configured in the source definition.
• Each file in the file list must share the same file properties as configured in the source definition or as
entered for the source instance in the session property sheet.
• Enter one file name or one path and file name on a line. If you do not specify a path for a file, the
Integration Service assumes the file is in the same directory as the file list.
• Each path must be local to the Integration Service node.
The following example shows a valid file list created for an Integration Service on Windows. Each of the drives listed is mapped on the Integration Service node. The western_trans.dat file is located in the same directory as the file list.
western_trans.dat
d:\data\eastern_trans.dat
e:\data\midwest_trans.dat
f:\data\canada_trans.dat
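A comparable file list for an Integration Service on UNIX might look like the following sketch, assuming the /data directory is mounted on the Integration Service node:
western_trans.dat
/data/eastern_trans.dat
/data/midwest_trans.dat
/data/canada_trans.dat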
After you create the file list, place it in a directory local to the Integration Service.
Chapter 7
Targets
This chapter includes the following topics:
• Targets Overview, 89
• Configuring Targets in a Session, 91
• Performing a Test Load, 92
• Working with Relational Targets, 93
• Working with Target Connection Groups, 104
• Working with Active Sources, 105
• Working with File Targets, 106
• Integration Service Handling for File Targets, 109
• Working with XML Targets in a Session, 115
• Integration Service Handling for XML Targets, 116
• Working with Heterogeneous Targets, 121
• Reject Files, 122
Targets Overview
In the Workflow Manager, you can create sessions with the following targets:
• Relational. You can load data to any relational database that the Integration Service can connect to. When
loading data to relational targets, you must configure the database connection to the target before you
configure the session.
• File. You can load data to a flat file or XML target or write data to an operating system command. For flat
file or XML targets, the Integration Service can load data to any local directory or FTP connection for the
target file. If the file target requires an FTP connection, you need to configure the FTP connection to the
host machine before you create the session.
• Heterogeneous. You can output data to multiple targets in the same session. You can output to multiple
relational targets, such as Oracle and Microsoft SQL Server. Or, you can output to multiple target types,
such as relational and flat file.
Globalization Features
You can configure the Integration Service to run sessions in either ASCII or Unicode data movement mode.
The following table describes target character sets supported by each data movement mode in PowerCenter:
• UTF-8. Supported in Unicode mode (targets only). In ASCII mode, the Integration Service generates a warning message, but does not terminate the session.
• EBCDIC-based SBCS. Supported in Unicode mode. Not supported in ASCII mode; the Integration Service terminates the session.
• EBCDIC-based MBCS. Supported in Unicode mode. Not supported in ASCII mode; the Integration Service terminates the session.
You can work with targets that use multibyte character sets with PowerCenter. You can choose a code page
that you want the Integration Service to use for relational objects and flat files. You specify code pages for
relational objects when you configure database connections in the Workflow Manager. The code page for a
database connection used as a target must be a superset of the source code page.
When you change the database connection code page to one that is not two-way compatible with the old
code page, the Workflow Manager generates a warning and invalidates all sessions that use that database
connection.
Code pages you select for a file represent the code page of the data contained in these files. If you are
working with flat files, you can also specify delimiters and null characters supported by the code page you
have specified for the file.
However, if you configure the Integration Service and Client for code page relaxation, you can select any code
page supported by PowerCenter for the target database connection. When using code page relaxation, select
compatible code pages for the source and target data to prevent data inconsistencies.
If the target contains multibyte character data, configure the Integration Service to run in Unicode mode.
When the Integration Service runs a session in Unicode mode, it uses the database code page to translate
data.
If the target contains only single-byte characters, configure the Integration Service to run in ASCII mode.
When the Integration Service runs a session in ASCII mode, it does not validate code pages.
Target Connections
Before you can load data to a target, you must configure the connection properties the Integration Service
uses to connect to the target file or database. You can configure target database and FTP connections in the
Workflow Manager.
Related Topics:
• “Relational Database Connections” on page 136
• “FTP Connections” on page 140
Partitioning Targets
When you create multiple partitions in a session with a relational target, the Integration Service creates
multiple connections to the target database to write target data concurrently. When you create multiple
partitions in a session with a file target, the Integration Service creates one target file for each partition. You
can configure the session properties to merge these target files.
The Targets node contains the following settings where you define properties:
• Writers
• Connections
• Properties
Configuring Writers
Click the Writers settings in the Transformations view to define the writer to use with each target instance.
When the mapping target is a flat file, an XML file, an SAP NetWeaver BI target, or a WebSphere MQ target,
the Workflow Manager specifies the necessary writer in the session properties. However, when the target is
relational, you can change the writer type to File Writer if you plan to use an external loader.
Note: You can change the writer type for non-reusable sessions in the Workflow Designer and for reusable
sessions in the Task Developer. You cannot change the writer type for instances of reusable sessions in the
Workflow Designer.
When you override a relational target to use the file writer, the Workflow Manager changes the properties for
that target instance on the Properties settings. It also changes the connection options you can define in the
Connections settings.
If the target contains a column with datetime values, the Integration Service compares the date formats
defined for the target column and the session. When the date formats do not match, the Integration Service
uses the date format with the lesser precision. For example, a session writes to a Microsoft SQL Server target
that includes a Datetime column with precision to the millisecond. The date format for the session is
MM/DD/YYYY HH24:MI:SS.NS. If you override the Microsoft SQL Server target with a flat file writer, the
Integration Service writes datetime values to the flat file with precision to the millisecond. If the date format
for the session is MM/DD/YYYY HH24:MI:SS, the Integration Service writes datetime values to the flat file
with precision to the second.
After you override a relational target to use a file writer, define the file properties for the target. Click Set File
Properties and choose the target to define.
Configuring Connections
View the Connections settings on the Mapping tab to define target connection information. For relational
targets, the Workflow Manager displays Relational as the target type by default. In the Value column, choose
a configured database connection for each relational target instance.
Depending on the target type, you can also choose one of the following connection types:
• FTP. If you want to load data to a flat file or XML target using FTP, you must specify an FTP connection when you configure target options. FTP connections must be defined in the Workflow Manager prior to configuring sessions.
• Loader. Use the external loader option to improve the load speed to Oracle, DB2, Sybase IQ, or Teradata
target databases.
To use this option, you must use a mapping with a relational target definition and choose File as the writer
type on the Writers settings for the relational target instance. The Integration Service uses an external
loader to load target files to the Oracle, DB2, Sybase IQ, or Teradata database. You cannot choose
external loader if the target is defined in the mapping as a flat file, XML, MQ, or SAP BW target.
• Queue. Choose Queue when you want to output to a WebSphere MQ or MSMQ message queue.
• None. Choose None when you want to write to a local flat file or XML file.
Configuring Properties
View the Properties settings on the Mapping tab to define target property information. The Workflow
Manager displays different properties for the different target types: relational, flat file, and XML.
Performing a Test Load

You can configure the Integration Service to perform a test load. With a test load, the Integration Service reads and transforms data without writing to targets. The Integration Service writes data to relational targets, but rolls back the data when the session completes. For all other target types, such as flat file and SAP BW, the Integration Service does not write data to the targets.
Use the following rules and guidelines when you perform a test load:
• You cannot perform a test load on sessions using XML sources.
• You can perform a test load for relational targets when you configure a session for normal mode.
• If you configure the session for bulk mode, the session fails.
Working with Relational Targets
When you configure a session to load data to a relational target, you define most properties in the
Transformations view on the Mapping tab. You also define some properties on the Properties tab and the
Config Object tab.
You can configure the following properties for relational targets:
• Table name prefix. You can specify the target owner name or prefix in the session properties to override the table name prefix in the mapping.
• Pre-session SQL. You can create SQL commands and execute them in the target database before loading data to the target. For example, you might want to drop the index for the target table before loading data into it, as shown in the example after this list.
• Post-session SQL. You can create SQL commands and execute them in the target database after loading data to the target. For example, you might want to recreate the index for the target table after loading data into it.
• Target table name. You can override the target table name for each relational target.
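For example, pre- and post-session SQL to drop and recreate an index might look like the following sketch. The index and table names are hypothetical, and the statements assume Oracle syntax, which varies by database:
Pre-session SQL:
DROP INDEX IDX_T_ORDERS_DATE
Post-session SQL:
CREATE INDEX IDX_T_ORDERS_DATE ON T_ORDERS (ORDER_DATE)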
If any target table or column name contains a database reserved word, you can create and maintain a
reserved words file containing database reserved words. When the Integration Service executes SQL against
the database, it places quotes around the reserved words.
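For example, a reserved words file might contain entries like the following sketch. Each bracketed section name identifies a database type, and the words listed are illustrative:
[Oracle]
OPTION
START
[Teradata]
MONTH
DATE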
When the Integration Service runs a session with at least one relational target, it performs database
transactions per target connection group. For example, it commits all data to targets in a target connection
group at the same time.
On the Connections settings in the Targets node, choose the database connection. You can select a
connection object, use a connection variable, or use a session parameter to define the connection value in a
parameter file.
The following table describes the properties available in the Properties settings on the Mapping tab of the session properties:
• Update (as Update). Integration Service updates all rows flagged for update. Default is enabled.
• Update (as Insert). Integration Service inserts all rows flagged for update. Default is disabled.
• Update (else Insert). Integration Service updates rows flagged for update if they exist in the target, then inserts any remaining rows marked for insert. Default is disabled.
• Enable array upsert or update. Integration Service updates or upserts data in batches of arrays. Array update and upsert operations reduce network traffic and optimize session performance. Applicable for Oracle targets.
• Reject File Directory. Reject-file directory name. By default, the Integration Service writes all reject files to the service process variable directory, $PMBadFileDir. If you specify both the directory and file name in the Reject Filename field, clear this field. The Integration Service concatenates this field with the Reject Filename field when it runs the session. You can also use the $BadFileName session parameter to specify the file directory.
• Reject Filename. File name or file name and path for the reject file. By default, the Integration Service names the reject file after the target instance name: target_name.bad. Optionally, use the $BadFileName session parameter for the file name. The Integration Service concatenates this field with the Reject File Directory field when it runs the session. For example, if you have “C:\reject_file\” in the Reject File Directory field, and enter “filename.bad” in the Reject Filename field, the Integration Service writes rejected rows to C:\reject_file\filename.bad.
At the source level, you can specify whether the Integration Service inserts, updates, or deletes source rows
or whether it treats rows as data driven. If you treat source rows as data driven, you must use an Update
Strategy transformation to indicate how the Integration Service handles rows.
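For example, an Update Strategy transformation might flag rows with an expression like the following, where UPDATE_FLAG is a hypothetical port and DD_UPDATE and DD_INSERT are the update strategy constants:
IIF(UPDATE_FLAG = 'Y', DD_UPDATE, DD_INSERT)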
This section explains how the Integration Service writes data based on the source and target row properties.
PowerCenter uses the source and target row options to provide an extra check on the session-level
properties. In addition, when you use both the source and target row options, you can control inserts,
updates, and deletes for the entire session or, if you use an Update Strategy transformation, based on the
data.
When you set the row-handling property for a source, you can treat source rows as inserts, deletes, updates,
or data driven according to the following guidelines:
• Inserts. If you treat source rows as inserts, select Insert for the target option. When you enable the Insert
target row option, the Integration Service ignores the other target row options and treats all rows as
inserts. If you disable the Insert target row option, the Integration Service rejects all rows.
• Deletes. If you treat source rows as deletes, select Delete for the target option. When you enable the
Delete target option, the Integration Service ignores the other target-level row options and treats all rows
as deletes. If you disable the Delete target option, the Integration Service rejects all rows.
• Updates. If you treat source rows as updates, the behavior of the Integration Service depends on the
target options you select.
The following table describes how the Integration Service loads the target when you configure the session to treat source rows as updates:
• Insert. If enabled, the Integration Service uses the target update option (Update as Update, Update as Insert, or Update else Insert) to update rows. If disabled, the Integration Service rejects all rows when you select Update as Insert or Update else Insert as the target-level update option.
• Update as Insert. Integration Service updates all rows as inserts. You must also select the Insert target option.
• Update else Insert. Integration Service updates existing rows and inserts other rows as if marked for insert. You must also select the Insert target option.
• Enable array upsert or update. Integration Service updates or upserts data in batches of arrays. Array update and upsert operations reduce network traffic and optimize session performance. Applicable for Oracle targets.
• Delete. Integration Service ignores this setting and uses the selected target update option.
The Integration Service rejects all rows if you do not select one of the target update options.
• Data Driven. If you treat source rows as data driven, you use an Update Strategy transformation to specify
how the Integration Service handles rows. However, the behavior of the Integration Service also depends
on the target options you select.
The following table describes how the Integration Service loads the target when you configure the session
to treat source rows as data driven:
• Insert. If enabled, the Integration Service inserts all rows flagged for insert. Enabled by default. If disabled, the Integration Service rejects the following rows:
- Rows flagged for insert
- Rows flagged for update if you enable Update as Insert or Update else Insert
• Update as Update. Integration Service updates all rows flagged for update. Enabled by default.
• Update as Insert. Integration Service inserts all rows flagged for update. Disabled by default.
• Update else Insert. Integration Service updates rows flagged for update and inserts remaining rows as if marked for insert.
• Delete. If enabled, the Integration Service deletes all rows flagged for delete. If disabled, the Integration Service rejects all rows flagged for delete.
The Integration Service rejects rows flagged for update if you do not select one of the target update options.
Target Table Truncation
The Integration Service can truncate target tables before running a session. You can choose to truncate
tables on a target-by-target basis. If you have more than one target instance, select the truncate target table
option for one target instance.
The Integration Service issues a delete or truncate command based on the target database and primary key-
foreign key relationships in the session target. To optimize performance, use the truncate table command.
The delete from command may impact performance.
The following table describes the commands that the Integration Service issues for each database:
• DB2. If the table contains a primary key referenced by a foreign key, the Integration Service issues delete from <table_name>. If it does not, the Integration Service issues truncate table <table_name> immediate. If you use a DB2 database on AS/400, the Integration Service issues a clrpfm command in both cases.
• Microsoft SQL Server. If the table contains a primary key referenced by a foreign key, the Integration Service issues delete from <table_name>. If it does not, the Integration Service issues truncate table <table_name>. If you use the Microsoft SQL Server ODBC driver, the Integration Service issues a delete statement.
If the Integration Service issues a truncate target table command and the target table instance specifies a
table name prefix, the Integration Service verifies the database user privileges for the target table by issuing a
truncate command. If the database user is not specified as the target owner name or does not have the
database privilege to truncate the target table, the Integration Service issues a delete command instead.
If the Integration Service issues a delete command and the database has logging enabled, the database
saves all deleted records to the log for rollback. If you do not want to save deleted records for rollback, you
can disable logging to improve the speed of the delete.
For all databases, if the Integration Service fails to truncate or delete any selected table because the user
lacks the necessary privileges, the session fails.
If you enable truncate target tables with the following sessions, the Integration Service does not truncate
target tables:
• Incremental aggregation. When you enable both truncate target tables and incremental aggregation in the
session properties, the Workflow Manager issues a warning that you cannot enable truncate target tables
and incremental aggregation in the same session.
• Test load. When you enable both truncate target tables and test load, the Integration Service disables the
truncate table function, runs a test load session, and writes a message to the session log indicating that
the truncate target tables option is turned off for the test load session.
• Real-time. The Integration Service does not truncate target tables when you restart a JMS or WebSphere
MQ real-time session that has recovery data.
Deadlock Retry
Select the Session Retry on Deadlock option in the session properties if you want the Integration Service to
retry writes to a target database or recovery table on a deadlock. A deadlock occurs when the Integration
Service attempts to take control of the same lock for a database row.
The Integration Service may encounter a deadlock under the following conditions:
• The session writes to a partitioned target.
• Two sessions write simultaneously to the same target.
• Multiple sessions simultaneously write to the recovery table, PM_RECOVERY.
You can retry sessions on deadlock for targets configured for normal load. If you select this option and
configure a target for bulk mode, the Integration Service does not retry target writes on a deadlock for that
target. You can also configure the Integration Service to set the number of deadlock retries and the deadlock
sleep time period.
To retry a session on deadlock, click the Properties tab in the session properties and then scroll down to the
Performance settings.
Dropping and Recreating Indexes

After inserting large amounts of data into a target, you can drop and recreate indexes on that table to optimize query speed. You can drop and recreate indexes in the following ways:
• Using pre- and post-session SQL. The preferred method for dropping and re-creating indexes is to define an SQL statement in the Pre SQL property that drops indexes before loading data to the target. Use the Post SQL property to recreate the indexes after loading data to the target. Define the Pre SQL and Post SQL properties for relational targets in the Transformations view on the Mapping tab in the session properties.
• Using the Designer. The same dialog box you use to generate and execute DDL code for table creation
can drop and recreate indexes. However, this process is not automatic. Every time you run a session that
modifies the target table, you need to launch the Designer and use this feature.
Constraint-Based Loading
In the Workflow Manager, you can specify constraint-based loading for a session. When you select this
option, the Integration Service orders the target load on a row-by-row basis. For every row generated by an
active source, the Integration Service loads the corresponding transformed row first to the primary key table,
then to any foreign key tables. Constraint-based loading depends on the following requirements:
• Active source. Related target tables must have the same active source.
• Key relationships. Target tables must have key relationships.
• Target connection groups. Targets must be in one target connection group.
• Treat rows as insert. Use this option when you insert into the target. You cannot use updates with
constraint-based loading.
Active Source
When target tables receive rows from different active sources, the Integration Service reverts to normal
loading for those tables, but loads all other targets in the session using constraint-based loading when
possible. For example, a mapping contains three distinct pipelines. The first two contain a source, source
qualifier, and target. Since these two targets receive data from different active sources, the Integration
Service reverts to normal loading for both targets. The third pipeline contains a source, Normalizer, and two
targets. Since these two targets share a single active source (the Normalizer), the Integration Service
performs constraint-based loading: loading the primary key table first, then the foreign key table.
Key Relationships
When target tables have no key relationships, the Integration Service does not perform constraint-based
loading. Similarly, when target tables have circular key relationships, the Integration Service reverts to a
normal load. For example, you have one target containing a primary key and a foreign key related to the
primary key in a second target. The second target also contains a foreign key that references the primary key
in the first target. The Integration Service cannot enforce constraint-based loading for these tables. It reverts
to a normal load.
To verify that all targets are in the same target connection group, complete the following tasks:
• Verify all targets are in the same target load order group and receive data from the same active source.
• Use the default partition properties and do not add partitions or partition points.
• Define the same target type for all targets in the session properties.
• Define the same database connection name for all targets in the session properties.
• Choose normal mode for the target load type for all targets in the session properties.
To perform updates on related target tables, use one of the following approaches instead:
• Load the primary key table in one mapping and the dependent tables in another mapping. Use constraint-
based loading to load the primary table.
• Perform inserts in one mapping and updates in another mapping.
Constraint-based loading does not affect the target load ordering of the mapping. Target load ordering
defines the order the Integration Service reads the sources in each target load order group in the mapping. A
target load order group is a collection of source qualifiers, transformations, and targets linked together in a
mapping. Constraint-based loading establishes the order in which the Integration Service loads individual
targets within a set of targets receiving data from a single source qualifier.
Example
This example describes a mapping configured to perform constraint-based loading. In the first pipeline,
target T_1 has a primary key; T_2 and T_3 contain foreign keys that reference the T_1 primary key. T_3 also
has a primary key that T_4 references as a foreign key.
Since these tables receive records from a single active source, SQ_A, the Integration Service loads rows to
the target in the following order:
1. T_1
2. T_2 and T_3 (in no particular order)
3. T_4
The Integration Service loads T_1 first because it has no foreign key dependencies and contains a primary
key referenced by T_2 and T_3. The Integration Service then loads T_2 and T_3, but since T_2 and T_3 have
no dependencies, they are not loaded in any particular order. The Integration Service loads T_4 last, because
it has a foreign key that references a primary key in T_3.
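A minimal DDL sketch of the key relationships in this example, with illustrative column names, looks like the
following:
CREATE TABLE T_1 (C1 INT PRIMARY KEY);
CREATE TABLE T_2 (C2 INT PRIMARY KEY, FK1 INT REFERENCES T_1 (C1));
CREATE TABLE T_3 (C3 INT PRIMARY KEY, FK1 INT REFERENCES T_1 (C1));
CREATE TABLE T_4 (C4 INT PRIMARY KEY, FK3 INT REFERENCES T_3 (C3));
With these constraints in place, loading in the order T_1, then T_2 and T_3, then T_4 never violates a foreign
key.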
After loading the first set of targets, the Integration Service begins reading source B. If there are no key
relationships between T_5 and T_6, the Integration Service reverts to a normal load for both targets. If T_6
has a foreign key that references the primary key in T_5, the Integration Service loads rows to the targets in
the following order:
• T_5
• T_6
T_1, T_2, T_3, and T_4 are in one target connection group if you use the same database connection for each
target, and you use the default partition properties. T_5 and T_6 are in another target connection group
together if you use the same database connection for each target and you use the default partition
properties. The Integration Service includes T_5 and T_6 in a different target connection group because they
are in a different target load order group from the first four targets.
To enable constraint-based loading:
1. In the General Options settings of the Properties tab, choose Insert for the Treat Source Rows As
property.
2. Click the Config Object tab. In the Advanced settings, select Constraint Based Load Ordering.
3. Click OK.
Bulk Loading
You can enable bulk loading when you load to DB2, Sybase, Oracle, or Microsoft SQL Server.
If you enable bulk loading for other database types, the Integration Service reverts to a normal load. Bulk
loading improves the performance of a session that inserts a large amount of data to the target database.
Configure bulk loading on the Mapping tab.
When bulk loading, the Integration Service invokes the database bulk utility and bypasses the database log,
which speeds performance. Without writing to the database log, however, the target database cannot
perform rollback. As a result, you may not be able to perform recovery. Therefore, you must weigh the
importance of improved session performance against the ability to recover an incomplete session.
Note: When loading to DB2, Microsoft SQL Server, and Oracle targets, you must specify a normal load for data
driven sessions. When you specify bulk mode and data driven, the Integration Service reverts to normal load.
Committing Data
When bulk loading to Sybase and DB2 targets, the Integration Service ignores the commit interval you define
in the session properties and commits data when the writer block is full.
When bulk loading to Microsoft SQL Server and Oracle targets, the Integration Service commits data at each
commit interval. Also, Microsoft SQL Server and Oracle start a new bulk load transaction after each commit.
Tip: When bulk loading to Microsoft SQL Server or Oracle targets, define a large commit interval to reduce the
number of bulk load transactions and increase performance.
Oracle Guidelines
When you enable bulk load to Oracle, the Integration Service invokes the standard Oracle client interface with
the bulk routines for direct path loads.
DB2 Guidelines
Use the following guidelines when bulk loading to DB2:
• You must drop indexes and constraints in the target tables before running a bulk load session. After the
session completes, you can rebuild them. If you use bulk loading with the session on a regular basis, use
pre- and post-session SQL to drop and rebuild indexes and key constraints.
• You cannot use source-based or user-defined commit when you run bulk load sessions on DB2.
• If you create multiple partitions for a DB2 bulk load session, you must use database partitioning for the
target partition type. If you choose any other partition type, the Integration Service reverts to normal load.
• When you bulk load to DB2, the DB2 database writes non-fatal errors and warnings to a message log file in
the session log directory. The message log file name is
<session_log_name>.<target_instance_name>.<partition_index>.log. You can check both the message log
file and the session log when you troubleshoot a DB2 bulk load session.
• If you want to bulk load flat files to DB2 for z/OS, use PowerExchange®.
For more information, see the DB2 documentation.
Table Name Prefix
You can specify the table owner name in the target instance or on the Mapping tab of the session properties.
When you specify the table owner name in the session properties, you override the table owner name in the
transformation properties.
You can use a parameter or variable as the target table name prefix. Use any parameter or variable type that
you can define in the parameter file. For example, you can use a session parameter, $ParamMyPrefix, as the
table name prefix, and set $ParamMyPrefix to the table name prefix in the parameter file.
Configure the target table name on the Transformation view of the Mapping tab.
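For example, a parameter file entry for the prefix might look like the following sketch, where the folder,
workflow, and session names are hypothetical placeholders:
[MyFolder.WF:wf_load_sales.ST:s_load_sales]
$ParamMyPrefix=SALES_OWNER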
Reserved Words
If any table name or column name contains a database reserved word, such as MONTH or YEAR, the session
fails with database errors when the Integration Service executes SQL against the database. You can create
and maintain a reserved words file, reswords.txt, in the server/bin directory. When the Integration Service
initializes a session, it searches for reswords.txt. If the file exists, the Integration Service places quotes
around matching reserved words when it executes SQL against the database.
Use the following rules and guidelines when working with reserved words:
• The Integration Service searches the reserved words file when it generates SQL to connect to source,
target, and lookup databases.
• If you override the SQL for a source, target, or lookup, you must enclose any reserved word in quotes.
• You may need to enable some databases, such as Microsoft SQL Server and Sybase, to use SQL-92
standards regarding quoted identifiers. Use connection environment SQL to issue the command. For
example, use the following command with Microsoft SQL Server:
SET QUOTED_IDENTIFIER ON
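The reswords.txt file groups reserved words under a section for each database type. The following sketch
shows the layout; the entries are examples, not a complete list:
[Oracle]
MONTH
YEAR
[SQL Server]
MONTH
List each reserved word on a separate line under the section for its database.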
To insert arrays of data into a Teradata target by using ODBC, configure the OptimizeTeradataWrite custom
property at the session level or at the PowerCenter Integration Service level. Set the value of the
OptimizeTeradataWrite custom property to 1 to insert arrays of data into the target.
Note that the OptimizeTeradataWrite custom property is applicable only for inserting data into the target, and
not for updating data in the target, deleting data from the target, or reading data from the source.
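For example, you might set the property at the session level as a name-value pair under Custom Properties
on the Config Object tab of the session:
OptimizeTeradataWrite=1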
Target Connection Groups
The Integration Service performs the following database transactions per target connection group:
• Deadlock retry. If the Integration Service encounters a deadlock when it writes to a target, the deadlock
affects targets in the same target connection group. The Integration Service still writes to targets in other
target connection groups.
• Constraint-based loading. The Integration Service enforces constraint-based loading for targets in a
target connection group. If you want to specify constraint-based loading, you must verify the primary table
and foreign table are in the same target connection group.
Targets in the same target connection group meet the following criteria:
• Belong to the same target load order group and receive data from the same active source.
• Use the default partition properties.
• Have the same target type in the session.
• Have the same database connection name in the session.
• Have the same target load type, either normal or bulk mode.
Suppose you create a session based on the same mapping. In the Workflow Manager, you do not create
multiple partitions. However, you use one Oracle database connection name for one target, and you use a
different Oracle database connection name for the other target. You specify normal mode for the target load
type for both target tables. The targets in the session belong to different target connection groups.
Note: When you define the target database connections for multiple targets in a session using session
parameters, the targets may or may not belong to the same target connection group. The targets belong to
the same target connection group if all session parameters resolve to the same target connection name. For
example, you create a session with two targets and specify the session parameter $DBConnection1 for one
target, and $DBConnection2 for the other target. In the parameter file, you define $DBConnection1 as Sales1
and you also define $DBConnection2 as Sales1. Because both parameters resolve to the same connection
name, the targets belong to the same target connection group.
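A parameter file sketch for this example, with hypothetical folder, workflow, and session names:
[MyFolder.WF:wf_load.ST:s_two_targets]
$DBConnection1=Sales1
$DBConnection2=Sales1
If you instead define $DBConnection2 as Sales2, the parameters resolve to different connection names and
the targets belong to different target connection groups.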
Active Sources
An active source is an active transformation that generates rows. The following transformations are active
sources in a mapping:
• Aggregator
• Application Source Qualifier
• Custom, configured as an active transformation
• Joiner
• MQ Source Qualifier
• Normalizer (VSAM or pipeline)
• Rank
• Sorter
• Source Qualifier
• XML Source Qualifier
• Mapplet, if it contains any of the above transformations
Note: The Filter, Router, Transaction Control, and Update Strategy transformations are active transformations
in that they can change the number of rows that pass through. However, they are not active sources in the
mapping because they do not generate rows. Only transformations that can generate rows are active
sources.
Active sources affect how the Integration Service processes a session when you use any of the following
transformations or session properties:
• XML targets. The Integration Service can load data from different active sources to an XML target when
each input group receives data from one active source.
• Transaction generators. A transaction generator, such as a Transaction Control transformation, becomes
ineffective for downstream transformations or targets if you put a transaction control point after it.
Transaction control points are transaction generators and active sources that generate commits.
• Mapplets. An Input transformation must receive data from a single active source.
• Source-based commit. Some active sources generate commits. When you run a source-based commit
session, the Integration Service generates a commit from these active sources at every commit interval.
• Constraint-based loading. To use constraint-based loading, you must connect all related targets to the
same active source. The Integration Service orders the target load on a row-by-row basis based on rows
generated by an active source.
• Row error logging. If an error occurs downstream from an active source that is not a source qualifier, the
Integration Service cannot identify the source row information for the logged error row.
You can output data to a flat file target in one of the following ways:
• Use a flat file target definition. Create a mapping with a flat file target definition. Create a session using
the flat file target definition. When the Integration Service runs the session, it creates the target flat file or
generates the target data based on the connected ports in the mapping and on the flat file target
definition. The Integration Service does not write data in unconnected ports to a fixed-width flat file target.
• Use a relational target definition. Use a relational definition to write to a flat file when you want to use an
external loader to load the target. Create a mapping with a relational target definition. Create a session
using the relational target definition. Configure the session to output to a flat file by specifying the File
Writer in the Writers settings on the Mapping tab.
You can configure the following properties for flat file targets:
• Target properties. You can define target properties such as partitioning options, merge options, output
file options, reject options, and command options.
• Flat file properties. You can choose to create delimited or fixed-width files, and define their properties.
You can define the following properties on the Mapping tab for flat file target definitions:
• Merge Type. Type of merge the Integration Service performs on the data for partitioned targets.
• Merge File Directory. Name of the merge file directory. By default, the Integration Service writes the merge
file in the service process variable directory, $PMTargetFileDir. If you enter a full directory and file name in
the Merge File Name field, clear this field.
• Merge File Name. Name of the merge file. Default is target_name.out. This property is required if you
select a merge type.
• Append if Exists. Appends the output data to the target files and reject files for each partition. Appends
output data to the merge file if you merge the target files. You cannot use this option for target files that
are non-disk files, such as FTP target files. If you do not select this option, the Integration Service
truncates each target file before writing the output data to the target file. If the file does not exist, the
Integration Service creates it.
• Header Options. Create a header row in the file target. Default is No Header. You can choose the following
options:
- No Header. Do not create a header row in the flat file target.
- Output Field Names. Create a header row in the file target with the output port names.
- Use header command output. Use the command in the Header Command field to generate a header
row. For example, you can use a command to add the date to a header row for the file target.
• Header Command. Command used to generate the header row in the file target.
• Footer Command. Command used to generate a footer row in the file target.
• Output Type. Type of target for the session. Select File to write the target data to a file target. Select
Command to output data to a command. You cannot select Command for FTP or Queue target
connections.
• Merge Command. Command used to process the output data from all partitioned targets.
• Output File Directory. Name of the output directory for a flat file target. By default, the Integration Service
writes output files in the service process variable directory, $PMTargetFileDir. If you specify both the
directory and file name in the Output File Name field, clear this field. The Integration Service concatenates
this field with the Output File Name field when it runs the session. You can also use the $OutputFileName
session parameter to specify the file directory.
• Output File Name. File name, or file name and path of the flat file target. Optionally, use the
$OutputFileName session parameter for the file name. By default, the Workflow Manager names the target
file based on the target definition used in the mapping: target_name.out. The Integration Service
concatenates this field with the Output File Directory field when it runs the session. If the target definition
contains a slash character, the Workflow Manager replaces the slash character with an underscore. When
you use an external loader to load to an Oracle database, you must specify a file extension. If you do not
specify a file extension, the Oracle loader cannot find the flat file and the Integration Service fails the
session. Note: If you specify an absolute path file name when using FTP, the Integration Service ignores
the Default Remote Directory specified in the FTP connection. When you specify an absolute path file
name, do not use single or double quotes.
• Reject File Directory. Name of the directory for the reject file. By default, the Integration Service writes all
reject files to the service process variable directory, $PMBadFileDir. If you specify both the directory and
file name in the Reject File Name field, clear this field. The Integration Service concatenates this field with
the Reject File Name field when it runs the session. You can also use the $BadFileName session
parameter to specify the file directory.
• Reject File Name. File name, or file name and path of the reject file. By default, the Integration Service
names the reject file after the target instance name: target_name.bad. Optionally, use the $BadFileName
session parameter for the file name. The Integration Service concatenates this field with the Reject File
Directory field when it runs the session. For example, if you have “C:\reject_file\” in the Reject File
Directory field, and enter “filename.bad” in the Reject File Name field, the Integration Service writes
rejected rows to C:\reject_file\filename.bad.
Use a command to perform additional processing of flat file target data. For example, use a command to sort
target data or compress target data. You can increase session performance by pushing transformation tasks
to the command instead of the Integration Service.
To send the target data to a command, select Command for the output type and enter a command for the
Command property.
For example, to generate a compressed file from the target data, use the following command:
compress -c - > $PMTargetFileDir/myCompressedFile.Z
The Integration Service sends the output data to the command, and the command generates a compressed
file that contains the target data.
Note: You can also use service process variables, such as $PMTargetFileDir, in the command.
In the Transformations view on the Mapping tab, click the Targets node and then click Set File Properties to
open the Flat Files dialog box.
To edit the fixed-width properties, select Fixed Width and click Advanced.
You can define the following options in the Fixed Width Properties dialog box:
• Null Character. Optional. Character that the PowerCenter Integration Service substitutes for null values
when it reads null values from a database or a flat file. You can enter any valid character in the file code
page.
• Repeat Null Character. Optional. Fills null value fields with the character specified in the Null Character
option. If you do not select this option, the PowerCenter Integration Service substitutes each null value
with one null character.
• Code Page. Optional. Code page of the fixed-width file. Default is the PowerCenter Client code page.
Select a code page or a variable:
- Code page. Select the code page.
- Use Variable. Enter a user-defined workflow or worklet variable or the session parameter $ParamName,
and define the code page in the parameter file. Use the code page name.
In the Transformations view on the Mapping tab, click the Targets node and then click Set File Properties to
open the Flat Files dialog box. To edit the delimited properties, select Delimited and click Advanced.
You can define the following options in the Delimited File Properties dialog box:
• Delimiters. Character used to separate columns of data. Delimiters can be either printable or single-byte
unprintable characters, and must be different from the escape character and the quote character (if
selected). To enter a single-byte unprintable character, click the Browse button to the right of this field. In
the Delimiters dialog box, select an unprintable character from the Insert Delimiter list and click Add. You
cannot select unprintable multibyte characters as delimiters.
• Optional Quotes. Select None, Single, or Double. If you select a quote character, the Integration Service
does not treat delimiter characters within the quote characters as a delimiter. For example, suppose an
output file uses a comma as a delimiter and the Integration Service receives the following row: 342-3849,
‘Smith, Jenna’, ‘Rockville, MD’, 6. If you select the optional single quote character, the Integration Service
ignores the commas within the quotes and writes the row as four fields. If you do not select the optional
single quote, the Integration Service writes six separate fields.
• Code Page. Code page of the delimited file. Default is the PowerCenter Client code page. Select a code
page or a variable:
- Code page. Select the code page.
- Use Variable. Enter a user-defined workflow or worklet variable or the session parameter $ParamName,
and define the code page in the parameter file. Use the code page name.
You can configure a session to write to flat file targets in the following ways:
• Write to fixed-width flat files from relational target definitions. The Integration Service adds spaces to
target columns based on transformation datatype.
• Write to fixed-width flat files from flat file target definitions. You must configure the precision and field
width for flat file target definitions to accommodate the total length of the target field.
• Generate flat file targets by transaction. You can configure the file target to generate a separate output
file for each transaction.
• Write empty fields for unconnected ports in fixed-width file definitions. You can configure the mapping
so that the Integration Service writes empty fields for unconnected ports in a fixed-width flat file target
definition.
• Write multibyte data to fixed-width files. You must configure the precision of string columns to
accommodate character data. When writing shift-sensitive data to a fixed-width flat file target, the
Integration Service adds shift characters and spaces to meet file requirements.
When the Integration Service writes to a fixed-width flat file based on a relational target definition in the
mapping, it adds spaces to columns based on the transformation datatype connected to the target. This
allows the Integration Service to write optional symbols necessary for the datatype, such as a negative sign
or decimal point, without sending the row to the reject file.
For example, you connect a transformation Integer(10) port to a Number(10) column in a relational target
definition. In the session properties, you override the relational target definition to use the File Writer and you
specify to output a fixed-width flat file. In the target flat file, the Integration Service appends an additional
byte to the Number(10) column to allow for negative signs that might be associated with Integer data.
The following table describes the number of bytes the Integration Service adds to the target column and
optional characters it uses for each datatype:
Note: When the Integration Service writes a row to the reject file, it writes a message in the session log.
When a session writes to a fixed-width flat file based on a fixed-width flat file target definition in the mapping,
the Integration Service defines the total length of a field by the precision or field width defined in the target.
Fixed-width files are byte-oriented, which means the total length of a field is measured in bytes.
The following table describes how the Integration Service measures the total field length for fields in a fixed-
width flat file target definition:
String Precision
Accommodate the following characters when you configure the precision or field width for flat file target
definitions:
• Datetime. Date and time separators, such as slashes (/), dashes (-), and colons (:). For example, the
format MM/DD/YYYY HH24:MI:SS.US has a total length of 26 bytes.
When you edit the flat file target definition in the mapping, define the precision or field width large enough to
accommodate both the target data and these characters.
For example, suppose you have a mapping with a fixed-width flat file target definition. The target definition
contains a number column with a precision of 10 and a scale of 2. You use a comma as the decimal
separator and a period as the thousands separator. You know some rows of data might have a negative
value. Based on this information, you know the longest possible number is formatted with the following
format:
-NN.NNN.NNN,NN
Open the flat file target definition in the mapping and define the field width for this number column as a
minimum of 14 bytes.
For example, a fixed-width flat file target definition contains the following ports:
• EmployeeID
• EmployeeName
• Street
• City
• State
In the mapping, you connect only the EmployeeID and EmployeeName ports in the flat file target definition.
You configure the flat file target definition to create a header row with the output port names. The Integration
Service generates an output file with the following rows:
EmployeeID EmployeeName
If you want the Integration Service to write empty fields for the unconnected ports, create output ports in an
upstream transformation that do not contain data. Then connect these ports containing null values to the
fixed-width flat file target definition. For example, you connect the ports containing null values to the Street,
City, and State ports in the flat file target definition. The Integration Service then generates an output file
that contains the EmployeeID and EmployeeName values followed by empty Street, City, and State fields.
For string columns, the Integration Service truncates the data if the precision is not large enough to
accommodate the multibyte data.
Configure the precision for multibyte data in the following ways:
• Non shift-sensitive multibyte data. The file contains all multibyte data. Configure the precision in the
target definition to allow for the additional bytes.
For example, you know that the target data contains four double-byte characters, so you define the target
definition with a precision of 8 bytes.
If you configure the target definition with a precision of 4, the Integration Service truncates the data
before writing to the target.
• Shift-sensitive multibyte data. The file contains single-byte and multibyte data. When writing to a shift-
sensitive flat file target, the Integration Service adds shift characters and spaces to meet file
requirements. You must configure the precision in the target definition to allow for the additional bytes
and the shift characters.
Note: Delimited files are character-oriented, and you do not need to allow for additional precision for
multibyte data.
The Integration Service writes shift characters and spaces in the following ways:
• If a column begins or ends with a double-byte character, the Integration Service adds shift characters so
the column begins and ends with a single-byte shift character.
• If the data is shorter than the column width, the Integration Service pads the rest of the column with
spaces.
• If the data is longer than the column width, the Integration Service truncates the data so the column ends
with a single-byte shift character.
To illustrate how the Integration Service handles a fixed-width file containing shift-sensitive data, say you
want to output the following data to the target:
SourceCol1 SourceCol2
AAAA aaaa
The first target column contains eight bytes and the second target column contains four bytes.
The Integration Service must add shift characters to handle shift-sensitive data. Because the first target
column can hold only eight bytes, the Integration Service truncates the data to make room for the shift
characters and writes the following data to the target:
TargetCol1 TargetCol2
-oAAA-i aaaa
The notation in this example has the following meaning:
• A. Double-byte character.
• -o. Shift-out character.
• -i. Shift-in character.
For the first target column, the Integration Service writes three of the double-byte characters to the target. It
cannot write any additional double-byte characters to the output column because the column must end in a
single-byte character. If you add two more bytes to the first target column definition, then the Integration
Service can add shift characters and write all the data without truncation.
For the second target column, the Integration Service writes all four single-byte characters to the target. It
does not add shift characters to the column because the column begins and ends with single-byte
characters.
Null Characters in Fixed-Width Files
The null character can be repeating or non-repeating. If the null character is repeating, the Integration Service
writes as many null characters as possible into a target column. If you specify a multibyte null character and
there are extra bytes left after writing null characters, the Integration Service pads the column with single-
byte spaces. If a column is smaller than the multibyte character specified as the null character, the session
fails at initialization.
Character Set
You can configure the Integration Service to run sessions with flat file targets in either ASCII or Unicode data
movement mode.
If you configure a session with a flat file target to run in Unicode data movement mode, the target file code
page must be a superset of the source code page. Delimiters, escape, and null characters must be valid in the
specified code page of the flat file.
If you configure a session to run in ASCII data movement mode, delimiters, escape, and null characters must
be valid in the ISO Western European Latin1 code page. Any 8-bit character you specified in previous versions
of PowerCenter is still valid.
When writing to fixed-width files, the Integration Service truncates the target definition port name if it is
longer than the column width.
For example, a flat file target definition contains the following columns:
ITEM_ID number
ITEM_NAME string
PRICE number
The column width for ITEM_ID is six. When you enable the Output Metadata For Flat File Target option, the
Integration Service writes the following text to a flat file:
#ITEM_ITEM_NAME PRICE
100001Screwdriver 9.50
100002Hammer 12.90
100003Small nails 3.00
You can define the following properties in the XML Writer:
• Output File Directory. Enter the directory name in this field. By default, the Integration Service writes
output files in the service process variable directory, $PMTargetFileDir. You can enter the full path and file
name. If you specify both the directory and file name in the Output Filename field, clear this field. The
Integration Service concatenates this field with the Output Filename field when it runs the session. You
can also use the $OutputFileName session parameter to specify the file directory.
• Output Filename. Enter the file name, or file name and path. By default, the Workflow Manager names the
target file based on the target definition used in the mapping: target_name.xml. If the target definition
contains a slash character, the Workflow Manager replaces the slash character with an underscore.
Optionally, use the $OutputFileName session parameter for the file name. If you specify both the directory
and file name in the Output File Directory field, clear this field. The Integration Service concatenates this
field with the Output File Directory field when it runs the session. If you specify an absolute path file name
when using FTP, the Integration Service ignores the Default Remote Directory specified in the FTP
connection. When you specify an absolute path file name, do not use single or double quotes.
• Validate Target. Validates simple data types. The Integration Service does not validate the target XML
structure against a schema.
• Format Output. Format the XML target file so the XML elements and attributes indent. If you do not select
Format Output, each line of the XML file starts in the same position.
• XML Datetime Format. Select local time, local time with time zone, or UTC. Local time with time zone is
the difference in hours between the server time zone and Greenwich Mean Time. UTC is Greenwich Mean
Time.
• Null Content Representation. Choose how to represent null content in the target. Default is No Tag.
• Empty String Content Representation. Choose how to represent empty string content in the target. Default
is Tag with Empty Content.
• Empty String Attribute Representation. Choose how to represent empty string attributes in the target.
Default is Attribute Name with Empty String.
You can configure the following settings and behaviors for XML targets:
• Character set. Configure the Integration Service to run sessions with XML targets in either ASCII or
Unicode data movement mode.
• Null and empty string. Choose how the Integration Service handles null data or empty strings when it
writes data to an XML target.
• Handling duplicate group rows. Choose how the Integration Service handles duplicate rows of data.
• DTD and schema reference. Define a DTD or schema file name for the target XML file.
• Flushing XML on commits. Configure the Integration Service to periodically flush data to the target.
• XML caching properties. Define a cache directory for an XML target.
• Session logs for XML targets. View session logs for an XML session.
• Multiple XML output. Configure the Integration Service to output a new XML document when the data in
the root changes.
• Partitioning the XML Generator. When you generate XML in multiple partitions, you always generate
separate documents for each partition.
• Generating XML files with no data. Configure the WriteNullXMLFile custom property to skip creating an
XML file when the XML Generator transformation receives no data.
Character Set
You can configure the Integration Service to run sessions with XML targets in either ASCII or Unicode data
movement mode. XML files contain an encoding declaration that indicates the code page used in the file. The
most commonly used code pages are UTF-8 and UTF-16. Of these, PowerCenter supports only UTF-8 for XML
targets. Use the same set of code pages for XML files as for relational databases and other files.
For XML targets, PowerCenter uses the code page declared in the XML file. When you run the Integration
Service in Unicode data movement mode, the XML target code page must be a superset of the Integration
Service code page and the source code page.
Null and Empty Strings
By default, the Integration Service does not output a tag for null content, and outputs a tag with empty
content for an empty string. To change these defaults, you can change the Null Content Representation and Empty String Content
Representation XML target properties. For attributes, change Null Attribute Representation and the Empty
String Attribute Representation properties.
The following options control the format of XML elements and attributes that contain null values or empty
strings:
• Null Content or Empty String Content:
- No Tag. Does not output a tag.
- Tag with Empty Content. Outputs the XML tag with no content.
• Null Attribute or Empty String Attribute:
- No Attribute. Does not output the attribute.
- Attribute Name with Empty String. Outputs the attribute name with no content.
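For example, for an element named CITY (an illustrative name) with null content, the two content options
produce the following results:
<!-- Tag with Empty Content -->
<CITY></CITY>
<!-- No Tag: the element is omitted from the document -->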
You can specify fixed or default values for elements and attributes. When an element in an XML schema or a
DTD has a default value, the Integration Service inserts the value instead of writing empty content. When an
element has a fixed value in the schema, the value is always inserted in the XML file. If the XML schema or
DTD does not specify a value for an attribute and the attribute has a null value, the Integration Service omits
the attribute.
If a required attribute does not have a fixed value, the attribute must be a projected field. The Integration
Service does not output invalid attributes to a target. An error occurs when a prohibited attribute appears in
an element tag. An error also occurs if a required attribute is not present in an element tag. The Integration
Service writes these errors to the session log or the error log when you enable row error logging.
Handling Duplicate Group Rows
The Integration Service does not write duplicate rows to the reject file. The Integration Service writes
duplicate rows to the session log. You can skip writing warning messages in the session log for the duplicate
rows. Disable the XMLWarnDupRows Integration Service option in the Informatica Administrator.
The Integration Service handles duplicate rows passed to the XML target root group differently than it
handles rows passed to other XML target groups:
• For the XML target root group, the Integration Service always passes the first row to the target. When the
Integration Service encounters duplicate rows, it increases the number of rejected rows in the session
load summary.
• For any XML target group other than the root group, you can configure duplicate group row handling in the
XML target definition in the Mapping Designer.
• If you choose to warn about duplicate rows, the Integration Service writes all duplicate rows for the root
group to the session log. Otherwise, the Integration Service drops the rows without logging any error
messages.
You can select which row the Integration Service passes to the XML target:
• First row. The Integration Service passes the first row to the target. When the Integration Service
encounters other rows with the same primary key, the Integration Service increases the number of
rejected rows in the session load summary.
• Last row. The Integration Service passes the last duplicate row to the target. You can configure the
Integration Service to write the duplicate XML rows to the session log by setting the Warn About Duplicate
XML Rows option.
For example, the Integration Service encounters five duplicate rows. If you configure the Integration
Service to write the duplicate XML rows to the session log, the Integration Service passes the fifth row to
the XML target and writes the first four duplicate rows to the session log. Otherwise, the Integration
Service passes the fifth row to the XML target but does not write anything to the session log.
• Error. The Integration Service passes the first row to the target. When the Integration Service encounters
a duplicate row, it increases the number of rejected rows in the session load summary and increments the
error count.
When the Integration Service reaches the error threshold, the session fails and the Integration Service
does not write any rows to the XML target.
The Integration Service sets an error threshold for each XML group.
DTD and Schema Reference
Note: An XML instance document must refer to the full relative path of a schema if a midstream XML
transformation is processing the file. Otherwise, the full path is not required.
Flushing XML on Commits
You can configure the Integration Service to periodically flush XML data to the target document. Consider
setting the On Commit attribute in the following situations:
• Large XML files. If you are processing a large XML file of several gigabytes, the Integration Service may
have reduced performance. You can set the On Commit attribute to Append to Doc. This flushes XML data
periodically to the target document.
• Real-time processing. If you process real-time data that requires commits at specific times, use Append
to Doc.
You can set the On Commit attribute to one of the following values:
• Ignore commit. Generate and write to the XML document at end of file.
• Append to document. Write to the same XML document at the end of each commit. The XML document
closes at end of file. This option is not available for XML Generator transformations.
• Create new document. Create and write to a new document at each commit. You create multiple XML
documents.
You can flush data if all groups in the XML target are connected to the same single commit or transaction
point. The transformation at the commit point generates denormalized output. The denormalized output
contains repeating primary key values for all but the lowest level node in the XML schema. The Integration
Service extracts rows from this output for each group in the XML target.
You must have only one child group for the root group in the XML target.
For sessions using source-based commits, the single transaction point might be a source or nearest active
source to the XML target, such as the last active transformation before the target. For sessions with user-
defined commits, the transaction point is a transaction generating transformation.
Ignoring Commit
You can choose to generate the XML document after the session has read all the source records. This option
causes the Integration Service to store all of the XML data in cache during a session. Use this option when
you are not processing a lot of data.
Warning: When you create a new document on commit, you need to provide a unique file name for each
document. Otherwise, the Integration Service overwrites the document it created from the previous commit.
XML Caching Properties
You can configure the Integration Service to automatically determine the XML cache size, or you can
configure the cache size. When the memory requirements exceed the cache size, the Integration Service
pages data to index and data files in the cache directory. When the session completes, the Integration
Service releases cache memory and deletes the cache files.
You can specify the cache directory and cache size for the XML target. The default cache directory is
$PMCacheDir, which is a service process variable that represents the directory where the Integration Service
stores cache files by default.
Multiple XML Output
The Integration Service creates multiple XML files when the root group has more than one distinct primary
key value. If the Integration Service receives multiple rows with the same primary key value, the Integration
Service chooses the first or last row based on the way you configure duplicate row handling.
If you pass data to a column in the root group, but you do not pass data to the primary key, the Integration
Service does not generate a new XML document. The Integration Service writes a warning message to the
session log indicating that the primary key for the root group is not projected, and the Integration Service is
generating one document.
Example
The following example includes a mapping that contains a flat file source of country names, regions, and
revenue dollars per region. The target is an XML file. The root view contains the primary key, XPK_COL_0,
which is a string.
Each time the Integration Service passes a new country name to the root view the Integration Service
generates a new target file. Each target XML file contains country name, region, and revenue data for one
country.
The Integration Service passes the following rows to the XML target:
Country,Region,Revenue
USA,region1,1000
Canada,region1,100
USA,region2,200
USA,region3,300
If you specify “revenue_file.xml” as the output file name in the session properties, the session produces one
XML file that contains the USA rows and another XML file that contains the Canada rows.
Heterogeneous Targets
To create a session with heterogeneous targets, you can create a session based on a mapping with
heterogeneous targets. Or, you can create a session based on a mapping with homogeneous targets and
select different database connections.
A session has heterogeneous targets if it has one of the following characteristics:
• Multiple target types. You can create a session that writes to both relational and flat file targets.
• Multiple target connection types. You can create a session that writes to a target on an Oracle database
and to a target on a DB2 database. Or, you can create a session that writes to multiple targets of the same
type, but you specify different target connections for each target in the session.
All database connections you define in the Workflow Manager are unique to the Integration Service, even if
you define the same connection information. For example, you define two database connections, Sales1 and
Sales2. You define the same user name, password, connect string, code page, and attributes for both Sales1
and Sales2. Even though both Sales1 and Sales2 define the same connection information, the Integration
Service treats them as different database connections. When you create a session with two relational targets
and specify Sales1 for one target and Sales2 for the other target, you create a session with heterogeneous
targets.
You can create a session with heterogeneous targets in one of the following ways:
• Create a session based on a mapping with targets of different types or different database types. In the
session properties, keep the default target types and database types.
• Create a session based on a mapping with the same target types. However, in the session properties,
specify different target connections for the different target instances, or override the target type to a
different type.
Note: When the Integration Service runs a session with at least one relational target, it performs database
transactions per target connection group. For example, it orders the target load for targets in a target
connection group when you enable constraint-based loading.
Reject Files
During a session, the Integration Service creates a reject file for each target instance in the mapping. If the
writer or the target rejects data, the Integration Service writes the rejected row into the reject file. The reject
file and session log contain information that helps you determine the cause of the reject.
Each time you run a session, the Integration Service appends rejected data to the reject file. Depending on
the source of the problem, you can correct the mapping and target database to prevent rejects in subsequent
sessions.
Note: If you enable row error logging in the session properties, the Integration Service does not create a reject
file. It writes the reject rows to the row error tables or file.
When you run a session that contains multiple partitions, the Integration Service creates a separate reject file
for each partition. The Integration Service names reject files after the target instance name. The default
name for reject files is filename_partitionnumber.bad. The reject file name for the first partition does not
contain a partition number.
For example,
/home/directory/filename.bad
/home/directory/filename2.bad
/home/directory/filename3.bad
The Workflow Manager replaces slash characters in the target instance name with underscore characters.
To find a reject file name and path, view the target properties settings on the Mapping tab of session
properties.
To help you determine the cause of a reject, the reject file contains the following indicators:
• Row indicator. The first column in each row of the reject file is the row indicator. The row indicator
defines whether the row was marked for insert, update, delete, or reject.
If the session is a user-defined commit session, the row indicator might indicate whether the transaction
was rolled back due to a non-fatal error, or if the committed transaction was in a failed target connection
group.
• Column indicator. Column indicators appear after every column of data. The column indicator defines
whether the column contains valid, overflow, null, or truncated data.
The following sample reject file shows the row and column indicators:
0,D,1921,D,Nelson,D,William,D,415-541-5145,D
0,D,1922,D,Page,D,Ian,D,415-541-5145,D
0,D,1923,D,Osborne,D,Lyle,D,415-541-5145,D
0,D,1928,D,De Souza,D,Leo,D,415-541-5145,D
0,D,2001123456789,O,S. MacDonald,D,Ira,D,415-541-514566,T
Row Indicators
The first column in the reject file is the row indicator. The row indicator is a flag that defines the update
strategy for the data row.
Column Indicators
A column indicator appears after every column of data. A column indicator defines whether the data is valid,
overflow, null, or truncated.
The column indicator “D” also appears after each row indicator.
• D. Valid data. Good data. The writer passes it to the target database. The target accepts it unless a
database error occurs, such as finding a duplicate key.
• O. Overflowed numeric data. Numeric data exceeded the specified precision or scale for the column. Bad
data, if you configured the mapping target to reject overflow or truncated data.
• N. Null. The column contains a null value. Good data. The writer passes it to the target, which rejects it if
the target database does not accept null values.
• T. Truncated. String data exceeded a specified precision for the column, so the value was truncated. Bad
data, if you configured the mapping target to reject overflow or truncated data.
Null columns appear in the reject file with commas marking their column. The following example shows a
null column surrounded by good data:
0,D,5,D,,N,5,D
Either the writer or target database can reject a row. Consult the log to determine the cause for rejection.
Connection Objects
This chapter includes the following topics:
• PowerExchange for MSMQ Connections, 165
• PowerExchange for Netezza Connections, 166
• PowerExchange for Oracle E-Business Suite Connection Properties, 167
• PowerExchange for PeopleSoft Connections, 167
• PowerExchange for PostgreSQL Connection Properties, 168
• PowerExchange for Salesforce Analytics Connections, 170
• PowerExchange for Salesforce Connections, 170
• PowerExchange for SAP NetWeaver Connections, 171
• PowerExchange for SAP NetWeaver BI Connections, 176
• PowerExchange for Siebel Connections, 177
• PowerExchange for Tableau Connections, 179
• PowerExchange for Tableau V3 Connections, 180
• PowerExchange for Teradata Parallel Transporter Connections, 181
• PowerExchange for TIBCO Connections, 183
• PowerExchange for Web Services Connections, 185
• PowerExchange for webMethods Connections, 187
• PowerExchange for WebSphere MQ Connections, 189
• Connection Object Management, 190
Connection Types
When you create a connection object, choose the connection type in the Connection Browser. Some
connection types also have connection subtypes. For example, a relational connection type has subtypes
such as Oracle and Microsoft SQL Server. Define the values for the connection based on the connection type
and subtype.
When you configure a session, you can choose the connection type and select a connection to use. You can
also override the connection attributes for the session or create a connection. Set the connection type on the
Mapping tab for each object.
For example, the Loader connection type is a relational connection to the external loader for the target, such
as IBM DB2 Autoloader or Teradata FastLoad. When you configure a session, choose File as the writer type
for the relational target instance. Select a Loader connection to load output files to Teradata, Oracle, DB2, or
Sybase IQ through an external loader. Select the loader connection in the Value column.
Note: For information about connections to PowerExchange, see PowerExchange Interfaces for PowerCenter.
Session Parameters
You can enter session parameter $ParamName as the database user name and password, and define the
user name and password in a parameter file. For example, you can use a session parameter,
$ParamMyDBUser, as the database user name, and set $ParamMyDBUser to the user name in the parameter
file.
To use a session parameter for the database password, enable the Use Parameter in Password option and
encrypt the password by using the pmpasswd command line program. Encrypt the password by using the
CRYPT_DATA encryption type.
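For example, the following sketch encrypts a hypothetical password and then references it in the parameter
file; the names and values are illustrative:
pmpasswd MyDbPassword -e CRYPT_DATA
In the parameter file:
$ParamMyDBUser=prod_user
$ParamMyDBPassword=<encrypted string that pmpasswd returns>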
PowerCenter provides the following reserved values for connections that do not require a real user name and
password:
• PmNullUser
• PmNullPasswd
Use the PmNullUser user name if you use one of the following authentication methods:
• Oracle OS Authentication. Oracle OS Authentication lets you log in to an Oracle database if you have a
login name and password for the operating system. You do not need to know a database user name and
password. PowerCenter uses Oracle OS Authentication when the connection user name is PmNullUser
and the connection is for an Oracle database.
• IBM DB2 client authentication. IBM DB2 client authentication lets you log in to an IBM DB2 database
without specifying a database user name or password if the IBM DB2 server is configured for external
authentication or if the IBM DB2 server is on the same machine as the Integration Service process. PowerCenter
uses IBM DB2 client authentication when the connection user name is PmNullUser and the connection is
for an IBM DB2 database.
Use the PmNullUser user name with any of the following connection types:
• Relational database connections. Use for Oracle OS Authentication, IBM DB2 client authentication, or
databases such as ISG Navigator that do not allow user names.
• External loader connections. Use for Oracle OS Authentication or IBM DB2 client authentication.
• HTTP connections. Use if the HTTP server does not require authentication.
• PowerChannel relational database connections. Use for Oracle OS Authentication, IBM DB2 client
authentication, or databases such as ISG Navigator that do not allow user names.
• Web Services connections. Use if the web service does not require a user name.
Grant the database user permission to access and create temporary tablespaces. If the user does not have
sufficient permission, the Integration Service fails the session.
You can use session parameters for the database user name and password with the following connection
types:
• Relational database connections. Use to connect to all databases except Microsoft SQL Server and
Sybase ASE.
• External loader connection. Use to connect to all databases.
• PowerChannel relational database connections. Use to connect with all databases except Microsoft SQL
Server and Sybase ASE.
For Teradata, use one of the following native connect string syntaxes (example in parentheses):
• ODBC_data_source_name (TeradataODBC)
• ODBC_data_source_name@db_name (TeradataODBC@mydatabase)
• ODBC_data_source_name@db_user_name (TeradataODBC@jsmith)
When you configure a mapping, you can use the $Source or $Target variable to specify the database location
for Lookup and Stored Procedure transformations. You can also configure the $Source variable to specify the
source connection for relational sources and the $Target variable to specify the target connection for
relational targets in the session properties.
If you use $Source or $Target in a Lookup or Stored Procedure transformation, you can configure the
connection value on the Properties tab or Mapping tab of the session. When you configure $Source
Connection Value or $Target Connection Value, the Integration Service uses that connection when it runs the
session. If you do not configure $Source Connection Value or $Target Connection Value, the Integration
Service determines the database connection to use when it runs the session.
The Integration Service determines the value of $Source in the following ways when you do not configure
$Source Connection Value:
• One source. The database connection you specify for the source.
• Joiner transformation before a Lookup or Stored Procedure transformation. The database connection for
the detail source.
• Lookup or Stored Procedure transformation before a Joiner transformation. The database connection for
the source connected to the transformation.
The Integration Service determines the value of $Target in the following way when you do not configure
$Target Connection Value in the session properties:
• One target. The database connection you specify for the target.
To enter the database connection for the $Source and $Target connection variables:
1. In the session properties, select the Properties tab or the Mapping tab, Connections node.
2. Click the Open button in the $Source Connection Value or $Target Connection Value field.
The Connection Browser dialog box appears.
3. Select a connection variable or session parameter.
You can enter the $Source or $Target connection variable, or the $DBConnectionName or
$AppConnectionName session parameter. If you enter a session parameter, define the parameter in the
parameter file. If you do not define a value for the session parameter, the Integration Service determines
which database connection to use when it runs the session.
4. Click OK.
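For example, if you select the $DBConnectionSource session parameter as the connection value, a parameter
file sketch might define it as follows, with illustrative folder, workflow, session, and connection names:
[MyFolder.WF:wf_load.ST:s_lookup_orders]
$DBConnectionSource=Oracle_DWH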
You can override connection attributes when you configure the source or target session properties in the
following ways:
• You use an FTP, queue, external loader, or application connection for a non-relational source or target.
• You use an FTP, queue, or external loader connection for a relational target.
• You use an application connection for a relational source.
You configure connections in the Connections settings on the Mapping tab. You can override the connection
attributes in one of the following ways:
• Session. Select the connection object and override attributes in the session.
• Parameter file. Use a session parameter to define the connection and override connection attributes in
the parameter file.
To override connection attributes in the session, complete the following steps:
1. On the Mapping tab, select the source or target instance in the Connections node.
2. Select the connection type.
3. Click the Open button in the value field to select a connection object.
4. Choose the connection object.
5. Click Override.
6. Update the attributes you want to change.
7. Click OK.
If you configure the Integration Service for code page validation, the Integration Service enforces code page
compatibility at run time. The Integration Service ensures that the target database code page is a superset of
the source database code page.
When you change the code page in a connection object, you must choose one that is compatible with the
previous code page. If the code pages are incompatible, the Workflow Manager invalidates all sessions using
that connection.
If you configure the PowerCenter Client and Integration Service for relaxed code page validation, you can
select any supported code page for source and target connections. If you are familiar with the data and are
confident that it will convert safely from one code page to another, you can run sessions with incompatible
source and target data code pages. It is your responsibility to ensure your data will convert properly.
The trust certificates file (ca-bundle.crt) contains certificate files from major, trusted certificate authorities. If
the certificate bundle does not contain a certificate from a certificate authority that the session uses, you can
convert the certificate of the HTTP server or web service provider to PEM format and append it to the ca-
bundle.crt file.
You can generate the client certificate and private key files in a single file or as separate files.
For example, to convert the DER file named server.der to PEM format, use the following command:
openssl x509 -in server.der -inform DER -out server.pem -outform PEM
If you want to convert the PKCS12 file named server.pfx to PEM format, use the following command:
openssl pkcs12 -in server.pfx -out server.pem
To convert a private key named key.der from DER to PEM format, use the following command:
openssl rsa -in key.der -inform DER -outform PEM -out keyout.pem
For more information, refer to the OpenSSL documentation. After you convert certificate files to the PEM
format, you can append them to the trust certificates file. Also, you can use PEM format private key files with
the HTTP transformation or PowerExchange for Web Services.
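For example, on UNIX you might append a converted certificate to the trust certificates file with a command
such as the following, assuming both files are in the current directory:
cat server.pem >> ca-bundle.crt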
The Workflow Manager assigns default permissions for connection objects to users, groups, and all others if
you enable enhanced security.
• Read. View the connection object in the Workflow Manager and Repository Manager. When you have read
permission, you can perform tasks in which you view, copy, or edit repository objects associated with the
connection object.
• Write. Edit the connection object.
• Execute. Run sessions that use the connection object.
To assign or edit permissions on a connection object, select an object from the Connection Object Browser,
and click Permissions.
You can perform the following tasks to manage permissions on a connection object:
Environment SQL
The Integration Service runs environment SQL in auto-commit mode and closes the transaction after it issues
the SQL. Use SQL commands that do not depend on a transaction being open during the entire read or write
process. For example, if a source database is set to read only mode and you create an environment SQL
statement in the source connection to set the transaction to read only, the Integration Service issues a
commit after it runs the SQL and cannot read the source in read only mode.
Use environment SQL for source, target, lookup, and stored procedure connections. If the SQL syntax is not
valid, the Integration Service does not connect to the database, and the session fails.
Note: If a connection object was created with the “environment SQL” property in an earlier release, the
connection uses that SQL as connection environment SQL.
You might use connection environment SQL in the following cases:
• You want to set up the connection environment so that double quotation marks are object identifiers.
• You configure the target load type to Normal and the Microsoft SQL Server target name includes spaces.
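For example, for a Microsoft SQL Server connection you might enter a statement such as the following as
connection environment SQL so that the database treats double quotation marks as identifier delimiters.
This is a sketch; verify the command against your database documentation:
SET QUOTED_IDENTIFIER ON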
Use transaction environment SQL for SQL commands that depend on a transaction being open during the
entire read or write process. For example, you might use the following statement as transaction environment
SQL to modify how the session handles characters:
ALTER SESSION SET NLS_LENGTH_SEMANTICS=CHAR
This command must run at the beginning of each transaction, so it is not appropriate for connection
environment SQL; setting the parameter once for each connection is not sufficient.
Consider the following guidelines when you configure environment SQL:
• You can enter any SQL command that is valid in the database associated with the connection object. The
Integration Service does not allow nested comments, even though the database might.
• When you enter SQL in the SQL Editor, type the SQL statements directly. Use a semicolon (;) to separate
multiple statements. The Integration Service ignores semicolons within /*...*/ comments. If you need to
use a semicolon outside of comments, you can escape it with a backslash (\).
• You can use parameters and variables in the environment SQL. Use any parameter or variable type that
you can define in the parameter file. You can enter a parameter or variable within the SQL statement, or
you can use a parameter or variable as the environment SQL. For example, you can use a session
parameter, $ParamMyEnvSQL, as the connection or transaction environment SQL, and set
$ParamMyEnvSQL to the SQL statement in a parameter file, as shown in the sketch after this list.
• You can configure the table owner name using sqlid in the connection environment SQL for a DB2
connection. However, the table owner name in the target instance overrides the SET sqlid statement in
environment SQL. To use the table owner name specified in the SET sqlid statement, do not enter a name
in the target name prefix.
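For example, a parameter file might define the session parameter mentioned above in its [Global] section.
This is a minimal sketch; the statement shown is the Oracle command used earlier as transaction
environment SQL:
[Global]
$ParamMyEnvSQL=ALTER SESSION SET NLS_LENGTH_SEMANTICS=CHAR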
Connection Resilience
Connection resilience is the ability of the Integration Service to tolerate temporary network failures when
connecting to a relational database, an application, or the PowerExchange Listener. The Integration Service
can also tolerate the temporary unavailability of the relational database, application, or PowerExchange
Listener. The Integration Service is resilient to failures when it initializes the connection to the source or
target and when it reads data from a source or writes data to a target.
PowerExchange does not support runtime connection resilience for database connections other than those
used for PowerExchange Express CDC for Oracle. Configure the workflow for automatic recovery of
terminated tasks if recovery from a dropped PowerExchange connection is required. PowerExchange also
does not support runtime resilience of connections between the Integration Service and PowerExchange
Listener after the initial connection attempt. However, you can configure resilience for the initial connection
attempt by setting the Connection Retry Period property to a value greater than 0 when you define
PowerExchange Client for PowerCenter (PWXPC) relational and application connections. The Integration
Service then retries the connection to the PowerExchange Listener after the initial connection attempt fails. If
the Integration Service cannot connect to the PowerExchange Listener within the retry period, the session
fails.
The Integration Service will not attempt to reconnect to a source or target in the following situations:
Note: For a database connection to be resilient, the source or target must be a highly available database and
you must have the high availability option or the real-time option.
The following table describes the properties that you configure for a relational database connection:
Name: Name you want to use for this connection. The connection name cannot contain spaces or other
special characters, except for the underscore.
Use Kerberos Authentication: Indicates that the database to connect to runs on a network that uses
Kerberos authentication. If this option is selected, you cannot set the user name and password in the
connection object. The connection uses the credentials of the user account that runs the session that
connects to the database. The user account must have a user principal on the Kerberos network where the
database runs. Informatica supports Kerberos authentication for native relational connections to the
following databases: Oracle, DB2, SQL Server, and Sybase.
User Name: Database user name with the appropriate read and write database permissions to access the
database. For Oracle connections that process BLOB, CLOB, or NCLOB data, the user must have permission
to access and create temporary tablespaces. To define the user name in the parameter file, enter session
parameter $ParamName as the user name, and define the value in the session or workflow parameter file.
The Integration Service interprets user names that start with $Param as session parameters. If you use
Oracle OS Authentication, IBM DB2 client authentication, or databases such as ISG Navigator that do not
allow user names, enter PmNullUser. For Teradata connections, this overrides the default database user
name in the ODBC entry. Not available if the Use Kerberos Authentication option is selected.
Use Parameter in Password: Indicates that the password for the database user name is a session parameter,
$ParamName. Define the password in the workflow or session parameter file, and encrypt it by using the
pmpasswd CRYPT_DATA option. Default is disabled.
Password: Password for the database user name. For Oracle OS Authentication, IBM DB2 client
authentication, or databases such as ISG Navigator that do not allow passwords, enter PmNullPassword. For
Teradata connections, this overrides the database password in the ODBC entry. Passwords must be in 7-bit
ASCII. Not available if the Use Kerberos Authentication option is selected.
Connect String: Connect string used to communicate with the database. For syntax, see “Native Connect
Strings” on page 128. Required for all databases except Microsoft SQL Server and Sybase ASE. Note: You
can parameterize the connect string attribute for Oracle connections.
Provider Type: The connection provider that you want to use to connect to the Microsoft SQL Server
database. You can select ODBC or Oledb (Deprecated). Default is ODBC.
Use DSN: Enables the PowerCenter Integration Service to use the Data Source Name for the connection. If
you select the Use DSN option, the PowerCenter Integration Service retrieves the database and server names
from the DSN. If you do not select the Use DSN option, you must provide the database and server names.
Code Page: Code page the Integration Service uses to read from a source database or write to a target
database or file.
Connection Environment SQL: Runs an SQL command with each database connection. Default is disabled.
Transaction Environment SQL: Runs an SQL command before the initiation of each transaction. Default is
disabled.
Enable Parallel Mode: Enables parallel processing when loading data into a table in bulk mode. Default is
enabled.
Database Name: Name of the database. For Teradata connections, this overrides the default database name
in the ODBC entry. If you do not enter a database name for a Teradata or Sybase ASE connection, the
Integration Service uses the default database name in the ODBC entry. If you do not enter a database name,
connection-related messages do not show a database name when the default database is used.
Packet Size: Use to optimize the native drivers for Sybase ASE and Microsoft SQL Server.
Domain Name: The name of the domain. Used for Microsoft SQL Server on Windows.
Use Trusted Connection: If selected, the Integration Service uses Windows authentication to access the
Microsoft SQL Server database. The user name that starts the Integration Service must be a valid Windows
user with access to the Microsoft SQL Server database.
Connection Retry Period: Number of seconds the Integration Service attempts to reconnect to the database
if the connection fails. If the Integration Service cannot connect to the database in the retry period, the
session fails. Default value is 0.
Impersonate User: The name of the impersonate user to connect to Oracle. The user name specified in the
Oracle connection must have impersonate user privileges. Applicable only for Oracle connections.
Related Topics:
• “Target Connections” on page 90
• “FTP Connections” on page 140
The Workflow Manager appends an underscore and the first three letters of the relational database type to
the name of the new database connection. For example, you have a lookup table in the same database as
your source definition. You make a copy of the Microsoft SQL Server database connection called Dev_Source.
The Workflow Manager names the new database connection Dev_Source_Mic. You can edit the copied
connection to use a different name.
When you replace database connections, the Workflow Manager replaces the relational database
connections in the following locations for all sessions using the connection:
• Source connection
• Target connection
• Connection Information property in Lookup and Stored Procedure transformations
• $Source Connection Value session property
• $Target Connection Value session property
When the repository contains both relational and application connections with the same name, the Workflow
Manager replaces the relational connections only if you specified the connection type as relational in all
locations.
The Integration Service uses the updated connection information the next time the session runs.
You must close all folders before replacing a relational database connection.
FTP Connections
Use an FTP connection object for each source or target that you want to access through FTP or SFTP.
To connect to an SFTP server, create an FTP connection and enable SFTP. SFTP uses the SSH2
authentication protocol. Configure the authentication properties to use the SFTP connection. You can
configure publickey or password authentication. The Integration Service connects to the SFTP server with the
authentication properties you configure. If the authentication does not succeed, the session fails.
The following table describes the properties that you configure for an FTP connection:
Name: Connection name used by the Workflow Manager. Connection name cannot contain spaces or other
special characters, except for the underscore.
User Name: User name necessary to access the host machine. Must be in 7-bit ASCII only. Required to
connect to an SFTP server with password-based authentication. To define the user name in the parameter
file, enter session parameter $ParamName as the user name, and define the value in the session or workflow
parameter file. The Integration Service interprets user names that start with $Param as session parameters.
Use Parameter in Password: Indicates that the password for the user name is a session parameter,
$ParamName. Define the password in the workflow or session parameter file, and encrypt it by using the
pmpasswd CRYPT_DATA option. Default is disabled.
Password: Password for the user name. Must be in 7-bit ASCII only. Required to connect to an SFTP server
with password-based authentication. Note: When you specify pmnullpasswd, the PowerCenter Integration
Service authenticates the user directly based on the public key without performing password authentication.
Default Remote Directory: Default directory on the FTP host used by the Integration Service. Do not enclose
the directory in quotation marks. You can enter a parameter or variable for the directory. Use any parameter
or variable type that you can define in the parameter file. Depending on the FTP server you use, you may
have limited options to enter FTP directories. In the session, when you enter a file name without a directory,
the Integration Service appends the file name to this directory. This path must contain the appropriate
trailing delimiter. For example, if you enter c:\staging\ and specify data.out in the session, the Integration
Service reads the path and file name as c:\staging\data.out. For SAP, you can leave this value blank. SAP
sessions use the Source File Directory session property for the FTP remote directory. If you enter a value,
the Source File Directory session property overrides it.
Retry Period: Number of seconds the Integration Service attempts to reconnect to the FTP host if the
connection fails. If the Integration Service cannot reconnect to the FTP host in the retry period, the session
fails. Default value is 0, which indicates an infinite retry period.
Public Key File Name: Public key file path and file name. Required if the SFTP server uses publickey
authentication. Enabled for SFTP.
Private Key File Name: Private key file path and file name. Required if the SFTP server uses publickey
authentication. Enabled for SFTP.
Private Key File Password: Private key file password used to decrypt the private key file. Required if the
SFTP server uses publickey authentication and the private key is encrypted. Enabled for SFTP.
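If the SFTP server uses publickey authentication and you need to generate a key pair, one common approach
is the OpenSSH ssh-keygen utility. This is a sketch, not a prescribed procedure; the file name is a
placeholder, and the key formats your SFTP server accepts may vary:
ssh-keygen -t rsa -m PEM -f sftp_key
This writes the private key to sftp_key and the public key to sftp_key.pub. If you protect the private key with
a passphrase, enter that passphrase as the Private Key File Password.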
The following table describes the properties that you configure for an external loader connection:
Name: Connection name used by the Workflow Manager. Connection name cannot contain spaces or other
special characters, except for the underscore.
User Name: Database user name with the appropriate read and write database permissions to access the
database. If you use Oracle OS Authentication or IBM DB2 client authentication, enter PmNullUser.
PowerCenter uses Oracle OS Authentication when the connection user name is PmNullUser and the
connection is to an Oracle database. PowerCenter uses IBM DB2 client authentication when the connection
user name is PmNullUser and the connection is to an IBM DB2 database. To define the user name in the
parameter file, enter session parameter $ParamName as the user name, and define the value in the session
or workflow parameter file. The Integration Service interprets user names that start with $Param as session
parameters. You can connect to a database that runs on a network that uses Kerberos authentication. To
use Kerberos authentication for the database connection, set the user name to the reserved word
PmKerberosUser. If you use Kerberos authentication, the connection uses the credentials of the user
account that runs the session that connects to the database. The user account must have a user principal
on the Kerberos network where the database runs.
Use Parameter in Password: Indicates that the password for the database user name is a session parameter,
$ParamName. Define the password in the workflow or session parameter file, and encrypt it by using the
pmpasswd CRYPT_DATA option. Default is disabled.
Password: Password for the database user name. For Oracle OS Authentication or IBM DB2 client
authentication, enter PmNullPassword. For Teradata connections, you can enter PmNullPasswd to prevent
the password from appearing in the control file. Instead, the Integration Service writes an empty string for
the password in the control file. Passwords must be in 7-bit ASCII. If you set the user name to
PmKerberosUser to use Kerberos authentication for the database connection, set the password to the
reserved word PmKerberosPassword. The connection uses the credentials of the user account that runs the
session that connects to the database.
Connect String: Connect string used to communicate with the database. For syntax, see “Native Connect
Strings” on page 128. Note: You can parameterize the connect string attribute for Oracle external loader
connections.
HTTP Connections
Use an application connection object for each HTTP server that you want to connect to.
Configure connection information for an HTTP transformation in an HTTP application connection. The
Integration Service can use HTTP application connections to connect to HTTP servers. HTTP application
connections enable you to control connection attributes, including the base URL and other parameters.
If you want to connect to an HTTP proxy server, configure the HTTP proxy server settings in the Integration
Service.
Note: Before you configure an HTTP connection to use SSL authentication, you may need to configure
certificate files. For information about SSL authentication, see “SSL Authentication Certificate Files” on page
131.
The following table describes the properties that you configure for an HTTP connection:
Name: Connection name used by the Workflow Manager. Connection name cannot contain spaces or other
special characters, except for the underscore.
User Name: Authenticated user name for the HTTP server. If the HTTP server does not require
authentication, enter PmNullUser. To define the user name in the parameter file, enter session parameter
$ParamName as the user name, and define the value in the session or workflow parameter file. The
Integration Service interprets user names that start with $Param as session parameters.
Use Parameter in Password: Indicates that the password for the authenticated user is a session parameter,
$ParamName. Define the password in the workflow or session parameter file, and encrypt it by using the
pmpasswd CRYPT_DATA option. Default is disabled.
Password: Password for the authenticated user. If the HTTP server does not require authentication, enter
PmNullPasswd.
Base URL: URL of the HTTP server. This value overrides the base URL defined in the HTTP transformation.
You can use a session parameter to configure the base URL. For example, enter the session parameter
$ParamBaseURL in the Base URL field, and then define $ParamBaseURL in the parameter file.
Timeout: Number of seconds the Integration Service waits for a connection to the HTTP server before it
closes the connection.
Domain: Authentication domain for the HTTP server. Required for NTLM authentication.
Trust Certificates File: File containing the bundle of trusted certificates that the client uses when
authenticating the SSL certificate of a server. You specify the trust certificates file to have the Integration
Service authenticate the HTTP server. By default, the name of the trust certificates file is ca-bundle.crt. For
information about adding certificates to the trust certificates file, see “SSL Authentication Certificate Files”
on page 131.
Certificate File: Client certificate that an HTTP server uses when authenticating a client. You specify the
client certificate file if the HTTP server needs to authenticate the Integration Service.
Certificate File Password: Password for the client certificate. You specify the certificate file password if the
HTTP server needs to authenticate the Integration Service.
Certificate File Type: File type of the client certificate. You specify the certificate file type if the HTTP server
needs to authenticate the Integration Service. The file type can be PEM or DER. For information about
converting certificate file types to PEM or DER, see “SSL Authentication Certificate Files” on page 131.
Default is PEM.
Private Key File: Private key file for the client certificate. You specify the private key file if the HTTP server
needs to authenticate the Integration Service.
Key Password: Password for the private key of the client certificate. You specify the key password if the web
service provider needs to authenticate the Integration Service.
Key File Type: File type of the private key of the client certificate. You specify the key file type if the HTTP
server needs to authenticate the Integration Service. The HTTP transformation uses the PEM file type for
SSL authentication.
Authentication Type: Select one of the following authentication types to use when the HTTP server does not
return an authentication type to the Integration Service:
- Auto. The Integration Service attempts to determine the authentication type of the HTTP server.
- Basic. Based on a non-encrypted user name and password.
- Digest. Based on an encrypted user name and password.
- NTLM. Based on encrypted user name, password, and domain.
Default is Auto.
The following table describes the properties that you configure for an Amazon S3 connection:
Access Key: The access key ID used to access the Amazon account resources. Required if you do not use
AWS Identity and Access Management (IAM) authentication. Note: Ensure that you have valid AWS
credentials before you create a connection.
Secret Key: The secret access key used to access the Amazon account resources. This value is associated
with the access key and uniquely identifies the account. You must specify this value if you specify the
access key ID. Required if you do not use AWS Identity and Access Management (IAM) authentication.
Folder Path: The complete path to the Amazon S3 objects. The path must include the bucket name and any
folder name. Ensure that you do not use a forward slash at the end of the folder path. For example:
<bucket name>/<my folder name>
Master Symmetric Key: Optional. Provide a 256-bit AES encryption key in the Base64 format when you
enable client-side encryption. You can generate a key using a third-party tool. If you specify a value, ensure
that you specify the Encryption Type as Client Side Encryption in the target session properties.
Customer Master Key ID: Optional. Specify the customer master key ID or alias name generated by AWS Key
Management Service (AWS KMS). You must generate the customer master key for the same region where
the Amazon S3 bucket resides. You can specify any of the following values:
- Customer Generated Customer Master Key. Enables client-side or server-side encryption. Only the
administrator user of the account can use the default customer master key ID to enable client-side
encryption.
Code Page: The code page compatible with the Amazon S3 source. Select one of the following code pages:
- MS Windows Latin 1. Select for ISO 8859-1 Western European data.
- UTF-8. Select for Unicode and non-Unicode data.
- Shift-JIS. Select for double-byte character data.
- ISO 8859-15 Latin 9 (Western European).
- ISO 8859-2 Eastern European.
- ISO 8859-3 Southeast European.
- ISO 8859-5 Cyrillic.
- ISO 8859-9 Latin 5 (Turkish).
- IBM EBCDIC International Latin-1.
Region Name: The name of the region where the Amazon S3 bucket is available. Select one of the following
regions:
- Asia Pacific (Mumbai)
- Asia Pacific (Seoul)
- Asia Pacific (Singapore)
- Asia Pacific (Sydney)
- Asia Pacific (Tokyo)
- AWS GovCloud
- Canada (Central)
- China (Beijing)
- EU (Ireland)
- EU (Frankfurt)
- South America (Sao Paulo)
- US East (Ohio)
- US East (N. Virginia)
- US West (N. California)
- US West (Oregon)
Default is US East (N. Virginia).
The following table describes the properties that you configure for a PowerChannel relational database
connection:
Name: Connection name used by the Workflow Manager. Connection name cannot contain spaces or other
special characters, except for the underscore.
User Name: Database user name with the appropriate read and write database permissions to access the
database. If you use Oracle OS Authentication, IBM DB2 client authentication, or databases such as ISG
Navigator that do not allow user names, enter PmNullUser. To define the user name in the parameter file,
enter session parameter $ParamName as the user name, and define the value in the session or workflow
parameter file. The Integration Service interprets user names that start with $Param as session parameters.
Use Parameter in Password: Indicates that the password for the database user name is a session parameter,
$ParamName. Define the password in the workflow or session parameter file, and encrypt it by using the
pmpasswd CRYPT_DATA option. Default is disabled.
Password: Password for the database user name. For Oracle OS Authentication, IBM DB2 client
authentication, or databases such as ISG Navigator that do not allow passwords, enter PmNullPassword. For
Teradata connections, this overrides the database password in the ODBC entry. Passwords must be in 7-bit
ASCII.
Connect String: Connect string used to communicate with the database. For syntax, see “Native Connect
Strings” on page 128. Required for all databases except Microsoft SQL Server.
Code Page: Code page the Integration Service uses to read from a source database or write to a target
database or file.
Database Name: Name of the database. If you do not enter a database name, connection-related messages
do not show a database name when the default database is used.
Environment SQL: Runs an SQL command with each database connection. Default is disabled.
Packet Size: Use to optimize the native drivers for Sybase ASE and Microsoft SQL Server.
Domain Name: The name of the domain. Used for Microsoft SQL Server on Windows.
Use Trusted Connection: If selected, the Integration Service uses Windows authentication to access the
Microsoft SQL Server database. The user name that starts the Integration Service must be a valid Windows
user with access to the Microsoft SQL Server database.
Remote PowerChannel Host Name: Host name or IP address for the remote PowerChannel Server that can
access the database data.
Remote PowerChannel Port Number: Port number for the remote PowerChannel Server. Make sure the PORT
attribute of the ACTIVE_LISTENERS property in the PowerChannel.properties file uses a value that other
applications on the PowerChannel Server do not use.
Use Local PowerChannel: Select to use compression or encryption while extracting or loading data. When
you select this option, you need to specify the local PowerChannel Server address and port number. The
Integration Service uses the local PowerChannel Server as a client to connect to the remote PowerChannel
Server and access the remote database.
Local PowerChannel Host Name: Host name or IP address for the local PowerChannel Server. Enter this
option when you select the Use Local PowerChannel option.
Local PowerChannel Port Number: Port number for the local PowerChannel Server. Specify this option when
you select the Use Local PowerChannel option. Make sure the PORT attribute of the ACTIVE_LISTENERS
property in the PowerChannel.properties file uses a value that other applications on the PowerChannel
Server do not use.
Encryption Level: Encryption level for the data transfer. Encryption levels range from 0 to 3, where 0
indicates no encryption and 3 is the highest encryption level. Default is 0. Use this option only if you have
selected the Use Local PowerChannel option.
Compression Level: Compression level for the data transfer. Compression levels range from 0 to 9, where 0
indicates no compression and 9 is the highest compression level. Default is 2. Use this option only if you
have selected the Use Local PowerChannel option.
Certificate Account: Certificate account to authenticate the local PowerChannel Server to the remote
PowerChannel Server. Use this option only if you have selected the Use Local PowerChannel option. If you
use the sample PowerChannel repository that the installation program set up, and you want to use the
default certificate account in the repository, you can enter “default” as the certificate account.
The following table describes the properties that you configure for a Db2 Warehouse connection:
User Name: Database user name with the appropriate read and write database permissions to access Db2
Warehouse.
Use Parameter in Password: Indicates that the password for the database user name is a session parameter,
$ParamName. Define the password in the workflow or session parameter file, and encrypt it by using the
pmpasswd CRYPT_DATA option. Default is disabled.
Database Name: Database name of Db2 Warehouse that you want to connect to.
Schema Name: The schema name in Db2 Warehouse from where you want to fetch the metadata.
Port Number: Network port number used to connect to the Db2 Warehouse server.
Driver Name: Specify the name of the IBM Data Server driver that you configured in the odbcinst.ini file. For
example, IBM DB2 ODBC DRIVER - IBMDBCL1.
The following table describes the Greenplum connection properties that you must configure:
User Name: User name with permissions to access the Greenplum database. You can connect to a database
that runs on a network that uses Kerberos authentication. To configure Kerberos authentication for the
database connection, set the user name to the reserved word PmKerberosUser. If you use Kerberos
authentication, the connection uses the credentials of the user account that runs the session to connect to
the database. The user account must have a user principal on the Kerberos network where the database
runs.
Port: Greenplum server port number. If you enter 0, the gpload utility reads from the environment variable
$PGPORT. Default is 5432.
Enable SSL: Select this option to establish secure communication between the gpload utility and the
Greenplum server over SSL. Applicable for Greenplum connections used for loading data to Greenplum. Not
applicable for the Greenplum reader. Not applicable for the Greenplum writer on the Windows platform.
Certificate Path: Path where the SSL certificates for the Greenplum server are stored. For information about
the files that need to be present in the certificates path, see the gpload documentation.
Schema: Name of the schema that contains the metadata for Greenplum targets. Default is public.
The following table describes the properties that you configure for a Google Analytics connection:
Service Account ID: Specifies the client_email value present in the JSON file that you download after you
create a service account.
Service Account Key: Specifies the private_key value present in the JSON file that you download after you
create a service account.
APIVersion: API that PowerExchange for Google Analytics uses to read from Google Analytics reports.
Select Core Reporting API v3. Note: PowerExchange for Google Analytics does not support Analytics
Reporting API v4.
The following table describes the properties that you configure for a Google BigQuery connection:
Service Account ID: Specifies the client_email value present in the JSON file that you download after you
create a service account.
Service Account Key: Specifies the private_key value present in the JSON file that you download after you
create a service account.
Connection mode: The mode that you want to use to read data from or write data to Google BigQuery.
Select one of the following connection modes:
- Simple. Flattens each field within the Record data type field as a separate field in the mapping.
- Hybrid. Displays all the top-level fields in the Google BigQuery table including Record data type fields.
PowerExchange for Google BigQuery displays the top-level Record data type field as a single field of the
String data type in the mapping.
- Complex. Displays all the columns in the Google BigQuery table as a single field of the String data type in
the mapping.
Default is Simple.
Schema Definition File Path: Specifies a directory on the client machine where the PowerCenter Integration
Service must create a JSON file with the sample schema of the Google BigQuery table. The JSON file name
is the same as the Google BigQuery table name. Alternatively, you can specify a storage path in Google
Cloud Storage where the PowerCenter Integration Service must create a JSON file with the sample schema
of the Google BigQuery table. You can download the JSON file from the specified storage path in Google
Cloud Storage to a local machine.
Project ID: Specifies the project_id value present in the JSON file that you download after you create a
service account. If you have created multiple projects with the same service account, enter the ID of the
project that contains the dataset that you want to connect to.
Storage Path: This property applies when you read or write large volumes of data. Path in Google Cloud
Storage where the PowerCenter Integration Service creates a local stage file to store the data temporarily.
You can either enter the bucket name or the bucket name and folder name. For example, enter
gs://<bucket_name> or gs://<bucket_name>/<folder_name>
Dataset Name for Custom Query: When you define a custom query, you must specify a Google BigQuery
dataset.
Region ID: The region name where the Google BigQuery dataset resides. For example, if you want to connect
to a Google BigQuery dataset that resides in the Las Vegas region, specify us-west4 as the Region ID. Note:
In the Storage Path connection property, ensure that you specify a bucket name, or the bucket name and
folder name, that resides in the same region as the dataset in Google BigQuery. For more information about
the regions supported by Google BigQuery, see the following Google BigQuery documentation:
https://cloud.google.com/bigquery/docs/locations
Optional Properties: Specifies whether you can configure certain source and target functionalities through
custom properties. You can select one of the following options:
- None. Select if you do not want to configure any custom properties.
- Required. Select if you want to specify custom properties to configure the source and target
functionalities.
Default is None.
Provide Optional Properties: Comma-separated key-value pairs of custom properties to enable additional
source and target functionalities. Appears only when you select Required in the Optional Properties field.
The following table describes the Google Cloud Spanner connection properties:
Name: The name of the connection. The name is not case sensitive and must be unique within the domain.
You can change this property after you create the connection. The name cannot exceed 128 characters,
contain spaces, or contain the following special characters:
~`!$%^&*()-+={[}]|\:;"'<,>.?/
ID: String that the PowerCenter Integration Service uses to identify the connection. The ID is not case
sensitive. The ID must be 255 characters or fewer and must be unique in the domain. You cannot change
this property after you create the connection. Default value is the connection name.
Description: Optional. The description of the connection. The description cannot exceed 4,000 characters.
Project ID: Specifies the project_id value present in the JSON file that you download after you create a
service account. If you have created multiple projects with the same service account, enter the ID of the
project that contains the bucket that you want to connect to.
Service Account ID: Specifies the client_email value present in the JSON file that you download after you
create a service account.
Service Account Key: Specifies the private_key value present in the JSON file that you download after you
create a service account.
Instance ID: Name of the instance that you created in Google Cloud Spanner.
The following table describes the Google Cloud Storage connection properties:
Service Account ID: Specifies the client_email value present in the JSON file that you download after you
create a service account.
Service Account Key: Specifies the private_key value present in the JSON file that you download after you
create a service account.
Project ID: Specifies the project_id value present in the JSON file that you download after you create a
service account. If you have created multiple projects with the same service account, enter the ID of the
project that contains the dataset that you want to connect to.
You connect to a Hadoop cluster through an HDFS host that runs the name node service for a Hadoop
cluster.
The following table describes the properties that you configure for a Hadoop HDFS application connection:
Name: The connection name used by the Workflow Manager. Connection name cannot contain spaces or
other special characters, except for the underscore character.
User Name: The name of the user in the Hadoop group that is used to access the HDFS host.
Password: Password to access the HDFS host. Reserved for future use.
HDFS Connection URI: The URI to access HDFS. For the NameNode URI, use the value of the
fs.default.name property, which you can find in the core-site.xml configuration file.
Syntax for Hadoop distributions:
hdfs://<namenode>:<port>
where <namenode> is the host name or IP address of the NameNode, and <port> is the port on which the
NameNode listens for remote procedure calls (RPC).
Syntax for the MapR distribution:
maprfs:///
Syntax for the HDInsight distribution:
adl://<nameservices> or wasb://<nameservices>
Hive User Name: The Hive user name. Reserved for future use.
Hive Password: The password for the Hive user. Reserved for future use.
The following table describes the properties that you configure for an SAP HANA connection:
User Name: Database user name with the appropriate read and write database permissions to access the
database. To define the user name in the parameter file, enter session parameter $ParamName as the user
name, and define the value in the session or workflow parameter file. The Integration Service interprets user
names that start with $Param as session parameters.
Use Parameter in Password: Indicates that the password for the database user name is a session parameter,
$ParamName. Define the password in the workflow or session parameter file, and encrypt it by using the
pmpasswd CRYPT_DATA option. Default is disabled.
Password: Password for the database user name. Must be in 7-bit ASCII.
Connect String: Connect string used to communicate with the SAP HANA database.
Code Page: Code page the Integration Service uses to read from a source database or write to a target
database.
Connection Environment SQL: Runs an SQL command with each database connection. Default is disabled.
Transaction Environment SQL: Runs an SQL command before the initiation of each transaction. Default is
disabled.
Connection Retry Period: Number of seconds the Integration Service attempts to reconnect to the database
if the connection fails. If the Integration Service cannot connect to the database in the retry period, the
session fails. Default value is 0.
ODBC Subtype: Type of database to which ODBC connects. Select SAP HANA.
The following table describes the properties that you configure for a JD Edwards EnterpriseOne application
connection:
Connection Retry Period: Number of seconds that the PowerCenter Integration Service waits after making a
request to connect to the database. If the PowerCenter Integration Service does not receive any response,
the session fails. Default value is 0.
Control Table Name Prefix: Owner of the F0005 control table that contains UDC values. If the database user
specified in the database connection is not the owner of the F0005 control table and the session is
configured for UDC validation, specify the owner of the F0005 control table as the control table name prefix.
You can use a parameter for this connection attribute.
When the Integration Service connects to the JNDI server, it retrieves information from JNDI about the JMS
provider during the session. When you configure a JNDI application connection, you must specify connection
properties in the Connection Object Definition dialog box.
The following table describes the properties that you configure for a JNDI application connection:
JNDI Context Factory: Name of the context factory that you specified when you defined the context factory
for your JMS provider.
JNDI Provider URL: Provider URL that you specified when you defined the provider URL for your JMS
provider.
When you configure a JMS application connection, you specify connection properties the Integration Service
uses to connect to JMS providers during a session. Specify the JMS application connection properties in the
Connection Object Definition dialog box.
The following table describes the properties that you configure for a JMS application connection:
JMS Destination Type: Select QUEUE or TOPIC for the JMS destination type. Select QUEUE if you want to
read source messages from a JMS provider queue or write target messages to a JMS provider queue. Select
TOPIC if you want to read source messages based on the message topic or write target messages with a
particular message topic.
JMS Connection Factory Name: Name of the connection factory. The name of the connection factory must
be the same as the connection factory name you configured in JNDI. The Integration Service uses the
connection factory to create a connection with the JMS provider.
JMS Destination: Name of the destination. The destination name must match the name you configured in
JNDI. Optionally, you can use the $ParamName session parameter for the destination name.
JMS Recovery Destination: Recovery queue or recovery topic name, based on what you configure for the
JMS Destination Type. Configure this option when you enable recovery for a real-time session that reads
from a JMS or WebSphere MQ source and writes to a JMS target. Note: The session fails if the recovery
destination does not match a recovery queue or topic name in the JMS provider.
Connection Retry Period: Number of seconds the Integration Service attempts to reconnect to JMS if the
connection fails. If the Integration Service cannot connect to JMS in the retry period, the session fails.
Default value is 0.
Retry Connection Error Code File Name: Name of the properties file that contains error codes that identify
JMS connection errors. Default is pmjmsconnerr.properties.
The following table describes the properties that you configure for a Kafka connection:
Kafka Broker List: The IP address and port combinations of the Kafka messaging system broker list. The IP
address and port combination has the following format: <IP Address>:<port>. You can enter multiple
comma-separated IP address and port combinations.
Retry Timeout in Seconds: Number of seconds the Integration Service attempts to reconnect to the Kafka
broker to write data. If the source or target is not available for the time you specify, the mapping execution
stops to avoid any data loss. Default is 180 seconds.
Kafka Broker Version: Select Apache 0.10.1.1 and above as the Kafka messaging broker version.
SSL Mode: Specifies whether the PowerCenter Integration Service establishes a secure connection to the
Kafka broker. You can select one of the following options:
- disabled. The PowerCenter Integration Service establishes an unencrypted connection to the Kafka broker.
- require. The PowerCenter Integration Service establishes an encrypted connection to the Kafka broker
without verifying the identity of the server.
- one-way. The PowerCenter Integration Service establishes an encrypted connection to the Kafka broker
using the truststore file and truststore password.
- two-way. The PowerCenter Integration Service establishes an encrypted connection to the Kafka broker
using the truststore file and truststore password.
SSL TrustStore File Path: Applicable only if you select one-way or two-way as the SSL mode. The complete
path and file name of the truststore file. The truststore file contains the SSL certificate that the Kafka
cluster validates against the Kafka broker certificate.
SSL TrustStore Password: Applicable only if you select one-way or two-way as the SSL mode. The password
for the truststore file.
Additional Security Properties: Optional. Comma-separated list of connection properties to connect to the
Kafka broker in a secured way. For example:
security.protocol=SASL_PLAINTEXT,sasl.kerberos.service.name=<kerberos name>,
sasl.mechanism=GSSAPI,
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true doNotPrompt=true storeKey=true client=true
keyTab="<Keytab Location>" principal="<principal>";
The following table describes the properties that you configure for an LDAP connection:
Password: Password to connect to the LDAP directory server. If the user name does not require a password,
enter infa_blank.
Anonymous Access: Select this option to establish an anonymous connection with the LDAP directory
server. If you select this option, enter the user name and password as anonymous.
Security: Type of security used to establish a secure connection with SSL or TLS. Default is None. If you do
not select the security type or select the SSL option to establish a secure connection, the PowerCenter
Integration Service ignores the TLS options.
TLS Options: TLS options used to establish a secure connection or transfer data, or both, with the LDAP
directory server. Default is None.
The following table describes the Microsoft Azure Blob Storage connection properties:
File Delimiter: Character used to separate fields in the file. Default is a comma (,). Use a printable
single-byte character delimiter that is not present in the data. You cannot use multibyte characters as
delimiters.
The following table describes PowerExchange for Microsoft Azure SQL Data Warehouse V3 connection
properties:
Azure DW JDBC URL: Microsoft Azure SQL Data Warehouse JDBC connection string. For example, you can
enter the following connection string:
jdbc:sqlserver://<Server>.database.windows.net:1433;database=<Database>
Azure DW JDBC Username: User name to connect to the Microsoft Azure SQL Data Warehouse account.
Azure DW JDBC Password: Password to connect to the Microsoft Azure SQL Data Warehouse account.
Azure DW Schema Name: Name of the schema in Microsoft Azure SQL Data Warehouse.
Azure Blob Account Name: Name of the Microsoft Azure Storage account to stage the files.
Azure Blob Account Key: Microsoft Azure Storage access key to stage the files.
Blob End-point: Type of Microsoft Azure end-points. You can select any of the following end-points:
- core.windows.net: Default
- core.usgovcloudapi.net: To select the US government Microsoft Azure end-points
- core.chinacloudapi.cn: Not applicable
VNet Rule: Enable to connect to a Microsoft Azure SQL Data Warehouse endpoint residing in a virtual
network (VNet).
The following table describes the Microsoft Dynamics 365 for Sales connection properties:
Runtime Environment: The name of the runtime environment where you want to run the tasks.
Authentication Type: The authentication method that the connector must use to log in to the web
application. Select one of the following authentication types:
- OAuth 2.0 Password Grant. Not supported.
- OAuth 2.0 Client Certificate Grant. Requires you to select the web API URL, application ID, tenant ID,
keystore file, keystore password, key alias, and key password.
Web API url: The URL of the Microsoft Dynamics 365 for Sales endpoint.
Username: The user name to connect to the Microsoft Dynamics 365 for Sales account.
Password: The password to connect to the Microsoft Dynamics 365 for Sales account.
Application ID: The Azure application ID for Microsoft Dynamics 365 for Sales.
Keystore File: The location and the file name of the keystore. Not applicable when you use the Hosted
Agent.
Keystore Password: The password for the keystore file required for secure communication.
Key Password: The password for the individual keys in the keystore file required for secure communication.
Not applicable when you use the Hosted Agent.
Retry Error Codes: The comma-separated HTTP error codes for which retries are made.
Retry Count: The number of retries to get the response from an endpoint based on the retry interval. The
default value is 5.
Retry Interval: The time in seconds to wait before Microsoft Dynamics 365 for Sales Connector retries for a
response. The default value is 60 seconds.
The following table describes the properties that you configure for a MongoDB connection:
Connection Name: The name of the connection. The name is not case sensitive and must be unique within
the domain. You can change this property after you create the connection. The name cannot exceed 128
characters, contain spaces, or contain the following special characters:
~`!$%^&*()-+={[}]|\:;"'<,>.?/
Password: Password corresponding to the user name to access the MongoDB server.
Additional Connection Properties: Enter one or more JDBC connection parameters in the following format:
<param1>=<value>&<param2>=<value>&<param3>=<value>
You must provide the JDBC parameters as ampersand-separated key-value pairs. You can configure the
following JDBC connection parameters in a MongoDB connection:
- AuthSource
- BatchSize
- connectTimeoutMS
- DefaultStringColumnLength
- DmlBatchSize
- EnableDoubleBuffer
- EnableTransaction
- LogLevel
- LogPath
- SamplingLimit
- SamplingStepSize
- SamplingStrategy
- useJSONColumn
For example:
DefaultStringColumnLength=512&DmlBatchSize=1000&EnableDoubleBuffer=false&EnableTransaction=true&
SamplingLimit=200&SamplingStepSize=2&SamplingStrategy=Backwards
Note: If you specify the host name, port number, user name, and password of the MongoDB server in the
Additional Connection Properties, the values specified in the Additional Connection Properties take
precedence.
Note: If you select the Enable Reading/Writing as JSON option, a documentAsJSON column appears in the
collection when you read data from MongoDB, through which you can read data as JSON. Default is
disabled. To enable reading or writing as JSON, set useJSONColumn=true in the additional connection
properties.
The following table describes the properties that you configure for an MSMQ application connection:
Machine Name: Name of the MSMQ machine. If MSMQ is running on the same machine as the Integration
Service, you can enter a period (.).
Queue Type: Select public if the MSMQ queue is a public queue. Select private if the MSMQ queue is a
private queue.
Is Transactional: Defines whether the MSMQ queue is transactional. When a session writes to a remote
private queue, the Integration Service cannot determine whether the queue is transactional. Configure the Is
Transactional attribute to match the queue configuration. Choose one of the following options:
- Auto. The Integration Service determines if the queue is transactional or not transactional. Choose Auto
for a local queue or a remote queue that is not private.
- Yes. The queue is transactional.
- No. The queue is not transactional.
Default is Auto. If you configure this property incorrectly, the session will not fail, but the target queue will
not persist the data.
The relational database connection defines how the Integration Service accesses the underlying database for
Netezza Performance Server. When you configure a Netezza connection, you specify the connection
attributes that the Integration Service uses to connect to Netezza.
The following table describes the properties that you configure for a Netezza connection:
User Name: Database user name with the appropriate read and write database permissions to access
Netezza Performance Server.
Use Parameter in Password: Indicates that the password for the database user name is a session parameter,
$ParamName. Define the password in the workflow or session parameter file, and encrypt it by using the
pmpasswd CRYPT_DATA option. Default is disabled.
Connection Environment SQL: Runs an SQL command with each database connection. Default is disabled.
Transaction Environment SQL: Runs an SQL command before the initiation of each transaction. Default is
disabled.
Connection Retry Period: Number of seconds the Integration Service attempts to reconnect to the database
if the connection fails. If the Integration Service cannot connect to the database in the retry period, the
session fails. Default value is 0.
The following table describes the properties that you configure for an Oracle E-Business Suite application
connection:
Password: Password for the user name. You cannot use a parameter to specify the password.
Apps Schema Name: Name of the application schema that contains metadata for Oracle E-Business Suite.
Default is apps.
The following table describes the properties that you configure for a PeopleSoft application connection:
User Name: Database user name with SELECT permission on physical database tables in the PeopleSoft
source system. To define the user name in the parameter file, enter session parameter $ParamName as the
user name, and define the value in the session or workflow parameter file. The Integration Service interprets
user names that start with $Param as session parameters.
Use Parameter in Password: Indicates that the password for the database user name is a session parameter,
$ParamName. Define the password in the workflow or session parameter file, and encrypt it by using the
pmpasswd CRYPT_DATA option. Default is disabled.
Connect String: Connect string for the underlying database of the PeopleSoft system. This option appears
for DB2, Oracle, and Informix.
Code Page: Code page the Integration Service uses to extract data from the source database. When using
relaxed code page validation, select compatible code pages for the source and target data to prevent data
inconsistencies.
Language Code: PeopleSoft language code. Enter a language code for language-sensitive data. When you
enter a language code, the Integration Service extracts language-sensitive data from related language
tables. If no data exists for the language code, PowerCenter extracts data from the base table. When you do
not enter a language code, the Integration Service extracts all data from the base table.
Database Name: Name of the underlying database of the PeopleSoft system. This option appears for
Sybase ASE and Microsoft SQL Server.
Server Name: Name of the server for the underlying database of the PeopleSoft system. This option appears
for Sybase ASE and Microsoft SQL Server.
Packet Size: Packet size used to transmit data. This option appears for Sybase ASE and Microsoft SQL
Server.
Use Trusted Connection: If selected, the Integration Service uses Windows authentication to access the
Microsoft SQL Server database. The user name that starts the Integration Service must be a valid Windows
user with access to the Microsoft SQL Server database. This option appears for Microsoft SQL Server.
Rollback Segment: Name of the rollback segment for the underlying database of the PeopleSoft system.
This option appears for Oracle.
Environment SQL: SQL commands used to set the environment for the underlying database of the
PeopleSoft system.
The following table describes the properties that you configure for a PostgreSQL connection:
- Host Name. Host name of the PostgreSQL server to which you want to connect.
- Port. Port number for the PostgreSQL server to which you want to connect. Default is 5432.
- Encryption Method. Determines whether the data exchanged between the PowerCenter Integration Service and the PostgreSQL database server is encrypted. Select one of the following encryption methods:
  - noEncryption. Establishes a connection without using SSL. Data is not encrypted.
  - SSL. Establishes a connection using SSL. Data is encrypted using SSL. If the PostgreSQL database server does not support SSL, the connection fails.
  - requestSSL. Attempts to establish a connection using SSL. If the PostgreSQL database server does not support SSL, the PowerCenter Integration Service establishes an unencrypted connection.
  Default is noEncryption.
- Validate Server Certificate. Applicable if you set the encryption method to SSL or requestSSL. Select the Validate Server Certificate option so that the PowerCenter Integration Service validates the server certificate that is sent by the PostgreSQL database server. If you specify the Host Name In Certificate parameter, the PowerCenter Integration Service also validates the host name in the certificate.
- TrustStore. Applicable if you select SSL or requestSSL as the encryption method and the Validate Server Certificate option. The path and name of the truststore file, which contains the list of the Certificate Authorities (CAs) that the PostgreSQL client trusts.
- TrustStore Password. Applicable if you select SSL or requestSSL as the encryption method and the Validate Server Certificate option. The password to access the truststore file that contains the SSL certificate.
- Host Name In Certificate. Optional when you select SSL or requestSSL as the encryption method and the Validate Server Certificate option. Specifying a host name provides additional security, and the PowerCenter Integration Service validates the host name included in the connection against the host name in the SSL certificate.
- KeyStore. Applicable if you select SSL as the encryption method and when client authentication is enabled on the PostgreSQL database server. The path and file name of the keystore. The keystore file contains the certificates that the PostgreSQL client sends to the PostgreSQL server in response to the server's certificate request.
- KeyStore Password. Applicable if you select SSL as the encryption method and when client authentication is enabled on the PostgreSQL database server. The password for the keystore file required for secure communication.
- Key Password. Applicable if you select SSL as the encryption method and when client authentication is enabled on the PostgreSQL database server. Required when individual keys in the keystore file have a different password than the keystore file.
- Crypto Protocol Versions. Required if you set the encryption method to SSL or requestSSL. Specifies a cryptographic protocol or a list of cryptographic protocols to use with an encrypted connection. You can select from the following protocols:
  - SSLv3
  - TLSv1
  - TLSv1_1
  - TLSv1_2
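For example, an SSL-enabled PostgreSQL connection might use the following values. This is a minimal sketch; the host name and truststore path are assumptions:
Host Name: pg01.example.com
Port: 5432
Encryption Method: SSL
Validate Server Certificate: enabled
TrustStore: /opt/infa/ssl/pg_truststore
Host Name In Certificate: pg01.example.com
Crypto Protocol Versions: TLSv1_2
With these values, the PowerCenter Integration Service encrypts the connection, verifies the server certificate against the CAs in the truststore, and validates that the certificate host name matches pg01.example.com.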
The following table describes the connection attributes for a Salesforce Analytics application connection:
- Password. Password for the Salesforce Analytics user name. The password is case sensitive.
- Security Token. The token used to log in to Salesforce Analytics from an untrusted network.
- Service URL. URL of the Salesforce Analytics service that you want to access. In a test or development environment, you might want to access the Salesforce Analytics Sandbox testing environment. For more information about the Salesforce Analytics Sandbox, see the Salesforce documentation.
- Temp Folder Name. The directory where the JSON files are stored.
- Default Date Format. The date format to read date columns in the JSON file. Use the hyphen (-) delimiter for the Windows platform, and the forward slash (/) delimiter for the Linux platform.
You can also create an OAuth type connection to access Salesforce using the Salesforce API. OAuth is a standard protocol that allows for secure API authorization. A benefit of OAuth is that users do not need to disclose their Salesforce credentials, and the Salesforce administrator can revoke the consumer's access at any time.
The following table describes the additional attributes for an OAuth connection:
- Type. Select the Use OAuth check box to use the OAuth connection.
- Consumer Key. The Consumer Key obtained from Salesforce, required to generate the Refresh Token.
- Consumer Secret. The Consumer Secret obtained from Salesforce, required to generate the Refresh Token.
• SAP R/3 application connection. Configure SAP R/3 application connections to access the SAP system
when you run an RFC stream or file mode session.
• SAPTableReader application connection. Configure SAPTableReader application connections to read
data from SAP tables and ABAP CDS views through ABAP by using the HTTP/HTTPS protocol.
• FTP connection. Configure FTP connections to access the staging file through FTP. When you run a file
mode session, you can configure the session to access the staging file on the SAP system through FTP.
The following table describes the SAP connection types and the integration methods they support:
- SAP R/3 application connection. ABAP integration with RFC stream and RFC file mode sessions.
- SAPTableReader application connection. ABAP integration with HTTP stream mode sessions.
- SAP_ALE_IDoc_Writer application connection. IDoc ALE and business content integration.
- BCI Metadata Connection. IDoc ALE and business content integration for segments in SAP longer than 1,000 characters.
File Mode
Use an RFC file mode connection when you extract data through file mode. The connection information
for RFC is stored in the sapnwrfc.ini file. You must also have authorizations on the SAP system to read
SAP tables and to run file mode sessions.
Stream Mode (RFC/HTTP)
To extract data through stream mode by using the RFC protocol, use an SAP R/3 application connection. The connection information for RFC is stored in the sapnwrfc.ini file. You must also have authorizations on the SAP system to read SAP tables and to run stream mode sessions. RFC stream mode sessions use foreground processing.
You cannot use an SAP R/3 application connection to extract data through stream mode by using the
HTTP protocol. Use an SAPTableReader application connection to extract data through stream mode by
using the HTTP protocol.
To create one connection for both modes, the SAP administrator must have created a single profile with
authorizations for both file and stream mode sessions.
The following table describes the properties that you configure for an SAP ECC connection:
The property values apply to both RFC file mode and RFC stream mode.
- User Name. SAP user name with authorization on S_DATASET, S_TABU_DIS, S_PROGRAM, and B_BTCH_JOB objects. To define the user name in the parameter file, enter session parameter $ParamName as the user name, and define the value in the session or workflow parameter file. The Integration Service interprets user names that start with $Param as session parameters.
- Use Parameter in Password. Indicates the password for the SAP user name is a session parameter, $ParamName. Define the password in the workflow or session parameter file, and encrypt it by using the pmpasswd CRYPT_DATA option. Default is disabled.
- Connect String. DEST entry defined in the sapnwrfc.ini file for a connection to a specific SAP application server or for an SAP load balancing connection.
- Code Page. Code page compatible with the SAP server. The code page must correspond to the Language Code.
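For reference, a minimal DEST entry in the sapnwrfc.ini file for a connection to a specific application server might look like the following sketch. The destination name, host, system number, and client are illustrative assumptions:
DEST=PCSAP
ASHOST=sapapp01.example.com
SYSNR=00
CLIENT=800
LANG=EN
You would then enter PCSAP as the Connect String in the connection.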
The following table describes the properties that you configure for an SAPTableReader connection:
- Port Range. HTTP port range that the PowerCenter Integration Service must use to read data from the SAP server in streaming mode. Enter the minimum and maximum port numbers with a hyphen as the separator. The minimum and maximum port numbers can range between 10000 and 65535. You can also specify a narrower range that complies with your organization's port policies. Default is 10000-65535.
- Use HTTPS. Enables you to read data from SAP tables and ABAP CDS views through HTTPS streaming. By default, the Use HTTPS check box is not selected.
- Key Store File Path. Path to the keystore file that contains the private or public key pairs and the associated certificates. Required if you enable HTTPS.
The following table describes the properties that you configure for an SAP_ALE_IDoc_Reader application
connection:
- Destination Entry. DEST entry defined in the sapnwrfc.ini file for a connection to an RFC server program. The Program ID for this destination entry must be the same as the Program ID for the logical system you defined in SAP to receive IDocs or consume business content data. For business content integration, set to INFACONTNT.
The following table describes the properties that you configure for an SAP_ALE_IDoc_Writer application connection:
- User Name. SAP user name with authorization on S_DATASET, S_TABU_DIS, S_PROGRAM, and B_BTCH_JOB objects. To define the user name in the parameter file, enter session parameter $ParamName as the user name, and define the value in the session or workflow parameter file. The Integration Service interprets user names that start with $Param as session parameters.
- Use Parameter in Password. Indicates the password for the SAP user name is a session parameter, $ParamName. Define the password in the workflow or session parameter file, and encrypt it by using the pmpasswd CRYPT_DATA option. Default is disabled.
- Connect String. DEST entry defined in the sapnwrfc.ini file for a connection to a specific SAP application server.
- Code Page. Code page compatible with the SAP server. Must also correspond to the Language Code.
The following table describes the properties that you configure for an SAP RFC/BAPI application connection:
- User Name. SAP user name with authorization on S_DATASET, S_TABU_DIS, S_PROGRAM, and B_BTCH_JOB objects. To define the user name in the parameter file, enter session parameter $ParamName as the user name, and define the value in the session or workflow parameter file. The Integration Service interprets user names that start with $Param as session parameters.
- Use Parameter in Password. Indicates the password for the SAP user name is a session parameter, $ParamName. Define the password in the workflow or session parameter file, and encrypt it by using the pmpasswd CRYPT_DATA option. Default is disabled.
- Connect String. DEST entry defined in the sapnwrfc.ini file for a connection to a specific SAP application server.
- Code Page. Code page compatible with the SAP server. Must also correspond to the Language Code.
The following table describes the properties that you configure for an SAP BW OHS application connection:
- Use Parameter in Password. Indicates the SAP NetWeaver BI password is a session parameter, $ParamName. Define the password in the workflow or session parameter file, and encrypt it by using the pmpasswd CRYPT_DATA option. Default is disabled.
- Connect String. DEST entry defined in the sapnwrfc.ini file for a connection to a specific SAP application server. The Integration Service uses the sapnwrfc.ini file to connect to the SAP NetWeaver BI system.
- Code Page. Code page compatible with the SAP NetWeaver BI server.
- Client Code. SAP NetWeaver BI client. Must match the client you use to log on to the SAP NetWeaver BI server.
The following table describes the properties that you configure for an SAP BW application connection:
- Use Parameter in Password. Indicates the SAP NetWeaver BI password is a session parameter, $ParamName. Define the password in the workflow or session parameter file, and encrypt it by using the pmpasswd CRYPT_DATA option. Default is disabled.
- Connect String. DEST entry defined in the sapnwrfc.ini file for a connection to a specific SAP application server. The Integration Service uses the sapnwrfc.ini file to connect to the SAP NetWeaver BI system. If you do not enter a connection string, the Integration Service obtains the connection parameters from the SAP BW Service.
- Code Page. Code page compatible with the SAP NetWeaver BI server.
- Client Code. SAP NetWeaver BI client. Must match the client you use to log in to the SAP NetWeaver BI server.
• Siebel Application Connections for Sources, Targets, and EIM Invoker Transformations
• Siebel Application Connection for EIM Read and Load Transformations
The following table describes the properties that you configure for a Siebel application connection:
- Protocol. Protocol used to connect to Siebel. Specify the following protocol parameters:
  - Transport. Enter HTTP or TCP/IP. Default is TCP/IP.
  - Encryption. Enter NONE or RSA. Default is NONE.
  - Compression. Enter NONE or ZLIB. Default is ZLIB.
  Specify the parameters in the following format:
  siebel[[.transport][.[encryption][.[compression]]]]
- Siebel Server Host. Host name or IP address of the Siebel server. If you configure native load balancing, specify the virtual host name.
- Encoding. Encoding defined in the code page the PowerCenter Integration Service uses to communicate with the Siebel Server. Default is UTF-8.
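For example, following the format above, a protocol value that uses TCP/IP transport, no encryption, and ZLIB compression might be entered as the following illustrative string (confirm the exact casing against your Siebel environment):
siebel.TCPIP.NONE.ZLIB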
The following table describes the properties that you configure for Siebel EIM Read or Load transformations:
- Connection Retry Period. Number of seconds the PowerCenter Integration Service attempts to reconnect to the database if the connection fails. If the PowerCenter Integration Service fails to connect to the database in the retry period, the session fails. If you set the connection retry period to 0, the PowerCenter Integration Service does not attempt to reconnect to the database if the connection fails. Default is 0.
- Table Name Prefix. If required, configure the table name prefix to establish a connection with the database. Default is blank. Note: Enter the name of the Siebel database schema as the table name prefix when Oracle is the target database.
The following table describes the properties that you configure for a Tableau connection:
- Tableau Product. The name of the Tableau product to which you want to connect. You can choose one of the following Tableau products to publish the TDE or TWBX file:
  - Tableau Desktop. Creates a TDE file on the Data Integration Service machine. You can then manually import the TDE file to Tableau Desktop. Note: Tableau Desktop is not applicable for TWBX files.
  - Tableau Server. Publishes the generated TDE or TWBX file to Tableau Server.
  - Tableau Online. Publishes the generated TDE or TWBX file to Tableau Online.
- Connection URL. URL of Tableau Server or Tableau Online to which you want to publish the TDE or TWBX file. The URL has the following format: http://<Host name of Tableau Server or Tableau Online>:<port>
- User Name. User name of the Tableau Server or Tableau Online account.
- Content URL. The name of the site on Tableau Server or Tableau Online where you want to publish the TDE or TWBX file. Contact the Tableau administrator to provide the site name.
- Template File Path. The path to a sample TDE file from where the Integration Service imports the Tableau metadata. Enter one of the following options for the template file path:
  - Absolute path to the TDE file.
  - Directory path for the TDE files.
  - Empty directory path.
  The path you specify for the template file becomes the default path for the target TDE file. If you do not specify a file path, the Integration Service uses the following default file path for the target TDE file: <Data Integration Installation Directory>/main/java/lib
The following table describes the general properties for the connection:
- ID. String that the PowerCenter Integration Service uses to identify the connection. The ID is not case sensitive. It must be 255 characters or less and must be unique in the domain. You cannot change this property after you create the connection. Default value is the connection name.
- Description. Description of the connection. The description cannot exceed 765 characters.
- Location. The Informatica domain where you want to create the connection.
The following table describes the connection properties that you configure to publish a .hyper or TWBX file:
- Tableau Product. The name of the Tableau product to which you want to connect. You can choose one of the following Tableau products to publish the .hyper or TWBX file:
  - Tableau Desktop. Creates a .hyper file on the PowerCenter Integration Service machine. You can then manually import the .hyper file to Tableau Desktop.
  - Tableau Server. Publishes the generated .hyper or TWBX file to Tableau Server.
  - Tableau Online. Publishes the generated .hyper or TWBX file to Tableau Online.
- Connection URL. The URL of Tableau Server or Tableau Online to which you want to publish the .hyper or TWBX file. Enter the URL in the following format: http://<Host name of Tableau Server or Tableau Online>:<port>
- User Name. The user name of the Tableau Server or Tableau Online account.
- Password. The password for the Tableau Server or Tableau Online account.
- Site ID. The ID of the site on Tableau Server or Tableau Online where you want to publish the .hyper or TWBX file. Note: Contact the Tableau administrator to provide the site ID.
- Schema File Path. The path to a sample .hyper file from where the PowerCenter Integration Service imports the Tableau metadata. Enter one of the following options for the schema file path:
  - Absolute path to the .hyper file.
  - Directory path for the .hyper files.
  - Empty directory path.
  The path you specify for the schema file becomes the default path for the target .hyper file. If you do not specify a file path, the PowerCenter Integration Service uses the following default file path for the target .hyper file: <PowerCenter Integration Service installation directory>/apps/PowerCenter_Integration_Server/<latest version>/bin/rtdm
The following table describes the Teradata Parallel Transporter API connection properties that you must
configure:
- Name. Connection name used by the Workflow Manager. The connection name cannot contain spaces or other special characters, except for the underscore.
- User Name. Database user name with the appropriate read and write database permissions to access the database. To define the user name in the parameter file, enter session parameter $ParamName as the user name, and define the value in the session or workflow parameter file. The PowerCenter Integration Service interprets user names that start with $Param as session parameters. You can connect to a database that runs on a network that uses Kerberos authentication. To configure Kerberos authentication for the database connection, set the user name to the reserved word PmKerberosUser. If you use Kerberos authentication, the connection uses the credentials of the user account that runs the session to connect to the database. The user account must have a user principal on the Kerberos network where the database runs.
- Use Parameter in Password. Indicates the password for the database user name is a session parameter, $ParamName. If you enable this option, define the password in the workflow or session parameter file, and encrypt it using the pmpasswd CRYPT_DATA option. Default is disabled.
The following table describes the Teradata connection properties that you must configure:
- Tenacity. Amount of time, in hours, that Teradata PT API continues trying to log on when the maximum number of operations runs on the Teradata database. Must be a positive integer. Default is 4.
- Max Sessions. Maximum number of sessions that Teradata PT API establishes with the Teradata database. Must be a positive, non-zero integer. Default is 4.
- Min Sessions. Minimum number of Teradata PT API sessions required for the Teradata PT API job to continue. Must be a positive integer between 1 and the Max Sessions value. Default is 1.
- Sleep. Amount of time, in minutes, that Teradata PT API pauses before it retries to log on when the maximum number of operations runs on the Teradata database. Must be a positive, non-zero integer. Default is 6.
- Data Encryption. Enables full security encryption of SQL requests, responses, and data. Default is disabled.
- Block Size. Maximum block size, in bytes, that Teradata PT API uses when it returns data to the PowerCenter Integration Service. Minimum is 256. Maximum is 64,000. Default is 64,000.
- Authentication Type. Method to authenticate the user. Select one of the following authentication types:
  - Native. Authenticates your user name and password against the Teradata database specified in the connection.
  - LDAP. Authenticates user credentials against the external LDAP directory service.
  - KRB5. Authenticates the credentials of the user account that runs the session against the Kerberos network where the database runs.
  Default is Native.
The following table describes the properties you configure for a TIB/Rendezvous application connection:
- Code Page. Code page the Integration Service uses to extract data from TIBCO. When using relaxed code page validation, select compatible code pages for the source and target data to prevent data inconsistencies.
- Subject. Default subject for source and target messages. During a session, the Integration Service reads messages with this subject from TIBCO sources. It also writes messages with this subject to TIBCO targets. You can overwrite the default subject for TIBCO targets when you link the SendSubject port in a TIBCO target definition in a mapping.
- Service. Service attribute value. Enter a value if you want to include a service name, service number, or port number.
- Network. Network attribute value. Enter a value if your machine contains more than one network card.
- Daemon. TIBCO daemon you want to connect to during a session. If you leave this option blank, the Integration Service connects to the local daemon during a session. If you want to specify a remote daemon, which resides on a different host than the Integration Service, enter the following values: <remote hostname>:<port number>. For example, you can enter host2:7501 to specify a remote daemon.
- Certified. Select if you want the Integration Service to read or write certified messages.
- CmName. Unique CM name for the CM transport when you choose certified messaging.
- Relay Agent. Enter a relay agent when you choose certified messaging and the node running the Integration Service is not constantly connected to a network. The Relay Agent name must be fewer than 127 characters.
- Ledger File. Enter a unique ledger file name when you want the Integration Service to read or write certified messages. The ledger file records the status of each certified message. Configure a file-based ledger when you want the TIBCO daemon to send unconfirmed certified messages to TIBCO targets. You also configure a file-based ledger with Request Old when you want the Integration Service to receive unconfirmed certified messages from TIBCO sources.
- Synchronized Ledger. Select if you want PowerCenter to wait until it writes the status of each certified message to the ledger file before continuing message delivery or receipt.
- Request Old. Select if you want the Integration Service to receive certified messages that it did not confirm with the source during a previous session run. When you select Request Old, you should also specify a file-based ledger for the Ledger File attribute.
- User Certificate. Register the user certificate with a private key when you want to connect to a secure TIB/Rendezvous daemon during the session. The text of the user certificate must be in PEM encoding or PKCS #12 binary format.
Note: The adapter instances you specify in TIB/Adapter SDK connections should only contain one session.
The following table describes the properties that you configure for a TIB/Adapter SDK application connection:
- Code Page. Code page the Integration Service uses to extract data from TIBCO. When using relaxed code page validation, select compatible code pages for the source and target data to prevent data inconsistencies.
- Subject. Default subject for source and target messages. During a workflow, the Integration Service reads messages with this subject from TIBCO sources. It also writes messages with this subject to TIBCO targets. You can overwrite the default subject for TIBCO targets when you link the SendSubject port in a TIBCO target definition in a mapping.
- Repository URL. URL for the TIB/Repository instance you want to connect to. You can enter the server process variable $PMSourceFileDir for the Repository URL.
- Session Name. Name of the TIBCO session associated with the adapter instance.
- Validate Messages. Select Validate Messages when you want the Integration Service to read and write messages in AE format.
To connect to a web service, the Integration Service requires an endpoint URL. If you do not configure a Web
Services Consumer application connection or if you configure one without providing an endpoint URL, the
Integration Service uses the endpoint URL contained in the WSDL file on which the source, target, or Web
Services Consumer transformation is based.
Use the following guidelines to determine when to configure a Web Services Consumer application
connection:
• Configure a Web Services Consumer application connection with an endpoint URL if the web service you
connect to requires authentication or if you want to use an endpoint URL that differs from the one
contained in the WSDL file.
• Configure a Web Services Consumer application connection without an endpoint URL if the web service
you connect to requires authentication but you want to use the endpoint URL contained in the WSDL file.
• You do not need to configure a Web Services Consumer application connection if the web service you
connect to does not require authentication and you want to use the endpoint URL contained in the WSDL
file.
The following table describes the properties that you configure for a Web Services Consumer application
connection:
- User Name. User name that the web service requires. If the web service does not require a user name, enter PmNullUser. To define the user name in the parameter file, enter session parameter $ParamName as the user name, and define the value in the session or workflow parameter file. The Integration Service interprets user names that start with $Param as session parameters.
- Use Parameter in Password. Indicates the web service password is a session parameter, $ParamName. Define the password in the workflow or session parameter file, and encrypt it by using the pmpasswd CRYPT_DATA option. Default is disabled.
- Password. Password that the web service requires. If the web service does not require a password, enter PmNullPasswd.
- Code Page. Connection code page. The Repository Service uses the character set encoded in the repository code page when writing data to the repository.
- End Point URL. Endpoint URL for the web service that you want to access. The WSDL file specifies this URL in the location element. You can use session parameter $ParamName, a mapping parameter, or a mapping variable as the endpoint URL. For example, you can use a session parameter, $ParamMyURL, as the endpoint URL, and set $ParamMyURL to the URL in the parameter file.
- Timeout. Number of seconds the Integration Service waits for a connection to the web service provider before it closes the connection and fails the session. Also, the number of seconds the Integration Service waits for a SOAP response after sending a SOAP request before it fails the session. Default is 60 seconds.
- Trust Certificates File. File containing the bundle of trusted certificates that the Integration Service uses when authenticating the SSL certificate of the web services provider. Default is ca-bundle.crt.
- Certificate File. Client certificate that a web service provider uses when authenticating a client. You specify the client certificate file if the web service provider needs to authenticate the Integration Service.
- Certificate File Password. Password for the client certificate. You specify the certificate file password if the web service provider needs to authenticate the Integration Service.
- Certificate File Type. File type of the client certificate. You specify the certificate file type if the web service provider needs to authenticate the Integration Service. The file type can be either PEM or DER.
- Private Key File. Private key file for the client certificate. You specify the private key file if the web service provider needs to authenticate the Integration Service.
- Key Password. Password for the private key of the client certificate. You specify the key password if the web service provider needs to authenticate the Integration Service.
- Key File Type. File type of the private key of the client certificate. You specify the key file type if the web service provider needs to authenticate the Integration Service. PowerExchange for Web Services requires the PEM file type for SSL authentication.
- Authentication Type. Select one of the following authentication types to use when the web service provider does not return an authentication type to the Integration Service:
  - Auto. The Integration Service attempts to determine the authentication type of the web service provider.
  - Basic. Based on a non-encrypted user name and password.
  - Digest. Based on a non-encrypted user name and encrypted password.
  - NTLM. Based on encrypted user name, password, and domain.
  Default is Auto.
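For example, to switch endpoints between environments without editing the connection, you can set the End Point URL to $ParamMyURL and define the value in the parameter file. The folder, workflow, session, and URL in this sketch are illustrative assumptions:
[MyFolder.WF:wf_call_service.ST:s_m_call_service]
$ParamMyURL=http://ws.example.com:7333/OrderService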
Note: You cannot write to webMethods target documents that have special characters.
The following table describes the properties that you configure for a webMethods Broker application connection:
- Broker Host. Enter the host name of the Broker you want the PowerCenter Integration Service to connect to. If the port number for the Broker is not the default port number, also enter the port number. Default port number is 6849. Enter the host name and port number in the following format: <host name:port>
- Broker Name. Enter the name of the Broker. If you do not enter a Broker name, the PowerCenter Integration Service uses the default Broker.
- Client ID. Enter a client ID for the PowerCenter Integration Service to use when it connects to the Broker during the session. If you do not enter a client ID, the Broker generates a random client ID. If you select Preserve Client State, enter a client ID.
- Client Group. Enter the name of the group to which the client belongs.
- Application Name. Enter the name of the application that will run the Broker Client.
- Automatic Reconnection. Select this option to enable the PowerCenter Integration Service to reconnect to the Broker if the connection to the Broker is lost.
- Preserve Client State. Select this option to maintain the client state across sessions. The client state is the information the Broker keeps about the client, such as the client ID, application name, and client group. Preserving the client state enables the webMethods Broker to retain documents it sends when a subscribing client application, such as the PowerCenter Integration Service, is not listening for documents. Preserving the client state also allows the Broker to maintain the publication ID sequence across sessions when writing documents to webMethods targets. If you select this option, configure a Client ID in the application connection. You should also configure guaranteed storage for your webMethods Broker. If you do not select this option, the PowerCenter Integration Service destroys the client state when it disconnects from the Broker.
The following table describes the properties that you configure for a webMethods Integration Server application connection:
- User Name. User name of a user with read access in the webMethods Integration Server.
- Use Parameter in Password. Enables the PowerCenter Integration Service to parameterize the password. The password for the webMethods Integration Server user name is a session parameter, $ParamName. Define the password in the workflow or session parameter file, and encrypt it by using the pmpasswd CRYPT_DATA option. Default is disabled.
- IS Host. Host name and port number of the webMethods Integration Server in the following format: <host name:port>
- Certificate Files. Client certificate that the webMethods Integration Server uses to authenticate a client. Specify the client certificate file if the webMethods Integration Server is configured as HTTPS. Use a semicolon (;) to separate multiple certificate files.
- Certificate File Type. The file type of the client certificate. You specify the certificate file type if the webMethods Integration Server needs to authenticate the Integration Service. Supported file type is DER.
- Private Key File. Private key file for the client certificate. Specify the private key file if the webMethods Integration Server is configured as HTTPS.
- Key File Type. File type of the private key of the client certificate. You specify the key file type if the webMethods Integration Server is configured as HTTPS. Supported file type is DER.
Before you use PowerExchange for WebSphere MQ to extract data from message queues or load data to
message queues, you can test the queue connections configured in the Workflow Manager.
The following table describes the properties that you configure for a Message Queue queue connection:
- Code Page. Code page that is the same as or a subset of the code page of the queue manager coded character set identifier (CCSID).
- Queue Manager. Name of the queue manager for the message queue.
- Connection Retry Period. Number of seconds the Integration Service attempts to reconnect to the WebSphere MQ queue if the connection fails. If the Integration Service cannot reconnect to the WebSphere MQ queue in the retry period, the session fails. Default is 0.
- Recovery Queue Name. Name of the recovery queue. The recovery queue enables message recovery for a session that writes to a queue target.
To edit a connection object, complete the following steps:
1. Open the Connection Browser dialog box for the connection object. For example, click Connections > Relational to open the Connection Browser dialog box for a relational database connection.
2. Click Edit.
The Connection Object Definition dialog box appears.
3. Enter the values for the properties you want to modify.
The connection properties vary depending on the type of connection you select. For more information
about connection properties, see the section for each specific connection type in this chapter.
4. Click OK.
To delete a connection object, complete the following steps:
1. Open the Connection Browser dialog box for the connection object. For example, click Connections > Relational to open the Connection Browser dialog box for a relational database connection.
2. Select the connection object you want to delete in the Connection Browser dialog box.
Tip: Hold the shift key to select more than one connection to delete.
3. Click Delete, and then click Yes.
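You can also manage connection objects from the command line with pmrep. The following sketch assumes illustrative repository, domain, user, and connection names; option lists vary by version, so verify the exact syntax in the Command Reference:
pmrep connect -r MyRepo -d Domain_Main -n Administrator -x MyPassword
pmrep deleteconnection -n NZ_Conn_Old -f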
Validation
This chapter includes the following topics:
Workflow Validation
Before you can run a workflow, you must validate it. When you validate the workflow, you validate all task
instances in the workflow, including nested worklets.
When you validate a workflow, you validate worklet instances, worklet objects, and all other nested worklets
in the workflow. You validate task instances and worklets, regardless of whether you have edited them.
The Workflow Manager validates the worklet object using the same validation rules for workflows. The
Workflow Manager validates the worklet instance by verifying attributes in the Parameter tab of the worklet
instance.
If the workflow contains nested worklets, you can select a worklet to validate the worklet and all other
worklets nested under it. To validate a worklet and its nested worklets, right-click the worklet and choose
Validate.
Note: The Workflow Manager validates Session tasks separately. If a session is invalid, the workflow may still
be valid.
Example
You have a workflow that contains a non-reusable worklet called Worklet_1. Worklet_1 contains a nested
worklet called Worklet_a. The workflow also contains a reusable worklet instance called Worklet_2. Worklet_2
contains a nested worklet called Worklet_b.
The Workflow Manager validates links, conditions, and tasks in the workflow. The Workflow Manager
validates all tasks in the workflow, including tasks in Worklet_1, Worklet_2, Worklet_a, and Worklet_b.
You can validate a part of the workflow. Right-click Worklet_1 and choose Validate. The Workflow Manager
validates all tasks in Worklet_1 and Worklet_a.
Worklet Validation
The Workflow Manager validates worklets when you save the worklet in the Worklet Designer. In addition,
when you use worklets in a workflow, the Integration Service validates the workflow according to the
following validation rules at run time:
• If the parent workflow is configured to run concurrently, each worklet instance in the workflow must be
configured to run concurrently.
• Each worklet instance in the workflow can run once.
When a worklet instance is invalid, the workflow using the worklet instance remains valid.
The Workflow Manager displays a red invalid icon if the worklet object is invalid. The Workflow Manager
validates the worklet object using the same validation rules for workflows. The Workflow Manager displays a
blue invalid icon if the worklet instance in the workflow is invalid. The worklet instance may be invalid when
any of the following conditions occurs:
• The parent workflow or worklet variable you assign to the user-defined worklet variable does not have a
matching datatype.
• The user-defined worklet variable you used in the worklet properties does not exist.
• You do not specify the parent workflow or worklet variable you want to assign.
For non-reusable worklets, you may see both red and blue invalid icons displayed over the worklet icon in the
Navigator.
The Workflow Manager verifies that attributes in the tasks follow validation rules. For example, the user-
defined event you specify in an Event task must exist in the workflow. The Workflow Manager also verifies
that you linked each task properly. For example, you must link the Start task to at least one task in the
workflow.
When you delete a reusable task, the Workflow Manager removes the instance of the deleted task from each workflow that contains the task. The Workflow Manager also marks the workflow invalid when you delete a reusable task that a workflow uses.
The Workflow Manager verifies that a folder does not contain duplicate task names, and it verifies that a workflow does not contain duplicate task instances.
You can validate reusable tasks in the Task Developer, or you can validate task instances in the Workflow Designer. When you validate a task, the Workflow Manager validates the task attributes and the links. For example, the user-defined event you specify in an Event task must exist in the workflow.
• Assignment. The Workflow Manager validates the expression that you enter for the Assignment task. For
example, the Workflow Manager verifies that you assigned a matching datatype value to the workflow
variable in the assignment expression.
• Command. The Workflow Manager does not validate the shell command you enter for the Command task.
• Event-Wait. If you choose to wait for a predefined event, the Workflow Manager verifies that you specified
a file to watch. If you choose to use the Event-Wait task to wait for a user-defined event, the Workflow
Manager verifies that you specified an event.
• Event-Raise. The Workflow Manager verifies that you specified a user-defined event for the Event-Raise
task.
• Human Task. The Workflow Manager verifies that a Human task has a potential owner. The task must
also have a business administrator and an escalation user. The Workflow Manager verifies that a task
notification has a recipient. It also verifies that the Human task receives the results of a mapping task in
the workflow.
• Timer. The Workflow Manager verifies that the variable you specified for the Absolute Time setting has
the Date/Time datatype.
• Start. The Workflow Manager verifies that you linked the Start task to at least one task in the workflow.
When a task instance is invalid, the workflow that runs the task instance becomes invalid. When a reusable task is invalid, it does not affect the validity of the task instance in the workflow. However, if a Session task instance is invalid, the workflow might still be valid because the Workflow Manager validates sessions differently.
To validate a task, select the task in the workspace and click Tasks > Validate. Or, right-click the task in the
workspace and choose Validate.
The Workflow Manager marks a reusable session or session instance invalid if you perform one of the
following tasks:
• Edit the mapping in a way that might invalidate the session. You can edit the mapping used by a session
at any time. When you edit and save a mapping, the repository might invalidate sessions that already use
the mapping. The Integration Service does not run invalid sessions.
You must reconnect to the folder to see the effect of mapping changes on Session tasks.
When you edit a session based on an invalid mapping, the Workflow Manager displays a warning
message:
The mapping [mapping_name] associated with the session [session_name] is invalid.
• Delete a database, FTP, or external loader connection used by the session.
• Leave session attributes blank. For example, the session is invalid if you do not specify the source file
name.
• Change the code page of a session database connection to an incompatible code page.
If you delete objects associated with a Session task, such as a session configuration object, an Email task, or a Command task, the Workflow Manager marks a reusable session invalid. However, the Workflow Manager does not mark a non-reusable session invalid if you delete an object associated with the session.
If you delete a shortcut to a source or target from the mapping, the Workflow Manager does not mark the
session invalid.
The Workflow Manager does not validate SQL overrides or filter conditions entered in the session properties
when you validate a session. You must validate SQL override and filter conditions in the SQL Editor.
If a reusable session task is invalid, the Workflow Manager displays an invalid icon over the session task in
the Navigator and in the Task Developer workspace. This does not affect the validity of the session instance
and the workflows using the session instance.
If a reusable or non-reusable session instance is invalid, the Workflow Manager marks it invalid in the
Navigator and in the Workflow Designer workspace. Workflows using the session instance remain valid.
To validate a session, select the session in the workspace and click Tasks > Validate. Or, right-click the
session instance in the workspace and choose Validate.
Related Topics:
• “Editing a Session” on page 46
• “Session Properties Reference” on page 259
Note: If you use the Repository Manager, you can select and validate multiple sessions from the Navigator.
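You can also validate objects from the command line with the pmrep Validate command. The object, folder, and logon values in this sketch are illustrative assumptions; verify the option list in the Command Reference for your version:
pmrep connect -r MyRepo -d Domain_Main -n Administrator -x MyPassword
pmrep validate -n s_m_LoadOrders -o session -f MyFolder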
Expression Validation
The Workflow Manager validates all expressions in the workflow. You can enter expressions in the
Assignment task, Decision task, and link conditions. The Workflow Manager writes any error message to the
Output window.
Expressions in link conditions and Decision task conditions must evaluate to a numerical value. Workflow
variables used in expressions must exist in the workflow.
The Workflow Manager marks the workflow invalid if a link condition is invalid.
Workflow Schedulers
Each workflow has an associated scheduler. A workflow scheduler is a repository object that contains a set
of schedule settings. It contains information about how and when to run a workflow.
You can schedule a workflow to run continuously, repeat at a specified time or interval, or you can manually
start a workflow. By default, workflows run on demand. You can create a non-reusable scheduler for an
individual workflow. Or, you can create a reusable scheduler to use the same schedule settings for all
workflows in a folder.
If you configure multiple instances of a workflow, and you schedule the workflow run time, the Integration
Service runs all instances at the scheduled time. You cannot schedule workflow instances to run at different
times.
• On Windows, the Integration Service does not run a scheduled workflow during the last hour of Daylight
Saving Time (DST). If a workflow is scheduled to run between 1:00 a.m. and 1:59 a.m. DST, the Integration
Service resumes the workflow after 1:00 a.m. Standard Time (ST). If you try to schedule a workflow during
the last hour of DST or the first hour of ST, you receive an error. Wait until 2:00 a.m. to create a scheduler.
• The Integration Service schedules the workflow in the time zone of the Integration Service node. For
example, the PowerCenter Client is in the local time zone and the Integration Service is in a time zone two
hours later. If you schedule the workflow to start at 9:00 a.m., it starts at 9:00 a.m. in the time zone of the
Integration Service node and 7:00 a.m. local time.
Non-reusable scheduler
When you configure or edit a non-reusable scheduler, check in the workflow to allow the schedule to take
effect. You can update the schedule manually with the workflow checked out. Note that the changes are
applied to the latest checked-in version of the workflow.
Reusable scheduler
When you create a reusable scheduler for a workflow, you must check in the workflow and the scheduler
to enable the schedule to take effect.
When you edit a reusable scheduler and check it in, workflows are updated with the latest schedule. Note
that the workflow schedule is updated even for workflows that are checked out.
When you edit a reusable scheduler and do not check it in, you must manually update a workflow to
update the workflow schedule. Note that the workflow schedule is updated only for workflows that are
checked in.
You can configure the following options on the Schedule tab of the scheduler:
Run Options
Indicates how to run the workflow. You can choose one of the following options:
• Run On Integration Service Initialization. The Integration Service runs the workflow as soon as the
service is initialized. The Integration Service then starts the next run of the workflow according to
settings in Schedule Options.
• Run On Demand. The Integration Service runs the workflow when you start the workflow manually.
• Run Continuously. The Integration Service runs the workflow as soon as the service initializes. The
Integration Service then starts the next run of the workflow as soon as it finishes the previous run. If
you edit a workflow that is set to run continuously, you must stop or unschedule the workflow, save
the workflow, and then restart or reschedule the workflow.
Schedule Options
Indicates the type of schedule. Required if you select Run On Integration Service Initialization, or if you do not choose any setting in Run Options. You can choose one of the following options:
• Run Once. The Integration Service runs the workflow once, as scheduled in the scheduler.
• Run Every. The Integration Service runs the workflow at regular intervals, as configured.
• Customized Repeat. The Integration Service runs the workflow on the dates and times that you specify in the Customized Repeat dialog box.
Start Options
Indicates when to start the workflow schedule. You can choose one of the following options:
• Start Date. The date that the Integration Service begins the workflow schedule.
• Start Time. The time when the Integration Service begins the workflow schedule.
End Options
Indicates when to end the workflow schedule. Required if the workflow schedule is Run Every or
Customized Repeat. You can choose one of the following options:
• End On. The Integration Service stops scheduling the workflow on the selected date.
• End After. The Integration Service stops scheduling the workflow after the configured number of
workflow runs.
• Forever. The Integration Service schedules the workflow as long as the workflow does not fail.
Customized Repeat
In the Customized Repeat dialog box, enter the numeric interval at which you want the Integration Service to schedule the workflow. You can configure the following frequency settings:
Weekly
Required to enter a weekly schedule. Select the day or days of the week on which you want to run the workflow.
Monthly
Required to enter a monthly schedule. You can choose one of the following options:
• Run On Day. Select the dates on which you want the workflow scheduled on a monthly basis. The
Integration Service schedules the workflow to run on the selected dates. If you select a numeric date
exceeding the number of days within a particular month, the Integration Service schedules the
workflow for the last day of the month, including leap years. For example, if you schedule the
workflow to run on the 31st of every month, the Integration Service schedules the session on the 30th
of April, June, September, and November.
• Run On The. Select the week or weeks of the month, and then select the day of the week on which you
want the workflow to run. For example, if you select Second and Last, and then select Wednesday, the
Integration Service schedules the workflow to run on the second and last Wednesday of every month.
Daily Frequency
The number of times you want the workflow to run on any day the session is scheduled. Choose one of the following options:
• Run Once. The Integration Service runs the workflow one time on the selected day, at the time
entered on the Start Time setting on the Time tab.
• Run Every. The Integration Service runs the workflow on the hour and minute interval that you
configure. The Integration Service then schedules the workflow at regular intervals on the selected
day. The Integration Service uses the Start Time setting for the first scheduled workflow of the day. If
you choose an interval that is greater than the start time, the workflow runs one time each day. The
Integration Service then schedules the workflow at regular intervals on the selected day.
Scheduled States
The scheduled state of a workflow includes historical run-time information such as the last time the workflow
ran and how many times a repeating workflow has run. A workflow can be removed from the schedule based
on changes to the workflow status or the Integration Service state.
When a workflow is removed from the schedule, the Integration Service either discards or maintains the
scheduled state. If the Integration Service discards the scheduled state, it resets the state when the workflow
is rescheduled. If the Integration Service maintains the scheduled state, it restores the state when the
workflow is rescheduled.
When the Integration Service resets the scheduled state, it maintains the scheduler configuration. It does not
check for missed schedules, and it schedules the workflow as though the workflow never ran. For example,
you configure a workflow to run five times, and it stops during the second run. When you reschedule the
workflow, the Integration Service resets the schedule to run five times.
The Integration Service can restore the scheduled state of a workflow in a highly available environment when
it successfully recovers a terminated workflow or when you restart a workflow. When the Integration Service
restores the scheduled state, it reschedules the workflow based on the scheduler configuration and the
schedule frequency.
The Integration Service maintains or discards the scheduled state based on the following situations:
You disable a workflow.
When you enable the workflow, the Integration Service resets the schedule.
You reschedule a workflow.
When you reschedule a workflow, the Integration Service resets the schedule.
You edit the schedule settings.
The Integration Service reschedules the workflow according to the updated settings. If you change a schedule that is configured to run at repeated intervals, the Integration Service resets the frequency counter.
You copy a folder.
The Integration Service resets the schedule for all workflows in the folder.
You choose a different Integration Service to run the workflow.
The Integration Service resets the schedule for the workflow if it is unscheduled or is scheduled to run continuously but the start time has passed. You must reschedule the workflow if the start time has passed and the workflow is not scheduled to run continuously.
You restart the Integration Service.
The Integration Service resets the schedule for all workflows that are unscheduled or are scheduled to run continuously but the start time has passed. If a workflow is not configured to run on service initialization, you must reschedule it if the start time has passed and it is not scheduled to run continuously. If a workflow is configured to run on service initialization, you do not need to reschedule it.
You run the Integration Service in safe mode.
In safe mode, workflows remain scheduled, but the Integration Service does not run them, including workflows that are scheduled to run continuously or run on service initialization.
A workflow suspends.
A workflow can become suspended when you configure it to suspend on error. The Integration Service removes a suspended workflow from the schedule and maintains the state of operation. You can recover a suspended workflow to restore the schedule.
A workflow fails.
To re-establish the schedule, you can reschedule the workflow. In a highly available domain, if you restart
the workflow, and the workflow succeeds, the Integration Service restores the scheduled state and
determines whether a scheduled run was missed.
A workflow stops or aborts.
To re-establish the schedule, you can recover or reschedule the workflow. If the domain is not highly available, the Integration Service resets the schedule. If the domain is highly available, the Integration Service restores the schedule. If you restart the workflow, and the workflow succeeds, the Integration Service restores the scheduled state and determines whether a scheduled run was missed.
A workflow terminates.
The Integration Service terminates all running workflows when it shuts down unexpectedly. If the domain
is not highly available, the Integration Service resets the schedule when you reschedule the workflow. If
the domain is highly available, and the workflow is recoverable, you can recover the workflow to restore
the scheduled state. If the workflow is not recoverable, you can reset the schedule by rescheduling the
workflow. If you restart the workflow, and the workflow succeeds, the Integration Service restores the
scheduled state and determines whether a scheduled run was missed.
Important: If you manually start a failed, terminated, stopped, or aborted workflow in a highly available
domain, Informatica recommends that you unschedule it first. If you do not unschedule the workflow, and the
Integration Service detects that the scheduled run time was missed, it immediately runs the workflow again.
This can result in errors such as key violations and invalid data. When you unschedule the workflow first and
reschedule it after the manual run completes, the Integration Service does not run the workflow based on the
missed schedule.
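For example, a command-line version of this recommendation might use pmcmd. The service, domain, folder, and workflow names in this sketch are illustrative assumptions:
pmcmd unscheduleworkflow -sv IS_Main -d Domain_Main -u Administrator -p MyPassword -f MyFolder wf_LoadOrders
pmcmd startworkflow -sv IS_Main -d Domain_Main -u Administrator -p MyPassword -f MyFolder -wait wf_LoadOrders
pmcmd scheduleworkflow -sv IS_Main -d Domain_Main -u Administrator -p MyPassword -f MyFolder wf_LoadOrders
The -wait option makes pmcmd return only after the manual run completes, so the workflow is rescheduled only after the run finishes.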
The following scheduler configurations determine how the Integration Service restores the scheduled state:
When the Integration Service restores the scheduled state, it determines whether a scheduled run was
missed. If the workflow did not miss a scheduled run, it runs at the next scheduled time. If the workflow
missed a scheduled run, the Integration Service schedules it to run immediately.
When the Integration Service restores the scheduled state, it determines how many more times the
workflow is scheduled to run and begins the schedule at that point. It does not determine missed
workflow runs. For example, you configure a workflow to run five times, and the workflow stops or
aborts after it runs two times. When the Integration Service restores the schedule, the workflow runs
three more times beginning with the next scheduled time.
The Integration Service restores the scheduled state and begins running the workflow immediately.
If you restart the Integration Service or choose a different Integration Service for a workflow, you must
reschedule workflows that are not scheduled to run continuously. The Integration Service reschedules
workflows that are scheduled to run continuously. The Integration Service also reschedules workflows in a
folder if you copy the folder.
Scheduling a Workflow
You can schedule a workflow to run continuously, repeat at a given time or interval, or you can manually start
a workflow.
Note: When you delete a reusable scheduler, all workflows that use the deleted scheduler become invalid. To make the workflows valid, you must edit them and replace the missing scheduler.
To permanently remove a workflow from a schedule, configure the workflow schedule to run on demand.
Note: When the Integration Service restarts, it reschedules all unscheduled workflows that are scheduled to
run continuously.
Disabling a Workflow
You might want to disable the workflow while you edit it. When you disable a workflow, the Integration
Service does not run the workflow until you enable it.
To disable a workflow, select Disable Workflows on the General tab of the workflow properties.
Before you can run a workflow, you must select an Integration Service to run the workflow. You can select an
Integration Service when you edit a workflow or from the Assign Integration Service dialog box. If you select
an Integration Service from the Assign Integration Service dialog box, the Workflow Manager overwrites the
Integration Service assigned in the workflow properties.
You can also use advanced options to override the Integration Service or operating system profile assigned
to the workflow and select concurrent workflow run instances.
The following table describes the advanced options:
- Integration Service. Overrides the Integration Service configured for the workflow.
- Operating System Profile. Overrides the operating system profile assigned to the folder.
- Workflow Run Instances. The workflow instances you want to run. Appears if the workflow is configured for concurrent execution.
5. Click OK.
To run a task using the Workflow Manager, select the task in the Workflow Designer workspace. Right-click
the task and choose Start Task.
You can also use menu commands in the Workflow Manager to start a task. In the Navigator, drill down the
Workflow node to locate the task. Right-click the task you want to start and choose Start Task.
Sending Email
To send email when the Integration Service runs a workflow, perform the following steps:
• Configure the Integration Service to send email. Before creating Email tasks, configure the Integration
Service to send email.
If you use a grid or high availability in a Windows environment, you must use the same Microsoft Outlook
profile on each node to ensure the Email task can succeed.
• Create Email tasks. Before you can configure a session or workflow to send email, you need to create an
Email task.
• Configure sessions to send post-session email. You can configure the session to send an email when the
session completes or fails. You create an Email task and use it for post-session email.
When you configure the subject and body of post-session email, use email variables to include
information about the session run, such as session name, status, and the total number of rows loaded.
You can also use email variables to attach the session log or other files to email messages.
• Configure workflows to send suspension email. You can configure the workflow to send an email when
the workflow suspends. You create an Email task and use it for suspension email.
The Integration Service sends the email based on the locale set for the Integration Service process running
the session.
You can use parameters and variables in the email user name, subject, and text. For Email tasks and
suspension email, you can use service, service process, workflow, and worklet variables. For post-session
email, you can use any parameter or variable type that you can define in the parameter file. For example, you
can use the $PMSuccessEmailUser or $PMFailureEmailUser service variable to specify the email recipient for
post-session email.
If you want to send email to more than one person, separate the email address entries with a comma. Do not
put spaces between addresses.
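For example, a recipient entry with two hypothetical addresses looks like this:
[email protected],[email protected]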
1. Log in to the UNIX system as the PowerCenter user who starts the Informatica services.
2. Type the following lines at the prompt and press Enter:
rmail <your fully qualified email address>,<second fully qualified email address>
From <your_user_name>
3. To indicate the end of the message, type ^D.
You should receive a blank email from the email account of the user you specify in the From line. If not,
locate the directory where rmail resides and add that directory to the path.
1. Log in to the Linux machine as the PowerCenter user who starts the Informatica services.
2. Add /usr/sbin to the $PATH environment variable to send emails.
3. Type the following line at the prompt and press Enter:
sendmail <your fully qualified email address>,<second fully qualified email address>
4. To indicate the end of the message, enter a period (.) on a separate line and press Enter. Or, type ^D.
You should receive a blank email from the email account of the PowerCenter user. If not, find the
directory where sendmail resides and add that directory to the path.
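For example, assuming hypothetical addresses, the verification session might look like this:
export PATH=$PATH:/usr/sbin
sendmail [email protected],[email protected]
.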
To send email using MAPI on Windows, you must meet the following requirements:
• Install the Microsoft Outlook mail client on each node configured to run the Integration Service.
• Run Microsoft Outlook on a Microsoft Exchange Server.
Complete the following steps to configure the Integration Service on Windows to send email:
Note: If you have high availability or if you use a grid, use the same profile for each node configured to run a
service process.
1. Open the Control Panel on the node running the Integration Service process.
2. Double-click the Mail icon.
3. In the Mail Setup - Outlook dialog box, click Show Profiles.
The Mail dialog box displays the list of profiles configured for the computer.
4. Click Add.
5. In the New Profile dialog box, enter a profile name. Click OK.
The E-mail Accounts wizard appears.
6. Select Add a new e-mail account. Click Next.
7. Select Microsoft Exchange Server for the server type. Click Next.
8. Enter the Microsoft Exchange Server name and the mailbox name. Click Next.
9. Click Finish.
10. In the Mail dialog box, select the profile you added and click Properties.
11. In the Mail Setup dialog box, click E-mail Accounts.
The E-mail Accounts wizard appears.
12. Select Add a new directory or address book. Click Next.
13. Select Additional Address Books. Click Next.
14. Select Personal Address Book. Click Next.
15. Enter the path to a personal address book. Click OK.
For more information about working with a Personal Address Book, refer to Microsoft Outlook
documentation.
1. From the Administrator tool, click the Properties tab for the Integration Service.
2. In the Configuration Properties tab, select Edit.
3. In the MSExchangeProfile field, verify that the name of the Microsoft Exchange profile matches the
Microsoft Outlook profile you created.
Configure the following custom properties for the Integration Service to send email through an SMTP server:
• SMTPServerAddress*. The server address for the SMTP outbound mail server.
• SMTPPortNumber*. The port number for the SMTP outbound mail server.
• SMTPServerTimeout. Amount of time in seconds the Integration Service waits to connect to the SMTP server before it times out. Default is 20.
* If you omit one of these properties, the Integration Service sends email using the MAPI interface.
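For example, you might set the custom properties to values like the following, where the server address is hypothetical:
SMTPServerAddress=smtp.example.com
SMTPPortNumber=25
SMTPServerTimeout=20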
Note: After you set the SMTP custom properties, you must recycle the Integration Service.
• Session properties. You can configure the session to send email when the session completes or fails.
• Workflow properties. You can configure the workflow to send email when the workflow is interrupted.
• Workflows or worklets. You can include an Email task anywhere in the workflow or worklet to send email
based on a condition you define.
For example, you may have a Session task in the workflow and you want the Integration Service to send an
email if more than 20 rows are dropped. To do this, you create a condition in the link and create a non-
reusable Email task. The workflow sends an email if more than 20 rows are dropped.
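For example, assuming the dropped rows are counted as target failed rows and a hypothetical session name, the link condition might look like this:
$s_OrderLoad.TgtFailedRows > 20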
The Integration Service sends post-session email at the end of a session, after executing post-session shell
commands or stored procedures. When the Integration Service encounters an error sending the email, it
writes a message to the Log Service. It does not fail the session.
You can specify a reusable Email task that you create in the Task Developer for either success email or
failure email. Or, you can create a non-reusable Email task for each session property. When you create a non-
reusable Email task for a session, you cannot use the Email task in a workflow or worklet.
You can use parameters and variables in the email user name, subject, and text. Use any parameter or
variable type that you can define in the parameter file. For example, you can use the service variable
$PMSuccessEmailUser or $PMFailureEmailUser for the email recipient. Ensure that you specify the values of
the service variables for the Integration Service that runs the session. You can also enter a parameter or
variable within the email subject or text, and define it in the parameter file.
Note: The Integration Service does not limit the type or size of attached files. However, since large
attachments can cause problems with the email system, avoid attaching excessively large files, such as
session logs generated using verbose tracing. The Integration Service generates an error message in the
email if an error occurs attaching the file.
The following table describes the email variables that you can use in a post-session email:
• %a<filename>. Attach the named file. The file must be local to the Integration Service. The following file names are valid: %a<c:\data\sales.txt> or %a</users/john/data/sales.txt>. The email does not display the full path for the file. Only the attachment file name appears in the email. Note: The file name cannot include the greater than character (>) or a line break.
• %e. Session status.
• %g. Attach the session log to the message. The Integration Service attaches a session log if you configure the session to create a log file. If you do not configure the session to create a log file or if you run a session on a grid, the Integration Service creates a temporary file in the PowerCenter Services installation directory and attaches the file. If the Integration Service does not use operating system profiles, verify that the user that starts Informatica Services has permissions on the PowerCenter Services installation directory to create a temporary log file. If the Integration Service uses operating system profiles, verify that the operating system user of the operating system profile has permissions on the PowerCenter Services installation directory to create a temporary log file.
• %s. Session name.
• %t. Source and target table details, including read throughput in bytes per second and write throughput in rows per second. The Integration Service includes all information displayed in the session detail dialog box.
• %w. Workflow name.
Note: The Integration Service ignores %a, %g, and %t when you include them in the email subject. Include these
variables in the email message only.
You can use the following format tags in an Email task:
• \t. Tab.
• \n. New line.
Post-Session Email
You can configure post-session email to use a reusable or non-reusable Email task.
Sample Email
The following example shows a user-entered text from a sample post-session email configuration using
variables:
Session complete.
Session name: %s
Integration Service name: %v
%l
%r
%e
%b
%c
%i
%g
The following is sample output from the configuration above:
Session complete.
Session name: sInstrTest
Integration Service name: Node01IS
Total Rows Loaded = 1
Total Rows Rejected = 0
Completed
Start Time: Tue Nov 22 12:26:31 2005
Completion Time: Tue Nov 22 12:26:41 2005
Elapsed time: 0:00:10 (h:m:s)
Suspension Email
You can configure a workflow to send email when the Integration Service suspends the workflow. For
example, when a task fails, the Integration Service suspends the workflow and sends the suspension email.
You can fix the error and recover the workflow.
If another task fails while the Integration Service is suspending the workflow, you do not get the suspension
email again. However, the Integration Service sends another suspension email if another task fails after you
recover the workflow.
Configure suspension email on the General tab of the workflow properties. You can use service, service
process, workflow, and worklet variables in the email user name, subject, and text. For example, you can use
the service variable $PMSuccessEmailUser or $PMFailureEmailUser for the email recipient. Ensure that you
specify the values of the service variables for the Integration Service that runs the session. You can also
enter a parameter or variable within the email subject or text, and define it in the parameter file.
• $PMSuccessEmailUser. Defines the email address of the user to receive email when a session completes
successfully. Use this variable with post-session email. You can also use it to address email in standalone
Email tasks or suspension email.
• $PMFailureEmailUser. Defines the email address of the user to receive email when a session completes
with failure or when the Integration Service suspends a workflow. Use this variable with post-session or
suspension email. You can also use it to address email in standalone Email tasks.
When you use one of these service variables, the Integration Service sends email to the address configured
for the service variable. $PMSuccessEmailUser and $PMFailureEmailUser are optional process variables.
Verify that you define a variable before using it to address email.
You might use this functionality when you have an administrator who troubleshoots all failed sessions.
Instead of entering the administrator email address for each session, use the email variable
$PMFailureEmailUser as the recipient for post-session email. If the administrator changes, you can correct all
sessions by editing the $PMFailureEmailUser service variable, instead of editing the email address in each
session.
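As a sketch, assuming you define the service variable in a parameter file, an entry with a hypothetical address might look like this:
[Global]
$PMFailureEmailUser=[email protected]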
You might also use this functionality when you have different administrators for different Integration
Services. If you deploy a folder from one repository to another or otherwise change the Integration Service
that runs the session, the new service sends email to users associated with the new service when you use
process variables instead of hard-coded email addresses.
Workflow Monitor
With the Workflow Monitor, you can view details about a workflow or task in Gantt Chart view or Task view.
You can also view details about the Integration Service, nodes, and grids.
The Workflow Monitor displays workflows that have run at least once. You can run, stop, abort, and resume
workflows from the Workflow Monitor. The Workflow Monitor continuously receives information from the
Integration Service and Repository Service. It also fetches information from the repository to display historic
information.
• Navigator window. Displays monitored repositories, Integration Services, and repository objects.
• Output window. Displays messages from the Integration Service and the Repository Service.
• Properties window. Displays details about services, workflows, worklets, and tasks.
• Time window. Displays progress of workflow runs.
• Gantt Chart view. Displays details about workflow runs in chronological (Gantt Chart) format.
• Task view. Displays details about workflow runs in a report format, organized by workflow run.
The Workflow Monitor displays time relative to the time configured on the Integration Service node. For
example, a folder contains two workflows. One workflow runs on an Integration Service in the local time zone,
and the other runs on an Integration Service in a time zone two hours later. If you start both workflows at 9
a.m. local time, the Workflow Monitor displays the start time as 9 a.m. for one workflow and as 11 a.m. for
the other workflow.
Toggle between Gantt Chart view and Task view by clicking the tabs on the bottom of the Workflow Monitor.
You can view and hide the Output and Properties windows in the Workflow Monitor. To view or hide the
Output window, click View > Output. To view or hide the Properties window, click View > Properties View.
You can also dock the Output and Properties windows at the bottom of the Workflow Monitor workspace. To
dock the Output or Properties window, right-click a window and select Allow Docking. If the window is
floating, drag the window to the bottom of the workspace. If you do not allow docking, the windows float in
the Workflow Monitor workspace.
You can customize the Workflow Monitor display by configuring the maximum days or workflow runs the
Workflow Monitor shows. You can also filter tasks and Integration Services in both Gantt Chart and Task
view.
1. Select Start > Programs > Informatica PowerCenter [version] > Client > Workflow Monitor from the
Windows Start menu.
-or-
Configure the Workflow Manager to open the Workflow Monitor when you run a workflow from the
Workflow Manager.
-or-
Click Tools > Workflow Monitor from the Designer, Workflow Manager, or Repository Manager.
-or-
Click the Workflow Monitor icon on the Tools toolbar. When you use a Tools button to open the Workflow
Monitor, PowerCenter uses the same repository connection to connect to the repository and opens the
same folders.
-or-
From the Workflow Manager, right-click an Integration Service or a repository, and select Run Monitor.
You can open multiple instances of the Workflow Monitor on one machine using the Windows Start menu.
After you connect to a repository, the Workflow Monitor displays a list of Integration Services available for
the repository. The Workflow Monitor can monitor multiple repositories, Integration Services, and workflows
at the same time.
Note: If you are not connected to a repository, you can remove the repository from the Navigator. Select the
repository in the Navigator and click Edit > Delete. The Workflow Monitor displays a message verifying that
you want to remove the repository from the Navigator list. Click Yes to remove the repository. You can
connect to the repository again at any time.
To connect to an Integration Service, right-click it and select Connect. When you connect to an Integration
Service, you can view all folders that you have permission for. To disconnect from an Integration Service,
right-click it and select Disconnect. When you disconnect from an Integration Service, or when the Workflow
Monitor cannot connect to an Integration Service, the Workflow Monitor displays disconnected for the
Integration Service status.
The Workflow Monitor is resilient to the Integration Service. If the Workflow Monitor loses connection to the
Integration Service, LMAPI tries to reestablish the connection for the duration of the PowerCenter Client
resilience time-out period.
After the connection is reestablished, the Workflow Monitor retrieves the workflow status from the
repository. Depending on your Workflow Monitor advanced settings, you may have to reopen the workflow to
view the latest status of child tasks.
You can also ping an Integration Service to verify that it is running. Right-click the Integration Service in the
Navigator and select Ping Integration Service. You can view the ping response time in the Output window.
Note: You can also open an Integration Service in the Navigator without connecting to it. When you open an
Integration Service, the Workflow Monitor gets workflow run information stored in the repository. It does not
get dynamic workflow run information from currently running workflows.
Filtering Tasks
You can view all or some workflow tasks. You can filter tasks you do not want to view. For example, if you
want to view only Session tasks, you can hide all other tasks. You can view all tasks at any time.
When you hide an Integration Service, the Workflow Monitor hides the Integration Service from the Navigator
for the Gantt Chart and Task views. You can show the Integration Service again at any time.
You can hide unconnected Integration Services. When you hide a connected Integration Service, the Workflow
Monitor asks if you want to disconnect from the Integration Service and then filter it. You must disconnect
from an Integration Service before hiding it.
1. In the Navigator, right-click a repository to which you are connected and select Filter Integration
Services.
The Filter Integration Services dialog box appears.
2. Select the Integration Services you want to view and clear the Integration Services you want to filter.
Click OK.
If you are connected to an Integration Service that you clear, the Workflow Monitor prompts you to
disconnect from the Integration Service before filtering.
3. Click Yes to disconnect from the Integration Service and filter it.
-or-
Click No to remain connected to the Integration Service.
Tip: To filter an Integration Service in the Navigator, right-click it and select Filter Integration Service.
You can open and close folders in the Gantt Chart and Task views. When you open a folder, it opens in both
views. To open a folder, right-click it in the Navigator and select Open. Or, you can double-click the folder.
Viewing Statistics
You can view statistics about the objects you monitor in the Workflow Monitor. Click View > Statistics. The
Statistics window displays the following information:
• Number of opened repositories. Number of repositories you are connected to in the Workflow Monitor.
Viewing Properties
You can view properties for the following items:
• Tasks. You can view properties, such as task name, start time, and status.
• Sessions. You can view properties about the Session task and session run, such as mapping name and
number of rows successfully loaded. You can also view load statistics about the session run. You can
also view performance details about the session run.
• Workflows. You can view properties such as start time, status, and run type.
• Links. When you double-click a link between tasks in Gantt Chart view, you can view tasks that you filtered
out.
• Integration Services. You can view properties such as Integration Service version and startup time. You
can also view the sessions and workflows running on the Integration Service.
• Grid. You can view properties such as the name, Integration Service type, and code page of a node in the
Integration Service grid. You can view these details in the Integration Service Monitor.
• Folders. You can view properties such as the number of workflow runs displayed in the Time window.
To view properties for all objects, right-click the object and select Properties. You can right-click items in the
Navigator or the Time window in either Gantt Chart view or Task view.
To view link properties, double-click the link in the Time window of Gantt Chart view. When you view link
properties, you can double-click a task in the Link Properties dialog box to view the properties for the filtered
task.
• General. Customize general options such as the maximum number of workflow runs to display and
whether to receive messages from the Workflow Manager. See “Configuring General Options” on page
221.
• Gantt Chart view. Configure Gantt Chart view options such as workspace color, status colors, and time
format. See “Configuring Gantt Chart View Options” on page 221.
• Task view. Configure which columns to display in Task view. See “Configuring Task View Options” on
page 221.
• Advanced. Configure advanced options such as the number of workflow runs the Workflow Monitor holds
in memory for each Integration Service. See “Configuring Advanced Options” on page 221.
The following table describes the options you can configure on the General tab:
• Maximum Days. Number of days for which the Workflow Monitor displays tasks. Default is 5.
• Maximum Workflow Runs per Folder. Maximum number of workflow runs the Workflow Monitor displays for each folder. Default is 200.
• Receive Messages from Workflow Manager. Select to receive messages from the Workflow Manager. The Workflow Manager sends messages when you start or schedule a workflow in the Workflow Manager. The Workflow Monitor displays these messages in the Output window.
• Receive Notifications from Repository Service. Select to receive notification messages in the Workflow Monitor and view them in the Output window. You must be connected to the repository to receive notifications. Notification messages include information about objects that another user creates, modifies, or deletes. You receive notifications about folders and Integration Services. The Repository Service notifies you of the changes so you know objects you are working with may be out of date. You also receive notices posted by the user who manages the Repository Service.
The following table describes the options you can configure on the Gantt Chart tab:
• Status Color. Select a status and configure the color for the status. The Workflow Monitor displays tasks with the selected status in the colors you select. You can select two colors to display a gradient.
• Recovery Color. Configure the color for recovery sessions. The Workflow Monitor uses the status color for the body of the status bar, and it uses the recovery color as a gradient in the status bar.
The following table describes the options you can configure on the Advanced tab:
• Refresh Workflow Tasks When the Connection to the Integration Service Is Re-established. Refreshes workflow tasks when you reconnect to the Integration Service.
• Expand Workflow Runs When Opening the Latest Runs. Expands workflows when you open the latest run.
• Hide Folders/Workflows That Do Not Contain Any Runs When Filtering By Running/Schedule Runs. Hides folders or workflows under the Workflow Run column in the Time window when you filter running or scheduled tasks.
• Highlight the Entire Row When an Item Is Selected. Highlights the entire row in the Time window for selected items. When you disable this option, the Workflow Monitor highlights only the item in the Workflow Run column in the Time window.
• Open Latest 20 Runs At a Time. The number of workflow runs that open at a time. Default is 20.
• Minimum Number of Workflow Runs (Per Integration Service) the Workflow Monitor Will Accumulate in Memory. Specifies the minimum number of workflow runs for each Integration Service that the Workflow Monitor holds in memory before it starts releasing older runs from memory. When you connect to an Integration Service, the Workflow Monitor fetches the number of workflow runs specified on the General tab for each folder you connect to. When the number of runs is less than the number specified in this option, the Workflow Monitor stores new runs in memory until it reaches this number.
• Standard. Contains buttons to connect to and disconnect from repositories, print, view print previews,
search the workspace, show or hide the navigator in task view, and show or hide the output window.
• Integration Service. Contains buttons to connect to and disconnect from Integration Services, ping an
Integration Service, and perform workflow operations.
• View. Contains buttons to configure time increments and show properties, workflow logs, or session logs.
• Filters. Contains buttons to display most recent runs, and to filter tasks, Integration Services, and folders.
After a toolbar appears, it displays until you exit the Workflow Monitor or hide the toolbar. You can drag
each toolbar to resize or reposition it.
1. In the Navigator or Workflow Run List, select the workflow with the runs you want to see.
2. Right-click the workflow and select Open Latest 20 Runs.
Up to 20 of the latest runs appear.
The menu option is disabled when the latest 20 workflow runs are already open.
You can also run part of a workflow. When you run part of a workflow, the Integration Service runs the
workflow from the selected task to the end of the workflow.
The Integration Service appends log events to the existing log events when you recover the workflow. The
Integration Service creates another session log when you recover a session.
1. In the Navigator, select the task, workflow, or worklet you want to stop or abort.
2. Click Tasks > Stop.
-or-
Click Tasks > Abort.
The Workflow Monitor displays the status of the stop or abort command in the Output window.
Scheduling Workflows
You can schedule workflows in the Workflow Monitor. You can schedule any workflow that is not configured
to run on demand. When you try to schedule a run on demand workflow, the Workflow Monitor displays an
error message in the Output window.
When you schedule an unscheduled workflow, the workflow uses its original schedule specified in the
workflow properties. If you want to specify a different schedule for the workflow, you must edit the scheduler
in the Workflow Manager.
If you want to view past session or workflow logs, configure the session or workflow to save logs by
timestamp. When you configure the workflow to save log files, the workflow creates a text file and the binary
file that displays in the Log Events window. You can save log files by timestamp or by workflow or session
runs. You can configure how many workflow or session runs to save.
When you open a session or workflow log, the Log Events window sends a request to the Log Agent. The Log
Agent retrieves logs from each node that ran the session or workflow. The Log Events window displays the
logs by node.
Related Topics:
• “Session and Workflow Logs” on page 246
• Aborted (workflows and tasks). You choose to abort the workflow or task in the Workflow Monitor or through pmcmd. The Integration Service kills the DTM process and aborts the task. You can recover an aborted workflow if you enable the workflow for recovery.
• Aborting (workflows and tasks). The Integration Service is in the process of aborting the workflow or task.
• Disabled (workflows and tasks). You select the Disabled option in the workflow or task properties. The Integration Service does not run the disabled workflow or task until you clear the Disabled option.
• Failed (workflows and tasks). The Integration Service fails the workflow or task because it encountered errors. You cannot recover a failed workflow.
• Preparing to Run (workflows). The Integration Service is waiting for an execution lock for the workflow.
• Scheduled (workflows). You schedule the workflow to run at a future date. The Integration Service runs the workflow for the duration of the schedule.
• Stopped (workflows and tasks). You choose to stop the workflow or task in the Workflow Monitor or through pmcmd. The Integration Service stops processing the task and all other tasks in its path. The Integration Service continues running concurrent tasks. You can recover a stopped workflow if you enable the workflow for recovery.
• Stopping (workflows and tasks). The Integration Service is in the process of stopping the workflow or task.
• Succeeded (workflows and tasks). The Integration Service successfully completes the workflow or task.
• Suspended (workflows and worklets). The Integration Service suspends the workflow because a task failed and no other tasks are running in the workflow. This status is available when you select the Suspend on Error option. You can recover a suspended workflow.
• Suspending (workflows and worklets). A task fails in the workflow when other tasks are still running. The Integration Service stops running the failed task and continues running tasks in other paths. This status is available when you select the Suspend on Error option.
• Terminated (workflows and tasks). The Integration Service shuts down unexpectedly when running this workflow or task. You can recover a terminated workflow if you enable the workflow for recovery.
• Terminating (workflows and tasks). The Integration Service is in the process of terminating the workflow or task.
• Waiting (workflows and tasks). The Integration Service is waiting for available resources so it can run the workflow or task. For example, you may set the maximum number of running Session and Command tasks allowed for each Integration Service process on the node to 10. If the Integration Service is already running 10 concurrent sessions, all other workflows and tasks have the Waiting status until the Integration Service is free to run more tasks.
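As a sketch, the equivalent pmcmd commands might look like the following, where the domain, service, folder, and workflow names are hypothetical:
pmcmd stopworkflow -d Domain_Dev -sv IS_Dev -u Administrator -p mypassword -f SalesFolder wf_OrderLoad
pmcmd abortworkflow -d Domain_Dev -sv IS_Dev -u Administrator -p mypassword -f SalesFolder wf_OrderLoad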
To see a list of tasks by status, view the workflow in the Task view and filter by status. Or, click Edit > List
Tasks in Gantt Chart view.
1. Open the Gantt Chart view and click Edit > List Tasks.
2. In the List What field, select the type of task status you want to list.
For example, select Failed to view a list of failed tasks and workflows.
3. Click List to view the list.
Tip: Double-click the task name in the List Tasks dialog box to highlight the task in Gantt Chart view.
To zoom the Time window in Gantt Chart view, click View > Zoom, and then select the time increment. You
can also select the time increment in the Zoom button on the toolbar.
Performing a Search
Use the search tool in the Gantt Chart view to search for tasks, workflows, and worklets in all repositories you
connect to. The Workflow Monitor searches for the word you specify in task names, workflow names, and
worklet names. You can highlight the task in Gantt Chart view by double-clicking the task after searching.
To perform a search:
1. Open the Gantt Chart view and click Edit > Find.
The Find Object dialog box appears.
2. In the Find What field, enter the keyword you want to find.
3. Click Find Now.
The Workflow Monitor displays a list of tasks, workflows, and worklets that match the keyword.
Tip: Double-click the task name in the Find Object dialog box to highlight the task in Gantt Chart view.
• Workflow run list. The list of workflow runs. The workflow run list contains folder, workflow, worklet, and
task names. The Workflow Monitor displays workflow runs chronologically with the most recent run at the
top. It displays folders and Integration Services alphabetically.
• Filter tasks. Use the Filter menu to select the tasks you want to display or hide.
• Hide and view columns. Hide or view an entire column in Task view.
• Hide and view the Navigator. You can hide the Navigator in Task view. Click View > Navigator to hide or
view the Navigator.
To view the tasks in Task view, select the Integration Service you want to monitor in the Navigator.
• By task type. You can filter out tasks you do not want to view. For example, if you want to view only
Session tasks, you can filter out all other tasks.
• By nodes in the Navigator. You can filter the workflow runs in the Time window by selecting different
nodes in the Navigator. For example, when you select a repository name in the Navigator, the Time
window displays all workflow runs that ran on the Integration Services registered to that repository. When
you select a folder name in the Navigator, the Time window displays all workflow runs in that folder.
• By the most recent runs. To display by the most recent runs, click Filters > Most Recent Runs and select
the number of runs you want to display.
• By Time window columns. You can click Filters > Auto Filter and filter by properties you specify in the
Time window columns.
• Repository Service details. View information about repositories, such as the number of connected
Integration Services.
• Integration Service properties. View information about the Integration Service, such as the Integration
Service Version. You can also view system resources that running workflows consume, such as the
system swap usage at the time of the running workflow.
• Repository folder details. View information about a repository folder, such as the folder owner.
• Workflow run properties. View information about a workflow, such as the start and end time.
• Worklet run properties. View information about a worklet, such as the execution nodes on which the
worklet is run.
• Command task run properties. View the information about Command tasks in a running workflow, such
as the start and end time.
• Session task run properties. View information about Session tasks in a running workflow, such as details
on session failures.
• Performance details. View counters that help you understand the session and mapping efficiency, such
as information on the data cache size for an Aggregator transformation.
Repository Service Details
To view details about a repository, right-click on the repository and choose Properties.
The following table describes the attributes that appear in the Repository Details area:
• Is Opened. Yes if you are connected to the repository. Otherwise, the value is No.
• User Name. Name of the user connected to the repository. Appears if you are connected to the repository.
• Number of Connected Integration Services. Number of Integration Services you are connected to in the Workflow Monitor. Appears if you are connected to the repository.
The following table describes the attributes that appear in the Integration Service Details area:
• Integration Service Version. PowerCenter version and build. Appears if you are connected to the Integration Service in the Workflow Monitor.
• Integration Service Mode. Data movement mode of the Integration Service. Appears if you are connected to the Integration Service in the Workflow Monitor.
• Integration Service OperatingMode. The operating mode of the Integration Service. Appears if you are connected to the Integration Service in the Workflow Monitor.
• Startup Time. Time the Integration Service started. Appears in the following format: MM/DD/YYYY HH:MM:SS AM|PM. Appears if you are connected to the Integration Service in the Workflow Monitor.
• Last Updated Time. Time the Integration Service was last updated. Appears in the following format: MM/DD/YYYY HH:MM:SS AM|PM. Appears if you are connected to the Integration Service in the Workflow Monitor.
• Grid Assigned. Grid the Integration Service is assigned to. Appears if the Integration Service is assigned to a grid and you are connected to the Integration Service in the Workflow Monitor.
• Node(s). Names of nodes configured to run Integration Service processes. Appears if you are connected to the Integration Service in the Workflow Monitor.
To view the Integration Service Monitor, right-click an Integration Service and choose Properties. The
Integration Service Monitor area appears if you are connected to an Integration Service. You can view the
Integration Service type and code page for each node the Integration Service is running on. To view the tool
tip for the Integration Service type and code page, move the pointer over the node name.
The following table describes the attributes that appear in the Integration Service Monitor area:
• Node Name. Name of the node on which the Integration Service is running.
• Task/Partition. Name of the session and partition that is running, or the name of the Command task that is running.
• CPU %. For a node, the percent of CPU usage of processes running on the node. For a task, the percent of CPU usage by the task process.
• Memory Usage. For a node, the memory usage of processes running on the node. For a task, the memory usage of the task process.
• Swap Usage. Amount of swap space used by processes running on the node.
The following table describes the attributes that appear in the Folder Details area:
• Number of Workflow Runs Within Time Window. Number of workflows that have run in the time window during which the Workflow Monitor displays workflow statistics.
• Number of Fetched Workflow Runs. Number of workflow runs displayed during the time window.
• Workflows Fetched Between. Time period during which the Integration Service fetched the workflows. Appears as DD/MM/YYYY HH:MM:SS and DD/MM/YYYY HH:MM:SS.
When you view workflow properties, the following areas appear in the Properties window:
The following table describes the attributes that appear in the Workflow Details area:
• Concurrent Type. -
• OS Profile. Name of the operating system profile assigned to the workflow. The value is empty if an operating system profile is not assigned to the workflow.
• Deleted. Yes if the workflow is deleted from the repository. Otherwise, the value is No.
Session Statistics
The Session Statistics area displays information about sessions, such as the session run time and the
number of rows loaded to the targets.
• Source Success Rows. Number of rows the Integration Service successfully read from the source.
• Source Failed Rows. Number of rows the Integration Service failed to read from the source.
• Target Success Rows. Number of rows the Integration Service wrote to the target.
• Target Failed Rows. Number of rows the Integration Service failed to write to the target.
When you view worklet properties, the following areas appear in the Properties window:
Worklet Details
To view worklet details in the Properties window, right-click on a worklet and choose Get Run Properties.
The following table describes the attributes that appear in the Worklet Details area:
• Integration Service Name. Name of the Integration Service assigned to the workflow associated with the worklet.
The following table describes the attributes that appear in the Task Details area:
• Integration Service Name. Name of the Integration Service assigned to the workflow associated with the Command task.
When you load data to a target with multiple groups, such as an XML target, the Integration Service provides
session details for each group.
Failure Information
The Failure Information area displays information about session errors.
The following table describes the attributes that appear in the Failure Information area:
The following table describes the attributes that appear in the Task Details area:
• Integration Service Name. Name of the Integration Service assigned to the workflow associated with the session.
• Source Success Rows. Number of rows the Integration Service successfully read from the source.
• Source Failed Rows. Number of rows the Integration Service failed to read from the source.
• Target Success Rows. Number of rows the Integration Service wrote to the target.
• Target Failed Rows. Number of rows the Integration Service failed to write to the target.
For a recovery session, the row statistics list the number of rows the Integration Service processed after recovery. To determine the number of rows processed before recovery, see the session log.
The following table describes the attributes that appear in the Source/Target Statistics area:
Transformation Name Name of the source qualifier instance or the target instance in the mapping. If you create
multiple partitions in the source or target, the Instance Name displays the partition
number. If the source or target contains multiple groups, the Instance Name displays the
group name.
Applied Rows For sources, shows the number of rows the Integration Service successfully read from the
source. For targets, shows the number of rows the Integration Service successfully applied
to the target.
For example, you have a target table with one column called SALES_ID and five rows that
contain the values 1, 2, 3, 2, and 2. You have a source table with one column called
SALES_ID_IN and five rows that contain the values 1, 2, 3, 4, and 5. You mark rows for
update where SALES_ID_IN is 2. The Integration Service applies one row, which updates
three rows in the target. If you mark rows for update where SALES_ID_IN is 4, the
Integration Service applies one row. The Integration Service does not update any rows at
the target as the target does not contain rows with SALES_ID as 4.
For a recovery session, this value lists the number of rows that the Integration Service
affected or applied to the target after recovery. To determine the number of rows
processed before recovery, see the session log.
Affected Rows For sources, shows the number of rows the Integration Service successfully read from the
source.
For targets, shows the number of rows affected by the specified operation. For example,
you have a table with one column called SALES_ID and five rows that contain the values 1,
2, 3, 2, and 2. You mark rows for update where SALES_ID is 2. The Integration Service
updates three rows, even though there was one update request. If you mark rows for
update where SALES_ID is 4, the Integration Service updates no rows.
For a recovery session, this value lists the number of rows that the Integration Service
affected or applied to the target after recovery. To determine the number of rows
processed before recovery, see the session log.
Rejected Rows Number of rows the Integration Service dropped when reading from the source, or the
number of rows the Integration Service rejected when writing to the target.
Throughput (Rows/Sec) Rate at which the Integration Service read rows from the source or wrote data into the
target per second.
Throughput (Bytes/Sec) Estimated rate at which the Integration Service read data from the source and wrote data
to the target in bytes per second. Throughput (Bytes/Sec) is based on the Throughput
(Rows/Sec) and the row size. The row size is based on the number of columns the
Integration Service read from the source and wrote to the target, the data movement mode,
column metadata, and if you enabled high precision for the session. The calculation is not
based on the actual data size in each row.
Bytes Total bytes processed in the PowerCenter Integration Service memory for the source and
target.
Last Error Code Error message code of the most recent error message written to the session log. If you
view details after the session completes, this field displays the last error code.
Last Error Message Most recent error message written to the session log. If you view details after the session
completes, this field displays the last error message.
Start Time Time the Integration Service started to read from the source or write to the target.
The Workflow Monitor displays time relative to the Integration Service.
End Time Time the Integration Service finished reading from the source or writing to the target.
The Workflow Monitor displays time relative to the Integration Service.
Partition Details
The Partition Details area displays information about partitions in a session. When you create multiple
partitions in a session, the Integration Service provides session details for each partition. Use these details to
determine if the data is evenly distributed among the partitions. For example, if the Integration Service moves
more rows through one target partition than another, or if the throughput is not evenly distributed, you might
want to adjust the data range for the partitions.
CPU % Percent of the CPU the partition is consuming during the current session run.
CPU Seconds Amount of process time in seconds the CPU is taking to process the data in the partition
during the current session run.
Memory Usage Amount of memory the partition consumes during the current session run.
Performance Details
The performance details provide counters that help you understand the session and mapping efficiency. Each
source qualifier and target definition appears in the performance details, along with counters that display
performance information about each transformation. You can view session performance details in the
Workflow Monitor or in the performance details file.
By evaluating the final performance details, you can determine where session performance slows down. The
Workflow Monitor also provides session-specific details that can help you tune memory settings.
1. Right-click a session in the Workflow Monitor and choose Get Run Properties.
2. Click the Performance area in the Properties window.
When you create multiple partitions, the Performance Area displays a column for each partition. The
columns display the counter values for each partition.
3. Click OK.
Source Qualifier, Normalizer, and target transformations have additional counters that indicate the efficiency
of data moving into and out of buffers. Use these counters to locate performance bottlenecks.
Some transformations have counters specific to their functionality. For example, each Lookup transformation
has a counter that indicates the number of rows stored in the lookup cache.
When you view the performance details file, the first column displays the transformation name as it appears
in the mapping, the second column contains the counter name, and the third column holds the resulting
number or efficiency percentage. If you use a Joiner transformation, the first column shows two instances of
the Joiner transformation:
• <Joiner transformation> [M]. Displays performance details about the master pipeline of the Joiner
transformation.
• <Joiner transformation> [D]. Displays performance details about the detail pipeline of the Joiner
transformation.
When you create multiple partitions, the Integration Service generates one set of counters for each partition.
Note: When you increase the number of partitions, the number of aggregate or rank input rows may be
different from the number of output rows from the previous transformation.
The following table describes the Aggregator and Rank transformation counters that may appear in the
Session Performance Details area or in the performance details file:
Counters Description
Aggregator/Rank_readfromcache Number of times the Integration Service read from the index or
data cache.
Aggregator/Rank_writetocache Number of times the Integration Service wrote to the index or data
cache.
Aggregator/Rank_readfromdisk Number of times the Integration Service read from the index or
data file on the local disk, instead of using cached data.
Aggregator/Rank_writetodisk Number of times the Integration Service wrote to the index or data
file on the local disk, instead of using cached data.
The following table describes the Joiner transformation counters that may appear in the Session
Performance Details area or in the performance details file:
Counters Description
Joiner_inputMasterRows Number of rows the master source passed into the transformation.
Joiner_inputDetailRows Number of rows the detail source passed into the transformation.
Joiner_readfromcache Number of times the Integration Service read from the index or
data cache.
Joiner_writetocache Number of times the Integration Service wrote to the index or data
cache.
Joiner_readfromdisk Number of times the Integration Service read from the index or
data files on the local disk, instead of using cached data.
The Integration Service generates this counter when you use
sorted input for the Joiner transformation.
Joiner_writetodisk Number of times the Integration Service wrote to the index or data
files on the local disk, instead of using cached data.
The Integration Service generates this counter when you use
sorted input for the Joiner transformation.
Joiner_readBlockFromDisk Number of times the Integration Service read from the index or
data files on the local disk, instead of using cached data.
The Integration Service generates this counter when you do not
use sorted input for the Joiner transformation.
Joiner_writeBlockToDisk Number of times the Integration Service wrote to the index or data
cache.
The Integration Service generates this counter when you do not
use sorted input for the Joiner transformation.
Joiner_insertInDetailCache Number of times the Integration Service wrote to the detail cache.
The Integration Service generates this counter if you join data from
a single source.
The Integration Service generates this counter when you use
sorted input for the Joiner transformation.
Joiner_duplicaterowsused Number of times the Integration Service used the duplicate rows in
the master relation.
If you have multiple source qualifiers and targets, evaluate them as a whole. For source qualifiers and
targets, a high value is considered 80-100 percent. Low is considered 0-20 percent.
Log events for workflows include information about tasks performed by the Integration Service, workflow
processing, and workflow errors. Log events for sessions include information about the tasks performed by
the Integration Service, session errors, and load summary and transformation statistics for the session.
You can view log events for workflows with the Log Events window in the Workflow Monitor. The Log Events
window displays information about log events including severity level, message code, run time, workflow
name, and session name. For session logs, you can set the tracing level to log more information. All log
events display severity regardless of tracing level.
The following steps describe how the Log Manager processes session and workflow logs:
1. The Integration Service writes binary log files on the node. It sends information about the sessions and
workflows to the Log Manager.
2. The Log Manager stores information about workflow and session logs in the domain configuration
database. The domain configuration database stores information such as the path to the log file
location, the node that contains the log, and the Integration Service that created the log.
3. When you view a session or workflow in the Log Events window, the Log Manager retrieves the
information from the domain configuration database to determine the location of the session or
workflow logs.
4. The Log Manager dispatches a Log Agent to retrieve the log events on each node to display in the Log
Events window.
To access log events for more than the last workflow run, you can configure sessions and workflows to
archive logs by time stamp. You can also configure a workflow to produce text log files. You can archive text
log files by run or by time stamp. When you configure the workflow or session to produce text log files, the
Integration Service creates the binary log and the text log file.
You can limit the size of session logs for long-running and real-time sessions. You can limit the log size by
configuring a maximum time frame or a maximum file size. When a log reaches the maximum size, the
Integration Service starts a new log.
Log Events
You can view log events in the Workflow Monitor Log Events window and you can view them as text files. The
Log Events window displays log events in a tabular format.
Log Codes
Use log events to determine the cause of workflow or session problems. To resolve problems, locate the
relevant log codes and text prefixes in the workflow and session log.
The Integration Service precedes each workflow and session log event with a thread identification, a code,
and a number. The code defines a group of messages for a process. The number defines a message. The
message can provide general information or it can be an error message.
Some log events are embedded within other log events. For example, a code CMN_1039 might contain
informational messages from Microsoft SQL Server.
Message Severity
The Log Events window categorizes workflow and session log events into severity levels. It prioritizes error
severity based on the embedded message type. The error severity level appears with log events in the Log
Events window in the Workflow Monitor. It also appears with messages in the workflow and session log files.
Note: If you cannot view all the workflow log messages when the error severity level is at warning, change the
error severity level of the workflow log. Change the log level from warning to info in the advanced properties
of the PowerCenter Integration Service process.
FATAL Fatal error occurred. Fatal error messages have the highest severity level.
ERROR Indicates the service failed to perform an operation or respond to a request from a client
application. Error messages have the second highest severity level.
WARNING Indicates the service is performing an operation that may cause an error. This can cause
repository inconsistencies. Warning messages have the third highest severity level.
INFO Indicates the service is performing an operation that does not indicate errors or problems.
Information messages have the third lowest severity level.
TRACE Indicates service operations at a more specific level than Information. Trace messages generally
record message sizes. Trace messages have the second lowest severity level.
DEBUG Indicates service operations at the thread level. Debug messages generally record the success or
failure of service operations. Debug messages have the lowest severity level.
Writing Logs
The Integration Service writes the workflow and session logs as binary files on the node where the service
process runs. It adds a .bin extension to the log file name you configure in the session and workflow
properties.
When you run a session on a grid, the Integration Service creates one session log for each DTM process. The
log file on the primary node has the configured log file name. The log file on a worker node has a .w<Partition
Group Id> extension:
<session or workflow name>.w<Partition Group ID>.bin
For example, if you run the session s_m_PhoneList on a grid with three nodes, the session log files use the
names, s_m_PhoneList.bin, s_m_PhoneList.w1.bin, and s_m_PhoneList.w2.bin.
When you rerun a session or workflow, the Integration Service overwrites the binary log file unless you
choose to save workflow logs by time stamp. When you save workflow logs by time stamp, the Integration
Service adds a time stamp to the log file name and archives them.
To view log files for more than one run, configure the workflow or session to create log files.
A workflow or session continues to run even if there are errors while writing to the log file after the workflow
or session initializes. If the log file is incomplete, the Log Events window cannot display all the log events.
The Integration Service starts a new log file for each workflow and session run. When you recover a workflow
or session, the Integration Service appends a recovery.time stamp extension to the file name for the recovery
run.
For real-time sessions, the Integration Service overwrites the log file when you restart a session in cold start
mode or when you restart a JMS or WebSphere MQ session that does not have recovery data. The Integration
Service appends the log file when you restart a JMS or WebSphere MQ session that has recovery data.
To convert the binary file to a text file, use the infacmd convertLog or the infacmd GetLog command.
The Session Log Interface lets you pass session event messages, but not workflow event messages, to an
external shared library.
You can perform the following tasks in the Log Events window:
• Save log events to file. Click Save As to save log events as a binary, text, or XML file.
• Copy log event text to a file. Click Copy to copy one or more log events and paste them into a text file.
• Sort log events. Click a column heading to sort log events.
• Search for log events. Click Find to search for text in log events.
• Refresh log events. Click Refresh to view updated log events during a workflow or session run.
Note: When you view a log larger than 2 GB, the Log Events window displays a warning that the file might be
too large for system memory. If you continue, the Log Events window might shut down unexpectedly.
By default, the Integration Service writes log files based on the Integration Service code page. If you enable
the LogInUTF8 option in the Advanced Properties for the Integration Service, the Integration Service writes to
the logs using the UTF-8 character set. If you configure the Integration Service to run in ASCII mode, it sorts
all character data using a binary sort order even if you select a different sort order in the session properties.
• Write Backward Compatible Log File. Select this option to create a text file for workflow or session logs.
If you do not select the option, the Integration Service creates the binary log only.
• Log File Directory. The directory where you want the log file created. By default, the Integration Service
writes the workflow log file in the directory specified in the service process variable, $PMWorkflowLogDir.
It writes the session log file in the directory specified in the service process variable, $PMSessionLogDir.
If you enter a directory name that the Integration Service cannot access, the workflow or session fails.
The following table shows the default location for each type of log file and the associated service process
variables:
Workflow log: the default directory is the service process variable $PMWorkflowLogDir, and the default value for the service process variable is $PMRootDir/WorkflowLogs.
Session log: the default directory is the service process variable $PMSessionLogDir, and the default value for the service process variable is $PMRootDir/SessLogs.
Note: The Integration Service stores the workflow and session log names in the domain configuration
database. If you want to use Unicode characters in the workflow or session log file names, the domain
configuration database must be a Unicode database.
To create a log file for more than one workflow or session run, configure the workflow or session to archive
logs in the following ways:
• By run. Archive text log files by run. Configure a number of text logs to save.
• By time stamp. Archive binary logs and text files by time stamp. The Integration Service saves an
unlimited number of logs and labels them by time stamp. When you configure the workflow or session to
archive by time stamp, the Integration Service always archives binary logs.
Note: When you run concurrent workflows with the same instance name, the Integration Service appends a
timestamp to the log file name, even if you configure the workflow to archive logs by run.
The Integration Service uses the following naming convention to create historical logs:
<session or workflow name>.n
where n=0 for the first historical log. The variable increments by one for each workflow or session run.
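For example, under this convention, the first three historical logs for the session s_m_PhoneList would be named s_m_PhoneList.0, s_m_PhoneList.1, and s_m_PhoneList.2.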
If you run a session on a grid, the worker service processes use the following naming convention for a
session:
<session name>.n.w<DTM ID>
If you archive log files by time stamp, the Integration Service appends a time stamp in the following format to the log file name:
<session or workflow name>.yyyymmddhhmi
<session or workflow name>.yyyymmddhhmi.bin
where:
• yyyy = year
• mm = month, ranging from 01-12
• dd = day, ranging from 01-31
• hh = hour, ranging from 00-23
• mi = minute, ranging from 00-59
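For example, under this convention, the logs for the session s_m_PhoneList archived at 10:30 a.m. on April 15, 2021 would be named:
s_m_PhoneList.202104151030
s_m_PhoneList.202104151030.bin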
To prevent filling the log directory, periodically purge or back up log files when using the time stamp option.
If you run a session on a grid, the worker service processes use the following naming convention for
sessions:
<session name>.yyyymmddhhmi.w<DTM ID>
<session name>.yyyymmddhhmi.w<DTM ID>.bin
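For example, under the grid convention and the same time stamp as above, the session log files for s_m_PhoneList on worker node 1 would be named:
s_m_PhoneList.202104151030.w1
s_m_PhoneList.202104151030.w1.bin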
When you archive text log files, view the logs by navigating to the workflow or session log folder and viewing
the files in a text reader. When you archive binary log files, you can view the logs by navigating to the
workflow or session log folder and importing the files in the Log Events window. You can archive binary files
when you configure the workflow or session to archive logs by time stamp. You do not have to create text log
files to archive binary files. You might need to archive binary files to send to Informatica Global Customer
Support for review.
Configure the session log to roll over to a new file after the log file reaches a maximum size. Or, configure the
session log to roll over to a new file after a maximum period of time. The Integration Service saves the
previous log files.
You can configure the maximum number of partial log files to save for the session. The Integration Service saves one more log file than the number of files you configure. The Integration Service does not purge the first session log file. The first log file contains details about the session initialization.
The Integration Service names each partial session log file with the following syntax:
<session log file>.part.n
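For example, for a session log file named s_m_PhoneList.log, the partial log files would be named s_m_PhoneList.log.part.1, s_m_PhoneList.log.part.2, and so on (assuming n starts at 1).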
Configure the following attributes on the Advanced settings of the Config Object tab:
• Session Log File Max Size. The maximum number of megabytes for a log file. Configure a maximum size
to enable log file rollover by file size. When the log file reaches the maximum size, the Integration Service
creates a new log file. Default is zero.
• Session Log File Max Time Period. The maximum number of hours that the Integration Service writes to a session log. Configure the maximum time period to enable log file rollover by time. When the period is over, the Integration Service creates another log file. Default is zero.
• Maximum Partial Session Log Files. Maximum number of session log files to save. The Integration
Service overwrites the oldest partial log file if the number of log files has reached the limit. If you
configure a maximum of zero, then the number of session log files is unlimited. Default is one.
Note: You can configure a combination of log file maximum size and log file maximum time. You must
configure one of the properties to enable session log file rollover. If you configure only maximum partial
session log files, log file rollover is not enabled.
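As an illustration (the values here are arbitrary, not recommendations): if you set Session Log File Max Size to 100 and Maximum Partial Session Log Files to 5, the Integration Service starts a new log file each time the current file reaches 100 MB and keeps six log files in total, never purging the first one, which records the session initialization details.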
Write Backward Compatible Workflow Log File. Writes workflow logs to a text log file. Select this option if you want to create a log file in addition to the binary log for the Log Events window.
Workflow Log File Name. Enter a file name or a file name and directory. You can use a service, service process, or user-defined workflow or worklet variable for the workflow log file name. The Integration Service appends this value to that entered in the Workflow Log File Directory field. For example, if you have $PMWorkflowLogDir\ in the Workflow Log File Directory field and enter “logname.txt” in the Workflow Log File Name field, the Integration Service writes logname.txt to the $PMWorkflowLogDir\ directory.
Workflow Log File Directory. Location for the workflow log file. By default, the Integration Service writes the log file in the process variable directory, $PMWorkflowLogDir. If you enter a full directory and file name in the Workflow Log File Name field, clear this field.
Save Workflow Log By. You can create workflow logs according to the following options:
- By Runs. The Integration Service creates a designated number of workflow logs. Configure the number of workflow logs in the Save Workflow Log for These Runs option. The Integration Service does not archive binary logs.
- By Time Stamp. The Integration Service creates a log for all workflows, appending a time stamp to each log. When you save workflow logs by time stamp, the Integration Service archives binary logs and workflow log files.
You can also use the $PMWorkflowLogCount service variable to create the configured number of workflow logs for the Integration Service.
Save Workflow Log for These Runs. Number of historical workflow logs you want the Integration Service to create. The Integration Service creates the number of historical logs you specify, plus the most recent workflow log.
3. Click OK.
Write Backward Compatible Session Log File. Writes session logs to a log file. Select this option if you want to create a log file in addition to the binary log for the Log Events window.
Session Log File Name. By default, the Integration Service uses the session name for the log file name: s_<mapping name>.log. For a debug session, it uses DebugSession_<mapping name>.log. Enter a file name, a file name and directory, or use the $PMSessionLogFile session parameter. The Integration Service appends information in this field to that entered in the Session Log File Directory field. For example, if you have “C:\session_logs\” in the Session Log File Directory field and enter “logname.txt” in the Session Log File Name field, the Integration Service writes logname.txt to the C:\session_logs\ directory. You can also use the $PMSessionLogFile session parameter to represent the name of the session log or the name and location of the session log.
Session Log File Directory. Location for the session log file. By default, the Integration Service writes the log file in the process variable directory, $PMSessionLogDir. If you enter a full directory and file name in the Session Log File Name field, clear this field.
Save Session Log By. You can create session logs according to the following options:
- Session Runs. The Integration Service creates a designated number of session log files. Configure the number of session logs in the Save Session Log for These Runs option. The Integration Service does not archive binary logs.
- Session Time Stamp. The Integration Service creates a log for all sessions, appending a time stamp to each log. When you save a session log by time stamp, the Integration Service archives the binary logs and text log files.
You can also use the $PMSessionLogCount service variable to create the configured number of session logs for the Integration Service.
Save Session Log for These Runs. Number of historical session logs you want the Integration Service to create. The Integration Service creates the number of historical logs you specify, plus the most recent session log.
5. Click OK.
Workflow Logs
Workflow logs contain information about the workflow runs. You can view workflow log events in the Log
Events window of the Workflow Monitor. You can also create an XML, text, or binary log file for workflow log
events.
Workflow logs contain the following information:
• Workflow name
• Workflow status
• Status of tasks and worklets in the workflow
• Start and end times for tasks and worklets
• Results of link conditions
• Errors encountered during the workflow and general information
• Some session messages and errors
Session Logs
Session logs contain information about the tasks that the Integration Service performs during a session, plus
load summary and transformation statistics. By default, the Integration Service creates one session log for
each session it runs. If a workflow contains multiple sessions, the Integration Service creates a separate session log for each session in the workflow.
Related Topics:
• “Log Options Settings” on page 55
Tracing Levels
The amount of detail that logs contain depends on the tracing level that you set. You can configure tracing
levels for each transformation or for the entire session. By default, the Integration Service uses tracing levels
configured in the mapping.
Set the tracing level on the Config Object tab in the session properties.
None. The Integration Service uses the tracing level set in the mapping.
Terse. The Integration Service logs initialization information, error messages, and notification of rejected data.
Normal. The Integration Service logs initialization and status information, errors encountered, and skipped rows due to transformation row errors. Summarizes session results, but not at the level of individual rows.
Verbose Initialization. In addition to normal tracing, the Integration Service logs additional initialization details, names of index and data files used, and detailed transformation statistics.
Verbose Data. In addition to verbose initialization tracing, the Integration Service logs each row that passes into the mapping. Also notes where the Integration Service truncates string data to fit the precision of a column and provides detailed transformation statistics. When you configure the tracing level to verbose data, the Integration Service writes row data for all rows in a block when it processes a transformation.
You can also enter tracing levels for individual transformations in the mapping. When you enter a tracing
level in the session properties, you override tracing levels configured for transformations in the mapping.
To import a binary log file into the Log Events window:
1. If you do not know the session or workflow log file name and location, check the Log File Name and Log
File Directory attributes on the Session or Workflow Properties tab.
If you are running the Integration Service on UNIX and the binary log file is not accessible on the
Windows machine where the PowerCenter client is running, you can transfer the binary log file to the
Windows machine using FTP.
2. In the Workflow Monitor, click Tools > Import Log.
3. Navigate to the session or workflow log file directory.
4. Select the binary log file you want to view.
5. Click Open.
To view a text log file:
1. If you do not know the session or workflow log file name and location, check the Log File Name and Log
File Directory attributes on the Session or Workflow Properties tab.
2. Navigate to the session or workflow log file directory.
The session and workflow log file directory contains the text log files and the binary log files. If you
archive log files, check the file date to find the latest log file for the session.
3. Open the log file in any text editor.
General Tab
The following table describes settings on the General tab:
Rename. You can enter a new name for the session task with the Rename button.
Description. You can enter a description for the session task in the Description field.
Mapping name. Name of the mapping associated with the session task.
Fail Parent if This Task Fails. Fails the parent worklet or workflow if this task fails. Appears only in the Workflow Designer.
Fail Parent if This Task Does Not Run. Fails the parent worklet or workflow if this task does not run. Appears only in the Workflow Designer.
Treat the Input Links as AND or OR. Runs the task when all or one of the input link conditions evaluate to True. Appears only in the Workflow Designer.
Properties Tab
On the Properties tab, you can configure the following settings:
• General Options. General Options settings allow you to configure session log file name, session log file
directory, parameter file name and other general session settings.
• Performance. The Performance settings allow you to increase memory size, collect performance details,
and set configuration parameters.
The following table describes the General Options settings:
Session Log File Name. Enter a file name, a file name and directory, or use the $PMSessionLogFile session parameter. The Integration Service appends information in this field to that entered in the Session Log File Directory field. For example, if you have “C:\session_logs\” in the Session Log File Directory field and enter “logname.txt” in the Session Log File Name field, the Integration Service writes logname.txt to the C:\session_logs\ directory.
Session Log File Directory. Location for the session log file. By default, the Integration Service writes the log file in the service process variable directory, $PMSessionLogDir. If you enter a full directory and file name in the Session Log File Name field, clear this field.
Parameter File Name. The name and directory for the parameter file. Use the parameter file to define session parameters and override values of mapping parameters and variables. You can enter a workflow or worklet variable as the session parameter file name if you configure a workflow to run concurrently, and you want to use different parameter files for the sessions in each workflow run instance.
Enable Test Load. You can configure the Integration Service to perform a test load. With a test load, the Integration Service reads and transforms data without writing to targets. The Integration Service generates all session files and performs all pre- and post-session functions, as if running the full session. Enter the number of source rows you want to test in the Number of Rows to Test field.
Number of Rows to Test. Enter the number of source rows you want the Integration Service to test load.
$Source Connection Value. The database connection you want the Integration Service to use for the $Source connection variable. You can select a relational or application connection object, or you can use the $DBConnectionName or $AppConnectionName session parameter if you want to define the connection value in a parameter file.
$Target Connection Value. The database connection you want the Integration Service to use for the $Target connection variable. You can select a relational or application connection object, or you can use the $DBConnectionName or $AppConnectionName session parameter if you want to define the connection value in a parameter file.
Treat Source Rows As. Indicates how the Integration Service treats all source rows. If the mapping for the session contains an Update Strategy transformation or a Custom transformation configured to set the update strategy, the default option is Data Driven. When you select Data Driven and you load to either a Microsoft SQL Server or Oracle database, you must use a normal load. If you bulk load, the Integration Service fails the session.
Commit Type. Determines if the Integration Service uses a source-based, target-based, or user-defined commit. You can choose source- or target-based commit if the mapping has no Transaction Control transformation or only ineffective Transaction Control transformations. By default, the Integration Service performs a target-based commit. A user-defined commit is enabled by default if the mapping has effective Transaction Control transformations.
Commit Interval. In conjunction with the selected commit type, indicates the number of rows. By default, the Integration Service uses a commit interval of 10,000 rows. This option is not available for user-defined commit.
Commit On End of File. By default, this option is enabled and the Integration Service performs a commit at the end of the file. Clear this option if you want to roll back open transactions. This option is enabled by default for a target-based commit. You cannot disable it.
Rollback Transactions on Errors. The Integration Service rolls back the transaction at the next commit point when it encounters a non-fatal writer error.
Java Classpath. If you enter a Java Classpath in this field, the Java Classpath is added to the beginning of the system classpath when the Integration Service runs the session. Use this option if you use third-party Java packages, built-in Java packages, or custom Java packages in a Java transformation. You can use service process variables to define the classpath. For example, you can use $PMRootDir to define a classpath within the $PMRootDir folder.
The following table describes the Performance settings:
DTM Buffer Size. Amount of memory allocated to the session from the DTM process. By default, the PowerCenter Integration Service determines the DTM buffer size at run time. The Workflow Manager allocates a minimum of 12 MB for DTM buffer memory. You can specify auto or a numeric value. If you enter 2000, the PowerCenter Integration Service interprets the number as 2000 bytes. Append KB, MB, or GB to the value to specify other units. For example, you can specify 512MB. Increase the DTM buffer size in the following circumstances (a worked example follows these settings):
- A session contains large amounts of character data and you configure it to run in Unicode mode. Increase the DTM buffer size to 24MB.
- A session contains n partitions. Increase the DTM buffer size to at least n times the value for the session with one partition.
- A source contains a large binary object with a precision larger than the allocated DTM buffer size. Increase the DTM buffer size so that the session does not fail.
Collect Performance Data. Collects performance details when the session runs. Use the Workflow Monitor to view performance details while the session runs.
Write Performance Data to Repository. Writes performance details for the session to the PowerCenter repository. Write performance details to the repository to view performance details for previous session runs. Use the Workflow Monitor to view performance details for previous session runs.
Session Retry On Deadlock. The PowerCenter Integration Service retries target writes on deadlock for normal load. You can configure the PowerCenter Integration Service to set the number of deadlock retries and the deadlock sleep time period.
Pushdown Optimization. The PowerCenter Integration Service analyzes the transformation logic, mapping, and session configuration to determine the transformation logic it can push to the database. Select one of the following pushdown optimization values:
- None. The PowerCenter Integration Service does not push any transformation logic to the database.
- To Source. The PowerCenter Integration Service pushes as much transformation logic as possible to the source database.
- To Target. The PowerCenter Integration Service pushes as much transformation logic as possible to the target database.
- Full. The PowerCenter Integration Service pushes as much transformation logic as possible to both the source database and target database.
- $$PushdownConfig. The $$PushdownConfig mapping parameter allows you to run the same session with different pushdown optimization configurations at different times.
Default is None.
Allow Temporary View for Pushdown. Allows the PowerCenter Integration Service to create temporary views in the database when it pushes the session to the database. The PowerCenter Integration Service must create a view in the database if the session contains an SQL override, a filtered lookup, or an unconnected lookup.
Allow Temporary Sequence for Pushdown. Allows the PowerCenter Integration Service to create temporary sequence objects in the database. The PowerCenter Integration Service must create a sequence object in the database if the session contains a Sequence Generator transformation.
Session Sort Order. Sort order for the session. The session properties display the options that you can select based on the client locale settings. You can select one of the following values for the sort order:
- 0. BINARY
- 2. SPANISH
- 3. TRADITIONAL_SPANISH
- 4. DANISH
- 5. SWEDISH
- 6. FINNISH
When the PowerCenter Integration Service runs in Unicode mode, it sorts character data in the session using the selected sort order. When the PowerCenter Integration Service runs in ASCII mode, it ignores this setting and uses a binary sort order to sort character data.
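As a worked example of the partition guideline above (the numbers are illustrative, not recommendations): if a session runs with one partition and a 24MB DTM buffer, and you reconfigure it to use four partitions, increase the DTM buffer size to at least 4 x 24MB = 96MB.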
Sources Node
The Sources node lists the mapping sources and displays the settings. To view and configure the settings of a specific source, select the source from the list. You can configure the following settings:
• Readers. Displays the reader that the Integration Service uses with each source instance. The Workflow Manager specifies the necessary reader for each source instance.
• Connections. Displays the source connections. You can choose connection types and connection values.
You can also edit connection object values.
• Properties. Displays source and source qualifier properties. For relational sources, you can override
properties that you configured in the Mapping Designer.
For file sources, you can override properties that you configured in the Source Analyzer. You can also
configure the following session properties for file sources:
Source File Directory. Enter the directory name in this field. By default, the Integration Service looks in the service process variable directory, $PMSourceFileDir, for file sources. If you specify both the directory and file name in the Source Filename field, clear this field. The Integration Service concatenates this field with the Source Filename field when it runs the session. You can also use the $InputFileName session parameter to specify the file directory.
Source Filename. Enter the file name, or file name and path. Optionally use the $InputFileName session parameter for the file name. The Integration Service concatenates this field with the Source File Directory field when it runs the session. For example, if you have “C:\data\” in the Source File Directory field and enter “filename.dat” in the Source Filename field, the Integration Service looks for “C:\data\filename.dat” when it begins the session. By default, the Workflow Manager enters the file name configured in the source definition.
Source Filetype. Indicates whether the source file contains the source data, or a list of files with the same file properties. Select Direct if the source file contains the source data. Select Indirect if the source file contains a list of files; you can configure multiple file sources using a file list. When you select Indirect, the Integration Service finds the file list and reads each listed file when it executes the session.
When you configure a session to extract data from a PowerExchange nonrelational source in batch mode,
you can configure the following session properties for the source:
Schema Name Override. Overrides the schema name in the source PowerExchange data map.
Map Name Override. Overrides the data map name of the source PowerExchange data map.
File Name. For the ADABAS Unload source type, specifies the file name of the unloaded Adabas database. Required for the ADABAS Unload source type.
Database Id Override. For the ADABAS and ADABAS Unload source types, overrides the Adabas database ID in the PowerExchange data map.
File Id Override. For the ADABAS and ADABAS Unload source types, overrides the Adabas file ID in the PowerExchange data map.
DB2 Sub System Id. For the DB2 Datamaps source type, overrides the DB2 subsystem ID in the PowerExchange data map.
DB2 Table name. For the DB2 Datamaps source type, overrides the DB2 table name in the PowerExchange data map.
Unload File Name. For the DB2 Unload Datasets source type, overrides the DB2 unload file name in the PowerExchange data map.
Filter Overrides. Filters the source data that PowerExchange reads based on specific conditions that you define.
PWXPC adds the filter conditions in a WHERE clause on a SELECT SQL statement and then
passes the SQL statement to PowerExchange for processing. You can use any filter condition
syntax that PowerExchange supports for NRDB SQL.
For a single-record source, use the following syntax:
filter_condition
For example, the following filter condition selects records where a column called TYPE has a
value of A or D:
TYPE=‘A’ or TYPE=‘D’
For a multiple-record source, use one of the following syntax alternatives:
filter_condition
group_name1=filter; group_name2=filter;...
The group_name syntax limits the SQL query condition to a specific record in a multi-record
source definition. If you do not use the group_name syntax, the SQL query condition applies to
all records in the multi-record source definition.
For example, to select only records that contain an ID column value of "DBA" for a multi-record
source that has USER1 and USER2 records, specify one of the following SQL query conditions:
USER1=ID=’DBA’;USER2=ID=’DBA’
ID=’DBA’
Note: If you specify both the Filter Overrides attribute and a SQL Query Override attribute that
contains a filtering WHERE clause, the resulting SELECT statement contains a WHERE clause
that uses the AND operator to associate the Filter Overrides filter conditions with the SQL
Query Override conditions. For example:
SELECT * from schema.table WHERE Filter_Overrides_conditions AND
SQL_Query_Override_conditions
IMS Unload File Name. For the IMS source type, an IMS database unload file name. Required if you want to read source data from the backup file instead of from the IMS database. For a multiple-record write to an IMS unload file, required for both the source and target.
IMS AM Override. For the IMS source type, overrides the IMS access method in the imported data map for the
source with the other available access method. The session then uses the override access
method at run time.
- If you imported a source data map that specifies the DL/1 BATCH access method, enter O
to override it with the IMS ODBA access method. For ODBA access, you must also specify
the IMS PSBNAME Override and IMS PCBNAME Override attributes.
- If you imported a source data map that specifies the IMS ODBA access method, enter D to
override it with the DL/1 BATCH access method, which provides DL/I or BMP access. You
must also specify the IMS PCBNUMBER Override attribute.
Important: Before you run the session with an access method override, ensure that you
complete the PowerExchange configuration tasks for the new access method. For example, if
the override is DL/1 BATCH, you must configure LISTENER and NETPORT statements in the
DBMOVER member and configure the netport JCL. If the override is IMS ODBA, you must
perform other configuration tasks. For more information, see "IMS Data Maps" in the
PowerExchange Navigator User Guide.
IMS SSID Override. For the IMS source type, if you imported an IMS ODBA data map for the source and did not override the access method, use this attribute to override the IMS subsystem ID (SSID) from the data map for the session. If you specified ODBA access as an override in the IMS AM Override session attribute, you must enter this value. An SSID is required for ODBA access.
If the session has an IMS unload file source, you can use this override to point to another
IMSID statement in the DBMOVER member for the purpose of changing from one DBD library
to another DBD library. By using the override, you can switch DBD libraries without editing or
adding any IMSID statement and restarting the PowerExchange Listener. For example, use this
override to test changes that you made to a DBD library against an unload file.
If you use a netport job with BMP access to IMS, you can use this override with the %IMSID
substitution variable in the netport JCL to specify an IMS SSID to use for the session. This
override replaces the substitution variable. By using the override with the substitution
variable, you can use the same netport JCL to access multiple IMS environments, such as
development, test, and production environments.
Note: An IMS SSID is not required for DL/I batch access to IMS data or for access to an IMS
unload file.
IMS PSBNAME Override. For the IMS source type, if you imported an IMS ODBA data map for the source and did not override the access method, this value overrides the PSB name from the data map. If you specified ODBA access as an override in the IMS AM Override attribute, you must enter this value. A PSB name is required for ODBA access.
If you use DL/I batch or BMP access and specify this override, you must also specify the
PSB=%PSBNAME substitution variable in the netport JCL. The override value then replaces the
substitution variable in the JCL.
If you specify the PSB=%1 substitution variable instead of PSB=%PSBNAME in the netport JCL,
the session uses the PSB name from the NETPORT statement, if specified. In this case, you
need a separate NETPORT statement for each PSB. To avoid exceeding the limit of ten
NETPORT statements in the DBMOVER member, use this override with %PSBNAME
substitution variable instead.
Note: A PSB name is not used for access to an IMS source unload file.
IMS PCBNAME Override. For the IMS source type, if you imported an IMS ODBA data map for the source and did not override the access method, this value overrides the PCB name from the data map. If you specified ODBA access as an override in the IMS AM Override attribute, you must enter this value. A PCB name is required for ODBA access.
A PCB name is not used for DL/I batch or BMP access or for access to an IMS unload file.
IMS PCBNUMBER Override. For the IMS source type, if you imported a DL/1 BATCH data map for the source and did not override the access method, this value overrides the PCB number from the data map. If you specified DL/I access as an override in the IMS AM Override attribute, you must enter this value. A PCB number is required for DL/I or BMP access.
A PCB number is not used for IMS ODBA access or for access to an IMS unload file.
File Name Override. For the VSAM Files and Sequential Files source types, overrides the data set or file name in the PowerExchange data map. Enter the complete data set or file name.
For i5/OS, the format is: library_name/file_name.
If you select the Filelist File check box, enter the name of a filelist file in this attribute. A filelist file is a list of files.
Filelist File. For the VSAM Files and Sequential Files source types, identifies the file that contains a list of files. Select this attribute only if you entered a filelist file in the File Name Override field.
PWX Partition Strategy. For offloaded DB2 Unload, VSAM Files, and Sequential Files source types, specifies one of the following partitioning strategies:
- Single Connection. PowerExchange creates a single connection to the data source. Any overrides specified for the first partition are used for all partitions. With this option, if you specify any overrides for other partitions that differ from the overrides for the first partition, the session fails with an error message.
- Overrides Driven. If the specified overrides are the same for all partitions, PowerExchange creates a single connection to the data source. If the overrides are not identical for all partitions, PowerExchange creates multiple connections.
Flush After N Blocks. For multiple-record sources, specifies the maximum number of block flushes that can occur without any one block being flushed.
For bulk multiple-record sources, by default, PWXPC flushes blocks of data only when the buffers are completely full or at end-of-file. If some record types do not have as much data as others, flushing might not occur often. In this case, the record types might not have data on the target for a long time, thereby blocking flushes on the writer side.
To ensure that buffers for all record types are flushed at a regular interval, define the Flush After N Blocks session property. This property specifies the maximum number of block flushes that can occur across all record types without any one block being flushed. A value of zero disables this feature and causes flushing to occur only when blocks are full.
Valid values for the property are -1 to 100000.
The default value of -1 works in the following manner:
- For all multiple-record sources that do not use sequence fields, process the same as Flush After N Blocks = 0, which disables this feature and flushes only when blocks are full.
- For all multiple-record sources that use sequence fields, use Flush After N Blocks = 7 * (number of record types in the source).
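As a worked example of the default behavior: with Flush After N Blocks left at -1, a bulk multiple-record source that uses sequence fields and contains three record types is processed as if Flush After N Blocks = 7 * 3 = 21, while the same source without sequence fields is processed as if Flush After N Blocks = 0.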
When you configure a session to extract data from a PowerExchange relational source in batch mode, you
can configure the following session properties for the source:
DB2 Sub System Id. Overrides the DB2 instance name in the PowerExchange data map.
Image Copy Dataset. For DB2 image copy sources, provides the image copy data set name. If not specified and the table is in a non-partitioned table space, the most current image copy data set with TYPE=FULL and SHRLEVEL=REFERENCE is used. If the table is in a partitioned table space, you must specify the Image Copy Dataset attribute.
Disable Consistency Checking. If cleared for a DB2 image copy source, PowerExchange reads the catalog to verify that the DSN of the specified image copy data set is defined with SHRLEVEL=REFERENCE and TYPE=FULL and is an image copy of the specified table. If the DSN is not defined with these properties, the session fails.
If selected, PowerExchange reads the image copy data set regardless of the values of SHRLEVEL and TYPE and without verifying that the object ID in the image copy matches the object ID in the DB2 catalog.
Filter Overrides. Filters the source data that PowerExchange reads based on specified conditions.
PWXPC adds the specified filter conditions to the WHERE clause of the SELECT SQL statement and passes the SQL statement to PowerExchange for processing. You can use any filter condition syntax that PowerExchange supports for NRDB SQL. For more information, see the PowerExchange Reference Manual.
For example, you can select records where a column called TYPE has a value of A or D by
specifying the following filter condition:
TYPE=‘A’ or TYPE=‘D’
Note: If you specify both the Filter Overrides attribute and a SQL Query Override attribute that
contains a filtering WHERE clause, the resulting SELECT statement contains a WHERE clause
that uses the AND operator to associate the Filter Overrides filter conditions with the SQL
Query Override conditions. For example:
SELECT * from schema.table WHERE Filter_Overrides_conditions AND
SQL_Query_Override_conditions
When you create a source definition for a CDC source by using an extraction map and then configure a
session to extract data from the source, you can configure the following session properties for the source:
Schema Name Override. Overrides the schema name in the PowerExchange extraction map.
ADABAS Password. For the Adabas source type, an Adabas password for the source file. If the Adabas FDT for the source file is password-protected, enter the Adabas FDT password.
Note: PowerCenter encrypts the password and displays the encrypted password in the XML file that it generates for the workflow.
Database Id Override. For the Adabas source type, overrides the Adabas database ID in the PowerExchange data map.
File Id Override. For the Adabas source type, overrides the Adabas file ID in the PowerExchange data map.
Library/File Override. For the DB2i5OS Real Time source type, overrides the library and file names in the extraction map. Specify the full library name and file name in the format:
library/file
Alternatively, specify an asterisk (*) wildcard for the library name to retrieve changes for all files of the same file name across multiple libraries.
This attribute overrides the Library/File Override attribute on the application connection.
Source Schema Override. For the Oracle source type, overrides the source schema name.
Filter Overrides. Filters the source data that PowerExchange reads based on specified conditions.
PWXPC adds the specified filter conditions to the WHERE clause of the SELECT SQL statement and passes the SQL statement to PowerExchange for processing. You can use any filter condition syntax that PowerExchange supports for NRDB SQL. For more information, see the PowerExchange Reference Manual.
For example, you can select records where a column called TYPE has a value of A or D by
specifying the following filter condition:
TYPE=‘A’ or TYPE=‘D’
To select change records where columns ID and ACCOUNT have changed, you can use the
DTL__CI columns by specifying the following filter condition:
DTL__CI_ID=‘Y’ and DTL__CI_ACCOUNT=’Y’
Note: If you specify both the Filter Overrides attribute and a SQL Query Override attribute that
contains a filtering WHERE clause, the resulting SELECT statement contains a WHERE clause that
uses the AND operator to associate the Filter Overrides filter conditions with the SQL Query
Override conditions. For example:
SELECT * from schema.table WHERE Filter_Overrides_conditions AND
SQL_Query_Override_conditions
Extraction Map Name. Required. The PowerExchange extraction map name for the CDC source. You must specify the extraction map name for the relational source.
Library/File Override. Optional. For the DB2i5OS Real Time source type, overrides the library and file names in the extraction map. Specify the full library name and file name in the format:
library/file
Alternatively, specify an asterisk (*) wildcard for the library name to retrieve changes for all files of the same file name across multiple libraries.
This attribute overrides the Library/File Override value on the application connection.
Source Schema Override. Optional. For the Oracle Change and Real Time source types, overrides the source schema name.
Targets Node
The Targets node lists the mapping targets and displays the settings. To view and configure the settings of a
specific target, select the target from the list. You can configure the following settings:
• Writers. Displays the writer that the Integration Service uses with each target instance. For relational
targets, you can choose a relational writer or a file writer. Choose a file writer to use an external loader.
After you override a relational target to use a file writer, define the file properties for the target. Click Set
File Properties and choose the target to define.
• Connections. Displays the target connections. You can choose connection types and connection values.
You can also edit connection object values.
• Properties. Displays different properties for different target types. For relational targets, you can override
properties that you configured in the Mapping Designer. You can also configure the following session
properties for relational targets:
Insert. The Integration Service inserts all rows flagged for insert.
Update (as Update). The Integration Service updates all rows flagged for update.
Update (as Insert). The Integration Service inserts all rows flagged for update.
Update (else Insert). The Integration Service updates rows flagged for update if they exist in the target, and inserts remaining rows marked for insert.
Delete. The Integration Service deletes all rows flagged for delete.
Truncate Table. The Integration Service truncates the target before loading.
Reject File Directory. Reject-file directory name. By default, the Integration Service writes all reject files to the service process variable directory, $PMBadFileDir. If you specify both the directory and file name in the Reject Filename field, clear this field. The Integration Service concatenates this field with the Reject Filename field when it runs the session. You can also use the $BadFileName session parameter to specify the file directory.
Reject Filename. File name or file name and path for the reject file. By default, the Integration Service names the reject file after the target instance name: target_name.bad. Optionally, use the $BadFileName session parameter for the file name. The Integration Service concatenates this field with the Reject File Directory field when it runs the session. For example, if you have “C:\reject_file\” in the Reject File Directory field and enter “filename.bad” in the Reject Filename field, the Integration Service writes rejected rows to C:\reject_file\filename.bad.
For file targets, you can override properties that you configured in the Target Designer. You can also
configure the following session properties for file targets:
Merge Partitioned Files. When selected, the Integration Service merges the partitioned target files into one file when the session completes, and then deletes the individual output files. If the Integration Service fails to create the merged file, it does not delete the individual output files. You cannot merge files if the session uses FTP, an external loader, or a message queue.
Merge File Directory. Enter the directory name in this field. By default, the Integration Service writes the merged file in the service process variable directory, $PMTargetFileDir. If you enter a full directory and file name in the Merge File Name field, clear this field.
Merge File Name. Name of the merge file. Default is target_name.out. This property is required if you select Merge Partitioned Files.
Output File Directory. Enter the directory name in this field. By default, the Integration Service writes output files in the service process variable directory, $PMTargetFileDir. If you specify both the directory and file name in the Output Filename field, clear this field. The Integration Service concatenates this field with the Output Filename field when it runs the session. You can also use the $OutputFileName session parameter to specify the file directory.
Output Filename. Enter the file name, or file name and path. Optionally use the $OutputFileName session parameter for the file name. By default, the Workflow Manager names the target file based on the target definition used in the mapping: target_name.out. If the target definition contains a slash character, the Workflow Manager replaces the slash character with an underscore. When you use an external loader to load to an Oracle database, you must specify a file extension. If you do not specify a file extension, the Oracle loader cannot find the flat file and the Integration Service fails the session. The Integration Service concatenates this field with the Output File Directory field when it runs the session. Note: If you specify an absolute path file name when using FTP, the Integration Service ignores the Default Remote Directory specified in the FTP connection. When you specify an absolute path file name, do not use single or double quotes.
Reject File Directory. Enter the directory name in this field. By default, the Integration Service writes all reject files to the service process variable directory, $PMBadFileDir. If you specify both the directory and file name in the Reject Filename field, clear this field. The Integration Service concatenates this field with the Reject Filename field when it runs the session. You can also use the $BadFileName session parameter to specify the file directory.
Reject Filename. Enter the file name, or file name and path. By default, the Integration Service names the reject file after the target instance name: target_name.bad. Optionally use the $BadFileName session parameter for the file name. The Integration Service concatenates this field with the Reject File Directory field when it runs the session. For example, if you have “C:\reject_file\” in the Reject File Directory field and enter “filename.bad” in the Reject Filename field, the Integration Service writes rejected rows to C:\reject_file\filename.bad.
You can configure the following session properties for PowerExchange nonrelational targets:
ADABAS Password. For the ADABAS target type, the Adabas file password. If the ADABAS FDT for the target file is password protected, enter the ADABAS FDT password. Note: PowerCenter encrypts the password and displays the encrypted password in the XML file that it generates for the workflow.
BLKSIZE. For the SEQ target type on z/OS, the z/OS data set block size. Default is 0, which means use the best possible block size. If you select VB for the RECFM value, the actual block size might be up to four bytes greater than the value you specify for BLKSIZE.
DATACLAS. For the SEQ target type on z/OS, the z/OS SMS data class name.
Delete SQL Override. For the ADABAS and VSAM target types, overrides the default Delete SQL that is sent to PowerExchange.
Disp. For the SEQ target type on z/OS, the z/OS data set disposition. Valid values:
- OLD
- SHR
- NEW
- MOD
Default is MOD if the data set exists, and NEW if it does not.
File Name Override. For the SEQ and VSAM target types, overrides the data set or file name in the PowerExchange data map. Enter the complete data set or file name. For i5/OS, use the following format: library_name/file_name.
IMS AM Override. For the IMS target type, overrides the IMS access method in the imported data map for the target with the other allowable access method. The session then uses the override access method at run time.
- If you imported a target data map that specifies the DL/1 BATCH access method, enter O to override it with the IMS ODBA access method. For ODBA access, you must also specify the IMS PSBNAME Override and IMS PCBNAME Override attributes.
- If you imported a target data map that specifies the IMS ODBA access method, enter D to override it with the DL/1 BATCH access method, which provides DL/I or BMP access. You must also specify the IMS PCBNUMBER Override attribute.
Important: Before you run the session with an access method override, ensure that you complete the PowerExchange configuration tasks for the new access method. For example, if the override is DL/1 BATCH, you must configure LISTENER and NETPORT statements in the DBMOVER member and configure the netport JCL. If the override is IMS ODBA, you must perform other configuration tasks. For more information, see "IMS Data Maps" in the PowerExchange Navigator User Guide.
IMS PCBNAME Override. For the IMS target type, if you imported an IMS ODBA data map for the target and did not override the access method, this value overrides the PCB name from the data map. If you specified ODBA access as an override in the IMS AM Override attribute, you must enter this value. A PCB name is required for ODBA access. A PCB name is not used for DL/I or BMP access.
IMS PCBNUMBER Override. For the IMS target type, if you imported a DL/1 BATCH data map for the target and did not override the access method, this value overrides the PCB number from the data map. If you specified DL/I or BMP access as an override in the IMS AM Override attribute, you must enter this value. A PCB number is required for DL/I or BMP access. A PCB number is not used for IMS ODBA access.
IMS PSBNAME Override. If you imported an IMS ODBA data map for the target and did not override the access method, this value overrides the PSB name from the data map. If you specified ODBA access as an override in the IMS AM Override attribute, you must enter this value. A PSB name is required for ODBA access. If you use DL/I batch or BMP access and specify this override, you must also specify the PSB=%PSBNAME substitution variable in the netport JCL. The override value then replaces the substitution variable in the JCL. If you specify the PSB=%1 substitution variable instead of PSB=%PSBNAME in the netport JCL, the session uses the PSB name in the NETPORT statement, if specified. In this case, you need a separate NETPORT statement for each PSB. To avoid exceeding the limit of ten NETPORT statements, use this override with the %PSBNAME substitution variable instead.
IMS SSID Override. For the IMS target type, if you imported an IMS ODBA data map for the target and did not override the access method, use this value to override the IMS subsystem ID (SSID). If you specified ODBA access as an override in the IMS AM Override attribute, you must enter this value. An SSID is required for ODBA access. If you use the IMS DL/1 BATCH access method and a BMP netport job, you can use this override with the %IMSID substitution variable in the netport JCL. This override replaces the substitution variable to specify the IMS SSID to use for the session. By using the substitution variable and override together, you can use the same netport JCL to access multiple IMS environments, such as development, testing, and production environments. Note: An IMS SSID is not required for DL/I batch access to IMS data or for access to an IMS unload file.
Initialize Target. For the VSAM target type, select this option to have PowerExchange allow both inserts and updates into empty VSAM data sets. If this option is not selected, PowerExchange only allows inserts into empty VSAM data sets.
Insert Only. For the ADABAS and VSAM target types, processes updates and deletes as inserts. Note: You must select this option when the target has no keys.
Insert SQL Override. For all nonrelational target types, overrides the default Insert SQL sent to PowerExchange.
LRECL. For the SEQ target type on z/OS, the data set logical record length. This value is ignored if Disp is not MOD or NEW. Default is 256. If you select VB for the RECFM value, specify the maximum number of data bytes in a logical record for LRECL. PowerExchange adds 4 to this value for the record descriptor word (RDW).
Map Name Override. For all nonrelational target types, overrides the target PowerExchange data map name. Note: PWXPC sends the file name that is specified for the source in the mapping unless this name is overridden in the File Name Override attribute.
MGMTCLAS. For the SEQ target type on z/OS, the SMS management class name. This value is ignored if Disp is not MOD or NEW.
MODELDCB. For the SEQ target type on z/OS, the Model DCB for non-SMS-managed GDG data sets. This value is ignored if Disp is not MOD or NEW.
Post SQL. For all nonrelational target types, one or more SQL statements that are executed after the session runs with the target database connection.
Pre SQL. For all nonrelational target types, one or more SQL statements that are executed before the session runs with the target database connection. Note: In certain cases, you must specify the Pre SQL run once per Connection attribute along with the Pre SQL attribute.
Pre SQL run once per Connection. For all nonrelational target types, runs the SQL that you specify in the Pre SQL attribute only once for a connection. Select this attribute in either of the following cases:
- In the Pre SQL attribute for a session that uses writer partitioning, you specify a SQL statement such as CREATEFILE that can run only once for the session. If you do not select Pre SQL run once per Connection, the session tries to run the statement once for each partition.
- In the Pre SQL attribute for a session that performs a multiple-record write, you specify a CREATEFILE statement that creates a new generation of a GDG or creates an empty file. If you do not select Pre SQL run once per Connection, the session creates a generation or tries to create a new empty file for each record that the session writes.
Primary Space. For the SEQ target type on z/OS, the primary space allocation, in the units specified in the Space attribute. This value is ignored if Disp is not MOD or NEW. Default is 1.
RECFM. For the SEQ target type on z/OS, the z/OS record format. Valid values are F, V, FU, FB, VU, VB, FBA, and VBA. This value is ignored if Disp is not MOD or NEW.
Schema Name Override. For all nonrelational target types, overrides the schema name in the target PowerExchange data map. Note: PWXPC sends the file name for the source in the mapping unless this name is overridden in the File Name Override attribute.
Secondary Space. For the SEQ target type on z/OS, the secondary space allocation, in the units specified in the Space attribute. This value is ignored if Disp is not MOD or NEW. Default is 1.
Space. For the SEQ target type on z/OS, the type of units for expressing primary or secondary space for z/OS data sets. Valid values are:
- CYLINDER
- TRACK
This value is ignored if Disp is not MOD or NEW. Default is TRACK.
STORCLAS. For the SEQ target type on z/OS, the SMS storage class name. This value is ignored if Disp is not MOD or NEW.
Truncate target option. For the VSAM target type, truncates, or deletes, table contents before loading new data. Note: VSAM data sets must be defined with the REUSE option for this truncate option to function correctly.
UNIT. For the SEQ target type on z/OS, the z/OS unit type. This value is ignored if Disp is not MOD or NEW. Default is SYSDA.
Update SQL Override. For the ADABAS and VSAM target types, overrides the default Update SQL that is sent to PowerExchange.
Upsert. For the ADABAS and VSAM target types, processes failed inserts as updates and updates as inserts.
VOLSER. For the SEQ target type on z/OS, the volume serial number. This value is ignored if Disp is not MOD or NEW.
Transformations Node
On the Transformations node, you can override transformation properties that you configure in the Designer. The attributes you can configure depend on the type of transformation you select.
Components Tab
In the Components tab, you can configure pre-session shell commands, post-session commands, email
messages if the session succeeds or fails, and variable assignments.
The following table describes the Components tab options:
Task. Configure pre- or post-session shell commands, success or failure email messages, and variable assignments.
Type. Select None if you do not want to configure commands and emails in the Components tab.
For pre- and post-session commands, select Reusable to call an existing reusable Command task as the pre- or post-session shell command. Select Non-Reusable to create pre- or post-session shell commands for this session task.
For success or failure emails, select Reusable to call an existing Email task as the success or failure email. Select Non-Reusable to create email messages for this session task.
The Components tab includes the following tasks:

Pre-Session Command
Shell commands that the Integration Service performs at the beginning of a session.

Post-Session Success Command
Shell commands that the Integration Service performs after the session completes successfully.

Post-Session Failure Command
Shell commands that the Integration Service performs if the session fails.

On Success Email
Email message that the Integration Service sends if the session completes successfully.

On Failure Email
Email message that the Integration Service sends if the session fails.

Pre-session variable assignment
Assign values to mapping parameters, mapping variables, and session parameters before a session runs. Read-only for reusable sessions.

Post-session on success variable assignment
Assign values to parent workflow and worklet variables after a session completes successfully. Read-only for reusable sessions.

Post-session on failure variable assignment
Assign values to parent workflow and worklet variables after a session fails. Read-only for reusable sessions.
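For instance, a post-session success command can archive the session's target file by using service process variables, which the Integration Service expands before it runs the command. A minimal sketch, assuming a flat file target named orders.out (the file and archive directory names are hypothetical; $PMTargetFileDir is the built-in target file directory variable):

    cp $PMTargetFileDir/orders.out $PMTargetFileDir/archive/orders_`date +%Y%m%d`.out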
Metadata Extensions tab options:

Extension Name
Name of the metadata extension. Metadata extension names must be unique in a domain.

Reusable
Select to make the metadata extension apply to all objects of this type (reusable). Clear to make the metadata extension apply to this object only (non-reusable).
General Tab
You can change the workflow name and enter a comment for the workflow on the General tab. By default, the General tab appears when you open the workflow properties.

General tab options:

Integration Service
Integration Service that runs the workflow by default. You can also assign an Integration Service when you run the workflow.

Suspension Email
Email message that the Integration Service sends when a task fails and the Integration Service suspends the workflow.

Disabled
Removes the workflow from the schedule. The Integration Service stops running the workflow until you clear the Disabled option.

Suspend on Error
The Integration Service suspends the workflow when a task in the workflow fails.

Web Services
Creates a service workflow. Click Config Service to configure service information.

Configure Concurrent Execution
Enables the Integration Service to run more than one instance of the workflow at a time. You can run multiple instances of the same workflow name, or you can configure a different name and parameter file for each instance. Click Configure Concurrent Execution to configure instance names.

Service Level
Determines the order in which the Load Balancer dispatches tasks from the dispatch queue when multiple tasks are waiting to be dispatched. Default is "Default." You create service levels in the Administrator tool.
Properties Tab
Configure the parameter file name and workflow log options on the Properties tab.

Parameter File Name
Designates the name and directory for the parameter file. Use the parameter file to define workflow variables. A sample parameter file appears after these property descriptions.

Workflow Log File Name
Enter a file name, or a file name and directory. Required. The Integration Service appends information in this field to that entered in the Workflow Log File Directory field. For example, if you have "C:\workflow_logs\" in the Workflow Log File Directory field and enter "logname.txt" in the Workflow Log File Name field, the Integration Service writes logname.txt to the C:\workflow_logs\ directory.

Workflow Log File Directory
Designates a location for the workflow log file. By default, the Integration Service writes the log file in the service variable directory, $PMWorkflowLogDir. If you enter a full directory and file name in the Workflow Log File Name field, clear this field.

Save Workflow Log By
If you select Save Workflow Log by Timestamp, the Integration Service saves all workflow logs, appending a timestamp to each log. If you select Save Workflow Log by Runs, the Integration Service saves a designated number of workflow logs. Configure the number of workflow logs in the Save Workflow Log for These Runs option. You can also use the $PMWorkflowLogCount service variable to save the configured number of workflow logs for the Integration Service.

Save Workflow Log for These Runs
Number of historical workflow logs you want the Integration Service to save. The Integration Service saves the number of historical logs you specify, plus the most recent workflow log. Therefore, if you specify 5 runs, the Integration Service saves the most recent workflow log plus 5 historical logs, for a total of 6 logs. You can specify up to 2,147,483,647 historical logs. If you specify 0 logs, the Integration Service saves only the most recent workflow log.

Enable HA Recovery
Enables workflow recovery. Not available for web service workflows.

Automatically recover terminated tasks
Recovers terminated tasks without user intervention. You must have high availability, and the workflow must still be running. Not available for web service workflows.

Maximum automatic recovery attempts
When you automatically recover terminated tasks, you can choose the number of times the Integration Service attempts to recover the task. Default is 5.
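For reference, a workflow parameter file is a plain-text file. Each section heading names a folder and workflow, and the lines that follow assign values to workflow variables and service variables. A minimal sketch, with hypothetical folder, workflow, and variable names:

    [Sales.WF:wf_daily_load]
    $$LoadDate=2023-01-15
    $PMWorkflowLogDir=C:\workflow_logs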
Scheduler Tab
On the Scheduler tab, you can schedule a workflow to run continuously, run at a given interval, or configure it to be started manually.
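You can also start an unscheduled workflow on demand from the pmcmd command line program. A minimal sketch, with hypothetical service, domain, folder, and workflow names (credentials shown inline for brevity):

    pmcmd startworkflow -sv is_prod -d Domain_dev -u Administrator -p password -f Sales wf_daily_load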
Schedule Options: Run Once/Run Every/Customized Repeat
Required if you select Run On Integration Service Initialization in Run Options. Also required if you do not choose any setting in Run Options. If you select Run Once, the Integration Service runs the workflow once, as scheduled in the scheduler. If you select Run Every, the Integration Service runs the workflow at regular intervals, as configured. If you select Customized Repeat, the Integration Service runs the workflow on the dates and times specified in the Repeat dialog box.

Edit
Required if you select Customized Repeat in Schedule Options. Opens the Repeat dialog box, allowing you to schedule specific dates and times for the workflow to run. The selected scheduler appears at the bottom of the page.

Start Date
Required if you select Run On Integration Service Initialization in Run Options. Also required if you do not choose any setting in Run Options. Indicates the date on which the Integration Service begins scheduling the workflow.

Start Time
Required if you select Run On Integration Service Initialization in Run Options. Also required if you do not choose any setting in Run Options. Indicates the time at which the Integration Service begins scheduling the workflow.

End Options: End On/End After/Forever
Required if the workflow schedule is Run Every or Customized Repeat. If you select End On, the Integration Service stops scheduling the workflow on the selected date. If you select End After, the Integration Service stops scheduling the workflow after the set number of workflow runs. If you select Forever, the Integration Service schedules the workflow as long as the workflow does not fail.
Repeat dialog box options:

Repeat Every
Enter the numeric interval at which you want to schedule the workflow, then select Days, Weeks, or Months, as appropriate. If you select Days, select the appropriate Daily Frequency settings. If you select Weeks, select the appropriate Weekly and Daily Frequency settings. If you select Months, select the appropriate Monthly and Daily Frequency settings.

Weekly
Required to enter a weekly schedule. Select the day or days of the week on which you want to schedule the workflow.

Daily
Enter the number of times you would like the Integration Service to run the workflow on any day the session is scheduled. If you select Run Once, the Integration Service schedules the workflow once on the selected day, at the time entered in the Start Time setting on the Time tab. If you select Run Every, enter Hours and Minutes to define the interval at which the Integration Service runs the workflow. The Integration Service uses the Start Time setting for the first scheduled run of the day and then schedules the workflow at regular intervals for the remainder of the day. If the interval is longer than the time remaining in the day after the start time, the workflow runs only once that day.
Variables Tab
Before using workflow variables, you must declare them on the Variables tab.

Persistent
Indicates whether the Integration Service maintains the value of the variable from the previous workflow run.
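For example, a persistent variable can carry a counter across workflow runs. A sketch with hypothetical names: declare an integer variable $$RunCount on the Variables tab with Persistent selected, increment it in an Assignment task, and test it in a link condition:

    $$RunCount = $$RunCount + 1    (expression in an Assignment task)
    $$RunCount <= 10               (condition on the link that follows)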
Events Tab
Before using the Event-Raise task, declare a user-defined event on the Events tab.
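As a brief illustration, with hypothetical names: declare a user-defined event ev_StageReady on the Events tab. In one branch of the workflow, an Event-Raise task raises ev_StageReady after the staging session succeeds. In a parallel branch, an Event-Wait task configured to wait for the user-defined event ev_StageReady holds the downstream session until the event is raised.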