IICS

The document discusses application integration and how to create and configure various connectors like file, database, API and message queue connectors. It also covers how to create processes to integrate applications and consume APIs. Key steps discussed are creating and testing connectors, designing processes with input/output, and executing processes by calling APIs.


Application Integration

Application Integration is used to create and expose API processes.

We can create a process using Create New Process. Once the process is created, we can configure the input/output parameters as required. Once the process design is complete, we need to publish it and take the URL used to execute the process. In the URL we can pass the parameters we defined while creating the process.

To give access to any user, check the option “Allow anonymous access”.

Use the assignment task to configure the return value as the payload for the HTTP response.
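
As a rough sketch only (the service URL, parameter name, and value below are placeholders, not taken from this document), a published process that allows anonymous access can be executed with a plain HTTP GET, passing the input parameters on the URL:

import requests

# Placeholder service URL of a published Application Integration process.
PROCESS_URL = "https://<pod>.ai.dm-us.informaticacloud.com/active-bpel/public/rt/<org-id>/MyProcess"

# Input parameters defined on the process are passed as query-string values.
resp = requests.get(PROCESS_URL, params={"inputParam": "test-value"}, timeout=30)

print(resp.status_code)
print(resp.text)  # payload configured via the assignment task above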

Application Integration Console: once the above process is configured, we can execute the process and monitor it and its statistics here.

Connectors:

Connectors are the interface used to interact with an API, a data warehouse, or any third-party application.

Types Of Connectors:

Built-in / Native Connectors:

A- Informatica Cloud Connectors (Salesforce, Oracle, ODBC)


B- Listener-Based Connectors
1- File-Based Connectors (AWS S3)
2- Message-Based Connectors (ActiveMQ JMS to process real-time data)

Service Connectors:

Service Connectors are used to connect to web services.

How to Create Connectors:


Connection Properties:

The properties below change based on the connection type we choose.

Metadata: this tab shows the metadata available under the connection object, such as tables, views, etc.
Create Service Connector:

A service connector is used to access data available over HTTP/SOAP.

Create connection using Thomas Bayer API documentation.

In the Action tab we need to specify the connection details:


Test Service Connector:

Once we configure the connector by adding input/output fields, we can test the connector.

The test output also shows the HTTP payload, as below.


Once this configuration is done, we need to create a connection and point it at this service connector, as below.

After this we need to publish the connection.

Creating JDBC Connection:

Click on create connection and enter respective details.

Once you configure the connection, we can see all the objects like views and tables under that connection and schema. We can even drill down and see the columns of a table.
How to Insert data from API to MySQL table:

We can configure the Start step to receive inputs from the API, and those values can be loaded into the table using a Create step, as below.

The values received can be assigned to the table fields in the step/transformation.

Enable OData:

Once you publish the process, we can pass values to the URL and in return get the confirmation message configured in the process.

We can create a JDBC connection that returns table data over OData, with pagination.

All the settings below must be done in the JDBC connection.


In response, the service can return the whole table data, which is viewed in XML format on the web page. For that we need to enable “OData Enable” while creating the JDBC connection, and the table used in the process must have a primary key defined. In the URL we can mention which table we want, as long as it belongs to the same JDBC connection. When a user accesses this service, they need to provide the username and password of an IICS login user.

Once we publish the connection, in the same place we can see a Properties option; click on that and copy the web URL used to call the service and get data for the table, like below.

As in the below image, we can call the table and get the complete data for the mentioned table.

Pagination Using Browser Client:

In the below case, odatapaginationtable is the table name; after it we can mention the skip-rows or top-rows option to get specific results.

For example: URL?$top=2, URL?$skip=3, or URL?$skip=3&$top=2
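
As a sketch, the same OData endpoint can be queried from a script; the URL below stands in for the service URL copied from the connection properties, and the credentials are the IICS login user mentioned above:

import requests

ODATA_URL = "https://<service-url>/odata/<jdbc-connection>/odatapaginationtable"  # placeholder
AUTH = ("iics_username", "iics_password")  # IICS login user required by the service

# $top limits the number of rows returned; $skip skips leading rows (server-side pagination).
resp = requests.get(ODATA_URL, params={"$top": 2, "$skip": 3}, auth=AUTH, timeout=30)

print(resp.status_code)
print(resp.text)  # table rows returned as an XML feed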


Creating File Connection:

A file connection can be created to monitor a folder or files, so it is event-based.

Event Source: here we configure how files are read/monitored.

Event Target: here we mention where the file is to be placed after processing.

We can read files from the below types of sources.


Creating Salesforce Connection:

Create a new connection and click Create.

Select Salesforce as the connection type and enter all the required details.

Authenticate either by using username/password or an OAuth connection, and test the same.

How to use Salesforce connection in Process:

We can use the Salesforce connection the same way we used the JDBC connection to insert data from a web service. We can pass the required values using the web URL, and they can be stored in a Salesforce object (similar to a table).

Swagger File:

This file holds all the information related to the input and output fields of the web services and the application objects (like tables). It holds complete details, including data types.
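
As a small illustration (the URL below is a placeholder for the Swagger link shown on the published process, and a Swagger 2.0 "paths" layout is assumed), the generated file is plain JSON and can be inspected from a script:

import requests

SWAGGER_URL = "https://<host>/<published-process>/swagger.json"  # placeholder

doc = requests.get(SWAGGER_URL, timeout=30).json()

# Print each operation with the names of its input parameters.
for path, operations in doc.get("paths", {}).items():
    for method, details in operations.items():
        inputs = [p.get("name") for p in details.get("parameters", [])]
        print(method.upper(), path, "inputs:", inputs)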

Creating AMQP Connection (Messaging Queue):

AMQP is similar to JMS queues; here we can configure all queue-related broker details, such as host name, port, and message payload type (JSON, XML). Once we configure the connection and publish it, it will monitor the queue to read messages. Messaging queues are used to process real-time data from applications such as trading applications. The AMQP connection triggers on arrival of a message in the monitored queue.
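
A minimal sketch of dropping a test message onto such a queue so the published connection picks it up; this assumes a RabbitMQ-style AMQP broker reachable on localhost and the pika client, and the queue name and payload are made up for illustration:

import json
import pika

params = pika.ConnectionParameters(host="localhost", port=5672)
connection = pika.BlockingConnection(params)
channel = connection.channel()

# Hypothetical queue monitored by the AMQP connection.
channel.queue_declare(queue="trade.events", durable=True)

# JSON payload type, as configured on the connection.
channel.basic_publish(exchange="", routing_key="trade.events",
                      body=json.dumps({"tradeId": 1001, "symbol": "INFA", "qty": 50}))

connection.close()
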
Creating Email Connection:

An email connection is used to send an email to chosen recipients with a relevant message. The connection can be created the same as any other connection.

Configure the email connection information as below


We can provide the attachment and recipient details and the type of mail (HTML/Text etc.); all of this is configured in the Metadata tab, as shown in the below image.

Creating AWS S3 Connection:

The S3 connector is used to connect to an AWS S3 bucket, monitor it, and read and write files in the bucket. The S3 connection is created in the usual way; we need to define all the AWS-related details, similar to configuring a flat-file connection.

We need the below details to configure the S3 connection. We can also define a bucket policy while configuring the connection for a newly created bucket; if the specified bucket is not there, the connection will create it according to the policy defined in the connection. Here we can define the polling delay, how the connection reads files, where the file is to be moved, and what happens if a file fails to be consumed.
Creating Process Object:

We can create a set of objects that can be reused many times in a business process; for example, instead of creating an object for each table field, we can create one process object where all the table columns are defined, and that can be used many times in different processes as a process object.

In the below screen we have added all the columns which would be frequently used in the process. Once this object is configured, it can be called in other processes as a process object.

When we create a new process, in the input fields we can directly browse the process object, so it will use those fields as input from the web request.
Once we publish the process, we can see all the input/output details in the Swagger file, as below.

So here we are using a POST web service; for that we need a REST client. We can paste the URL, select POST, and all the data we want to send to the process goes in the BODY in JSON format. We can use any REST client and pass the data as below.
Once the URL is hit, it will execute the process and generate a unique ID for each run, as below. We can get the process execution details by clicking on the ID link.
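
A minimal sketch of that REST-client call in Python; the URL and the field names are placeholders standing in for whatever the process object defines:

import requests

PROCESS_URL = "https://<pod>.ai.dm-us.informaticacloud.com/active-bpel/rt/<org-id>/InsertEmployee"  # placeholder

payload = {"empId": 101, "empName": "John", "deptId": 10}  # fields defined in the process object

# Body is sent as JSON, exactly as described above; add auth if the process is not anonymous.
resp = requests.post(PROCESS_URL, json=payload, timeout=30)

print(resp.status_code)
print(resp.text)  # confirmation / run details returned by the process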

Service Connector (Manual):

We can create a service connector manually for a third party. For example, to connect to the Thomas Bayer service (a third-party service), we first need to create the service connector manually. Once that is created, we can use it in our process to fetch data from the third-party service.

We can define all the actions we want to perform; in the Thomas Bayer case we are creating actions to get customer details and to get invoice details, like below.
For each action we can configure the input fields (required by the service) and output fields (values returned from the service). Once this is done we can publish this service connector.

Once the service connector is created, we need to create a connection for that third-party service; the same can be used in a process.

How to use the service connector in your process:

1- Create a new process; the Start and End steps are there by default.
2- Configure the input value received from the URL hit; in this case the invoice ID will be the input.
3- Configure the output value; this is the output of the Thomas Bayer service, sent as the response to the URL hit.
4- Now add a Service step between the Start and End steps.
A- Here we need to set the connection to read data from (in this case the connection created for
the Thomas Bayer service).
B- Select the action from the service which we want to execute as part of this Service step.

C- Configure the input for the service as the input received from the HTTP request.
D- Similarly, we can add one more Service step to call the second action from our connection,
that is Get Customer Details, by passing the customer ID; here we can pass the customer
value as input, which is the output of the previous service step, Get Invoice.
5- Now publish the process.
6- Copy the URL and pass the invoice ID; we get the customer details for that invoice ID (the chained calls are sketched just after this list).

7- Once execution completes, the process status is shown on the Application Integration Console.
Click on the unique ID to view all the details, as below.
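
Roughly, the two Service steps behave like the chained calls below; the endpoints and field names are purely illustrative placeholders, not the real Thomas Bayer API:

import requests

BASE = "https://<thomas-bayer-service>"  # placeholder base URL of the third-party service

# Service step 1: Get Invoice - the invoice ID comes from the HTTP request input.
invoice = requests.get(f"{BASE}/invoice/2001", timeout=30).json()
customer_id = invoice["customerId"]  # output field of the first step

# Service step 2: Get Customer Details - input is the customer ID produced by step 1.
customer = requests.get(f"{BASE}/customer/{customer_id}", timeout=30).json()
print(customer)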

Sample API for testing:

API Rest Client: https://extendsclass.com/rest-client-online.html

GET API:

https://reqres.in/api/products/3

https://reqres.in/api/users/2

POST API:
https://reqres.in/api/users , https://reqres.in/api/register

data: { "name": "paul rudd", "movies": ["I Love You Man", "Role Models"] }

data: {"email":"[email protected]","password":"pistol"}

What Is The Guide:

A Guide is a set of screens that prompts the user to review, enter, or confirm data. A guide is designed the same way we design an application process, using decision steps and input/output buttons, so it displays messages and data based on user input and guides the user.

Guides are used for call-center automation: showing customer information and automated replies based on user inputs.
Data Integration
IICS licenses and the services allowed in each license pack.

Informatica Cloud Application Architecture:


ICS Repository:

It holds all the metadata related to mappings, tasks, and taskflows in a database. For example, source table and field details and connection data are stored in the ICS repository, and it cannot be accessed directly by the end user.

What Is the Secure Agent?

The Secure Agent is a lightweight application that makes the connection between your local machine and Informatica resources like the repository. The Secure Agent is installed on a local machine, and it can access all the resources that are accessible to that machine; for example, all localhost databases can be accessed in IICS using the Secure Agent. We can create multiple Secure Agents in IICS and use them to access resources across multiple systems. The Secure Agent is responsible for moving data from source to target, and it uses the resources of the machine where it is installed (the power of the local machine). The Secure Agent is like the Integration Service in PowerCenter. Your application data is never stored on Informatica Cloud. The number of agents that can be installed depends on your licence agreement. The Secure Agent can be downloaded and installed through your IICS login. Once the Secure Agent is installed on your local machine, we can check and manage its status (restart, stop). On the Secure Agent machine all directory structures are laid out, such as source files, target files, and parameter files. Also make sure the service logs in with Administrator credentials, as it accesses Windows directories for multiple operations.
Data Synchronization:

The data synchronization task is used to sync objects from source to target. A lookup operation can also be done using a synchronization task: while configuring the fields we can call the lookup and mention all the lookup details. The task has expression and lookup options at each field level, as mentioned below.
The data synchronization task is used to configure a data load without building a mapping; we just need to configure the source and target details and the objects to be synchronized. We can join up to 5 objects for a load within the same connection; multiple objects across connections are not allowed in a synchronization task. The join can be analyzed automatically based on the primary/foreign keys defined in the database. We can also define the same relation on the source tables, or we can add a user-defined join condition while configuring the task.

Saved Queries:

Saved queries are SQL overrides we can use to perform operations that cannot be done in a general synchronization task. The synchronization task just loads data from source to target; if the user wants some logic applied to the source columns, like SUM, AVG, or any other operation, we can use the Saved Queries option.

Create Saved Queries:

New----->Components---->Saved Queries
A lookup on Salesforce always makes an API call, and the number of calls that can be fired against Salesforce is limited. To work around that limitation, we can use a data synchronization task to extract the required columns from Salesforce, write the values to a file, and perform the lookup on the file. This also improves performance.

Replication Application:

The data replication task is used to sync a whole schema. Here we need to define the schema name and the objects we want to sync. All mentioned objects are synced based on the configuration; we don't need to specify the target tables/files, as they are created automatically when the task runs. We can define a prefix, and all source tables will be created in the target with that prefix added.

This task is basically used for data backup or data archival purposes. We can configure it so that even if some loads fail, the remaining objects continue to load. This task creates one flow for each table load, so the number of flows created depends on the number of objects in the replication process. We can exclude fields which are not used.
Data Masking Task:

Data masking is the process where highly confidential data is encrypted before being stored in the database. Data masking uses a set of data dictionary files where the masking rules are defined, and the same are used in the mapping for field masking. All the dictionary files are stored at a location on the Secure Agent. We can create a separate data masking task using the New Components option.

Parameterize Source and Target

The connections in Informatica Cloud Data Integration can be parameterized at the mapping level by creating input parameters. But the values for the parameters defined for connections need to be passed at the Mapping Task level.

However, there is an option to override the connection defined at the Mapping Task level with
values specified in a parameter file at runtime.

Once the connection is parameterized, you cannot select a different source object or make
changes to the field's metadata. Select the actual connection back to make any such changes.

Once the parameter is created at the mapping level, it will ask for a value for that parameter, which is the actual connection name that would be used for the data. Mapping validation is shown below.
In the above case Src_connection is the parameter defined at the mapping level, with the override-at-runtime option checked, which means the value for this parameter will be supplied at run time.

The next step is to define the parameter for the MCT in a parameter file and place it in the correct directory.

#USE_SECTIONS

[Default].[MCT_Employee_Details]

$$Src_Connection=Oracle_SCOTT

[Global]

Once all the setup is done, the task will use the connection from the parameter file, and that can be verified when we run the mapping, as below.

How to handle the Source and Target connections during the migration

Case 1: we can create the same connection in all the environments. Say I have a connection named SRC_connection for my source in DEV, and the same connection exists in QA and PROD; after migration our MCT/mapping will use the same connection but with the QA/PROD credentials. If the connection is not present in the higher environment, it will be copied from the migrated code. However, the password details are not migrated and need to be entered manually in QA.

Case 2: if we have different connections in DEV and QA, then at the time of deployment we can change the connection details to the ones we want for QA/PROD. So it would replace all connections, for example Oracle_DEV with Oracle_QA.

Note: when code is migrated from one environment to another, the connection is copied but it will be invalid because the password is not migrated. Because the connection is invalid, your mappings and MCTs are invalidated automatically, so make sure you open the connection, enter the correct credentials, and validate all components. In both parametrization cases we should have a well-defined connection at the IICS level.

Overriding Data Objects/SQL using Parameter file in Informatica Cloud

Additionally, the data objects (source/Target table name) in Informatica Cloud can also be
parameterized just like connections and can be overridden with values specified in a parameter
file during runtime.

For Object and SQL override we need to select correct Source Type as Single / Query/Multiple at
mapping task level.

When you parameterize the SQL query, the fields in the overridden query must be the same as
the fields in the default query. The task fails if the query in the parameter file contains fewer
fields or is invalid.

Pass data from one Mapping Task to another

The data can be passed from one Mapping task to another in Informatica Cloud Data Integration
through a Task flow using parameters. The Mapping Task which passes the data should have an
In-Out Parameter defined using SetVariable functions. The Mapping Task which receives the data
should either have an Input parameter or an In-Out Parameter defined in the mapping to read
the data passed from upstream task.

Task-1:

Created mapping as below


There is an In-Out Parameter “MAX_HireDate” of type date/time defined in the mapping. The
value to the IO-parameter is passed from expression transformation using SetMaxVariable.

Task 2 :

In the below Task-2 we have used the $$MAX_Date_Hire value.


The IO parameter values calculated in Task-1 can be passed to the Task-2 IO parameter using an Advanced Taskflow. In the Advanced Taskflow, add Task-1 and Task-2 between Start and End using Data Task steps. Select the respective mapping tasks in the Data Task steps as shown below.

Shared Sequences

Shared Sequences are reusable sequences that can be used with multiple Sequence Generator
transformations in Informatica Cloud (IICS). The Sequence Generator transformation inherits the
properties of a shared sequence to generate unique values when a shared sequence is selected.

1. Click on New on home page.


2. From the pop-up, click Components, select Shared Sequence and click Create.
3. Enter Name and define the Sequence Properties and click save.

How to use in mapping:

4. Open the mapping.


5. Add a Sequence generator transformation to the mapping canvas and connect it to a
downstream transformation.
6. Navigate to Sequence tab and select Use a Shared Sequence.
7. Select the Shared Sequence already available. Shared Sequence cannot be created on the fly
from sequence generator.
8. Configure other properties in Advanced tab to use the sequence as either connected or
unconnected transformation.
To reset shared sequence value Click Reset Current Value in the Sequence Properties area.

Difference between Connected and Unconnected Shared Sequence

Sequence generator can be used in two different ways in Informatica cloud. One with Incoming fields
disabled and the other with incoming fields not disabled.

The Shared sequence will inherit the properties of the sequence generator it is attached to i.e.

● A Shared Sequence used in a Sequence Generator with incoming fields not disabled will
generate the same sequence of numbers for each downstream transformation.
● A Shared Sequence used in a Sequence Generator with incoming fields disabled will
generate a unique sequence of numbers for each downstream transformation.

Flow Run Order: Target Load Plan

This is used when we have multiple flows in a mapping; here we can define which flow needs to run first and decide the sequence (which should run first, second, and so on) using the flow run order. This option is available at the mapping level, as displayed below.

Points to remember before configuring the Flow Run order

● If you add a new data flow to the mapping after you configured the flow run order, the new
flow is added to the end of the flow run order by default.
● If there are multiple targets in a single data flow, both targets are loaded concurrently.
● If one of the data flows in the mapping fails, the mapping completely fails and stops
executing the other data flows in the mapping. If the mapping is retriggered, the data
integration starts executing from flow-1 again.
● If the above-mentioned scenario is not desired, create a separate mapping for each data
flow and control the execution order of their mapping tasks through Task Flows or Linear
Task Flows available in data integration.
Expression Macros

Expression is a passive transformation which allows you to create Output Field and Variable
Field types to perform non-aggregate calculations and create new fields in the mapping. Expression
transformation also provides two other field types in Informatica Cloud Data Integration which
are Input Macro Field and Output Macro Field.

A Field expression transforms source data before loading into target. The scope of a field expression
you define for an Output or a Variable field is within that field itself. It cannot be extended for other
fields. To extend the field expression logic to other fields you need to define same logic for each field
separately.

Expression Macros provide a solution to define the field expression logic once and extend them to all
the required fields in expression transformation. They allow you to create repetitive and complex
expressions in mappings.

Macros are basically used to define repetitive patterns for a calculation so we don't have to write the same code again and again.

The expression macros can be implemented in 3 different types.

1. Vertical Macro
A Vertical Macro expands an expression vertically. That implies a vertical macro generates a set of the same expression conditions, one for each of multiple incoming fields.
2. Horizontal Macro
A Horizontal Macro expands an expression horizontally. That implies a horizontal macro generates one extended expression that applies the same logic across multiple incoming fields within a single calculation.

Below are the operations that can be used for a Horizontal Macro.


3. Hybrid Macro

A Hybrid Macro expands an expression both horizontally and vertically.

Hierarchy Parser Transformation

Informatica Cloud supports reading JSON and XML files through Hierarchy Parser transformation in
Cloud Mappings. The Hierarchy Parser transformation reads XML or JSON (hierarchical input) data
from the upstream transformation and provides relational output to the downstream
transformation.

new --> component --> Hierarchical Schema

Once this opens, we need to browse to the JSON file details, validate the component, and save it; it will be used in the mapping at a later phase.

Create a new mapping:

Source: select a flat file connection as the source and select the file; here we have a .txt file which contains the actual location of the JSON file.

Drag and drop the Hierarchy Parser transformation, configure it for the sample hierarchical schema and the path coming from the source file, and select the fields required for the target.
Testing file:

source File :

Hierarchy Builder Transformation

Informatica Cloud supports creating a JSON or XML file through Hierarchy Builder transformation in
Cloud Mappings. Hierarchy Builder transformation processes relational input from the upstream
transformation and provides JSON or XML output to the downstream transformation.

This is just opposite to the Hierarchy Parser transformation discussed in our previous article which is
used to read JSON and XML files and provide relational output.

The same hierarchical schema can be used as the schema for both the Parser and the Builder.

Pushdown Optimization

Pushdown Optimization is a performance tuning technique where all possible logic is pushed to the source, the target, or both sides.

We can configure which pushdown technique is to be used at run time using the $$PushdownConfig parameter at the session level.
Limitations of Pushdown Optimization

1- Source, Target and Lookup databases should be in the same relational database management
system to use Full Pushdown Optimization.

2- Using variable ports in an Expression transformation is not supported by Pushdown Optimization. The solution is to handle the logic in the source query.

3-When the source and target tables belong to two different schemas within the same database,
enable cross-schema pushdown optimization.

Partitioning in Informatica Cloud

Partitioning is nothing but enabling the parallel processing of the data through separate pipelines.
Use a blank value for the start range to indicate the minimum value. Use a blank value for the end
range to indicate the maximum value.

Types of Partitions supported in Informatica Cloud

1. Key Range Partitioning


2. Fixed Partitioning

Fixed Partitioning

Fixed partitioning can be enabled for sources which are not relational or do not support key range partitioning. In fixed partitioning there is no need to define the number of rows that should flow into each partition; that is taken care of by IICS.
Guidelines to Informatica Cloud Partitioning

● The maximum number of partitions that could be specified is 64.


● Partitioning is not supported when mapping uses a parameterized source or source query.
● Partitioning is not supported when mapping includes Hierarchy Parser or Web services
transformation.
● Parameters cannot be used for key range values.
● Currently Partitioning is not supported when using custom query in source transformation in
IICS.

Discovery IQ
Informatica Discovery IQ (IDIQ) is an innovative cloud-based solution that lets you manage, monitor, and troubleshoot your jobs running on IICS by providing self-service, proactive analysis based on industry best practices.

IDIQ is used for:

performance and usage management, generating job-based reports for better analysis, log analysis, and capacity planning.

We can monitor the number of jobs that ran on a particular day, see on which days the load is high or low, and manage the jobs accordingly.

In the same way we can check the load on each Secure Agent and offload some of the jobs from one agent to another based on the load.

For user management, IDIQ has active and inactive user analysis; based on that you can remove inactive users.

Log Analysis Tab:

This is the log analyzer. Here we can give keywords we want to track and select logs by agent, etc. Based on the keywords it shows the number of times that issue has occurred in the logs, lists the last 5 critical errors that happened in the recent past along with error trends, and also provides possible solutions to avoid such issues.
Operational Insights
Gives an overview of the last 24 hours or last 7 days of analysis, as below.

Here we can get each job's status, as below.

It also shows the connection-wise number of rows read and written.

It also shows the number of jobs scheduled for each time slot, as below.
Export and Import in IICS using Asset Management utility

The Asset Management CLI V2 utility is like PMREP in PowerCenter. It is used for export/import automation in IICS.

You must use the .exe file below and navigate to its directory using the CLI. Below is the utility for Windows 64-bit.

For other machines you can download it from the link below.
https://github.com/InformaticaCloudApplicationIntegration/Tools/tree/master/IICS%20Asset%20Management%20
CLI/v2

Once you have the utility in a folder and have moved into that folder, you can use the below commands to perform export, import, and various other operations.

Command: Synopsis

IICS: Command line interface for the IICS application.

Export: Exports artifacts from IICS.

Export Status: Gets the status of an export from IICS.

Import: Imports artifacts into IICS.

Import Status: Gets the status of an import into IICS.

List: Lists artifacts in IICS.

Publish: Publishes artifacts in IICS. You can publish connections, service connectors, guides, processes, and taskflows.

Publish Status: Gets the status of a publish job.

Version: Prints the application version.

You can use the following flags with the IICS Asset Management CLI V2 utility commands, which give much more flexibility and usability.

Option Description DataType

-a artifacts stringArray

-f artifactsFile string

-h help

-m maxWaitTime int
-n name string

-p password string

-P pollingInterval int

-r region string

-s sync

-u username string

-z zipFilePath string

-v logLevel string

-I ID string

-a artifacts

Defines the list of artifacts to be used. The artifacts are to be represented in a normalized form, see
example below. It will be the path of the artifact from the root, followed by a period (.) character and
then artifact type. The following are the list of available artifact types:

● AI_CONNECTION (for application integration connections)


● AI_SERVICE_CONNECTOR (for service connectors)
● DTEMPLATE (for mappings)
● GUIDE (for guides)
● PROCESS (for processes)
● PROCESS_OBJECT (for process objects)
● MAPPLET (for mapplets)
● MTT (for mapping tasks)
● DSS (for synchronization tasks)
● DRS (for replication tasks)
● DMASK (for masking tasks)
● FWCONFIG (for fixed width file formats)
● VISIOTEMPLATE (for visio templates)
● PCS (for powercenter tasks)
● CustomSource (for saved query)
● WORKFLOW (for linear taskflows)
● TASKFLOW (for taskflows)
● FOLDER (for folders)
● PROJECT (for projects)
Examples:
Explore/MyProjectName.Project
Explore/ProjectName/MyFolderName.Folder
Explore/ProjectName/MyMappingName.Mapping

If you want to specify multiple artifacts, use the flag multiple times (for example, -a artifact1 -a
artifact2). If the path values contain spaces, you must enclose them within double quotes.
-f artifactsFile

Instead of passing each individual artifact as an argument, you can pass path and file name of the file
that contains a list of artifacts to be used.

-h help

Display help for the command

-n name

Name of the request. This is used as the name of the import/export job in IICS My Import/Export
Logs.

-u username

Username to login to IICS. Mandatory.

-p password

Password to login to IICS. Mandatory.

-r region

IICS region to which the org belongs. For example, us, eu, ap. Mandatory.

-z zipFilePath

Location (including name) of the zip file to create.

-v logLevel

Log level with which the command is run. The value can be error, warn, info, or trace.
● Trace indicates the maximum log level and all log messages will be displayed.
● Default log level is “info”.
● If set to warn, both warn and error messages will be displayed.
-I ID

ID of the original request. Available only after the original request is submitted.

-s sync

Optional. Controls whether the command is blocking or non-blocking.

● If set to true, the command will be a blocking call. The command will issue the
request and wait for the action to be completed.
● If set to false, the command will be a non-blocking call. The command will issue the
request, but not wait for the action to complete. It will return the request id.
● You can use the request id in subsequent operations such as checking the status of
the request. (default true)
-P pollingInterval

Optional. Indicates how often to poll for status (in seconds). Applicable only with –sync. (default 10)

-m maxWaitTime

Optional. Indicates the maximum time (in seconds) to wait for the request to complete. Applicable
only with –sync. (default 120)

Method-1: Passing Individual Artifacts

The syntax to export individual artifacts from IICS is as mentioned below

iics export -n <request name> -u <username> -p <password> -r <region> -a <artifact1> -a <artifact2> -z <zip file name with path>

In the example below, two mappings “m_Test_Mapping_1” and “m_Test_Mapping_2” are exported
from “Default” project into a zip file as “C:\IICS_Artifacts\IICS_CLI_EXPORT.zip”. The name of the
export job is mentioned as “IICS_CLI_EXPORT”
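
A sketch of scripting that same Method-1 export with Python's subprocess module; the credentials and region are placeholders, the iics executable is assumed to be on the PATH, and the artifact suffix follows the artifact type list above (the earlier examples also show a ".Mapping" style path):

import subprocess

cmd = [
    "iics", "export",
    "-n", "IICS_CLI_EXPORT",                              # export job name
    "-u", "<username>", "-p", "<password>", "-r", "us",   # placeholders
    "-a", "Explore/Default/m_Test_Mapping_1.DTEMPLATE",   # DTEMPLATE = mapping artifact type
    "-a", "Explore/Default/m_Test_Mapping_2.DTEMPLATE",
    "-z", r"C:\IICS_Artifacts\IICS_CLI_EXPORT.zip",
]

result = subprocess.run(cmd, capture_output=True, text=True)
print(result.returncode, result.stdout, result.stderr)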

Method-2: Passing Artifacts List in a File

The syntax to export list of artifacts by reading them from a file is as mentioned below

iics export -n <request name> -u <username> -p <password> -r <region> -f <filename with path holding artifacts list> -z <zip file name with path>
In the example below, artifacts from the file “C:\IICS_Artifacts\IICS_ArtifactsList.txt” are exported into a zip file as “C:\IICS_Artifacts\IICS_CLI_EXPORT_LIST.zip”. The name of the export job is mentioned as “IICS_CLI_EXPORT_LIST”.

Automate Imports into Informatica Cloud using CLI Utility

Use IICS Import command to import artifacts into Informatica Cloud.

The syntax to import artifacts into IICS is as mentioned below

iics import -n <request name> -u <username> -p <password> -r <region> -z <zip filename with path>

In the example below, the zip file “C:\IICS_Artifacts\IICS_CLI_EXPORT.zip” is imported into the IICS org. The name of the import job is mentioned as “IICS_CLI_IMPORT”.
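
The matching import can be scripted the same way (again only a sketch, with placeholder credentials and the iics executable assumed to be on the PATH):

import subprocess

cmd = [
    "iics", "import",
    "-n", "IICS_CLI_IMPORT",
    "-u", "<username>", "-p", "<password>", "-r", "us",   # placeholders
    "-z", r"C:\IICS_Artifacts\IICS_CLI_EXPORT.zip",       # zip produced by the export above
]

result = subprocess.run(cmd, capture_output=True, text=True)
print(result.returncode, result.stdout)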

Verify Export and Import Status

iics export status -u <username> -p <password> -r <region> -I <ID of original request>

The example below shows the export status as SUCCESSFUL for the export job with ID
= 0JU6pDESBi6gvq3kLIm7Sn

List Artifacts in Informatica Cloud using CLI Utility

IICS List command lists the artifacts in IICS.

In the earlier discussed export command, the file which contains the list of all artifacts to be
exported can also be created using the CLI utility using IICS List command.
Below are other flags available which can be used with IICS List command

-o outputFile

Location (including name) of the artifacts list file to create. The location can be a relative path or
absolute path. If you do not specify this argument, the command prints the list of IICS artifacts to the
standard output.

-q query

Specifies the queries to filter the IICS artifacts that you want to include in the artifacts list.

● You can define multiple query parameters separated with a space character.
● The command performs an AND operation when you define multiple query
parameters.
● You can use the following parameters to construct the query:
location: Defines where the artifact is stored in IICS. Include the project and folder name in the location.
tag: Defines the tag associated with the asset.
type: Defines the artifact type.
updateTime: Defines a filter condition based on the last updated time for the artifact.
updatedBy: Defines the user name of the user account that last updated the artifact.
If your query parameter includes spaces in the name of the Project and/or Folder, you need to enclose the parameter value in encoded double quotes, replace spaces with +, and replace / with %2F. Similarly, operators such as <, <=, >, >=, and = must be replaced with URL-encoded values.
For example:

● The query parameter “location==Project with space/Folder with space” must be replaced with “location==%22Project+with+space%2FFolder+with+space%22”, where Project with space is the Project name and Folder with space is the Folder name.
● Query parameter “updateTime>=2021-01-28T13:26:12Z” must be replaced with
“updateTime%3E%3D2021-01-28T13:26:12Z”
Note: You must use two == operators between the parameter name and value. If any value contains
space characters, you must enclose the parameter-value pair within double quotes.
The syntax to create a file with artifacts list using IICS List command is as mentioned below

iics list -u <username> -p <password> -r <region> -o <artifacts filename with path> -q <query>
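
A small helper for building the encoded query value described above; Python's urllib.parse.quote_plus happens to produce exactly the encoding shown in the first example (spaces become +, / becomes %2F, double quotes become %22):

from urllib.parse import quote_plus

location = quote_plus('"Project with space/Folder with space"')
query = "location==" + location

print(query)  # location==%22Project+with+space%2FFolder+with+space%22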

Bulk Publish Artifacts in Informatica Cloud using CLI Utility

The IICS Publish command publishes artifacts in IICS. You can publish connections, service
connectors, guides, processes, and taskflows. After the command runs successfully, the status of the
artifact changes to Published in IICS.
The Taskflows imported using the CLI utility are not auto published. Hence IICS Publish command
helps in bulk publishing the taskflows from command line.
The syntax to publish taskflows by passing taskflows names in the command line request is as
mentioned below
iics publish -u <username> -p <password> -r <region> -a <taskflow with path>

Example:
iics publish -u infauser_005 -p Informatica123 -r us -a Explore/Default/TF_Test.TASKFLOW
The syntax to publish taskflows by passing a filename which contains the artifacts list to be published
is as mentioned below
iics publish -u <username> -p <password> -r <region> -f <artifacts filename with path>

Example:
iics publish -u infauser_005 -p Informatica123 -r us -f C:\IICS_Artifacts\IICS_CLI_ArtifactsList.txt

What is RunAJob utility?

The RunAJob utility is a command line tool used to trigger IICS Data Integration tasks. The RunAJob utility internally calls the IICS REST API service to trigger tasks.
One can use REST API to trigger the tasks directly. But a proper understanding of REST framework is
required to implement the tasks.
RunAJob utility Requirements

Your organization must have a license to use the RunAJob utility.


To verify if your organization have the required license, log in to your Informatica account and go
to Administrator > Licenses > Custom Licenses at the bottom of the page.
If your organization has the license to use RunAJob utility, it can be found under the following
location:
<Secure Agent directory>\apps\runAJobCli
The Informatica Intelligent Cloud Services free trial does not provide access to the RunAJob utility by default.
To use the RunAJob utility, the Secure Agent machine must have the latest Java version installed.
4. RunAJob utility Setup

To complete the set up process, you need to create two files and place them in the RunAJob utility
folder (<Secure Agent directory>\apps\runAJobCli)
restenv.properties
log4j.properties
Template files are provided along with the package which can be used to create the required files
and are also available under RunAJob utility folder.
restenv_default.properties
log4j_default.properties
Copy both the template files in the same RunAJob utility folder and rename them to
the restenv.properties and log4j.properties.
4.1. Configuring restenv.properties File

The restenv_default.properties file specifies login credentials and job polling behavior.
The contents of the file will be as below.

baseUrl=https://dm-us.informaticacloud.com/ma
username=username
password=password
ACTIVITYMONITORWAIT=2000
TOTALWAIT=5000
PROXYHOST=
PROXYPORT=
RETRYCOUNT=
Enter the details of the username and password to login and use the default baseUrl in
the restenv.properties file to connect to Informatica Intelligent Cloud Services.
The other parameters are
Parameter: Description

ACTIVITYMONITORWAIT: The amount of time the utility waits before retrying if an internal exception occurs, such as a login failure or network problem. Default is 2000 milliseconds.

TOTALWAIT: The maximum amount of time the utility waits for a job to complete before polling the activity monitor and activity log again for status. Default is 5000 milliseconds.

RETRYCOUNT: The number of times the utility polls for status. This parameter is used for polling the activity monitor and activity log for job status and for internal exceptions such as login failure or network problems. Default is 6.
With the default settings the utility checks the activity monitor every 5 seconds (TOTALWAIT), up to 6 (RETRYCOUNT) times, for the job status. If the job keeps running even after the retry attempts are exhausted, the command exits with return code 6 (which means the job is running) and the job continues to run in Informatica Intelligent Cloud Services.
If there is an error, the utility makes 6 (RETRYCOUNT) retry attempts and waits 2 seconds (ACTIVITYMONITORWAIT) between each retry attempt.

4.2. Configuring log4j.properties File

The log4j.properties file is used to specify the level of detail to return in the log file.
If you want the log to return basic information about the job such as user ID, job ID, and time the
task was initiated, set the level of detail to Info.
If you want the log to return all of the job details, set the level of detail to Debug.
If you are not sure about the settings, copy and leave the file as it is. No extra changes are required
from the user.

5. How to trigger tasks from RunAJob utility?

RunAJob utility comes with preloaded scripts which can be used to trigger the tasks.
If the secure agent is installed on Linux, use cli.sh script to trigger the tasks.
If the secure agent is installed on Windows, use cli.bat script to trigger the tasks.
The syntax to trigger the tasks from RunAJob utility is as follows
cli.sh runAJobCli -n <taskname> -t <tasktype> -w <true/false>
The syntax to trigger taskflow is slightly different and the taskflow must have been published.
cli.sh runAJobCli -t TASKFLOW -un <taskflow name>
The other arguments that can be used in syntax are as follows

Parameter (Argument): Description

Username (-u): Informatica Intelligent Cloud Services username.

Password (-p): Informatica Intelligent Cloud Services password.

TaskId (-i): Required when the command does not include the task name or federated task ID.

Folder Path (-fp): Required when the task is not in the Default folder and the command does not include the federated task ID.

Federated Task ID (-fi): Required when the task is not in the Default folder and the command does not include the folder path.

taskflowUniqueName (-un): Taskflow unique name. Required to run a taskflow.

Task Name (-n): Name of the task.

Task Type (-t): Use one of the following values:
DMASK – Masking task
DRS – Replication task
DSS – Synchronization task
MTT – Mapping task
PCS – PowerCenter task
Workflow – Linear taskflow
TASKFLOW – Taskflow

Wait Flag (-w): Determines whether to wait for the job to complete or run the job in the background.

debug (-d): Display debugging information.

6. RunAJob utility Status Codes


The utility can return the following status codes
Code Description

-1 Exception

0 Success

1 Warning

2 No wait

3 Failure

4 Timeout

5 Error

6 Running
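
A sketch of triggering a mapping task from a script on a Windows Secure Agent machine and interpreting the exit code against the table above; the installation path and task name are placeholders:

import subprocess

RUNAJOB_DIR = r"C:\Informatica Cloud Secure Agent\apps\runAJobCli"  # <Secure Agent directory>\apps\runAJobCli

# -n = task name, -t MTT = mapping task, -w true = wait for completion (see the argument table above).
result = subprocess.run("cli.bat runAJobCli -n mt_load_employees -t MTT -w true",
                        cwd=RUNAJOB_DIR, shell=True, capture_output=True, text=True)

print(result.stdout)

# 0 = success, 3 = failure, 6 = still running after the retry attempts.
if result.returncode == 0:
    print("Task completed successfully")
else:
    print("Task ended with status code", result.returncode)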

Certification Questions
Interview Questions

Q1: What's the goal of Taskflow?

Taskflow aims to help C++ developers quickly implement efficient parallel decomposition strategies
using task-based approaches.

Q2: How do I use Taskflow in my projects?

Taskflow is a header-only library with zero dependencies. The only thing you need is a C++17
compiler. To use Taskflow, simply drop the folder taskflow/ to your project and include taskflow.hpp.

Q3: What is the difference between static tasking and dynamic tasking?

Static tasking refers to those tasks created before execution, while dynamic tasking refers to those
tasks created during the execution of static tasks or dynamic tasks (nested). Dynamic tasks created by
the same task node are grouped together to a sub flow.

Q4: How many tasks can Taskflow handle?


Benchmarks showed Taskflow can efficiently handle millions or billions of tasks (both large and small
tasks) on a machine with up to 64 CPUs.

Q5: What is the weird hex value, like 0x7fc39d402ab0, in the dumped graph?

The hex value represents the memory address of the task. Each task has a
method tf::Task::name(const std::string&) for user to assign a human readable string to ease the
debugging process. If a task is not assigned a name or is an internal node, its address value in the
memory is used instead.

Q6: Does Taskflow have backward compatibility with C++03/98/11/14?

Unfortunately, Taskflow relies heavily on modern C++17 features/idioms/STL and it is very


difficult to provide a version that compiles under older C++ versions.

Q7: How does Taskflow schedule tasks?

Taskflow implemented a very efficient work-stealing scheduler to execute task dependency graphs.
The source code is available at taskflow/core/executor.hpp.

Q8: What is the overhead of taskflow?

Creating a taskflow has certain overhead. For example, creating a task and a dependency takes about
61 and 14 nanoseconds in our system (Intel 4-core CPU at 2.00GHz). The time is amortized over 1M
operations, since we have implemented an object pool to recycle tasks for minimal overhead.

Q9: How does it compare to existing task programming systems?

There is a large amount of work on programming systems (e.g., StarPU, Intel TBB, OpenMP, PaRSEC,
Kokkos, HPX) in the interest of simplifying the programming complexity of parallel and
heterogeneous computing. Each of these systems has its own pros and cons and deserves a reason to
exist. However, they do have some problems, particularly from the standpoint of ease of use, static
control flow, and scheduling efficiency. Taskflow addresses these limitations through a simple,
expressive, and transparent graph programming model.

Q10: Do you try to simplify the GPU kernel programming?

No, we do not develop new programming models to simplify the kernel programming. The rationale
is simple: Writing efficient kernels requires domain-specific knowledge and developers often require
direct access to the native GPU programming interface. High-level kernel programming models or
abstractions all come with restricted applicability. Despite non-trivial kernel programming, we believe
what makes heterogeneous computing difficult are surrounding tasks. A mistake made by task
scheduling can outweigh all speed-up benefits from a highly optimized kernel. Therefore, Taskflow
focuses on heterogeneous tasking that affects the overall system performance to a large extent.

Q11: Do you have any real use cases?


We have applied Taskflow to solve many realistic workloads and demonstrated promising
performance scalability and programming productivity. Please refer to Real Use
Cases and References.

Q12: Who is the Principal Investigator of Taskflow I can talk to?

Please visit this page or email the investigator Dr. Tsung-Wei Huang.

Q13: Who are developing and maintaining Taskflow?

Taskflow is in active development with core functionalities contributed by an academic group at the
University of Utah, led by Dr. Tsung-Wei Huang. While coming out of an academic lab, Taskflow aims
to be industrial-strength and is committed to long-term support.

Q14: Is Taskflow just another API or model?

OK, let me ask this first: Is your new car just another vehicle? Or, is your new home just another place
to live?

The answer to this question is the question itself. As technology advances, we can always find new
ways to solve computational problems and achieve new performance milestones that were
previously out-of-reach.
Q15: How can I contribute?
New contributors are always welcome! Please visit Contributing.

Programming Questions
Q1: What is the difference between Taskflow threads and workers?

The master thread owns the thread pool and can spawn workers to run tasks or shutdown the pool.
Giving taskflow N threads means using N threads to do the work, and there is a total of N+1 threads
(including the master thread) in the program. Please refer to Create an Executor for more details.

Q2: What is the Lifetime of a Task and a Graph?

The lifetime of a task sticks with its parent graph. A task is not destroyed until its parent graph is
destroyed. Please refer to Understand the Lifetime of a Task for more details.

Q3: Is taskflow thread-safe?

No, the taskflow object is not thread-safe. Multiple threads cannot create tasks from the same
taskflow at the same time.

Q4: Is executor thread-safe?


Yes, the executor object is thread-safe. You can have multiple threads submit different taskflows to
the same executor.

Q5: My program hangs and never returns after dispatching a taskflow graph. What's wrong?

When the program hangs forever it is very likely your taskflow graph has a cycle or not properly
conditioned (see Conditional Tasking). Try the tf::Taskflow::dump method to debug the graph before
dispatching your taskflow graph.

Q6: In the following example where B spawns a joined subflow of three tasks B1, B2, and B3, do they
run concurrently with task A?

(Diagram: a taskflow with tasks A, B, C, and D, where B spawns a subflow of B1, B2, and B3.)

No. The subflow is spawned during the execution of B, and at this point A must have finished
because A precedes B. This gives rise to the fact B1 and B2 must run after A.

Q7: What is the purpose of a condition task?

A condition task lets you perform in-task decision making so you can integrate control flow into a
task graph with end-to-end parallelism without synchronizing or partitioning your parallelism across
conditionals.
Q8: Is the program master thread involved in running tasks?

No, the program master thread is not involved in running taskflows. The executor keeps a set of
private worker threads spawned upon construction time to run tasks.

Q9: Are there any limits on the branches of conditional tasking?

No, as long as the return value points to a valid successor, your conditional tasking is valid.

Q10: Why does Taskflow program GPU in a task graph?

We ask users to describe a GPU workload in a task graph and execute it in a second moment. This
organization minimizes kernels launch overhead and allows the GPU runtime (e.g., CUDA) to optimize
the whole workflow.

Q11: Can I limit the concurrency in certain sections of tasks?

Yes, Taskflow provides a lightweight mechanism, tf::Semaphore, for you to limit the maximum
concurrency (i.e., the number of workers) in a section of tasks. Please refer to Limit the Maximum
Concurrency.
Informatica Cloud (IICS)
Professional
Certification-Registration Guide
There are multiple certifications offered by Informatica. The registration
process for all exams is almost similar. Let us discuss in this article how to
register for Informatica Cloud Professional Certification exam.

This is just an informative article. We are not responsible for any payment-related issues. Make sure you read all the instructions provided by Informatica thoroughly and contact the Informatica support team in case of any queries, requests, or issues.

After your successful registration and receiving the confirmation from Informatica, you will have 90 days’ time to complete your certification exam. No scheduling is required. You can take the exam at any point of time in that 90 days’ time frame.

The exam comprises 70 multiple-choice questions to be completed in 90 minutes.

If you fail to achieve the pass percentage (which is 70%), you can take a
second attempt after 2 weeks.

You should complete both your 1st and 2nd attempt within 90
days only. So plan accordingly.
2. Step-by-step Registration Guide
2.1. Create an Informatica Account with your
email-id
● Navigate to the Informatica site to create an account. The account
creation process is straight forward.
2.2. Login to Informatica University
● After successful account creation, navigate to Informatica site and
click Login.
● Enter the email-id and password you have set during account
creation process.
● On the home page you will find links to various Informatica
portals.
● Click on Informatica University.
2.3. Search for the certification exam and add
to cart
● In the Informatica University home page, search for the
certification exam you like to take.

● Click on the certification exam from the auto results.


● You will be redirected to the selected exam page. Click on Add to
Cart.
● The Informatica Cloud certification cost is 340 USD.
● You will be asked if you are registering for yourself or other.
Leave the default selection Myself as it is and click on Submit.
● After submission you will be redirected to Shopping Cart. Verify
the details once again and click on Proceed To Checkout to
make payment.
2.4. Proceed to the payment page
2.4.1 Steps to follow if Credit card payment method is
not enabled.
● On the payment page you will see three payment methods. We
need to select the Credit Card payment method.
If you are outside USA and Canada, there are chances that
you might not see the credit card option enabled when you
click on select as shown below.

● If you see the credit card payment method already enabled, proceed to next step 2.4.2.
● On the Payment page right click on Request Registration
Assistance and select copy email address.
● Compose an email as shown below and send it to the copied
email address which
is [email protected]
Subject: Credit Card Payment Method Not enabled

Body:

Hi Informatica Team,

I am from <your country> and I do not see Credit card payment method
enabled to pay the amount for certification exam - Cloud Data and
Application Integration R34, Professional Certification.

Please enable the credit card payment method.

Please let me know if any further information is required from my side to get this enabled.

Thanks,
Your name.

NOTE: This is just a sample format which you can use. Feel free to compose
your own email if you want to make any changes.
Make sure you send email from the same email-id you are using to register for
the certification exam.

● Now you need to wait for the reply from the Informatica team.
Usually it might take 2-4 working days for them to enable the
credit card payment method.
● Once you receive email confirmation from Informatica team as
below, login to Informatica and go to shopping cart.

● You should see the Credit Card payment method enabled.


2.4.2 Steps to follow for Credit card payment
● Select Credit Card payment method and click Next.
● Enter your Credit card information.
● Enter the address details and click next. Do not worry about
Billing Address and Shipping Address. Just make sure you enter
the address registered with your Credit card.

● You will enter into step-2 of the payment procedure. Check the details and click Place Order.
● In the final step you will receive a payment confirmation. Also look
out for an email confirmation from Informatica.

3. Launching the exam


After successful registration, whenever you decide to take the exam, login to
Informatica and navigate to Informatica University.

Search for the exam as we did in step 2.3 and select the exam from auto
results.
