DevGuide ManageIntContent External
3 Packaging Integration Content in SAP Cloud Platform Integration Web Application. . . . . . . 394
3.1 Creating an Integration Package. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .394
3.2 Importing Integration Packages. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
3.3 Working with an Integration Package. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
3.4 Editing an Integration Package. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
3.5 Update on Integration Packages. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .398
3.6 Exporting Integration Packages. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
4 Developing Integration Content with the SAP Cloud Platform Integration. . . . . . . . . . . . . . . 402
4.1 Understanding the Basic Concepts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .403
Elements of an Integration Flow. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
Adapter and Integration Flow Step Versions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
Versioning of Artifacts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
Product Profiles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .406
Restrictions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
4.2 Generating Integration Content using APIs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
4.3 Working with Prepackaged Integration Content. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
4.4 Add Integration Packages to the Customer Workspace. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
4.5 Creating an Integration Flow. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
4.6 Define Integration Processes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
4.7 Integration Flow Extension. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
Integration Flow Extension - Concepts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
Mapping Extension Step by Step (Demo Example). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
Mapping Extension Step by Step (Example from SAP Hybris C4C). . . . . . . . . . . . . . . . . . . . . . 447
4.8 Content Transport. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
Enabling Content Transport. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
Creating HTTP Destination in Solutions Lifecycle Management. . . . . . . . . . . . . . . . . . . . . . . . 458
Design integration content to specify how messages are exchanged between the connected components.
SAP Cloud Platform Integration provides a set of tools and applications that help you perform end-to-end tasks: developing and deploying, packaging and publishing, and accessing and editing integration content.
This topic provides an overview of the roles, working environments, and tasks involved in managing integration content.
Quick Links:
● Developing Integration Content with the SAP Cloud Platform Integration [page 402]
● Developing Integration Content Using the Eclipse Integration Designer [page 6]
Since the tasks are performed by different roles in different working environments, such as the Integration Designer on the Eclipse platform or the SAP Cloud Platform Integration Web application (based on SAPUI5), the figure below helps you understand the relationship between the roles, tools/applications, and tasks:
SAP Cloud Platform Integration provides integration tools on the Eclipse platform to model integration flows,
configure attributes of the integration flows, and deploy them to the runtime.
You can work with the integration tools in the local development mode, which means that you create an
integration project in your local Eclipse workspace and start developing integration content using the features
available in the Integration Designer perspective. Once the content is ready, you deploy the project to the
runtime in the SAP Cloud Platform Integration infrastructure.
To develop and configure integration content, install the features as described on the installation page SAP
Cloud Platform Integration Tools.
You can use integration flows to implement specific integration patterns, such as mapping or routing.
A graphical editor allows you, the integration developer, to model the message processing steps and specify in
detail what happens to the message during processing.
The following figure provides a simplified and generalized representation of an integration flow.
Connectivity (Adapters)
An integration flow channel allows you to specify which technical protocols should be used to connect a sender
or a receiver to the tenant.
Note
To specify an adapter, click the connection arrow between the sender/receiver and the Integration Process
box.
Note
To insert a step into an integration flow, drag and drop the desired step type from the palette on the right of
the graphical modeling area.
Message Flows
You use message flows to connect various integration flow elements.
The integration framework gives you options to evaluate certain parameters at runtime, which allows you to define sophisticated ways of controlling message processing. There are two kinds of parameter: message headers and Exchange properties.
Note
Note that data written to the message header during a processing step (for example, in a Content
Modifier or Script step) will also be part of the outbound message addressed to a receiver system
(whereas properties will remain within the integration flow and will not be handed over to receivers).
Because of this, it is important to consider the following header size restriction if you are using an
HTTP-based receiver adapter: If the message header exceeds a certain value, the receiver may not be
able to accept the inbound call (this applies to all HTTP-based receiver adapters). The limiting value
depends on the characteristics of the receiver system, but typically ranges between 4 and 16 KB. To
overcome this issue, you can use a subsequent Content Modifier step to delete all headers that are not
supposed to be part of the outbound message.
● Exchange property
For as long as a message is being processed, a data container (referred to as Exchange) is available. This
container is used to store additional data besides the message that is to be processed. An Exchange can be
seen as an abstraction of a message exchange process as it is executed by the Camel framework. An
Exchange is identified uniquely by an Exchange ID. In the Properties area of the Exchange, additional data
can be stored temporarily during message processing. This data is available for the runtime during the
whole duration of the message exchange.
When you use an HTTP-based receiver adapter, Exchange properties are not converted to an HTTP header
for transfer to the receiver.
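To make the header/property distinction concrete, here is a minimal Python sketch (not CPI code; all names and the size limit are invented for illustration) of how only message headers reach an HTTP receiver, and how deleting an oversized internal header, as a Content Modifier step would, keeps the outbound header block within typical limits:

```python
MAX_HEADER_BYTES = 8 * 1024  # illustrative: receivers typically accept 4-16 KB


def header_size(headers):
    """Approximate the byte size of an HTTP header block."""
    return sum(len(f"{k}: {v}\r\n".encode()) for k, v in headers.items())


def build_outbound_request(exchange, drop_headers=()):
    """Sketch: message headers travel to the HTTP receiver, Exchange
    properties do not; drop_headers models a Content Modifier delete step."""
    headers = {k: v for k, v in exchange["headers"].items() if k not in drop_headers}
    return {"body": exchange["body"], "headers": headers}


exchange = {
    "body": "<Order/>",
    "headers": {"Content-Type": "application/xml", "X-Trace": "x" * 20000},
    "properties": {"SAP_MessageType": "OrderCreate"},  # stays inside the flow
}
outbound = build_outbound_request(exchange, drop_headers=("X-Trace",))
assert "properties" not in outbound
assert header_size(outbound["headers"]) < MAX_HEADER_BYTES
```

The sketch only models the behavior described above; in a real integration flow, the Content Modifier step performs the header deletion declaratively.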
You can use the Content Modifier to modify the content of the message header and the Exchange property (as
well as of the message body) at one or more steps during message processing.
Remember
Please do not modify headers or properties prefixed with SAP unless otherwise specified in the document.
If modified it can result in runtime issues during message processing.
You can use the message header and the Exchange property to configure various sophisticated ways of
controlling message processing.
When configuring an integration flow using the modeling user interface, you can define placeholders for
attributes of certain adapters or step types. The value that is actually used for message processing is set
dynamically based on the content of the message. You can use a certain message header or Exchange property
to dynamically set a specific integration flow property.
Another option to derive such data from a message at runtime is to access a certain element in the message
payload.
The following headers and Exchange properties are supported by the integration framework.
Note
A subset of these parameters is provided by the associated Open Source components, such as Apache
Camel.
● CamelXmlSignatureTransformMethods
Specifies transformation methods in a comma-separated list.
You can use this header to specify transformation methods in a comma-separated list. This header will
overwrite the value of the option Transform Method for Payload.
Example
Sample Code
Example of this use case: The XML signature verifier of the receiving system expects an XML signature
as shown in the following code snippet.
The signature is a detached signature, because the signature element is a sibling of the signed element
B. However, the receiving system requires the enveloped-signature transform method to be specified in
the Transforms list. To ensure this, you have to configure a detached signature in the XML Signer step,
then add a Content Modifier step before the XML Signer step, where you specify the header
"CamelXmlSignatureTransformMethods" with the constant value "http://www.w3.org/2000/09/xmldsig#enveloped-signature,http://www.w3.org/TR/2001/REC-xml-c14n-20010315".
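Since the header value is just a comma-separated list of transform-method URIs, composing it can be sketched as follows (plain Python for illustration only; the URIs are the ones from the example above):

```python
# Transform-method URIs from the example above, in the required order:
transforms = [
    "http://www.w3.org/2000/09/xmldsig#enveloped-signature",
    "http://www.w3.org/TR/2001/REC-xml-c14n-20010315",
]

# The header is the comma-separated list (what you would enter as the
# constant value in the Content Modifier step):
headers = {"CamelXmlSignatureTransformMethods": ",".join(transforms)}

assert headers["CamelXmlSignatureTransformMethods"].count(",") == 1
```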
Headers Relevant for Message Signing with XML Advanced Electronic Signature
● CamelXmlSignatureXAdESQualifyingPropertiesId
Specifies the Id attribute value of the QualifyingProperties element.
● CamelXmlSignatureXAdESSignedDataObjectPropertiesId
Specifies the Id attribute value of the SignedDataObjectProperties element.
● CamelXmlSignatureXAdESSignedSignaturePropertiesId
Specifies the Id attribute value of the SignedSignatureProperties element.
● CamelXmlSignatureXAdESDataObjectFormatEncoding
Specifies the value of the Encoding element of the DataObjectFormat element.
● CamelXmlSignatureXAdESNamespace
Overwrites the namespace parameter value.
● CamelSplitIndex
Provides a counter for split items that increases for each Exchange that is split (starts from 0).
● CamelSplitSize
Provides the total number of split items (if you are using stream-based splitting, this header is only
provided for the last item, in other words, for the completed Exchange).
● CamelSplitComplete
Indicates whether an Exchange is the last split.
● CamelCharsetName
Specifies the character encoding to be applied for message processing.
Is relevant for content encoding steps.
● CamelHttpUri
Overrides the existing URI set directly in the endpoint.
This header can be used to dynamically change the URI to be called.
● CamelHttpUrl
Refers to the complete URL called, without query parameters.
For example, CamelHttpUrl=https://test.bsn.neo.ondemand.com/http/hello.
● CamelHttpQuery
Refers to the query string that is contained in the request URL.
In the context of a receiver adapter, this header can be used to dynamically change the URI to be called.
For example, CamelHttpQuery=abcd=1234.
● CamelHttpMethod
Refers to the incoming method names used to make the request. These methods are GET, POST, PUT,
DELETE, and so on.
● CamelServletContextPath
Refers to the path specified in the address field of the channel.
For example, if the address in the channel is /abcd/1234, then CamelServletContextPath is /abcd/1234.
● CamelHttpResponseCode
This header can be used to manually set the HTTP response code.
● Content-Type
HTTP content type that fits to the body of the request.
The content type is composed of two parts: a type and a subtype. For example, image/jpeg (where image is the type and jpeg is the subtype).
Examples:
○ text/plain for unformatted text
○ text/html for text formatted with HTML syntax
○ image/jpeg for a jpeg image file
○ application/json for data in JSON format to be processed by an application that requires this
format
More information on the available types: https://www.w3.org/Protocols/rfc1341/4_Content-Type.html
Note
If transferring text/* content types, you can also specify the character encoding in the HTTP header
using the charset parameter.
The default character encoding that will be applied for text/* content types depends on the HTTP
version: us-ascii for HTTP 1.0 and iso-8859-1 for HTTP 1.1.
Text data in string format is converted using UTF-8 by default during message processing. If you want
to override this behavior, you can use the Content Modifier step and specify the CamelCharsetName
Exchange property. To avoid encoding issues when using this feature together with the HTTP adapter,
consider the following example configuration:
If you use a Content Modifier step and you want to send iso-8859-1-encoded data to a receiver, make
sure that you specify the CamelCharsetName Exchange property (either header or property) as
iso-8859-1. For the Content-Type HTTP header, use text/plain; charset=iso-8859-1.
● Content-Encoding
HTTP content encoding that indicates the encoding used during message transport (for example, gzip for
GZIP file compression).
This information is used by the receiver to retrieve the media type that is referenced by the content-type
header.
If this header is not specified, the default value identity (no compression) is used.
More information: https://tools.ietf.org/html/rfc2616 (section 14.11)
The list of available content encodings is maintained by the Internet Assigned Numbers Authority (IANA). For more information, see: http://www.iana.org/assignments/http-parameters/http-parameters.xhtml#content-coding
● CamelFileName
Overrides the existing file and directory name that is set directly in the endpoint.
This header can be used to dynamically change the name of the file and directory to be called.
Note
The mail adapter supports all possible headers that a mail server or mail client can set. Which headers are actually set depends on the mail server and the mail client. The headers listed below are examples of commonly used headers.
● Subject
Specifies the subject of the e-mail message.
● To
Specifies the e-mail address that the message is sent to.
● Cc
Specifies additional e-mail addresses that receive a copy of the message.
● CamelAggregatedCompletedBy
This header is relevant for use cases with message aggregation.
The header attribute can only have one of the following values:
○ timeout
Processing of the aggregate has been stopped because the configured Completion Timeout has been
reached.
○ predicate
Processing of the aggregate has finished because the Completion Condition has been met.
● JMSTimestamp
Specifies the time when a JMS message was created.
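The charset handling described in the Content-Type note above can be illustrated with a small Python sketch (illustrative only; the sample text is made up): encoding the body with the charset you set via CamelCharsetName and declaring the same charset in the Content-Type header keeps the two consistent.

```python
text = "Grüße"  # sample body text (made up)

# Encode the body with the charset set via CamelCharsetName ...
body = text.encode("iso-8859-1")

# ... and declare the same charset in the Content-Type header:
headers = {"Content-Type": "text/plain; charset=iso-8859-1"}

# The receiver can decode the body using the declared charset.
assert body.decode("iso-8859-1") == text

# The default UTF-8 conversion would produce different bytes for
# non-ASCII characters, which is exactly the mismatch to avoid.
assert body != text.encode("utf-8")
```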
Headers Relevant for the SOAP (SOAP 1.x), SOAP (SAP RM), and IDoc Adapter
● SOAPAction Header
This header is part of the Web service specification.
● SapAuthenticatedUserName
Contains the user name of the client that calls the integration flow.
If the sender channel is configured to use client certificate authentication, no such header is set (as it is not
available in this case).
● SapIDocType <XML Response>
The adapter parses the XML response and generates the SapIDocType header from it.
An example header would be:
SapIDocType WPDTAX01
You can specify one of the following headers (under Message Header in the Name field):
● SAP_ApplicationID
When you monitor the messages at runtime, you can search for all messages whose defined
SAP_ApplicationID has a specific value (displayed as the MessageID attribute in the Message
Monitoring editor).
As Type, select the XPath expression that points to the message element that is to be used as the
application ID.
● SAP_Sender
● SAP_Receiver
● SAP_MessageType
You can use this property to categorize messages.
● SAP_MessageProcessingLogID
You can use this property to read the ID of the message processing log (no write access supported).
If you have specified SAP_Sender or SAP_Receiver, the corresponding values are displayed in the message
processing log. If you change the SAP_Receiver value during message processing, all values are added to the
receiver field in the message processing log as a comma-separated list. If you don't want this behavior, you can
specify the exchange property SAP_ReceiverOverwrite (see below).
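The XPath-based SAP_ApplicationID mechanism can be sketched in Python (the payload and path are invented for illustration; at runtime, CPI evaluates the XPath against the message for you):

```python
import xml.etree.ElementTree as ET

# Invented sample payload; in CPI you would only enter the XPath below.
payload = "<Order><Header><OrderID>4711</OrderID></Header></Order>"

# XPath entered as the SAP_ApplicationID value (Type: XPath):
app_id = ET.fromstring(payload).findtext("./Header/OrderID")

# This value then becomes searchable in message monitoring.
assert app_id == "4711"
```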
Exchange Properties
You can specify one of the following Exchange properties (under Exchange Property in the Name field):
● SAP_CorrelateMPLs
You can use this property to specify whether message processing logs (MPLs) are to be correlated with
each other using a correlation ID.
By default, MPL correlation is switched on. To specify this property, select Constant as Type and enter
True or False as Value.
● SAP_ReceiverOverwrite
Headers that are added to a message using the SAP_Receiver header element during message
processing are appended to the message processing log (MPL).
This behavior is helpful in scenarios such as the multicast pattern, where a message is sent to several receivers and all receivers are to be collected in the MPL (not just the last added header).
By setting the SAP_ReceiverOverwrite exchange property to true, you can change this behavior in
such a way that only the last added header is shown in the MPL.
Note
Example configuration:
Name: SAP_ReceiverOverwrite
Type: Constant
Value: True
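A small Python model (names invented; not CPI code) of how SAP_Receiver values accumulate in the message processing log, and how setting SAP_ReceiverOverwrite to True changes that:

```python
def record_receiver(mpl_receivers, new_receiver, overwrite=False):
    """Model how SAP_Receiver values land in the message processing log."""
    if overwrite:
        return [new_receiver]  # SAP_ReceiverOverwrite = True: keep the last only
    return mpl_receivers + [new_receiver]  # default: values are appended


# Default behavior: all receivers are collected (comma-separated in the MPL).
log = []
for r in ["ReceiverA", "ReceiverB"]:
    log = record_receiver(log, r)
assert ", ".join(log) == "ReceiverA, ReceiverB"

# With SAP_ReceiverOverwrite = True: only the last added value is shown.
log = []
for r in ["ReceiverA", "ReceiverB"]:
    log = record_receiver(log, r, overwrite=True)
assert log == ["ReceiverB"]
```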
Related Information
You can define placeholders for parameters of certain adapters or step types. The values of these parameters
will then dynamically be set based on the content of the processed message.
For example, parameters From, To, Cc, Bcc, Subject, Mail Body as well as the attachment name, can be
dynamically set at runtime from message headers or content.
To set an attribute to be dynamically filled by a message header attribute, enter a variable in the form ${header.attr} in the corresponding field for the attribute of the corresponding step or adapter.
At runtime, the value of the header attribute (attr) of the processed message is written into the field for the
corresponding attribute of the outbound email.
For example, assume that you dynamically define the email Subject of the mail adapter, as shown in the figure below, by the variable ${header.attr}.
At runtime, a message is received whose header contains a header attribute attr with the value value1. The
mail adapter will then dynamically set the subject of the outbound email with the entry value1.
As shown in the figure, we assume that the inbound message contains a header header1 with value value1.
Let us assume that you would like to define the Subject attribute of the mail receiver adapter dynamically via this header. To do that, specify the Subject field with the following entry:
${header.header1}
As a result, the mail adapter dynamically writes value value1 of header header1 (from inbound message) into
the subject of the outbound email.
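A simplified Python sketch of this placeholder resolution (the runtime's actual implementation differs; this only models the ${header.x} substitution described above):

```python
import re


def resolve_placeholders(template, headers, properties=None):
    """Resolve ${header.x} / ${property.x} placeholders the way the
    runtime fills dynamic adapter attributes (simplified sketch)."""
    properties = properties or {}

    def repl(match):
        kind, name = match.group(1), match.group(2)
        source = headers if kind == "header" else properties
        # Unknown names are left untouched in this sketch.
        return str(source.get(name, match.group(0)))

    return re.sub(r"\$\{(header|property)\.(\w+)\}", repl, template)


# The example from the text: header1 = value1 fills the mail Subject.
subject = resolve_placeholders("${header.header1}", {"header1": "value1"})
assert subject == "value1"
```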
Related Information
SAP Cloud Platform Integration provides features on Eclipse to develop and configure integration content.
The feature, called the Integration Designer, provides options to develop integration flows in your local Eclipse workspace, which means that no network connection is required during development. Each integration flow is
associated with a project and can refer to other entities, such as message mappings, operation mappings, and
WSDL definitions, that are available within the same project.
The integration flow can also refer to an entity, such as a value mapping, that is not available within its project. In this case, you create a separate value mapping project so that the reference is resolved across projects within the workspace.
The integration flow along with other referenced entities form the integration content. Once you complete the
development of integration content, you deploy the integration flow project as well as the referenced value
mapping to the runtime.
Note
Another feature, called the Integration Operation Monitoring, provides options to monitor the deployed
integration projects in runtime.
The sections below introduce you to different project types that the tooling provides based on the entities.
The Integration Flow project type contains packages for creating integration content, where each package
consists of a particular entity.
Note
Additional files that are available in the Integration Flow project types are:
The Value Mapping project type is used for scenarios that require you to map different representations of an object to each other. Each value mapping project contains one or more value mapping groups, each of which is a set of values of the object.
value_mapping.xml file: contains the value mapping groups that hold the objects and their corresponding representations.
2.1.3 Restrictions
The Integration Designer allows you to model certain patterns that are handled at runtime in an unexpected way.
Integration flow step with more than one outgoing sequence flow (for example, after a Message Persistence step, the message is supposed to be sent to multiple receivers in parallel):
● Expected behavior: The same message is processed in parallel after the integration flow step.
● Actual behavior: The messages are delivered to the different receivers in a sequence, and the order in which the messages are delivered is random. In addition, the following behavior may occur: the message that results from the processing in the previous sequence flow is taken as input for the next sequence flow.
Note
As an example, consider two parallel sequence flows where the first one contains an encryption step and the second one does not. In that case, the receiver of the second sequence flow also gets an encrypted message (although no encryption step has been configured in the second sequence flow).
● Recommendation: Configure only one outgoing sequence flow and implement parallel processing using a multicast of messages.
If one of the steps listed below is contained in an integration flow, the message is processed in one transaction.
Caution
Such steps might lead to resource shortages because long running transactions can cause node instability
and impede other processes that are running in transactions.
If an error is propagated back to the calling component, all data written in the course of the (failed) transaction is removed (in other words, it is not persisted in the database). For the calling component, an error therefore implies that the integration flow has to be restarted.
Transactional processing also has to be considered in scenarios that contain asynchronous decoupling. Let's assume integration flow A contains a Data Store Operation step, and integration flow B contains a Select operation on the same data store and runs into an error. In that case, the data that has been written to the database by integration flow A is preserved. This behavior is particularly useful when integration flow B changes or deletes the data stored by integration flow A: if integration flow B fails, the original data from integration flow A can still be retrieved.
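The transactional behavior described in the Caution can be modeled with a toy data store in Python (illustrative only; not how the runtime is implemented): a failed transaction discards its staged changes, so data committed earlier by another integration flow is preserved.

```python
class DataStore:
    """Toy data store with per-flow transactional writes (sketch)."""

    def __init__(self):
        self.committed = {}

    def run_transaction(self, operations):
        staged = dict(self.committed)  # work on a copy of the committed state
        try:
            operations(staged)
        except Exception:
            return False  # error: staged changes are discarded (rolled back)
        self.committed = staged  # success: changes are persisted
        return True


store = DataStore()

# Integration flow A writes an entry and succeeds.
store.run_transaction(lambda d: d.update({"entry1": "original"}))


# Integration flow B deletes the entry but then runs into an error.
def flow_b(data):
    del data["entry1"]
    raise RuntimeError("processing error in flow B")


assert store.run_transaction(flow_b) is False
assert store.committed["entry1"] == "original"  # flow A's data is preserved
```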
Additional Restrictions
Usage of an Aggregator step in a Local Integration Process or Exception Subprocess is not supported.
You install the features of SAP Cloud Platform Integration on the Eclipse integrated development environment
(IDE) to access the Integration Designer functions.
Context
Procedure
1. Start Eclipse.
You can make specific settings for SAP Cloud Platform Integration in the Eclipse Preferences.
Context
You perform the tasks below to configure the tool settings with attributes you are likely to need when working
with the Eclipse IDE.
You can find the specific settings for SAP Cloud Platform Integration under Window > Preferences > SAP Cloud Platform Integration.
Related Information
You need to specify the connection from Eclipse to the tenant management node in order to perform tasks
such as deploying integration flows on the tenant.
The tenant management node contains the operations subsystem that is responsible for tasks such as
deploying integration content or selecting monitoring data from the database.
Property Description
Password
Choose Test Connection to test whether the specified URL and user/password enable you to connect to the
tenant management node.
If the URL and user/password are correct but you still get the error message Sending request to server failed. Reason: Error during processing request on client, check the proxy settings in Eclipse (under Window > Preferences > General > Network Connections).
2.2.2.2 Personalize
You can specify personal settings that control how to deal with integration flow templates and how to handle
integration project/integration flow creation.
Personal Settings
Property Description
● Store integration flow templates at: the location on your local computer where integration flow templates are stored.
● Always create integration flow for new integration project: if this checkbox is selected (the default setting), an integration flow is always created for each new integration project.
A product profile is a collection of capabilities, such as the SuccessFactors adapter or the splitter and data store elements, that are available for a particular product. You can consume these capabilities when designing integration flows.
Context
SAP Cloud Platform Integration enables you to design for multiple runtimes at the same time. Select a specific product profile to develop content for the respective runtime.
Note
● If a product profile does not support a particular capability, the checks report errors for unsupported components in the integration flow.
Procedure
Note
○ If no product profile is selected at the project level, the default is SAP Cloud Platform Integration. In that case, the system applies the workspace-level product profile for the integration flow.
○ To import or export a product profile in ZIP file format, use the Import or Export option.
○ If you update the tooling, the cmd folder is placed inside the workspace directory and old profiles cached in the previous cmd location are lost. You can manually copy the old profiles to the new cmd location and restart Eclipse.
○ You can reload component metadata in the following three ways:
○ Reopen the Eclipse tool.
○ Reconnect to the server.
○ Use the icon (on the Eclipse toolbar) to download it manually.
If you need to import interfaces or mappings from an on-premise repository, such as the Enterprise Services
Repository, you have to set the connection details to establish the connection with the repository.
Connection to the Enterprise Services Repository is supported for both the Advanced Adapter Engine Extended (AEX) and dual-stack installations.
Property Description
Password
Choose Test Connection to test whether the specified URL and user/password enable you to connect to the
repository.
You can specify settings that control how integration tests are executed.
Context
You use this task only if you need to uninstall a feature of an installed software plugin.
Context
An integration flow is a graphical representation of how the integration content can be configured to enable the
flow of messages between two or more participants using SAP Cloud Platform Integration, and thus, ensure
successful communication.
You perform this task to create a BPMN 2.0-based integration flow model in your integration project under the
src.main.resources.scenarioflows.integrationflow package. You can create an integration flow by using the
built-in patterns, templates provided by SAP, or user-defined templates.
Note
You can use the templates provided by SAP in the New Integration Flow wizard page to help you create and
modify integration flows based on certain scenarios. These templates are based on some of the SAP
supported scenarios.
Restriction
In this release, SAP supports only a limited number of possible integration scenarios.
1. In the main menu, choose Window > Open Perspective > Integration Designer to open the perspective.
2. In the main menu, choose File > New > Integration Project… to create a new integration project.
3. In the New Integration Project wizard, enter a project name.
Note
○ By default, Node Type is set to IFLMAP, which indicates that the integration flow is deployed to that
node in the runtime environment.
○ Choose a product profile for the integration project from the Product Profile field. The integration flow templates used during creation adhere to the latest version of each component available in the product profile.
4. If you want to add the project to the working set at this point, select the option Add project to working set.
Note
If you do not choose to add the project to the working set in this step, you can add it later. For more
information about working sets, see Creating a Working Set of Projects [page 25].
5. If you want to create an integration flow of a specific pattern for the new integration project, choose Next.
Note
You can also create an integration project together with a point-to-point integration flow. To enable this option, choose Window > Preferences > SAP Cloud Platform Integration > Integration Flow Preferences, and select the Auto-create integration flow on 'Finish' during integration project creation option.
6. In the New Integration Flow page, enter a name for the integration flow.
7. If you want to create an integration flow using the built-in patterns, select the Enterprise Integration
Patterns category and choose the required pattern.
8. If you want to create an integration flow using SAP templates, select the SAP Defined Template category
and choose the required template.
9. If you want to create an integration flow using templates specific to your scenario, select the User Defined
Template category and choose the required template.
Note
You can find the templates in the User Defined Template category only if you have saved an integration
flow as a template. For more information, see Saving Integration Flow as a Template [page 29].
○ The bundle name and bundle ID that you enter get updated in the MANIFEST.MF file.
○ The bundle name and the integration project name are two different attributes.
○ The Node Type shows the runtime node type on which the integration flow is deployed.
○ The description field allows you to enter brief information about the integration project to help
other users understand the use of the project.
12. Click the graphical area, and in the Properties view, select the Integration Flow tab page.
13. Enter a description about the integration flow that provides information to other users who will view or
work with the integration flow.
14. Save the changes.
Context
You perform this task to group projects using the Working Sets feature provided by Eclipse.
For example, you can create a Working Set to group projects by customer, or you can create a Working Set for each specific integration scenario.
Note
The actions available in the context menu of the projects that are added to the Working Set remain as
before.
Procedure
1. In the Project Explorer view of the Integration Designer perspective, select the dropdown icon from the
toolbar of the view.
2. Choose Select Working Set…
3. In the Select Working Set dialog, choose New….
4. In the New Working Set dialog, select Resource and choose Next.
5. Enter a name for the working set.
6. Select a resource from the Working set contents section.
7. Choose Finish.
8. If you want to edit the working set, select the dropdown icon and choose Edit Active Working Set.
9. Select the dropdown icon in the toolbar of the Project Explorer and choose Top Level Elements > Working Sets to display the Working Set and its content in the Project Explorer.
Note
If you want to go back to the default Project Explorer view that displays all the projects irrespective of
the Working Sets, select the dropdown icon in the toolbar of the Project Explorer and choose Deselect
Working Set.
Context
You perform this task to import interfaces and mappings from an On-Premise repository, such as the ES
Repository, into the integration project. In case of mappings, you can import message mappings (.mmap) and
operation mappings (.opmap).
Restriction
See the table below for the list of mapping functionalities that are not supported during import:
Procedure
1. In the Project Explorer, right-click on an integration project and from the context menu choose Import PI
Object.
2. In the Import PI Object dialog, select ES Repository below the object type you want to import.
For example, if you want to import operation mappings, select ES Repository below Operation Mapping
object type.
3. Choose Next.
4. In the Import Mappings from ES Repository dialog, select one or more objects and choose Finish.
Results
The imported objects are placed under their respective src.main.resources.<object> folder. For
example, check the imported mapping under src.main.resources.mapping and imported interface under
src.main.resources.wsdl.
WSDLs/XSDs corresponding to Message Types and Fault Message Types are placed under the src.main.resources.mapping folder; other interfaces are placed under src.main.resources.wsdl.
● If an operation mapping contains a message mapping, the message mapping is downloaded as a JAR under the src.main.resources.mapping package.
● If the operation mapping contains XSLTs, the files are downloaded as .xsl under src.main.resources.mapping.
● Imported source or target WSDLs are not supported in integration flows.
Context
You perform this task if you want to modify the existing integration flow model. For example, if the templates
provided by SAP do not exactly match your requirement, you can modify the integration flow created from the
templates while adhering to SAP's modeling constraints.
To add integration flow elements, you can drag the notations for systems, connections, mappings, routers, and so on, from the palette and drop them at the required location in the integration flow model in the graphical editor.
Alternatively, you can add elements, such as mapping and split, to the integration flow model using the context
menu options available on the connections of the model.
Note
The integration flow must match the SAP-supported scenarios to avoid deployment failure.
Example
Consider an example that requires you to model an integration flow with multiple pools. A scenario with multiple pools may involve any of the following:
● Hosting the same endpoint with different connectors, such as the SFTP and SOAP connectors
● Polling content from different servers
● Grouping similar integration logic that uses different interfaces
The elements that you require to model a multiple-pool integration flow are:
Procedure
Prerequisites
You have specified the location path for storing user-defined templates under Window > Preferences > SAP Cloud Platform Integration > Personalize.
Context
You perform this task to save an integration flow as a template. An integration flow saved as a template retains its attribute values, so you can reuse the template for similar configuration scenarios.
Procedure
1. In the Project Explorer view, open the <integration flow>.iflw from your project.
Note
If the integration flow contains externalized parameters and you do not select this option, the
integration flow gets saved with the most recent values you have assigned to the externalized
parameters.
5. Choose OK.
You can find the new template in the location you have mentioned in the Preferences page. When you create the
integration flow using the template, you can find the saved configurations.
Prerequisites
Make sure you have created a valid integration project with a message mapping and its associated scripts.
Context
In message mapping, you can create your own custom functions using Groovy scripts. You can use these custom functions from the function palette when modeling mapping expressions.
Procedure
The path of the selected project gets added to the Path field, and the selected message mapping page
opens.
6. In the Signature section of the message mapping, select the Source Element (.xsd) and Target Element
(.xsd) by choosing Add…
7. In the Properties section, double-click the Definition tab to see the message mapping.
The Standard Functions and Custom Functions appear on the function palette of the Properties section.
8. Choose Custom Functions.
In the Properties section, if you do not find any Groovy scripts under Custom Functions in the function palette, you can add an existing Groovy script or create a new one.
9. In the context menu of Custom Functions, choose Add to use an existing Groovy script file.
The Add Script dialog box appears.
a. Select a groovy script (.gsh) file, and choose OK.
The selected groovy script along with all the functions that fulfill the message mapping requirements
appear under Custom Functions in the functions palette.
10. In the context menu of Custom Functions, choose Create to create a new groovy script file.
The New Mapping Script dialog box appears.
a. Enter or select the parent folder for the new groovy script file.
b. Enter the name of the new groovy script file in the File name field.
c. Choose Finish.
The new groovy script file appears under Custom Functions in the function palette. You can expand the
new groovy script to see all the functions under it.
In the function palette, you can select any Groovy script under Custom Functions and customize it according to your requirements.
11. If you want to add a function to the Expression editor, select a function under the groovy script in the
function palette.
12. Drag and drop the function from the groovy script to the text area of the expression editor.
13. You can define your mapping logic by using your custom functions.
Note
○ You can validate your mapping and check for errors in the Problems, Console, or Error Log tabs. If your mapping has any errors, they are displayed with error markers.
○ In the function palette, if you have used a function from one of your scripts under Custom Functions, and that script is removed from the function palette, an error marker is displayed.
○ You can access the message processing logs using messageLogFactory.getMessageLog(context);. To access specific logs, refer to the Javadocs.
○ When you launch Eclipse, if you do not see your local workspace, choose File > Switch Workspace > Other... and select the required workspace.
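The log access mentioned in the note above can be illustrated with a short sketch. This is a hedged example only: it assumes that messageLogFactory is available in the script scope (as in Cloud Integration script steps), and the function name and log property key are illustrative, not taken from the guide.

```groovy
import com.sap.it.api.mapping.*

// Hypothetical custom function: returns the input value unchanged but
// records it in the message processing log for troubleshooting.
// Assumes messageLogFactory is injected into the script scope by the runtime.
def String traceValue(String input, MappingContext context) {
    def messageLog = messageLogFactory.getMessageLog(context)
    if (messageLog != null) {
        // The property name "MappedValue" is illustrative.
        messageLog.setStringProperty("MappedValue", input)
    }
    return input
}
```

Checking messageLog for null is a common precaution, since the log may not be available in every execution context.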
You can create your own custom functions by using mapping scripts and use them as required. You can use
custom functions from the function palette for modeling the mapping expressions.
● Single value
● All values of context
Guidelines for Creating Mapping Script for Single Value Type of Execution
If you want to add a mapping script to the message mapping function palette, ensure that the following conditions are fulfilled:
● Each function has at least one argument, and the type of each argument is declared.
● The supported argument types for a mapping script are int, float, String, and boolean.
● The function's return type is specified, and it can only be String.
● Functions that you declare as private are not shown in the message mapping function palette, but they can be used internally by other functions.
● You can only use the functions of the JAR files supported by the integration project/package that you are working on.
Example
import com.sap.it.api.mapping.*

def String extParam(String P1, String P2, MappingContext context) {
    String value1 = context.getHeader(P1);
    String value2 = context.getProperty(P2);
    return value1 + value2;
}
Guidelines for Creating Mapping Script for All Values of Context Type of Execution
For all values of context, ensure that the following conditions are fulfilled:
import com.sap.it.api.mapping.*

def void extParam(String[] P1, String[] P2, Output output, MappingContext context) {
    // P1 and P2 are arrays of input values; look up a header and a
    // property using the first value of each array, and add the
    // results to the function output.
    String value1 = context.getHeader(P1[0]);
    String value2 = context.getProperty(P2[0]);
    output.addValue(value1);
    output.addValue(value2);
}
Context
Procedure
Context
You use this task to create a value mapping definition that represents multiple values of an object. For example, a product in Company A is referred to by its first three letters as 'IDE', whereas in Company B it is referred to by the product code '0100IDE'. When Company A sends a message to Company B, it needs to take care of the difference in the representations of the same product. So Company A defines an integration flow with a Mapping element, and this Mapping element contains a reference to the value mapping definition. You create such value mapping groups in a Value Mapping project type.
Note
2. In the New Project wizard, select SAP Cloud Platform Integration > Integration Project.
3. Choose Next.
4. Enter a project name and select the required Project Type as Value Mapping.
5. Choose Finish.
A new project is available in the Project Explorer view. By default, the new value mapping project is
configured with IFLMAP runtime node type.
Context
You use this task to define value mapping groups in the value_mapping.xml file that is available under the Value Mapping project type. You enter a group ID for each set of agency, schema, and object value that together identify a specific representation of the object.
Procedure
1. Open the value_mapping.xml in the editor and choose the Source tab page editor.
2. Enter the group ID, agency, schema and value as shown in the example below, and save the changes.
Tip
If you want to edit the values, you can switch to the Design tab page editor.
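The Source tab example referenced in step 2 is not reproduced in this text. As a hedged sketch only, a group entry in value_mapping.xml might look like the following; the element and attribute names here are illustrative assumptions based on the agency/schema/value structure described above, not the normative file format.

```xml
<!-- Hypothetical sketch: one value mapping group holding two
     representations of the same product, each identified by
     agency, schema, and value. Names are illustrative. -->
<ValueMappings>
    <group id="group-1">
        <entry>
            <agency>CompanyA</agency>
            <schema>ProductCode</schema>
            <value>IDE</value>
        </entry>
        <entry>
            <agency>CompanyB</agency>
            <schema>ProductCode</schema>
            <value>0100IDE</value>
        </entry>
    </group>
</ValueMappings>
```

Each group ties together the representations that the runtime treats as the same object during value mapping lookup.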
Context
You use this task to either import a .csv file containing value mapping groups into the Value Mapping project
type within your workspace or export the content of value_mapping.xml from your workspace and store it as
a .csv file in your file system. The format of the valid .csv file containing value mapping groups is shown in the
image below:
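The image showing the .csv format is not reproduced in this text. Purely as a hedged illustration (the actual column order and header names are defined by the tool and may differ), an exported group could look like:

```
GroupId,Agency,Schema,Value
group-1,CompanyA,ProductCode,IDE
group-1,CompanyB,ProductCode,0100IDE
```

Rows sharing the same group ID belong to the same value mapping group.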
This task shows the steps for a simple scenario that requires you to export value mappings from your
workspace, and import the same value mappings into a workspace located in another system.
1. Export the value mapping groups into your local file system
a. In the Project Explorer, select the value mapping project and choose Export Value Mappings.
b. In the Export Value Mapping Groups wizard, select the required value mapping groups and choose
Next.
c. In the Export groups into a file page, enter a name for the .csv file and browse for a location to store the
file.
d. Choose Finish.
The .csv file containing the exported value mapping groups is available at the selected file location.
Example
The image below shows an example of a value mapping group exported into a .csv file.
Note
If you import value mappings that have been exported from Eclipse, the existing version of the value mapping files changes.
The .csv file is imported as value_mapping.xml file, and is available under the value mapping project.
The screenshot below shows how the content of the .csv file (as shown in the previous screenshot)
gets imported as the value_mapping.xml.
Prerequisites
● You have imported message mapping (.mmap) from an On-Premise repository into the
src.main.resources.mapping folder of the integration project in your workspace.
● You have placed the required source and target WSDLs into the src.main.resources.wsdl folder of the
integration project in your workspace.
● You have added value_mapping.xml under the value mapping project.
Context
You use this procedure to configure the message mapping definition with references to a value mapping. The
value mapping referencing is required when a node value of a message needs to be converted to another
representation. The runtime can evaluate the reference only if you deploy the integration flow project
containing the message mapping, and the associated value mapping project on the same tenant.
7. In the Expression tab page of the Properties view, expand the Function folder.
For example, see the screenshot below:
8. Select the valuemap option under the Conversion package and drop it within the Expression tab page.
9. Connect the required node and the valuemapping function.
10. Double-click the value mapping function and provide details for the value mapping parameters.
11. Save the changes.
Context
You can execute a consistency check to validate that the content of the project adheres to the required definition of a value mapping.
The consistency check is executed on the value_mapping.xml file. Inconsistencies are mainly due to invalid content in value_mapping.xml, such as a repeated value for an agency-schema pair, or incorrect or missing tags.
Procedure
Context
You perform this task to configure an integration flow to represent a specific scenario.
You configure the integration flow by adding elements to the graphical model and assigning values to the
elements relevant to the scenario. The basic integration flow requires you to configure the sender and receiver
channels to enable a point-to-point process flow from the sender to a receiver.
The figure below helps you understand how a scenario is configured using an integration flow and is followed by
an explanation:
The scenario involves communication of System A with System P and System Q, where System A sends
messages to System P and System Q.
System A and System P have different communication protocols, whereas, System Q requires additional field
information in the message format sent by System A. In such a case, you do the following configurations in the
integration flow:
Note
You can use an Error End event to throw the exception to default exception handlers in integration process.
Context
You perform this task to assign the sender and receiver participants to the integration flow. To allow the sender participant to connect to the tenant, you have to provide either client certificates or authenticate using the SDN user name and password.
Procedure
○ Role-based Authentication
Select this option if you want to configure one of the following use cases:
○ Basic authentication
○ Client certificate authentication with certificate-to-user mapping
○ Client Certificate Authentication
Prerequisites
● You have configured connections to an On-Premise repository if you have to obtain interface WSDL from
the repository into this project.
Note
You can import service interfaces from ES Repository with server version 7.1 onwards. The imported
service interface WSDLs get added to src.main.resources.wsdl.
For more information on setting the connections to the repository, see Setting the Connections to On-
Premise Repository under Configuring the Tool Settings [page 19].
● If you want to use a WSDL available in your local file system, you have copied the file under
src.main.resources.wsdl in your project.
Context
You perform this task to enable communication between the sender and receiver participants by defining
connectors for the sender and receiver channels of the integration flow.
Procedure
1. In the Model Configuration editor page, select the sender or receiver channel (dotted lines at sender and
receiver side).
2. To configure the channel with a saved configuration that is available as a template, choose Load Channel
Template from the context menu of the channel.
Tip
If you want to reuse the connector configurations for channels that are within or across integration
flows, then select the Copy and Paste option from the context menu of the channel.
4. To save the configurations of the channel as a template, choose Save as Template from the context menu
of the channel.
Note
The receiver mail adapter allows you to send encrypted messages by e-mail. The sender mail adapter can
download e-mails and access the e-mail body content as well as attachments.
Context
You configure the mail adapter either as a receiver adapter or as a sender adapter.
You can use the receiver mail adapter to send encrypted messages by e-mail.
Note
The mailbox settings for downloading e-mails can interfere with the settings in the sender mail adapter.
For example: When using POP3 protocol, the post-processing setting Delete/Remove might not work
properly. In this case, try to configure the correct behavior in the mailbox.
Note
To access the mail attributes (Subject, From, or To), you have to set them manually as Allowed Headers on
the Runtime Configuration tab. This adds them to a whitelist.
Note
Unlike with other adapters, if you’re using the sender mail adapter, the Cloud Integration system can’t
authenticate the sender of an e-mail.
Therefore, if someone is sending you malware, for example, it is not possible to identify and block this
sender in the Cloud Integration system.
To minimize this danger, you can use the authentication mechanism of your mailbox. Bear in mind,
however, that this mechanism might not be sufficient to protect against such attacks.
Caution
If you select the Run Once option in the Scheduler, messages are triggered from all integration flows with this setting after a software update: once the latest software is installed on a cluster, the cluster is restarted, and you see messages from all integration flows that use the Run Once setting.
Restriction
An integration flow that you deploy in SAP Cloud Platform Integration is deployed on multiple IFLMAP worker nodes. Polling is triggered from only one of the worker nodes. The message monitoring currently also displays the process status from the worker nodes where the Scheduler is not started. This results in the message monitor displaying messages with a processing time of less than a few milliseconds, for which the schedule was not triggered. These entries contain firenow=true in the log. You can ignore these entries.
Connection
Parameter Description
Address Specifies the host name or address of the IMAP server, for example, mail.domain.com.
Use one of the following open ports for external mail servers:
Proxy Type The type of proxy that you're using to connect to the target system.
Timeout (in ms) Specifies the network timeout for the connection attempt
to the server. The default value is 30000.
Location ID (only if On-Premise is selected for Proxy Type) To connect to an SAP Cloud Connector instance associated with your account, enter the location ID that you defined for this instance in the destination configuration on the cloud side.
○ Off
No encryption is initiated by the client.
Note
If your on-premise mail server requires SMTPS, select Off for Protection. The SSL connection then needs to be configured in SAP Cloud Connector.
Credential Name Specifies the name of the User Credentials artifact that
contains user name and password (used to authenticate
at the email account).
Processing
Parameter Description
Selection Specify which mails will be processed (all mails or only unread ones).
(only if as Transport Protocol the option IMAP4 has been
selected)
Max. Messages per Poll Defines the maximum number of messages that will be read from the email server in one polling step.
Note
If Post-Processing is set to Mark as Read and the poll strategy is set to poll for all mails (Selection: All), then already processed mails will be processed again at every polling interval.
Scheduler
Time Zone Select the time zone that you want the
scheduler to use as a reference for the
date and time settings.
The Run Once option has been removed in the newest version of the adapter. Default values for the interval
under Schedule on Day and Schedule to Recur have been changed so that the scheduler runs every 10
seconds between 00:00 and 24:00.
4. If you configure a receiver adapter, you can specify the following settings in the Connection and Security
tab pages.
Address Specifies the host name and (optionally) a port number of the SMTP server.
Use one of the following open ports for external mail servers:
Timeout (in ms) Specifies the network timeout for the connection attempt to the server.
The default value is 30000. The timeout should be more than 0, but less than five minutes.
Proxy Type The type of proxy that you’re using to connect to the target system.
Location ID (only if On-Premise is selected for Proxy Type) To connect to a cloud connector instance associated with your account, enter the location ID that you defined for this instance in the destination configuration on the cloud side.
Note
If your on-premise mail server requires SMTPS, select Off for Protection. The
SSL connection then needs to be configured in SAP Cloud Connector.
○ STARTTLS Mandatory
If the server supports STARTTLS, the client initiates encryption using TLS. If the
server doesn’t support this option, the connection fails.
○ STARTTLS Optional
If the server supports the STARTTLS command, the connection is upgraded to Transport Layer Security encryption. This works with the normal port 25.
If the server supports STARTTLS, the client initiates encryption using TLS. If the server doesn't support this option, client and server remain connected but communicate without encryption.
○ SMTPS (only when None has been selected for Proxy Type)
The TCP connection to the server is encrypted using SSL/TLS. This usually requires an SSL proxy on the server side and access to the port it runs on.
Authentication Specifies which mechanism is used to authenticate against the server with a user
name and password combination. Possible values are:
○ None
No authentication is attempted. No credential can be chosen.
○ Plain User Name/Password
The user name and password are sent in plain text. You should only use this option together with SSL or TLS, as otherwise an attacker could obtain the password.
○ Encrypted User/Password
The user name and password are hashed before being sent to the server. This authentication mechanism (CRAM-MD5 and DIGEST-MD5) is secure even without encryption.
Credential Name Specifies the name of a deployed credential to use for authentication.
To If you want to configure multiple mail receivers, use a comma (,) to separate the addresses.
Cc If you want to configure multiple mail receivers, use a comma (,) to separate the addresses.
Bcc If you want to configure multiple mail receivers, use a comma (,) to separate the addresses.
Body MIME Type Specifies the type of the message body. This type determines how the message is displayed by different user agents.
Body Encoding Specifies the character encoding (character set) of the message body. The content of the input message will be converted to this encoding, and any character that is not available will be replaced with a question mark ('?'). To ensure that data is passed unmodified, select a Unicode encoding, for example, UTF-8.
MIME Type (under Attachments) The Multipurpose Internet Mail Extensions (MIME) type specifies the data format of the e-mail.
○ Text/Plain
○ Text/CSV
○ Text/HTML
○ Application/XML
○ Application/JSON
○ Application/Octet-Stream
Source Specifies the source of the data. This can be either Body, meaning the body of the input message, or Header, meaning a header of the input message.
Header Name If the source is Header, this parameter specifies the name of the header that is attached.
Add Message Attachments Select this option to add all attachments contained in the message exchange to the e-mail.
Security
Parameter Description
Signature and Encryption Type This parameter configures encryption and signature
schemes used for sending e-mails. The message body and
attachments are encrypted with the selected scheme and
can only be decrypted by the intended recipients.
Content Encryption Algorithm Specifies the symmetric (block) cipher. DESede should
only be chosen if the destination system or mail client
doesn’t support AES.
Secret Key Length Specifies the key size of the previously chosen symmetric
cipher. To increase the security, choose the maximum key
size supported by the destination.
Receiver Public Key Alias Specifies an alias for the public key that is to be used to
encrypt the message. This key has to be part of the tenant
keystore.
Send Clear Text Signed Message Sends the signed message as clear text, so that recipients who don't have S/MIME security are able to read the message.
Private Key Alias Specifies an alias for the private key that is to be used to
decrypt the message. This key has to be part of the tenant
keystore. The alias can be dynamically read from a header
or property using ${header.alias}.
Signature Algorithm Specifies the algorithm used to sign the content using the
private key.
Note
The parameters From, To, Cc, Bcc, Subject, Mail Body as well as the attachment name, can be
dynamically set at runtime from message headers or content.
For more information about Camel Simple Expressions, see the following: http://camel.apache.org/
simple.html
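The note above says that parameters such as Subject and Mail Body can be set dynamically via headers or content. As a hedged sketch (the header and property names here are illustrative, not prescribed by the guide), Camel Simple expressions entered in those fields could look like:

```
Subject:   Order confirmation for ${header.OrderId}
To:        ${property.recipientList}
Mail Body: ${in.body}
```

At runtime, each expression is resolved against the current message exchange before the e-mail is sent.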
Related Information
Context
The IDoc with SOAP message protocol is used when a system needs to exchange IDoc messages with another
system that accepts data over SOAP protocol.
● SapAuthenticatedUserName
Contains the user name of the client that calls the integration flow.
If the sender channel is configured to use client certificate authentication, no such header is set (as it is not
available in this case).
● SOAPAction Header
This header is part of the Web service specification.
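If you need to inspect the SapAuthenticatedUserName header downstream, a script step is one option. The following is a hedged sketch assuming the standard Cloud Integration Groovy script-step API (com.sap.gateway.ip.core.customdev.util.Message); the property name is illustrative.

```groovy
import com.sap.gateway.ip.core.customdev.util.Message

// Sketch: copy the SapAuthenticatedUserName header into an exchange
// property. The header is absent when the sender channel uses client
// certificate authentication, so a fallback value is used.
def Message processData(Message message) {
    def user = message.getHeaders().get("SapAuthenticatedUserName")
    message.setProperty("callingUser", user ?: "unknown")
    return message
}
```

Placing the value in a property keeps it available for later steps even if headers are modified along the route.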
Procedure
1. If you are configuring the sender channel, ensure the sender authorization certificate is specified by
following the steps:
a. In the Model Configuration editor, select the sender.
b. In the Properties view, check if the certificate is available in the Sender Authorization table, or add a
certificate.
2. In the Model Configuration editor, double-click the sender or receiver channel.
3. Choose the General tab page and enter the details listed below.
4. In the Adapter Type field, browse and select the IDoc adapter and Message Protocol as IDoc SOAP.
5. Choose the Adapter Specific tab page and enter the details as shown in the table below:
Address Relative endpoint address on which Cloud Integration can be reached by incoming requests, for example, /GetEmployeeDetails.
Note
When you specify the endpoint address /path, a sender can also call the integration flow through the endpoint address /path/<any string> (for example, /path/test/).
Be aware of the following related implication: When you additionally deploy an integration flow with endpoint address /path/test/, a sender using the /path/test endpoint address will now call the newly deployed integration flow with the endpoint address /path/test/. When you then undeploy the integration flow with endpoint address /path/test, the sender again calls the integration flow with endpoint address /path (original behavior). Therefore, be careful when reusing paths of services. It is better to use completely separate endpoints for services.
Depending on your choice, you can also specify one of the following properties:
Note
Note the following:
○ You can also type in a role name. This has the same result as selecting the role from the value help: Whether the inbound request is authenticated depends on the correct user-to-role assignment defined in SAP Cloud Platform Cockpit.
○ When you externalize the user role, the value help for roles is offered in the integration flow configuration as well.
○ If you have selected a product profile for SAP Process Orchestration, the value help will only show the default role ESBMessaging.send.
○ Client Certificate
Sender authorization is checked on the tenant by evaluating the subject/issuer distinguished name (DN) of the certificate (sent together with the inbound request).
You can use this option together with the following authentication option: client-certificate authentication (without certificate-to-user mapping).
○ User Role
Sender authorization is checked based on roles defined on the tenant for the user associated with the inbound request.
You can use this option together with the following authentication options:
○ Basic authentication (using the credentials of the user)
The authorizations for the user are checked based on user-to-role assignments defined on the tenant.
○ Client-certificate authentication and certificate-to-user mapping
The authorizations for the user derived from the certificate-to-user mapping are checked based on user-to-role assignments defined on the tenant.
Note
In the following cases certain features might not be available for your current integration flow:
○ A feature for a particular adapter or step was released after you created the corresponding shape in your integration flow.
○ You are using a product profile other than the one expected.
More information: Adapter and Integration Flow Step Versions [page 405]
Client Certificate Authorization (only if you have selected Client Certificate as Authorization) Allows you to select one or more client certificates (based on which the inbound authorization is checked).
Choose Add to add a new certificate for inbound authorization for the selected adapter. You can then select a certificate stored locally on your computer. You can also delete certificates from the list.
User Role Allows you to select a role based on which the inbound authorization is
checked.
However, you have the option to define custom roles for the runtime node
as well. When you choose Select, a selection of all custom roles defined that
way is offered.
Conditions
Parameter Description
Maximum Message Size This parameter allows you to configure a maximum size
for inbound messages (smallest value for a size limit is 1
MB). All inbound messages that exceed the specified size
(per integration flow and on the runtime node where the
integration flow is deployed) are blocked.
○ Body Size
○ Attachment Size
Address Endpoint address on which Cloud Integration posts the outbound message, for example http://<host>:<port>/payment.
Proxy Type The type of proxy that you are using to connect to the target system:
○ Select Internet if you are connecting to a cloud system.
○ Select On-Premise if you are connecting to an on-premise system.
Note
If you select the On-Premise option, the following restrictions apply to other parameter values:
○ Do not use an HTTPS address for Address, as it leads to errors when performing consistency checks or during deployment.
○ Do not use the option Client Certificate for the Authentication parameter, as it leads to errors when performing consistency checks or during deployment.
Note
If you select the On-Premise option and use the SAP Cloud Connector to connect to your on-premise system, the Address field of the adapter references a virtual address, which has to be configured in the SAP Cloud Connector settings.
○ If you select Manual, you can manually specify Proxy Host and Proxy Port (using the corresponding entry fields).
Furthermore, with the parameter URL to WSDL you can specify a Web Service Definition Language (WSDL) file defining the WS provider endpoint (of the receiver). You can specify the WSDL by either uploading a WSDL file from your computer (option Upload from File System) or by selecting an integration flow resource (which needs to be uploaded in advance to the Resources view of the integration flow).
This option is only available if you have chosen a Process Orchestration product profile.
Location ID (only in case On-Premise is selected for Proxy Type) To connect to a cloud connector instance associated with your account, enter the location ID that you defined for this instance in the destination configuration on the cloud side. You can also enter an expression such as ${header.headername} or ${property.propertyname} to dynamically read the value from a header or a property.
Application/x-sap.doc
○ Allows only a single IDoc record for each request.
○ Enables Exactly-Once processing.
○ Enables message sequencing.
Text/XML
○ Allows multiple IDoc records for each request.
○ Basic
The tenant authenticates itself against the receiver using user credentials (user name and password).
It is a prerequisite that user credentials are specified in a User Credentials artifact and deployed on the related tenant. Enter the name of this artifact in the Credential Name field of the adapter.
○ Client Certificate
The tenant authenticates itself against the receiver using a client
certificate.
This option is only available if you have selected Internet for the
Proxy Type parameter.
It is a prerequisite that the required key pair is installed and added to
a keystore. This keystore has to be deployed on the related tenant.
The receiver side has to be configured appropriately.
○ None
○ Principal Propagation
The tenant authenticates itself against the receiver by forwarding the principal of the inbound user to the cloud connector, and from there to the back end of the relevant on-premise system.
Note
This authentication method can only be used with the following
sender adapters: HTTP, SOAP, IDoc, AS2.
Note
Note that the token for principal propagation expires after 30
minutes.
Note
You can externalize all attributes related to the configuration of the
authentication option. This includes the attributes with which you
specify the authentication option as such, as well as all attributes
with which you specify further security artifacts that are required for
any configurable authentication option (Private Key Alias or
Credential Name).
The reason for this is the following: if you have externalized the Authentication parameter and only the Private Key Alias parameter (but not Credential Name), all authentication options in the integration flow configuration dialog (Basic, Client Certificate, and None) are selectable in a dropdown list. However, if you then select Basic from the dropdown list, no Credential Name can be configured.
Credential Name (only available if you have selected Basic for the Authentication parameter) Name of the User Credentials artifact that contains the credentials for basic authentication.
You can dynamically configure the Credential Name field of the adapter by using a Simple Expression (see http://camel.apache.org/simple.html). For example, you can dynamically define the Credential Name of the receiver adapter by referencing a message header ${header.MyCredentialName} or a message property ${property.MyCredentialName}.
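On the platform, this resolution is performed by Apache Camel's Simple language at runtime. As an illustration only (not platform code), the following Python sketch models how a ${header.X} or ${property.X} placeholder is looked up against the message's headers and exchange properties; the header name MyCredentialName is just an example value:

```python
import re

def resolve_simple(expression, headers, properties):
    """Replace ${header.X} / ${property.X} placeholders with the value of
    header X or exchange property X (simplified illustration only)."""
    def lookup(match):
        scope, name = match.group(1), match.group(2)
        source = headers if scope == "header" else properties
        return str(source.get(name, ""))
    return re.sub(r"\$\{(header|property)\.(\w+)\}", lookup, expression)

# Hypothetical message state at runtime:
headers = {"MyCredentialName": "Receiver_Credentials"}
print(resolve_simple("${header.MyCredentialName}", headers, {}))  # Receiver_Credentials
```

In this way, the credential artifact that is looked up at runtime can differ per message, depending on what an earlier step wrote into the header or property.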
Private Key Alias (only available if you have selected Client Certificate for the Authentication parameter) Specifies an alias to indicate a specific key pair to be used for the authentication step.
You can dynamically configure the Private Key Alias parameter by specifying either a header or a property name in one of the following ways: ${header.headername} or ${property.propertyname}.
Be aware that in some cases this feature can have a negative impact on
performance.
Timeout Specifies the time (in milliseconds) that the client will wait for a response before the connection is interrupted.
Compress Message Enables the WS endpoint to send compressed request messages to the WS provider and to indicate to the WS provider that it can handle compressed response messages.
Allow Chunking Used for enabling HTTP chunking of data while sending messages.
Return HTTP Response Code as Header When selected, writes the HTTP response code received in the response
message from the called receiver system into the header
CamelHttpResponseCode.
Note
You can use this header, for example, to analyze the message processing run (when level Trace has been enabled for monitoring). Furthermore, you can use this header to define error handling steps after the integration flow has called the IDoc SOAP receiver.
You cannot use the header to change the return code since the return code is defined in the adapter and cannot be changed.
Clean-up Request Headers Select this option to clean up the adapter-specific headers after the receiver call.
Results
In the Model Configuration editor, when you place the cursor on the sender or receiver message flows, you can
see the SOAP Address and WSDL information.
Prerequisites
Context
You perform this task to configure a sender or receiver channel with the SOAP communication protocol, with
SAP RM as the message protocol. SAP RM is a simplified communication protocol for asynchronous Web
service communication that does not require the use of Web Service Reliable Messaging (WS-RM) standards. It
offers a proprietary extension to ensure reliability on the ABAP back-end side of both Web service consumers
and providers. For more information, see http://wiki.scn.sap.com/wiki/display/ABAPConn/Plain+SOAP
● SapAuthenticatedUserName
Contains the user name of the client that calls the integration flow.
If the sender channel is configured to use client certificate authentication, no such header is set (as it is not
available in this case).
● SOAPAction Header
This header is part of the Web service specification.
Procedure
1. Choose the General tab page and enter the details below.
2. In the Adapter Type field, browse and select the SOAP adapter, and SAP RM as the Message Protocol.
3. Choose the Adapter Specific tab page and enter the details as shown in the table below:
Parameters and Values of Sender SOAP (SAP RM) Adapter - Connection Details
Parameters Description
Address Relative endpoint address at which the ESB listens for incoming requests, for example, /HCM/GetEmployeeDetails.
Note
When you specify the endpoint address /path, a sender can also call the integration flow through the endpoint address /path/<any string> (for example, /path/test/).
URL to WSDL URL to the WSDL defining the WS provider endpoint (of the receiver). You can specify the WSDL by selecting a source to browse for a WSDL either from an On-Premise ES Repository or your local workspace.
In the Resources view, you can upload an individual WSDL file or an archive file (file ending with .zip) that contains multiple WSDLs or XSDs, or both. For example, you can upload a WSDL that contains an imported XSD referenced by an xsd:import statement. This means that if you want to upload a WSDL and dependent resources, you need to add the parent file along with its dependencies in a single archive (.zip file).
You can download the WSDL by using the Integration Operations user interface (in the Properties view, Services tab, under the integration flow-specific endpoint). For newly deployed integration flows, the WSDL that is generated by the download corresponds to the endpoint configuration in the integration flow.
The WSDL download does not work for WSDLs with external references because these WSDLs cannot be parsed.
Processing Settings This feature corresponds to an older version of this adapter. It is shown either because you have selected a product profile other than SAP Cloud Platform Integration or (if you have selected the SAP Cloud Platform Integration product profile) because you are editing an integration flow that has already existed for some time.
When you use the up-to-date adapter version, the processing setting Robust is implicitly activated.
Note
Note the following:
○ You can also type in a role name. This has the same result as selecting the role from the value help: whether the inbound request is authenticated depends on the correct user-to-role assignment defined in SAP Cloud Platform Cockpit.
○ When you externalize the user role, the value help for roles is offered in the integration flow configuration as well.
○ If you have selected a product profile for SAP Process Orchestration, the value help will only show the default role ESBMessaging.send.
○ Client Certificate
Sender authorization is checked on the tenant by evaluating the subject/issuer distinguished name (DN) of the certificate (sent together with the inbound request).
You can use this option together with the following authentication option: client-certificate authentication (without certificate-to-user mapping).
○ User Role
Sender authorization is checked based on roles defined on the tenant for the user associated with the inbound request.
You can use this option together with the following authentication options:
○ Basic authentication (using the credentials of the user)
The authorizations for the user are checked based on user-to-role assignments defined on the tenant.
○ Client-certificate authentication and certificate-to-user mapping
The authorizations for the user derived from the certificate-to-user mapping are checked based on user-to-role assignments defined on the tenant.
Note
In the following cases certain features might not be available for your current integration flow:
○ A feature for a particular adapter or step was released after you created the corresponding shape in your integration flow.
○ You are using a product profile other than the one expected.
More information: Adapter and Integration Flow Step Versions [page 405]
Client Certificate Authorization (only if you have selected Client Certificate as Authorization) Allows you to select one or more client certificates (based on which the inbound authorization is checked).
Choose Add to add a new certificate for inbound authorization for the selected adapter. You can then select a certificate stored locally on your computer. You can also delete certificates from the list.
User Role Allows you to select a role based on which the inbound authorization is
checked.
However, you have the option to define custom roles for the runtime node
as well. When you choose Select, a selection of all custom roles defined that
way is offered.
Note
For Exactly-Once handling, the sender SOAP (SAP RM) adapter saves the protocol-specific message ID in the header SapMessageIdEx. If this header is set, the SOAP (SAP RM) receiver uses the content of this header as the message ID for outbound communication. Usually, this is the desired behavior and enables the receiver to identify any duplicates. However, if the sender system is also the receiver system, or several variants of the message are sent to the same system (for example, in an external call or multicast), the receiver system will incorrectly identify these messages as duplicates. In this case, the header SapMessageIdEx must be deleted (for example, using a script) or overwritten with a newly generated message ID. This deactivates Exactly-Once processing (that is, duplicates are no longer recognized by the protocol).
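On the platform itself you would delete or overwrite this header in a script step (typically a Groovy script working on the message API). The following Python sketch only illustrates that logic under stated assumptions: it models the message headers as a plain dictionary and either removes SapMessageIdEx or replaces it with a newly generated ID.

```python
import uuid

def replace_sap_message_id(headers, generate_new=True):
    """Delete, or overwrite with a newly generated ID, the SapMessageIdEx
    header so that a receiver that is also the sender (or that receives
    several variants of the message) does not wrongly detect duplicates.
    Caution: this disables Exactly-Once duplicate detection."""
    if generate_new:
        headers["SapMessageIdEx"] = str(uuid.uuid4())
    else:
        headers.pop("SapMessageIdEx", None)
    return headers

# Hypothetical header map of an in-flight message:
headers = {"SapMessageIdEx": "original-protocol-message-id"}
replace_sap_message_id(headers, generate_new=False)
print("SapMessageIdEx" in headers)  # False
```

Whether you delete or overwrite, the receiver can no longer correlate the message with the original protocol message ID, so duplicate detection for these messages is switched off.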
If you want to set SOAP headers via the Camel header, the following table shows which Camel header corresponds to which SOAP header.
SOAP Header          Camel Header
QualityOfService     SapPlainSoapQoS
ExactlyOnce          ExactlyOnce
ExactlyOnceInOrder   ExactlyOnceInOrder
QueueId              SapPlainSoapQueueId
Parameter Description
Maximum Message Size This parameter allows you to configure a maximum size
for inbound messages (smallest value for a size limit is 1
MB). All inbound messages that exceed the specified size
(per integration flow and on the runtime node where the
integration flow is deployed) are blocked.
○ Body Size
○ Attachment Size
Parameters and Values of Receiver SOAP (SAP RM) Adapter - Connection Details
Parameters Description
Address Endpoint address at which the ESB posts the outgoing message, for example http://
<host>:<port>/payment.
You can dynamically configure the address field of the SOAP (SAP RM) adapter.
When you specify the address field of the adapter as ${header.a} or ${property.a}, at runtime the value of header a or exchange property a (as contained in the incoming message) is written into the Camel header CamelDestinationOverrideUrl and is used as the address to which the message is sent.
If the CamelDestinationOverrideUrl header has already been set by another process step (for example, a Content Modifier), its value is overwritten.
The endpoint URL that is actually used at runtime is displayed in the message processing log (MPL) in the message monitoring application (MPL property RealDestinationUrl). Note that you can manually configure the endpoint URL using the Address attribute of the adapter. However, there are several ways to dynamically override the value of this attribute (for example, by using the Camel header CamelHttpUri).
Proxy Type The type of proxy that you are using to connect to the target system.
Location ID (only if On-Premise is selected for Proxy Type) To connect to a cloud connector instance associated with your account, enter the location ID that you defined for this instance in the destination configuration on the cloud side. You can also enter ${header.headername} or ${property.propertyname} to dynamically read the value from a header or a property.
URL to WSDL URL to the WSDL defining the WS provider endpoint (of the receiver). You can specify
the WSDL by selecting a source to browse for a WSDL either from an On-Premise ES
Repository or your local workspace.
In the Resources view, you can upload an individual WSDL file or an archive file (file
ending with .zip) that contains multiple WSDLs or XSDs, or both. For example, you
can upload a WSDL that contains an imported XSD referenced by an xsd:import
statement. This means that if you want to upload a WSDL and dependent resources,
you need to add the parent file along with its dependencies in a single archive (.zip
file).
Endpoint Name of the selected port of a selected service (that you provide in the Service Name
field) contained in the referenced WSDL.
Note
Using the same port names across receivers is not supported in older versions of
the receiver adapters. To use the same port names, you need to create a copy of
the WSDL and use it.
Operation Name Name of the operation of the selected service (that you provide in the Service Name
field) contained in the referenced WSDL.
Private Key Alias Allows you to enter the private key alias name that gets the private key from the keystore and authenticates you to the receiver in an HTTPS communication.
Note
If you have selected the Connect using Basic Authentication option, this field is not visible.
You can dynamically configure the Private Key Alias property by specifying either a header or a property name in one of the following ways:
${header.headername} or ${property.propertyname}
Be aware that in some cases this feature can have a negative impact on performance.
Compress Message Enables the WS endpoint to send compressed request messages to the WS provider
and to indicate to the WS provider that it can handle compressed response messages.
Allow Chunking Used for enabling HTTP chunking of data while sending messages.
Return HTTP Response Code When selected, writes the HTTP response code received in the response message from
as Header the called receiver system into the header CamelHttpResponseCode.
Note
You can use this header, for example, to analyze the message processing run
(when level Trace has been enabled for monitoring). Furthermore, you can use this
header to define error handling steps after the integration flow has called the
SOAP (SAP RM) receiver.
You cannot use the header to change the return code since the return code is de
fined in the adapter and cannot be changed.
Clean Up Request Headers Select this option to clean up the adapter-specific headers after the receiver call.
Request Timeout Specifies the time (in milliseconds) that the client will wait for a response before the
connection is interrupted.
Note that the timeout setting has no influence on the Transmission Control Protocol (TCP) timeout if the receiver or any additional component interconnected between the Cloud Integration tenant and the receiver has a lower timeout. For example, consider that you have configured a receiver channel timeout of 10 minutes and there is another component involved with a timeout of 5 minutes. If nothing is transferred for a period of time, the connection will be closed after the fifth minute. In HTTP communication spanning multiple components (for example, from a sender, through the load balancer, to a Cloud Integration tenant, and from there to a receiver), the actual timeout period is influenced by each of the timeout settings of the individual components that are interconnected between the sender and receiver (to be more exact, of those components that can control the TCP session). The component or device with the lowest number set for the idle session timeout determines the timeout that is used.
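The rule described above can be summarized as: the effective idle timeout of the whole chain is the minimum of the timeouts of the interconnected components. A minimal Python sketch (component names and values are hypothetical):

```python
def effective_idle_timeout(timeouts_ms):
    """The connection is closed by whichever component has the lowest
    idle-session timeout, so the effective timeout is the minimum."""
    return min(timeouts_ms.values())

chain = {
    "receiver_channel": 600_000,  # 10 minutes configured in the adapter
    "load_balancer":    300_000,  # 5 minutes on an intermediate component
}
print(effective_idle_timeout(chain))  # 300000: the 5-minute component wins
```

So raising the Request Timeout in the adapter has no effect unless every component along the path allows at least that much idle time.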
○ Basic
The tenant authenticates itself against the receiver using user credentials (user
name and password).
It is a prerequisite that user credentials are specified in a Basic Authentication artifact and deployed on the related tenant.
○ Client Certificate
The tenant authenticates itself against the receiver using a client certificate.
It is a prerequisite that the required key pair is installed and added to a keystore.
This keystore has to be deployed on the related tenant. The receiver side has to be
configured appropriately.
○ None
○ Principal Propagation
The tenant authenticates itself against the receiver by forwarding the principal of the inbound user to the cloud connector, and from there to the back end of the relevant on-premise system.
Note
This authentication method can only be used with the following sender adapters: HTTP, AS2, SOAP, IDOC.
Note
The token for principal propagation expires after 30 minutes. If it takes longer than 30 minutes to process the data between the sender and receiver channel, the token for principal propagation expires, which leads to errors in message processing.
Note
In the following cases certain features might not be available for your current
integration flow:
○ A feature for a particular adapter or step was released after you created
the corresponding shape in your integration flow.
○ You are using a product profile other than the one expected.
More information: Adapter and Integration Flow Step Versions [page 405]
You can dynamically configure the Credential Name field of the adapter by using a Simple Expression (see http://camel.apache.org/simple.html). For example, you can dynamically define the Credential Name of the receiver adapter by referencing a message header ${header.MyCredentialName} or a message property ${property.MyCredentialName}.
4. Save the configurations in both the sender and receiver channel editors.
In the Model Configuration editor, when you place the cursor on the sender or receiver message flows, you can
see the SOAP Address and WSDL information.
Related Information
https://wiki.scn.sap.com/wiki/display/ABAPConn/Plain+SOAP?original_fqdn=wiki.sdn.sap.com
Prerequisites
Since the adapter implements web services security, you have ensured that the related certificates are
deployed in the truststore.
Context
SOAP (SOAP 1.x) allows you to deploy web services that support SOAP 1.1 and SOAP 1.2. SOAP 1.x provides you
a framework for binding SOAP to underlying protocols. The binding specification in the WSDL defines the
message format and protocol details for a web service.
You have the option to set SOAP headers using Groovy script (for example, using the Script step).
● SapAuthenticatedUserName
Contains the user name of the client that calls the integration flow.
If the sender channel is configured to use client certificate authentication, no such header is set (as it is not
available in this case).
● SOAPAction Header
This header is part of the Web service specification.
1. If you are configuring the sender channel, ensure the sender authorization certificate is specified by
following the steps below:
a. In the Model Configuration editor, select the sender.
b. In the Properties view, check if the certificate is available in the Sender Authorization table, else add a
certificate.
2. In the Model Configuration editor, double-click the sender or receiver channel.
3. In the Adapter Type section of the General tab page, select SOAP from the Adapter Type dropdown and
select SOAP 1.x as the message protocol.
4. Choose the Adapter-Specific tab page and enter the details as shown in the table below. The attributes depend on whether you configure a sender or a receiver channel:
Connection
Parameter Description
Address Relative endpoint address on which the integration runtime expects incoming requests, for example, /HCM/GetEmployeeDetails.
Note
When you specify the endpoint address /path, a sender can also call the integration flow through the endpoint address /path/<any string> (for example, /path/test/).
Be aware of the following implication: if you additionally deploy an integration flow with the endpoint address /path/test/, a sender using the /path/test endpoint address now calls the newly deployed integration flow. If you then undeploy the integration flow with the endpoint address /path/test, the sender again calls the integration flow with the endpoint address /path (the original behavior). Therefore, be careful when reusing paths of services. It is better to use completely separate endpoints for services.
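The endpoint behavior described in this note can be thought of as longest-prefix matching over the deployed endpoint addresses. The following Python sketch is an illustrative model of that behavior under this assumption, not the actual routing implementation:

```python
def route(request_path, deployed_endpoints):
    """Pick the deployed endpoint with the longest matching path prefix,
    modeling the endpoint behavior described above (illustration only)."""
    matches = [e for e in deployed_endpoints
               if request_path == e or request_path.startswith(e.rstrip("/") + "/")]
    return max(matches, key=len) if matches else None

# Only /path deployed: a call to /path/test is served by /path.
print(route("/path/test", ["/path"]))                 # /path
# After additionally deploying /path/test, the more specific flow wins.
print(route("/path/test", ["/path", "/path/test"]))   # /path/test
```

Undeploying /path/test removes it from the set of deployed endpoints, so the same request path falls back to /path, which is exactly the fallback behavior the note warns about.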
Use WS-Addressing (only if Service Definition: Manual is selected) Select this option to accept addressing information from message information headers during runtime.
Message Exchange Pattern (only if Service Definition: Manual is selected) Specifies the kind of messages that are processed by the adapter.
○ Request-Reply: The adapter processes both request and response.
Tip
When using this option, the response code can accidentally be overwritten by a called receiver. Assume, for example, that the integration flow contains a SOAP sender adapter (with a Request-Reply pattern) and an HTTP receiver adapter. Assume furthermore that the HTTP receiver returns HTTP response code 202 (as it has accepted the call). In this case, the SOAP sender adapter also returns HTTP response code 202 in the reply instead of 200 (OK). To avoid this situation, you have to remove the header CamelHttpResponseCode before the message reply is sent back to the sender.
○ One-Way
URL to WSDL (only if the option WSDL is selected as Service Definition) URL to the WSDL defining the WS provider endpoint (of the receiver). You can specify the WSDL by selecting a source to browse for a WSDL either from an On-Premise ES Repository or your local workspace.
In the Resources view, you can upload an individual WSDL file or an archive file (file ending with .zip) that contains multiple WSDLs or XSDs, or both. For example, you can upload a WSDL that contains an imported XSD referenced by an xsd:import statement. This means that if you want to upload a WSDL and dependent resources, you need to add the parent file along with its dependencies in a single archive (.zip file).
Note
○ If you specify a WSDL, you also have to specify the name of the selected
service and the name of the port selected for this service. These fields must
have a namespace prefix.
Expected format: <namespace>:<service_name>
Example: p1:MyService
○ Don't use WSDLs with blanks.
We recommend that you don't use blanks in WSDL names or directories, as
this can lead to runtime issues.
You can download the WSDL by using the Integration Operations user interface (in the
Properties view, Services tab, under the integration flow-specific endpoint). For newly
deployed integration flows, the WSDL that is generated by the download corresponds
to the endpoint configuration in the integration flow.
The WSDL download does not work for WSDLs with external references because
these WSDLs can't be parsed.
For more information on how to work with WSDL resources, refer to the following blog:
Cloud Integration – Usage of WSDLs in the SOAP Adapter
Endpoint (only if Service Definition: WSDL is selected) Name of the selected endpoint of a selected service (that you provide in the Service Name field) contained in the referenced WSDL.
Processing Settings (only if one of the following options is selected: Service Definition: WSDL, or Service Definition: Manual and Message Exchange Pattern: One-Way)
○ WS Standard: Message is executed with the WS standard processing mechanism. Errors are not returned to the consumer.
○ Robust: WS provider invokes the service synchronously and the processing errors are returned to the consumer.
Depending on your choice, you can also specify one of the following properties:
Note
Note the following:
○ You can also type in a role name. This has the same result as selecting the role from the value help: whether the inbound request is authenticated depends on the correct user-to-role assignment defined in SAP Cloud Platform Cockpit.
○ When you externalize the user role, the value help for roles is offered in the integration flow configuration as well.
○ If you have selected a product profile for SAP Process Orchestration, the value help will only show the default role ESBMessaging.send.
○ Client Certificate
Sender authorization is checked on the tenant by evaluating the subject/issuer distinguished name (DN) of the certificate (sent together with the inbound request).
You can use this option together with the following authentication option: client-certificate authentication (without certificate-to-user mapping).
○ User Role
Sender authorization is checked based on roles defined on the tenant for the user associated with the inbound request.
You can use this option together with the following authentication options:
○ Basic authentication (using the credentials of the user)
The authorizations for the user are checked based on user-to-role assignments defined on the tenant.
○ Client-certificate authentication and certificate-to-user mapping
The authorizations for the user derived from the certificate-to-user mapping are checked based on user-to-role assignments defined on the tenant.
Note
In the following cases certain features might not be available for your current integration flow:
○ A feature for a particular adapter or step was released after you created the corresponding shape in your integration flow.
○ You are using a product profile other than the one expected.
More information: Adapter and Integration Flow Step Versions [page 405]
Client Certificate Authorization (only if you have selected Client Certificate as Authorization) Allows you to select one or more client certificates (based on which the inbound authorization is checked).
Choose Add to add a new certificate for inbound authorization for the selected adapter. You can then select a certificate stored locally on your computer. You can also delete certificates from the list.
User Role Allows you to select a role based on which the inbound authorization is
checked.
However, you have the option to define custom roles for the runtime node
as well. When you choose Select, a selection of all custom roles defined that
way is offered.
Conditions
Parameter Description
Maximum Message Size This parameter allows you to configure a maximum size
for inbound messages (smallest value for a size limit is 1
MB). All inbound messages that exceed the specified size
(per integration flow and on the runtime node where the
integration flow is deployed) are blocked.
○ Body Size
○ Attachment Size
Connection
Parameter Description
Address Endpoint address at which the ESB posts the outgoing message, for example, http://<host>:<port>/payment.
You can dynamically configure the address field of the SOAP (SOAP 1.x) adapter.
Also, if the CamelDestinationOverrideUrl header has been set by another process step
(for example, a Content Modifier), its value is overwritten.
The endpoint URL that is used at runtime is displayed in the message processing log (MPL) in the message monitoring application (MPL property RealDestinationUrl). Note that you can manually configure the endpoint URL using the Address attribute of the adapter. However, there are several ways to dynamically override the value of this attribute (for example, by using the Camel header CamelDestinationOverrideUrl).
Proxy Type The type of proxy that you are using to connect to the target system:
○ Select Internet if you are connecting to a cloud system.
○ Select On-Premise if you are connecting to an on-premise system.
Note
If you select the On-Premise option, the following restrictions apply to other parameter values:
○ Do not use an HTTPS address for Address, as it leads to errors when performing consistency checks or during deployment.
○ Do not use the option Client Certificate for the Authentication parameter, as it leads to errors when performing consistency checks or during deployment.
Note
If you select the On-Premise option and use the SAP Cloud Connector to connect to your on-premise system, the Address field of the adapter references a virtual address, which has to be configured in the SAP Cloud Connector settings.
○ If you select Manual, you can manually specify Proxy Host and Proxy Port (using the corresponding entry fields).
Furthermore, with the parameter URL to WSDL you can specify a Web Service Definition Language (WSDL) file defining the WS provider endpoint (of the receiver). You can specify the WSDL by either uploading a WSDL file from your computer (option Upload from File System) or by selecting an integration flow resource (which needs to be uploaded in advance to the Resources view of the integration flow).
This option is only available if you have chosen a Process Orchestration product profile.
Location ID (only available if you have selected On-Premise for Proxy Type) To connect to a Cloud Connector instance associated with your account, enter the location ID that you defined for this instance in the destination configuration on the cloud side. You can also enter ${header.headername} or ${property.propertyname} to dynamically read the value from a header or a property.
URL to WSDL URL to the WSDL defining the WS provider endpoint (of the receiver). You can specify
the WSDL by selecting a source to browse for a WSDL either from an On-Premise ES
Repository or your local workspace.
In the Resources view, you can upload an individual WSDL file or an archive file (file
ending with .zip) that contains multiple WSDLs or XSDs, or both. For example, you
can upload a WSDL that contains an imported XSD referenced by an xsd:import
statement. This means that if you want to upload a WSDL and dependent resources,
you need to add the parent file along with its dependencies in a single archive (.zip
file).
Note
○ If you specify a WSDL, you also have to specify the name of the selected service and the name of the port selected for this service. These fields must have a namespace prefix.
Expected format: <namespace>:<service_name>
Example: p1:MyService
○ Don't use WSDLs with blanks:
It is not recommended to use blanks in WSDL names or directories. This could lead to runtime issues.
For more information on how to work with WSDL resources, see the following blog:
Cloud Integration – Usage of WSDLs in the SOAP Adapter
Service Name Name of the selected service contained in the referenced WSDL
Port Name Name of the selected port of a selected service (that you provide in the Service Name
field) contained in the referenced WSDL.
Note
Using the same port names across receivers isn't supported in older versions of
the receiver adapters. To use the same port names, you need to create a copy of
the WSDL and use it.
Operation Name Name of the operation of a selected service (that you provide in the Service Name
field) contained in the referenced WSDL.
Connect Without Client Authentication This feature corresponds to the Authentication setting None and is shown when you use an older version of this adapter. It is shown either because you have selected a product profile other than SAP Cloud Platform Integration or (if you have selected the SAP Cloud Platform Integration product profile) because you are editing an integration flow that has already existed for some time.
Select this option to connect the tenant anonymously to the receiver system.
Select this option if your server allows connections without authentication at the
transport level.
Connect Using Basic Authentication This feature corresponds to the Authentication setting Basic and is shown when you use an older version of this adapter. It is shown either because you have selected a product profile other than SAP Cloud Platform Integration or (if you have selected the SAP Cloud Platform Integration product profile) because you are editing an integration flow that has already existed for some time.
Select this option to allow the tenant to connect to the receiver system using the deployed basic authentication credentials.
Credential Name: Enter the credential name of the username-password pair specified during the deployment of basic authentication credentials on the cluster.
Credential Name: Enter the credential name of the username-password pair specified
during the deployment of basic authentication credentials on the cluster.
○ Basic
The tenant authenticates itself against the receiver using user credentials (user
name and password).
It is a prerequisite that user credentials are specified in a User Credentials artifact
and deployed on the related tenant. Enter the name of this artifact in the
Credential Name field of the adapter.
○ Client Certificate
The tenant authenticates itself against the receiver using a client certificate.
This option is only available if you have selected Internet for the Proxy Type parameter.
It is a prerequisite that the required key pair is installed and added to a keystore.
This keystore has to be deployed on the related tenant. The receiver side has to
be configured appropriately.
○ None
○ Principal Propagation
The tenant authenticates itself against the receiver by forwarding the principal of
the inbound user to the cloud connector, and from there to the back end of the
relevant on-premise system.
Note
This authentication method can only be used with the following sender
adapters: HTTP, SOAP, IDoc, AS2.
Note
The token for principal propagation expires after 30 minutes.
If it takes longer than 30 minutes to process the data between the sender and
receiver channel, the token for principal propagation expires, which leads to
errors in message processing.
Note
You can externalize all attributes related to the configuration of the authentication option. This includes the attributes with which you specify the authentication option as such, as well as all attributes with which you specify further security artifacts that are required for any configurable authentication option (Private Key Alias or Credential Name).
Either externalize all attributes related to the configuration of all options (for example, Authentication, Credential Name, and Private Key Alias), or externalize none of them. If you externalize only one of the attributes Private Key Alias or Credential Name, the integration flow configuration (based on the externalized parameters) cannot work properly.
The reason for this is the following: If you have externalized the Authentication parameter and only the Private Key Alias parameter (but not Credential Name), all authentication options in the integration flow configuration dialog (Basic, Client Certificate, and None) are selectable in a dropdown list. However, if you now select Basic from the dropdown list, no Credential Name can be configured.
Credential Name (only available if you have selected Basic for the Authentication parameter) Name of the User Credentials artifact that contains the credentials for basic authentication.
You can dynamically configure the Credential Name field of the adapter by using a Simple Expression (see http://camel.apache.org/simple.html). For example, you can dynamically define the Credential Name of the receiver adapter by referencing a message header ${header.MyCredentialName} or a message property ${property.MyCredentialName}.
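As a rough illustration (not the actual Camel implementation), a ${header.…} or ${property.…} Simple expression resolves against the message like this; the header name and credential value are hypothetical:

```python
import re

def resolve_simple_expression(expression, headers, properties):
    """Resolve ${header.name} / ${property.name} placeholders against a
    message, loosely mimicking Camel's Simple expression lookup."""
    def lookup(match):
        scope, name = match.group(1), match.group(2)
        source = headers if scope == "header" else properties
        return str(source.get(name, ""))
    return re.sub(r"\$\{(header|property)\.([A-Za-z0-9_]+)\}", lookup, expression)

# A header set earlier in the integration flow supplies the credential name.
headers = {"MyCredentialName": "ReceiverCredentials"}
properties = {}

print(resolve_simple_expression("${header.MyCredentialName}", headers, properties))
# → ReceiverCredentials
```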
Private Key Alias (only available if you have selected Client Certificate for the Authentication parameter) Specifies an alias to indicate a specific key pair to be used for the authentication step.
You can dynamically configure the Private Key Alias parameter by specifying either a header or a property name in one of the following ways: ${header.headername} or ${property.propertyname}.
Timeout (in ms) Specifies the time (in milliseconds) that the client waits for a response before the connection is interrupted.
Note that the timeout setting has no influence on the Transmission Control Protocol (TCP) timeout if the receiver or any additional component interconnected between the Cloud Integration tenant and the receiver has a lower timeout. For example, consider that you have configured a receiver channel timeout of 10 minutes and there is another component involved with a timeout of 5 minutes. If nothing is transferred for a period of time, the connection will be closed after the fifth minute. In HTTP communication spanning multiple components (for example, from a sender, through the load balancer, to a Cloud Integration tenant, and from there to a receiver), the actual timeout period is influenced by each of the timeout settings of the individual components that are interconnected between the sender and receiver (to be more exact, of those components that can control the TCP session). The component or device with the lowest value set for the idle session timeout determines the timeout that will be used.
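The rule that the lowest idle-session timeout in the chain wins can be expressed compactly; the component names and values below are illustrative:

```python
def effective_idle_timeout(timeouts_ms):
    """The component with the lowest idle-session timeout closes the
    connection first, so it determines the effective timeout."""
    return min(timeouts_ms)

# Receiver channel configured with 10 minutes, an intermediate component
# with 5 minutes: the connection closes after 5 minutes of inactivity.
chain = {
    "receiver channel": 10 * 60 * 1000,
    "intermediate component": 5 * 60 * 1000,
}
print(effective_idle_timeout(chain.values()))
# → 300000
```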
Compress Message Enables the WS endpoint to send compressed request messages to the WS Provider
and to indicate to the WS Provider that it can handle compressed response messages.
Allow Chunking Used for enabling HTTP chunking of data while sending messages.
Return HTTP Response Code When selected, writes the HTTP response code received in the response message
as Header from the called receiver system into the header CamelHttpResponseCode.
Note
You can use this header, for example, to analyze the message processing run
(when level Trace has been enabled for monitoring). Furthermore, you can use this
header to define error handling steps after the integration flow has called the
SOAP receiver.
Caution
It is recommended that you model the integration flow in such a way that the header CamelHttpResponseCode is deleted after it has been evaluated. The reason is that this header can have an impact on the communication with a sender system if one of the following sender adapters is used in the same integration flow: SOAP 1.x, XI, IDoc, SOAP SAP RM. In such a case, the value of the header CamelHttpResponseCode also determines the response code used in the connection with the sender system, which in most cases is not the desired behavior.
Furthermore, note that if the SOAP 1.x receiver channel uses a WSDL with a one-way operation, the header CamelHttpResponseCode is not set (even if the feature Return HTTP Response Code as Header is activated).
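The cleanup recommended in the caution above can be pictured as follows; the dictionary-based message and the helper handle_soap_response are hypothetical stand-ins for the actual exchange and flow steps:

```python
def handle_soap_response(headers):
    """Evaluate CamelHttpResponseCode for error handling, then delete it so
    it cannot leak into the response code sent back to the sender system."""
    code = int(headers.get("CamelHttpResponseCode", 200))
    ok = 200 <= code < 300
    # Modeled cleanup step: remove the header after it has been evaluated.
    headers.pop("CamelHttpResponseCode", None)
    return ok

headers = {"CamelHttpResponseCode": "500"}
print(handle_soap_response(headers), headers)
# → False {}
```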
Clean Up Request Headers Select this option to clean up the adapter-specific headers after the receiver call.
Related Information
WS-Security Configuration for the Sender SOAP 1.x Adapter [page 86]
WS-Security Configuration for the Receiver SOAP 1.x Adapter [page 86]
https://blogs.sap.com/2018/01/25/cloud-integration-soap-adapter-web-service-security/
https://blogs.sap.com/2018/01/24/cloud-integration-wss-between-cloud-integration-and-sap-po-soap-
adapter/
You use a sender channel to configure how inbound messages are to be treated at the tenant’s side of the
communication.
● How the tenant verifies the payload of an incoming message (signed by the sender)
● How the tenant decrypts the payload of an incoming message (encrypted by the sender)
The sender SOAP 1.x adapter allows the following combination of message-level security options:
● Verifying a payload
● Verifying and decrypting a payload
For a detailed description of the SOAP adapter WS-Security parameters, check out Configure the SOAP (SOAP
1.x) Sender Adapter [page 655] (under WS-Security).
With a receiver channel you configure the outbound communication at the tenant’s side of the communication.
● How the tenant signs the payload of a message (to be verified by the receiver)
● How the tenant encrypts the payload of a message (to be decrypted by the receiver)
The receiver SOAP 1.x adapter allows the following combination of message-level security options:
● Signing a payload
● Signing and encrypting a payload
Signing and encryption (and verification and decryption) are based on a specific setup of keys, as illustrated in the figures. Moreover, for the message exchange, specific communication rules apply as agreed between the administrators of the Web service client and Web service provider (for example, whether certificates are to be sent with the message).
There are two options how these security and communication settings can be specified:
For a detailed description of the SOAP adapter WS-Security parameters, check out Configure the SOAP (SOAP
1.x) Receiver Adapter [page 667] (under WS-Security).
Prerequisites
You (the tenant administrator) can provision the message broker for AS2 adapter scenarios only if you have an Enterprise Edition license.
Note
You have to set up a cluster for the usage of the message broker. For more details refer .
Caution
Do not use this adapter type together with Data Store Operations steps, Aggregator steps, or global
variables, as this can cause issues related to transactional behavior.
Context
You use this procedure to configure a sender and receiver channel of an integration flow with the AS2 adapter. You can use this adapter to exchange business-specific documents with your partner through the AS2 protocol, and to encrypt/decrypt, compress/decompress, and sign/verify the documents.
Restriction
An integration flow that you deploy in SAP Cloud Platform Integration is deployed on multiple IFLMAP worker nodes. Polling is triggered from only one of the worker nodes. The message monitoring currently displays the process status from the worker nodes where the Scheduler was not started. This results in the message monitor displaying messages with a processing time of less than a few milliseconds, for which the schedule was not triggered. These entries contain firenow=true in the log. You can ignore these entries.
Procedure
1. Double-click the channel that you want to configure on the Model Configuration tab page.
2. Select the General tab page.
3. Choose Browse in the Adapter Type screen area.
4. Select AS2 in the Choose Adapter window and choose OK.
5. If you are configuring the sender channel, choose AS2 or AS2 MDN in the Message Protocol field; for the receiver channel, choose AS2.
○ If you are configuring the sender channel to receive AS2 messages, choose the AS2 message protocol.
○ If you are configuring the sender channel to receive asynchronous AS2 MDNs, choose the AS2 MDN message protocol.
○ To call the AS2 sender channel, use the URL pattern http://<host>:<port>/as2/as2; for the AS2 MDN sender channel, use http://<host>:<port>/as2/mdn.
○ To make troubleshooting easier, it is recommended that you name the AS2 sender channel, because part of the JMS queue name contains the AS2 sender channel name.
6. Choose the Processing tab under Adapter-Specific tab page and enter the details.
Fields Description
Message ID Left Part Specify the left side of the AS2 message ID. A regular expression or '.*' is allowed.
Message ID Right Part Specify the right side of the AS2 message ID. A regular expression or '.*' is allowed.
Partner AS2 ID Specify the partner's AS2 ID. A regular expression or '.*' is allowed.
Own AS2 ID Specify your own AS2 ID. A regular expression or '.*' is allowed.
Number of Concurrent Processes The number provided determines the processes that run in parallel for each worker node; it must be less than 99. The value depends on the number of worker nodes, the number of queues on the tenant, and the incoming load.
User Role (For sender only) Provide a role as defined in the tenant system, to check inbound sender authorization.
File name (For receiver only) Specify the AS2 filename. If no filename is specified, the default filename <Own AS2 ID>_File is used. Use of a simple expression, ${header.<header-name>} or ${property.<property-name>}, is allowed.
Own E-mail address (For receiver only) Specify your own email ID. Use of a simple expression, ${header.<header-name>} or ${property.<property-name>}, is allowed.
Content Type (For receiver only) Specify the content type of the outgoing message, for example, application/edi-x12. Use of a simple expression, ${header.<header-name>} or ${property.<property-name>}, is allowed.
Note
If header value is set, it takes precedence over actual
value configured in the channel.
Custom Headers Pattern (For receiver only) Specify a regular expression to pick message headers and add them as AS2 custom headers.
Content Transfer Encoding (For receiver only) Specify the AS2 message encoding type.
Note
○ You should ensure that the combination of the Message ID Left Part, Message ID Right Part, Partner AS2 ID, Own AS2 ID, and Message Subject parameters is unique across all AS2 sender channels.
○ If you use regular expressions for the above-mentioned AS2 sender parameters, you must ensure that the regular expression configuration is unique across the endpoints.
○ The runtime identifies the relevant channel and integration flow for the incoming AS2 sender message based on the above-mentioned parameters.
○ The AS2 adapter now supports Camel attachments that contain headers. From the message payload, the AS2 adapter preserves and reads the corresponding header values that are part of the attachment.
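The channel-identification rule in the note above can be sketched as follows; the channel definitions are hypothetical, and matching on only two of the five parameters is an illustrative simplification:

```python
import re

# Hypothetical AS2 sender channel configurations; '.*' acts as a wildcard.
channels = [
    {"name": "OrdersChannel", "partner_as2_id": "PartnerA", "own_as2_id": "MyCompany"},
    {"name": "CatchAllChannel", "partner_as2_id": ".*", "own_as2_id": "MyCompany"},
]

def find_channel(partner_id, own_id):
    """Return the first channel whose configured patterns fully match the
    incoming message's AS2 IDs (simplified to two of the five parameters)."""
    for ch in channels:
        if re.fullmatch(ch["partner_as2_id"], partner_id) and \
           re.fullmatch(ch["own_as2_id"], own_id):
            return ch["name"]
    return None

print(find_channel("PartnerA", "MyCompany"))
# → OrdersChannel
```

Because the first full match wins, overlapping patterns across channels would make routing ambiguous, which is why the documentation requires the parameter combinations to be unique.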
7. Choose the Security tab under Adapter-Specific tab page and enter the details.
Decrypt Message (For sender only) Ensures that the message is decrypted.
Note
If header value is set, it takes precedence over actual
value configured in the channel.
Private Key Alias (For sender only) If you select Decrypt Message, then this field is enabled. Specify the private key alias to decrypt the AS2 message.
Verify Signature (For sender only) Ensures that the signature is verified.
Public Key Alias (For sender only) If you select Verify Signature of Message, then this field is enabled. Specify the public key alias to verify the signature of the AS2 message.
Compress Message (For receiver only) Ensures that the outgoing message is compressed.
Note
If header value is set, it takes precedence over actual
value configured in the channel.
Sign Message (For receiver only) Ensures that the outgoing AS2 Message is signed.
Note
If header value is set, it takes precedence over actual
value configured in the channel.
Algorithm (For receiver only) If you select Sign Message, then this field is enabled. Select the AS2 Message signing algorithm.
Note
If header value is set, it takes precedence over actual
value configured in the channel.
Private Key Alias (For receiver only) If you select Sign Message, then this field is enabled. Specify the private key alias to sign the AS2 Message. Use of a simple expression, ${header.<header-name>} or ${property.<property-name>}, is allowed.
Encrypt Message (For receiver only) Ensures that the message is encrypted.
Note
If header value is set, it takes precedence over actual
value configured in the channel.
Algorithm (For receiver only) If you select Encrypt Message, then this field is enabled. Select the AS2 Message encryption algorithm.
Note
If header value is set, it takes precedence over actual
value configured in the channel.
Public Key Alias (For receiver only) If you select Encrypt Message, then this field is enabled. Specify the public key alias to encrypt the AS2 Message. Use of a simple expression, ${header.<header-name>} or ${property.<property-name>}, is allowed. The header or property can contain a public key alias or an X509 certificate.
Key Length (For receiver only) If you select Encrypt Message and choose RC2 for the Algorithm field, then this field is enabled. Specify the public key length.
Note
If header value is set, it takes precedence over actual
value configured in the channel.
8. Choose the MDN tab under Adapter-Specific tab page and enter the details.
Fields Description
Private Key Alias for Signature (For sender only) Specify the private key alias to sign the MDN on the partner's request.
Signature Encoding (For sender only) Select the MDN signature encoding type.
Authentication for Asynchronous MDN (For sender Select authentication type for asynchronous MDN.
only)
Timeout (For sender only) Specify the time in milliseconds during which the client has to accept the asynchronous MDN before the timeout occurs. Enter the value '0' if you want the client to wait indefinitely.
Number of Concurrent Processes (For Sender Only) The number provided determines the processes that run in parallel for each worker node; it must be less than 99. The value depends on the number of worker nodes, the number of queues on the tenant, and the incoming load.
Type (For receiver only) Enable this option to request the partner to send a Message Integrity Check (MIC) in the AS2 MDN.
Note
If header value is set, it takes precedence over actual
value configured in the channel.
Target URL (For receiver only) If you choose the asynchronous MDN type, then this field is enabled. Specify the URL on which the AS2 MDN will be received from the partner. Use of a simple expression, ${header.<header-name>} or ${property.<property-name>}, is allowed.
Note
If header value is set, it takes precedence over actual
value configured in the channel.
Request Signing (For receiver only) If you choose the asynchronous or synchronous MDN type, then this field is enabled. You can enable this option to request the partner to sign the AS2 MDN.
Algorithm (For receiver only) If you enable the Request Signing option, then this field is enabled.
Note
If header value is set, it takes precedence over actual
value configured in the channel.
Verify Signature (For receiver only) If you choose the synchronous MDN type, then this field is enabled. You can enable this option to verify the signature of the AS2 MDN.
Public Key Alias (For receiver only) If you select Verify Signature, then this field is enabled. Specify the public key alias to verify the MDN signature. Use of a simple expression, ${header.<header-name>} or ${property.<property-name>}, is allowed. The header or property can contain a public key alias or an X509 certificate.
Request MIC (For receiver only) If you want to request an integrity check, then you can enable this option.
Verify MIC (For receiver only) If you choose the synchronous MDN type, then this field is enabled. You can enable this option to verify the MIC of the AS2 MDN.
If you enable the Request MIC option, then you can also enable this option if you want to verify the integrity of the message.
Note
○ You can configure the AS2 receiver channel for the Request-Reply integration flow element. If you request a synchronous MDN, then the adapter sets the received MDN response as the message payload.
○ If you request a synchronous MDN in the receiver channel, you may receive a positive or negative MDN. In both cases, the status of the message in the Message Monitoring tab is COMPLETED. You can process the MDN message on your own and take the required action for a positive or negative MDN after the AS2 call for the synchronous MDN.
○ In the MDN message, a positive MDN is represented as shown below:
Sample Code
○ If MDN signature validation fails or an incorrect message integrity check (MIC) is received, then the status of the message is FAILED.
Parameters Description
Retry Interval (in min) Enter a value for the amount of time to wait before retrying message delivery.
Exponential Backoff Enter a value to double the retry interval after each unsuccessful retry.
Maximum Retry Interval (in min) Enter a value for the maximum amount of time to wait before retrying message delivery.
Dead-Letter Queue Select this option to place the message in the dead-letter queue if it cannot be processed after two retries.
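A sketch of how these retry parameters interact, assuming the interval doubles after each unsuccessful retry and is capped at the configured maximum:

```python
def retry_schedule(retry_interval_min, exponential_backoff, max_interval_min, retries):
    """Compute the wait time (in minutes) before each retry: with exponential
    backoff, the interval doubles after every unsuccessful retry, capped at
    the configured maximum retry interval."""
    schedule, interval = [], retry_interval_min
    for _ in range(retries):
        schedule.append(interval)
        if exponential_backoff:
            interval = min(interval * 2, max_interval_min)
    return schedule

# 1-minute start, doubling, capped at 60 minutes.
print(retry_schedule(1, True, 60, 8))
# → [1, 2, 4, 8, 16, 32, 60, 60]
```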
10. Choose the Connection tab under Adapter-Specific tab page for AS2 receiver channel and enter the details.
Fields Description
Recipient URL (For receiver only) Specify the partner's AS2 URL. Use of a simple expression, ${header.<header-name>} or ${property.<property-name>}, is allowed.
Note
If a header value is set, it takes precedence over the actual value configured in the channel.
URL Parameters Pattern (For receiver only) Specify a regular expression to pick message headers and add them as AS2 URL parameters.
Authentication Type (For receiver only) Specify the authentication type for invoking the recipient URL.
Credential Name (For receiver only) If you select basic authentication, then this field is enabled.
Private Key Alias (For receiver only) If you select client certificate authentication, then this field is enabled.
Timeout (in ms) (For receiver only) Specify the time in milliseconds during which the client has to accept the AS2 message before the timeout occurs.
Note
○ AS2 sender passes the following headers to the integration flow for message processing:
○ AS2PartnerID
○ AS2OwnID
○ AS2MessageSubject
○ AS2Filename
○ AS2MessageID
○ AS2PartnerEmail
○ AS2MessageContentType
○ AS2 MDN sender passes the following headers to the integration flow for message processing:
○ AS2PartnerID
○ AS2OwnID
○ AS2MessageID
○ AS2MessageContentType
○ AS2OriginalMessageID
○ You can configure the AS2 sender to retry messages if any error occurs during integration flow processing.
○ You can use the parameter Retry Interval (in m) to enter a value for the amount of time to wait before retrying message delivery.
The AS2 sender adapter writes messages into a JMS queue prior to processing them further. The retry handling (in case a Retry Interval (in m) is configured) is analogous to that described for the JMS adapter (see the Related Link below).
○ You can use the parameter Exponential Backoff to double the retry interval after each unsuccessful retry.
○ You can use the parameter Maximum Retry Interval (in m) to set an upper limit on the value to avoid an endless increase of the retry interval. The default value is 60 minutes.
{AdapterId=AS2 Receiver,
adapterMessageId=<c31b34955-d799-4219-9f1c-5dd3a044e4@HCIAS2>,
SAP_MplCorrelationId=AFgsEou7oJYm7AqQHsV2lM2T6iTT,
ReceiverAS2Name=ASCDSSCSAS2,
MessageDirection=Outbound,
MDNType=Receiving,
MPL ID=AFgsEosepT9fR54od_XHp6yWu6Gs,
MDNRequested=Asynchronous,
SenderAS2Name=HCIAS2,
AS2MessageID=<c31b5955-d799-4219-9f1c-5dde63a044e4@HCIAS2>}
{AdapterId=AS2 Sender,
adapterMessageId=<define_AS2-147665710-6@endionAS2_HCIAS2>,
ReceiverAS2Name=HCIAS2,
MessageDirection=Inbound,
MDNType=Sending,
MDNStatus=Success,
MPL ID=AFgsPspcD-eYhvHFdfOZYKydBmzw,
MDNRequested=Synchronous,
SenderAS2Name=endionAS2,
AS2MessageID=<define_AS2-147665710-6@endionAS2_HCIAS2>}
{AdapterId=AS2 Sender,
adapterMessageId=<define_AS2-14798922282-7@gibsonAS2_HCIAS2>,
ReceiverAS2Name=HCIAS2,
MessageDirection=Inbound,
MDNType=Sending,
MDNStatus=Success,
MPL ID=AFgsQ0_3KdRx-UiOjcwGruy6Xw4V,
MDNRequested=Asynchronous,
SenderAS2Name=gibsonAS2,
AS2MessageID=<define_AS2-14798922282-7@gibsonAS2_HCIAS2>}
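The MDN log entries above follow a simple {key=value, …} layout. A small sketch (not an official parsing API) that reads such an entry for further analysis:

```python
def parse_mdn_log(entry):
    """Parse a '{key=value,\n key=value}' MDN log entry (as in the samples
    above) into a dictionary. Each pair is split on its first '=' only, so
    message IDs containing further characters survive intact."""
    body = entry.strip().strip("{}")
    pairs = (item.strip() for item in body.split(",\n"))
    return dict(item.split("=", 1) for item in pairs if item)

entry = """{AdapterId=AS2 Sender,
MDNType=Sending,
MDNStatus=Success,
MDNRequested=Synchronous}"""
print(parse_mdn_log(entry)["MDNStatus"])
# → Success
```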
Related Information
The HTTP adapter allows you to configure an outbound HTTP connection from SAP Cloud Platform Integration
to a receiver.
Prerequisites
If you want to send strings with the HTTP receiver adapter that contain non-ASCII characters (for example, German umlauts or Cyrillic characters), make sure that you do the following (using a Content Modifier):
Use the Content-Type header to specify the media type that the receiver can expect (for example, text/
plain for unformatted text).
Set the value of the CamelCharsetName property or header to the desired character set (for example, UTF-8).
Note
If you don't specify the character set in the proposed way, the HTTP adapter sends ASCII strings. This will
lead to errors when your data contains non-ASCII characters.
Context
The HTTP adapter supports only HTTP 1.1. This means that the target system must support chunked transfer
encoding and may not rely on the existence of the HTTP Content-Length header.
You can configure a channel with the HTTP adapter type for outbound calls (from the tenant to a receiver). If you want to dynamically override the configuration of the adapter, you can set the following headers before calling the HTTP adapter:
● CamelHttpUri
Overrides the existing URI set directly in the endpoint.
This header can be used to dynamically change the URI to be called.
● CamelHttpQuery
Refers to the query string that is contained in the request URL.
In the context of a receiver adapter, this header can be used to dynamically change the URI to be called.
For example, CamelHttpQuery=abcd=1234.
● Content-Type
HTTP content type that fits to the body of the request.
The content type is composed of two parts: a type and a subtype. For example, image/jpeg (where image is the type and jpeg is the subtype).
Examples:
○ text/plain for unformatted text
○ text/html for text formatted with HTML syntax
○ image/jpeg for a jpeg image file
○ application/json for data in JSON format to be processed by an application that requires this
format
Note
If transferring text/* content types, you can also specify the character encoding in the HTTP header
using the charset parameter.
The default character encoding that will be applied for text/* content types depends on the HTTP
version: us-ascii for HTTP 1.0 and iso-8859-1 for HTTP 1.1.
Text data in string format is converted using UTF-8 by default during message processing. If you want
to override this behavior, you can use the Content Modifier step and specify the CamelCharsetName
Exchange property. To avoid encoding issues when using this feature together with the HTTP adapter,
consider the following example configuration:
If you use a Content Modifier step and you want to send iso-8859-1-encoded data to a receiver, make
sure that you specify the CamelCharsetName Exchange property (either header or property) as
iso-8859-1. For the Content-Type HTTP header, use text/plain; charset=iso-8859-1.
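The recommended interplay of the Content-Type charset and the CamelCharsetName property can be sketched as follows; prepare_http_body is a hypothetical helper standing in for the Content Modifier configuration, not a Cloud Integration API:

```python
def prepare_http_body(text, charset="utf-8"):
    """Encode the message body with the given charset and build a matching
    Content-Type header plus the CamelCharsetName override, as recommended
    for payloads containing non-ASCII characters."""
    headers = {
        "Content-Type": f"text/plain; charset={charset}",
        "CamelCharsetName": charset,  # overrides the UTF-8 default
    }
    return headers, text.encode(charset)

# German umlauts survive only if the charset is set explicitly and
# consistently in both places.
headers, body = prepare_http_body("Grüße", "iso-8859-1")
print(headers["Content-Type"], body)
```

Keeping the charset parameter of Content-Type and the CamelCharsetName value in sync is the point of the example configuration above: a mismatch produces mojibake on the receiver side.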
● Content-Encoding
HTTP content encoding that indicates the encoding used during message transport (for example, gzip for
GZIP file compression).
This information is used by the receiver to retrieve the media type that is referenced by the content-type
header.
If this header is not specified, the default value identity (no compression) is used.
More information: https://tools.ietf.org/html/rfc2616 (section 14.11)
The list of available content types is maintained by the Internet Assigned Numbers Authority (IANA). For more information, see: http://www.iana.org/assignments/http-parameters/http-parameters.xhtml#content-coding.
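The effect of the CamelHttpUri and CamelHttpQuery headers described above can be sketched as follows; effective_url is an illustrative helper, not the adapter's actual code, and the URLs are hypothetical:

```python
def effective_url(configured_address, headers):
    """CamelHttpUri overrides the endpoint address configured in the channel;
    CamelHttpQuery supplies the query string appended to it."""
    url = headers.get("CamelHttpUri", configured_address)
    query = headers.get("CamelHttpQuery")
    return f"{url}?{query}" if query else url

headers = {
    "CamelHttpUri": "https://other-system.com/api",
    "CamelHttpQuery": "abcd=1234",
}
print(effective_url("https://mysystem.com", headers))
# → https://other-system.com/api?abcd=1234
```

Without either header, the adapter falls back to the address configured in the channel.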
Procedure
Note
Field Description
Address URL of the target system that you are connecting to, for
example, https://mysystem.com
○ throwExceptionOnFailure
○ bridgeEndpoint
○ transferException
○ client
○ clientConfig
○ binding
○ sslContextParameters
○ bufferSize
Query Query string that you want to send with the HTTP request
Note
If you want to send parameters in the query string of
the HTTP adapter, these parameters must be coded
in a URL-compatible way. Individual parameter-value
pairs must be separated with an ”&” and there must
be an “=” between the name of a parameter and its
value.
Example 1)
parameter1=123, parameter2=abc
Example 2)
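The URL-encoding rule from the note above (pairs joined with "&", "=" between name and value, values percent-encoded) can be sketched with Python's standard library; the parameter names are illustrative only:

```python
from urllib.parse import urlencode

# Parameter-value pairs are joined with '&', with '=' between each name and
# its value; urlencode also percent-encodes values (spaces become '+') so
# the query string stays URL-compatible.
params = {"parameter1": "123", "parameter2": "abc def"}
print(urlencode(params))
# → parameter1=123&parameter2=abc+def
```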
Proxy Type The type of proxy that you are using to connect to the target system:
○ Select Internet if you are connecting to a cloud system.
○ Select On-Premise if you are connecting to an on-premise system.
Note
If you select the On-Premise option, the following restrictions apply to other parameter values:
○ Do not use an HTTPS address for Address, as it leads to errors when performing consistency checks or during deployment.
○ Do not use the option Client Certificate for the Authentication parameter, as it leads to errors when performing consistency checks or during deployment.
Note
If you select the On-Premise option and use the SAP Cloud Connector to connect to your on-premise system, the Address field of the adapter references a virtual address, which has to be configured in the SAP Cloud Connector settings.
Location ID (only in case On-Premise is selected for Proxy Type) To connect to a Cloud Connector instance associated with your account, enter the location ID that you defined for this instance in the destination configuration on the cloud side. You can also enter ${header.headername} or ${property.propertyname} to dynamically read the value from a header or a property.
Method The HTTP method to be used for the request:
○ POST
Requests that the receiver accepts the data enclosed in the request body.
○ DELETE
Requests that the origin server delete the resource identified by the Request-URI.
○ Dynamic
The method is determined dynamically by reading a value from a message header or property such as ${header.abc} or ${property.abc} during runtime.
○ GET
Sends a GET request to the receiver.
○ HEAD
Sends a HEAD request, which is similar to a GET request but does not return a message body.
○ PUT
Updates or creates the enclosed data on the receiver side.
○ TRACE
Sends a TRACE request to the receiver, which sends the message back to the caller.
Send Body Select this checkbox if you want to send the body of the message with the request. For the methods GET, DELETE, and HEAD, the body is not sent by default because some HTTP servers do not support this function.
This field is enabled only if you select for Method the option GET, DELETE, HEAD, or Dynamic.
Authentication Defines how the tenant (as the HTTP client) will authenti
cate itself against the receiver.
○ None
○ Basic
The tenant authenticates itself against the receiver
using user credentials (user name and password).
It is a prerequisite that user credentials are specified
in a Basic Authentication artifact and deployed on the
related tenant.
○ Client Certificate
The tenant authenticates itself against the receiver
using a client certificate.
It is a prerequisite that the required key pair is instal
led and added to a keystore. This keystore has to be
deployed on the related tenant. The receiver side has
to be configured appropriately.
Note
You can externalize all attributes related to the configuration of the authentication option. This includes the attributes with which you specify the authentication option as such, as well as all attributes with which you specify further security artifacts that are required for any configurable authentication option (Private Key Alias or Credential Name).
○ Principal Propagation
The tenant authenticates itself against the receiver by
forwarding the principal of the inbound user to the
cloud connector, and from there to the back end of
the relevant on-premise system.
Note
This authentication method can only be used with the following sender adapters: HTTP, SOAP, IDoc.
Note
The token for principal propagation expires after 30 minutes.
Note
In the following cases certain features might not
be available for your current integration flow:
○ A feature for a particular adapter or step was
released after you created the correspond
ing shape in your integration flow.
○ You are using a product profile other than
the one expected.
Credential Name Identifies the User Credential artifact that contains the
credentials (user name and password).
(This field is enabled only if you select the option Basic for
Authentication.)
Note
You can dynamically configure the Credential Name property
by specifying either a header or a parameter name in
one of the following ways: ${header.headername}
or ${parameter.parametername}. As an example,
you can use a Script step before the adapter where you
look up the User Credentials and enter the base64-encoded
values for user and password into the header
Authorization. The HTTP adapter will then use this
header in the HTTP request.
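The Script-step approach described above can be sketched as follows. This is a minimal Python illustration (the actual Script step in Cloud Integration would typically be written in Groovy or JavaScript; the user name and password here are placeholder values):

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    """Build the value for the Authorization header from user credentials,
    as a Script step placed before the HTTP adapter might do."""
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

# The adapter would then send this header value with the HTTP request.
print(basic_auth_header("myuser", "secret"))  # Basic bXl1c2VyOnNlY3JldA==
```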
Private Key Alias Enter the private key alias that enables the system to fetch
the private key from the keystore for authentication.
(This option is enabled only if you select client certificate
authentication.)
Restriction
The values true and false are not supported for this field.
Timeout (in ms) Maximum time that the tenant waits for a response before
terminating message processing.
Related Information
The HTTPS sender adapter allows you to accept incoming HTTP requests on a specific address.
Context
Supported Header:
● SapAuthenticatedUserName
Contains the user name of the client that calls the integration flow.
If the sender channel is configured to use client certificate authentication, no such header is set (as it is not
available in this case).
The following HTTP request headers for the sample HTTP endpoint https://
test.bsn.neo.ondemand.com/http/hello?abcd=1234 are added to exchange headers for further
processing in integration flow:
● CamelHttpUrl
Refers to the complete URL called, without query parameters.
For example, CamelHttpUrl=https://test.bsn.neo.ondemand.com/http/hello.
● CamelHttpQuery
Refers to the query string that is contained in the request URL.
In the context of a receiver adapter, this header can be used to dynamically change the URI to be called.
For example, CamelHttpQuery=abcd=1234.
● CamelHttpMethod
Refers to the incoming method names used to make the request. These methods are GET, POST, PUT,
DELETE, and so on.
● CamelServletContextPath
Refers to the path specified in the address field of the channel.
For example, if the address in the channel is /abcd/1234, then CamelServletContextPath is /abcd/1234.
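The header derivation above can be illustrated with a small sketch. This is not the adapter's implementation; it only mimics how the Camel-style headers relate to the request URL (the channel address /hello is a hypothetical value for this endpoint):

```python
from urllib.parse import urlsplit

def camel_headers(request_url: str, channel_address: str) -> dict:
    """Derive the exchange headers described above from an incoming
    request URL. The names mirror the Camel conventions."""
    parts = urlsplit(request_url)
    return {
        # Complete URL called, without query parameters
        "CamelHttpUrl": f"{parts.scheme}://{parts.netloc}{parts.path}",
        # Raw query string contained in the request URL
        "CamelHttpQuery": parts.query,
        # Path as specified in the address field of the channel
        "CamelServletContextPath": channel_address,
    }

headers = camel_headers("https://test.bsn.neo.ondemand.com/http/hello?abcd=1234", "/hello")
print(headers["CamelHttpUrl"])    # https://test.bsn.neo.ondemand.com/http/hello
print(headers["CamelHttpQuery"])  # abcd=1234
```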
Procedure
1. Double-click the sender channel that you want to configure on the Model Configuration tab page.
2. Choose the General tab page.
3. Select HTTPS from the Adapter Type dropdown list.
4. Choose the Adapter Specific tab page.
5. Choose the Connection tab page and specify the following attributes.
Parameter Description
Note
○ Use the following pattern: http://
<host>:<port>/http, appended by the unique
address specified in the channel.
○ The field value supports the characters ~, -, .,
$, and *.
○ The Address field must start with '/' and can
contain alphanumeric values, '_', and '/'. For
example, a valid address is /test/123.
○ In the example above, you can use ~ only in
the address part that succeeds /test/.
○ You can use $ only at the beginning of the
address, after /.
○ You can use * only at the very end of the
address; no characters are allowed after *, and
a * can only be preceded by /.
○ The address cannot begin with ., -, or ~. These
characters must be succeeded by an
alphanumeric value or _.
○ If you use /*, URIs containing the prefix that
precedes the /* are supported. For example, if
the address is /Customer/*, the supported
URIs are http://<host>:<port>/http/
Customer/<Any-url>.
○ URIs are case insensitive, so http://
<host>:<port>/http/test and http://
<host>:<port>/http/Test are treated as the
same address.
Note
When you specify the endpoint address /path, a
sender can also call the integration flow through the
endpoint address /path/<any string> (for ex
ample, /path/test/).
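A sketch of some of the address rules above, under simplifying assumptions: it covers the leading '/', the alphanumeric/'_'/'/' body, and the trailing '/*' wildcard, but not the '~', '$', '.', and '-' placement rules:

```python
import re

def is_valid_address(address: str) -> bool:
    """Check an endpoint address against a subset of the stated rules.
    Simplified sketch, not the platform's actual validation."""
    # Optional trailing wildcard: '*' only at the very end, preceded by '/'
    if address.endswith("/*"):
        address = address[:-2] or "/"
    elif "*" in address:
        return False
    # Must start with '/' and otherwise contain alphanumerics, '_' or '/'
    return re.fullmatch(r"/[A-Za-z0-9_/]*", address) is not None

print(is_valid_address("/test/123"))    # True
print(is_valid_address("/Customer/*"))  # True
print(is_valid_address("test/123"))     # False (must start with '/')
print(is_valid_address("/a*b"))         # False ('*' only allowed at the end)
```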
Note
○ During an inbound HTTPS communication, if the
sender adapter receives a GET or HEAD request
to fetch the CSRF token value and you have
enabled CSRF Protected, the adapter returns
the CSRF token and stops processing the
message further.
○ Include X-CSRF-Token in the HTTP header field
for all modifying requests; these requests are
validated during runtime. If the validation fails,
the server returns the "HTTP 403 Forbidden"
status code.
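The CSRF handshake described in these notes can be sketched as follows. This is an illustrative in-memory model, not the adapter's actual implementation:

```python
import secrets

class CsrfProtectedEndpoint:
    """Sketch of the described behavior: a GET/HEAD with 'X-CSRF-Token: Fetch'
    returns a token, and modifying requests must present that token or
    receive HTTP 403 Forbidden."""

    def __init__(self):
        self._token = None

    def handle(self, method, headers):
        sent = headers.get("X-CSRF-Token")
        if method in ("GET", "HEAD") and sent == "Fetch":
            # Issue a token and stop processing the message further
            self._token = secrets.token_hex(16)
            return 200, self._token
        if method in ("POST", "PUT", "DELETE"):
            # Modifying requests are validated against the issued token
            if self._token is None or sent != self._token:
                return 403, None  # validation failed: Forbidden
            return 200, None
        return 200, None

endpoint = CsrfProtectedEndpoint()
status, token = endpoint.handle("GET", {"X-CSRF-Token": "Fetch"})
print(endpoint.handle("POST", {"X-CSRF-Token": token})[0])    # 200
print(endpoint.handle("POST", {"X-CSRF-Token": "wrong"})[0])  # 403
```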
Note
○ Additional incoming request headers and URL parameters can be added to exchange headers for
further processing in the integration flow. You must define these headers and parameters in the
Allowed Headers list at integration flow level.
○ Once the integration flow processing completes, the HTTPS sender adapter returns the header and
body to the end user and sets the response code. You can use a Content Modifier element to send
back a specific HTTP response and customize the response.
○ Only basic authentication is supported for the HTTP calls, and the ESBMessaging.send role must be
assigned to the user.
○ Address URLs for HTTP endpoints must be unique across integration flows. If an address is not
unique, the integration flow does not start.
○ The adapter returns the following HTTP response codes:
○ 200 - Processing is successful
○ 503 - Service is not available
○ 500 - Exception during integration flow processing
Also, you can set the header CamelHttpResponseCode to customize the response code.
○ You can invoke the HTTP endpoints using the syntax <Base URI>/http/<Value of address field>.
You can get Base URI value from Services tab in Properties view of a worker node.
At least one integration flow with a SOAP endpoint must be deployed to view details in the Services tab.
Context
Unlike the standard FTP, the SFTP adapter uses a certificate and keystore to authenticate the file transfer. The
SFTP connector achieves secure transfer by encrypting sensitive information before transmitting it on the
network.
Note
The clock icon on a message flow indicates polling of messages at regular intervals.
If you want to dynamically override the configuration of the adapter, you can set the following header before
calling the SFTP adapter:
● CamelFileName
Overrides the existing file and directory name that is set directly in the endpoint.
This header can be used to dynamically change the name of the file and directory to be called.
The following examples show the header CamelFileName, read via XPath from the payload, or set using an
expression:
Example of Header (table columns: Name, Type, Data Type, Value)
SAP Cloud Platform Integration for processes currently supports the following ciphers for SSH (SFTP)
communication: blowfish-cbc,3des-cbc,aes128-cbc,aes192-cbc,aes256-cbc,aes128-ctr,aes192-ctr,aes256-ctr,
3des-ctr,arcfour,arcfour128,arcfour256.
Caution
The ciphers listed above can change in the future. New ciphers can be added and existing ones can be
removed in case of security weaknesses. In such cases, you will have to change the ciphers on the SFTP
server and reconfigure the integration flows that contain an SFTP adapter. SAP will inform customers if
necessary.
Caution
If you select the Run Once option in the Scheduler, you see messages triggered from all integration flows
with this setting after a software update. After the latest software is installed on a cluster, the cluster is
restarted, and you then see messages from these integration flows with the Run Once setting.
Related Information
You can use the SFTP sender adapter to transfer files from an SFTP server to the tenant using the SSH
protocol.
Context
Restriction
An integration flow that you deploy in SAP Cloud Platform Integration is deployed on multiple IFLMAP
worker nodes. Polling is triggered from only one of the worker nodes. The message monitoring currently
also displays the process status from the worker nodes where the Scheduler was not started. This results
in the message monitor displaying messages with a duration of less than a few milliseconds for nodes
where the schedule was not triggered. These entries contain firenow=true in the log. You can ignore
these entries.
Procedure
1. Select the integration flow you want to configure and choose Edit.
2. Choose the communication channel you want to configure.
To configure a sender channel, click on a connection between a sender and the Integration Process
component.
3. On the General tab page, provide a channel name and description in the relevant fields, if required.
4. Choose the Adapter Specific tab page and provide values in the fields based on the descriptions in the table.
On the Source tab of the sender channel, specify the following attributes.
Parameters Description
Directory Use the relative path to read the file from a directory, for example, <dir>/<subdir>.
Note
If you do not enter a file name and the parameter remains blank, all the files in
the specified directory are read.
Note
Usage of file name pattern:
Expressions, such as ab*, a.*, *a*, ?b, and so on, are supported.
Examples:
If you specify file*.txt as the File Name, the following files are polled by
the adapter: file1.txt, file2.txt, as well as file.txt and
file1234.txt, and so on.
If you specify file?.txt as the File Name, the following files are polled by
the adapter: file1.txt, file2.txt, and so on, but not the files
file.txt or file1234.txt.
Although you can configure this feature, it is not supported when using the
corresponding integration content with the SAP Process Orchestration (SAP
PO) runtime in releases lower than SAP PO 7.5 SP5.
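The wildcard semantics above ('*' for any sequence of characters, '?' for exactly one character) can be sketched by translating the pattern to a regular expression. This is an illustrative re-implementation, not the adapter's own code:

```python
import re

def matches_pattern(pattern: str, file_name: str) -> bool:
    """Sketch of the file-name pattern matching described above:
    '*' matches any sequence (including none), '?' matches one character."""
    regex = "".join(
        ".*" if ch == "*" else "." if ch == "?" else re.escape(ch)
        for ch in pattern
    )
    return re.fullmatch(regex, file_name) is not None

# file*.txt polls file.txt and file1234.txt alike
print(matches_pattern("file*.txt", "file.txt"))      # True
print(matches_pattern("file*.txt", "file1234.txt"))  # True
# file?.txt polls file1.txt but not file.txt or file1234.txt
print(matches_pattern("file?.txt", "file1.txt"))     # True
print(matches_pattern("file?.txt", "file.txt"))      # False
```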
Caution
Files with file names longer than 100 characters will be processed with the
following limitations:
○ If two files with names longer than 100 characters are available for
processing, only one of these files will be processed at a time. This means
that both files will be processed, but not in parallel. This is also the case
if two runtime nodes are available. If the node fails multiple times while
processing a file with a file name longer than 100 characters, none of the
files sharing the first 100 characters with that file can be executed without
manual intervention from the administrator.
○ The option Keep File and Mark as Processed in Idempotent Repository (for
sender channels under Processing) will not work for these files.
Address Host name or IP address of the SFTP server and an optional port, for example,
wdfd00213123:22.
Proxy Type The type of proxy that you are using to connect to the target system.
For more information on how to use the On-Premise option to connect to an on-
premise SFTP server, check out the SAP Community blog Cloud Integration – How
to Connect to an On-Premise sftp server via Cloud Connector .
Location ID (only if On-Premise is selected for Proxy Type)
To connect to an SAP Cloud Connector instance associated with your account, enter
the location ID that you defined for this instance in the destination configuration
on the cloud side.
○ User Name/Password
SFTP server authenticates the calling component based on the user name and
password. To make this configuration setting work, you need to define the user
name and password in a User Credential artifact and deploy the artifact on the
tenant.
○ Public Key
SFTP server authenticates the calling component based on a public key.
Credential Name (only available if you have selected Public Key for Authentication)
Name of the User Credential artifact that contains the user name and password.
Make sure that the user name contains no characters other than A-z, 0-9, _
(underscore), - (hyphen), / (slash), ? (question mark), @ (at), ! (exclamation mark),
$ (dollar sign), ' (apostrophe), ( ) (brackets), * (asterisk), + (plus sign), ,
(comma), ; (semicolon), = (equality sign), . (dot), or ~ (tilde). Otherwise, an
attempt at anonymous login is made, which results in an error.
Timeout (in ms) Maximum time to wait for the SFTP server to be contacted while establishing a
connection or performing a read operation.
The timeout should be more than 0, but less than five minutes.
Maximum Reconnect Attempts Maximum number of attempts allowed to reconnect to the SFTP server.
Default value: 3
Reconnect Delay (in ms) How long the system waits before attempting to reconnect to the SFTP server.
Automatically Disconnect Disconnect from the SFTP server after each message processing.
The following figure illustrates how the properties configured for Authentication are used.
If the option User Name/Password is chosen as Authentication, the user name and password are
determined by a User Credentials artifact (which is specified in the SFTP adapter). On the SFTP server, the
user is authenticated by the password.
If the option Public Key is chosen as Authentication, the user is specified in the SFTP adapter. On the
SFTP server, the user is authenticated by the public key associated with the user.
Parameters Description
Read Lock Strategy Prevents files that are in the process of being written from
being read from the SFTP server. The endpoint waits until
it has an exclusive read lock on a file before reading it.
○ None: Does not use a read lock, which means that the
endpoint can immediately read the file. None is the
simplest option if the SFTP server guarantees that a
file only becomes visible on the server once it is
completely written.
○ Rename: Renames the file before reading. The
Rename option allows clients to rename files on the
SFTP server.
○ Content Change: Monitors changes in the file length/
modification timestamp to determine if the write
operation on the file is complete and the file is ready
to be read. The Content Change option waits for at
least one second until there are no more file changes.
Therefore, if you select this option, files cannot be
read as quickly as with the other two options.
○ Done File Expected: Uses a specific file to signal that
the file to be processed is ready for consumption.
If you have selected this option, enter the name of the
done file. The done file signals that the file to be
processed is ready for consumption. This file must be
in the same folder as the file to be processed.
Placeholders are allowed. Default: ${file:name}.done.
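The ${file:name} placeholder in the done-file name can be sketched as a simple substitution, assuming the default pattern shown above:

```python
def done_file_name(pattern: str, file_name: str) -> str:
    """Sketch of resolving the done-file name: the ${file:name} placeholder
    is replaced by the name of the file to be processed."""
    return pattern.replace("${file:name}", file_name)

# With the default pattern, orders.csv is ready once orders.csv.done exists.
print(done_file_name("${file:name}.done", "orders.csv"))  # orders.csv.done
```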
Sorting Select the type of sorting to use to poll files from the SFTP
server:
Lock Timeout (in min) Specify how long to wait before trying to process the file
again in the event of a cluster outage. If it takes a very long
time to process the scenario, you may need to increase
the timeout to avoid parallel processing of the same file.
This value should be higher than the processing time
required for the number of messages specified in Max.
Messages per Poll.
Default: 15
Change Directories Stepwise Select this option to change directory levels one at a time.
Include Subdirectories Selecting this option allows you to look for files in all the
subdirectories of the directory.
You can select one of the following options from the
dropdown list:
Note
If you specify an absolute file path, it may occur
that the file cannot be stored correctly at
runtime.
Idempotent Repository (only available if you have selected Keep File and Mark as
Processed in Idempotent Repository for Post-Processing)
You can select one of the following idempotent repository
options:
○ In Memory: Keeps the file names in the memory. Files
are read again from the SFTP server when the runtime
node is restarted. It is not recommended to use the In
Memory option if multiple runtime nodes are used. In
this case, the other nodes would pick the file and
process it, because the memory is specific to the
runtime node.
○ Database (default): Stores the file names in a database
to synchronize between multiple worker nodes and to
prevent the files from being read again when the
runtime node is restarted. File name entries are
deleted by default after 90 days.
Note
The idempotent repository uses the username,
host name, and file name as key values to identify
files uniquely across the integration flows of a tenant.
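The key scheme described in this note can be sketched with a minimal in-memory repository. This is illustrative only; the Database option actually persists entries and expires them after 90 days:

```python
class IdempotentRepository:
    """Sketch of the idempotent repository described above. Files are
    identified by the (username, host name, file name) key, so the same
    file is not processed twice."""

    def __init__(self):
        self._seen = set()

    def try_claim(self, username: str, host: str, file_name: str) -> bool:
        """Return True and record the file if it has not been processed yet."""
        key = (username, host, file_name)
        if key in self._seen:
            return False
        self._seen.add(key)
        return True

repo = IdempotentRepository()
print(repo.try_claim("user1", "sftp.example.com", "orders.csv"))  # True
print(repo.try_claim("user1", "sftp.example.com", "orders.csv"))  # False (already processed)
print(repo.try_claim("user2", "sftp.example.com", "orders.csv"))  # True (different key)
```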
Retry Threshold for Alerting If the number of attempts to retry polling of a message
from the SFTP server exceeds this threshold value, an
alert is raised. The default value '0' indicates that the alert
is not raised.
Note
If two or more sender channels are configured with
the SFTP connector, the value for the Alert Threshold
for Retry parameter should be the same.
Field Description
Buffer Size Write file content using the specified buffer size (in Byte).
Flatten File Names Flatten the file path by removing the directory levels so
that only the file names are considered and they are
written under a single directory.
Max. Messages per Poll (for sender channel only)
Maximum number of messages to gather in each poll.
Consider how long it will take to process this number of
messages, and make sure that you set a higher value for
Lock Timeout (in min).
Note
If you are using the sender SFTP adapter in combination
with an Aggregator step and you expect a high
message load, consider the following recommendation:
Prevent Directory Traversal If the file contains any backward path traversals such as
\..\ or /../.., this carries a potential risk of directory
traversal. In such a case, message processing is stopped
with an error. The unique message ID is logged in the
message processing log.
Note
We recommend that you specify the Directory and File
Name fields to avoid any security risks. If you provide
these fields, the header is not considered.
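The traversal check described above can be sketched as follows. This is a simplification; the actual adapter may apply additional normalization:

```python
def has_directory_traversal(path: str) -> bool:
    """Sketch of the check described above: reject file names containing
    backward path traversals such as '\\..\\' or '/../'."""
    normalized = path.replace("\\", "/")
    # A '..' path segment indicates a traversal attempt
    return ".." in normalized.split("/")

print(has_directory_traversal("out/../../etc/passwd"))  # True
print(has_directory_traversal("out/data.csv"))          # False
```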
Note
In the following cases certain features might not be available for your current integration flow:
○ A feature for a particular adapter or step was released after you created the corresponding shape
in your integration flow.
○ You are using a product profile other than the one expected.
More information: Adapter and Integration Flow Step Versions [page 405]
Time Zone Select the time zone that you want the
scheduler to use as a reference for the
date and time settings.
The Run Once option has been removed in the newest version of the adapter. Default values for the interval
under Schedule on Day and Schedule to Recur have been changed so that the scheduler runs every 10
seconds between 00:00 and 24:00.
SFTP polling is supported in the following way: The same file can be polled by multiple endpoints
configured to use the SFTP channel. This means that you can now deploy an integration flow with a
configured SFTP channel on multiple runtime nodes (which might be necessary to meet failover
requirements) without the risk of creating duplicates by polling the same file multiple times. Note that to
enable the new option, integration flows (configured to use SFTP channels) that were developed prior to
the introduction of this feature have to be regenerated.
You can use the SFTP receiver adapter to transfer files from the tenant to an SFTP server using the SSH protocol.
Context
Procedure
1. Select the integration flow you want to configure and choose Edit.
2. Choose the communication channel you want to configure.
To configure a receiver channel, click on a connection between Integration Process component and a
receiver.
3. On the General tab page, provide a channel name and description in the relevant fields, if required.
4. Choose the Adapter Specific tab page and provide values in the fields based on the descriptions in the table.
On the Target tab of the receiver channel, specify the following attributes.
Directory Use the relative path to write the file to a directory, for
example <dir>/<subdir>.
Note
If you do not enter a file name and the parameter
remains blank, the content of the CamelFileName
header is used as file name. If this header is not
specified, the Exchange ID is used as file name.
Example: myfile20151201170800.xml
Note
Be aware of the following behavior if you have configured
the file name dynamically: If you have selected the
Append Timestamp option, the timestamp overrides the
file name defined dynamically via the header
(CamelFileName).
Caution
Note that in case files are processed very quickly, the
Append Timestamp option might not guarantee
unique file names.
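Based on the example file name shown earlier (myfile20151201170800.xml), the Append Timestamp behavior can be sketched as follows. The exact placement of the timestamp before the extension is inferred from that example:

```python
from datetime import datetime

def append_timestamp(file_name: str, now: datetime) -> str:
    """Sketch of the Append Timestamp option: a yyyyMMddHHmmss timestamp
    is inserted before the file extension. Second-level precision cannot
    guarantee unique names for files processed very quickly."""
    stem, dot, ext = file_name.rpartition(".")
    stamp = now.strftime("%Y%m%d%H%M%S")
    return f"{stem}{stamp}{dot}{ext}" if dot else f"{file_name}{stamp}"

print(append_timestamp("myfile.xml", datetime(2015, 12, 1, 17, 8, 0)))
# myfile20151201170800.xml
```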
Proxy Type The type of proxy that you are using to connect to the
target system.
Location ID (only if On-Premise is selected for Proxy Type)
To connect to an SAP Cloud Connector instance associated
with your account, enter the location ID that you defined for
this instance in the destination configuration on the cloud
side.
○ User Name/Password
SFTP server authenticates the calling component
based on the user name and password. To make this
configuration setting work, you need to define the
user name and password in a User Credential artifact
and deploy the artifact on the tenant.
○ Public Key
SFTP server authenticates the calling component
based on a public key.
Credential Name Name of the User Credential artifact that contains the user
name and password.
(only if you have selected User Name/Password for
Authentication)
(only if you have selected Public Key for Authentication) Make sure that the user name contains no other charac
ters than A-z, 0-9, _ (underscore), - (hyphen), /
(slash), ? (question mark), @ (at), ! (exclamation mark), $
(dollar sign ), ' (apostrophe), (, ) (brackets), * (asterisk),
+ (plus sign), , (comma), ; (semicolon), = (equality
sign), . (dot), or ~ (tilde). Otherwise, an attempt for anon
ymous login is made which results in an error.
Timeout (in ms) Maximum time to wait for the SFTP server to be contacted
while establishing a connection or performing a read
operation.
The timeout should be more than 0, but less than five
minutes.
Maximum Reconnect Attempts Maximum number of attempts allowed to reconnect to the
SFTP server.
Default value: 3
Reconnect Delay (in ms) How long the system waits before attempting to reconnect
to the SFTP server.
Automatically Disconnect Disconnect from the SFTP server after each message
processing.
The following figure illustrates how the properties configured for Authentication are used.
If the option User Name/Password is chosen as Authentication, the user name and password are
determined by a User Credentials artifact (which is specified in the SFTP adapter). On the SFTP server, the
user is authenticated by the password.
If the option Public Key is chosen as Authentication, the user is specified in the SFTP adapter. On the
SFTP server, the user is authenticated by the public key associated with the user.
Field Description
Handling for Existing Files If the file already exists in the target, allow the following:
Append: Add the new file content to the end of the existing
one.
Temporary File Name (only visible when Handling for Existing Files is set to
Override)
Allows you to specify a name for a temporary file.
If you override an existing file (on the SFTP server) with a
new one, the following situation can occur: The subsequent
file processor (implemented on the receiver side) already
starts processing the file, even though it is not yet
completely written (by the SFTP adapter) to the SFTP
server. Together with the Override option, you can specify
the name of a temporary file. The SFTP adapter then
finishes writing the file with the temporary file name to the
SFTP server first. After that, the temporary file is renamed
according to the target file name specified in the SFTP
adapter (according to the setting for File Name under
Target). This makes sure that the subsequent processor
only processes a completely written file.
Caution
Make sure that the name of the temporary file is
unique on the server, otherwise problems can occur
when different clients try to access the SFTP server
using the same temporary file name.
Use Temporary File For synchronization reasons, the SFTP receiver writes the
data to a temporary file initially. Once the write procedure
is finished, the temp file is renamed to the target file. The
temp file is deleted automatically, irrespective of whether
the write procedure is successful or contains errors.
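The temporary-file behavior described above follows the common write-then-rename pattern, which can be sketched locally as:

```python
import os
import tempfile

def write_atomically(target_path: str, content: bytes) -> None:
    """Sketch of the temporary-file pattern: write the data to a temporary
    file first, then rename it to the target name, so that a subsequent
    processor never sees a partially written file."""
    directory = os.path.dirname(target_path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "wb") as tmp:
            tmp.write(content)
        # Atomic rename on the same filesystem
        os.replace(tmp_path, target_path)
    except OSError:
        os.remove(tmp_path)  # clean up the temp file on failure
        raise

with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "orders.csv")
    write_atomically(target, b"id,qty\n1,2\n")
    with open(target, "rb") as f:
        print(f.read())  # b'id,qty\n1,2\n'
```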
Buffer Size Write file content using the specified buffer size (in Byte).
Flatten File Names Flatten the file path by removing the directory levels so
that only the file names are considered and they are
written under a single directory.
Max. Messages per Poll (for sender channel only)
Maximum number of messages to gather in each poll.
Consider how long it will take to process this number of
messages, and make sure that you set a higher value for
Lock Timeout (in min).
Note
If you are using the sender SFTP adapter in combination
with an Aggregator step and you expect a high
message load, consider the following recommendation:
Prevent Directory Traversal If the file contains any backward path traversals such as
\..\ or /../.., this carries a potential risk of directory
traversal. In such a case, message processing is stopped
with an error. The unique message ID is logged in the
message processing log.
Note
We recommend that you specify the Directory and File
Name fields to avoid any security risks. If you provide
these fields, the header is not considered.
Prerequisites
Context
The OData adapter allows you to communicate using the OData protocol in either ATOM or JSON format. In the
sender channel, the OData adapter listens for incoming requests in either ATOM or JSON format. In the receiver
channel, the OData adapter sends the OData request in the format you choose (ATOM or JSON) to the OData
service provider.
The OData adapter supports only synchronous communication. In other words, every request must have a
response.
Tip
If your input payload contains nodes without data, the output also contains empty strings. If you want to
avoid empty strings in the output, ensure that the input payload does not contain any empty nodes.
Procedure
You use this procedure to configure the OData adapter assigned to a communication channel.
1. Double-click the communication channel in the Model Configuration tab page.
2. Choose Browse in the Adapter Type screen area.
3. Choose OData in the Choose Adapter window and choose OK.
4. Choose the Adapter Specific tab page and enter details in fields based on the description given in the
following table.
Authorization Sender only Select User Role if you want to authorize a user to send OData
requests based on the ESBMessaging.send role.
EDMX Sender only Select the EDMX file that contains the OData service definition.
Entity Set Sender only Enter the name of the entity set in the OData model.
Address Receiver only Enter the URL of the OData service provider you want to
connect to.
Proxy Type Receiver only The type of proxy that you are using to connect to the target
system:
○ Select Internet if you are connecting to a cloud system.
○ Select On-Premise if you are connecting to an on-premise
system.
Note
If you select the On-Premise option, the following
restrictions apply to other parameter values:
○ Do not use an HTTPS address for Address, as it
leads to errors when performing consistency
checks or during deployment.
○ Do not use the option Client Certificate for the
Authentication parameter, as it leads to errors
when performing consistency checks or during
deployment.
Note
If you select the On-Premise option and use the SAP
Cloud Connector to connect to your on-premise
system, the Address field of the adapter references a
virtual address, which has to be configured in the SAP
Cloud Connector settings.
Location ID Receiver only, in case you choose Proxy Type as On-Premise
Enter the Location ID that you have provided in the account
configuration in the cloud connector.
Authentication Receiver only Select Basic from the dropdown list if you want to use basic
authentication to connect to the OData service provider.
Restriction
You cannot use client certificate for connecting to the
OData service provider while modeling operations using
operations modeler.
Operation Details Receiver only Contains details of the operation, including Query (GET),
Update (PUT), Insert (POST), Read (GET), Create (POST),
and Merge (MERGE).
Query Options Receiver Only Enter any query options that you would like to send to the
OData service.
Custom Query Options Receiver only Enter additional query options other than the ones configured
using operations modeler. For example, sap-client=100 is a
custom query option that you can specify.
Content Type Receiver only Select format of the request payload. You can select Atom or
JSON.
Content Type Encoding Receiver only Select encoding standard used to encode the request payload
content.
Page Size Receiver only Enter the total number of records in one page of the response
from the OData service provider.
5. If you want the system to fetch records in pages of the size specified in the Page Size field, select the
Process in Batches checkbox.
Tip
In the Process Call step in which you are calling the Local Integration Process, ensure that you enable
looping and select the Expression Type as Non-XML, the Condition Expression as
${property.<ReceiverName>.<ChannelName>} contains 'true', and the Maximum Number of
Iterations as 999.
○ Do not declare the property hasMoreRecords in any of the integration flow steps (for example, a
Content Modifier or Script step). It is available by default. You can directly use this property while
entering the Condition Expression in the Process Call step.
○ Ensure that the receiver system name in the Condition Expression is the SuccessFactors system
that you are connecting to using the receiver channel in the Local Integration Process. Do not enter
the receiver system name from the main integration flow.
○ If you have specified a channel name for the receiver channel in the Local Integration Process,
provide that name in the Condition Expression.
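The paged processing described above can be sketched from the client's perspective. fetch_page is a hypothetical stand-in for the OData call with $skip/$top; a hasMoreRecords-style flag stays 'true' while full pages keep coming back:

```python
def fetch_all(fetch_page, page_size: int):
    """Sketch of the looping behavior: fetch page by page until a page
    comes back smaller than the page size."""
    records, skip = [], 0
    has_more_records = True
    while has_more_records:
        page = fetch_page(skip, page_size)
        records.extend(page)
        skip += page_size
        # 'true' while full pages are returned, then the loop ends
        has_more_records = len(page) == page_size
    return records

data = list(range(25))
result = fetch_all(lambda skip, top: data[skip:skip + top], 10)
print(len(result))  # 25
```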
Prerequisites
Context
You use the Model Operation feature in the OData adapter to model an operation [Query (GET), Update (PUT),
Create (POST), Read (GET) and Merge (MERGE)]. You also select the ResourcePath, the URI using which you
transact with the OData service provider.
Note
If you are connecting to a system that supports HTTPS communication, you must ensure the following:
● You have imported the security certificate of the system you are connecting to into your JDK keystore.
Note
For information on referring to the JDK in the Eclipse configuration file, refer to the Eclipse documentation.
Procedure
You use this procedure to model an operation with the OData adapter.
1. Double-click the OData receiver channel of the integration flow in the Model Configuration tab page.
2. Choose the Adapter Specific tab page.
3. Choose Model Operation.
4. If you want to use a local EDMX file to connect to the OData service provider, perform the following
substeps:
Remember
If you have used client certificate for connecting to the OData service provider, you need to download
the EDMX file from the OData service provider and import it to the src.main.resources.edmx folder. It
enables you to use local EDMX file to connect to the system.
Field Description
Note
You cannot use this option if you have selected client certificate as the authentication method while
configuring the channel.
6. Select the entity in Select Entity for an Operation window and choose Next.
7. Choose the Operation from the dropdown list based on the description given in the table.
Operation Description
Restriction
This operation is not supported for associated entities
Merge (MERGE) Used to merge data with existing data in OData service
Restriction
This operation is not supported for associated entities
Read (GET) Used to fetch a unique entity from the OData Service.
Passes the key fields along with the Entity in the URI.
8. Select the required fields for the operation from the Fields screen area.
Note
In the case of the Update (PUT) or Insert (POST) operation, this is the last step. Choose Finish.
9. If you have chosen the operation as Query (GET), enter values in Top and Skip fields based on the
description given in table.
Field Description
Top If you enter a value 'x', the system fetches the top x
records from the OData service provider.
Skip If you enter a value 'x', the system skips x records from the
top and fetches the remaining records from the OData
service provider.
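The Top and Skip semantics can be sketched over an in-memory record list, mirroring OData's $top and $skip query options:

```python
def apply_top_and_skip(records, top=None, skip=0):
    """Sketch of the Query (GET) Top and Skip options: skip the first
    'skip' records, then return at most 'top' of the remaining ones."""
    remaining = records[skip:]
    return remaining[:top] if top is not None else remaining

records = ["r1", "r2", "r3", "r4", "r5"]
print(apply_top_and_skip(records, top=2))          # ['r1', 'r2']
print(apply_top_and_skip(records, top=2, skip=3))  # ['r4', 'r5']
```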
Field Description
Filter Field Field that is used in the ‘WHERE’ clause for filtering.
Note
The field set contains the set of filterable fields returned
from the OData service provider that you can use in
the Filter Condition.
Note
Multiple conditions can be added, if required.
Remove Any condition that is already added to the list can be
selected and removed from the final query.
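Assembling the filter conditions into a $filter expression can be sketched as follows. The 'and' combination and single-quoted string literals follow OData convention; the function itself is a hypothetical helper, not part of the tool:

```python
def build_filter(conditions, combine="and"):
    """Sketch of assembling the $filter query option from a list of
    (field, operator, value) conditions added in the filter dialog."""
    def literal(value):
        # String values are quoted with single quotes per OData convention
        return f"'{value}'" if isinstance(value, str) else str(value)
    clauses = [f"{field} {op} {literal(value)}" for field, op, value in conditions]
    return f" {combine} ".join(clauses)

print(build_filter([("City", "eq", "Bangalore"), ("Age", "gt", 30)]))
# City eq 'Bangalore' and Age gt 30
```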
Results
The XSD file defines the format in which data is processed by the Cloud Integration ESB. You use this XSD file in the
mapping step for data transformation.
The EDMX file contains the OData entity specification from the provider. You can use it when you model the
operation again by choosing the 'Local EDMX' file option.
Prerequisites
Context
When you choose batch processing for PUT and POST operations in the OData adapter, the payload format
that is sent to the OData service must follow the recommended structure. You can use the input XSD that is
generated when you model the operation, together with a mapping step, to transform the payload into the
recommended XSD structure. Alternatively, you can use XSLT or a content modifier to do this.
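As a rough illustration of the batch structure that the mapping procedure below targets (batchChangeSet, entity set, entity type, and the method element), the following sketch assembles such a payload. The element names are taken from the mapping description in this section, but the nesting and absence of namespaces are simplifying assumptions; the generated input XSD remains the authoritative definition.

```python
import xml.etree.ElementTree as ET

def build_batch(entity_set, entity_type, method, records):
    """Assemble a simplified batch payload: one batchChangeSet holding one
    entity-type element per record, each carrying its operation (PUT/POST)
    in a method element plus the record's fields."""
    root = ET.Element(entity_set)
    change_set = ET.SubElement(root, "batchChangeSet")
    for record in records:
        entity = ET.SubElement(change_set, entity_type)
        ET.SubElement(entity, "method").text = method  # PUT or POST
        for field, value in record.items():
            ET.SubElement(entity, field).text = str(value)
    return ET.tostring(root, encoding="unicode")
```

In a real integration flow this transformation is performed by the mapping step (or XSLT/content modifier), not by custom code; the sketch only shows the shape of the result.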
Procedure
This procedure enables you to transform the input payload XML into the recommended batch processing
structure.
2. In the New wizard, choose SAP Cloud Platform Integration Message Mapping and choose Next.
3. In the General Details section of the New Message Mapping window, enter Name and Description.
4. In the Location Details section of the New Message Mapping window, choose Browse and select the project
that you are working in. Choose Ok.
5. In the New Message Mapping window, choose Finish.
6. In the Source Element section, choose Add.
7. In the Select a XSD or WSDL file window, select the input payload format XML and choose OK.
8. In the Target Element section, choose Add.
9. In the Select a XSD or WSDL file window, select the XSD file that was generated when you modeled the
operation and choose OK.
10. Choose the Definition tab page and perform the following substeps to map the elements in the payload
XML to the target XSD.
a. Map the Entity Type element in the left pane to the batchChangeSet on the right pane.
b. Map the fields in the left pane to the appropriate fields on the right pane.
c. Map the batchChangeSet, Entity Set and Entity Type elements on the right pane except the headers to
a constant. The value of the constant can be a dummy example value.
d. Choose the method element and double-click Constant element in the properties view.
e. In the Constant Parameters window, enter the operation (PUT/POST) in the Value field and choose OK.
This is the same operation that you have chosen while modeling the OData operation.
Refer to SAP Note 2031746 for more details on the structure of the request and response XSD.
When you are performing the Insert (POST) operation, in addition to inserting an entity, you can add a
reference to an associated entity. To do this, map the appropriate field from the payload to the key field of the
associated entity in the <link> tag. You can use a mapping step to do this. Consider the following example.
The primary key is <ID>. You need to map the key element of your reference to <ID> to successfully execute the
reference insert operation.
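The reference insert described above can be pictured as adding an OData <link> element whose href carries the key of the associated entity. This sketch follows the OData V2 Atom link convention; the navigation property, entity set, and key value shown are hypothetical examples.

```python
import xml.etree.ElementTree as ET

# Standard OData V2 Atom relation prefix for navigation properties.
DATASERVICES_REL = "http://schemas.microsoft.com/ado/2007/08/dataservices/related/"

def add_reference_link(entry, nav_property, entity_set, key_value):
    """Append a <link> element that points an inserted entity at an
    existing associated entity, addressed by its primary key value."""
    link = ET.SubElement(entry, "link")
    link.set("rel", DATASERVICES_REL + nav_property)
    link.set("href", "%s('%s')" % (entity_set, key_value))
    return link
```

In the integration flow itself, the mapping step performs the equivalent work: the payload field is mapped onto the key inside the <link> tag's href.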
Prerequisites
You use this procedure to configure a sender and receiver channel of an integration flow with the Ariba Network
adapter. These channels enable the SAP and non-SAP cloud applications to send and receive business-specific
documents in cXML format to and from the Ariba Network. Examples of business documents are purchase
orders and invoices.
Restriction
An integration flow that you deploy in SAP Cloud Platform Integration is deployed on multiple IFLMAP worker
nodes, but polling is triggered from only one of the worker nodes. The message monitoring currently also displays
the process status from the worker nodes where the scheduler was not started. As a result, the message
monitor displays messages with a duration of less than a few milliseconds for nodes where the schedule was not
triggered. These entries contain firenow=true in the log. You can ignore these entries.
Procedure
1. Double-click the channel that you want to configure on the Model Configuration tab page.
2. On the General tab page, choose Browse in the Adapter Type screen area.
3. Select Ariba in the Choose Adapter window and choose OK.
4. Choose the Adapter-Specific tab page and enter the details as shown in the table below:
Connection Details
Connectivity URL The URL to which the cXML requests are posted or from which they are polled, or the profile URL used to connect to the Ariba network.
Request Type (this field is available only in the sender channel) Select one of the options based on the request types of the buyer/supplier that you want to poll.
Scheduler (only valid for the sender channel)
Run Once Runs a data polling process immediately after deploying the project.
Note
If the specified date is not applicable in a particular month, the data polling is not executed in that month. For example, if the 30th day is selected, polling is not executed in February, as the 30th is not a valid day for February.
Every xx minutes between HH hours and HH hours The connector fetches data from the Ariba system every 'xx' minutes between HH hours and HH hours.
Note
If you want the polling to run for the entire day, enter 1 and 59.
Note
You can use headers and properties to set values for Ariba Network URL, Credential Name, Private Key
Alias and Ariba Network ID. You can enter values in the following format:
○ ${header.url}
○ ${property.credentialName}
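At runtime, these ${header.x} and ${property.x} placeholders are resolved against the message's headers and exchange properties. The following is an illustrative re-implementation of that substitution, not CPI code; the header and property names are the examples from the list above.

```python
import re

def resolve(expr, headers, properties):
    """Replace ${header.name} and ${property.name} placeholders with the
    corresponding runtime values, mirroring the adapter behaviour
    described above (illustrative model only)."""
    def repl(match):
        scope, name = match.group(1), match.group(2)
        source = headers if scope == "header" else properties
        return str(source[name])
    return re.sub(r"\$\{(header|property)\.([A-Za-z0-9_]+)\}", repl, expr)
```

This is why the same channel configuration can serve different Ariba Network URLs or credentials: the concrete values are injected per message.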
In the Model Configuration editor, right-click and choose Deploy Integration Content to apply the configuration
settings and run the integration flow.
Prerequisites
You can only select the Twitter adapter type when you have connected your client (which runs Eclipse) to a
suitable version of a cluster. After connecting to the newest version of the cluster, choose Update client with
latest components from server (see the following figure).
Context
You can use the Twitter receiver adapter to extract information from the Twitter platform (the receiver
platform) based on criteria such as keywords or user data. For example, you can use this feature to send,
search for, and receive Twitter feeds.
The connection works as follows: the tenant logs on to Twitter based on an OAuth authentication mechanism
and searches for information based on the criteria configured in the adapter at design time. OAuth allows the
tenant to access someone else's resources (those of a specific Twitter user) on behalf of that user. As illustrated in
the figure, the tenant (through the Twitter receiver adapter) calls the Twitter API to access the resources of a
specific Twitter user. Currently, the Twitter adapter can only be used as a receiver adapter. For more information
on the Twitter API, go to https://dev.twitter.com/ .
Procedure
1. Double-click the channel that you want to configure on the Model Configuration tab page.
Endpoint To access Twitter content, you can choose among the following general options.
○ Send Tweet
Allows you to send content to a specific user timeline.
○ Search
Allows you to do a search on Twitter content by specifying keywords.
○ Send Direct Message
Allows you to send messages to Twitter (write access, direct message).
User (only if you have selected Send Direct Message as Endpoint) Specifies the Twitter user from whose account the information is to be extracted.
Page Size Specifies the maximum number of results (tweets) per page.
Number Of Pages (only if you have selected Search as Endpoint) Specifies the number of pages that you want the tenant to consume.
Keywords (only if you have selected Search as Endpoint) Use commas to separate different keywords, or enter a valid Twitter Search API query (for more information, go to https://dev.twitter.com/rest/public/search ).
Consumer Key An alias by which the consumer (tenant) that requests Twitter resources is identified.
Consumer Secret An alias by which the shared secret is identified (used to define the token of the consumer (tenant)).
Access Token An alias by which the access token for the Twitter user is identified. To make authorized calls to the Twitter API, your application must first obtain an OAuth access token on behalf of a Twitter user.
Access Token Secret An alias by which the shared secret is identified that is used to define the token of the Twitter user.
The authorization is based on shared-secret technology. This method relies on the fact that all parties of a
communication share a piece of data that is known only to the parties involved. Using OAuth in the context
of this adapter, the consumer (which calls the API of the receiver platform on behalf of a specific user of this
platform) identifies itself using its Consumer Key and Consumer Secret, while the context to the user
itself is defined by an Access Token and an Access Token Secret. These artifacts are generated for
the receiver platform app (consumer) and should be configured so that they never expire. This
adapter only supports consumer key/secret and access token key/secret artifacts that do not expire.
To finish the configuration of a scenario using this adapter, deploy the generated consumer key/secret and access
token key/secret artifacts as a Secure Parameter artifact on the related tenant. To do this,
use the Integration Operations feature, position the cursor on the tenant, and choose Deploy Artifact .... As the
artifact type, choose Secure Parameter.
Context
You can use the Facebook receiver adapter to extract information from Facebook (the receiver
platform) based on criteria such as keywords or user data. For example, you can use this
feature in social marketing activities to do social media data analysis based on Facebook content.
The connection works as follows: the tenant logs on to Facebook based on an OAuth authentication
mechanism and searches for information based on the criteria configured in the adapter at design time. OAuth
allows the tenant to access someone else's resources (those of a specific Facebook user) on behalf of that user. As
illustrated in the figure, the tenant (through the Facebook receiver adapter) calls the Facebook API to access
the resources of a specific Facebook user. For more information on the Facebook API, go to: https://
developers.facebook.com/ .
1. Double-click the channel that you want to configure on the Model Configuration tab page.
2. On the General tab page, choose Browse in the Adapter Type screen area.
3. Select Facebook in the Choose Adapter window and choose OK.
4. Choose the Adapter-Specific tab page and enter the details as shown in the table below:
Endpoint To access Facebook content, you can choose among the following general options.
○ Get Posts
Allows you to fetch specific Facebook posts.
○ Get Post Comments
Allows you to fetch specific Facebook post comments.
○ Get Users
Allows you to fetch details of a specific user.
○ Get Feeds
Allows you to fetch feeds of a specific user or a page.
User/Page ID Specifies the Facebook user from whose account the information is to be extracted.
Timeout (ms) Specifies a timeout (in milliseconds) after which the connection to the Facebook platform is terminated.
Application ID An alias by which the consumer (tenant) that requests Facebook resources is identified.
Application Secret An alias by which the shared secret is identified (used to define the token of the consumer (tenant)).
Access Token An alias by which the access token for the Facebook user is identified. To make authorized calls to the Facebook API, your application must first obtain an OAuth access token on behalf of a Facebook user.
The authorization is based on shared-secret technology. This method relies on the fact that all parties of a
communication share a piece of data that is known only to the parties involved. Using OAuth in the context
of this adapter, the consumer (which calls the API of the receiver platform on behalf of a specific user of this
platform) identifies itself using its Consumer Key and Consumer Secret, while the context to the user
itself is defined by an Access Token and an Access Token Secret. These artifacts are generated for
the receiver platform app (consumer) and should be configured so that they never expire. This
adapter only supports consumer key/secret and access token key/secret artifacts that do not expire.
To finish the configuration of a scenario using this adapter, deploy the generated consumer key/secret and access
token key/secret artifacts as a Secure Parameter artifact on the related tenant. To do this,
use the Integration Operations feature, position the cursor on the tenant, and choose Deploy Artifact .... As the
artifact type, choose Secure Parameter.
Prerequisites
To successfully run the Operations Modeler, your Java Virtual Machine (JVM) must contain the security
certificate recommended by the SuccessFactors system. Example: VeriSign Class 3 Public Primary
Certification Authority - G5 security certificate.
Note
First, verify whether the JVM contains the security certificate that is used by the SuccessFactors system. If
not, download the certificate from the appropriate security certificate vendor and install it. Refer
to the JVM documentation for verifying and installing security certificates on your JVM. Ensure that
the IP addresses of the SAP Cloud Platform Integration runtime worker node and the systems you are using
to connect to the SuccessFactors system are in the list of allowed IP addresses.
Context
The SuccessFactors adapter provides the following message protocols for communicating with the
SuccessFactors system:
1. SOAP - Configuring SuccessFactors Adapter with SOAP Message Protocol
2. OData V2 - Configuring SuccessFactors Adapter with OData V2 Message Protocol
3. OData V4 - Configuring SuccessFactors Adapter with OData V4 Message Protocol [page 169]
4. REST - Configuring SuccessFactors Adapter with REST Message Protocol [page 170]
You can choose the protocol you want based on the scenario you want to execute.
You need to provide the following details in order to communicate with the SuccessFactors system.
● Connection details – Details required to establish a connection with the SuccessFactors system
● Processing details – Information required to process your modeled operation
● Scheduler – Settings that enable you to schedule a data polling cycle at regular intervals
Note
The password for connecting to the SuccessFactors system should be deployed on the tenant via the
'Credentials' deployment wizard available in the Node Explorer.
Tip
You can use the Insert (POST) operation to insert more than one record in a single operation. These
records must have an EDMX association between them.
Context
You use this procedure to configure the SuccessFactors adapter with the SOAP message protocol.
Note
You can now pass filter conditions via header or property while performing an asynchronous or ad-hoc
operation.
An integration flow that you deploy in SAP Cloud Platform Integration is deployed on multiple IFLMAP worker
nodes, but polling is triggered from only one of the worker nodes. The message monitoring currently also displays
the process status from the worker nodes where the scheduler was not started. As a result, the message
monitor displays messages with a duration of less than a few milliseconds for nodes where the schedule was not
triggered. These entries contain firenow=true in the log. You can ignore these entries.
Procedure
1. On the Model Configuration tab, double-click the channel that you want to configure.
2. Go to the General tab and choose Browse in the Adapter Type screen area.
3. In the Choose Adapter window, select SuccessFactors and choose OK.
4. Choose SOAP from the dropdown list in the Message Protocol field.
5. Go to the Adapter Specific tab.
6. Provide values in the fields based on the descriptions in the following table.
Field Description
Address Suffix The system provides a value for this field based on the protocol you choose. For SOAP, the value is /sfapi/v1/soap.
Credential Name Credential name that you have used while deploying credentials on the tenant.
Proxy Type Type of proxy you want to use to connect to the SuccessFactors system.
Proxy Host Name of the proxy host you are using.
Call Type Type of call that the SAP Cloud Platform Integration system makes to the SuccessFactors system.
Operation Details Details of the operation that you are performing on the SuccessFactors system. Choose Model Operation to launch the operations modeler wizard.
Operation: Query/Insert/Update/Upsert
Note
Query is the only operation available in the sender channel.
Code Syntax
key1=value1&key2=value2
Note
You can specify the custom parameters in four ways:
<key>=<value>;<key>=<value>
<header/property variable>=value;<header/property variable>=value
Example
externalKeyMapping=costCenter;processinactiveEmployees=true
Example
${property.ECERP_PARAMETERS}=costCenter;${header.ECERP_PARAMETERS}=true
Example
${property.ECERP_PARAMETERS};${header.ECERP}, which contains the key-value pair.
Example
${property.ECERP_PARAMETERS}=processinactiveEmployees=true;resultOptions=allJobChangesPerDay
Page Size In the case of a Query operation, this value indicates the maximum number of records fetched in one polling cycle from the SuccessFactors system.
Caution
If you find that the Query operation stops due to a timeout, reduce the Page Size and execute the operation again.
Caution
Since the SuccessFactors system supports a maximum page size of 800, you must ensure that the maximum number of records in the payload for data manipulation operations (Insert, Update, or Upsert) is less than or equal to the page size specified.
Tip
The system assigns a default value of 200 if you do not provide a value for this field.
Timeout (in min) Maximum time the system waits for a response before the connection ends or times out. The default timeout value is 5 minutes if you do not provide input.
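The custom-parameter format shown above (semicolon-separated key=value pairs) can be parsed as in the following sketch. This is an illustrative model of the format, not adapter code; note that a value may itself contain '=' (as in the last example), so only the first '=' in each pair separates key from value.

```python
def parse_custom_params(raw):
    """Split "key1=value1;key2=value2" into a dict, keeping any further
    '=' characters as part of the value."""
    params = {}
    for pair in raw.split(";"):
        key, _, value = pair.partition("=")
        params[key.strip()] = value.strip()
    return params
```

Using the first example from the list above, this yields two parameters, externalKeyMapping and processinactiveEmployees.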
7. If you want to process messages in batches while using the SuccessFactors SOAP adapter in the receiver
channel of a Local Integration Process, select Process in Batches.
Restriction
You cannot use the Process in Batches option with the Query operation if the Process Call step is used in a
Multicast branch.
Note
By selecting Process in Batches, you enable the adapter to process messages in batches. The size of a
message batch is defined by the value that you specify in Page Size.
Tip
In the Process Call step in which you are calling the Local Integration Process, ensure that you enable
looping and select the Expression Type as Non-XML, Condition Expression as $
{property.SAP_SuccessFactorsHasMoreRecords.<ReceiverName>} contains 'true', and
Maximum Number of Iterations as 999.
8. If you are configuring the sender channel, perform the following substeps to configure the scheduler:
a. Go to the Scheduler tab.
b. Enter the scheduler details based on the descriptions given in the table below.
Daily Run message polling every day to fetch data from the SuccessFactors system.
Note
If the specified date is not applicable to a month, data polling is not executed in that particular month. For example, if the 30th day of the month is selected as the polling date, polling is not executed in February, as February 30 is not a valid date.
Time The time at which the data polling cycle has to be initiated. For example, if you want data polling to start at 4:10 PM, enter 16:10. Note that the time must be entered in 24-hour format.
Every xx minutes between HH hours and HH hours The connector fetches data from the SuccessFactors system every 'xx' minutes between HH hours and HH hours.
Note
If you want the polling to run for the entire day, enter 1 and 59.
Time Zone Select the time zone that you want to use as the reference for scheduling the data polling cycle.
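The "Every xx minutes between HH hours and HH hours" setting can be modeled as the following sketch, which returns the minutes past midnight at which polling would fire. This is an illustrative model of the schedule, not scheduler code.

```python
def polling_minutes(interval, start_hour, end_hour):
    """Minutes past midnight at which polling fires when the scheduler is
    set to "every <interval> minutes between <start_hour> and <end_hour>
    hours"."""
    return list(range(start_hour * 60, end_hour * 60, interval))
```

For example, an interval of 30 minutes between 9 and 11 hours fires at 09:00, 09:30, 10:00, and 10:30.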
Prerequisites
Context
You need to provide operation details to access and modify records in the SuccessFactors SOAP Web service.
You use the operations modeler wizard to provide these details and also generate the XSD file.
Procedure
1. In the Connect to System window, provide values in the fields based on the descriptions in the table below
and choose Next.
Field Description
Proxy Communication Select this checkbox if you want to manually specify the
proxy details
2. In the Entity Selection window, select the entity that you want to perform the operation on from the Entity
List. Choose Next.
You can configure the filter conditions to execute delta sync scenarios. For more information, see
Configuring Delta Sync Scenarios [page 161]. Refer to the following table when specifying filter conditions.
Field Description
Note
The field set contains the set of filterable fields returned from the SuccessFactors API that you can use in the filter condition.
If the type is Property, the system reads the value from the property that you have defined in the integration flow element.
Note
Multiple conditions can be added if required.
Remove Any condition that is already added to the list can be selected and removed from the final SuccessFactors query.
6. Choose Finish.
The Finish button is only activated if you have selected fields of the entity in step 3. When you choose
Finish, the system creates an XML schema file with the selected entities. You can access the schema file in
the src.main.resources.wsdl folder of your project. If there is an existing XML schema file, you have the
option of overwriting the existing file or creating a new file after choosing Finish. This file can be
used in the integration flow, for example in a mapping step.
One of the root elements in the XML schema file is the Entity Name. In cases where the Entity Name is in
the format <EntityName>_$XX, only <EntityName> is used as the root element of the XML schema file.
$XX is dropped from the root element name of the XML schema so that you can use the same integration
flow in other SuccessFactors company IDs without changing the mapping.
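Dropping the _$XX suffix from the root element name, as described above, amounts to a simple pattern replacement. The sketch below assumes the suffix always starts with "_$", as the text above describes; the entity names used are hypothetical.

```python
import re

def root_element_name(entity_name):
    """Drop a trailing "_$XX" company-ID suffix so the same mapping works
    across SuccessFactors company IDs."""
    return re.sub(r"_\$.*$", "", entity_name)
```

A name without the suffix passes through unchanged, so the function is safe to apply uniformly.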
You can configure the SuccessFactors connector to fetch the modified or delta records instead of fetching all
the records. This optimizes the polling mechanism. This is known as a delta sync configuration.
If you want to add more filter conditions after you have configured the delta sync, use the appropriate
operators and add them. Once the query is executed, the relevant scenarios are executed.
Note
The following steps guide you through the configuration of the delta sync conditions only. For an end-to-
end procedure for creating and executing operations, see Modeling Operations.
Delta Sync
With this configuration, the system fetches all records from the beginning of time (1/1/1970, by default) in the
first run. Only modified records are fetched in the subsequent runs.
1. In the Configure Filter Condition for Fields window, select a field of type DATETIME for the Filter Field.
Example: lastModified
2. In the Operation field, select >.
3. In the Type field, select Delta Sync. maxDateFromLastRun is automatically populated in the Value field.
If the payload from the SuccessFactors system has execution_timestamp as one of the fields, that time
stamp is used as the reference date for the subsequent delta sync polling cycles. The date specified in the
Query is ignored.
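The delta sync behavior described above can be modeled as: fetch everything after a reference timestamp (the 1/1/1970 epoch on the first run), then advance the reference to the newest modification timestamp seen. This is an illustrative model, not adapter code; the lastModified field name follows the example above.

```python
from datetime import datetime, timezone

# Default "beginning of time" reference for the first polling cycle.
EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def delta_sync_cycle(fetch_modified_after, last_max=None):
    """Run one polling cycle: fetch records modified after the reference
    timestamp and return them together with the new reference (the
    maximum lastModified seen, used by the next cycle)."""
    since = last_max if last_max is not None else EPOCH
    records = fetch_modified_after(since)
    new_max = max((r["lastModified"] for r in records), default=since)
    return records, new_max
```

Running the cycle twice against unchanged data returns everything the first time and nothing the second time, which is exactly the behavior the configuration produces.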
With this configuration, you can modify the query in an existing delta sync configuration. The system will
consider the new query and fetch only modified records in the subsequent polling cycles.
1. In the Model Operation window, add or remove the new fields that you wish to fetch.
2. In the Configure Filter Conditions for Fields window, you can add new filter conditions.
3. You can also modify or remove existing filter conditions in the Configure Filter Conditions for Fields window.
4. Continue with the existing delta sync configuration.
You perform these steps only to reset an existing delta sync configuration. After reset, the configuration
enables you to fetch data from the beginning of time (1/1/1970) in the first polling cycle and fetch only
modified records in the subsequent polling cycles.
1. In the channel configuration, enter a new channel name in the Channel Details section. The new name
resets the existing delta sync configuration.
Caution
Choose a unique channel name. Do not use names that were used in earlier delta sync configurations.
Fetch Records After a Specified Date in the First Run and Fetch Modified
Records in Subsequent Runs
With this configuration, you can specify a date that will be used as a reference to fetch records. The system
fetches the records that have been modified or added after the specified date in the first polling cycle. The
modified records are fetched in the subsequent polling cycles.
1. In the Model Configuration window, select the fields that you want to fetch.
2. In the Configure Filter Condition for Fields window, select a DATETIME type field in the Filter Field.
Example: lastModified
3. In the Operation field, choose >=.
4. In the Value field, enter the date after which you want the records to be fetched from the system.
Context
You use this procedure to configure the SuccessFactors adapter with the OData V2 message protocol.
Remember
The OData V2 message protocol is only available if you are using the SuccessFactors adapter in the receiver
channel.
Restriction
An integration flow that you deploy in SAP Cloud Platform Integration is deployed on multiple IFLMAP worker
nodes, but polling is triggered from only one of the worker nodes. The message monitoring currently also displays
the process status from the worker nodes where the scheduler was not started. As a result, the message
monitor displays messages with a duration of less than a few milliseconds for nodes where the schedule was not
triggered. These entries contain firenow=true in the log. You can ignore these entries.
Tip
If your input payload contains nodes without data, the output also contains empty strings. If you want to
avoid empty strings in the output, ensure that the input payload does not contain any empty nodes.
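Per the tip above, empty input nodes produce empty strings in the output. One way to pre-clean a payload is to strip childless, textless elements before the mapping. This is a hedged sketch using a hypothetical sample payload; in an actual flow such cleanup would run in a script or XSLT step.

```python
import xml.etree.ElementTree as ET

def drop_empty_nodes(element):
    """Recursively remove child elements that have neither text nor
    children, so the mapped output contains no empty strings."""
    for child in list(element):
        drop_empty_nodes(child)
        if len(child) == 0 and not (child.text and child.text.strip()):
            element.remove(child)
    return element
```

After cleaning, only nodes that actually carry data remain in the payload.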
Procedure
1. On the Model Configuration tab, double-click the channel that you want to configure.
2. Go to the General tab and choose Browse in the Adapter Type screen area.
3. In the Choose Adapter window, select SuccessFactors and choose OK.
4. Choose OData V2 from the dropdown list in the Message Protocol field.
5. Go to the Adapter Specific tab.
6. Provide values in the fields based on the descriptions in the following table.
Address URL of the SuccessFactors data center that you would like to connect to.
Address Suffix The system provides a value for this field based on the protocol you choose. For OData V2, the value is /odata/v2.
Credential Name Credential name that you have used while deploying credentials on the tenant.
Proxy Type Type of proxy you want to use to connect to the SuccessFactors system.
Operation Details Operation that you have created using the operations modeler. For more information, see [page 165].
Content Type Format of the request payload. You can select Atom or JSON.
Content Type Encoding Encoding standard to be used for encoding content. Currently, UTF-8 is supported.
Page Size This field is only applicable for Query operations. It indicates the number of records that the SAP Cloud Platform Integration system reads from the SuccessFactors system in one polling cycle when the operation is executed.
Timeout (in min) Maximum time the system waits for a response before the connection ends or times out. The default timeout value is 5 minutes if you do not provide input.
7. If you want to process messages in batches while using the SuccessFactors ODataV2 adapter in the
receiver channel of a Local Integration Process, select Process in Batches.
Restriction
You cannot use the Process in Batches option with the Query operation if the Process Call step is used in a
Multicast branch.
Note
By selecting Process in Batches, you enable the adapter to process messages in batches. The size of a
message batch is defined by the value that you specify in Page Size.
In the Process Call step in which you are calling the Local Integration Process, ensure that you enable
looping and select the Expression Type as Non-XML, Condition Expression as $
{property.<ReceiverName>.<ChannelName>.hasMoreRecords} contains 'true', and
Maximum Number of Iterations as 999.
○ Do not declare the property hasMoreRecords in any of the integration flow steps (For example,
content modifier or script). It is available by default. You can directly use this property while
entering the Condition Expression in Process Call step.
○ Ensure that the receiver system name in the Condition Expression is the SuccessFactors system
that you are connecting to using the receiver channel in the Local Integration Process. Do not enter
the receiver system name from the main integration flow.
○ If you have specified a channel name for the receiver channel in the Local Integration Process,
provide that name in the Condition Expression.
Prerequisites
Context
You need to provide operation details to access and modify records in the SuccessFactors OData service.
You use the operations modeler wizard to provide these details and also generate the EDMX file.
Procedure
1. If you want to use a local EDMX file to connect to the system, perform the following substeps:
a. Select the Local EDMX File checkbox.
Field Description
Proxy Communication Select this checkbox if you want to manually specify the
proxy details
Note
If you are connecting to a system that supports HTTPS communication, you must ensure the following:
○ Java Development Kit is installed on your system.
○ You have referenced JDK in the Eclipse configuration file.
Note
For information about referencing JDK in the Eclipse configuration file, see the Eclipse
documentation.
○ You have imported the security certificate of the system that you are connecting to into your JDK
keystore.
Note
For information about importing certificates to the JDK keystore, see the JDK documentation.
3. In the Select Entity for an operation window, select the Entity and choose Next.
4. Choose the Sub-Levels from the dropdown list.
Remember
If you are performing the Insert (POST) operation and the payload contains one level of sub-entities,
choose 1 from the dropdown list.
Ensure that you select values for the Sub-Levels field only from the dropdown list.
Note
The navigation depth is the level up to which you want to view the entity association. For example,
consider that entity B is associated with entity A and entity C is associated with entity B. If you choose
entity A in the Select Entity for an operation window and choose a navigation depth of 1, you can
navigate up to entity B. If you choose a navigation depth of 2, you can navigate up to entity C.
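The navigation-depth example above (entity A associated with B, B with C) can be expressed as a bounded traversal over entity associations. The sketch below is an illustrative model of what a given Sub-Levels value makes reachable; the entity names are the ones from the example.

```python
def reachable_entities(associations, start, depth):
    """Return the entities reachable from 'start' within 'depth'
    association hops (the navigation depth described above)."""
    current, seen = {start}, {start}
    for _ in range(depth):
        current = {nxt for e in current for nxt in associations.get(e, [])}
        seen |= current
    return seen
```

With the associations A -> B and B -> C, a depth of 1 exposes A and B, while a depth of 2 also exposes C.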
5. Choose the Operation from the dropdown list based on the descriptions in the table below.
Operation Description
Read (GET) Used to fetch a unique entity from the OData service. Passes the key fields along with the entity in the URI (Uniform Resource Identifier). Format: <Entity>(Keyfield 1, Keyfield 2, and so on)
Upsert (UPSERT) Used to perform Update and Insert operations using one command to the OData service exposed by the SuccessFactors system. It checks whether the record exists in the table. If the record is present, it updates the content of the record. If the record is not present, it creates a new record with the parameters specified in the payload.
Restriction
This operation is not supported if you specify JSON as the request payload type.
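The Upsert check-then-update-or-insert semantics described above can be modeled with a simple keyed store. This is illustrative only; the real operation is executed by the SuccessFactors OData service, and the key field and records shown are hypothetical.

```python
def upsert(store, key_field, payload):
    """Model UPSERT semantics: update the record if its key already
    exists in the store, otherwise insert it as a new record."""
    key = payload[key_field]
    if key in store:
        store[key].update(payload)   # record exists: update its content
    else:
        store[key] = dict(payload)   # record absent: create it
    return store[key]
```

One command thus covers both cases, which is why Upsert replaces separate Update and Insert calls.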
6. Select the required fields for the operation from the Fields screen area and choose Next.
Remember
If you choose a PUT or POST operation, this is the last step. Choose Finish.
7. Enter values in the Top and Skip fields based on the descriptions in the table below. This is only applicable
for Query operations.
Field Description
Top If you enter the value 'x', only the top 'x' values are fetched from the OData service provider.
Skip If you enter the value 'x', the top 'x' values are ignored and the remaining records are fetched from the OData service provider.
8. Select the values based on the descriptions in the table below to add filter conditions to the operation. The
filter step is only available for query (GET) operations.
Note
The field set contains the set of filterable fields returned from the
SuccessFactors API that you can use in the filter condition.
If the type is Property, the system reads the value from the
property you have defined in the integration flow element.
Note
Multiple conditions can be added if required.
Remove Any condition that is already added to the list can be selected
and removed from the final SuccessFactors query.
9. Choose Finish.
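The Top, Skip, and filter settings from steps 7 and 8 translate into standard OData system query options ($top, $skip, $filter). The following plain JavaScript sketch is illustrative only — the wizard generates these options for you, and the entity and field names are made up:

```javascript
function buildODataQuery(entity, opts) {
  // Collect OData system query options: $top limits the result count,
  // $skip ignores the first records, $filter restricts the result set.
  var parts = [];
  if (opts.top !== undefined) parts.push("$top=" + opts.top);
  if (opts.skip !== undefined) parts.push("$skip=" + opts.skip);
  if (opts.filter) parts.push("$filter=" + opts.filter);
  return entity + (parts.length ? "?" + parts.join("&") : "");
}
```

With top 5, skip 10, and a status filter, this yields, for example, User?$top=5&$skip=10&$filter=status eq 'active'.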
Results
The SAP Cloud Platform Integration enterprise service bus (ESB) processes data in the XSD format. You use
this XSD file in the mapping step for data transformation.
The EDMX file contains the OData entity specification from the OData service provider. You can use this file in
the subsequent operation modeling steps to connect to the OData service provider.
Context
You use this procedure to configure the SuccessFactors adapter with the OData V4 message protocol.
Remember
The OData V2 message protocol is only available if you are using the SuccessFactors adapter in the receiver
channel.
Tip
If your input payload contains nodes without data, the output also contains empty strings. If you want to
avoid empty strings in the output, ensure that the input payload does not contain any empty nodes.
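As a sketch of the tip above (plain JavaScript, not an adapter API): pruning empty nodes from a payload, here represented as a nested object, prevents empty strings from appearing in the output.

```javascript
function pruneEmptyNodes(node) {
  // Recursively drop properties whose value is an empty string, and
  // nested objects that become empty after pruning, so they do not
  // surface as empty strings in the mapped output.
  var result = {};
  Object.keys(node).forEach(function (key) {
    var value = node[key];
    if (value !== null && typeof value === "object") {
      var pruned = pruneEmptyNodes(value);
      if (Object.keys(pruned).length > 0) result[key] = pruned;
    } else if (value !== "" && value !== null && value !== undefined) {
      result[key] = value;
    }
  });
  return result;
}
```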
Procedure
1. On the Model Configuration tab, double-click the channel that you want to configure.
2. Go to the General tab and choose Browse in the Adapter Type screen area.
3. In the Choose Adapter window, select SuccessFactors and choose OK.
4. Choose OData V4 from the dropdown list in the Message Protocol field.
5. Go to the Adapter Specific tab.
6. Provide values in the fields based on the descriptions in the table below.
Field Description
Address URL of the SuccessFactors data center that you would like
to connect to.
Credential Name Credential name that you have used while deploying
credentials on the tenant.
Proxy Type Type of proxy you want to use to connect to the
SuccessFactors system.
Resource Path Provide the resource path of the entity that you want to
access.
Query Options Query options that you want to send to the OData V4
service with operation details.
For more information on Query Options, refer to steps 7 and 8 in Modeling Operations for SuccessFactors
OData V2 Web Service [page 165].
Context
You use this procedure to configure the SuccessFactors adapter with the OData V2 message protocol.
Procedure
1. On the Model Configuration tab, double-click the channel that you want to configure.
2. Go to the General tab and choose Browse in the Adapter Type screen area.
3. In the Choose Adapter window, select SuccessFactors and choose OK.
4. Choose OData V2 from the dropdown list in the Message Protocol field.
5. Go to the Adapter Specific tab.
6. Provide values in the fields based on the descriptions in the table below.
Field Description
Address URL of the SuccessFactors data center that you would like
to connect to.
Address Suffix The system provides a value for this field based on the
protocol you choose. For OData V2, the value is /odata/v2.
Credential Name Credential name that you have used while deploying
credentials on the tenant.
Proxy Type Type of proxy you want to use to connect to the
SuccessFactors system.
Operation Select the operation you want to perform from the
dropdown list.
Note
Only GET for the sender channel and GET and POST
for the receiver channel are currently supported.
Note
You can find the entity name in the relevant API documentation.
Example: creationDate=1&active=true
Note
In the case of the GET operation, you can fetch just the modified
records in subsequent runs by using the condition
lastModifiedDate=${deltasync.maxDateFromLastRun}.
Page Size The number of records that are read from the
SuccessFactors system in one request.
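The ${deltasync.maxDateFromLastRun} token mentioned in the note is resolved by the runtime on each polling cycle. This plain JavaScript sketch (illustrative only) mimics that substitution to show the resulting condition:

```javascript
function resolveDeltaQuery(template, lastRunDate) {
  // Replace the runtime token with the highest lastModifiedDate value
  // seen in the previous run (the substitution is normally performed
  // by the integration runtime, not by user code).
  return template.replace("${deltasync.maxDateFromLastRun}", lastRunDate);
}
```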
7. If you are configuring the sender channel, perform the following substeps to configure the scheduler:
a. Go to the Scheduler tab.
b. Enter the scheduler details based on the descriptions in the table below.
Daily Run message polling every day to fetch data from the
SuccessFactors system.
Note
If the specified date is not applicable to a month, data polling is
not executed in that particular month. For example, if the 30th
day of the month is selected as the polling date, polling will not
be executed in the month of February, as February 30 is not a
valid date.
Time The time at which the data polling cycle has to be initiated.
For example, if you want data polling to start at 4:10 PM,
enter 16:10. Note that the time must be entered in 24-hour
format.
Every xx minutes between HH hours and HH hours The connector fetches data from the SuccessFactors
system every ‘xx’ minutes between HH hours and HH
hours.
Note
If you want the polling to run for the entire day, enter 1
and 59.
Time Zone Select the time zone that you want to use as the reference
for scheduling the data polling cycle.
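The rule for dates that do not exist in a month (for example, February 30) can be sketched in plain JavaScript (illustrative only): Date rolls an invalid day into the next month, which makes the check straightforward.

```javascript
function pollingRunsOn(year, month, day) {
  // month is 1-based; an invalid day (e.g. Feb 30) rolls over into
  // the next month, so the resulting month no longer matches.
  var d = new Date(year, month - 1, day);
  return d.getMonth() === month - 1;
}
```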
Caution
If a cluster is updated with the latest node assembly, it is restarted after the update. If you have
deployed projects on the cluster with scheduler settings, you face the following issues:
○ Run Once settings: If you have selected Run Once in the scheduler, the system deploys the
project after the cluster is updated. This results in the system performing the operation again.
You see copies of the same result after the cluster update.
○ Specific time schedule: If you have configured a specific date in the scheduler and those
projects are deployed again after a cluster update, you might see those projects in an error
state.
To avoid this, you have to undeploy the project after the system has executed the operation
according to the scheduler settings.
You configure the JMS adapter to enable asynchronous messaging by using message queues.
Prerequisites
The JMS messaging instance that is used in asynchronous messaging scenarios with the JMS or AS2 adapters
has limited resources. Cloud Platform Integration Enterprise Edition sets a limit on the queues, storage, and
connections that you can use in the messaging instance.
There are also technical restrictions on the size of the headers and exchange properties that can be stored in
the JMS queue.
The following size limits apply when saving messages to JMS queues:
● There are no size limits for the payload. The message is split internally when it is put into the queue.
● There are no size limits for attachments. The message and the attachment are split internally when put
into the queue.
● Headers and exchange properties defined in the integration flow before the message is saved to the queue
must not exceed 4 MB in total.
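The 4 MB limit on headers and exchange properties can be estimated up front. This Node.js sketch (illustrative only, not an adapter API) sums the UTF-8 byte length of all header names and values:

```javascript
function headersWithinLimit(headers, limitBytes) {
  // Sum the UTF-8 byte lengths of header names and values and compare
  // the total against the limit (4 MB for JMS queues per the text above).
  var total = 0;
  Object.keys(headers).forEach(function (name) {
    total += Buffer.byteLength(name, "utf8");
    total += Buffer.byteLength(String(headers[name]), "utf8");
  });
  return total <= limitBytes;
}
```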
Note
The JMS Adapter generates message queues during deployment. To avoid errors, you must manually
delete any message queues that are no longer required in the Message Queue Monitor.
Caution
Do not use this adapter type together with Data Store Operations steps, Aggregator steps, or global
variables, as this can cause issues related to transactional behavior.
This adapter type cannot process ZIP files correctly. Therefore, don't use this adapter type together with
Encoder or Decoder process steps that deal with ZIP compression or decompression.
Context
You configure the receiver and sender JMS adapter to enable asynchronous messaging by using message
queues. The JMS incoming message is stored in a permanent persistence and scheduled for processing in a
queue. The processing of messages from the queue is not serialized; the messages are processed
concurrently. The sender does not have to wait while the message is being processed and, if needed, retried.
The JMS adapter stores only simple data types, which includes in particular: exchange properties that do
not start with Camel, as well as headers (primitive data types or strings).
Note
You can use the JMS Receiver Adapter in the Send step to save the message to the JMS queue and to
continue the processing afterwards.
Supported Headers:
● JMSTimestamp
Specifies the time when a JMS message was created.
Procedure
1. Double-click the channel that you want to configure on the Model Configuration tab page.
2. On the General tab page, choose Browse in the Adapter Type screen area.
3. Select JMS in the Choose Adapter window and choose OK.
4. Choose the Adapter-Specific tab page and enter the details as shown in the table below:
Retry Handling Retry Interval (in m) Enter a value for the amount of time
to wait before retrying message delivery.
Maximum Retry Interval (in m)* (only configurable when Exponential
Backoff is selected) Enter a value for the maximum amount of time to
wait before retrying message delivery. The minimum value is 10
minutes. The default value is set to 60 minutes.
Note
For high-load scenarios, or if you are sure that only small
messages will be processed in your scenario, you should
deselect the checkbox to improve the performance. But be
aware that there is a risk of an outage, for example, if you
run out of memory.
Retention Threshold for Alerting (in d) Enter the time period (in days) by
which the messages have to be fetched. The default value is 2.
Expiration Period (in d)* Enter the number of days after which the stored
messages are deleted (default is 90).
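The interplay of Retry Interval, Exponential Backoff, and Maximum Retry Interval can be sketched as follows (plain JavaScript, illustrative only): with backoff enabled, the interval doubles after each unsuccessful retry and is capped at the configured maximum.

```javascript
function computeRetryInterval(attempt, baseMinutes, maxMinutes) {
  // attempt is 1-based; the interval doubles with every further retry
  // and never exceeds the configured maximum retry interval.
  var interval = baseMinutes * Math.pow(2, attempt - 1);
  return Math.min(interval, maxMinutes);
}
```

For example, with a base interval of 7 minutes and a maximum of 60, the retries wait 7, 14, 28, 56, and then 60 minutes.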
Note the following behavior, which can be observed in message monitoring if you have configured a Retry
Interval in the adapter:
The following figure shows a straightforward case when JMS queues are configured during message
processing between a sender and a receiver.
In the setup shown, integration flow 1 receives a message from a sender, processes it, and writes the result
to JMS queue 1 (using a JMS receiver adapter), from where it is picked up by integration flow 2 (using a
JMS sender adapter). The latter processes the message and writes it to another queue, JMS queue 2
(using a JMS receiver adapter). From there, integration flow 3 picks up the message (using a JMS sender
adapter), processes it, and sends it to a receiver.
MPL Description
MPLs for messages that are read from and written into one queue (using the JMS sender or
receiver adapter, respectively) are correlated with each other using a correlation ID. You can use the
Message Monitor or the OData API to search for messages that belong to one correlation ID. To get back to
our example, the MPLs are correlated in the following way:
○ MPL1 and MPL2 are correlated by a correlation ID (for example, by correlation ID: ABC)
○ MPL2 and MPL3 are correlated by another correlation ID (for example, by correlation ID: XYZ)
If an error occurs when sending the message to the receiver (for example, the receiver cannot be reached),
the following happens:
During a retry, the message is in status Retry. For each retry, a separate Run is generated and displayed in
the Monitoring application within one MPL (and can be accessed using one MPL ID).
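To look up all MPLs for one correlation ID via the OData API mentioned above, a query URL can be built as in this sketch (the tenant host is a placeholder; MessageProcessingLogs and its CorrelationId property belong to the public monitoring API):

```javascript
function mplByCorrelationId(tenantUrl, correlationId) {
  // Standard OData $filter syntax; the filter value must be URL-encoded.
  return tenantUrl +
    "/api/v1/MessageProcessingLogs?$filter=" +
    encodeURIComponent("CorrelationId eq '" + correlationId + "'");
}
```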
Results
In certain cases, usage of the JMS sender adapter can cause a node failure. This can happen, for example, if the
JMS adapter tries repeatedly to process a failed (large) message. The classic approach in such cases is to
undeploy the integration flow and reprocess the message. However, if you do this, the content of the original
message is lost.
To avoid such situations, the JMS adapter provides the option Dead-Letter Queue (switched on by default). If
this option is selected, the message is stored in the dead-letter queue if it cannot be processed after two
retries.
To be more specific: The first retry of the message is executed with a delay of 7 minutes. If the message then
still fails, it is stored in the dead-letter queue and manual interaction is required to process the message again.
If you are running scenarios with the JMS sender adapter, the Managing Locks editor helps you deal with
messages that cannot be processed.
After the last retry, a lock entry is written, which can be investigated in the Message Monitoring application
under Managing Locks.
Attribute Value
Component JmsDeadLetter
Note
You can use this value to search for the message under
Managing Message Queues.
When you release the lock, the system starts retrying the message again.
Next Steps
Note
Note that the following specific exchange properties are available for use in the context of the JMS adapter:
Related Information
https://blogs.sap.com/2017/07/17/cloud-integration-configure-dead-letter-handling-in-jms-adapter/
Headers and Exchange Properties [page 7]
Defining a Send Step [page 357]
Context
The ODC adapter enables you to communicate with systems that expose data through the OData Channel for
SAP NetWeaver Gateway. You must configure a channel with the ODC adapter to connect to a service
developed using SAP NetWeaver Gateway. For more information, see OData Channel.
Procedure
1. On the Model Configuration tab, double-click the channel that you want to configure.
2. In the General tab, choose Browse in the Adapter Type screen area.
3. In the Choose Adapter window, select ODC and choose OK.
4. Choose the Adapter Specific tab.
5. Provide values in the fields based on the descriptions in the table below.
Field Description
Credential Name Alias that you used while deploying basic authentication
credentials
Operation Select the operation that you want to perform from the
dropdown list.
Resource Path Path to the resource that you want to perform the
operation on
Prerequisites
Context
The LDAP adapter enables you to communicate with systems that expose data through an LDAP service.
If you have input messages in different formats, you need to use a mapping step to create a target
payload that can be recognized by the LDAP adapter. You can use this schema as a template for the target in
the mapping step.
Note
You cannot update multiple records in a single processing cycle. You can only perform a given operation on
one record at a time.
Remember
You must use the SAP Cloud Connector for connecting to an LDAP service using the LDAP adapter.
The LDAP adapter supports version 2.9 or higher of the cloud connector.
Procedure
1. On the Model Configuration tab, double-click the channel that you want to configure.
2. Go to the General tab and choose Browse in the Adapter Type screen area.
3. In the Choose Adapter window, select LDAP and choose OK.
4. Choose the Adapter Specific tab.
5. Provide values in the fields based on the descriptions in the table below.
Field Description
Address Enter the URL of the LDAP directory service that you are
connecting to
Proxy Type Select the proxy type that you want to use. Currently, only
the on-premise option is supported.
Authentication Select the authentication type that you want to use.
Currently, only the simple option is supported.
Credential Name Enter the credential name that you have deployed in the
tenant
Input Type Select the type of input that you are providing (applicable
only for the Insert operation)
The LDAP adapter supports input via JNDI attributes. If you choose this input type, you can use a script step
to assign values to the attributes that are then passed to the LDAP service.
importClass(com.sap.gateway.ip.core.customdev.util.Message);
importClass(java.util.HashMap);
importClass(javax.naming.directory.Attribute);
importClass(javax.naming.directory.BasicAttribute);
importClass(javax.naming.directory.BasicAttributes);
importClass(javax.naming.directory.Attributes);

function processData(message) {
    var body = message.getBody();
    var dn = "cn=Markus,ou=users,dc=testcompany,dc=com";
    var givenNameAttr = new BasicAttribute("givenName", "Jack");
    var displayNameAttr = new BasicAttribute("displayName", "Reacher");
    var telephoneNumberAttr = new BasicAttribute("telephoneNumber", "100-100-100");
    var attributes = new BasicAttributes();
    attributes.put(givenNameAttr);
    attributes.put(displayNameAttr);
    attributes.put(telephoneNumberAttr);
    var attr = new BasicAttribute("title", "Developer");
    attributes.put(attr);
    attr = new BasicAttribute("sn", "Brutus");
    attributes.put(attr);
    var resultingMap = new HashMap();
    resultingMap.put("dn", dn);
    resultingMap.put("attributes", attributes);
    message.setBody(resultingMap);
    return message;
}
In the script, the values for attributes givenName, displayName and telephoneNumber are declared in the
script before they are passed to the LDAP adapter. Similarly, you can also create a script where these values
are dynamically fetched during runtime.
The example schema contains a set of attributes for a given record. In case you want the schema to contain
additional attributes, you can manually edit the schema before using it in the mapping step.
For example, if you want to add a field, telephone number, you can add an element in the schema under the
sequence element.
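Such an addition could look as follows; the surrounding schema structure is assumed, and only the added element (here named telephoneNumber, by analogy with the earlier script) is relevant:

```xml
<xs:sequence>
  <!-- existing attribute elements of the record ... -->
  <xs:element name="telephoneNumber" type="xs:string" minOccurs="0"/>
</xs:sequence>
```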
Let us consider a scenario where you want to add an attribute to the message (payload) that you are sending to
the LDAP service. For example, you want to add a password attribute. Due to security concerns, you should
encode the password before you add it.
You can achieve this by adding a Script step after the mapping step in the integration flow. Here's an example of
the script that you can use in the Script step:
import com.sap.gateway.ip.core.customdev.util.Message;
import java.util.HashMap;
import javax.xml.bind.DatatypeConverter;
import javax.naming.directory.Attribute;
import javax.naming.directory.Attributes;
import javax.naming.directory.BasicAttribute;
import javax.naming.directory.BasicAttributes;

def Message processData(Message message)
{
    // The body of this script is truncated in the source document. A possible
    // completion (illustrative only; the attribute name and password source are
    // assumptions) would Base64-encode the password before adding it as an
    // LDAP attribute, for example:
    // def encoded = DatatypeConverter.printBase64Binary(password.getBytes("UTF-8"));
    // attributes.put(new BasicAttribute("userPassword", encoded));
    return message;
}
You can use the Modify operation to change the DN of an LDAP record. You can do this by adding the tag
<DistinguishedName_Previous> to the input payload with the old DN. Specify the modified DN in
<DistinguishedName> tag and perform the Modify operation.
Here's an example that shows a sample input payload for modifying the DN of an LDAP record:
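The sample payload itself is missing from this version of the document. A possible shape, based on the description above, is sketched below; the root element name and the DN values are illustrative, and only the two DistinguishedName tags are taken from the text:

```xml
<modify>
  <DistinguishedName_Previous>cn=Markus,ou=users,dc=testcompany,dc=com</DistinguishedName_Previous>
  <DistinguishedName>cn=Markus,ou=staff,dc=testcompany,dc=com</DistinguishedName>
</modify>
```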
Related Information
The XI sender adapter allows you to connect a tenant to a local Integration Engine in a sender system.
Note
In the following cases certain features might not be available for your current integration flow:
More information: Adapter and Integration Flow Step Versions [page 405]
● SapAuthenticatedUserName
Contains the user name of the client that calls the integration flow.
If the sender channel is configured to use client certificate authentication, no such header is set (as it is not
available in this case).
● Avoid changing the temporary storage location for operative scenarios (or do this very carefully).
Such an action can result in data loss. The reason is that outdated messages (which will not be retried any
more) can still be stored there, even after you have changed the Temporary Storage attribute in the
meantime. When you plan to change this attribute, make sure there are no messages left in the
originally configured temporary storage.
● Avoid changing (sender or receiver) participant or channel name in the integration flow.
The name of the configured Temporary Storage is generated based on these names. If you change these
names in the integration flow model and deploy the integration flow again, a temporary storage is
generated with the new name. However, there can still be messages in the old storage. These messages will
not be retried any more after the new storage has been created.
● Apply the right transaction handling.
JMS queues and data stores support different transactional processing. As there are additional
implications of each transactional processing option with other integration flow steps, we strongly
recommend that you follow these rules:
○ If as Temporary Storage you select JMS Queue in an XI receiver adapter and the XI adapter is used in a
sequential multicast or splitter pattern, as Transaction Handling you have to select Required for JMS.
If as Temporary Storage you select Data Store in an XI receiver adapter and the XI adapter is used in a
sequential multicast or splitter pattern, as Transaction Handling you have to select Required for JDBC.
For the XI sender adapter no transaction handler is required.
For the XI receiver adapter no transaction handler is required if the XI adapter is not used in a
sequential multicast or in a split scenario.
There is no distributed transaction support. Therefore, you cannot combine JMS and JDBC
transactions. As a consequence, transactional behavior cannot be guaranteed in scenarios using the XI
receiver adapter with JMS storage in multicast scenarios together with flow steps that need a JDBC
transaction handler (like, for example, Data Store or Write Variables).
When you have created a sender channel (with XI adapter selected), you can configure the following attributes.
Sender: Connection
Parameter Description
Address Address under which a sender system can reach the tenant
Note
When you specify the endpoint address /path, a sender can also call the integration
flow through the endpoint address /path/<any string> (for example, /path/
test/).
Be aware of the following related implication: When you additionally deploy an integration
flow with endpoint address /path/test/, a sender using the /path/test endpoint
address will now call the newly deployed integration flow with the endpoint address
/path/test/. When you then undeploy the integration flow with endpoint address
/path/test, the sender again calls the integration flow with endpoint address /path
(original behavior). Therefore, be careful when reusing paths of services. It is better to use
completely separate endpoints for services.
● Client Certificate: Sender authorization is checked on the tenant by evaluating the
subject/issuer distinguished name (DN) of the certificate (sent together with the inbound
request). You can use this option together with the following authentication option:
Client-certificate authentication (without certificate-to-user mapping).
● User Role: Sender authorization is checked based on roles defined on the tenant for the
user associated with the inbound request. You can use this option together with the
following authentication options:
○ Basic authentication (using the credentials of the user)
The authorizations for the user are checked based on user-to-role assignments
defined on the tenant.
○ Client-certificate authentication and certificate-to-user mapping
The authorizations for the user derived from the certificate-to-user mapping are
checked based on user-to-role assignments defined on the tenant.
Depending on your choice, you can also specify one of the following properties:
Note
Note the following:
○ You can also type in a role name. This has the same result as selecting the role
from the value help: Whether the inbound request is authenticated depends on
the correct user-to-role assignment defined in SAP Cloud Platform Cockpit.
○ When you externalize the User Role parameter, the value help for roles is offered
in the integration flow configuration as well.
○ If you have selected a product profile for SAP Process Orchestration, the value
help will only show the default role ESBMessaging.send.
Communication Party Specify the communication party for the response. By default, no party is set.
Communication Component Specify the communication component for the response. The default is DefaultXIService.
Quality of Service Defines how the message (from the sender) is processed by the tenant.
● Best Effort
The message is sent synchronously; this means that the tenant waits for a
response before it continues processing.
No temporary storage of the message needs to be configured, as message
request and response are processed immediately one after the other.
● At Least Once
The message is sent asynchronously. This means that the tenant does not wait
for a response before continuing processing. It is expected that the receiver
guarantees that the message is processed exactly once.
This option guarantees that the message is processed at least once on the
tenant. If a message with an identical message ID is received multiple times from
a sender, all of them will be processed.
If you choose this option, the message needs to be temporarily stored on the
tenant (in the storage configured under Temporary Storage). As soon as the
message is stored there, the sender receives a status message confirming
successful receipt. If an error occurs, the message is retried from the temporary storage.
● Exactly Once (only possible if as Temporary Storage the option Data Store is
selected)
The message is sent asynchronously. This means that the tenant does not wait
for a response before continuing processing.
This option guarantees that the message is processed exactly once on the
tenant. If a message with an identical message ID is received multiple times from
a sender, only the first one will be processed. The subsequent messages can be
identified as duplicates (based on the value of the message header
SapMessageIdEx, see below) and will not be processed.
Note
For Exactly Once handling, the sender XI adapter saves the protocol-specific
message ID in the header SapMessageIdEx. If this header is set, the XI
receiver uses the content of this header as the message ID for outbound
communication. Usually, this is the desired behavior and enables the receiver
to identify any duplicates. However, if the sender system is also the receiver
system, or several variants of the message are sent to the same system (for
example, in an external call or multicast), the receiver system will incorrectly
identify these messages as duplicates. In this case, the header
SapMessageIdEx must be deleted (for example, using a content modifier) or
overwritten with a newly generated message ID. This deactivates Exactly Once
processing (that is, duplicates are no longer recognized by the protocol).
If you choose this option, the message needs to be temporarily stored on the
tenant (in the storage configured under Temporary Storage). As soon as the
message is stored there, the sender receives a status message confirming
successful receipt. If an error occurs, the message is retried from the temporary storage.
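The duplicate detection described for Exactly Once can be sketched in plain JavaScript (illustrative only; the actual check is performed by the runtime based on SapMessageIdEx):

```javascript
function makeDuplicateFilter() {
  // Remember every message ID seen so far; the first occurrence is
  // processed, later ones with the same ID are flagged as duplicates.
  var seen = {};
  return function (messageId) {
    if (seen[messageId]) return "duplicate";
    seen[messageId] = true;
    return "process";
  };
}
```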
Temporary Storage (only if as Quality of Service the option Exactly Once is
selected) Temporary storage location for messages that are processed
asynchronously. Messages for which processing runs into an error can be
retried from the temporary storage.
You can choose among the following storage types:
● Data Store
The message is temporarily stored in the database of the tenant (in case of an
error). In case of successful message processing, the message is immediately
removed from the data store.
You can monitor the content of the data store in the Monitor section of the Web
UI under Manage Stores in the Data Stores tile.
Note
The data store name is automatically generated and contains the following
parts:
Below the data store name, you find a reference to the associated
integration flow in the following form: <integration flow name>/XI
● JMS Queue
The message is stored temporarily in a JMS queue on the configured message
broker.
If possible, use this option as it comes with better performance.
You can monitor the content of the queue in the Monitor section of the Web
UI under Manage Stores in the Message Queues tile.
Note
The name of the JMS queue is automatically generated and contains the
following parts:
Note
This option is only available if you have an Enterprise Edition license.
Lock Timeout (in min) Enter a value for the retry timeout of the in-progress repository.
Retry Interval (in min) Enter a value for the amount of time to wait before retrying message delivery.
Exponential Backoff Select this option to double the retry interval after each unsuccessful retry.
Maximum Retry Interval (in min) (only configurable when Exponential
Backoff is selected) You can set an upper limit on that value to avoid an
endless increase of the retry interval. The default value is 60 minutes. The
minimum value is 10 minutes.
Dead-Letter Queue (only if as Temporary Storage the option JMS Queue is
selected) Select this option to place the message in the dead-letter queue if
it cannot be processed after three retries caused by an out-of-memory error.
Processing of the message is then stopped.
In such cases, a lock entry is created which you can view and release in the
Monitor section of the Web UI under Managing Locks.
Use this option to avoid out-of-memory situations (caused in many cases by
large messages).
For more information, read the SAP Community blog Cloud Integration – Configure
Dead Letter Handling in JMS Adapter.
Encrypt Message during Persistence (only in case you have selected
Exactly Once as Quality of Service) Select this option in case the messages
should be stored in an encrypted way during certain processing steps.
Note
In the following cases certain features might not be available for your current integration flow:
● A feature for a particular adapter or step was released after you created the corresponding shape in
your integration flow.
● You are using a product profile other than the one expected.
More information: Adapter and Integration Flow Step Versions [page 405]
● Avoid changing the temporary storage location for operative scenarios (or do this very carefully).
Such an action can result in data loss. The reason is that outdated messages (which will not be retried any
more) can still be stored there, even after you have changed the Temporary Storage attribute in the meantime.
● In the current release, you cannot dynamically configure any parameters of the adapter.
● Currently, the adapter is not supported in the Request-Reply and the Send step.
● In the current version, explicit retry configuration using retry headers is not supported.
● When you have selected Exactly Once (as Quality of Service), currently only AtLeastOnce (ALO) is
supported by the XI adapter, no ExactlyOnce (EO) in the strict sense. That means if the sender system
sends the same message multiple times, the XI adapter forwards all these messages to the receiver
(multiple times).
● The XI adapter does not support acknowledgements.
● ExactlyOnceInOrder (EOIO) is not supported by the XI adapter.
When you have created a receiver channel (with XI adapter selected), you can configure the following
attributes.
Receiver: Connection
Parameter Description
Address Address under which the local integration engine of the receiver system can be called
https://<host name>:<port>/sap/xi/engine?type=receiver&sap-
client=<client>
Note
You can find out the constituents (HTTPS port) of the URL by choosing transaction SMICM in the receiver system.
The endpoint URL that has actually been used at runtime is displayed in the message processing log (MPL) in
the message monitoring application (MPL property RealDestinationUrl).
Proxy Type The type of proxy that you are using to connect to the target system:
Note
If you select the On-Premise option, the following restrictions apply to other parameter values:
○ Do not use an HTTPS address for Address, as it leads to errors when performing consistency
checks or during deployment.
○ Do not use the option Client Certificate for the Authentication parameter, as it leads to errors
when performing consistency checks or during deployment.
Note
If you select the On-Premise option and use the SAP Cloud Connector to connect to your on-premise
system, the Address field of the adapter references a virtual address, which has to be configured in
the SAP Cloud Connector settings.
● If you select Manual, you can manually specify Proxy Host and Proxy Port (using the corresponding entry
fields).
Furthermore, with the parameter URL to WSDL you can specify a Web Service Definition Language
(WSDL) file defining the WS provider endpoint (of the receiver). You can specify the WSDL either by
uploading a WSDL file from your computer (option Upload from File System) or by selecting an integration
flow resource (which needs to be uploaded in advance to the Resources view of the integration flow).
This option is only available if you have chosen a Process Orchestration product profile.
Location ID (only in case On-Premise is selected for Proxy Type) To connect to an SAP Cloud Connector
instance associated with your account, enter the location ID that you defined for this instance in the
destination configuration on the cloud side. You can also enter ${header.headername} or
${property.propertyname} to dynamically read the value from a header or a property.
● None
No authentication is configured.
● Basic Authentication
The tenant authenticates itself against the receiver based on user credentials (user and password).
Note that when this authentication option is selected, the required security artifact (User Credentials)
has to be deployed on the tenant.
● Certificate-Based Authentication
The tenant authenticates itself against the receiver based on X.509 certificates.
Note that when this authentication option is selected, the required security artifact (Keystore) has to be
deployed on the tenant.
● Principal Propagation
The tenant authenticates itself against the receiver by forwarding the principal of the inbound user to the
cloud connector, and from there to the back end of the relevant on-premise system. You can only use
principal propagation if you have selected Best Effort as the Quality of Service.
Note
This authentication method can only be used with the following sender adapters: HTTP, SOAP, IDoc,
XI sender (and Quality-of-Service Best Effort)
Note
The token for principal propagation expires after 30 minutes. If it takes longer than 30 minutes to
process the data between the sender and receiver channel, the token expires, which leads to errors in
message processing.
For special use cases, this authentication method can also be used with the AS2 adapter.
Credential Name
Name of the User Credentials artifact that needs to be deployed separately on the tenant (it contains the user name and password for the user to be authenticated).
Private Key Alias
Optional entry to specify the alias of the private key to be used for authentication. If you leave this field empty, the system checks at runtime for any valid key pair in the tenant keystore.
Compress Message
Enables the tenant to send compressed request messages to the receiver (which acts as WS provider) and to indicate to the receiver that it can handle compressed response messages.
Return HTTP Response Code as Header
When selected, writes the HTTP response code received in the response message from the called receiver system into the header CamelHttpResponseCode. This feature is disabled by default.
Note
You can use this header, for example, to analyze the message processing run (when level Trace has been
enabled for monitoring). Furthermore, you can use this header to define error handling steps after the
integration flow has called the XI receiver.
You cannot use the header to change the return code since the return code is defined in the adapter and
cannot be changed.
Clean Up Request Headers
Select this option to clean up the adapter-specific headers after the receiver call.
XI Identifiers for Sender
An XI message contains specific header elements that are used to address the sender or a receiver of the message, such as communication party, communication component, and service interface. For more information on the concepts behind these entities, see the documentation of SAP Process Integration at https://help.sap.com/viewer/product/SAP_NETWEAVER_PI/ALL/.
● Communication Party
Specifies the Communication Party header value of the request message sent to the receiver system.
A communication party typically represents a larger unit involved in an integration scenario. You usually use a communication party to address a company.
This parameter can be configured dynamically.
● Communication Component
Specifies the Communication Component header value of the request message sent to the receiver system.
You typically use a communication component to address a business system as the sender or receiver of messages.
This parameter can be configured dynamically.
XI Identifiers for Receiver
An XI message contains specific header elements that are used to address the sender or a receiver of the message, such as communication party, communication component, and service interface. For more information on the concepts behind these entities, see the documentation of SAP Process Integration at https://help.sap.com/viewer/product/SAP_NETWEAVER_PI/ALL/.
● Communication Party
Specifies the Communication Party header value of the response message received from the receiver system.
A communication party typically represents a larger unit involved in an integration scenario. You usually use a communication party to address a company.
This parameter can be configured dynamically.
● Communication Component
Specifies the Communication Component header value of the response message received from the receiver system.
You typically use a communication component to address a business system as the sender or receiver of messages.
This parameter can be configured dynamically.
Note
To get this information, go to the receiver system and choose transaction SLDCHECK. In section LCR_GET_OWN_BUSINESS_SYSTEM, you find the business system ID (which typically has the form <SID>_<client>).
● Service Interface and Service Interface Namespace
These attributes specify the service interface that determines the data structure of the response message received from the receiver system.
The receiver interface is described according to how interfaces are defined in the Enterprise Services Repository, that is, with a name and a namespace.
This parameter can be configured dynamically.
XI Message ID Determination
Select this option to specify how the XI message ID shall be defined. You can choose among the following options:
● Generate (default)
Generates a new XI message ID.
● Reuse
Takes over the message ID passed with the header SapMessageIdEx. If the header is not available at runtime, a new message ID is generated.
● Map
Maps a source message ID to the new XI message ID.
For more information on how to use this option in an end-to-end scenario, check out the SAP Community blog Cloud Integration – Configuring ID Mapping in XI Receiver Adapter.
Quality of Service
Defines how the message (from the tenant) is expected to be processed by the receiver. There are the following options:
● Best Effort
The message is sent synchronously; this means that the tenant waits for a response before it continues
processing.
No temporary storage of the message needs to be configured, as message request and response are
processed immediately after another.
● Exactly Once
The message is sent asynchronously. This means that the tenant does not wait for a response before
continuing processing. It is expected that the receiver guarantees that the message is processed exactly
once.
If you choose this option, the message needs to be temporarily stored on the tenant (in the storage
configured under Temporary Storage). As soon as the message is stored there, the sender receives a
successfully received status message. If an error occurs, the message is retried from the temporary storage.
Temporary Storage
Temporary storage location for messages that are processed asynchronously. Messages for which processing runs into an error can be retried from the temporary storage.
● Data Store
The message is stored temporarily in a data store on the tenant. The name of the data store is generated automatically; it is subject to a length restriction and must be no more than 40 characters (including the underscore). If the data store name exceeds this limit, you must shorten the participant name or channel name accordingly.
Below the data store name, you find a reference to the associated integration flow in the following form: <integration flow name>/XI
● JMS Queue
The message is stored temporarily in a JMS queue on the configured message broker.
If possible, use this option as it comes with a better performance.
You can monitor the content of the data store in the Monitor section of the Web UI under Manage Stores
in the Message Queues tile.
Note
The name of the JMS queue is automatically generated.
Note
This option is only available if you have an Enterprise Edition license.
Retry Interval (in min)
Enter a value for the amount of time to wait before retrying message delivery (in case of an error).
Exponential Backoff
Select this option to double the retry interval after each unsuccessful retry.
Maximum Retry Interval (in min)
(only configurable when Exponential Backoff is selected)
You can set an upper limit on that value to avoid an endless increase of the retry interval. The default value is 60 minutes. The minimum value is set to 10 minutes.
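The interplay of Retry Interval, Exponential Backoff, and Maximum Retry Interval can be sketched as follows. This is an illustrative model of the doubling-and-capping behavior, not the adapter's actual implementation:

```python
def retry_intervals(initial_min, max_min, retries):
    """Yield the wait time (in minutes) before each retry, doubling after
    every unsuccessful attempt and capping at the configured maximum."""
    interval = initial_min
    for _ in range(retries):
        yield interval
        interval = min(interval * 2, max_min)

# With a 10-minute retry interval, exponential backoff, and the default
# 60-minute maximum, five consecutive failures wait:
print(list(retry_intervals(10, 60, 5)))  # → [10, 20, 40, 60, 60]
```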
Dead-Letter Queue
(only if JMS Queue is selected as Temporary Storage)
Select this option to place the message in the dead-letter queue if it cannot be processed after three retries caused by an out-of-memory error. Processing of the message is then stopped.
In such cases, a lock entry is created, which you can view and release in the Monitor section of the Web UI under Managing Locks.
Use this option to avoid out-of-memory situations (caused in many cases by large messages).
For more information, read the SAP Community blog Cloud Integration – Configure Dead Letter Handling in JMS Adapter.
Encrypt Message during Persistence
(only if you have selected Exactly Once as Quality of Service)
Select this option if messages should be stored in encrypted form during certain processing steps. It is recommended to choose this option if the message can contain sensitive data. Note that this setting might decrease the performance of the integration scenario.
Prerequisites
● You have defined the RFC destination in the SAP Cloud Connector for your application. To create
destinations, you need either the administrator or the developer role in the SAP Cloud Platform cockpit.
For more information on how to create RFC destinations in the Cloud Connector, see Creating an RFC
Destination [page 629].
● You have opened the integration flow in the editor for editing purposes.
● The SAP NetWeaver ABAP system is up and running.
● You have generated the XSD file for the remote function module. For more information on how to generate the XSD
file, see Generating XSD/WSDL for Function Modules Using ESR (Process Integration) [page 629].
Context
RFC executes the function call using synchronous communication, which means that both systems must be
available at the time the call is made. When the call is made for the function module using the RFC interface,
the calling program must specify the parameters of the connection in the form of an RFC destination. RFC
destinations provide the configuration needed to communicate with an on-premise ABAP system through an
RFC interface.
The RFC destination configuration settings are used by the SAP JAVA Connector (SAP JCo) to establish and
manage the connection.
Remember
Field Description
Send Confirm Transaction
(Applicable to BAPI functions) By default, this option is disabled. You can enable this option if you want to support BAPI functions that require BAPI_TRANSACTION_COMMIT to be invoked implicitly by the RFC receiver adapter.
Note
Ensure the following ABAP functions are whitelisted in the Cloud Connector before using this option:
● BAPI_TRANSACTION_COMMIT
● BAPI_TRANSACTION_ROLLBACK
Caution
If you enable this option for non-BAPI functions, BAPI_TRANSACTION_COMMIT can still be invoked, which is redundant and hence may impact RFC function execution time.
Note
You can create dynamic destinations by using expressions (header, property) in the Content Modifier. To do that, first select the Content Modifier in the integration flow. Then go to Message Header and assign the destination name as the value of a header. Select your RFC adapter and assign the dynamic destination by using the expression ${header.<header name>} or ${property.<property name>}, for example ${header.abc} or ${property.abc}, where abc is the name of the header or property.
Procedure
1. In the Model Configuration editor, double-click the channel that you want to configure.
2. Go to General tab. Select RFC from the Adapter Type dropdown.
3. Go to the Adapter Specific tab and enter the value for the field shown in the table below:
Field Description
Send Confirm Transaction
(Applicable to BAPI functions) By default, this option is disabled. You can enable this option if you want to support BAPI functions that require BAPI_TRANSACTION_COMMIT to be invoked implicitly by the RFC receiver adapter.
Note
Ensure the following ABAP functions are whitelisted in the Cloud Connector before using this option:
○ BAPI_TRANSACTION_COMMIT
○ BAPI_TRANSACTION_ROLLBACK
Caution
If you enable this option for non-BAPI functions, BAPI_TRANSACTION_COMMIT can still be invoked, which is redundant and hence may impact RFC function execution time.
Note
You can create dynamic destinations by using expressions (header, property) in the Content Modifier. To do that, first select the Content Modifier in the integration flow. Then go to Message Header and assign the destination name as the value of a header. Select your RFC adapter and assign the dynamic destination by using the expression ${header.<header name>} or ${property.<property name>}, for example ${header.abc} or ${property.abc}, where abc is the name of the header or property.
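The ${header.<name>} and ${property.<name>} placeholders are resolved by the Camel runtime when the message is processed. As a rough illustration of that lookup, here is a minimal sketch in Python; the function name and resolution logic are hypothetical and not part of the product:

```python
import re

def resolve(expression, headers, properties):
    """Resolve placeholders such as ${header.abc} or ${property.abc}
    against the message's headers and exchange properties (sketch only)."""
    def repl(match):
        scope, name = match.groups()
        source = headers if scope == "header" else properties
        return str(source[name])
    return re.sub(r"\$\{(header|property)\.([^}]+)\}", repl, expression)

# A header named "abc" carrying a (hypothetical) RFC destination name:
print(resolve("${header.abc}", {"abc": "MyRfcDestination"}, {}))  # → MyRfcDestination
```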
Prerequisites
● Deployment of the ProcessDirect adapter must follow N:1 cardinality: N (producers) → 1 (consumer).
Caution
Multiple producers can connect to a single consumer, but the reverse is not possible. The cardinality
restriction is also valid across integration projects. If two consumers with the same endpoint are
deployed, then the start of the second consumer fails.
Context
ProcessDirect Receiver Adapter: If the ProcessDirect adapter is used to send data to other integration flows,
the integration flow is considered as a producer integration flow. In this case, the integration flow has a receiver
ProcessDirect adapter.
ProcessDirect Sender adapter: If the ProcessDirect adapter is used to consume data from other integration
flows, the integration flow is considered as a consumer integration flow. In this case, the integration flow has a
sender ProcessDirect adapter.
● Decompose large integration flows: You can split a large integration flow into smaller integration flows
and maintain the in-process exchange without network latency.
● Customize the standard content: SAP Cloud Platform Integration enables integration developers to
customize their integration flows without modifying them entirely. The platform provides plugin
touchpoints where integration developers can add custom content. This custom content is currently
connected using HTTP or SOAP adapters. You can also use the ProcessDirect adapter to connect the
plugin touchpoints at a lower network latency.
● Allow multiple integration developers to work on the same integration flow: In some scenarios,
integration flows can be large (100 steps or more), and if they are owned or developed by just one
integration developer this can lead to an overreliance on one person. The ProcessDirect adapter helps you
to split a large integration flow into smaller parts that can be owned and managed by multiple integration
developers independently. This allows several people to work on different parts of the same integration
flow simultaneously.
● Reuse of integration flows by multiple integration processes spanning multiple integration projects:
Enables the reuse of integration flows such as error handling, logging, extension mapping, and retry
handling across different integration projects and Camel context. Integration developers therefore only
need to define repetitive logic flows once and can then call them from any integration flow using the
ProcessDirect adapter.
● Dynamic endpoint: Enables the producer integration flow to route the message dynamically to different
consumer integration flow endpoints. The producer integration flow looks up the address in the headers,
body, or properties of the exchange, and the corresponding value is then resolved to the endpoint to which
the exchange is routed.
● Multiple MPLs: MPL logs are interlinked using correlation IDs.
● Transaction processing: Transactions are supported on integration flows using the ProcessDirect adapter.
However, the scope of the transaction is isolated and restricted to a single integration flow.
● Header propagation: Header propagation is not supported across producer and consumer integration
flows unless configured in <Allowed Header(s)> in the integration flow's runtime configuration.
● Property propagation: Property propagation is not supported across producer and consumer integration
flows, except for SAP_SAPPASSPORT property. This property is for internal use only and users are not
allowed to modify it.
Tip
The ProcessDirect adapter improves network latency, as message propagation across integration flows does not
involve the load balancer. For this reason, we recommend that you consider memory utilization as a parameter.
Restriction
Procedure
1. In the Model Configuration editor, double-click the channel that you want to configure.
2. Go to General tab. Select ProcessDirect from the Adapter Type dropdown.
3. Go to the Adapter Specific tab and enter the value for the field shown in the table below:
Parameter Description
Address URL of the target system that you are connecting to. For
example, /localiprequiresnew.
Note
It may or may not start with "/". It can contain alphanumeric characters and special characters such as underscore "_" or hyphen "-". You can also use simple expressions, for example, ${header.address}.
Remember
○ If the consumer has an <Escalated End Event> in the <Exception Sub-Process>, then in
case of exception in the consumer, the MPL status for the producer varies based on the following
cases:
○ If the producer integration flow starts with <Timer>, the MPL status for the consumer will be
Escalated and for the producer, it will be Completed.
○ If the producer integration flow starts with <HTTP> Sender, the MPL status for the consumer
will be Escalated and for the producer, it will be Failed.
○ The combination of <Iterating Splitter> and <Aggregator> in the producer integration
flow might generate an extra MPL (Aggregator MPL) due to the default behavior of Aggregator.
○ The <Send> component is incompatible with the ProcessDirect adapter: the component uses
asynchronous message exchange, whereas the adapter expects a response.
To learn more about the adapter, see blog on ProcessDirect Adapter in SAP Community .
Prerequisites
● You have configured the connections to an On-Premise repository if you have to import mapping from that
repository into the workspace.
Note
For more information on setting the connections to the repository, see the task 'Setting the
Connections to the External Repository' in Configuring the Tool Settings [page 19]. You can import
message mappings (.mmap) from ES Repository with server version 7.1 onwards, and operation
mapping from ES Repository with server version 7.3 EHP1 SP3 .
● You have imported the mappings to your local workspace from the On-Premise repository.
Note
For more information about how to import mappings from the ES Repository, see Importing SAP
NetWeaver PI Objects from On-Premise Repository [page 26].
Context
You perform this task to assign a mapping that is available in your local workspace.
Your local workspace can contain mappings that are already imported from an external repository, such as the
ES Repository, or you have obtained them from some other integration project.
You have to be cautious in distributing the mappings imported from ES Repository if they contain any
sensitive data. Although you securely import mappings from the repository by providing the user
credentials, copying the imported mappings from one Integration Project to another does not require any
authorization.
In an integration flow project, the src.main.resources.mapping package can contain message mapping
(.mmap), operation mapping (.opmap), XSL mapping and XSLT mapping.
Procedure
1. To add a mapping element to an integration flow model, perform the steps below:
a. In the Model Configuration editor, select the sequence flow within the pool.
b. From the context menu of the sequence flow, choose Add Message Transformers and then select
Mapping.
2. Select the mapping in the integration flow.
3. From the context menu, you can choose one of the mapping options: Assign Mapping, New Message
Mapping, Switch to XSLT Mapping or Switch to Operation Mapping
4. Open the Properties view.
5. If you want to assign mapping then execute the following substeps:
a. To select a mapping for the integration flow, choose Browse….
Note
You can assign an operation mapping only if the sender and receiver channels are configured with
SOAP adapters.
b. In the Choose Mapping dialog, choose the dropdown icon and select Local to obtain the mapping.
Note
Local option shows mapping of different types, such as message mapping (.mmap), operation
mapping (.opmap), XSL mapping and XSLT mapping, available within the current project.
c. Select a mapping.
d. Choose OK.
6. Note
XSLT Mapping version 1.1 supports functions to set headers and exchange properties. You can set a header and a property as shown in the example below.
Sample Code
<!-- declare the hci extension namespace as provided by your runtime -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:hci="...">
  <xsl:param name="exchange"/>
  <xsl:template match="/">
    <xsl:value-of select="hci:setHeader($exchange, 'myname', 'myvalue')"/>
    <xsl:value-of select="hci:setProperty($exchange, 'myname', 'myvalue')"/>
    <Test1/>
  </xsl:template>
</xsl:stylesheet>
Note
To assign xslt mapping for partner directory, the valid format for value of header name is
pd:<Partner ID>:<Parameter ID>:<Parameter Type>, where the parameter type is either
binary or string. For example, the correct header name is pd:SenderABC:SenderXSD:Binary.
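The pd:<Partner ID>:<Parameter ID>:<Parameter Type> format can be validated mechanically. The helper below is a hypothetical Python sketch for checking such header names; it is not an SAP API:

```python
def parse_pd_header(name):
    """Split a partner directory header of the form
    pd:<Partner ID>:<Parameter ID>:<Parameter Type> into its components.
    Raises ValueError for anything that does not match the format."""
    parts = name.split(":")
    if len(parts) != 4 or parts[0] != "pd" or parts[3].lower() not in ("binary", "string"):
        raise ValueError("not a valid partner directory header: " + name)
    return parts[1], parts[2], parts[3]

# The example header name from the documentation:
print(parse_pd_header("pd:SenderABC:SenderXSD:Binary"))
# → ('SenderABC', 'SenderXSD', 'Binary')
```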
You can also access message headers in the XSLT mapping. For example, if you want to access the header myHeader, you need the following expression:
<xsl:param name="myHeader"/>
d. Choose OK.
Note
○ To view or modify the definition of message mapping in the mapping editor, click Name in the
Properties view. If you have assigned an operation mapping, the navigation to the mapping editor is
not supported.
○ For XSLT mapping, you can select the output format of the message to be either string or bytes
from the Output Format field.
Restriction
Message mapping does not work if the integration flow project name contains whitespace.
Related Information
Context
You can also use content modifier to define local properties for storing additional data during message
processing. These properties can be further used in connectors and conditions.
Example
In SFTP Connector, the user can specify the receiver file name as - File_${property.count}.txt.
Procedure
2. If you want to add a content modifier in the integration flow, choose Add Message Transformers
Content Modifier from the context menu of a connection within the pool.
3. Select the added content modifier in the integration flow model to configure it.
4. To configure the content modifier with a saved configuration that is available as a template, choose Load
Element Template from the context menu of the Content Modifier element.
5. In the Properties view, select the Header tab.
6. In the Action column, you can create additional headers or set values for existing headers of messages
(by using constants, another header, an XPath, a property, an external parameter, or by forming an
expression).
a. To enter an XPath, select xpath in the Type column and browse for an XPath from the lookup in the
Value column.
Note
○ The Data Type column is mainly used for the XPath type. The data type can belong to any Java
class. An example of a data type for an XPath is java.lang.String.
○ If the XPath contains a namespace prefix, specify the association between the namespace and
the prefix on the Runtime Configuration tab page of the integration flow Properties view.
Note that data written into the message header at a certain processing step (for example, in a Content
Modifier or Script step) will also be part of the outbound message addressed to a receiver system.
Because of this, it is important to consider the following restriction regarding the header size if you use
an HTTP-based receiver adapter:
If the message header exceeds 8 KB, it might lead to situations in which the receiver cannot accept the
inbound call. (This is relevant for all HTTP-based receiver adapters.)
b. To enter a header, select header in the Type column, and browse for a header from the lookup in the
Value column.
○ This lookup dialog shows you a list of headers that you have specified:
○ In the Header tab page of Content Modifier flow steps
○ In the Allowed Headers field of the Runtime Configuration tab page
c. To enter a property, select property in the Type column and define a value based on the selected value
type.
d. To enter an external parameter, select external parameter in the Type column and define the parameter
key. For more information, see Externalizing Parameters of Integration Flow [page 373].
Note
Defining multiple parameters in the same field or column is not supported for tables.
7. If you are using version 1.2, you can specify the name or an expression for the header/property to delete an
existing header/property through the Action column, by executing the following substeps:
a. If you click the icon in the Name field, the Specify Header or Expression dialog appears.
b. If you want to delete a header using the header name, enter the required name in the Header field.
c. If you want to delete headers using an expression, enter the required name or expression in the
Expression field.
○ In Include field you can enter name or expression of the header you want to delete. For example, if
you enter 'test*', then all headers starting with test are deleted.
○ In Exclude field you can enter name or expression of the header you do not want to delete. For
example, if you enter 'test*', then all headers starting with test are not deleted.
8. In the same way as for headers, in the Action column, you can either create additional properties, set
values for existing properties of messages (by using constants, another header, an XPath, a property, an
external parameter, or by forming an expression), or delete properties as explained above, by selecting the
Property tab in the Properties view.
Note
○ Special characters like {} and [] are not allowed in the header name for a header of type Constant.
○ Header values can be lost following the external system call, whereas properties will be available
for the complete message execution.
○ During outbound communication, headers will be handed over to all message receivers and
integration flow steps, whereas properties will remain within the integration flow and will not be
handed over to receivers.
○ You must specify the name of the property you want to delete.
Note
○ When you save the configuration of the Content Modifier element as a template, the tool stores the
template in the workspace as <ElementTemplateName>.fst.
○ If you add a content modifier without a header, body, or property, you cannot trace the element.
<order>
<book>
<BookID>A1000</BookID>
<Count>5</Count>
</book>
</order>
In the Body tab of the Content Modifier, you specify the content expected in the outgoing message. Keep a
placeholder for the header information to modify the content as shown below:
<invoice>
<vendor>${header.vendor}</vendor>
${in.body}
<deliverydate>${header.date}</deliverydate>
</invoice>
<invoice>
<vendor>ABC Corp</vendor>
<order>
<book>
<BookID>A1000</BookID>
<Count>5</Count>
</book>
</order>
<deliverydate>25062013</deliverydate>
</invoice>
Related Information
An integration flow is a BPMN (Business Process Model and Notation)-like model that allows you to specify
how a message is to be processed on a tenant.
To interpret an integration flow model at runtime, it is transformed into an XML structure that is compatible
with Apache Camel (http://camel.apache.org ), an Open Source integration framework for Java that
supports the mediation and routing of messages of any format.
The only prerequisite for a message that is to be processed by the Camel framework is that it comprises the
following elements:
● Headers
Contain information related to the message, for example, information for addressing the message sender.
● Attachments
Contain optional data that is to be attached to the message.
● Body
Contains the payload (usually with the business-related data) to be transferred in the message.
For as long as a message is being processed, a data container (referred to as Exchange) is available. This
container is used to store additional data besides the message that is to be processed. An Exchange can be
seen as an abstraction of a message exchange process as it is executed by the Camel framework. An Exchange
is identified uniquely by an Exchange ID. In the Properties area of the Exchange, additional data can be stored
temporarily during message processing. This data is available for the runtime during the whole duration of the
message exchange.
You can use the Content Modifier step to modify a message by adding additional data to it.
More precisely, this step type allows you to modify the content of the following three data containers during
message processing:
● Message Header
You can add headers to the message, and edit and delete headers.
● Message Body
You can modify the message body part.
● Exchange Property
You can write data to the message exchange, and edit and delete the properties.
For example, you can retrieve the value of a particular element of the payload of an inbound message and write
this value to the header of the message (to make it available for subsequent processing steps).
You need to specify additional parameters in the Content Modifier step to tell the integration runtime how
exactly to access the data from the incoming message (which is to be written to one of the three data
containers above).
Here's a simple example to show how this works: Let's say you want to write the value of the element
CustomerNumber from an inbound XML message to the message header, to make it available for subsequent
process steps.
On the Message Header tab of the Content Modifier, add a new entry. Specify XPath as the Type (because you
want to address the CustomerNumber element in an incoming XML message). For Value, enter the exact XPath
expression that is to be used to address this element (for example, /Order/Customer/CustomerNumber). In an
additional field, you now need to specify the data format expected for the content of the CustomerNumber
element (for example, java.lang.String).
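Outside the Web UI, the effect of such an XPath entry can be mimicked with any XPath-capable library. The following sketch uses Python's standard library and a made-up sample payload; the header dictionary stands in for the message header container and is not a product API:

```python
import xml.etree.ElementTree as ET

# Hypothetical inbound payload matching /Order/Customer/CustomerNumber
message_body = """<Order>
  <Customer>
    <CustomerNumber>10001234</CustomerNumber>
  </Customer>
</Order>"""

root = ET.fromstring(message_body)
# The path is evaluated relative to the root element <Order>
value = root.findtext("Customer/CustomerNumber")

# The extracted value is written to the message header; in the Content
# Modifier entry its data type would be java.lang.String
headers = {"CustomerNumber": value}
print(headers)  # → {'CustomerNumber': '10001234'}
```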
Example
The following example shows how to modify both the header and body data container of a message using the
Content Modifier step.
<order>
<book>
<BookID>A1000</BookID>
<Count>5</Count>
</book>
</order>
On the Header tab of the Content Modifier, enter the following to write constant values to the message header:
On the Body tab, keep placeholders for the header information specified in the first Content Modifier step ($
{header.vendor} and ${header.date}) to modify the content as shown below. Additionally, use a
placeholder ${in.body} for the incoming message.
<invoice>
<vendor>${header.vendor}</vendor>
${in.body}
<deliverydate>${header.date}</deliverydate>
</invoice>
<invoice>
<vendor>ABC Corp</vendor>
<order>
<book>
<BookID>A1000</BookID>
<Count>5</Count>
</book>
</order>
<deliverydate>25062013</deliverydate>
</invoice>
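The placeholder substitution performed on the Body tab can be approximated as plain string replacement. This sketch is illustrative only; it ignores the full Camel simple-expression language and uses the header names and payload from the example above:

```python
# Incoming message body (${in.body})
incoming_body = """<order>
  <book>
    <BookID>A1000</BookID>
    <Count>5</Count>
  </book>
</order>"""

# Headers written by the first Content Modifier step
headers = {"vendor": "ABC Corp", "date": "25062013"}

# Body template configured on the Body tab
template = """<invoice>
  <vendor>${header.vendor}</vendor>
  ${in.body}
  <deliverydate>${header.date}</deliverydate>
</invoice>"""

result = (template
          .replace("${in.body}", incoming_body)
          .replace("${header.vendor}", headers["vendor"])
          .replace("${header.date}", headers["date"]))
print(result)
```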
You use this task to encode messages using an encoding scheme to secure any sensitive message content
during transfer over the network.
Procedure
2. If you did not select the encoder pattern when creating the integration flow, choose Add Message
Transformers Encoder from the context menu of a connection within the pool.
3. Select the encoder in the integration flow model to configure it.
4. In the Properties view, select the encoding scheme from the dropdown list. You can select one of the
following encoding schemes:
○ Base64 Encode: Encodes the message content using base64.
○ GZIP Compress: Compresses the message content using GNU zip (GZIP).
○ ZIP Compress: Compresses the message content using zip (only zip archives with a single entry
supported).
○ MIME Multipart Encode: Transforms the message content into a MIME multipart message.
If you want to send a message with attachments, but the protocol (for example, HTTP or SFTP) does
not support attachments, you can send the message as a MIME multipart instead.
Note
Note that SAP Cloud Platform Integration does not support the processing of MIME multipart
messages that contain multiple attachments with the same file name.
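The GZIP Compress and ZIP Compress schemes behave like standard gzip and zip tooling applied to the message body. A minimal illustration in Python (the entry name "payload" is illustrative, not the name the encoder actually uses):

```python
import gzip
import io
import zipfile

body = b"<message>Input for encoder</message>"

# GZIP Compress: the body becomes a gzip stream.
gz = gzip.compress(body)
assert gzip.decompress(gz) == body

# ZIP Compress: only zip archives with a single entry are supported,
# so the body is stored as exactly one entry.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("payload", body)
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
    assert zf.read("payload") == body
```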
Example
A message with the header Content-Type "text/plain" is sent to a MIME multipart encoder. The Add Multipart
Header Inline functionality is activated.
Sample Code
Message Body
Message-ID: <...>
Body text
------=_Part_0_1134128170.1447659361365
Content-Type: application/binary
Content-Transfer-Encoding: binary
Content-Disposition: attachment; filename="Attachment File Name"
[binary content]
------=_Part_0_1134128170.1447659361365
Assume the following input message:
<message>
Input for encoder
</message>
If you select Base64 Encode, the output message would look like this:
PG1lc3NhZ2U+DQoJSW5wdXQgZm9yIGVuY29kZXINCjwvbWVzc2FnZT4NCg==
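The Base64 result above can be reproduced with any standard Base64 implementation. For example, in Python (note that the Windows-style line endings \r\n and the tab indent are part of the sample message):

```python
import base64

# The sample message as raw text, including \r\n line endings and a tab.
payload = "<message>\r\n\tInput for encoder\r\n</message>\r\n"
encoded = base64.b64encode(payload.encode("utf-8")).decode("ascii")
print(encoded)
# PG1lc3NhZ2U+DQoJSW5wdXQgZm9yIGVuY29kZXINCjwvbWVzc2FnZT4NCg==
```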
Related Information
A Multipurpose Internet Mail Extensions (MIME) multipart message allows you to combine different kinds of
content in one message (for example, plain text and attachments).
To mention a use case, if you want to send a message with attachments, but the protocol (for example, HTTP or
SFTP) does not support attachments, you can send the message as a MIME multipart instead.
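To illustrate what a multipart/mixed message looks like, here is a minimal sketch using Python's standard email library; the part contents and the attachment file name are illustrative, not what the Encoder step actually produces:

```python
from email.mime.application import MIMEApplication
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Body and attachment become separate parts of one multipart/mixed message.
msg = MIMEMultipart("mixed")
msg.attach(MIMEText("Body text", "plain"))
att = MIMEApplication(b"binary content")
att.add_header("Content-Disposition", "attachment", filename="data.bin")
msg.attach(att)
print(msg.as_string())
```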
With a multipart subtype you can specify how the different content types are combined as MIME multipart
message. The property Multipart Subtype in the Encoder step allows you to specify the Content-Type property
of the MIME message. For more information on the different options for Multipart Subtype, refer to the general
definition of the Multipart Content-Type.
An input message for the MIME Multipart Encoder step doesn’t have to be composed in a specific way.
An inbound message for a MIME Multipart Decoder step has to be a MIME multipart message. The multipart
headers can be stored either as Camel headers or as part of the message body.
You have the option to dynamically (based on the content of the processed message) add custom headers to a
MIME multipart message. To enable this option, activate Add Multipart Header Inline. In that case, the option
Include Headers is displayed.
You can now enter regular expressions for the headers which are to be added dynamically. With such regular
expressions (regex), you can define placeholders for the custom headers:
Tip
Example:
When you enter for Include Headers the string (x-.*|myAdditionalHeader), all headers that start with
x- and the header myAdditionalHeader are added dynamically.
The following table summarizes how the Encoder step transforms the message depending on whether you
select or deselect the option Add Multipart Header Inline.
Selected: The Encoder transforms the inbound message into a new message whose body is a MIME multipart
message with headers. Body and attachments (if available) of the inbound message are added as separate
parts of the multipart message. The attachments are removed from the resulting message. Note that the
message is always transformed into a MIME multipart message, regardless of whether it contains attachments
or not.
Deselected:
● The inbound message has attachments: The Encoder transforms the message body and attachments of the
inbound message into a MIME multipart message. The headers of the multipart message are added as
Camel headers. The message body is replaced by the rest of the message.
● The inbound message has no attachments: The Encoder does not change the inbound message.
The following figures illustrate how the property Add Multipart Header Inline influences the processing of the
message.
The following table summarizes how the Decoder step transforms the message depending on whether you
select or deselect the option Multipart Headers Inline.
Selected: The Decoder transforms the first part of the multipart message into the message body of the
resulting message, and the following parts (if available) are transformed into attachments of the resulting
message. In case the inbound message is, other than expected, not a MIME multipart message with inline
headers, the complete message body is interpreted as a preamble of the MIME multipart, and the resulting
message is empty.
Deselected:
● If the inbound message either doesn't contain the multipart headers as Camel headers or the Content-Type
is not multipart, the Decoder step doesn't change the inbound message.
● In all other cases, the headers of the inbound message are used as headers of the multipart message (and
deleted). The message body of the resulting message is built up out of those parts that are contained in the
message body (and, if available, out of the attachments).
You use this task to decode the message received over the network to retrieve original data.
Procedure
2. If you want to add a decoder in the integration flow, choose Add > Message Transformers > Decoder
from the context menu of a connection within the pool.
3. Select the newly added decoder in the integration flow model to configure it.
Note
If this option is not selected and the content type camel-header is not set to a multipart type, no
decoding takes place.
If this option is selected and the body of the message is not a MIME multipart (with MIME headers
in the body), the message is handled as a MIME comment and the body is empty afterwards.
Note
Note that SAP Cloud Platform Integration does not support the processing of MIME multipart
messages that contain multiple attachments with the same file name.
Example
Let us suppose that an input message to the decoder is a message encoded in Base64 that looks like this:
PG1lc3NhZ2U+DQoJSW5wdXQgZm9yIGVuY29kZXINCjwvbWVzc2FnZT4NCg==
After decoding, the message looks like this:
<message>
Input for encoder
</message>
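The decoding step reverses the Base64 encoding shown earlier. For example, in Python:

```python
import base64

encoded = "PG1lc3NhZ2U+DQoJSW5wdXQgZm9yIGVuY29kZXINCjwvbWVzc2FnZT4NCg=="
# Decoding restores the original message, including its \r\n line endings.
decoded = base64.b64decode(encoded).decode("utf-8")
print(decoded)
```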
Context
You use this task if you want to filter information by extracting a specific node from the incoming message.
2. If you want to add a content filter in the integration flow, choose Add > Message Transformers > Content
Filter from the context menu of a connection within the pool.
3. Select the added content filter in the integration flow model to configure it.
4. In the Properties view, enter an XPath to extract a specific message part from the body. For example, in the
XPath Expression field, enter /ns0:MessageBulk/Message/MessageContent/text().
Tip
To quickly specify the XPath in the content filter, select Define Content Filter from the context menu of
the element.
5. In the Properties view, select a particular value for the output from the Value Type dropdown menu.
Example
<Message>
<orders>
<order>
<clientId>I0001</clientId>
<count>100</count>
</order>
<order>
<clientId>I0002</clientId>
<count>10</count>
</order>
</orders>
</Message>
If you enter the XPath /Message/orders/order/count/text(), the output of the Content Filter would be:
10010
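The output 10010 is the concatenation of the two matched text nodes (100 and 10). This can be reproduced in Python; note that xml.etree has no text() step, so the sketch below emulates the XPath by joining the text of all matches:

```python
import xml.etree.ElementTree as ET

xml = """<Message><orders>
<order><clientId>I0001</clientId><count>100</count></order>
<order><clientId>I0002</clientId><count>10</count></order>
</orders></Message>"""

# Emulate /Message/orders/order/count/text(): select all count elements
# in document order and concatenate their text content.
root = ET.fromstring(xml)
result = "".join(e.text for e in root.findall("./orders/order/count"))
print(result)
# 10010
```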
This integration flow step is used to calculate a digest of the payload or parts of it and store the result in a
message header.
Context
In detail, the Message Digest integration flow step transforms a message into a canonical XML document. From
this document, a digest (hash value) is calculated and written to the message header.
Note
Canonicalization transforms an XML document into a form (the canonical form) that makes it possible to
compare it with other XML documents. With the Message Digest integration flow step, you can apply
canonicalization to a message (or to parts of a message), calculate a digest out of the transformed
message, and add the digest to the message header.
In simple terms, canonicalization skips non-significant elements from an XML document. To give some
examples, the following changes are applied during the canonicalization of an XML document: Unification
of quotation marks and blanks, or encoding of empty elements as start/end pairs.
You also have the option to define a filter to apply canonicalization only to a sub-set of the message.
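In essence, the step hashes the (canonicalized) payload and stores the result in a named header. A minimal sketch in Python; canonicalization itself is omitted here, and the target header name is illustrative:

```python
import hashlib

# Hash the payload and write the digest to the configured target header.
payload = b"<order><book><BookID>A1000</BookID></book></order>"
headers = {}
headers["MyDigestHeader"] = hashlib.sha256(payload).hexdigest()  # header name is illustrative
print(headers["MyDigestHeader"])
```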
Procedure
Option: Description
Filter (XPath): If you only want to transform part of the message, enter an XPath expression to specify the part
(optional attribute).
Digest Algorithm: Select the hash algorithm to be used to calculate the digest.
You can choose between the following hash algorithms:
○ SHA-1
○ SHA-256
○ SHA-384
○ SHA-512
○ MD5
Target Header Name: Enter the name of the target header element that is to contain the hash value (digest).
This is a mandatory attribute.
Context
The converter element enables you to transform an input message in one format to another. You can also
specify some conditions that are used as reference to transform the message.
The CSV to XML converter transforms a message in CSV format to XML format. Conversely, the XML to CSV
converter transforms a message in XML format to CSV format.
Restriction
You cannot use XML to CSV converter to convert complex XML files to CSV format.
Procedure
3. Drag the converter from the palette to the integration process and define the message path.
Note
Alternatively, you can also add the converter from the context menu. Choose Message Transformers >
Converter.
4. If you want to use CSV to XML Converter, perform the following substeps.
a. Select the Converter element and access Properties tab page.
b. Provide values in the fields based on the descriptions in the table.
Field: Description
Name: Provide a name for the converter step. This field is optional.
XML Schema: Choose Browse and select the file path to the XML schema file that is used as the basis for
message transformation. The XML file schema format is used as the basis for creation of the payload. This file
has to be in the package source.main.resources.xsd.
Path to Target Element in XSD: XPath in the XML schema file where the content from CSV has to be placed.
Record Marker in CSV: The corresponding record in the CSV file that has to be considered for conversion. This
entry is the first text in every new record of the CSV.
Note
If this value is not specified, then all the records would be considered for conversion.
c. In the Field Separator in CSV field, choose the character that you want to use as the field separator in
CSV file from the dropdown list.
Note
If you want to use a field separator that is not available in the dropdown list, manually enter the
character in Field Separator in CSV field.
d. If you want to exclude headers in the first line of CSV file for conversion, select Exclude First Line
Header checkbox.
e. Save changes.
5. If you want to use the XML to CSV Converter, perform the following substeps:
a. In the context menu of the converter, choose Switch to XML to CSV Converter.
b. Access the Properties tab page.
c. In the Path to Source Element field, enter the path of the source element in the XML file.
d. In the Field Separator in CSV field, choose the character that you want to use as the field separator in
the CSV file from the dropdown list.
Note
If you want to use a field separator that is not available in the dropdown list, manually enter the
character in the Field Separator in CSV field.
e. If you want to use the field names in the XML file as headers in CSV file, select the Field Names as
Headers in CSV checkbox.
f. If you want to include the parent element of the XML file in the CSV file, in the Advanced tab page
select Include Parent Element checkbox.
g. If you want to include the attribute values of the XML file in the CSV file, in the Advanced tab page
select Include Attribute Values checkbox.
h. Save changes.
Example
COMPETENCY,
Role Specific,
Computer Skills,
"Skilled in the use of computers, adapts to new technology, keeps abreast of
changes, learns new programs quickly, uses computers to improve productivity."
Value for the field XPath of Record Identifier in XSD is given as CompetencyList\Competency.
After it is processed by the CSV to XML converter, the XML output appears in the following format:
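As an illustration of the general CSV-to-XML principle, here is a rough sketch in Python; the element names are illustrative and do not follow the converter's XSD-driven output format:

```python
import csv
import io
import xml.etree.ElementTree as ET

# Each CSV record becomes one repeating element; each column becomes a child.
data = "clientId,count\nI0001,100\nI0002,10\n"
root = ET.Element("orders")
for row in csv.DictReader(io.StringIO(data)):
    order = ET.SubElement(root, "order")
    for name, value in row.items():
        ET.SubElement(order, name).text = value
xml_out = ET.tostring(root, encoding="unicode")
print(xml_out)
```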
The JSON-to-XML converter enables you to transform messages in JSON format to XML format.
Procedure
Option: Description
JSON Prefix (only if the option Use Namespace Mapping is selected): Enter the mapping of the JSON prefix to
the XML namespace. The JSON namespace/prefix must begin with a letter and can contain a-z, A-Z, and 0-9.
JSON Prefix Separator: Enter the JSON prefix separator to be used to separate the JSON prefix from the local
part. The value used must not be used in the JSON prefix or local name.
To ensure a successful conversion of your data, you should familiarize yourself with the conversion rules.
The conversion from JSON format to XML format follows the following rules:
● The value of a complex element (having attributes for example) is represented by a separate member
"$":"value".
● An element with multiple child elements of the same name is represented by an array. This also holds if
other children with a different name reside between the children with the same name.
Example: <root><childA>A1</childA><childB>B</childB><childA>A2</childA></root> is
transformed in the non-streaming case to {"root":{"childA":["A1","A2"],"childB":"B"}},
which means that the order of the children is not preserved.
In the streaming case, the result is: {"root":{"childA":["A1"],"childB":"B","childA":["A2"]}},
which means that a non-valid JSON document is created because the member "childA" appears twice.
● An element with no characters or child elements is represented by "element" : ""
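The array rule above can be sketched in Python; this is a simplified illustration of the non-streaming conversion, ignoring attributes and namespaces:

```python
import json
import xml.etree.ElementTree as ET

def json_to_xml(name, value):
    # Objects become elements with children; arrays become repeated
    # child elements of the same name; scalars become text content.
    elem = ET.Element(name)
    if isinstance(value, dict):
        for key, child in value.items():
            items = child if isinstance(child, list) else [child]
            for item in items:
                elem.append(json_to_xml(key, item))
    else:
        elem.text = "" if value is None else str(value)
    return elem

doc = json.loads('{"root":{"childA":["A1","A2"],"childB":"B"}}')
(name, value), = doc.items()  # the root object must have exactly one member
converted = ET.tostring(json_to_xml(name, value), encoding="unicode")
print(converted)
# <root><childA>A1</childA><childA>A2</childA><childB>B</childB></root>
```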
Related Information
● The XML element and attribute names must not contain any delimiter characters, because the delimiter is
used in JSON to separate the prefix from the element name.
● Elements with mixed content are not supported.
● Conversion of "@attr":null to XML is not supported. You get a NullPointerException; as of cluster version
1.21 you get a JsonXmlException.
● JSON member names must not contain in their local or prefix part any characters that are not allowed for
XML element/attribute names (for example, space or colon (':') are not allowed). XML element/attribute
names are of type NCName (see http://www.w3.org/TR/1999/REC-xml-names-19990114/#NT-NCName).
● JSON texts that start with an array (for example, [{"a":{"b":"bvalue"}}]) are not supported. The
JSON text must start with a JSON object such as {"a":{"b":"bvalue"}}. A
javax.xml.stream.XMLStreamException is thrown. As of cluster version 1.21, a JsonXmlException is thrown.
● The root JSON object must contain exactly one member. If the root JSON object contains more than one
member (for example, {"a":"avalue","b":"bvalue"}), a javax.xml.stream.XMLStreamException is
thrown. If the root JSON object is empty ({}), a javax.xml.stream.XMLStreamException is thrown. As of
cluster version 1.21, a JsonXmlException or IllegalStateException (for {}) is thrown.
If you need to convert a JSON file that doesn't fulfil this requirement, you can do the following:
Add a content modifier before the JSON to XML converter that changes the message body. In the entry
field in tab Message Body, enter:
{"root": ${in.body}}
● If no XML namespace is defined for a JSON prefix, the full member name with the prefix and JSON
delimiter is set as the element name: "p:A" -> <p:A>.
Related Information
Whether or not you select the option Use Namespace Mapping leads to different transformation results.
Namespace Mapping
{"abc:A":{"xyz:B":{"@xyz:attr1":"1","@attr2":"2","$":"valueB"},"C":
["valueC1","valueC2"],"D":"","E":"valueE"}}
abc http://com.sap/abc
xyz http://com.sap/xyz
No Namespace Mapping
{"abc.A":{"xyz.B":{"@xyz.attr1":"1","@attr2":"2","$":"valueB"},"C":
["valueC1","valueC2"],"D":"","E":"valueE"}}
The XML-to-JSON converter enables you to transform messages in XML format to JSON format.
Procedure
Option: Description
JSON Output Encoding: Enter the JSON output encoding. The default value is from header or property. If you
select from header or property, the converter tries to read the encoding from the message header or exchange
property CamelCharsetName. If there is no value defined, UTF-8 is used.
XML Namespace (only if the option Namespace Mapping is selected): Enter the mapping of the XML
namespace to the JSON prefix.
JSON Prefix Separator (only if the option Namespace Mapping is selected): Enter the JSON prefix separator to
be used to separate the JSON prefix from the local part. The value used must not be used in the JSON prefix or
local name.
Suppress JSON Root Element: Choose this option to create the JSON message without the root element tag.
You can specify whether the whole XML document or only specified XML elements are to be presented by
JSON arrays.
Note
A JSON object is an unordered set of name/value pairs that begins with { and ends with }. Each name is
followed by : and the name/value pairs are separated by ,.
Related Information
To ensure a successful conversion from XML format to JSON format, you should make yourself familiar with
the conversion rules.
The conversion from XML format to JSON format follows the following rules:
● An element is represented as JSON member whose name is the concatenation of JSON prefix
corresponding to the XML namespace, JSON delimiter, and the element name. If the element has no
namespace, no prefix and JSON delimiter is added to the JSON name.
● An attribute is represented as JSON member whose name is the concatenation of '@' , the JSON prefix
corresponding to the XML namespace, JSON delimiter, and the attribute name. If the attribute has no
namespace, no prefix and JSON delimiter are added to the JSON name.
● An element with no characters or child elements is represented by "element" : ""
● An element with multiple child elements of the same name is represented by an array. This also holds if
other children with a different name reside between the children with the same name.
Related Information
To ensure a successful conversion from XML to JSON format, you have to know the limitations of this
conversion.
● The XML element and attribute names must not contain any delimiter characters, because the delimiter is
used in JSON to separate the prefix from the element name.
● Elements with mixed content are not supported.
● XML comments (<!-- comment -->) are not represented in the JSON document; they are ignored.
Related Information
The individual tags of an XML document are processed consecutively, irrespective of where in the overall
structure the tag occurs and how often (multiplicity). This means that during the streaming process the
converter cannot know if an element occurs in the structure more than once. In other words, during the
streaming process the object model that reflects the overall structure of the XML document (and, therefore,
also all information that can only be derived from the object model, like the multiplicity of elements) is not in
place. This is different to the non-streaming case, where the converter can calculate the multiplicity of the XML
elements from the object model of the complete XML document. The multiplicity is needed to create a correct
JSON document. Elements whose multiplicity is greater than one must be transformed to a JSON member
with an array. For example, you may think that for the XML document <root><B>b1</B><B>b2</B></root>,
you create the JSON document {"root":{"B":"b1","B":"b2"}}. However, this JSON document is invalid,
because the member name "B" occurs twice on the same hierarchy level.
To illustrate this behavior, let’s consider how the following simple XML structure has to be converted to JSON:
<root>
<A>a</A>
<B>b1</B>
<B>b2</B>
<C>c</C>
</root>
Without streaming, the converter would produce the following JSON structure:
{"root":{"A":"a","B":["b1","b2"],"C":"c"}}
As expected, the XML element root/B would transform into a JSON member with an array as value, where the
array has two values (b1 and b2) – according to the multiplicity of root/B. Note that a JSON array is indicated
by the following type of brackets: [ … ].
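The non-streaming multiplicity rule above can be sketched in Python; this is a simplified illustration that ignores attributes and namespaces:

```python
import json
import xml.etree.ElementTree as ET

def xml_to_json(elem):
    # Leaf elements become their text; elements that occur more than once
    # under the same parent are collected into a JSON array.
    if len(elem) == 0:
        return elem.text or ""
    result = {}
    for child in elem:
        if child.tag in result:
            if not isinstance(result[child.tag], list):
                result[child.tag] = [result[child.tag]]
            result[child.tag].append(xml_to_json(child))
        else:
            result[child.tag] = xml_to_json(child)
    return result

root = ET.fromstring("<root><A>a</A><B>b1</B><B>b2</B><C>c</C></root>")
out = json.dumps({root.tag: xml_to_json(root)}, separators=(",", ":"))
print(out)
# {"root":{"A":"a","B":["b1","b2"],"C":"c"}}
```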
With streaming with all elements to JSON arrays, the converter would produce the following JSON structure:
{"root":[{"A":["a"],"B":["b1","b2"],"C":["c"]}]}
All XML elements are transformed into members with a JSON array as value.
With streaming and specific elements as arrays (where root/A and root/B are specified), the converter would
produce the following JSON structure:
{"root":{"A":["a"],"B":["b1","b2"],"C":"c"}}
An array is produced only for the XML elements root/A and root/B, but not for root/C.
With streaming and specific elements as arrays (where only root/A is specified), the converter would produce
the following invalid JSON structure:
{"root":{"A":["a"],"B":"b1","B":"b2","C":"c"}}
Whether or not you select the option Use Namespace Mapping leads to different transformation results.
Example
http://com.sap/abc abc
http://com.sap/xyz xyz
{"abc:A":{"xyz:B":{"@xyz:attr1":"1","@attr2":"2","$":"valueB"},"C":
["valueC1","valueC2"],"D":"","E":"valueE"}}
Note
Example
{"A":{"B":{"@attr1":"1","@attr2":"2","$":"valueB"},"C":
["valueC1","valueC2"],"D":"","E":"valueE"}}
Examples and Special Cases of JSON Message without Root Element Tag
Example
The following example shows the transformation from XML to JSON (option Suppress JSON Root Element is
selected).
<root>
<A>a</A>
<B>b</B>
<C>c</C>
<D>d</D>
</root>
{
"A": "a",
"B": "b",
"C": "c",
"D": "d"
}
Special Cases
<root>v</root>
"v"
<root></root>
""
Input XML Message: Root Element with Simple Value (options Suppress JSON Root Element, Streaming All are
selected)
<root><A>a</A><B><C>c</C></B></root>
{"A":["a"],"B":[{"C":["c"]}]}
with dropRootElement=false
{"root":[{"A":["a"],"B":[{"C":["c"]}]}]}
The EDI to XML converter enables you to transform messages in EDI format to XML format. You can convert
EDIFACT and ASC-X12 format into XML format.
Context
Note
Contact the SAP Cloud Platform Integration Team to obtain EDI related XSD files.
2. Choose Add Message Transformers Converter from the context menu of a connection within the
pool.
3. Choose Switch to EDI to XML Converter from the context menu.
4. On the EDI to XML Converter properties tab page, define the following parameters to convert the EDI data
format to XML data format.
Field Description
Note
○ You can add XSD files to the integration flow. For more details, refer to the topic Validating Message
Payload against XML Schema in the developer's guide.
○ The file name of the XML schema for EDIFACT should have the following format:
UN-EDIFACT_ORDERS_D96A.xsd
The file name comprises the following three parts separated by '_':
○ First part "UN-EDIFACT" refers to the EDI standard with organization name. This value is fixed and
cannot be customised.
○ Second part "ORDERS" refers to the message type.
○ Third part "D96A" refers to the version.
○ The file name of the XML schema for ASC-X12 should have the following format:
ASC-X12_810_004010.xsd
The file name comprises the following three parts separated by '_':
○ First part "ASC-X12" refers to the ASC-X12 standard with organization name. This value is fixed and
cannot be customised.
○ Second part "810" refers to the message type.
○ Third part "004010" refers to the version.
○ The above-mentioned values should match the schema content.
Note
This header name is fetched from the Camel header. The header is added in a Script element that is placed
before the converter element. You can add a value for this header in the Script element.
Note
○ Any EDIFACT message is an interchange. An interchange can have multiple groups. And each
group consists of message types. For EDIFACT message, the EDI elements in SAP Cloud Platform
Integration support only 1 message type per interchange but does not support any group segment
(GS) per interchange segment.
○ Any ASC-X12 message is an interchange. An interchange can have multiple groups. And each
group consists of transaction sets. For ASC-X12 message, the EDI elements in SAP Cloud Platform
Integration support only 1 group segment (GS) per interchange segment and only 1 transaction set
(ST) per group segment.
○ SAP Cloud Platform Integration does not support repetition characters. Repetition character is a
single character which separates the instances of a repeating data element. For example, ^ (caret
sign) is a repetition character.
Note
EDI_Document_Number header contains document number for the incoming EDI file.
Example
● The following example shows the transformation from EDI to XML format of EDIFACT message.
Input sample EDIFACT EDI Message
UNA:+.? '
UNB+UNOC:3+SENDERABC:14:XXXX+ReceiverXYZ:14:YYYYY+150831:1530+1+HELLO WORLD++A
+++1'
UNH+1+ORDERS:D:96A:UN++2'
BGM+220+MY_ID+9+NA'
DTM+137:201507311500:203'
DTM+2:201508010930:203'
● The following example shows the transformation from EDI to XML format of ASC-X12 message.
Input sample ASC-X12 EDI Message
Sample Code
The XML to EDI converter transforms a message in XML format to EDI format. You can convert XML format
into EDIFACT and ASC-X12 format.
Context
Note
Contact the SAP Cloud Platform Integration Team to obtain EDI related XSD files.
Procedure
2. Choose Add Message Transformers Converter from the context menu of a connection within the
pool.
3. Choose Switch to XML to EDI Converter from the context menu.
4. Provide values in the fields based on the descriptions in the table:
Note
○ You can add XSD files to the integration flow. For more details, refer to the topic Validating Message
Payload against XML Schema.
○ The file name of the XML schema for ASC-X12 should have the format ASC-X12_810_004010.xsd. It
contains three parts separated by '_':
○ First part ASC-X12 refers to the ASC-X12 standard with organization name. This value is fixed and
cannot be customised.
○ Second part 810 refers to the message type.
○ Third part 004010 refers to the version.
○ The aforementioned values should match the schema content.
Note
This header name is fetched from the Camel header. The header is added in a Script element that is placed
before the converter element. You can add a value for this header in the Script element.
Note
You can also manually specify the
custom separator.
Note
○ You can add XSD files to the integration flow. For more details, refer to the topic Validating Message
Payload against XML Schema, in the developer's guide.
○ The file name of the XML schema for ASC-X12 should have the format ASC-X12_810_004010.xsd. It
contains three parts separated by '_':
○ First part ASC-X12 refers to the ASC-X12 standard with organization name. This value is fixed and
cannot be customised.
○ Second part 810 refers to the message type.
○ Third part 004010 refers to the version.
○ The aforementioned values should match the schema content.
○ During runtime, only XSDs from Integration Content Advisor (ICA) are supported.
Note
This header name is fetched from the Camel header. The header is added in a Script element that is placed
before the converter element. You can add a value for this header in the Script element.
Note
You can also manually specify the
custom separator.
Note
○ Any EDIFACT message is an interchange. An interchange can have multiple groups. And each
group consists of message types. For EDIFACT message, the EDI elements in SAP Cloud Platform
Integration support only 1 message type per interchange but does not support any group segment
(GS) per interchange segment.
Note
EDI_Document_Number header contains document number for the incoming EDI file.
Example
● The following example shows the transformation from XML to EDI format of EDIFACT message.
Input sample EDIFACT XML Message
UNA:+.? '
UNB+UNOC:3+SENDERABC:14:XXXX+ReceiverXYZ:14:YYYYY+150831:1530+1+HELLO WORLD++A
+++1'
UNH+1+ORDERS:D:96A:UN++2'
BGM+220+MY_ID+9+NA'
DTM+137:201507311500:203'
DTM+2:201508010930:203'
CNT+16:10'
UNT+5+1'
UNZ+1+1'
Context
You use this task to execute custom JavaScript or Groovy script for message processing. SAP Cloud Platform
Integration provides a Java API to support this use case.
Note
Note that data written into the message header at a certain processing step (for example, in a Content
Modifier or Script step) will also be part of the outbound message addressed to a receiver system. Because
of this, it is important to consider the following restriction regarding the header size if you use an HTTP-
based receiver adapter:
If the message header exceeds 8 KB, it might lead to situations in which the receiver cannot accept the inbound
call (relevant for all HTTP-based receiver adapters).
Note
Cloud Integration supports the XML Document Object Model (DOM) to process XML documents.
Note
Any application that parses XML data is prone to the risk of XML External Entity (XXE) Processing attacks.
To overcome this issue, you should take the following measures to protect integration flows that contain
Script steps (using Groovy script or Java Script) against XXE Processing attacks:
Procedure
The script requires a script function, which is executed at runtime. The function definition is as
follows:
import com.sap.gateway.ip.core.customdev.util.Message
def Message processData(Message message) {
def body = message.getBody()
//modify body
message.setBody(body + "enhancements")
//modify headers
def map = message.getHeaders()
def value = map.get("oldHeader");
message.setHeader("oldHeader", value + "modified")
message.setHeader("newHeader", "headerValue")
//modify properties
map = message.getProperties()
value = map.get("oldProperty")
message.setProperty("oldProperty", value + "modified")
message.setProperty("newProperty", "script")
return message
}
8. Add or modify Header, Body, and Property by using the interfaces below on the "message" object.
1. You can use the following interfaces for Header:
○ public java.util.Map<java.lang.String,java.lang.Object> getHeaders()
○ public void setHeaders(java.util.Map<java.lang.String,java.lang.Object> exchangeHeaders)
○ public void setHeader(java.lang.String name, java.lang.Object value)
2. You can use the following interfaces for Body:
○ public java.lang.Object getBody()
○ public void setBody(java.lang.Object exchangeBody)
○ public java.lang.Object getBody(java.lang.String fullyQualifiedClassName)
Note
SAP Cloud Platform Integration framework supports conversion of payload into the following
formats:
○ String
○ InputStream
○ byte[]
To convert the payload into String or InputStream use the following fullyQualifiedClassName:
○ java.lang.String
○ java.io.InputStream
To convert the payload into byte[] use the following:
○ For groovy script - def body = message.getBody(java.lang.String) as byte[]
○ For java script - var body = [message.getBody()]
3. You can use the following interfaces for Property:
Note
○ You can now use the content assist feature for Groovy script, which means that you can view a list of
existing methods of the Message class once you start typing the initial letters of the required method.
You can add the content assist JAR file to the integration project to use this feature. Refer to
https://tools.hana.ondemand.com/#cloudintegration to download the file.
○ You should not add or modify a Property name starting with sap.
○ If no Script Function is specified in the script flow step, the Script Function name will be
processData by default.
Caution
When converting parts of the message object, such as the body, headers, or properties, into a string
(as String) or into a byte array (as byte[]), consider that copies of the existing data are created, which
requires extra memory. This resource consumption may even exceed the memory size of the original
object if string conversion is executed.
Depending on the size of the object, byte[] or string conversion can expose the worker node to the risk
of an out-of-memory (OOM) failure. Decide consciously which part of the message object should be
converted.
9. If you want to access the security artifacts such as secure store and keystore that are deployed using the
deployment wizard, refer to the table below:
import com.sap.it.api.ITApiFactory;
import com.sap.it.api.securestore.SecureStoreService;
import com.sap.it.api.securestore.UserCredential;
import com.sap.it.api.securestore.exception.SecureStoreException;
Sample Code
def service = ITApiFactory.getApi(SecureStoreService.class, null);
import com.sap.it.api.ITApiFactory;
import com.sap.it.api.keystore.KeystoreService;
import com.sap.it.api.keystore.exception.KeystoreException;
Sample Code
10. If you want to access artificats like value mapping, number ranges refer to the table below:
import com.sap.it.api.ITApiFactory;
import
com.sap.it.api.keystore.KeystoreService
;
import
com.sap.it.api.keystore.exception.Keyst
oreException;
Sample Code
def service =
ITApiFactory.getApi(KeystoreServic
e.class, null);
def keymanagers =
service.getKeyManagers(alias);
……………………………
}
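As an illustration of the secure store access described above, a credential lookup might look as follows (a sketch; the alias my_credentials is a placeholder for a User Credentials artifact deployed on the tenant):
Sample Code
import com.sap.gateway.ip.core.customdev.util.Message;
import com.sap.it.api.ITApiFactory;
import com.sap.it.api.securestore.SecureStoreService;
import com.sap.it.api.securestore.UserCredential;

def Message processData(Message message) {
    def service = ITApiFactory.getApi(SecureStoreService.class, null);
    // my_credentials is a placeholder alias for a deployed credential
    UserCredential credential = service.getUserCredential("my_credentials");
    message.setHeader("user", credential.getUsername());
    // getPassword() returns a char[]
    message.setHeader("password", new String(credential.getPassword()));
    return message;
}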
Note
Next Steps
The following additional Java interfaces for the message processing log (MPL) can be addressed from the script step (either in Groovy script or JavaScript):
● MessageLogFactory
Provides access to the message processing log.
The interface MessageLogFactory can be used through the variable messageLogFactory in order to
retrieve an instance of MessageLog.
You can use the following methods in an instance of MessageLog in order to set a property of a given type in
the message processing log:
You can use the following method in an instance of MessageLog in order to add string to an attachment using
message processing log:
You can use the following method in an instance of MessageLog in order to add, set, get, remove headers to/
from an attachment using message processing log:
You can use the following method in an instance of MessageLog in order to add, set attachment objects as a
map using message processing log:
Use method addAttachmentAsString to add a longer, structured document to the message processing log
(MPL). Use method setStringProperty only for short strings (containing one or a few words).
If the value "null" is specified for the parameter mediaType, then the value "text/plain" is assumed as the media type.
As an example, the following code lines allow you to set a string property:
Groovy:
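A minimal sketch, assuming the messageLogFactory variable described above (the property name and value are illustrative):
Sample Code
def messageLog = messageLogFactory.getMessageLog(message);
if (messageLog != null) {
    // "Greeting" and its value are illustrative placeholders
    messageLog.setStringProperty("Greeting", "Hello World!");
}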
Caution
Note that the properties provided by the script step are displayed in alphabetical order in the resulting
message processing log (MPL). That means that the sequence of properties in the MPL does not
necessarily reflect the sequence applied in the script.
Related Information
Script example to identify any exceptions that arise when sending messages using the HTTP receiver.
You can use Groovy programming language (Groovy script) to identify any exceptions that arise when sending
messages using the HTTP receiver.
Sample Code
import com.sap.gateway.ip.core.customdev.util.Message;
def Message processData(Message message) {
    def messageLog = messageLogFactory.getMessageLog(message);
    // get the exception that occurred during message processing
    def ex = message.getProperty("CamelExceptionCaught");
    if (ex != null) {
        if (ex.getClass().getCanonicalName().equals("org.apache.camel.component.ahc.AhcOperationFailedException")) {
            // copy the http error response to an attachment of the message processing log
            messageLog.addAttachmentAsString("http.ResponseBody", ex.getResponseBody(), "text/plain");
            // copy the http error response to an iflow's property
            message.setProperty("http.ResponseBody", ex.getResponseBody());
            // copy the http error response to the message body
            message.setBody(ex.getResponseBody());
            // copy the value of the http error code (i.e. 500) to a property
            message.setProperty("http.StatusCode", ex.getStatusCode());
            // copy the value of the http error text (i.e. "Internal Server Error") to a property
            message.setProperty("http.StatusText", ex.getStatusText());
        }
    }
    return message;
}
Context
Procedure
You can use the transient data store to temporarily store messages.
Context
The transient data store (data store for short) supports four types of operations: Select, Get, Write, and Delete.
If you use a Write operation, you can store messages in the data store by configuring the data store name and a unique entry ID.
You can also specify the number of messages you fetch in each poll.
If you use a Select operation, you can transfer only overwritten data fetched from the data store.
Note
A data store operations step has to be triggered explicitly, for example, by a Timer event.
Next Steps
You can display the content of the data store in the Data Store Viewer of the Integration Operations feature.
Note
● In the case of Select operations, you can select multiple messages. The content of the selected
messages will appear in the following format:
<messages>
    <message id="<Entry ID>">
        <Content of selected message>
    </message>
    <message id="<Entry ID>">
        ……….
    </message>
</messages>
Related Information
Context
Procedure
Note
Adding data store operations enables Write operations by default. Click the Switch to option to choose
the Select operation.
Note
When retrieving a message body from a data store operation, only XML is supported.
Data Store Name: Specifies the name of the data store (no white spaces).
You can dynamically define the data store name based on a header or exchange
property. Use the format ${header.headername} to dynamically read the
name from a header, or ${property.propertyname} to read it from an
exchange property.
The maximum length allowed for the data store name is 40 characters. If you enter
a longer string, a validation error is raised. Note that this length restriction applies
to the value that is used for this parameter at runtime. Therefore, if you configure
this parameter dynamically, make sure that the expected header or property value
does not exceed this length restriction. Otherwise, a runtime error will be raised.
Visibility: Defines whether the data store is shared by all integration flows (deployed on the tenant) or only by one specific integration flow.
○ Global: Data store is shared across all integration flows deployed on the
tenant.
○ Integration Flow: Data store is used by one integration flow.
Number of Polled Messages: Specifies the maximum number of messages to be fetched from the data store within one poll (default is 1).
You can also configure this property in such a way that the number of fetched
messages per poll is dynamically evaluated at runtime based on the incoming
message. To do this, you can enter the following kind of expressions:
Delete on Completion: Select this option to delete a message from the data store after the message has been successfully processed.
Results
There is no in-order processing of data store entries. In other words, the sequence of data store entries
selected with a Select operation is random and non-deterministic. If you have restricted the number of
entries to be fetched (with the Number of Polled Messages attribute), the selection of retrieved entries is also
random.
Context
Procedure
Note
By default, the data store Write operation stores the payload only, not the headers.
Headers are only stored if they are primitive data types, serializable standard data types (such as strings), or lists/sets of these entities. Any other data types do remain part of the processed message, but are not considered during a data store operation and so cannot be retrieved from the data store in subsequent steps.
Attribute Description
Data Store Name: Specifies the name of the data store (no white spaces).
You can dynamically define the data store name based on a header or exchange
property. Use the format ${header.headername} to dynamically read the
name from a header, or ${property.propertyname} to read it from an
exchange property.
The maximum length allowed for the data store name is 40 characters. If you enter
a longer string, a validation error is raised. Note that this length restriction applies
to the value that is used for this parameter at runtime. Therefore, if you configure
this parameter dynamically, make sure that the expected header or property value
does not exceed this length restriction. Otherwise, a runtime error will be raised.
Visibility: Defines whether the data store is shared by all integration flows (deployed on the tenant) or only by one specific integration flow.
○ Global: Data store is shared across all integration flows deployed on the
tenant.
○ Integration Flow: Data store is used by one integration flow.
Entry ID: Specify an entry ID that will be stored together with the message content.
Details for the entry ID are read from the incoming message. You can enter the
following kinds of expressions:
In the case of Write operations, if the Entry ID is not defined, the data store
component uses the value of the SapDataStoreId header as the entry ID. If
this header is not defined, the data store component generates an entry ID and
sets the SapDataStoreId header with the generated value.
Tip
If you like the system to generate an entry ID for the data store operation,
remove the header SapDataStoreId before the data store write step and
leave the Entry ID field in the data store empty.
In the case of Delete and Get operations, you can explicitly define an Entry ID or
pass the header SapDataStoreId.
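The tip above can be sketched as a Groovy script step placed before the data store Write step (a sketch, assuming the standard script-step signature):
Sample Code
import com.sap.gateway.ip.core.customdev.util.Message;
def Message processData(Message message) {
    // remove SapDataStoreId so that the Write step (with an empty
    // Entry ID field) generates the entry ID itself
    def headers = message.getHeaders();
    headers.remove("SapDataStoreId");
    return message;
}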
Retention Threshold for Alerting (in d): Time period (in days) within which the messages have to be fetched before an alert is raised.
Expiration Period (in d): Number of days after which the stored messages are deleted (default is 90 days, maximum possible value is 180 days).
The minimum value of Expiration Period should be at least twice that of Retention Threshold for Alerting.
Encrypt Stored Message: Select this option to encrypt the message in the data store.
Overwrite Existing Message: Select this option to overwrite an existing message in the data store.
Trying to overwrite an existing entry without having this option selected will result
in a DuplicateEntryException.
Include Message Headers: Select this option to store message headers in addition to the payload.
Note
Camel* and SAP_* headers will not be stored.
Note
Only select this option if you want to read the message afterwards by
retrieving it with Get operations.
Caution
Consider that all headers will be stored, which may take up a lot of space.
Related Information
Context
Note
Adding data store operations enables Write operations by default. Click the Switch to option to choose
the Get operation.
Attribute Description
Data Store Name: Specifies the name of the data store (no white spaces).
You can dynamically define the data store name based on a header or exchange
property. Use the format ${header.headername} to dynamically read the
name from a header, or ${property.propertyname} to read it from an
exchange property.
The maximum length allowed for the data store name is 40 characters. If you enter
a longer string, a validation error is raised. Note that this length restriction applies
to the value that is used for this parameter at runtime. Therefore, if you configure
this parameter dynamically, make sure that the expected header or property value
does not exceed this length restriction. Otherwise, a runtime error will be raised.
Visibility: Defines whether the data store is shared by all integration flows (deployed on the tenant) or only by one specific integration flow.
○ Global: Data store is shared across all integration flows deployed on the
tenant.
○ Integration Flow: Data store is used by one integration flow.
Entry ID: Specify an entry ID that will be stored together with the message content.
Details for the entry ID are read from the incoming message. You can enter the
following kinds of expressions:
${xpath./CustomerReviews/CustomerReview/ProductId/
text()}
In the case of Write operations, if the Entry ID is not defined, the data store
component uses the value of the SapDataStoreId header as the entry ID. If
this header is not defined, the data store component generates an entry ID and
sets the SapDataStoreId header with the generated value.
Tip
If you like the system to generate an entry ID for the data store operation,
remove the header SapDataStoreId before the data store write step and
leave the Entry ID field in the data store empty.
In the case of Delete and Get operations, you can explicitly define an Entry ID or
pass the header SapDataStoreId.
Delete on Completion: Select this option to delete a message from the data store after the message has been successfully processed.
Throw Exception on Missing Entry: You have the option to throw an exception if the entry with the specified Entry ID does not exist in the data store.
Remember
If you disable this option, the header SAP_DatastoreEntryFound is set to false and no exception is thrown, even if the Entry ID does not exist.
Related Information
Context
Note
Adding data store operations enables Write operations by default. Click the Switch to option to choose
the Delete operation.
Attribute Description
Data Store Name: Specifies the name of the data store (no white spaces).
You can dynamically define the data store name based on a header or exchange
property. Use the format ${header.headername} to dynamically read the
name from a header, or ${property.propertyname} to read it from an
exchange property.
The maximum length allowed for the data store name is 40 characters. If you enter
a longer string, a validation error is raised. Note that this length restriction applies
to the value that is used for this parameter at runtime. Therefore, if you configure
this parameter dynamically, make sure that the expected header or property value
does not exceed this length restriction. Otherwise, a runtime error will be raised.
Visibility: Defines whether the data store is shared by all integration flows (deployed on the tenant) or only by one specific integration flow.
○ Global: Data store is shared across all integration flows deployed on the
tenant.
○ Integration Flow: Data store is used by one integration flow.
Entry ID: Specify an entry ID that will be stored together with the message content.
Details for the entry ID are read from the incoming message. You can enter the
following kinds of expressions:
${xpath./CustomerReviews/CustomerReview/ProductId/
text()}
In the case of Write operations, if the Entry ID is not defined, the data store
component uses the value of the SapDataStoreId header as the entry ID. If
this header is not defined, the data store component generates an entry ID and
sets the SapDataStoreId header with the generated value.
Tip
If you like the system to generate an entry ID for the data store operation,
remove the header SapDataStoreId before the data store write step and
leave the Entry ID field in the data store empty.
In the case of Delete and Get operations, you can explicitly define an Entry ID or
pass the header SapDataStoreId.
Related Information
You use write variables to specify values for variables and support message flow execution.
Procedure
2. To add write variables in the integration flow, choose Add > Message Persistence > Write Variables from the context menu of a connection within the pool.
3. Select the Write Variables tab in the Properties view.
4. Choose Add.
5. Specify a name for the variable and perform the following substeps to assign values to the variable.
a. To assign a value using an XPath, select xpath in the Type column and enter the XPath expression in the
Value column.
b. If the XPath contains a namespace prefix, specify the association between the namespace and the
prefix on the Runtime Configuration tab page of the integration flow Properties view.
○ The value for the name of the variable must be a constant and not a reference to some other value. For example, Variable1 is a valid value, but ${header.source} is not.
○ The Data Type column is applicable for the xpath and expression types. The data type can be any Java class. An example of a data type for an XPath is java.lang.String.
c. To assign a value using a header, select header in the Type column and enter the header in the Value
column.
d. To assign an external parameter, select external parameter in the Type column and define a parameter
key.
Note
Defining multiple parameters in the same field or column is not supported for tables.
Note
○ If you want the variable to be used in multiple integration flows, select the global scope checkbox.
○ By default, stored variables are deleted 400 days after the last update; the system raises an alert 2
days before the variables expire.
○ The default data store name for variables is 'sap_global_store'. You should not use this value as the data store name for data store operations.
○ Variables should not have the same name as a header ID in the integration flow.
○ Properties are local variables.
○ Variables cannot be downloaded using the data store viewer.
Context
You use this task to configure a process step to store a message payload so that you can access the stored
message and analyze it at a later point in time.
In the integration flow, you mark a process step for persistence by specifying a unique step ID, which can be a
descriptive name or a step number. For example, a step ID configured after a mapping step could be
MessageStoredAfterMapping.
At runtime, the runtime node of the cluster stores information, such as GUID, MPL GUID, tenant ID, timestamp,
or payload, for the messages at the process steps that have been marked for persistence.
Note
Messages can be stored in the runtime node for 90 days, after which the messages are automatically
deleted.
● Providing access to data owned by the customer : Since messages are processed on behalf of the
customer, you might need to access the customer-owned data after the messages have been processed to
provide it to the customer.
Procedure
The XML validator validates the message payload in XML format against the configured XML schema.
Prerequisites
You have added the XML schema (XSD files) at the .src.main.resources.xsd location of your integration flow project. If you do not have the specified location in your project, you need to create it first and then add the XSD files.
Context
You use this procedure to assign XML schema (XSD files) to validate the message payload in a process step.
The validator checks the message payload against the configured XML schema and reports any discrepancies.
Procedure
Note
○ You can have references to other XSDs within the same project. XSDs residing outside the project cannot be referenced.
○ You can enter a value less than 5000 for the attribute maxOccurs in the input XSD. You can also enter unbounded if you do not want to check for maximum occurrence but would like to support any number of nodes.
○ If there are any validation errors in the payload, the details of the errors are visible in an MPL attachment. The link for the attachment is available in the MPL log.
○ Use ${header.XmlValidationResult} to get more details on validation exceptions.
7. If you want to continue processing even if the system encounters an error while validating, select the Prevent Exception on Failure checkbox.
Note
Context
Prerequisites
You have selected the Content-Based Router (Multiple Interfaces/Multiple Receivers) pattern or you have added
the router element to the integration flow model from the palette.
Context
You perform this task when you have to specify the conditions based on which messages are routed to a receiver or an interface at runtime. If the message contains an XML payload, you form expressions using the XPath-supported operators. If the message contains a non-XML payload, you form expressions using the operators shown in the table below:
= ${header.SenderId} = '1'
!= ${header.SenderId} != '1'
in ${header.SenderId} in '1,2'
A condition with expression ${header.SenderId} regex '1.*' routes all the messages having Sender
ID starting with 1.
Note
● You can define a condition based on a property or on an exception that may occur.
● If the condition ${property.SenderId} = '1' is true, the router routes the message to the receiver associated with Sender ID 1.
● If the condition ${exception.message} contains 'java.lang.Exception' is true, the router routes the message to a particular receiver; otherwise it routes the message to the other receiver.
Procedure
6. In the Condition field, formulate a valid XML or non-XML condition that routes the message to its
associated receiver.
Recommendation
We recommend that you ensure that the routing branches of a router are configured with the same type of condition, either XML or non-XML, and not a combination of both. At runtime, the specified conditions are executed in the same order as displayed in the Properties view of the Router. If the conditions are a combination of both non-XML and XML types, the evaluation fails.
Tip
To quickly configure routing conditions, select Define Condition from the context menu of the routing
branch.
7. If you want to set the selected route as the default, so that its associated receiver handles the error situation if no receiver is found, select the Default Route option.
Note
Only the route that was most recently selected as Default Route is considered the default route.
To quickly set a route as a default route from the Model Configuration editor tab page, select Set as
Default Route from the context menu of routing branch.
Note
To configure an alert, see the Alert Configuration section in the Operations Guide.
10. To model the integration flow when no receiver is found, follow the steps below:
a. From the Palette select the Terminate End notation and drop it inside the pool in the Model
Configuration.
b. Place the cursor on the Router to select the connection from the hover menu and drag the connection
to the Terminate End notation.
Note
If you do not add the Terminate End notation in your integration flow model, the Raise Alert or
Throw Exception option does not apply.
11. Select the Router notation and, in the Properties view, check the overview of all the routes and their
conditions that you have defined for the scenario.
A splitter step decomposes a composite message into a series of individual messages and sends them to a
receiver.
Prerequisites
You have selected the Splitter step or you have explicitly added the splitter step to the integration flow model.
Context
You perform this task if you need to break down a composite message into a series of individual messages and
send them to a receiver. You split the composite message based on the message type, for example, IDoc or
PKCS#7/CMS Signature-Content, or the manner in which the message is to be split, for example, general or
iterative splitter.
Splitter Types
PKCS#7/CMS Signature-Content: Used when an agent sends a PKCS7 Signed Data message that contains a signature and content. This splitter type breaks down the signature and content into separate files.
Note
A possible use case for this splitter type is where a message obtained from a sender contains multiple account statements and needs to be split into separate individual messages, each containing the account statement for one specific account owner.
Note
A possible use case for this splitter type is where a message obtained from a sender contains multiple orders from different customers and needs to be split into individual messages. Each resulting message should contain the order of one customer only and is to be routed to this customer. In this case it might make more sense for the resulting split messages to contain only data that is related to the corresponding customer, without any of the additional information that is contained in enveloping elements.
Example
Consider the following XML payload structure as input for the Splitter step:
<orders>
<order>
<clientId>I0001</clientId>
<count>100</count>
</order>
<order>
<clientId>I0002</clientId>
<count>10</count>
</order>
</orders>
If you use Token as the Expression Type and enter order as the Token, the XML payload is split into two output files.
<order>
<clientId>I0001</clientId>
<count>100</count>
</order>
<order>
<clientId>I0002</clientId>
<count>10</count>
</order>
Next Steps
When a message is split (as configured in a Splitter step of an integration flow), the Camel headers listed below
are generated every time the runtime finishes splitting an Exchange. You have several options for accessing
these Camel headers at runtime. For example, suppose that you are configuring an integration flow with a
Splitter step before an SFTP receiver adapter. If you enter the string split_${exchangeId}_Index${header.CamelSplitIndex} for File Name, the file name of the generated file on the SFTP server contains the Camel header CamelSplitIndex, in other words, information on the number of split Exchanges induced by the Splitter step.
● CamelSplitIndex
Provides a counter for split items that increases for each Exchange that is split (starts from 0).
● CamelSplitSize
Provides the total number of split items (if you are using stream-based splitting, this header is only
provided for the last item, in other words, for the completed Exchange).
● CamelSplitComplete
Indicates whether an Exchange is the last split.
Note
You can use the Gather step after Splitter in an integration flow if you have configured the Splitter Type as
General or Iterating.
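A script step after the Splitter could read these Camel headers, for example (a sketch, assuming the standard script-step signature):
Sample Code
import com.sap.gateway.ip.core.customdev.util.Message;
def Message processData(Message message) {
    def headers = message.getHeaders();
    def index = headers.get("CamelSplitIndex");       // counter, starts at 0
    def size = headers.get("CamelSplitSize");         // total number of split items
    def complete = headers.get("CamelSplitComplete"); // true for the last split
    message.setProperty("splitInfo", "item " + index + " of " + size + ", last: " + complete);
    return message;
}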
Restriction
Related Information
A general splitter is used to split a composite message comprising N messages into N individual messages, each containing one message with the enveloping elements of the composite message.
Context
A possible use case for this splitter type is where a message obtained from a sender contains multiple account
statements and needs to be split into separate individual messages, each containing the account statement for
one specific account owner.
Typically, the target message also needs to contain enveloping information such as the bank name or other
attributes to make the account statement complete. In this case, it would make sense to choose the general
splitter pattern.
Procedure
Expression Type: Specify the expression type you want to use to define the split point (the element in the message structure below which the message is to be split).
Note
If Stop on Exception is set, the next split is not executed, irrespective of the end event used in an exception subprocess. This means that the next split is only executed if Stop on Exception is not set.
If you are using a product profile other than the one mentioned, see Adapter and Integration Flow Step Versions [page 405] for more information.
○ XPath: The splitter argument is specified by an XPath expression.
○ Line Break: If the input message is a non-XML file, it is split according to the line breaks in the input message. Empty lines in the input message are ignored and no empty messages are created.
XPath (Enabled only if you select XPath in the Expression Type field): You can specify the absolute or relative path.
Note
Note that only the following types of XPath expres
sions are supported:
○ |
○ +
○ *
○ >
○ <
○ >=
○ <=
○ [
○ ]
○ @
Note
When addressing elements with namespaces in the Splitter step, you have to specify them in a specific way in the XPath expression, and the namespace declaration in the incoming message must follow a certain convention:
<root xmlns:n0="http://myspace.com"></root>
Caution
You cannot split by values of message elements. For example, for the following payload:
<customerList>
    <customers>
        <customerNumber>0001</customerNumber>
        <customerName>Paul Smith</customerName>
    </customers>
    <customers>
        <customerNumber>0002</customerNumber>
        <customerName>Seena Kumar</customerName>
    </customers>
</customerList>
an XPath expression such as the following is not supported:
/customerList/customers/customerNumber=0001
Note
Note the following behavior if the specified XPath expression does not match the inbound payload: if no split point is found for the specified XPath, no outbound payload is generated from the Splitter step and the integration flow ends with status Completed.
Grouping: The size of the groups into which the composite message is to be split.
Parallel Processing: Select this checkbox if you want to enable (parallel) processing of all the split messages at once.
Number of Concurrent Processes (Enabled only if Parallel Processing is selected): If you have selected Parallel Processing, the split messages are processed concurrently in threads. Define how many concurrent processes to use in the splitter. The default is 10. The maximum value allowed is 50.
Timeout (in s) (Enabled only if Parallel Processing is selected): Maximum time in seconds that the system waits for processing to complete before it is aborted. The default value is 300 seconds.
Caution
Note that after the specified timeout the splitter processing stops without an exception.
Note
If Stop on Exception is set, the exception subprocess has no effect on the exception. The processing is stopped, and the message is set to Failed.
Related Information
An iterating splitter is used to split a composite message into a series of messages without copying the
enveloping elements of the composite message.
Context
A possible use case for this splitter type is where a message obtained from a sender contains multiple orders
from different customers and needs to be split into individual messages. Each resulting message should
contain the order of one customer only and is to be routed to this customer. In this case it might make more
sense for the resulting split messages to contain only data that is related to the corresponding customer,
without any of the additional information that is contained in enveloping elements.
Procedure
Expression Type: Specify the expression type you want to use to define the split point (the element in the message structure below which the message is to be split).
Note
The defined Stop on Exception handling takes priority over the end event defined in an exception subprocess with respect to continuing with the next split. It means that the next split is only executed if Stop on Exception is not set.
○ XPath: The splitter argument is specified by an XPath expression.
○ Token: The splitter argument is specified by a keyword.
○ Line Break: If the input message is a non-XML file, it is split according to the line breaks in the input message. Empty lines in the input message are ignored and no empty messages are created.
Token (Enabled only if you select Token in the Expression Type field): The keyword or token to be used as a reference for splitting the composite message.
XPath (Enabled only if you select XPath in the Expression Type field): You can specify the absolute or relative path.
Note
Note that only the following types of XPath expres
sions are supported:
○ |
○ +
○ *
○ >
○ <
○ >=
○ <=
○ [
○ ]
○ @
Note
When addressing elements with namespaces in the Splitter step, you have to specify them in a specific way in the XPath expression, and the namespace declaration in the incoming message must follow a certain convention:
<root xmlns:n0="http://myspace.com"></root>
Caution
You cannot split by values of message elements. For example, for the following payload:
<customerList>
    <customers>
        <customerNumber>0001</customerNumber>
        <customerName>Paul Smith</customerName>
    </customers>
    <customers>
        <customerNumber>0002</customerNumber>
        <customerName>Seena Kumar</customerName>
    </customers>
</customerList>
an XPath expression such as the following is not supported:
/customerList/customers/customerNumber=0001
Grouping: The size of the groups into which the composite message is to be split.
Parallel Processing: Select this checkbox if you want to enable processing of all the split messages at once.
Number of Concurrent Processes (Enabled only if Parallel Processing is selected): If you have selected Parallel Processing, the split messages are processed concurrently in threads. Define how many concurrent processes to use in the splitter. The default is 10. The maximum value allowed is 50.
Timeout (in s) (Enabled only if Parallel Processing is selected): Maximum time in seconds that the system waits for processing to complete before it is aborted. The default value is 300 seconds.
Caution
Note that after the specified timeout the splitter processing stops without an exception.
Note
If Stop on Exception is set, the exception subprocess has no effect on the exception. The processing is stopped, and the message is set to Failed.
Related Information
Context
You use the EDI splitter to split inbound bulk EDI messages, and during processing you can configure the
splitter to validate and acknowledge the inbound messages. If you choose to acknowledge the EDI message,
then the splitter transmits a functional acknowledgement after processing the bulk EDI message. A bulk EDI
message can contain one or more EDI formats, such as EDIFACT, EANCOM, and ASC-X12. You can configure
the EDI splitter to process different EDI formats depending on the business requirements of the trading
partners.
Procedure
Timeout (in sec): Set the time limit in seconds for the EDI splitter to process individual split messages. If there are any processes still pending once the time has elapsed, the splitter terminates the processes and updates the MPL status.
EDIFACT
Example
Consider a scenario where you receive a bulk EDI message containing five purchase orders. In Interchange mode, if a single EDI message fails, the entire interchange is rejected. However, in Message mode, if a single EDI message fails, only the invalid message is rejected and the valid messages are dispatched for further processing.
Note
If you wish to remove an XSD file from the project, select the relevant XSD file and choose Remove.
Process Invalid Messages: If you select this option, all the invalid split messages are routed through a new route where the condition is defined as EDI_MESSAGE_STATUS=failure.
Note
EDI_MESSAGE_STATUS is a Camel header. It is set in a script element that is added before the converter element; you can assign the value for this header in that script element.
Note
In case of a rules violation, you see the acknowledgement in a specific format. Here's how the acknowledgement is formatted:
Include UNA Segment: The trading partner uses the UNA segment in the CONTRL message to define special characters, such as separators and indicators. This option enables the splitter to include special characters in the CONTRL message. If not selected, the UNA segment is not included in the CONTRL message.
X12
Note
If you wish to remove an XSD file from the project, select the relevant XSD file and choose Remove.
Note
EDI_MESSAGE_STATUS is a Camel header. It is set in a script element that is added before the converter element; you can assign the value for this header in that script element.
Exclude AK3 and AK4: Notifies the splitter to exclude the AK3 and AK4 segments from the functional acknowledgement message. However, it retains the details of the AK1, AK2, AK5, and AK9 segments in the functional acknowledgement.
The error codes for UN-EDIFACT interchange and message levels are given below:
Interchange Level
If you select Interchange transaction mode, the splitter treats the entire EDI interchange as a single
entity, and includes interchange errors in the acknowledgement.
Message Level
8 Invalid date.
Example
The following segments are part of the message payload of EANCOM, and the table below lists the headers and values for the given payload.
UNB+UNOC:3+4006501000002:14+5790000016839:14+100818:0028+0650+++++XXXXX'
UNH+1+INVOIC:D:96A:EN:EAN008'
SAP_EDI_Document_Standard EANCOM
SAP_EDI_Sender_ID 4006501000002
SAP_EDI_Sender_ID_Qualifier 14
SAP_EDI_Receiver_ID 5790000016839
SAP_EDI_Receiver_ID_Qualifier 14
SAP_EDI_Interchange_Control_Number 0650
SAP_EDI_Message_Type INVOIC
SAP_EDI_Message_Version D
SAP_EDI_Message_Release 96A
SAP_EDI_Message_Controlling_Agency EN
SAP_EDI_Message_Association_Assign_Code EAN008
UNB+UNOC:3+4006501000002:14+5790000016839:14+100818:0028+0650+++++XXXXX'
UNH+1+INVOIC:D:96A:UN'
SAP_EDI_Document_Standard UN-EDIFACT
SAP_EDI_Sender_ID 4006501000002
SAP_EDI_Sender_ID_Qualifier 14
SAP_EDI_Receiver_ID 5790000016839
SAP_EDI_Receiver_ID_Qualifier 14
SAP_EDI_Interchange_Control_Number 0650
SAP_EDI_Message_Type INVOIC
SAP_EDI_Message_Version D
SAP_EDI_Message_Release 96A
SAP_EDI_Message_Controlling_Agency UN
If the following segments are part of the message payload of ASC-X12, then the respective headers and values for the same are given in the table below.
GS*IN*GSRESNDR*GSRERCVR*20030709*0816*12345*X*004010~
810*0001~
SAP_EDI_Document_Standard ASC-X12
SAP_EDI_Sender_ID WWRESNDR
SAP_EDI_Sender_ID_Qualifier ZZ
SAP_EDI_Receiver_ID WWRERCVR
SAP_EDI_Receiver_ID_Qualifier ZZ
SAP_EDI_Interchange_Control_Number 000046668
SAP_EDI_Message_Type 810
SAP_EDI_Message_Version 004010
SAP_EDI_GS_Sender_ID GSRESNDR
SAP_EDI_Receiver_ID GSRERCVR
SAP_EDI_GS_Control_Number 12345
SAP_GS_Functional_Id_Code IN
SAP_GS_Responsible_Agency_Code X
SAP_ISA_Acknowledgment_Requested 0
SAP_ISA_Auth_Information_Qualifier 00
SAP_ISA_Control_Standards_Identifier
SAP_ISA_Security_Information_Qualifier 00
SAP_ISA_Usage_Indicator P
SAP_ISA_Version_Number 004010
SAP_MessageProcessingLogID
SAP_ST_Control_Number
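The header derivations in the tables above can be sketched in a few lines. The following Python sketch is purely illustrative and is not the actual splitter implementation: it assumes the default EDIFACT separators (+, :, ') and derives only the UNB/UNH-based headers shown above; a real parser must honour the UNA service segment, release characters, and the other EDI standards.

```python
def edifact_headers(payload: str) -> dict:
    """Illustrative sketch: derive a few SAP_EDI_* headers from the UNB and
    UNH segments of an EDIFACT interchange, assuming default separators."""
    headers = {}
    for seg in payload.strip("'").split("'"):
        parts = seg.split("+")
        if parts[0] == "UNB":
            sender, receiver = parts[2].split(":"), parts[3].split(":")
            headers["SAP_EDI_Sender_ID"] = sender[0]
            headers["SAP_EDI_Sender_ID_Qualifier"] = sender[1]
            headers["SAP_EDI_Receiver_ID"] = receiver[0]
            headers["SAP_EDI_Receiver_ID_Qualifier"] = receiver[1]
            headers["SAP_EDI_Interchange_Control_Number"] = parts[5]
        elif parts[0] == "UNH":
            msg = parts[2].split(":")
            headers["SAP_EDI_Message_Type"] = msg[0]
            headers["SAP_EDI_Message_Version"] = msg[1]
            headers["SAP_EDI_Message_Release"] = msg[2]
            headers["SAP_EDI_Message_Controlling_Agency"] = msg[3]
    return headers

# The UN-EDIFACT payload from the table above
sample = ("UNB+UNOC:3+4006501000002:14+5790000016839:14+100818:0028+0650+++++XXXXX'"
          "UNH+1+INVOIC:D:96A:UN'")
headers = edifact_headers(sample)
```

Applied to the UN-EDIFACT sample payload above, this reproduces the sender, receiver, control number, and message identification headers from the corresponding table.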
The PKCS#7/CMS Signature-Content splitter is used when an agent sends a PKCS7 Signed Data message that
contains a signature and content. This splitter type breaks down the signature and content into separate files.
Context
Field Description
Payload File Name: Name of the file that will contain the payload after the splitting step.
Signature File Name: Name of the file (extension .sig) that will contain the signature after the splitting step.
Wrap by Content Info: Select this option if you want to wrap PKCS#7 signed data containing the signature into PKCS#7 content.
PayloadFirst: Select this option if you want the payload to be the first message returned.
BASE64 Payload: Select this option if you want to encode the payload with the base64 encoding scheme after splitting.
BASE64 Signature: Select this option if you want to encode the signature using the base64 encoding scheme after splitting.
Related Information
The two splitter types General Splitter and Iterative Splitter behave differently in their handling of the
enveloping elements of the input message.
The following figures illustrate the behavior of both splitter types. In both cases an input message comprising a
dedicated number of items is split into individual messages.
The General Splitter splits a composite message comprising N messages into N individual messages, each containing one message with the enveloping elements of the composite message. We use the term enveloping elements to refer to the elements above and including the split point. Note that elements that follow the one indicated as the split point in the original message (but on the same level) aren't counted as enveloping elements. They will not be part of the resulting messages.
General Splitter
Caution
Note that in case there are elements in the original message that follow the one indicated as the split point (and on the same level), the General Splitter generates result messages where these elements are missing. In the following example (for the sake of simplicity with only two instead of three split messages), the split point is set to element C, which is followed by element E. As shown in the figure, element E is missing in each result message.
Iterating Splitter
The Iterating Splitter splits a composite message into a series of messages without copying the enveloping
elements of the composite message.
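The difference between the two splitter types can be sketched as follows. This is a simplified Python illustration (not the CPI implementation): it ignores namespaces and assumes the split point is a direct child of the root element, so the only enveloping element is the root itself.

```python
import xml.etree.ElementTree as ET

BULK = """<customerList>
  <customers><customerNumber>0001</customerNumber></customers>
  <customers><customerNumber>0002</customerNumber></customers>
</customerList>"""

def general_split(xml_text, tag):
    # General Splitter: each result message keeps the enveloping element(s)
    root = ET.fromstring(xml_text)
    results = []
    for item in root.findall(tag):
        envelope = ET.Element(root.tag, root.attrib)
        envelope.append(item)
        results.append(ET.tostring(envelope, encoding="unicode"))
    return results

def iterating_split(xml_text, tag):
    # Iterating Splitter: each result message is the bare item, no envelope
    root = ET.fromstring(xml_text)
    return [ET.tostring(item, encoding="unicode") for item in root.findall(tag)]
```

With the split point at customers, the General Splitter yields two messages each wrapped in customerList, while the Iterating Splitter yields the two bare customers elements.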
Related Information
You can use the Aggregator step to combine multiple incoming messages into a single message.
Prerequisites
Caution
Usage of an Aggregator step in a Local Integration Process or Exception Subprocess is not supported.
Note
When you use the Aggregator step in combination with a polling SFTP sender adapter and you expect a high message load, consider the following recommendation:
For the involved SFTP sender adapter, set the value for Maximum Messages per Poll (under Advanced Parameters) to a small number larger than 0 (for example, 20). That way, you ensure proper message processing status logging at runtime.
Procedure
1. Open the related integration flow and select an Aggregator step (in the palette under Message Routing).
2. Open the Properties view for the step.
3. Specify which incoming messages belong together and are to be aggregated. To do that, open Correlation
tab and enter an XPath expression that identifies an element based on which the incoming messages
should be correlated (in the field Correlation Expression (XPath)).
This attribute determines the messages to be aggregated in the following way: messages with the same value for the message element defined by the XPath expression are stored in the same aggregate.
4. Open the Aggregation Strategy tab and specify the following attributes:
Attribute Description
Aggregation Algorithm
○ Combine in Sequence
The aggregated message contains all correlated incoming messages, and the original messages are put into a sequence.
○ Combine
The aggregated message contains all correlated incoming messages.
Message Sequence Expression (XPath)
(only if for Aggregation Algorithm the option Combine in Sequence has been selected)
Enter an XPath expression for the message element based on which a sequence is defined. You can use only numbers to define a sequence. Each sequence starts with 1.
Last Message Condition (XPath)
Define the condition (XPath = value) to identify the last message of an aggregate.
Completion Timeout
Defines the maximum time between two messages before aggregation is stopped (period of inactivity).
Note that the CamelAggregatedCompletedBy header attribute can only have one of the following values:
○ timeout
Processing of the aggregate has been finished because the configured Completion Timeout has been reached.
○ predicate
Processing of the aggregate has been finished because the Completion Condition has been fulfilled.
Data Store Name
Enter the name of the transient data store where the aggregated message is to be stored. The name should begin with a letter and use the characters a-z, A-Z, 0-9, -, _, ., and ~.
Note that only local data stores (that apply only to the integration flow) can be used. Global data stores cannot be used for this purpose.
Note
The Integration Operations feature provides a Data Store Viewer that allows you to monitor your transient data stores.
Results
The Aggregator step creates a message in the multi-mapping format with just one message instance:
● CamelAggregatedCompletedBy
This header is relevant for use cases with message aggregation.
The header attribute can only have one of the following values:
○ timeout
Processing of the aggregate has been stopped because the configured Completion Timeout has been
reached.
○ predicate
Processing of the aggregate has finished because the Completion Condition has been met.
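The correlation and completion behavior described above can be modeled roughly as follows. This is a toy Python sketch under stated assumptions (the correlation expression is a simple child-element path, and everything left open at the end is treated as a timeout completion); it is not how the runtime actually stores aggregates in the data store.

```python
import xml.etree.ElementTree as ET

def aggregate(messages, correlation_path, is_last):
    """Toy model of the Aggregator contract: group messages by the value at
    correlation_path; close an aggregate when is_last(root) is true (the
    Last Message Condition), otherwise it closes by Completion Timeout."""
    open_aggregates = {}   # correlation value -> list of raw messages
    completed = []         # (correlation value, messages, completion headers)
    for xml_text in messages:
        root = ET.fromstring(xml_text)
        key = root.findtext(correlation_path)
        open_aggregates.setdefault(key, []).append(xml_text)
        if is_last(root):
            completed.append((key, open_aggregates.pop(key),
                              {"CamelAggregatedCompletedBy": "predicate"}))
    # anything left over would complete by timeout at runtime
    for key, msgs in open_aggregates.items():
        completed.append((key, msgs, {"CamelAggregatedCompletedBy": "timeout"}))
    return completed

msgs = ['<order><id>1</id><last>false</last></order>',
        '<order><id>1</id><last>true</last></order>',
        '<order><id>2</id><last>false</last></order>']
result = aggregate(msgs, "id", lambda r: r.findtext("last") == "true")
```

Here the two messages with id 1 form one aggregate completed by predicate, while the lone message with id 2 would complete by timeout.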
Related Information
Prerequisites
Context
Multicast enables you to send the same message to more than one receiver. This allows you to perform
multiple operations on the same message in a single integration flow. Without multicast, you require separate
integration processes to perform different operations on the same incoming message.
1. Parallel multicast
2. Sequential multicast
Parallel multicast initiates message transfer to all the receiver nodes in parallel.
Sequential multicast provides an option to define the sequence in which the message transfer is initiated to the
receiver nodes.
If you want to aggregate the message that is sent to more than one receiver node, you can use the join and
gather elements.
1. You cannot use Process in Batches option in SuccessFactors adapter with SOAP message protocol
within a multicast branch. For more information, see Configuring SuccessFactors Adapter with SOAP
Message Protocol [page 153].
2. You cannot use Splitter within a Multicast branch.
3. If you are using an SFTP receiver with parallel multicast, the message might be corrupted in one or more of the Multicast branches. The message is corrupted in the branches where the SFTP receiver does not write the complete data to its respective SFTP files.
You can overcome this limitation in two ways:
1. Add a content modifier before the SFTP receiver. In the content modifier, specify the Body as ${bodyAs(byte[])}.
2. Switch from parallel multicast to sequential multicast.
Remember
You cannot use Exception Subprocess with Multicast as Exception Subprocess is not executed in case of an
exception within the Multicast branches.
If you want to enable the execution of Exception Subprocess when an exception occurs in one of the
Multicast branches, ensure that you use version 1.1 of both Multicast and Integration Process.
Procedure
You use this procedure to add multicast and join elements to your integration flow.
1. In the Model Configuration editor, access the palette.
2. If you want to add a multicast or join element, perform the following substeps.
a. Choose Message Routing.
b. If you want to add parallel multicast, drag the element from the palette to the integration flow.
c. If you want to add sequential multicast, drag the element from the palette to the integration flow.
3. If you want to combine the messages, see Defining Join and Gather [page 313].
4. Add the necessary elements based on your scenario to complete the integration flow.
5. Save the changes.
Context
The Join element enables you to bring together the messages from different routes before combining them into
a single message. You use this in combination with the Gather element. Join only brings together the messages
from different routes without affecting the content of messages.
The Gather step enables you to merge messages from more than one route in an integration process. You
define conditions based on the type of messages that you are gathering using the Gather step. You can choose
to gather:
Based on this, you choose the strategy to combine the two messages.
● For XML messages of the same format, you can combine without any conditions (multimapping format) or specify the XPath to the node at which the messages have to be combined.
● For XML messages of different formats, you can only combine the messages.
● For plain text messages, you can only specify concatenation as the combine strategy.
● Specify a valid XPath expression that includes namespace prefixes if the incoming payload contains namespace declarations, including default namespace declarations.
Remember
● If you want to combine messages that are transmitted to more than one route by Multicast, you need to
use Join before using Gather.
● If you want to combine messages that are split using Splitter, you use only Gather.
If your incoming payload contains namespace declarations, including a default namespace, ensure that you specify the XPath with namespace prefixes. Also ensure that the namespace prefix mapping is defined in the runtime configuration. If the XPath you have defined does not exist in any of the branches of the incoming XML, the scenario fails with an exception.
Sample Code
<root xmlns="http://defaultnamespace.com">
<f:table xmlns:f="http://www.w3schools.com/furniture">
<f:name>African Coffee Table</f:name>
<f:width>80</f:width>
<f:length>120</f:length>
</f:table>
<table>
<name>African Coffee Table</name>
<width>80</width>
<length>120</length>
</table>
</root>
● //f:table
● /d:root/f:table
● /d:root/d:table
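For the sample payload above, a prefix-to-namespace mapping is what resolves the listed XPath expressions. The Python sketch below illustrates the principle with the standard library's ElementTree; the prefix d for the default namespace and the namespace URI http://defaultnamespace.com are assumptions mirroring the prefix mapping that must be defined in the runtime configuration.

```python
import xml.etree.ElementTree as ET

# The Sample Code payload from this section, with a default namespace and a
# prefixed namespace (the default-namespace URI is illustrative)
SAMPLE = (
    '<root xmlns="http://defaultnamespace.com" '
    'xmlns:f="http://www.w3schools.com/furniture">'
    '<f:table><f:name>African Coffee Table</f:name></f:table>'
    '<table><name>African Coffee Table</name></table>'
    '</root>')

# Prefix-to-namespace mapping ("d" stands for the default namespace)
NSMAP = {"d": "http://defaultnamespace.com",
         "f": "http://www.w3schools.com/furniture"}

root = ET.fromstring(SAMPLE)
prefixed = root.findall("f:table", NSMAP)   # matches only the f:table element
default = root.findall("d:table", NSMAP)    # matches only the unprefixed table
```

Without the prefix mapping, neither expression would match: the unprefixed table element lives in the default namespace and can only be addressed through a prefix bound to that namespace.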
Procedure
You use this procedure to combine messages using Gather in an integration process.
1. Open the integration flow in Model Configuration editor.
2. In the palette of the Model Configuration editor, choose Message Routing.
4. Connect all the messages that you want to merge to the element.
Value Description
XML (Same Format): If messages from different routes are of the same format.
XML (Different Format): If messages from different routes are of different formats.
Plain Text: If messages from different routes are in plain text format.
7. Select a value for the Aggregation Algorithm field based on the descriptions given in the table.
Note
In case you are using the mapping step to map the output of this strategy, you can have the source XSD in the LHS and specify the Occurrence as 0..Unbounded.
Combine at XPath: Combine the incoming messages at the specified XPath. Combine from Source (XPath): XPath of the node that you are using as reference in the source message to retrieve the information.
Note
In case you are using the mapping step to map the output of this strategy, you can add the XSDs from the different multicast branches one after another in the LHS. The sequence of the messages is important, and so this strategy makes sense only with the sequential multicast.
8. Save changes.
This flow element does not work directly with a Router. It is recommended to model the flow using a Local Integration Process.
Message-level security features allow you to digitally encrypt/decrypt or sign/verify a message (or both). The
following standards and algorithms are supported.
Context
Ensuring security during message exchange is important to preserve the integrity of messages. A sender participant uses signatures to ensure the authenticity of the sender and the non-repudiation of messages, and uses encryption to keep the content of messages confidential during the message exchange.
Procedure
Related Information
Simple Signer makes it easy to sign messages to ensure authenticity and data integrity when sending a
message to participants on the cloud.
Context
You work with the Simple Signer to make the identity of the sender known to the receiver(s) and thus ensure
the authenticity of the messages. This task guarantees the identity of the sender by signing the messages with
a private key using a signature algorithm.
In the integration flow model, you configure the Simple Signer by providing a private key alias. The signer uses
the alias name to get the private key of type DSA or RSA from the keystore. You also specify the signature
algorithm for the key type, which is a combination of digest and encryption algorithms, for example,
SHA512/RSA or SHA/DSA. The Simple Signer uses the algorithm to generate the corresponding signature.
Procedure
1. In the Model Configuration editor, select the Signer element and, from the context menu, select the Switch
to Simple Signer option.
2. To configure the signer with a saved configuration that is available as a template, choose Load Element
Template from the context menu of the Signer element.
3. On the Simple Signer tab page, enter the parameters to create a signature for the incoming message.
Option Description
Name: Enter a name for the signer. It must consist of alphanumeric ASCII characters or underscores and start with a letter. The minimum length is 3, the maximum length is 30.
Private Key Alias: Enter an alias to select the private key from the keystore. You can also specify that the alias is read dynamically from a message header; for example, if you specify ${header.abc}, then the alias value is read from the header with the name "abc".
Signature Algorithm: Select a signature algorithm for the RSA or DSA private key type.
Signature Header Name: Enter the name of the message header where the signature value in Base64 format is stored.
Note
When you save the configuration of the Signer element as a template, the tool stores the template in
the workspace as <ElementTemplate>.fst.
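Conceptually, the Simple Signer produces a detached signature over the message body and stores it Base64-encoded in the configured header. The following Python sketch illustrates that flow with the third-party cryptography package; the generated throwaway key, the SHA512/RSA combination, and the header name MySignature are illustrative stand-ins for the keystore alias and the values you configure in the signer.

```python
import base64
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# In CPI the private key comes from the keystore via the private key alias;
# here we generate a throwaway RSA key for illustration only.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

body = b"<order><customerNumber>0001</customerNumber></order>"

# SHA512/RSA: SHA-512 digest signed with RSA (PKCS#1 v1.5 padding)
signature = private_key.sign(body, padding.PKCS1v15(), hashes.SHA512())

# The signer stores the Base64-encoded signature in the configured header;
# "MySignature" is a made-up Signature Header Name.
headers = {"MySignature": base64.b64encode(signature).decode("ascii")}

# Receiver side: verification raises InvalidSignature on any mismatch
private_key.public_key().verify(
    base64.b64decode(headers["MySignature"]),
    body, padding.PKCS1v15(), hashes.SHA512())
```

Because the signature travels in a header while the body stays unchanged, any receiver holding the corresponding public key can verify authenticity without unwrapping the payload.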
Context
You work with the PKCS#7/CMS signer to make your identity known to the participants and thus ensure the
authenticity of the messages you are sending on the cloud. This task guarantees your identity by signing the
messages with one or more private keys using a signature algorithm.
In the integration flow model, you configure the PKCS#7/CMS signer by providing one or more private key
aliases. The signer uses the alias name to get the private keys of type DSA or RSA from the keystore. You also
specify the signature algorithm for each key type, which is a combination of digest and encryption algorithms,
for example, SHA512/RSA or SHA/DSA. The PKCS#7/CMS signer uses the algorithm to generate
corresponding signatures. The data generated by the signer is known as the Signed Data.
Procedure
2. Right-click a connection within the pool and choose Add Security Element > Message Signer.
3. In the Model Configuration editor, select the Signer element.
Note
By default, the Signer is of type PKCS#7/CMS. If you want to work with XML Digital Signature or Simple
Signature, choose the relevant option from the context menu.
4. To configure the signer with a saved configuration that is available as a template, choose Load Element
Template from the context menu of the Signer element.
5. In the Properties view, enter the details to sign the incoming message with one or more signatures.
Parameter Description
Block Size (in bytes): Enter the size of the data that is to be encoded.
Include Content in Signed Data: You can choose to include the original content that is to be signed in the SignedData element. This SignedData element is written to the message body.
You also have the option to keep the original content in the message body and to include the signed data elsewhere: up to version 1.2 of the PKCS#7/CMS Signer you can choose to include the signed data in the SapCmsSignedData header. From version 1.3 of the PKCS#7/CMS Signer onwards, you can include the signed data in the SapCmsSignedData property.
Encode Signed Data with Base64: You can also Base64-encode the signed data in either the message body or the message header, to further protect it during message exchange.
Note
When you Base64-encode the signed data, you encode either the message header or body, depending on where the signed data is placed. When verifying the message, make sure you specify which part of the message (header or body) was Base64-encoded.
Signer Parameters: For each private key alias, define the following parameters:
Note
When you save the configuration of the Signer element as a template, the tool stores the template in
the workspace as <ElementTemplate>.fst.
You sign with an XML digital signature to ensure authenticity and data integrity while sending an XML resource
to the participants on the cloud.
Context
Procedure
1. In the Model Configuration editor, select the Signer element and, from the context menu, select the Switch
to XML Digital Signer option.
2. To configure the signer with a saved configuration that is available as a template, choose Load Element
Template from the context menu of the Signer element.
3. On the XML Digital Signer tab page, enter the parameters to create an XML digital signature for the
incoming message.
Parameter Description
Private Key Alias: Enter an alias for selecting a private key from the keystore. You can also enter ${header.headername} or ${property.propertyname} to read the name dynamically from a header or exchange property.
Signature Algorithm: Signature algorithm for the RSA or DSA private key type.
Digest Algorithm: Digest algorithm that is used to calculate a digest from the canonicalized XML document.
XML Schema file path (only if the option Detached XML Signatures is selected): Choose Browse and select the file path to the XML schema file that is used to validate the incoming XML document. This file has to be in the package source.main.resources.xsd.
Signatures for Elements (only if the option Detached XML Signatures is selected): Choose Add to enter the XPath to the attribute of type ID, in order to identify the element to be signed. Example: /nsx:Document/SubDocument/@Id
Parent Node (only if the option Enveloped XML Signature is selected for the attribute Signature Type): Specify how the parent element of the Signature element is to be specified. You have the following options:
Parent Node Name (only if the option Enveloped XML Signature is selected for the attribute Signature Type and Specified by Name and Namespace is selected for Parent Node): A local name of the parent element of the Signature element. This attribute is only relevant for the Enveloped XML Signature case. The Signature element is added at the end of the children of the parent.
Parent Node Namespace (only if the option Enveloped XML Signature is selected for the attribute Signature Type and Specified by Name and Namespace is selected for Parent Node): Namespace of the parent element of the Signature element. This attribute is only relevant for the Enveloped XML Signature case. In the Enveloped XML Signature case, a null value is also allowed to support no namespaces. An empty value is not allowed.
XPath Expression (only if the option Enveloped XML Signature is selected for the attribute Signature Type and Specified by XPath expression is selected for Parent Node): Enter an XPath expression for the parent node to be specified. This attribute is only relevant for the Enveloped XML Signature case.
Key Info Content: Specifies which signing key information will be included in the KeyInfo element of the XML signature. You can select a combination of the following attribute values:
○ X.509 Certificate
X509Certificate element containing the X.509 certificate of the signer key
Note
The KeyInfo element might not contain the whole certificate chain, but only the certificate chain that is assigned to the key pair entry.
○ Issuer Distinguished Name and Serial Number
X509IssuerSerial element containing the issuer distinguished name and the serial number of the X.509 certificate of the signer key
○ Key Value
KeyValue element containing the modulus and exponent of the public key
Note
You can use any combination of these four attribute values.
Sign Key Info: With this attribute you can specify a reference to the KeyInfo element.
For more information about the various attributes, see the following:
http://www.w3.org/TR/xmldsig-core/
4. On the Advanced tab page, under Transformation, specify the following parameters.
Property Description
Canonicalization Method for SignedInfo: Specify the canonicalization method to be used to transform the SignedInfo element that contains the digest (from the canonicalized XML document).
Transform Method for Payload: Specify the transform method to be used to transform the inbound message body before it is signed.
○ CamelXmlSignatureTransformMethods
You can use this header to specify transformation methods in a comma-separated list. This header overwrites the value of the option Transform Method for Payload.
Example
Sample Code
Example of this use case: the XML signature verifier of the receiving system expects an XML signature as shown in the following code snippet. The signature is a detached signature, because the signature element is a sibling of the signed element B. However, the receiving system requires the enveloped-signature transform method to be specified in the Transforms list. To ensure this, you have to configure a detached signature in the XML Signer step, then add a Content Modifier step before the XML Signer step, where you specify the header "CamelXmlSignatureTransformMethods" with the constant value "http://www.w3.org/2000/09/xmldsig#enveloped-signature,http://www.w3.org/TR/2001/REC-xml-c14n-20010315".
For more information about the various methods, see the following:
http://www.w3.org/2000/09/xmldsig#enveloped-signature
http://www.w3.org/TR/2001/REC-xml-c14n-20010315
http://www.w3.org/TR/2001/REC-xml-c14n-20010315#WithComments
http://www.w3.org/2001/10/xml-exc-c14n#WithComments
5. On the Advanced tab page, under XML Document Parameters, specify the following parameters.
Property Description
Reference Type: Enter the value of the type attribute of the content reference.
Output Encoding: Select an encoding scheme for the output XML document.
Exclude XML Declaration: Specify whether the XML declaration header shall be omitted in the output XML message.
Disallow DOCTYPE Declaration: Specify whether DTD DOCTYPE declarations shall be disallowed in the incoming XML message.
Note
When you save the configuration of the Signer element as a template, the tool stores the template in
the workspace as <ElementTemplate>.fst.
Related Information
● Enveloping XML Signature: The input message body is signed and embedded within the signature. This
means that the message body is wrapped by the Object element, where Object is a child element of the
Signature element.
Example
A template of the enveloping signature is shown below and describes the structure supported by XML
signature implementation. ("?" denotes zero or one occurrence; the brackets [] denote variables whose
values can vary.)
<Signature>
  <SignedInfo>
    <CanonicalizationMethod>
    <SignatureMethod>
    <Reference URI="#[generated object_id]" type="[optional_type_value]">
      <Transform Algorithm="canonicalization method">
      <DigestMethod>
      <DigestValue>
    </Reference>
    (<Reference URI="#[generated keyinfo_id]">
      <Transform Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315"/>
      <DigestMethod>
      <DigestValue>
    </Reference>)?
  </SignedInfo>
  <SignatureValue>
  (<KeyInfo Id="[generated keyinfo_id]">)?
  <!--The Id attribute is only added if there exists a reference-->
  <Object Id="[generated object_id]"/>
</Signature>
● Enveloped XML Signature: The digital signature is embedded in the XML message to be signed. This
means that the XML message contains the Signature element as one of its child elements. The Signature
element contains information such as:
○ Algorithms to be used to obtain the digest value
○ Reference with empty URI, which means the entire XML resource
○ Optional reference to the KeyInfo element (attribute Also Sign Key Info)
Example
A template of the enveloped signature is shown below and describes the structure supported by XML
signature implementation. ("?" denotes zero or one occurrence; the brackets [] denote variables whose
values can vary):
<[parent element]>
...
<Signature>
<SignedInfo>
<CanonicalizationMethod>
<SignatureMethod>
<Reference URI="" type="[optional_type_value]">
● Detached XML Signature: The digital signature is a sibling of the signed element. There can be several
XML signatures in one XML document.
You can sign several elements of the message body. The elements to be signed must have an attribute of
type ID. The ID type of the attribute must be defined in the XML schema that is specified during the
configuration.
Additionally, you specify a list of XPath expressions pointing to attributes of type ID. These attributes
determine the elements to be signed. For each element, a signature is created as a sibling of the element.
The elements are signed with the same private key. Elements with a higher hierarchy level are signed first.
This can result in nested signatures.
Example
A template of the detached signature is shown below and describes the structure supported by XML
signature implementation. ("?" denotes zero or one occurrence; the brackets [] denote variables whose
values can vary):
Sample Code
● Key Info ID
● Object ID
The sender canonicalizes the XML resource to be signed, based on the specified transform algorithm. Using a
digest algorithm on the canonicalized XML resource, a digest value is obtained. This digest value is included
within the 'Reference' element of the 'SignedInfo' block. Then, a digest algorithm, as specified in the signature
algorithm, is used on the canonicalized SignedInfo. The obtained digest value is encrypted using the sender's
private key.
Note
Canonicalization transforms the XML document to a standardized format, for example, canonicalization
removes white spaces within tags, uses particular character encoding, sorts namespace references and
eliminates redundant ones, removes XML and DOCTYPE declarations, and transforms relative URIs into
absolute URIs. The representation of the XML data is used to determine if two XML documents are
identical. Even a slight variation in white spaces results in a different digest for an XML document.
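The effect described in the note, that canonicalization makes insignificantly different serializations yield the same digest, can be demonstrated with Python's standard library, which implements W3C Canonical XML 2.0. The signer itself lets you choose among several canonicalization methods, so treat this as an illustration of the principle rather than of a specific configured method.

```python
import base64
import hashlib
import xml.etree.ElementTree as ET

# Two serializations of the same XML document that differ only in
# insignificant whitespace inside the start tags
doc_a = '<a  attr="1"><b>x</b></a>'
doc_b = '<a attr="1"  ><b>x</b></a>'

def canonical_digest(xml_text):
    # Canonicalize first (stdlib: W3C Canonical XML 2.0), then hash the result
    c14n = ET.canonicalize(xml_text)
    return base64.b64encode(hashlib.sha256(c14n.encode()).digest()).decode()
```

Both variants canonicalize to the same byte sequence, so canonical_digest returns the same value for each, whereas hashing the raw strings directly would produce two different digests.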
XML Advanced Electronic Signature (XAdES) allows you to create digital signatures based on XML Digital
Signature that include additional qualifying properties.
Context
The additional properties allow you to create signatures that are compliant with the European Directive (http://
eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2000:013:0012:0020:EN:PDF ).
XAdES is an industry standard based on XML Signature and issued by the European Telecommunications
Standards Institute (ETSI). It allows you to enhance the digital signature with additional data, for example,
timestamps to provide evidence that the signature key was valid at the time the signature was created.
XAdES Forms

Form: XAdES-BES (XAdES-BES (1) and XAdES-BES (2)) (XAdES Basic Electronic Signature)
Allows you to create an electronic signature based on XML Digital Signature that includes additional properties to further qualify the signature.

Form: XAdES-EPES (XAdES Explicit Policy based Electronic Signature)
Allows you to create an electronic signature based on XML Digital Signature that unambiguously refers to a signature policy agreed between signer and verifier. An electronic signature built in this way enforces the usage of the signature policy for signature validation and thus increases the security level of the usage of the digital signature.
Note
There are additional XAdES forms defined by the specification. SAP currently only supports XAdES-BES
and XAdES-EPES.
Procedure
Related Information
2.5.7.4.1 Limitations
SAP currently only supports XAdES-BES and XAdES-EPES. There are a number of additional limitations.
● No support for the QualifyingPropertiesReference element (see section 6.3.2 of the XAdES
specification at http://uri.etsi.org/01903/v1.3.2/ts_101903v010302p.pdf ).
● No support for signature forms XAdES-T and XAdES-C.
● No support for the Transforms element contained in the SignaturePolicyId element contained in the
SignaturePolicyIdentifier element.
● No support for the CounterSignature element; this implies that there is no support for the
UnsignedProperties element.
● At most one DataObjectFormat element is supported.
This option allows you to add timestamps (for the signing time), a reference to the signer's key certificate, and
other information that further qualifies the signature.
Context
Procedure
1. In the Model Configuration editor, select the Signer element and, from the context menu, select the Switch to XAdES-BES (1) option.
2. Add the signing time and the certificate to the signature.

Option Description

Time: Select this option if you want to add the signing time to the signature. This measure helps to provide evidence that the signature key was valid at the time the signature was created.

Certificate: Specify whether the certificate of the signing key is to be added to the signature.

Digest Algorithm (only when either Certificate Only or Certificate Chain is selected for Certificate): Specify a digest algorithm that is to be used to calculate a digest value from the certificate. This measure helps to control whether the certificate that is used to verify the message content corresponds to the one that has been used to sign the message content. SHA256 is proposed as the default.

If as Certificate you have selected Certificate Only, only one URI is allowed. If as Certificate you have selected Certificate Chain, you can add a URI for each certificate in the certificate chain. The URI must be added at the position where the corresponding certificate in the chain is located; at position 0, the signing certificate URI must be placed. If no URI is available for a certain certificate in the chain, enter an empty string at the corresponding place in the URI list.
Adding the signer's key certificate and its digest value to the signed document makes sure that the signed
document contains an unambiguous reference to the signer's certificate.
Option Description

Certified Roles (optional): Specify the roles of the signer which are certified by an attribute certificate. An attribute certificate associates the identifier of a certificate to some attributes of its owner, in this case, to a role.

To specify a certified role, choose Add and specify the following attributes:
○ Encoding
Select the encoding scheme. An empty string indicates that the Encoding attribute is not specified. In this case, it is assumed that the PKI data is ASN.1 data encoded in DER.
○ ID (optional)
4. Under Data Object Format, specify the format of the signed data.

Option Description

Description (optional): Provide an informal text to describe the format of the signed data.

MIME Type (optional): Specify the Internet Media Type (MIME type) that determines the data object format. Example: text/xml

Qualifier (only relevant if an Identifier Name has been specified): Further qualify how the identifier is defined in case Abstract Syntax Notation One (ASN.1) is used. You can select one of the following values:
○ empty string
○ OIDAsURI – uses Uniform Resource Identifier.
○ OIDAsURN – uses Uniform Resource Name.

Description (only relevant if an Identifier Name has been specified): Provide a reference to further documentation of the identifier.
This option allows you to add further contextual information to the signature, like, for example, the place where
the signature has been created, or the type of commitment assured by the signer when creating the signature.
Context
1. In the Model Configuration editor, select the Signer element and, from the context menu, select the Switch
to XAdES-BES (2) option.
2. Under Production Place enter information on the purported place where the signer claims to have
produced the signature (City, State/Province, Postal Code, and Country Name).
3. Specify the commitment to be undertaken by the signer (for example, proof of origin, proof of receipt,
proof of creation).
Option Description

Commitment: Specify the commitment. You can select one of the following values:
○ Proof of origin
Indicates that the signer recognizes to have created, approved, and sent the signed data object.
The URI for this commitment is: http://uri.etsi.org/01903/v1.2.2#ProofOfOrigin
○ Proof of receipt
Indicates that the signer recognizes to have received the content of the signed data object.
The URI for this commitment is: http://uri.etsi.org/01903/v1.2.2#ProofOfReceipt
○ Proof of delivery
Indicates that the trusted service provider (TSP) providing that indication has delivered a signed data object in a local store accessible to the recipient of the signed data object.
The URI for this commitment is: http://uri.etsi.org/01903/v1.2.2#ProofOfDelivery
○ Proof of sender
Indicates that the entity providing that indication has sent the signed data object (but not necessarily created it).
The URI for this commitment is: http://uri.etsi.org/01903/v1.2.2#ProofOfSender
○ Proof of approval
Indicates that the signer has approved the content of the signed data object.
The URI for this commitment is: http://uri.etsi.org/01903/v1.2.2#ProofOfApproval
○ Proof of creation
Indicates that the signer has created the signed data object (but not necessarily approved, nor sent it).
The URI for this commitment is: http://uri.etsi.org/01903/v1.2.2#ProofOfCreation

Documentation Reference: Enter references (URIs) to one or more documents where the commitment is described.

Commitment Qualifier: Enter additional qualifying information on the commitment made by the signer. Enter a text or an XML fragment with the root element CommitmentTypeQualifier.
Adding these properties helps to avoid legal disputes as the commitment undertaken by the signer can be
compared with the commitment made in the context of an applied signature policy.
4. Under XML Document Parameters, select the namespace of the XAdES version and enter a namespace
prefix.
This option allows you to create a digital signature based on XML Digital Signature that unambiguously refers
to a signature policy agreed between signer and verifier. An electronic signature created this way enforces the
usage of the signature policy for signature validation and thus increases the security level of the digital
signature.
Context
A signature policy is a set of rules for the creation and validation of a digital signature.
Procedure
1. In the Model Configuration editor, select the Signer element and, from the context menu, select the Switch
to XAdES-EPES option.
2. Specify the properties of the signature.
Option Description

Signature Policy: Specify whether a signature policy is to be added, and, if yes, in which form this should be done.
○ None
○ Implied
The signature policy can be unambiguously derived from the semantics of the type of data object(s) being signed, and some other information. Using this option, the SignaturePolicyImplied element will be part of the signature.
○ Explicit ID
The signature contains an identifier of a signature policy together with a hash value of the signature policy that allows verification that the policy selected by the signer is the one being used by the verifier.

Identifier (only if the value Explicit ID has been selected for Signature Policy): Specify an identifier that uniquely identifies a specific version of the signature policy. You can specify the identifier by one of the following options:
○ By means of a Uniform Resource Identifier (URI) (preferred option when dealing with XML documents)
In this case, the Name of the Identifier consists of the identifying URI, and the property Qualifier must not be specified (empty string).
○ By means of an Object Identifier when using ASN.1 (Abstract Syntax Notation One)
To support an OID, the content of Identifier consists of an OID, either encoded as a Uniform Resource Name (URN) or as a Uniform Resource Identifier (URI). The optional Qualifier attribute can be used to provide information about the applied encoding (values OIDAsURN or OIDAsURI).

Identifier Qualifier (only if the value Explicit ID has been selected for Signature Policy): Qualify how the identifier is defined in case Abstract Syntax Notation One (ASN.1) is used. You can select one of the following values:
○ Empty string
○ OIDAsURI – uses Uniform Resource Identifier.
○ OIDAsURN – uses Uniform Resource Name.

Description (only if the value Explicit ID has been selected for Signature Policy): Enter a description of the signature policy.

Digest Algorithm (only if the value Explicit ID has been selected for Signature Policy): Specify the digest algorithm used to calculate the digest value of the signature policy document. As default, SHA256 is used.

Digest Value (only if the value Explicit ID has been selected for Signature Policy): Specify the digest value of the signature policy document (base64-encoded). You can either enter the digest value manually or calculate it. To calculate the digest value, you have the following options:
○ Calculate from Identifier
Calculates the digest value from the value of the Identifier provided above. Note that the Identifier must be a valid URL and start with http:// or https://.
○ Browse to local File
Calculates the digest value from the content of a file.

Policy Qualifier (only if the value Explicit ID has been selected for Signature Policy): Enter additional information qualifying the signature policy. To do this, enter text or an XML fragment with the root element SigPolicyQualifier.
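For illustration, the Digest Value for the Explicit ID option (corresponding to the "Browse to local File" behavior) can be computed from the signature policy document like this. This is a hypothetical Python sketch; the helper name policy_digest and its interface are our own, not part of the product:

```python
import base64
import hashlib

def policy_digest(policy_bytes: bytes, algorithm: str = "sha256") -> str:
    # Digest of the signature policy document, base64-encoded, as
    # expected in the Digest Value field (SHA256 is the default).
    digest = hashlib.new(algorithm, policy_bytes).digest()
    return base64.b64encode(digest).decode("ascii")
```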
For certain message headers you can define specific elements in the XAdES form.
● CamelXmlSignatureXAdESQualifyingPropertiesId
Specifies the Id attribute value of the QualifyingProperties element.
● CamelXmlSignatureXAdESSignedDataObjectPropertiesId
Specifies the Id attribute value of the SignedDataObjectProperties element.
● CamelXmlSignatureXAdESSignedSignaturePropertiesId
Specifies the Id attribute value of the SignedSignatureProperties element.
● CamelXmlSignatureXAdESDataObjectFormatEncoding
Specifies the value of the Encoding element of the DataObjectFormat element.
● CamelXmlSignatureXAdESNamespace
You perform this task to ensure that the signed message received over the cloud is authentic.
Context
In the integration flow model, you configure the Verifier by providing information about the public key alias, and
whether the message header or body is Base64-encoded, depending on where the Signed Data is placed. For
example, consider the following two cases:
● If the Signed Data contains the original content, then in the Signature Verifier you provide the Signed Data
in the message body.
● If the Signed Data does not contain the original content, then in the Signature Verifier you provide the
Signed Data in the header SapCmsSignedData and the original content in the message body.
The Verifier uses the public key alias to obtain the public keys of type DSA or RSA that are used to decrypt the
message digest. In this way the authenticity of the participant who signed the message is verified. If the
verification is not successful, the Signature Verifier informs the user by raising an exception.
Under Public Key Alias you can enter one or multiple public key aliases for the Verifier.
Note
In general, an alias is a reference to an entry in a keystore. A keystore can contain multiple public keys. You
can use a public key alias to refer to and select a specific public key from a keystore.
You can use this attribute to support the following use cases:
● Management of the certificate lifecycle. Certificates have a certain validity period. Using the Public Key
Alias attribute in the Verifier step, you can enter both an alias of an existing certificate (which will expire
within a certain time period) and an alias for a new certificate (which does not necessarily have to exist
already in the keystore). In this way, the Verifier is configured to verify messages signed by either the old or
the new certificate. As soon as the new certificate has been installed and imported into the keystore, the
Verifier refers to the new certificate. In this way, certificates can be renewed without any downtime.
● You can use different aliases to support different signing senders with the same Verifier step. Using the
Public Key Alias attribute, you can specify a list of signing senders.
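The alias-fallback behavior described above can be sketched as follows. This is a hypothetical Python illustration of the lookup logic only; HMAC stands in for the RSA/DSA signature verification the Verifier actually performs:

```python
import hashlib
import hmac

def verify_with_aliases(keystore: dict, aliases: list, message: bytes, signature: bytes) -> str:
    """Try every configured public key alias in turn; accept the message
    if any key found in the keystore verifies the signature. Returns the
    alias that verified, or raises if none did."""
    for alias in aliases:
        key = keystore.get(alias)
        if key is None:
            # e.g. the new certificate has not been imported yet
            continue
        expected = hmac.new(key, message, hashlib.sha256).digest()
        if hmac.compare_digest(expected, signature):
            return alias
    raise ValueError("signature verification failed for all configured aliases")
```

This is the mechanism that allows certificates to be renewed without downtime: both the old and the new alias are configured, and whichever key is present and verifies is accepted.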
Note
Exceptions that occur during runtime are displayed in the Message Processing Log view of the Operations
Integration perspective.
2. Right-click a connection within the pool and choose Add Security Element Signature Verifier .
3. In the Model Configuration editor, select Verifier.
Note
By default, the Verifier is of type PKCS#7/CMS. If you want to work with XML Signature Verifier, then
choose the Switch to XML Signature Verifier option from the context menu.
4. In the Properties view, enter the details to verify the signatures of the incoming message.
Parameter Description

Header is Base64 Encoded: Select this option to verify if the Signed Data encoded in Base64 is included in the header.

Body is Base64 Encoded: Select this option to verify if the Signed Data encoded in Base64 is included in the message body.

Public Key Alias: Enter an alias name to select a public key and corresponding certificate from the keystore.
Context
The XML Signature Verifier validates the XML signature contained in the incoming message body and returns
the content which was signed in the outgoing message body.
Note
For enveloping and enveloped XML signatures the incoming message body must contain only one XML
signature.
The Verifier supports enveloping, enveloped, and detached XML signatures. In the enveloping XML signature case, the Verifier supports one reference whose URI references the only allowed 'Object' element via ID, and an optional reference to the optional KeyInfo element via ID.
Then, the digest value of the canonicalized 'SignedInfo' is calculated. The resulting bytes are verified with the
signature on the 'SignedInfo' element, using the sender's public key. If both the signature on the 'SignedInfo'
element and each of the 'Reference' digest values verify correctly, then the XML signature is valid.
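The digest comparison described above can be sketched as follows. This is a hypothetical Python helper, not part of the product; it recomputes the digest of the canonicalized resource and compares it against the base64-encoded value carried in a 'Reference' element:

```python
import base64
import hashlib
import hmac
from xml.etree.ElementTree import canonicalize

def digest_matches(xml_text: str, expected_b64: str, algorithm: str = "sha256") -> bool:
    # Recompute the digest over the canonicalized resource and compare
    # it (in constant time) with the received DigestValue.
    c14n = canonicalize(xml_data=xml_text)
    actual = hashlib.new(algorithm, c14n.encode("utf-8")).digest()
    return hmac.compare_digest(actual, base64.b64decode(expected_b64))
```

If any 'Reference' digest fails this check, or the signature over 'SignedInfo' does not verify with the sender's public key, the XML signature as a whole is invalid.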
Note
Procedure
1. In the Model Configuration editor, select Verifier element and from the context menu, select Switch to XML
Signature Verifier option.
2. In the XML Signature Verifier tab page, enter the parameters to verify XML digital signature for the
incoming message.
Parameter Description
<Signature>
    <SignedInfo>
        <CanonicalizationMethod>
        <SignatureMethod>
        <Reference URI="#[object_id]">
            (<Transform Algorithm=[canonicalization method]/>)?
            <DigestMethod>
            <DigestValue>
        </Reference>
        (<Reference URI="#[keyinfo_id]">
            (<Transform Algorithm=[canonicalization method]/>)?
            <DigestMethod>
            <DigestValue>
        </Reference>)?
    </SignedInfo>
    <SignatureValue>
    (<KeyInfo (Id="[keyinfo_id]")?>)?
    <Object Id="[object_id]"/>
</Signature>
<[parent]>
    <Signature>
        <SignedInfo>
            <CanonicalizationMethod>
            <SignatureMethod>
            <Reference URI="">
                <Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
                (<Transform Algorithm="[canonicalization method]"/>)?
                <DigestMethod>
                <DigestValue>
            </Reference>
            (<Reference URI="#[keyinfo_id]">
                (<Transform Algorithm="[canonicalization method]"/>)?
                <DigestMethod>
                <DigestValue>
            </Reference>)?
        </SignedInfo>
        <SignatureValue>
        (<KeyInfo (Id="[keyinfo_id]")?>)?
    </Signature>
</[parent]>
(<[signed element] Id="[id_value]">
    <!-- signed element must have an attribute of type ID -->
    ...
</[signed element]>
<other sibling/>* <!-- between the signed element and the corresponding signature element, there can be other siblings -->
<Signature>
    <SignedInfo>
        <CanonicalizationMethod>
        <SignatureMethod>
        <Reference URI="#[id_value]" type="[optional_type_value]">
            <!-- reference URI contains the ID attribute value of the signed element -->
            <Transform Algorithm=[canonicalization method]/>
            <DigestMethod>
            <DigestValue>
        </Reference>
        (<Reference URI="#[generated_keyinfo_Id]">
            <Transform Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315"/>
            <DigestMethod>
            <DigestValue>
        </Reference>)?
    </SignedInfo>
    <SignatureValue>
    (<KeyInfo Id="[generated_keyinfo_id]">)?
</Signature>)+
Check for Key Info Element: Select this option to check that the XML Signature contains a KeyInfo element.
Note
In case multiple public key aliases are specified (using the Public Key Alias attribute), this option is mandatory (to make sure that the public key can be derived from the KeyInfo).

Disallow DOCTYPE Declaration: Select this option to disallow a DTD DOCTYPE declaration in the incoming XML message.

Public Key Alias: Enter an alias name to select a public key and corresponding certificate. Using the Public Key Alias, you can enter one or multiple public key aliases for the Verifier.
Note
In general, an alias is a reference to an entry in a keystore. A keystore can contain multiple public keys. You can use a public key alias to refer to and select a specific public key from a keystore.
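The Disallow DOCTYPE Declaration option can be illustrated with a minimal, hypothetical Python sketch of the pre-parse check (the helper name is our own; such a check mitigates DTD-based attacks like entity expansion before the document is ever parsed):

```python
def ensure_no_doctype(xml_text: str) -> str:
    # Reject any DTD DOCTYPE declaration in the incoming XML message
    # before handing the text to the XML parser.
    if "<!doctype" in xml_text.lower():
        raise ValueError("DOCTYPE declarations are not allowed")
    return xml_text
```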
Context
You perform this task to protect the message content from being altered while it is being sent to other
participants on the cloud, by encrypting the content. In the integration flow model, you configure the Encryptor
by providing information on the public key alias, content encryption algorithm, and secret key length.
In addition to encrypting the message content, you can also sign the content to make your identity known to
the participants and thus ensure the authenticity of the messages you are sending. This task guarantees your
identity by signing the messages with one or more private keys using a signature algorithm.
Procedure
2. Right-click a connection within the pool and choose Add Security Element Content Encryptor .
3. In the Model Configuration editor, select Encryptor.
4. To configure the encryptor with a saved configuration that is available as a template, choose Load Element
Template from the context menu of the Encryptor element.
5. In the Properties view, specify the general settings for encryption.
Block Size (in bytes): Enter the size of the data that is to be encoded.

Encode Body with Base64: Select this option if the message body will be base64-encoded.

Content Encryption Algorithm: Specify the algorithm that is to be used to encrypt the payload.
7. Specify the settings for the signing process under Signature (only if you selected Signed and Enveloped Data for Signatures in PKCS7 Message).

Signer Parameters: For each private key alias, define the following parameters:
Note
When you save the configuration of the Encryptor element as a template, the tool stores the template
in the workspace as <ElementTemplateName>.fst.
You have the option to encrypt the message content using Open Pretty Good Privacy (OpenPGP).
Context
You have the following options to protect communication at message level based on the OpenPGP standard:
Procedure
2. Right-click a connection within the pool and choose Add Security Element Content Encryptor .
3. In the Model Configuration editor, position the cursor on the Content Encryptor step and in the context
menu select Switch to PGP.
4. Open the Properties view.
5. Specify whether or not to sign the payload (in addition to applying payload encryption). To do this, select
one of the following options for Signatures in PGP Message:
Option Description
Include Signature Select this option if you want to apply both encryption and signing.
Parameter Description

Signatures: Select this option if you want to sign the payload with a signature.

Content Encryption Algorithm: In the dropdown list, select the algorithm you want to use to encrypt the payload.
Note
The length of the secret key depends on the encryption algorithm that you choose.

Compression Algorithm: Select the algorithm you want to use to compress the payload.

Integrity Protected Data Packet: Select if you want to create an Encrypted Integrity Protected Data Packet. This is a specific format where an additional hash value is calculated (using the SHA-1 algorithm) and added to the message.

Encryption User ID of Key(s) from Public Keyring: You can specify the encryption key user IDs (or parts of them). Based on this, the system looks for the public key in the PGP public keyring.
7. Under Signature, specify the following parameters for message signing (only if you have selected Include
Signature for Signatures in PGP Message):
Option Description
Digest Algorithm: Specify the digest algorithm (or hash algorithm) that is to be applied. The sender calculates a digest (or hash value) from the message content using a digest algorithm. To finalize the payload signing step, this digest is encrypted by the sender using a private key.
Next Steps
You can provide a file name with the PGP message in the Literal Packet, according to the specification at
https://tools.ietf.org/html/rfc4880 , chapter 5.9 Literal Data Packet (Tag 11).
A file name that differs from an empty string can be specified by the header CamelFileName.
Note
You have the option to decrypt the message content using Open Pretty Good Privacy (OpenPGP).
Context
2. Right-click on a connection within the pool and choose Add Security Element Content Decryptor .
3. In the Model Configuration editor, position the cursor on the Content Decryptor step and in the context
menu select Switch to PGP.
4. Open the Properties view.
5. Specify the following parameters for message decryption:
Option Description

Name: Name of the selected PGP Decryptor element. The default name is PGP Decryptor. If you add a second PGP Decryptor to the integration flow, you have to change the name, because the Decryptor name must be unique.

Signatures in PGP Message: Specify the expected payload content type which is to be decrypted. You have the following options:
○ No Signatures Expected
When you select this option, the decryptor expects an inbound message that doesn't contain a signature.
Note
If you select this option, inbound messages that contain a signature cannot be processed.
○ Signatures Optional
When you select this option, the decryptor can process messages that either contain a signature or not.
○ Signatures Required
When you select this option, the decryptor expects an inbound message that contains a signature.

Signer User ID of Key(s) from the Public Keyring: Specify the signer user ID of key(s) parts of all expected senders. Based on the signer user ID of key(s) parts, the public key (for message verification) is looked up in the PGP public keyring. The signer user ID of key(s) parts specified in this step restricts the list of expected senders and, in this way, acts as an authorization check.
Context
Using an End Message event has the following impact on the message status (shown in the message processing log).
Note
To catch any exceptions thrown in the integration process and handle them, you can use an Exception
Subprocess.
If an exception occurs during the processing sequence which has been handled in an Exception Subprocess,
the message status displayed in the message processing log is Failed.
When there is no error during exception handling, the message status displayed in the message processing log
is Completed.
If you want to configure your integration flow so that the message status displayed in the message processing log is Failed (even if an exception occurring during the processing sequence has been handled successfully in an Exception Subprocess), you have the following options:
Procedure
You can configure an integration flow to automatically start and run on a particular schedule.
Context
If you want to configure a process to automatically start and run on a particular schedule, you can use this
procedure to set the date and time on which the process should run once or repetitively. The day and time
combinations allow you to configure the schedule the process requires. For example, you can set the trigger
just once on a specific date or repetitively on that date, or you can periodically trigger the timer every day,
specific days in a week or specific days in a month along with the options of specific or repetitive time.
A Timer Start event and a Start Message event (sender channel) must not be modelled in the same integration flow. This would result in the error ERROR: Multiple 'Start Events' not allowed within an Integration Process pool.
Note
When you use the timer with the Run Once option enabled, the message is triggered only when the integration flow is deployed. If you want to trigger the message with an integration flow that you have already deployed, you have to undeploy the integration flow and deploy it again. If you restart the integration flow bundle, the message will not be triggered.
When you deploy or undeploy an integration flow with a Timer-Scheduler, the system automatically releases all the scheduler locks.
Note
You cannot externalize the timer element in older integration flows. If you want to externalize the timer in
existing integration flows, reconfigure the timer and save the integration flow. Extract parameters for the
timer properties to appear in Externalized Parameters tab page.
When you deploy small integration flows with Timer Start (for example, an integration flow with timer,
content modifier and mail adapter), due to extremely fast processing times, multiple schedules are
triggered.
Procedure
Daily: Run message polling every day to fetch data from the SuccessFactors system.

Monthly on Day: Execute the message polling every month on the specified date to fetch data from the SuccessFactors server.
Note
If the specified date is not applicable to a month, the data polling is not executed in that specific month. For example, if the 30th day is selected in the month of February, polling is not executed, as the 30th is not a valid day for February.

Time: The time at which the data polling cycle has to be initiated. For example, if you want the data polling to be started at 4:10 PM, enter 16:10. Note that the time must be entered in 24-hour format.

Every xx minutes between HH hours and HH hours: The connector fetches data from the SuccessFactors system every 'xx' minutes between HH hours and HH hours.
Note
If you want the polling to run for the entire day, enter 1 and 59.

Time Zone: Select the Time Zone that you want to use as reference for scheduling the data polling cycle.
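The "Every xx minutes between HH hours and HH hours" option can be sketched as a simple trigger check. This is a hypothetical Python illustration of the scheduling semantics, not the product's scheduler; the function name and the exact boundary handling are our own assumptions:

```python
from datetime import datetime

def should_trigger(now: datetime, every_minutes: int, start_hour: int, end_hour: int) -> bool:
    # Fire only within the configured hour window ...
    if not (start_hour <= now.hour <= end_hour):
        return False
    # ... and only on minute boundaries that are multiples of the interval.
    return now.minute % every_minutes == 0
```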
An escalation event stops message processing. For synchronous messages, an error message is sent to the sender.
Context
A retry or any other error handling mechanism is not triggered. The message status changes to ESCALATED.
Procedure
Receiver not found: The receiver could not be found because the URL points to a non-existent resource (for example, HTTP 404 error).

Not authenticated to invoke receiver: The receiver could not be called because authentication has failed (for example, HTTP 401 error).

Not authorized to invoke receiver: The receiver could not be called because of insufficient permissions (for example, HTTP 403 error).

Receiver tries to redirect: The receiver could not be reached (HTTP 302 error).

Internal server error in receiver: An internal server error occurred in the receiver system (for example, HTTP 500 error).

Others – not further qualified: The escalation category has not been further qualified.
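The categories above can be illustrated with a small, hypothetical helper that maps an HTTP status code to the corresponding escalation category. The mapping mirrors the documentation; the function itself is our own illustration, not a product API:

```python
def escalation_category(status_code: int) -> str:
    # Map a receiver-side HTTP status to the escalation category
    # names listed in the documentation.
    categories = {
        404: "Receiver not found",
        401: "Not authenticated to invoke receiver",
        403: "Not authorized to invoke receiver",
        302: "Receiver tries to redirect",
        500: "Internal server error in receiver",
    }
    return categories.get(status_code, "Others - not further qualified")
```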
Context
Prerequisites
You have added the service call element to the integration flow model from the palette.
Context
A Service Call is used to call an external system. Such calls enable the transfer of data from or to the target system. It can be used for the following types of operations:
1. Request-Reply
2. Content Enrichment
Context
You can use this task to enable request and reply interactions between sender and receiver systems.
Example:
Suppose a currency conversion is required for a transaction between Indian and German business partners. In
such a scenario, using the Request Reply pattern, a user can successfully convert Indian currency to German
currency and vice versa. As shown in the integration flow, the request mapping converts the currency in the
payload to Euro and reply mapping converts the currency in the payload back to Indian Rupee.
3. From the Palette, select Others Receiver , and drop it outside the Integration Process pool.
4. From the speed button of the Request-Reply element, create a connection that represents a channel from
the Request-Reply element to the Receiver.
5. Configure the channel with the required adapter details.
6. You can add integration flow elements between the Request-Reply and End Message element, for
processing response payload from Receiver system.
7. You can also connect End Message element back to the sender. Also, ensure that no adapter is configured
for this channel (as highlighted in the integration flow screenshot).
Note
In this release, the channel connecting the external system with the Request-Reply element can be
configured with the SOAP, SuccessFactors, HTTP and IDOC adapters.
Prerequisites
Context
The content enricher combines the content of a lookup payload with the original message in the course of an integration process. This converts the two separate messages into a single enhanced payload. This feature enables you to make external calls during the course of an integration process to obtain additional data, if any.
Consider the first message in the integration flow as the original message and the message obtained by making an external call during the integration process as the lookup message. You can choose between two strategies to enrich these two payloads into a single message:
● Combine
● Enrich
<EmployeeList>
<Employee>
<id>111</id>
<name>Santosh</name>
<external_id>ext_111</external_id>
</Employee>
<Employee>
<id>22</id>
<name>Geeta</name>
<external_id>ext_222</external_id>
</Employee>
</EmployeeList>
Lookup Message
<EmergencyContacts>
<contact>
<c_id>1</c_id>
<c_code>ext_111</c_code>
<isEmergency>0</isEmergency>
<phone>9999</phone>
<street>1st street</street>
<city>Gulbarga</city>
</contact>
<contact>
<c_id>2</c_id>
<c_code>ext_111</c_code>
<isEmergency>1</isEmergency>
<phone>1010</phone>
<street>23rd Cross</street>
<city>Chitapur</city>
</contact>
<contact>
<c_id>3</c_id>
<c_code>ext_333</c_code>
<isEmergency>1</isEmergency>
<phone>007</phone>
<street></street>
<city>Raichur</city>
</contact>
</EmergencyContacts>
If you use Combine as the aggregation strategy, the enriched message appears in the following format.
Enriched Message
<multimap:messages xmlns:multimap="http://sap.com/xi/XI/SplitAndMerge">
<message1>
<EmployeeList>
<Employee>
<id>111</id>
<name>Santosh</name>
<external_id>ext_111</external_id>
</Employee>
<Employee>
<id>22</id>
<name>Geeta</name>
<external_id>ext_222</external_id>
</Employee>
</EmployeeList>
</message1>
<message2>
<EmergencyContacts>
...
</EmergencyContacts>
</message2>
</multimap:messages>
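The Combine strategy wraps both payloads unchanged in the multimap envelope. A minimal Python sketch of that behavior (an illustration using the standard library, not the Content Enricher implementation; the sample payloads are abbreviated):

```python
import xml.etree.ElementTree as ET

# Namespace used by the multimap envelope shown above.
NS = "http://sap.com/xi/XI/SplitAndMerge"

def combine(original_xml: str, lookup_xml: str) -> str:
    """Wrap the original and lookup payloads unchanged as message1 and
    message2 inside a multimap envelope, as the Combine strategy does."""
    envelope = ET.Element(f"{{{NS}}}messages")
    ET.SubElement(envelope, "message1").append(ET.fromstring(original_xml))
    ET.SubElement(envelope, "message2").append(ET.fromstring(lookup_xml))
    return ET.tostring(envelope, encoding="unicode")

enriched = combine(
    "<EmployeeList><Employee><id>111</id></Employee></EmployeeList>",
    "<EmergencyContacts><contact><c_id>1</c_id></contact></EmergencyContacts>",
)
```

Both payloads stay intact; only the envelope is new, so any later mapping step must address them through the message1 and message2 wrappers.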
Enrich offers you control over how the original and lookup messages are merged. In this example, we use
the node <external_id> (with the value ext_111) as the reference to enrich the original message with the lookup
message. Consequently, you specify the following values while configuring the Content Enricher.
Enriched Message
<EmployeeList>
<Employee>
<id>111</id>
<name>Santosh</name>
<external_id>ext_111</external_id>
<contact>
<c_id>1</c_id>
<c_code>ext_111</c_code>
<isEmergency>0</isEmergency>
<phone>9999</phone>
<street>1st street</street>
<city>Gulbarga</city>
</contact>
<contact>
<c_id>2</c_id>
<c_code>ext_111</c_code>
<isEmergency>1</isEmergency>
<phone>1010</phone>
<street>23rd Cross</street>
<city>Chitapur</city>
</contact>
</Employee>
...
</EmployeeList>
In the enriched message, you can see the content of the lookup message after the node <external_id>.
Remember
If the lookup message contains more than one entry of the key element, the content enricher enhances the
enriched message with all the entries referred to by the key element in the lookup message. In the above example,
the lookup message contains the key element ext_111 in two places. You can see that the enriched
message contains both the <contact> entries that the key element refers to.
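The Enrich strategy described above can be sketched in plain Python (a minimal illustration with the standard library, not the actual Content Enricher implementation; the sample payloads are abbreviated versions of the example messages):

```python
import xml.etree.ElementTree as ET

def enrich(original_xml, lookup_xml, original_key, lookup_key):
    """Copy every lookup entry whose key matches an employee's key under
    that employee node; all matching entries are copied, as the Remember
    note above describes."""
    original = ET.fromstring(original_xml)
    lookup = ET.fromstring(lookup_xml)
    for employee in original.findall("Employee"):
        key = employee.findtext(original_key)
        for entry in lookup:
            if entry.findtext(lookup_key) == key:
                employee.append(entry)
    return ET.tostring(original, encoding="unicode")

original_message = ("<EmployeeList>"
                    "<Employee><id>111</id><external_id>ext_111</external_id></Employee>"
                    "<Employee><id>22</id><external_id>ext_222</external_id></Employee>"
                    "</EmployeeList>")
lookup_message = ("<EmergencyContacts>"
                  "<contact><c_id>1</c_id><c_code>ext_111</c_code></contact>"
                  "<contact><c_id>2</c_id><c_code>ext_111</c_code></contact>"
                  "<contact><c_id>3</c_id><c_code>ext_333</c_code></contact>"
                  "</EmergencyContacts>")
enriched = enrich(original_message, lookup_message, "external_id", "c_code")
```

Employee 111 receives both contacts with c_code ext_111; the contact with c_code ext_333 matches no employee and is dropped.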
Procedure
You use this procedure to add a Content Enricher element to an integration flow.
1. Open the integration flow in the Model Configuration editor.
2. From the context menu of the integration flow, choose Add > Tasks > Service Call.
3. From the context menu of the service call element, choose Switch to Content Enricher.
4. In the Properties view, choose the Aggregation Strategy field.
5. Perform the required subprocedure below based on the strategy you want to use.
Procedure
Original Message Path to Node: Path to the node in the original message where content has to be enriched. Ensure that you provide it in the format <xx>/<yy>/<zz>, with <xx> being the root node and <zz> being the node where the new content will reside.
Lookup Message Path to Node: Path to the node in the lookup message that is used to enrich the content. Ensure that you provide it in the format <xx>/<yy>/<zz>, with <xx> being the root node and <zz> being the reference node.
You use the Send step type to configure a service call to a receiver system for scenarios and adapters where no
reply is expected.
Prerequisites
Context
You can use this step in combination with the following adapter types (for the channel between the send step
and the receiver):
● Mail adapter
● SFTP adapter
Procedure
1. Open the integration flow in the Model Configuration editor.
2. From the context menu of the integration flow, choose Add > Tasks > Service Call.
3. From the context menu of the service call element, choose Switch to Send.
4. Create a connection between the send step and the receiver and configure the channel (use an SFTP,
Mail, or SOAP (SAP RM) adapter).
You can invoke a local integration process from the main integration process by using a local call.
Related Information
Prerequisites
Context
1. In the palette, choose Tasks > Process Call and drop the shape into the modelling area.
2. Position the cursor on the Process Call shape and in the context menu choose Assign Local Integration
Process.
3. Enter the name of the local integration process you want to assign.
4. Choose OK.
Related Information
Context
Procedure
1. In the palette, choose Tasks > Process Call and drop the shape into the modelling area.
2. Position the cursor on the Process Call shape and in the context menu choose Assign Local Integration
Process.
3. Specify the following attributes.
Expression Type: Specify the kind of expression you want to enter in the Condition Expression field.
○ XML
For XPath expressions, for example: //customerName = 'Smith'
○ Non-XML
For Camel Simple Expression Language expressions
Max. Number of Iterations: Maximum number of iterations that the loop can perform before it stops (99999 iterations maximum).
Note
The local loop process is a while loop: the subprocess runs as long as the loop condition is
fulfilled.
Every morning, an account owner wants to check all transactions performed on his account. He calls a
specific web service and has defined this request:
<accountID>12345</accountID>
<accountinforesponse>
<transaction>
<id>1</id>
...
</transaction>
...
<transaction>
<id>55</id>
...
</transaction>
<hasMore>true</hasMore>
</accountinforesponse>
The account owner has to call the web service again and again until there are no more transactions
available and he gets the response:
<hasMore>false</hasMore>
To simplify the call, he can use the loop embedded in the HCP Integration Service. He needs to define a
while condition in XPath, such as /accountinforesponse[hasMore = 'true'].
As long as data is available, the call continues. The subprocess containing the loop uses the Service Call
step in Request-Reply mode to call the web service. As soon as the web service returns the response
<hasMore>false</hasMore>, processing exits the loop and continues with the next step. The last
response of the web service becomes the new payload, which is taken as the message body into the next step.
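The looping pattern above can be sketched as follows (the web service is stubbed with canned responses; in the real scenario the Service Call step performs the request, and the condition is evaluated as XPath by the runtime):

```python
import xml.etree.ElementTree as ET

# Stand-in for the real web service responses (abbreviated).
PAGES = [
    "<accountinforesponse><transaction><id>1</id></transaction>"
    "<hasMore>true</hasMore></accountinforesponse>",
    "<accountinforesponse><transaction><id>55</id></transaction>"
    "<hasMore>false</hasMore></accountinforesponse>",
]

def call_service(iteration: int) -> str:
    """Stubbed Request-Reply call returning the next page."""
    return PAGES[min(iteration, len(PAGES) - 1)]

def loop(max_iterations: int = 99999) -> str:
    payload = call_service(0)
    iteration = 1
    # Equivalent of the while condition /accountinforesponse[hasMore = 'true']
    while (ET.fromstring(payload).findtext("hasMore") == "true"
           and iteration < max_iterations):
        payload = call_service(iteration)
        iteration += 1
    return payload  # the last response becomes the next step's message body

final = loop()
```

Once hasMore turns false, the loop exits and the final response is handed to the next step unchanged.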
Context
Procedure
You can use this task if you want to simplify your integration process. It enables you to fragment your
integration process into smaller processes (local integration processes), which are in turn called from the main
integration process or from other local integration processes.
Context
Note
For the main integration process the SAP icon is displayed in the top left corner, whereas for the local
integration processes this icon is not displayed.
Restriction
You cannot use the following integration flow steps within the Local Integration Process step:
a. If you want to add a local integration process to the integration flow, choose Others > Local
Integration Process from the palette. On the Properties tab, provide a name.
b. You can add various elements between the start event and end event of the process.
A local integration process does not support multicast and splitter elements.
3. Invoke the local integration process.
a. If you want to call the local integration process from the main integration process, add a process call in
the main integration process. To do this, choose Tasks > Process Call from the palette.
b. Choose Assign Local Integration Process from the context menu of the process call within the pool.
Select the local integration process that needs to be assigned to the process call in the integration
flow.
Context
You use this element to catch any exceptions thrown in the integration process and handle them.
Restriction
Procedure
1. Open the integration flow in the Model Configuration editor.
2. To add an exception subprocess to the integration flow, choose Others > Exception Subprocess from
the palette. The subprocess can be dropped into the integration process and should not be connected to
any of the elements of the integration flow.
3. Select the exception subprocess you have added to the integration flow model to configure it.
4. Always start the process with an Error Start event.
5. End the process with an End Message, Error End, or Escalation End event.
○ You can use an End Message event to wrap the exception in a fault message and send it back to the
sender in the payload.
○ You can use an Error End event to throw the exception to default exception handlers.
6. You can also add other flow elements between the start and end events.
Note
○ For example, you can choose Add Service Call from the context menu of a connection within the
pool. This enables you to call another system to handle the exception.
○ The following elements are not supported within an Exception Subprocess:
○ Another Exception Subprocess
○ Integration Process
○ Local Integration Process
○ Sender
○ Receiver
○ Start Message
○ Terminate Message
○ Timer Start
○ Start Event
○ End Event
○ Router
Note
○ The message processing log will be in an error state even if a user catches an exception and
performs additional processing on it.
○ You can get more details on the exception using ${exception.message} or ${exception.stacktrace}.
○ You cannot catch exceptions of a local integration process in the main integration process.
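The fault-message idea from step 5 can be pictured in plain Python (the runtime and the <fault> element names here are illustrative stand-ins; in the integration flow itself you would reference ${exception.message} and ${exception.stacktrace} directly, for example in a message body):

```python
import traceback

def build_fault_payload(exc: Exception) -> str:
    """Sketch: wrap a caught exception in a fault payload, analogous to
    using ${exception.message} and ${exception.stacktrace} in a message
    body. The <fault> structure is hypothetical, not a CPI format."""
    stack = "".join(
        traceback.format_exception(type(exc), exc, exc.__traceback__)
    ).strip()
    return (
        "<fault>"
        f"<message>{exc}</message>"
        f"<stacktrace>{stack}</stacktrace>"
        "</fault>"
    )

try:
    raise ValueError("lookup system unreachable")  # simulated processing error
except ValueError as err:
    fault = build_fault_payload(err)
```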
You can define how to handle errors when message processing fails at runtime.
Context
1. In the Model Configuration editor, click on the graphical area outside the integration flow model.
2. In the Properties view, select the Error Configuration tab.
3. Select one of the following options.
○ None
When a message exchange could not be processed, no error handling strategy is used.
○ Raise Exception
When a message exchange could not be processed, an exception is raised back to the sender.
○ Raise Exception (Deprecated)
When a message exchange could not be processed and there is an IDoc or SOAP SAP RM channel in the integration flow, an exception is raised back to the sender.
Return Exception to Sender: When defining the error handling strategy for SOAP messages, you can define whether, in case of an exception, the SOAP fault exception is to be returned to the sender. If you don't select this option, an error message containing a URL is sent back to the sender instead. You can use this URL to access the message processing log.
At design time of an integration project, the way a message is to be processed is specified by a number of
integration flow steps (for example, a content modifier, encryptor, or routing step). When an integration flow is
being processed at runtime, errors can occur at several individual steps within the process flow (referred to as
processing steps to differentiate them from the integration flow steps as modelled at design time).
Integration flow steps can specify message processing at different levels of complexity. Therefore, in general, an
integration flow step (design time) can result in multiple processing steps at runtime.
Each processing step gets a unique Step ID which is displayed in the message processing log.
For example, a content modifier step in an integration flow can (at runtime) be related to multiple processing
steps: The content modifier step can be configured in a way that one processing step changes the message
header, one the message body, and another one an exchange property.
To relate a modelled integration flow step (like the content modifier mentioned above) to an error occurring at
a certain processing step at runtime, you can use an identifier which is referred to as Model Step ID.
To allow an integration flow developer to relate to a certain integration flow step during error handling, the
runtime provides the Model Step ID of the integration flow step where the error occurred as Exchange Property
SAP_ErrorModelStepID. The content of the property can then be evaluated in the error handling.
Note
You can use the property for instance in a condition definition of a Router step to choose a different error
handling strategy depending on the step where the error occurred.
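The routing idea from the note can be sketched as a plain lookup table (only the property name SAP_ErrorModelStepID comes from the text above; the step IDs and branch names below are hypothetical):

```python
def choose_error_handling(exchange_properties: dict) -> str:
    """Pick an error-handling branch from the integration flow step where
    the error occurred, as a Router condition on the exchange property
    SAP_ErrorModelStepID would."""
    step_id = exchange_properties.get("SAP_ErrorModelStepID", "")
    handlers = {
        "ContentModifier_1": "retry",            # hypothetical Model Step IDs
        "Encryptor_1": "alert-security-team",
    }
    return handlers.get(step_id, "default-error-handling")

branch = choose_error_handling({"SAP_ErrorModelStepID": "Encryptor_1"})
```

An unknown or missing step ID falls through to the default branch, mirroring a Router's default route.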
When referring to an integration flow step for error handling, you need to know which Model Step ID has been
defined for an integration flow step. To display this attribute, position the cursor on the step and in the tooltip
you get the Model Step ID displayed as ID. To display this attribute for an adapter, position the cursor on the
connection shape in the integration flow model and in the context menu choose Technical Information.
With the Runtime Configuration you can specify general properties of the integration flow.
Context
Procedure
1. In the Model Configuration editor page, select the graphical area outside the integration flow model.
2. In the Properties view, choose the Runtime Configuration tab.
3. Specify the following properties.
Note
For example, with the namespace mapping xmlns:ns0=https://hcischemas.netweaver.neo.com/hciflow, you can use an XPath expression such as /ns0:HCIMessage/SenderID='Sender01'.
Message Processing Log: Configure the log level to display in the Monitoring editor.
Note
If you want to enter several headers with similar
names, use wildcards to make the entering faster.
○ None
Session handling is switched off.
○ On Exchange
Each exchange corresponds to a single session (use this option for stateful services).
○ On Integration Flow
Only one session will be used across the whole integration flow (only use this option for stateless services).
Note
Once an HTTP session has been initialized, there is usually no further authentication for the duration of the session (one of the advantages of using sessions). This means that all further HTTP requests on that server are processed in the context of the user that was logged on when the session was initialized. If, however, this behavior does not meet your requirements (for example, the user is dynamic and can change from request to request), you can select either an exchange session scope (if the user remains the same for at least the processing of a single message) or no session.
Note
SuccessFactors (OData V4) and SuccessFactors (REST) adapters do not support HTTP session handling.
Related Information
You can configure transaction handling on integration process or local integration process level.
Context
Transactional processing means that the message (as defined by the steps contained in a process) is
processed within one transaction.
For example, consider a process with a Data Store Write operation. When transaction handling is activated, the
Data Store entry is only committed if the whole process is executed successfully. In an error case, the
transaction is rolled back and the Data Store entry is not written. When transaction handling is deactivated, the
Data Store entry is committed directly when the integration flow step is executed. In an error case, the Data
Store entry is nevertheless persisted (and not removed or rolled back).
Caution
Choosing JDBC transaction handling is mandatory to ensure that aggregations are executed consistently.
Data Store operations (Write): Required for JDBC (recommended but not mandatory). In case you choose Not Required, the related database operation is committed per single step and no end-to-end transaction handling is implemented.
Scenarios that include the AS2 adapter: In general, no transaction handling is required.
Note
These adapters do not require JMS transaction handling. Their retry handling works independently of the selected transaction handler.
JMS sender adapter together with JDBC resources (Data Store, Aggregator, Write variables); this applies also for scenarios that include the AS2 adapter: Required for JDBC.
Note
However, note that it is not recommended to use transactional JMS resources and JDBC resources in parallel.
Several JMS receiver adapters together with a JMS sender adapter: Required for JMS (mandatory). This setting is mandatory to ensure that the data is consistently updated in the JMS queue.
Note
Distributed transactions between JMS and JDBC resources are not supported.
For more details on parallel processing, check the documentation of General or Iterating Splitter and Parallel
Multicast.
Let us assume that you want to configure a message multicast and the integration flow also contains a Data
Store operations step. In this case, you can choose one of the following options to overcome the mentioned
limitation:
Procedure
1. Depending on whether you want to configure transaction handling for an integration process or a local
integration process, select the header of the corresponding shape in the integration flow modelling area.
2. Specify the details for transactional processing:
To configure transactional process for an Integration Process, select one of the following options for the
Transaction Handling property.
Required for JDBC: You can specify that Java Database Connectivity (JDBC) transactional database processing is applied (to ensure that the process is accomplished within one transaction).
Required for JMS: You can specify that Java Message Service (JMS) transactional database processing is applied (to ensure that the process is accomplished within one transaction).
To configure transactional process for a local Integration Process, select one of the following options for
the Transaction Handling property.
Required for JMS: You can specify that Java Message Service (JMS) transactional database processing is applied.
Required for JDBC: You can specify that Java Database Connectivity (JDBC) transactional database processing is applied (to ensure that the process is accomplished within one transaction).
Context
Externalizing parameters is useful when the integration content has to be used across multiple landscapes,
where the endpoints of the integration flow can vary in each landscape. This method lets you declare a
parameter as a variable entity, so that you can customize the parameters on the sender and receiver side when
the landscape changes.
When you externalize a parameter, it will be available in the integration flow configuration when you import
content in SAP Cloud Platform Integration Web Application.
Partial Parameterization enables you to change part of a field rather than the entire field. The variable part of the
field is entered within curly braces.
List of parameters that can be externalized on the sender side
● Mail Adapter: Address, Protection, Authentication, Credential Name, Mail Attributes (From, To, Cc, Bcc,
Subject, Mail Body)
● IDoc (IDoc SOAP) Adapter: Address, Proxy Type, URL to WSDL, Private Key Alias, Credential Name,
Authentication, Allow Chunking.
● SOAP (SAP RM) adapter: Address, Proxy Type, URL to WSDL, Private Key Alias, Allow Chunking.
Note
If you externalize the Proxy Type parameter, you do not see the display values from the UI in the parameter
file, but the technical representations of these values (for example, sapcc for the value On-Premise, and
default for the value Internet).
Note
You can externalize all attributes related to the configuration of the authentication option. This includes the
attributes with which you specify the authentication option as such, as well as all attributes with which you
specify further security artifacts that are required for any configurable authentication option (Private Key
Alias or Credential Name).
● Externalize all attributes related to the configuration of all options, for example, Authentication and
Credential Name and Private Key Alias.
● Externalize only one of the following attributes: Private Key Alias or Credential Name.
Avoid incomplete externalization, for example, only externalizing the attribute for the Authentication
parameter but not the related Credential Name parameter. In such cases, the integration flow configuration
(based on the externalized parameters) cannot work properly.
The reason for this is the following: If you have externalized the Authentication parameter and only the
Private Key Alias parameter (but not Credential Name), all authentication options in the integration flow
configuration dialog (Basic, Client Certificate, and None) are selectable in a dropdown list. However, if you
now select Basic from the dropdown list, no Credential Name can be configured.
Procedure
Note
You can extract certain parameters that are already configured in the integration flow, so that the
parameters are externalized. You can view the extracted parameters in the Externalized Parameter
view. If you have already extracted a parameter, the parameter cannot be extracted again.
a. From the Project Explorer view, open the <integration flow>.iflw in the editor.
b. In the Model Configuration editor, right-click the graphical area outside the model and choose Extract
Parameters from the context menu.
Note
In the Console view, you can see a summary of the parameter extraction.
Note
You can view the externalized parameters of the field in the integration flow. The field is now
available as a variable.
e. From the toolbar of the view, choose (Sync with Editor icon) to synchronize the integration flow
that is currently open with the Externalized Parameters view.
f. You can edit parameter values and then choose (Save Parameters icon), from the toolbar of the
view.
Note
The Save Parameters icon is enabled only if the parameter values and the type are consistent. For
dropdown or combo box fields, edit the values from here; otherwise, the parameter references are
lost.
g. Choose (Reload icon) to update the view with the list of externalized parameters.
h. If you have made modifications to the integration flow model and there are unused parameters,
choose (Remove Unused Parameters icon) to remove them.
2. Externalize parameters by manual configuration.
Remember
You can use this method to externalize fields that allow text input.
Here's a list of attributes that you cannot externalize using the manual configuration method:
○ Channel name
Example: If you are using SuccessFactors adapter, you cannot externalize the Name field in the
General tab.
○ Flow step name
Example: If you are using CSV to XML Converter, you cannot externalize the Name field in the
Properties tab.
○ Branch name in Multicast
Example: When you use Multicast, you can assign a name to each outgoing branch from the
Multicast step in the Branch Name field. You cannot externalize this Branch Name field.
○ Table Elements
Example: If you are configuring the Sender step, you cannot externalize the Subject DN or Issuer
DN fields when you are using certificate-based authentication.
You can use the step below if you want to externalize certain parameters in such a way that you do not
provide any values in the integration flow but want to manually configure the attributes in the
Externalized Parameters view.
a. In the Project Explorer view, open the <integration flow>.iflw in the editor.
b. In the Model Configuration editor, double-click the channel or select the flow step field that you want to
externalize.
c. In the Properties view or the Adapter Specific tab for adapters, enter a value for the field that you want
to externalize in this format: {{<parameter_name>}}.
Note
○ You can use alphanumeric characters, underscore and hyphen for parameter name.
○ You can externalize the value of the field so that the field is now a variable. For example, you
might need to change the 'Address' field of a connector at the receiver without making any
other changes to the integration flow. So, you enter {{server}} in the 'Address' field. The
steps below show how you can specify data for the externalized parameters.
You can also change only part of the field. For example, in the given URL
http://{{host}}:{{port}}/FlightBookingScenario, host and port are the variable entities.
Note
In the Externalized Parameters view, you can see all the parameters that have been externalized in
the integration flow. For example, if you have externalized the 'Address' field of a connector as
{{server}} or you have selected this field for extraction, you can see the parameter under the
Name column.
h. From the toolbar of the Externalized Parameters view, choose (Save Parameters icon).
Note
○ The Save Parameters icon in the Externalized Parameters view is enabled only if the parameter
values and the type are consistent.
○ If you do not save the view using the Save Parameters icon, the parameters.prop file is not
updated with the new externalized parameters, which is indicated with an error marker in the
Integration Flow editor. If the editor has no unsaved changes, the error marker remains even
Prerequisites
Note
Refer to the procedure steps below for enabling mass configuration of system properties.
Context
You use this task when you want to provide the same values for common parameterized attributes across
multiple integration projects. The tool offers quick parameterization with the Configurations view, which displays
a list of externalized parameters from all the projects. If multiple integrations require the same values for the
externalized parameters, you can choose the Mass Parameterize option to fill values for all the common
parameters.
Procedure
1. For the receiver system, you must have a single participant for every logical system.
Note
○ If all receivers are the same logical system, then you can have one participant rather than multiple
participants in the integration flow, to which all channels can be connected.
○ If the channel names are the same or not added, then you must add unique channel names.
2. To externalize certificate information at the sender system, execute the following substeps:
a. In the Certificate Based Authentication table, enter the key names ({{sender}}, {{issuer}}).
b. In the Externalized Parameters view, enter the values.
4. To externalize system details for receiver address, execute the following substep:
a. In the Address field, enter the value in the format https://{{host}}:{{port}}/<appDetails>?client={{client}}.
5. To externalize authentication modes at sender and receiver systems, execute the following substeps:
a. Extract all parameters relevant to sender authentication mode using the format
<sender>_enableBasicAuthentication_<randomnumber>.
b. Extract all parameters relevant to receiver channel authentication mode using the format
<receiver>_enableBasicAuthentication_<randomnumber>.
6. In the context menu, select Execute Checks.
7. To view and configure parameters, execute the following substeps:
Note
A common parameter that has been assigned different values is indicated using the tag Parameter
has Different Values. If you delete this tag and enter a new value or keep it empty, the value of the
common parameter changes accordingly across all the projects that use it.
f. Confirm if you want to update the common parameters with the new values.
g. If you want to view or change externalized parameters in the integration flow from the Configurations
view, double-click any file under the Configurations view to open the Model Configuration editor and the
Externalized view.
h. If you want to deploy the projects, select Deploy Integration Content and enter the Tenant ID.
i. Choose Finish.
Results
The completion of this task triggers the build automatically and your projects get deployed with the updated
externalized parameters.
Context
Procedure
Context
You perform this task to create a message mapping when the receiver system accepts a different message
format than that of the sender system. The message mapping defines the logic that maps input message
structures with the required format of output message structures.
You define a message mapping using the mapping editor by following the steps below:
Procedure
1. In the Project Explorer view, create a mapping object under the package src.main.resources.mapping.
2. Define the signature for message mapping by specifying the source and target elements under the
Signature section.
3. Define the mapping logic in the Definition tab page.
A multi-mapping is a mapping that allows you to transform a source (input) message into multiple target
(output) messages, or multiple source messages into multiple target messages.
Multi-mappings reference multiple message structures. You can, for example, use a multi-mapping to map a
message to multiple different (and generally smaller) messages. In that case, the cardinality of the mapping is
1:n.
In your mapping, always add the namespace http://sap.com/xi/XI/SplitAndMerge to the root tag.
Context
You perform this task to create a mapping object required for message mapping.
Procedure
4. In the New wizard, select SAP NetWeaver Cloud Integration > Message Mapping.
5. Choose Next.
6. In the New Message Mapping wizard, enter a name for the mapping.
7. Choose Finish.
The new message mapping object is created under the src.main.resources.mapping package and the Message
Mapping Overview editor opens.
Context
Procedure
1. In the Message Mapping Overview editor, under Signature > Source Elements, add a file.
Note
○ If you have selected an XSD or WSDL file, you can also perform multi-mapping by selecting another
source WSDL or XSD.
○ If you have selected an EDMX file as your first source element, or you are choosing an EDMX file as
your next source element, then you will not be able to perform multi-mapping. The new file will
overwrite the existing file and only a single file can be seen as the source element.
Restriction
In the content used for mapping, if the XSD indicator maxOccurs value exceeds 65535, replace it with
unbounded. Otherwise, it will cause the tenant to crash.
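The restriction can be checked mechanically before import; a minimal sketch that rewrites oversized maxOccurs values in an XSD string (illustrative tooling, not part of the product):

```python
import re

def cap_max_occurs(xsd_text: str, limit: int = 65535) -> str:
    """Replace any numeric maxOccurs above the limit with 'unbounded',
    per the restriction described above; values at or below the limit
    are left untouched."""
    def fix(match):
        value = int(match.group(1))
        return 'maxOccurs="unbounded"' if value > limit else match.group(0)
    return re.sub(r'maxOccurs="(\d+)"', fix, xsd_text)

print(cap_max_occurs('<xs:element name="item" maxOccurs="100000"/>'))
# → <xs:element name="item" maxOccurs="unbounded"/>
```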
2. If you are performing multi-mapping, change the cardinality of the mapping to the required cardinality.
The default cardinality for single mapping is 1:1.
3. Choose OK.
4. Under Target Elements section, repeat steps 1 to 3.
Context
You perform this task to define a mapping for the mapping object.
Procedure
1. In the Definition tab page editor, to map a source field to target field, drag a source node and drop onto a
target node in the tree structure.
Note
○ Alternatively, you can right-click the source field, drag and drop it onto the target node, and select Map
Automatically.
○ If you have selected multiple messages under source or target elements, the system creates a new
node, Messages, which has all the other nodes under it.
2. If you want to perform any of the actions such as duplicating the subtree, disabling a field or adding a
variable, then from the context menu of the selected node, choose the required option.
When you need to map different node structures of the source message to only one node structure of the
target message you use the option of duplicating the subtrees.
3. If you want to map the child elements of a recursive node (if any), from the context menu choose the
option of Expand Recursive Node.
Note
○ Recursive node acts as a design time placeholder to indicate creation of a node at this location at
runtime of the type indicated in the repeatedNode property.
○ In case you do not want to use the child elements or variable of the recursive node, then from the
context menu, choose the option of Collapse Recursive Node.
4. Double-click a node to view the graphical representation of mapping in the graphical editor.
Note
If you want to perform mapping directly in the graphical editor, then drag the source and target nodes
from the tree structure into the graphical area.
5. In the graphical editor, connect the source and target node using connectors.
6. If you want to add a function to the connection between the source and target nodes, double-click on the
functions displayed under the Functions folder on the right-side of the graphical editor.
7. Connect the function to the required node with the use of connectors.
Note
To check the correctness of the mapping, you can right-click the editor and choose Execute Checks. If
there are any errors, you are notified by a red error mark close to the field or in the Problems view. Some
such errors are described below:
○ If there are any unassigned mandatory fields, a red error mark appears close to such fields in the
Definition tab page.
○ If source or target messages are not assigned, an error message is displayed in the Problems view.
○ If any of the source or target messages are not available in the required location, a red error mark
appears on the element in the Overview tab page.
○ If the mapping is not present in the folder src.main.resources.mapping, an error is shown in
the Problems view.
○ The above errors are also displayed once you have saved the mapping.
Context
You use this task to handle inconsistencies in the message structures or mappings that occur due to
modification of source or target elements:
● When you change an XSD or WSDL that is being used in an existing mapping as a source or target element
● When you change the source or target nodes in the message mapping tree structure
Whenever you open a mapping where any changes have been made to the source or target element,
the editor notifies you about the change.
Procedure
1. In the Definition tab page editor, choose the (Check for consistencies between changed message structures) icon.
2. In the Reload Confirmation dialog, choose OK to continue with correcting the structural inconsistencies.
Note
This action discards any recent changes or additional mappings that you have done after changing the
source or target element.
3. In the Missing Fields wizard, you can choose to reassign or delete the inconsistent fields.
4. If you want to reassign the missing fields, then follow the substeps below:
a. Right-click on the missing field in the old structure.
b. Drag the missing field from the old structure and drop it on the required field in the new structure.
c. If you want to map all the child fields of the selected missing field in the old structure to the matching fields in the new structure, choose Map Automatically.
d. If you want to map only the selected field itself, choose Create mapping.
Note
○ Alternatively, you can simply drag fields from the old structure and drop them onto fields in the new structure.
○ If you move a disabled missing field, the mapping moves along with the field, but the new field is not marked as disabled.
○ If you move a missing field that has a variable defined under it, the variable also appears under the new field, and the mapping of the old variable is assigned to the new variable.
5. If you do not require the missing fields in your mapping, select the missing field and choose the (Delete Field) icon to mark it for deletion.
Note
If you want to keep a field that is marked for deletion, choose the (Delete Field) icon again to cancel the delete action.
Context
You use this procedure when you want to see the mapping details offline, without signing in to Eclipse or NWDS.
Procedure
Note
○ If the mapping is exported to the selected location without any error, a dialog box with the message Export to excel is completed is displayed.
○ If you perform Export to Excel on the same mapping a second time, a dialog asks whether to replace the earlier Excel file with the new one.
○ If you try to export a mapping for which an Excel file has already been created and is in use by another user, the file does not open and an error is displayed.
○ If any target node is disabled in the mapping, the node is greyed out in the Excel file and the value Disabled is displayed in the corresponding cell of the Type column.
○ Similarly, if a node or field is recursive, the value ...[Recursive] is displayed in the corresponding cell of the Type column.
○ Currently, only the .xlsx file format is supported.
Context
You can execute consistency checks to validate whether the configurations of the integration flow elements adhere to their definitions. This helps you verify that the integration flow model and the configured attributes are supported at runtime. Inconsistencies can occur if the integration flow model does not adhere to the modeling constraints that SAP supports for specific scenarios. When you trigger a consistency check, the tool validates your integration flow model against the predefined constraints of the SAP-supported scenarios.
Procedure
Tip
You can also click on the graphical model outside the Integration Process pool, and choose Execute
Checks from the context menu.
Prerequisites
You have obtained the details of the system management node from the SaaS administrator and entered them on the Windows > Preferences > SAP Cloud Platform Integration > Operations Server page.
You have ensured that the supported version of the Eclipse IDE, Eclipse Juno (Classic 4.2.2), is installed on your system to avoid deployment failures.
Context
You perform this task to trigger deployment of the integration project. The deployment action triggers the
project build and deployment on the Tenant Management Node (Secure).
Procedure
1. If you are deploying only one integration flow project, follow the steps below:
a. Right-click on the project and choose Deploy Integration Content from the context menu.
You can distinguish each deployment of the project bundle by adding a qualifier in the MANIFEST.MF file. Open the MANIFEST.MF file of the project and enter the value for Bundle-Version as <version>.qualifier, for example, 1.0.0.qualifier.
b. In the Integration Runtime dialog, enter the Tenant ID of the Tenant Management Node.
c. Choose OK.
2. Check the deployment status of the integration project in the runtime by following the steps below:
a. In the Node Explorer view, select the Tenant that is represented as <Tenant ID>.
b. Open the Tasks view and check if the 'Generate and Build' task is available.
c. In the Node Explorer, select the worker node type represented as IFLMAP and check for the
project bundle that is deployed in the Component Status view.
Note
You can identify the project bundle by checking the Version column. If you have specified the
qualifier, the Version column displays the bundle version along with the timestamp.
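The note above says that when you specify a `.qualifier` suffix in Bundle-Version, the deployed bundle version carries a timestamp in its place. The following sketch illustrates that convention; the helper names and the `yyyyMMddHHmm` timestamp format are illustrative assumptions, not the platform's documented behavior.

```python
import re
from datetime import datetime, timezone

def read_bundle_version(manifest_text):
    """Extract the Bundle-Version header from a MANIFEST.MF body."""
    m = re.search(r"^Bundle-Version:\s*(\S+)", manifest_text, re.MULTILINE)
    return m.group(1) if m else None

def expand_qualifier(bundle_version, now=None):
    """Replace a trailing '.qualifier' segment with a build timestamp,
    which is how deployments of the same bundle become distinguishable.
    The timestamp format here is an assumption for illustration."""
    if not bundle_version.endswith(".qualifier"):
        return bundle_version
    now = now or datetime.now(timezone.utc)
    return bundle_version[: -len("qualifier")] + now.strftime("%Y%m%d%H%M")
```

For example, a MANIFEST.MF containing `Bundle-Version: 1.0.0.qualifier` would yield a deployed version such as `1.0.0.202001020304`, which is what you would look for in the Version column.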
Context
You use this procedure to view the errors of an integration content artifact for monitoring purposes.
Procedure
5. In the Node Explorer pane, expand the cluster <node name> > TM:<VM Name>.
6. Choose the Component Status View tab.
7. In the context menu of an artifact in error state, select Show Error Details.
You activate tracing to check the message payload after each integration flow step.
Prerequisites
Caution
Do not activate integration flow tracing for productive scenarios. This feature was designed for support use
cases and testing purposes. It can cause problems if used in productive scenarios, especially if you expect a
high message load.
● You have obtained the following roles to retrieve and view traces:
○ Roles: esbmessagestorage.read
○ Operations roles: NodeManager.read
● You have verified that the integration flow for which tracing is required is deployed.
Context
You use tracing to track the message flow of processed messages, view the relevant message payload at
various points in the message flow, and identify any errors that occur during message execution. Tracing
details are displayed in an integration flow as little yellow envelopes.
You can set the log level for an integration flow scenario on the Manage Integration Content page of the Web-based Monitor. Select your integration flow in Manage Integration Content and set the log level for it in the Log Configuration section.
Procedure
1. In Eclipse, in the Node Explorer, double-click the tenant management node to open the tenant editor.
2. On the Trace Configuration tab, select the integration flow.
Note
On the Trace Configuration tab, under the Trace Status attribute, you can view the following statuses:
Expired: The trace has expired after being enabled (the default expiration time is 10 minutes).
Suspended: The SaaS administrator has switched off the trace temporarily for operational reasons.
You use this task to display the message trace for a particular integration flow.
In the Message Monitoring editor, for a completed or failed message, the Properties view displays the message processing log (MPL) for this message. The MPL provides information on the steps during processing of a particular message, and the View MPL button opens the particular integration flow with the message trace, displaying the path of the message flow. If the project does not exist in the workspace, it is imported automatically.
If the current project state differs from the deployed state, the message flow is traced only up to the point where the sequence of elements still matches.
View Trace imports the trace content into a project whose name matches the bundle ID. If no existing project name matches the bundle ID, View Trace creates a new project with that bundle ID in the workspace.
When you click the message icon in the message trace, the Payload and Header tabs appear in the property sheet. If a splitter is used, there are multiple message payloads and the icon shows multiple message symbols, for example:
-------------------------
Message 1
------------------------
<ns0:BSNMessage xmlns:ns0="http://bsnschemas.netweaver.neo.com/bsnflow">
<SenderId>Token</SenderId>
<ReceiverId></ReceiverId>
<MessageType></MessageType>
<FileName></FileName>
<NumberOfRecords></NumberOfRecords>
<MessageId></MessageId>
<MessageContent></MessageContent>
</ns0:BSNMessage>
Multiple sequences can traverse one path. The property sheet displays this information, for example, "Displaying content for message 1 (out of 3)". Here, '3' denotes the number of times the sequence traverses a particular path. Clicking the text enables you to switch to a different sequence.
For a very large payload, the whole content is not visible in the property sheet. In such a case, you can use the Export Payload button to view the content in an external tool or application.
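Once a payload like the BSNMessage sample above has been exported, it can be inspected offline with any XML tooling. The following sketch uses Python's standard library parser to list the populated fields of an exported payload; the function names are illustrative, and the code is namespace-agnostic so it works regardless of the exact namespace URI in your export.

```python
import xml.etree.ElementTree as ET

def localname(tag):
    # ElementTree prefixes qualified tags with '{namespace}'; strip that.
    return tag.split("}", 1)[1] if "}" in tag else tag

def summarize_payload(xml_text):
    """Collect the non-empty child fields of an exported message payload."""
    root = ET.fromstring(xml_text)
    return {localname(child.tag): child.text for child in root if child.text}
```

Applied to the sample payload above, this would return only the fields that carry values (here, just SenderId), which is a quick way to spot empty fields in a trace.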
Procedure
Select a processed message in the Message Monitoring editor. The Message Processing Log dialog box opens. Select one of the options in the dialog box.
Note
○ You can export only the message processing log for a configure-only integration flow.
○ You cannot export the MPL for multiple messages; select one message only.
Context
You can import the content by using Import Trace, which imports only content that includes a trace.
Procedure
Cheatsheet
Cheat sheets guide you through simple end-to-end use cases. They are very useful if you are working with the tool for the first time and want to understand the steps needed to complete a task. They show you the flow of steps and contain UI controls that automatically open the relevant UI elements, such as a wizard or a view.
Context-sensitive Help
Context-sensitive help provides more details about a specific view, wizard, or page. It is useful when you want more information about the parameters in the UI elements.
The SAP Cloud Platform Integration Web application allows you to assemble integration content into packages and publish them, so that integration developers can use these packages in their integration scenarios.
As an integration developer, you can create integration packages for your specific domain or organization. You can also view the packages published by other integration developers and consume them for your integration purposes. You can modify these packages based on your requirements and upload them through the Web application.
Procedure
1. Launch the SAP Cloud Platform Integration Web application by accessing the URL provided by SAP.
2. Enter your credentials to log on to it.
3. Choose the Design tab.
4. Choose Create.
5. In the <integration package name> editor page, enter the required data.
6. Perform the following substeps as required:
○ If you want to add an integration flow as an artifact, perform the following substeps:
1. Choose Add Process Integration in the Artifacts section.
2. Enter the required details in the Create Process Integration Artifact dialog box.
3. Choose Browse in the Integration Flow field to import an integration flow from your local system.
4. Choose the required file in the Open dialog box.
5. Choose Open.
6. Choose OK.
Note
○ Create a zip file of the required integration flow in your project folder before importing it as an artifact. The zip file must not contain a bin folder, and the integration flow must not be in any subfolder.
You can also select Import to upload an integration flow from your local system.
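The note above imposes two packaging rules: no bin folder inside the zip, and the integration flow content at the zip root rather than under an extra subfolder. A minimal sketch of packaging a project that way, assuming a local project directory layout (the function name is illustrative):

```python
import os
import zipfile

def zip_project(project_dir, zip_path):
    """Zip an integration project for import: skip any 'bin' folder and
    keep all entries relative to the project root, so the content is not
    nested under an extra top-level subfolder."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, dirs, files in os.walk(project_dir):
            # Prune 'bin' folders so os.walk never descends into them.
            dirs[:] = [d for d in dirs if d != "bin"]
            for name in files:
                full = os.path.join(root, name)
                # Archive paths relative to the project root.
                zf.write(full, os.path.relpath(full, project_dir))
```

Note that `zip_path` should point outside `project_dir`; otherwise the archive being written would be picked up by the directory walk.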
Note
Create a zip file of the required value mapping in your project folder before importing it as an
artifact.
○ If you want to add a data integration as an artifact, perform the following substeps:
1. Choose Add Data Integration in the Artifacts section.
2. Enter the required details in the Create Data Integration Artifact dialog box.
3. Choose OK.
○ If you want to add an OData service as an artifact, choose Add OData Service in the Artifacts section. For more information, see .
7. If you want to remove the integration package, choose Delete Package.
8. If you want to keep the integration package, choose Save.
9. If you want to terminate the creation of integration package, choose Cancel before saving it.
You use this procedure to upload and reuse your existing integration packages that suit your integration scenario in the application. This avoids the redundant effort of creating a package from scratch.
Context
Procedure
1. Launch the SAP Cloud Platform Integration Web application by accessing the URL provided by SAP.
2. Enter your credentials to log on to it.
3. Choose Import in the Design tab to upload an integration package.
Importing a new version of a package (zip import) over an existing package does not overwrite the configured values of the existing package's externalized parameters.
The import of a package fails with a Uniqueness Conflict if the package contains one or more artifacts that already exist in other packages on the tenant.
The uploaded file appears in the Design tab of the application.
You use this procedure to perform various miscellaneous functions with the artifacts of an integration package.
Procedure
1. Launch the SAP Cloud Platform Integration application by accessing the URL provided by SAP.
2. Enter your credentials to log on to it.
3. Choose the Design tab to view the list of integration packages.
4. If you want to view the details of an integration package, choose <integration package name>.
You can configure only integration flows. For more information, see Configure Externalized Parameters of
an Integration Flow [page 486].
8. Choose Download to download an artifact. The integration flow download feature offers two options:
a. Default Values Only: Downloads the integration flow with default values only.
Note
When an integration flow downloaded with default values is imported into the system, these values are visible to the user in the externalized parameters view.
b. Merged Configured and Default Values: Downloads the integration flow with both configured and default values. This option replaces each default value with the configured value and accepts it as the new default value.
Note
When you import the integration flow into the tenant, the configured values become the default values. For more information, see Externalize Parameters of an Integration Flow [page 489].
Procedure
1. Launch SAP Cloud Platform Integration Web application by accessing the URL provided by SAP.
2. If you are a new user, choose Signup.
3. If you have user credentials, choose Login.
4. Choose the Design tab.
5. Select the required integration package.
6. In the <integration package name> editor, choose Package Content.
7. Choose Edit.
8. If you want to edit integration content of type process integration, data integration, file, or value mapping,
choose
You get the following functions:
○ View metadata
○ Delete
○ Download
○ Configure (only for integration flows)
○ Deploy (only for data flows and integration flows)
9. If you want to edit artifact of type URL, choose .
You get the following functions:
○ View metadata
○ Delete
10. If you want to edit the metadata of an artifact, then perform the following substeps:
a. Choose View metadata.
b. In the <Artifact Name Details> dialog box, choose Edit.
Note
The name of the integration flow must begin with a letter ('A' to 'Z', 'a' to 'z') or an underscore '_'. It can also contain digits '0' to '9', spaces ' ', dots '.', and hyphens '-'.
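The naming rule in the note above can be expressed as a simple pattern. A sketch of a client-side check, assuming the rule as stated (first character a letter or underscore; letters, underscores, digits, spaces, dots, and hyphens thereafter) — the platform's own validation is authoritative:

```python
import re

# Pattern derived from the stated naming rule; whether underscores are
# allowed in later positions is an assumption here.
NAME_PATTERN = re.compile(r"^[A-Za-z_][A-Za-z0-9_ .\-]*$")

def is_valid_flow_name(name):
    """Return True if the name satisfies the documented naming rule."""
    return bool(NAME_PATTERN.match(name))
```

For example, "My Flow-1.2" and "_internal" pass the check, while "1flow" (leading digit) and "flow$" (disallowed character) do not.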
SAP provides preshipped content to address various integration scenarios. You can copy these integration
packages from the Discover section to the Design space. SAP also publishes updates with new enhancements
and bug fixes.
There are two modes through which packages are updated:
1. Manual
2. Automatic
Manual
1. If you have copied a package into your Design space and an update is available, the Update Available label appears next to the package name.
Automatic
If the content is set to update automatically, the update of the packages in the Design space happens in one of two ways:
1. Immediate
2. Scheduled
You can always update a package manually, even if it is set to be updated automatically.
Post Update
After a successful update, the updated packages display the last-modified details next to the package names, and the design-time artifacts are replaced with the new updated content.
If the artifacts contain previously deployed content, the update procedure triggers the deployment of the new design-time content. If the deployment of the updated design-time content fails, the system retries the deployment three times at intervals of 1 hour. If the deployment fails in every iteration, the artifact is rolled back to the previously deployed runtime version.
In this case, you need to manually deploy the newly updated design-time content. If the manual deployment also fails, contact your tenant administrator with the relevant log details of the issue.
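The retry behavior described above (three retries at 1-hour intervals, then fall back to the previous version) is a common deployment policy. A sketch of that policy in isolation, with the deploy action and the wait function injected so it can be tested without a real tenant; all names here are illustrative:

```python
import time

def deploy_with_retries(deploy, retries=3, interval_seconds=3600, sleep=time.sleep):
    """Attempt a deployment; on failure, retry up to `retries` times,
    waiting `interval_seconds` between attempts. Returns True on success,
    False if every attempt fails (the platform would then keep the
    previously deployed runtime version)."""
    for attempt in range(retries + 1):
        if deploy():
            return True
        if attempt < retries:
            sleep(interval_seconds)
    return False
```

Injecting `sleep` is just a testing convenience; in a real scheduler the wait would be handled by the platform itself.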
In some scenarios, you can customize the content according to your needs. However, once an artifact is modified, it no longer receives any updates. Such artifacts appear with a grayed-out Update Available label.
Note
● A package update only updates the content of the package. All attributes and configured externalized parameters of the package maintained by the user are retained.
● Updates remain available even if the packages are renamed.
● A renamed artifact no longer receives any updates.
You use this procedure to download your integration packages to your local system so that you can provide them to your content management team to upload and publish. Publishing integration packages makes them available to non-SAP integration developers.
Context
Procedure
The export destination depends on your default browser settings. By default, the downloaded file takes the name of the integration package. If the name contains special characters, the browser changes the file name accordingly.
The SAP Cloud Platform Integration Web application helps you to access and design integration content
available for a particular account on an OnDemand integration infrastructure.
Use
The Web application provides you with an interface for accessing and managing integration content. You can
either directly use the prepackaged integration content from the catalog or upload it to your workspace to
adapt it to your requirements.
You can use the artifacts available in integration packages to solve the integration challenges of your scenario.
Optionally, you can also upload integration packages from your local folder and use them. You can use the
integration flows available in packages by configuring and deploying them. You can also edit the integration
flows by adding or removing elements, before you configure and deploy them.
The application consists of the following components, which allow you to perform different tasks:
After completing this task, you can save the changes, save them as a version, or deploy the integration flow. You can also access the dedicated message monitor and integration content monitor to view more information.
Settings: Select the product profile to specify the target runtime for your content.
Related Information
You can use integration flows to implement integration patterns like mapping or routing.
A graphical editor allows you, the integration developer, to model the message processing steps and specify in
detail what happens to the message during processing.
The following figure provides a simplified and generalized representation of an integration flow.
You define a participant of an integration scenario as a sender or receiver. The senders and receivers typically
represent the customer systems that are connected to the tenant and exchange messages with each other.
Connectivity (Adapters)
An integration flow channel allows you to specify which technical protocols should be used to connect a sender
or a receiver to the tenant.
Note
To specify an adapter, click the connection arrow between the sender/receiver and the Integration Process
box.
You use integration flow steps to specify what should happen to a message during processing. Various step
types support the wide range of integration capabilities of the Cloud-based integration platform.
Note
To insert a step into an integration flow, drag and drop the desired step type from the palette on the right of
the graphical modeling area.
Message Flows
Some new features might not be available for your current integration flow for the following reason: a feature for a particular adapter or step was released after you created the corresponding shape in your integration flow. In that case, you have the following options:
● You can continue using the older version of the integration flow shape (adapter or step).
● You can use the new integration flow shape (adapter or step). To do this, you first have to delete the old flow step or channel and then create it again.
SAP Cloud Platform Integration offers an easy version management capability for your integration artifacts.
After creating an artifact, when you choose Save as Version, a new version of the artifact is created. You can specify the version in the Version Information dialog that appears after you choose Save as Version. You can also add a comment there for your reference, so that you have additional information indicating the need for each version.
By default, you only see the latest version of the artifact in the integration package. However, the older versions of the artifact remain available in the backend. If you want to switch to a different version of the artifact, click the version number and choose the (Revert) icon corresponding to the version that you want to switch to.
Example
You are currently using version 1.2.0 of the artifact, but you want to switch to version 1.1.8. You click the version number 1.2.0, and in the Version History dialog, you choose the (Revert) icon next to 1.1.8. You can choose More to see the comments about each version, if any.
Remember
● Artifact version management is specific to the tenant in which you use it. It does not work across tenants, and it is not user specific.
● When you create an artifact, it is created with version 1.0.0 by default.
● If you choose Save after editing an artifact, the artifact version is Draft. Note that you can only see the latest Draft version.
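The versions above follow a three-part numbering scheme (for example, 1.0.0, 1.1.8, 1.2.0) plus a special Draft state for unsaved edits. A sketch of how such versions can be compared, assuming Draft always represents the newest working state (the helper names are illustrative):

```python
def parse_version(v):
    """Turn an artifact version like '1.1.8' into a comparable tuple.
    'Draft' is treated as newer than any numbered version, which is an
    assumption for illustration."""
    if v == "Draft":
        return (float("inf"),)
    return tuple(int(part) for part in v.split("."))

def latest(versions):
    """Pick the most recent version from a version history."""
    return max(versions, key=parse_version)
```

This mirrors the Version History behavior: 1.2.0 sorts above 1.1.8, and a Draft, if present, sorts above both.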
The Cloud Integration Web UI (also known as Content Hub) allows you to use Cloud integration content for
different target integration platforms. Accordingly, different product profiles are available to adapt the user
interface of the integration content designer to the specifications and capabilities of the target integration
platform.
A product profile defines a set of capabilities for Cloud integration content design supported by a specific
target integration platform. In particular, a specific product profile supports the configuration of a specific set
of adapter types and integration flow steps.
Before you start working with Cloud integration content, you need to know on which target integration platform(s) the Cloud integration content is to be deployed and executed.
If you encounter use cases where both on-premise and Cloud-based integration platforms are involved, both product profiles may be of interest to you.
The following figure illustrates the use case for the product profiles SAP Cloud Platform Integration and SAP
Process Orchestration 7.5 SP0.
When you have decided on the product profiles in question, the process is as follows:
Under Settings, you choose the default product profile. When you create a new integration flow, this choice will
be applied by default.
The integration flow editor shows the options and executes checks based on the chosen product profile. The
reason for this is that the target integration platform imposes specific restrictions on the Cloud integration
content.
You have the option to configure a product profile also for an individual integration flow (under Runtime
Configuration).
Related Information
Set Default Product Profile for HCI Web Application [page 408]
Configure Product Profile for an Integration Flow [page 407]
You can use a product profile in an integration flow to develop content for a particular runtime.
Prerequisites
Context
A product profile is a collection of capabilities, such as the SuccessFactors adapter or the splitter and data store elements, that are available for a particular product. You can consume these capabilities when designing integration flows. The tool enables you to design for multiple runtimes at the same time; select the specific product profile to develop content for the respective runtime.
Note
If a product profile does not support a particular capability, the checks report errors for the unsupported components in the integration flow.
Note
○ If you want to publish content in the catalog, it is recommended to select a specific product profile.
○ If you switch product profiles, the system reloads the palette and displays the relevant capabilities.
Related Information
The tenant administrator can view and configure product profiles and mark one of them as the default for the tenant.
Prerequisites
Context
A product profile is a collection of capabilities, such as the SuccessFactors adapter or the splitter and data store elements, that are available for a particular product. You can consume these capabilities when designing integration flows. The tool enables you to design for multiple runtimes at the same time; select the specific product profile to develop content for the respective runtime. Execute the following steps to configure a product profile for a tenant.
Note
You can click the product profile name in the Name field to see the available capabilities of the relevant profile.
Note
You can also configure product profile at integration flow level. For more details please refer to Define
Product Profile [page 407].
Related Information
The Integration Designer allows you to model specific patterns that are handled at runtime in an unexpected way.
Pattern: Integration flow step with more than one outgoing sequence flow.
Expected behavior: The same message is processed in parallel after the integration flow step. For example, after a Message Persistence step the message is supposed to be sent to multiple receivers in parallel.
Actual runtime behavior: The messages are delivered to the different receivers in a sequence, and the order in which the messages are delivered is randomly generated. In addition, the following behavior may occur: the message that results from the processing in the previous sequence flow is taken as input for the next sequence flow.
Note
As an example, consider two parallel sequence flows where the first one contains an encryption step and the second one does not. In that case, the receiver of the second sequence flow also gets an encrypted message (although no encryption step has been configured in the second sequence flow).
Recommendation: Configure only one outgoing sequence flow and model parallel processing using a multicast of messages.
If one of the below listed steps is contained in an integration flow, the processing of the message is executed in
one transaction.
Caution
Such steps might lead to resource shortages because long running transactions can cause node instability
and impede other processes that are running in transactions.
Some of the above-mentioned steps or adapters persist data in the database. In case of an error, the whole process is rolled back and the original state is re-established. That means that no data from failed processes remains and, in case message processing fails, customers normally cannot access data about the failed processing (due to the rollback).
If an error is propagated back to the calling component, all data that was written in the course of the (failed) transaction is removed (in other words, not persisted in the database). For the calling component, an error therefore implies restarting the integration flow.
Transactional processing must also be considered in scenarios that contain asynchronous decoupling. Let's assume that integration flow A contains a Data Store Operations step and that integration flow B contains a Select operation on the Data Store and runs into an error. In that case, the data that integration flow A has written to the database is preserved. This behavior makes sense in particular when integration flow B changes or deletes the data stored by integration flow A: if integration flow B fails, the original data from integration flow A can still be retrieved.
Use APIs to generate integration flows and access them using the SAP Cloud Platform Integration tenant.
Prerequisites
Context
Based on your specific end-to-end business integration requirements, you can filter and choose prepackaged APIs on the Discover page. The prepackaged APIs displayed here come from the SAP API Business Hub. Use them to generate and deploy generic integration flow templates and so accelerate your integration process. Prepackaged API content contains different types of APIs, and each API contains a set of resources and operations. The operations are available in the Open API specification format.
Example
Let us consume an API to create an integration flow. On the Discover page, select the All tab to search for and filter the relevant API. Here, we use the SAP Cloud Platform API package, which contains an artifact for managing the lifecycle of your application. Select the API of type REST.
Before you generate an integration flow, it is recommended that you understand the resources and their operations. Choose Generate Integration Flow and provide the relevant details for creating the integration flow.
● In the Design tab, find the generated integration flow in the pre-existing package.
● Optional: You can enhance the integration flow and then deploy it.
Note
Creation of integration flows is only possible for APIs that use basic authentication and the DELETE, GET, POST, PUT, and GET_ID operations.
To generate an integration flow and add it to the workspace of an existing integration package, do as follows:
Procedure
Results
You have generated an integration flow and added it to an existing integration package.
SAP delivers rich prepackaged integration content out of the box that enables you to get started quickly. These integration packages are created for some of the commonly used integration scenarios. They contain integration flows, value mappings, documentation, and so on. Using this content, you can get started with minimal integration development effort.
You can also find integration content from SAP partners in the Discover tab. If you are viewing the packages in the Highlights section, they can be identified by the Partner tag in the top-right part of the integration package tile. You can also find these packages by filtering the Vendor field for any available value other than SAP.
You cannot download content from integration packages by SAP partner vendors.
You can access this content on the SAP API Business Hub at https://api.sap.com/shell/integration. Copy the content to your workspace (Design tab), configure it, and deploy it. Here's a representation of the typical workflow:
You can also watch this video for more information on prepackaged integration content:
Prerequisites
You have logged on to the Web application with the integration developer role.
Context
You have to add the integration packages to your customer workspace (Design tab page). This enables you to
access the artifacts in that package, configure, and deploy them. There are two options to do this:
1. Copy integration package from catalog (Discover tab page) to your customer workspace.
2. Upload integration package from your local file system to your customer workspace.
You use this procedure to add integration packages to your customer workspace.
Procedure
1. If you want to copy an integration package from the catalog to your customer workspace, hover over the
integration package tile and choose the copy option.
Note
d. Choose Open.
You see the integration package that you uploaded in your customer workspace.
Prerequisites
Context
An integration flow is a graphical representation of the flow and processing of messages between two or more
participants using an integration runtime platform, ensuring successful communication.
Here's how you can create an integration flow in SAP Cloud Platform Integration.
Procedure
For the new integration flow you can define two attributes, Name and ID.
Note
The integration flow ID needs to be unique across the tenant, whereas you can specify the same
Name multiple times. When you create a new integration flow, by default the entry for ID is
generated automatically based on the string entered for Name (with underscore replacing space
characters).
However, you can also manually edit the ID field and, once it is modified manually, changing the
name of the integration flow will not make any auto-changes to the ID field again.
If there is already an integration flow with the same ID, the system will throw an error. Provide a
valid and unique ID for the integration flow and choose OK.
To find more information about the integration flow with the same ID (for example, the integration
package name and the integration flow name), click Show More below the error message.
You can select the product profile for the integration flow in the Product Profile field. The integration
flow templates used during creation adhere to the latest version of a component available in the
product profile.
c. Choose OK.
You see a message confirming that the integration flow has been created.
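As an illustration, the default ID derivation described in the note above can be sketched as follows (a hypothetical Python sketch; the function name and error handling are our own, not the actual platform code):

```python
def derive_flow_id(name: str, existing_ids: set) -> str:
    """Derive a default integration flow ID from the Name by replacing
    spaces with underscores, as the Web UI does on creation."""
    flow_id = name.replace(" ", "_")
    # The tenant rejects a duplicate ID with an error dialog.
    if flow_id in existing_ids:
        raise ValueError(f"An integration flow with ID '{flow_id}' already exists")
    return flow_id

print(derive_flow_id("Order Replication Flow", {"Other_Flow"}))
# Order_Replication_Flow
```

Remember that the Name may repeat across the tenant, but the ID may not.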
5. If you want to upload an integration flow, execute the following substeps:
a. Choose the Upload radio button.
b. Choose Browse to select the integration project archive from your local file system. When you select
the archive file, the Name and ID fields in the dialog box are filled automatically.
The integration flow ID needs to be unique across the tenant, whereas you can specify the same
Name multiple times. When you create a new integration flow, by default the entry for ID is
generated automatically based on the string entered for Name (with underscore replacing space
characters).
However, you can also manually edit the ID field and, once it is modified manually, changing the
name of the integration flow will not make any auto-changes to the ID field again.
If there is already an integration flow with the same ID, upload will throw an error. Provide a valid
and unique ID for the integration flow and choose OK.
To find more information about the integration flow with the same ID (for example, the integration
package name and the integration flow name), click Show More below the error message.
You see a message confirming that the integration flow has been created.
c. Choose OK.
Note
You can copy an integration flow by choosing the actions button in the Action column, and then Copy.
○ An integration flow can be copied within the same package or to a different package.
○ The copied integration flow has the same version as the source.
○ An integration flow can be copied from a configure-only package to a non-configure-only
package.
○ An integration flow cannot be copied from a non-configure-only package to a configure-only
package.
You use an Integration Process to define the steps to process the message transfer between the sender and
receiver systems.
An integration flow template contains the following shapes: Sender (this represents your sender system),
Receiver (this represents a receiver system), and Integration Process. The Integration Process shape contains a
Start event and an End event.
General
Parameter Description
Name: If you want to provide a name for the integration process, enter a name here. The default is set to
Integration Process.
Select the Processing tab and provide values in the fields as follows.
Transaction Management
Parameter Description
Note
See Defining Transaction Handling [page 890] for more
information.
Timeout (in min) (only if Required for JDBC or Required for JMS is selected in Transaction Handling): Enter
the time in minutes.
Note
This value refers to the transaction itself (for example,
database operations). The timeout will only terminate
processes referenced in the transaction, not any other
operations that are part of the integration flow.
SAP Cloud Platform Integration allows you to extend the capabilities of standard integration content provided
by SAP. This approach allows you to implement specific integration scenarios relevant to your business use
case without changing the content provided by SAP.
SAP (or SAP partners) provide prepackaged integration content (also referred to as standard content) that
covers a large variety of integration requirements. This content usually comprises a set of integration flows,
value mappings, and other integration artifacts that cover a standard integration use case. If you want to adapt
these capabilities to specific business requirements, you might consider directly editing and modifying the
related integration flows contained in the standard integration package.
However, this does have some disadvantages. For example, you will not get any further updates to modified
standard integration content. Furthermore, if you copy the updated standard package from the SAP Integration
Content Catalog to the Design workspace of your tenant, you will overwrite all changes that you have made to
that package.
Note
Using the described concepts, the customer can keep the standard integration content unchanged and
specify any required enhancements (for example, specific mappings) in one or more custom integration
flows separate from the standard integration flow.
● Ensures better lifecycle management of prebuilt integration flows. If you make custom changes to a
standard integration flow (prebuilt by SAP), this has an impact on lifecycle management, as you will not
receive future updates related to modified artifacts on your tenant. Additionally, if you copy the
updated standard package from the content catalog to the Design workspace of your tenant that
contains the modified packages, you will overwrite all changes that have been made to that package.
Therefore, defining all required changes in a dedicated integration flow helps you to better manage the
lifecycle of your integration packages.
● Provides flexibility to integration developers to customize their integration flows without modifying
them entirely.
● Enables the reuse of integration flows for mapping across different integration projects.
As an integration flow extension is based on the interplay of standard integration content with one or more
custom integration flows, two target groups are involved in the overall process:
This section provides an overview of the concepts and components required to implement an extension.
Note
The use cases that we use to explain the concept in detail are all mapping extensions, and we use variations
of mapping extensions for our examples and tutorials as well.
Note that you can, however, extrapolate the concept to any other kind of extension.
Customer Exits
The basic concept is that a standard integration flow (predefined by SAP or an SAP partner as part of a
standard integration package) contains one or more customer exits, through which one or more customer
integration flows (designed by the customer who is extending the standard content) are called. The standard
integration flow and the custom integration flows (called through the customer exit) have to be deployed on the
same tenant.
To support mapping extensions, standard integration flows can provide two types of customer exits:
● Pre-exit
This exit is only required if the customer needs to extend the source message of the mapping. Using this
exit, the standard integration flow calls a pre-exit integration flow (which is designed by the customer).
The following figure shows the general overview of the message flow when a pre-exit and a post-exit are
involved in the extension scenario.
The message flow in such an extension scenario works in the following way:
Let us assume that the mapping predefined in the standard integration flow defines the transformation of an
original message A to message B.
We also assume that the customer has enhanced the source structure of the mapping (resulting in message
A'). The pre-exit integration flow contains a mapping that transforms message A' to message A. Message A is
then handed over to the standard integration flow and can be consumed by the predefined mapping.
The predefined mapping transforms message A to message B. Message B is then passed to the post-exit
integration flow, where it is mapped against the actual enhancement of the target structure (defined as
message C).
As indicated in the figure, the standard integration flow merges the original message (with the extended
structure, message A') with message B before handing it over to the post-exit integration flow. How this is done
is explained further below. The post-exit mapping then consumes the merged message. Merging the message
is a prerequisite for using fields from the original message A' that might have got lost during the standard
mapping step in the post-exit mapping.
Technically, the communication between the standard integration flow and the pre-exit and post-exit
integration flow is accomplished through the ProcessDirect adapter.
This adapter allows you to directly connect different integration flows deployed on the same tenant. In
contrast to any of the available HTTP-based adapters (for example, the HTTP or SOAP adapter),
communication through the ProcessDirect adapter is not routed through the load balancer and, therefore,
does not imply any network latency.
The ProcessDirect adapter (both sender and receiver) has only one parameter, which is Address. When two
integration flows need to communicate with each other through this adapter, the value specified for
Address (in the related sender and receiver ProcessDirect adapters) has to match in both integration flows.
Note
The standard integration flow is developed as part of an integration package that is made available in the
Integration Content Catalog (either by SAP or an SAP partner). For the sake of simplicity, we refer to this
target group as the integration content publisher.
The custom integration flows (pre-exit and post-exit integration flows) are developed by customers who
want to consume the integration package and build the extension on top. For the sake of simplicity, we refer
to this target group as the customer.
Depending on whether the standard integration flow contains both a pre-exit and a post-exit, or only a
post-exit, the customer needs to create either two integration flows (a pre-exit and a post-exit integration
flow) or only one post-exit integration flow to specify its own extensions.
We first describe the structure of the standard integration flow, and then the structure of the custom (pre-exit/
post-exit) integration flows.
The standard integration flow that supports mapping extensions is generally structured in the following way. It
contains one or more customer exits (pre-exit or post-exit). These exits are encapsulated within local
integration processes. From the local integration process, the related pre-exit or post-exit integration flow is
called by the ProcessDirect receiver adapter.
The following figure shows the general structure of the standard integration flow containing both a pre-exit
and a post-exit. Furthermore, the message flow is illustrated using the same notation for the different
messages as already used above.
● The main integration process contains the main business logic (in this section, we'll just look at a mapping)
and two routing steps leading to two different message processing paths (both containing a Process Call
step).
● The local integration processes contain further steps to modify the message and – most important – the
outbound call to a receiver. The receiver represents the pre-exit or post-exit integration flow. The outbound
communication with the receiver is defined in a ProcessDirect adapter.
In the model, the local integration process on the left represents the pre-exit, and the local integration
process on the right represents the post-exit.
Let's go into more detail. The integration flow is designed so that the message is processed in the following way
at runtime:
● The sender system sends a message (A', which is the extended source message structure) to SAP Cloud
Platform Integration through a sender channel.
● The Content Modifier creates two exchange properties:
○ custom_extension_enabled
This exchange property is used as a parameter to control whether the extension concept should be
applied to the standard integration flow or not. If its value is set to true, the pre-exit and post-exit
integration flows are called. If its value is false, the standard mapping is applied without any further
extension.
This parameter is externalized so that the value can be specified during integration flow configuration
without the need to edit the integration flow model.
○ original_payload
In this exchange property, the content of the original message received by the sender is stored (to
make it available in a later step where the original message is merged with the transformed message
as preparation for the post-exit mapping step).
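To make the role of the two exchange properties concrete, here is a hedged Python approximation (the property names come from the documentation; the dict representation and function name are purely illustrative):

```python
def create_exchange_properties(original_body: str, extension_enabled: str) -> dict:
    """Approximates what the Content Modifier stores on the exchange."""
    return {
        # Externalized flag: "true" routes the message through the
        # pre-exit/post-exit integration flows, "false" applies only
        # the standard mapping.
        "custom_extension_enabled": extension_enabled.strip().lower() == "true",
        # Keeps the original message A' available for the later merge
        # that prepares the post-exit mapping.
        "original_payload": original_body,
    }

props = create_exchange_properties("<p1:Order_MT>...</p1:Order_MT>", "true")
```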
If the message is first handed over to the local integration process with the pre-exit, the following steps are
executed on the message:
A Request-Reply step calls a receiver (which represents the pre-exit integration flow) through a ProcessDirect
receiver adapter.
The address of the ProcessDirect adapter is externalized and can be configured during integration flow
configuration. The customer needs to make sure that the address of the ProcessDirect receiver adapter (of the
standard integration flow) is the same as the one used in the ProcessDirect sender adapter of the pre-exit
integration flow (to be connected).
If the message is first handed over to the local integration process with the post-exit, the following steps are
executed on the message:
● A Content Modifier constructs a message body that is composed of the following two parts:
○ The original payload received from the sender (contained in the original_payload exchange
property created in the Content Modifier of the main integration process)
(message A' in the figure)
○ The message that results as output from the Mapping step in the main process
(message B in the figure)
The following figure shows the general structure of a pre-exit or post-exit integration flow.
● The sender shape represents the standard integration flow, which passes on the original message. The
sender component is connected to the integration process component through a ProcessDirect sender
adapter.
● Detailed instructions about the mapping are provided in a separate topic. In short, the mapping steps in
the pre-exit and the post-exit integration flow process the inbound message in the following way:
○ Pre-exit mapping: Transforms the extended source message A' into the original message A, which can
be consumed by the standard mapping.
○ Post-exit mapping: Transforms the message A',B from the standard integration flow into the final
message C.
The following list contains the main tasks in an integration flow extension process:
Create and publish the standard integration flow. Make sure that it contains the required customer exits
(post-exit and, if applicable, pre-exit). Externalize the parameters that the customer needs to configure
(for example, the addresses of the involved ProcessDirect adapters) so that they can be set without changing
the standard integration flow.
Note
It is a prerequisite that you, the customer, have imported the integration package that contains the
standard integration flow (that is to be extended) into your own workspace.
1. Copy the desired standard integration flow (as contained in the Integration Content Catalog) into your own
workspace.
2. Create the required custom integration flows (with the pre-exit and post-exit steps).
Make sure you deploy the custom integration flows on the same tenant as the standard integration flow.
3. Configure the standard integration flow so that it can be used together with the custom integration flows.
To access the configuration user interface with the externalized parameters, open the standard integration
flow and select Configure.
Examples:
○ For the custom_extension_enabled parameter, enter the value true.
○ For the Address parameter of the ProcessDirect receiver adapter, enter the same value as used in the
Address field of the custom integration flow's ProcessDirect sender adapter.
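Put together, the configuration values described in the examples above could look like this (the parameter names come from the documentation; the endpoint value is a hypothetical example, not a prescribed address):

```python
# Hypothetical values entered on the Configure screen of the standard
# integration flow.
configuration = {
    "custom_extension_enabled": "true",         # activate the customer exits
    "endpoint": "/customer/post_exit_mapping",  # must equal the Address of the
                                                # custom flow's ProcessDirect
                                                # sender adapter
}

assert configuration["custom_extension_enabled"] == "true"
```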
Related Information
A demo example, explained step by step, covers the key aspects of integration flow extension.
This section provides a more detailed view of the concept overview given in Integration Flow Extension -
Concepts [page 419]. We walk you through the most important concepts and involved components using a
simple demo scenario.
You should be able to set up and run the scenario within approximately one hour.
The demo scenario covers both the design of the standard integration flow and of the custom integration
flow. Consider the following:
● This demo scenario covers both sides of the extension concept: the design of the standard integration
content and the customer extension (in a post-exit integration flow). Note that these tasks are typically
accomplished by different target groups. Therefore, the demo scenario is aimed at experts from both
target groups who want a more detailed understanding of the concepts and also hands-on experience
of the tasks.
The tasks described in the two parts of the demo scenario are associated with the different target
groups as follows:
○ Creating the Standard Integration Flow [page 429]
Describes the tasks accomplished by the content publisher (at SAP or an SAP partner
organization) who makes this integration flow available as part of a predefined integration package
that customers can copy into their own workspace.
○ Creating the Custom Integration Flow with the Post-Exit Mapping [page 441]
Describes the tasks accomplished by the customer who is extending the content.
● The example scenario does not reflect a real-life integration use case. We keep the mapping as simple
as possible and focus on the main principles and concepts so that you can see the differences between
the standard mapping provided by SAP and the customer extension. To make it easy for you to
reproduce the example, the integration flows are based on the scenario described in SOAP Sender
Adapter: Example Integration Flow [page 661].
● To find instructions for a mapping extension based on a real-life example, check out Mapping Extension
Step by Step (Example from SAP Hybris C4C) [page 447].
● The demo scenario does not use a pre-exit step.
In this demo scenario, the sender system is simulated by a SOAP client, which sends a SOAP message to Cloud
Integration. The original message has a simple structure (only four elements). The standard mapping is also
kept simple and concatenates two of the original fields.
The mapping extension in the post-exit integration flow (the customer extension) adds an additional field to the
target message and fills it with the value of a field from the original message ( changing uppercase letters to
lowercase letters).
To learn how to design the standard integration flow, see Creating the Standard Integration Flow [page 429].
To learn how to design the post-exit integration flow, see Creating the Custom Integration Flow with the Post-Exit Mapping [page 441].
To learn how to execute the scenario, see Executing the Scenario [page 445].
The following figure shows the mapping used in this demo scenario.
The figure also indicates which fields correspond to the messages A and B as used in the concept overview in
Integration Flow Extension - Concepts [page 419].
The fields supplierForename and supplierSurname are concatenated (resulting in the field
supplierName of target message B).
The following figure shows the custom mapping extension in the post-exit integration flow.
The custom mapping maps the supplierSurname field of the source message to the new field
AdditionalField and applies a simple function (changing uppercase characters to lowercase characters).
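The effect of the two mappings can be sketched in Python (field names follow the demo scenario; the dict-based messages are our simplification of the actual XML payloads):

```python
def standard_mapping(source: dict, delimiter: str = " ") -> dict:
    """Standard mapping A -> B: concatenates forename and surname
    into supplierName."""
    return {
        "orderNumber": source["orderNumber"],
        "supplierName": source["supplierForename"] + delimiter + source["supplierSurname"],
        "productName": source["productName"],
    }

def post_exit_mapping(original: dict, mapped: dict) -> dict:
    """Post-exit mapping (A,B) -> C: adds AdditionalField, filled with
    the lowercased supplierSurname from the original message A."""
    target = dict(mapped)
    target["AdditionalField"] = original["supplierSurname"].lower()
    return target

a = {"orderNumber": "10", "supplierForename": "Ada",
     "supplierSurname": "LOVELACE", "productName": "Notebook"}
b = standard_mapping(a)      # supplierName == "Ada LOVELACE"
c = post_exit_mapping(a, b)  # AdditionalField == "lovelace"
```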
Note
This section is targeted at the content publisher at SAP or an SAP partner organization.
The standard integration flow is depicted in the following figure, where the key elements are highlighted.
● Filter
The integration flow does not have a receiver component. The result of the mapping can be analyzed in the
response that is sent back to the SOAP client.
Related Information
Follow the instructions in the following topic: Create the SOAP Sender Channel [page 662]
This property stores the content of the inbound message (message A, which is retrieved from the
WebShop) by dynamically evaluating the Camel simple expression ${in.body} at runtime.
1. Select Externalize.
2. Go to the Exchange Property tab.
3. Enter custom_extension_enabled under Name, select Expression as the Type, and enter
{{ custom_extension_enabled }} in the Value field.
4. Click Define Value (in the Value field), and on the next screen enter false. Then choose OK.
When the integration flow is being processed, the Content Modifier does the following:
● It writes the complete message body (retrieved from the external OData service) into the property
original_payload.
● Secondly, it writes the value specified during integration flow configuration for the externalized parameter
custom_extension_enabled into the Exchange property. This property is used in the subsequent
routing step to determine whether the default route is taken (where only the SAP mapping is processed
and nothing else is done to the message) or whether the customer exit is taken (if the value of
custom_extension_enabled is set to true).
Related Information
Before defining the standard mapping, you need to upload to the tenant certain resource files (Web Services
Description Language, or WSDL, files) that define the structure of the source message (A) and the target
message (B) of the mapping. To facilitate the design of the demo scenario, we provide you with the content of
the WSDL files in this section.
1. Copy the content of the following coding example into a text editor and save it as a file with
extension .wsdl (for example, A.wsdl).
This WSDL file defines the structure of the source message of the standard mapping (corresponding to the
original message A, which is sent to the integration flow through the SOAP client).
Sample Code
You will notice that the WSDL contains exactly the same fields that are shown in the topic Standard
Mapping [page 427] for message A.
2. Do the same with the following code sample, but save it under a different name, for example, B.wsdl.
This WSDL file defines the structure of the target message B of the standard mapping.
Sample Code
Tip
You might have noticed that in this WSDL document the namespace prefix p2 is used instead of the
namespace prefix p1 (used in the WSDL for message A). For example, note the following entry:
p2="http://cpi.sap.com/demo.
This is an important detail. Using the correct namespace prefixes (and making some further related
settings in the integration flow as shown below) makes sure that the merged message that is passed on
to the post-exit mapping can be interpreted in the right way. Using namespaces correctly is critical
because the merged message contains the structures of messages A and B, which have many
elements with the same name.
3. In the integration flow editor, click somewhere outside the Integration Process shape and select Runtime
Configuration. In field Namespace Mapping enter the following:
xmlns:p1=http://cpi.sap.com/demo;xmlns:p2=http://sap.com/xi/XI/SplitAndMerge
The namespace mapped to the prefix p2 relates to the final merged message.
4. In the integration flow editor, click somewhere outside the Integration Process shape and select Resources.
5. Choose Add Schema WSDL and browse to the A.wsdl file.
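The Namespace Mapping string entered in step 3 has a simple prefix=URI syntax. A small sketch of how such a string could be parsed (illustrative only; the function name is ours):

```python
def parse_namespace_mapping(mapping: str) -> dict:
    """Parse a Runtime Configuration namespace mapping string such as
    'xmlns:p1=URI1;xmlns:p2=URI2' into a {prefix: URI} dict."""
    result = {}
    for entry in mapping.split(";"):
        declaration, uri = entry.split("=", 1)
        # strip the leading 'xmlns:' to keep only the prefix
        result[declaration.split(":", 1)[-1]] = uri
    return result

ns = parse_namespace_mapping(
    "xmlns:p1=http://cpi.sap.com/demo;"
    "xmlns:p2=http://sap.com/xi/XI/SplitAndMerge"
)
# ns == {"p1": "http://cpi.sap.com/demo",
#        "p2": "http://sap.com/xi/XI/SplitAndMerge"}
```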
1. In the integration flow model, select the Message Mapping shape from the palette (under Mapping) and
place it in the model after the second Content Modifier.
3. Choose Create.
4. The graphical mapping editor is opened.
5. Choose Add source message and browse to the WSDL file of the source message (in our example, the file
A.wsdl). Choose Upload from file system.
6. Choose Add target message and repeat the steps with the WSDL file for the target message (B.wsdl).
7. Connect the fields of the source and target message in the following way using the cursor:
○ Connect orderNumber in the source message with orderNumber in the target message.
○ Connect supplierForename in the source message with supplierName in the target message.
○ Connect supplierSurname in the source message with supplierName in the target message.
○ Connect productName in the source message with productName in the target message.
8. Click supplierForename in the source message. The connectors between supplierForename and
supplierSurname (in the source message) and supplierName (in the target message) are highlighted.
9. In the section below the graphical mapping editor, select the function Concat (under Text). Connect the
source and target fields in the following way: connect supplierForename with string1 and
supplierSurname with string2, and connect concat with supplierName.
Furthermore, you can add a delimiter string (for example, / or an empty space).
This mapping function merges the fields supplierForename and supplierSurname from the source
message to one single field supplierName in the target message (where the values are separated in the
target field by the delimiter string).
10. Choose OK. The integration flow model is displayed again.
Before continuing to the routing step, we first add a local integration process shape to the integration flow
model.
1. In the palette, select Process → Local Integration Process and position the shape below the (main)
Integration Process shape.
2. In the palette, select Transformation → Filter and place the filter shape to the right of the start event of
the local integration process.
3. Click the Filter shape and, on the Processing tab, enter the following string in the XPath Expression field:
/p1:Order_MT
Keep the default setting of Value Type as Nodelist.
Sample Code
<ns1:Messages xmlns:ns1="http://sap.com/xi/XI/SplitAndMerge">
<ns1:Message1>
${property.original_payload}
</ns1:Message1>
<ns1:Message2>
${in.body}
</ns1:Message2>
</ns1:Messages>
This step creates the message (A,B), which will be passed on to the custom integration flow.
Again, you will notice the following dynamic expressions:
○ Expression ${property.original_payload} is replaced at runtime by the content of the original
message A (which was stored prior to the standard mapping step in the Exchange property).
○ Expression ${in.body} is replaced at runtime by the content of the message that enters the Content
Modifier. This is message B, which results from the standard mapping.
5. Add a Request-Reply step (you can find this in the palette under Call → External Call) to the right of the
Content Modifier.
6. Add a receiver shape below the local integration process and connect it to the Request-Reply step.
7. When connecting the shapes, select ProcessDirect as the Adapter Type.
8. The ProcessDirect adapter has only one parameter that you need to configure. Specify the Address as an
externalized parameter.
Select Externalize and, on the next screen, enter {{endpoint}} as Address.
Click in the area outside the integration process and the local integration process shapes and select
Externalized Parameters.
You will find the two parameters that you have already defined as externalized parameters (one from the
Content Modifier on the More tab and the endpoint address of the ProcessDirect adapter on the Receiver tab).
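The merged message body shown in the sample code above can be approximated in Python (a sketch of the runtime substitution of ${property.original_payload} and ${in.body}; the function and variable names are ours):

```python
BODY_TEMPLATE = """<ns1:Messages xmlns:ns1="http://sap.com/xi/XI/SplitAndMerge">
  <ns1:Message1>{original_payload}</ns1:Message1>
  <ns1:Message2>{in_body}</ns1:Message2>
</ns1:Messages>"""

def build_merged_body(original_payload: str, in_body: str) -> str:
    """original_payload stands in for message A' (stored before the standard
    mapping); in_body stands in for message B (the mapping output)."""
    return BODY_TEMPLATE.format(original_payload=original_payload,
                                in_body=in_body)

merged = build_merged_body("<p1:Order_MT>...</p1:Order_MT>",
                           "<p1:Order_MT>...</p1:Order_MT>")
```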
Now that you have defined the local integration process, you can continue with the design of the main
integration process.
1. In the main integration process, add a Router step to the right of the message mapping. You can find the
step type in the palette under Message Routing → Router.
2. Add another end message event (from the palette under Events) and place it below the existing one.
3. Connect the Router step with the second end message event.
4. Click the upper route (connection of router step and upper end message event) and, on the Processing tab,
select Default Route.
6. In the lower route, add a Process Call step (which you can find in the palette under Call → Local Call).
Select the Process Call shape and, on the Processing tab, choose Select. You can now select the local
integration process that you have already defined in a preceding step.
7. In the lower route, after the Process Call shape, add a Filter step (you can find this in the palette under
Transformation → Filter).
This part of the integration flow model should now look like this:
8. On the properties sheet of the Filter step, under Processing, add the following expression in the XPath
Expression field:
/p2:Messages/p2:Message1/p1:Order_MT
Note that this setting contains two different namespace prefixes, p1 and p2. The mapping of these prefixes
with the relevant namespaces is defined on the Runtime Configuration tab of the integration flow (as shown
in Defining the Standard Message Mapping [page 433]).
Using different namespace prefixes and addressing them correctly as described is necessary to make sure
that the correct parts of the message are filtered. Note that this Filter step comes after the Process Call
step, and so it gets the message processed in the post-exit integration flow as input.
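To see why both prefixes matter, here is a small Python experiment that applies the equivalent of the Filter step's XPath to a minimal merged message (the payload content is invented for illustration):

```python
import xml.etree.ElementTree as ET

NS = {"p1": "http://cpi.sap.com/demo",
      "p2": "http://sap.com/xi/XI/SplitAndMerge"}

merged = """<p2:Messages xmlns:p2="http://sap.com/xi/XI/SplitAndMerge">
  <p2:Message1>
    <p1:Order_MT xmlns:p1="http://cpi.sap.com/demo">
      <orderNumber>10</orderNumber>
    </p1:Order_MT>
  </p2:Message1>
  <p2:Message2/>
</p2:Messages>"""

root = ET.fromstring(merged)
# Equivalent of the Filter step's XPath /p2:Messages/p2:Message1/p1:Order_MT:
nodes = root.findall("p2:Message1/p1:Order_MT", NS)
assert len(nodes) == 1
assert nodes[0].find("orderNumber").text == "10"
```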
Note
● The sender shape in this model represents the standard integration flow (which is the source integration or
producer integration flow). The custom integration flow receives the message from the producer
integration flow through the ProcessDirect adapter (as both integration flows are deployed on the same
tenant).
● The post-exit message mapping step contains the mapping as explained under Post-Exit Mapping [page
428].
Related Information
1. Create a new integration flow (this will be the custom integration flow).
2. Remove the receiver component.
3. Connect the sender shape with the start message event and select ProcessDirect as Adapter Type.
4. Configure the address of the ProcessDirect adapter as an externalized parameter in the same way as
described in Defining the Local Integration Process [page 437] for the ProcessDirect receiver adapter in the
standard integration flow.
It is a prerequisite for the custom mapping that you upload the WSDL files defining the source and target
message structures to the Resources tab of the integration flow (as shown for the standard mapping in Defining
the Standard Message Mapping [page 433]). As you can see, three message structures, A, B, and C, are
involved in the mapping, and the corresponding resource files (WSDLs) have to be made available to the
integration flow.
To make it easy for you to define the mapping step, we provide the content of the WSDL files below.
You can use this content to create the WSDL file for the source and target message of the custom mapping:
● The WSDL file describing the original message A (as contained in the standard integration content)
Use file A.wsdl from Defining the Standard Message Mapping [page 433].
● The WSDL file describing the message structure after the standard mapping (message B)
Use file B.wsdl from Defining the Standard Message Mapping [page 433].
However, note one important detail: in this WSDL file, you need the original namespace prefix p1. Otherwise, message processing results in a namespace conflict and the mapping step fails with an error.
Correspondingly, to define the WSDL file for message B (B.wsdl), copy the following content:
Sample Code
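The sample content is not reproduced here. As an illustration only, a B.wsdl sketch could look as follows; the element name Order_MT_B and the field list are placeholder assumptions based on the demo order message, and the essential detail is that the schema keeps the original namespace prefix p1 for http://cpi.sap.com/demo:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative sketch of B.wsdl: Order_MT_B and the fields are placeholders.
     The key point is the original namespace prefix p1 (see the note below). -->
<wsdl:definitions xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
                  xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                  xmlns:p1="http://cpi.sap.com/demo"
                  targetNamespace="http://cpi.sap.com/demo">
  <wsdl:types>
    <xsd:schema targetNamespace="http://cpi.sap.com/demo">
      <xsd:element name="Order_MT_B">
        <xsd:complexType>
          <xsd:sequence>
            <xsd:element name="orderNumber" type="xsd:string"/>
            <xsd:element name="supplierForename" type="xsd:string"/>
            <xsd:element name="supplierSurname" type="xsd:string"/>
            <xsd:element name="productName" type="xsd:string"/>
          </xsd:sequence>
        </xsd:complexType>
      </xsd:element>
    </xsd:schema>
  </wsdl:types>
</wsdl:definitions>
```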
Note
Both WSDL files together define the structure of the merged message (A, B), which is passed on from the
standard integration flow (as you will see when defining the message mapping).
For target message C, create a WSDL file with the following content and import it to the integration flow
Resources tab:
Sample Code
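As an illustration only (the element name Order_MT_C and the field types are placeholder assumptions), a C.wsdl sketch could define the same fields as message B plus the additional field additionalField:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative sketch of C.wsdl: Order_MT_C and the field list are placeholders.
     Message C mirrors message B and adds the field additionalField. -->
<wsdl:definitions xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
                  xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                  xmlns:demo="http://cpi.sap.com/demo"
                  targetNamespace="http://cpi.sap.com/demo">
  <wsdl:types>
    <xsd:schema targetNamespace="http://cpi.sap.com/demo">
      <xsd:element name="Order_MT_C">
        <xsd:complexType>
          <xsd:sequence>
            <xsd:element name="orderNumber" type="xsd:string"/>
            <xsd:element name="supplierForename" type="xsd:string"/>
            <xsd:element name="supplierSurname" type="xsd:string"/>
            <xsd:element name="productName" type="xsd:string"/>
            <!-- The one field that distinguishes C from B -->
            <xsd:element name="additionalField" type="xsd:string"/>
          </xsd:sequence>
        </xsd:complexType>
      </xsd:element>
    </xsd:schema>
  </wsdl:types>
</wsdl:definitions>
```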
Note that message structure C is similar to message structure B; it has one additional field
additionalField. Save the file with the name C.wsdl.
1. Add a message mapping shape between the start message and the end message event.
Choose the + icon and define a name for the message mapping.
2. Choose Create.
The mapping editor opens.
3. Select Add source message and browse to the file A.wsdl.
4. Choose the Edit message icon.
9. Now, open the upper part of the source structure and connect the field supplierSurname in the source
structure with the field additionalField in the target structure.
The figure below shows the final mapping and also indicates which parts are associated with the messages
A, B, and C.
10. Finally, to apply the mapping function for the connection of the fields supplierSurname (source structure) and additionalField (target structure), open the mapping expression and choose Text > toLowerCase. Connect the fields in the following way: connect supplierSurname with toLowerCase, and connect the latter with additionalField.
11. Choose OK to return to the integration flow model.
12. Save and deploy the integration flow.
To execute the scenario, you need to install a SOAP client (for example, SoapUI) and configure it accordingly.
You need to configure the SOAP client so that it calls the endpoint address of the standard integration flow.
More information: Set Up the SOAP Client and Start Message Processing [page 665]
Sample Code
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:demo="http://cpi.sap.com/demo">
<soapenv:Header/>
<soapenv:Body>
<demo:Order_MT>
<orderNumber>1</orderNumber>
<supplierForename>WALTER</supplierForename>
<supplierSurname>SMITH</supplierSurname>
<productName>Notebook</productName>
</demo:Order_MT>
</soapenv:Body>
</soapenv:Envelope>
1. Configure the standard integration flow (open the standard integration flow and choose Configure).
2. Specify a value for the Address parameter of the ProcessDirect adapter (or leave the default value as it is).
3. Go to the More tab and leave the default value for the custom_extension_enabled parameter as false.
8. Repeat all these steps, but with the externalized parameter custom_extension_enabled of the standard
integration flow specified as true.
9. The SOAP client should receive a response in which the custom mapping has been applied.
The field additionalField is available and contains the content from the original field
supplierSurname (but uppercase characters have been transformed to lowercase characters).
This section explains how to do mapping extensions based on a real example from the product area SAP Hybris
Cloud for Customer (C4C).
Note
For more background information regarding this example, check out the following blog in SAP Community:
Extending standard integration flow to support Customer extensions .
Let's assume that you want to do a mapping extension that also includes the extension of the source message
structure. This means that both a pre-exit and a post-exit are involved. The integration flows discussed in this
section have the same structure as those explained in Integration Flow Extension - Concepts [page 419].
● The sender is an SAP ERP system sending an IDoc message to Cloud Integration. An IDoc adapter is used
as the sender channel for the standard integration flow.
● The receiver is an SAP Cloud application that expects a SOAP message (through the SOAP adapter).
● You also need to extend the source IDoc message. To support this use case, the standard integration flow
contains a pre-exit. The pre-exit step will then enable you to design a corresponding pre-exit integration
flow in order to map the extended source message to the original message structure. The message
processed by the pre-exit integration flow can then be consumed by the standard mapping.
You need to create two integration flows, one for the pre-exit mapping and another one for the post-exit
mapping. We explain the required steps to extend the mapping and to connect the custom integration flows
with the standard integration flow.
The following figure shows the overall message flow between the standard integration flow and the pre-exit and
post-exit integration flows, including sender and receiver.
● Message A
Contains the original source message structure that is defined by the requirements of the sender system
and can be consumed by the standard mapping.
● Message A'
Contains an extended source structure (defined by the customer-specific requirements with regard to the
sender system).
Typically, the customer derives message structure A' from message structure A by adding additional fields.
In our example, the sender system is an SAP ERP system that sends an IDoc to SAP Cloud for Customer.
Messages A and A' are IDoc messages.
● Message B
Note that the letters A, A', and so on correspond to the notation as used in the general concept description in
Integration Flow Extension - Concepts [page 419].
Related Information
4.7.3.1 Prerequisites
The standard integration flow (provided as part of a predelivered integration content package) with both a pre-
exit and a post-exit has the following design. The message flow as explained above is also depicted.
Prerequisite: You have copied the integration package with the standard integration flow into your own
workspace.
We also assume that you, the customer, have already defined the source message structure (the extended IDoc
structure A') and the target message structure that depends on the intended final mapping (this is referred to
as message C).
Note
The WSDL files defining the message structures A and B (source and target message of the standard
mapping) are contained as part of the standard integration flow (from the predelivered integration
package).
You can get these by deploying the standard integration flow (from the predelivered integration package)
on your customer tenant, opening the standard integration flow, clicking in the area outside the (local)
integration process shapes, and selecting the Resources tab. Here, you will find the related WSDL files and
you can download them.
The following figure shows the resources as uploaded to the standard integration flow:
The standard integration flow contains a mapping that transforms message A to message B. The following
figure shows this mapping as displayed in the mapping editor:
Note
Let's assume that you want to extend the source structure by additional fields.
1. Extend the source structure of the message (to get message A').
You want to add additional elements to the source message, for example, the following element:
Z101YANEXT2.
Edit the WSDL file for message A (COD_STOCK_REPLICATE.COD_STOCK_REPLICATE01.wsdl)
accordingly (using the WSDL for message A from the Resources tab of the standard integration flow).
Save the new WSDL file under any name (for example, YANCOD_STOCK_REPLICATE01.wsdl).
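As a purely hypothetical sketch (the real WSDL delivered with the standard package defines the full IDoc structure; the enclosing sequence, field names, and types shown here are invented for illustration), the edit amounts to adding the new segment element to the IDoc's sequence of segments:

```xml
<!-- Hypothetical excerpt from YANCOD_STOCK_REPLICATE01.wsdl -->
<xsd:sequence>
  <!-- ... original IDoc segments as delivered in
       COD_STOCK_REPLICATE.COD_STOCK_REPLICATE01.wsdl ... -->
  <!-- Added customer segment -->
  <xsd:element name="Z101YANEXT2" minOccurs="0">
    <xsd:complexType>
      <xsd:sequence>
        <!-- Illustrative customer field; the real fields depend on your extension -->
        <xsd:element name="ZZ_CUSTOM_FIELD" type="xsd:string" minOccurs="0"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
</xsd:sequence>
```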
2. Create a new integration flow.
We recommend that you add this new integration flow to the package that contains the standard
integration flow (both have to be deployed on the same tenant).
3. Model the integration flow according to the following figure:
4. When connecting the sender shape with the message start event, select the ProcessDirect adapter type.
Go to the Connection tab of the ProcessDirect adapter and as Address enter any value starting with a slash
(/), for example /preexit_flow.
5. In the integration flow editor, click somewhere outside the Integration Process shape and select Resources.
6. Choose Add Schema WSDL and browse to the WSDL file defining the source message (message A',
this is file YANCOD_STOCK_REPLICATE01.wsdl).
In the same way, add the WSDL file for message A to the integration flow (file
COD_STOCK_REPLICATE.COD_STOCK_REPLICATE01.wsdl).
7. Repeat these steps for the target message (here, add the WSDL file for message A). Note that the pre-exit steps map the extended source message A' to the original message A. As mentioned above, you get the WSDL file for message A from the Resources tab of the standard integration flow.
9. Choose the + icon next to the message mapping shape and enter a name for the message mapping (for
example, pre_exit_mapping).
13. Choose Add target message and repeat the steps with the WSDL file for the target message (message A).
14. Connect all fields that are identical in the source and target mapping without assigning any mapping
function. Do not connect the additional fields (which have no equivalence in the target message A) to any
target field.
Create another integration flow with the same shapes as in the pre-exit integration flow (also within the same
integration package). Make the following specific settings:
● In the Address field of the ProcessDirect adapter, enter a string that is different from the one in the pre-exit integration flow, for example /postexit_flow.
● Click in the area outside the integration process shape, go to the Resources tab, and upload the WSDL files
for the messages A', B, and C.
Note that you get the resource for message B from the standard integration flow, whereas the resource for
message C depends on the specific custom enhancements related to the target structure. Typically,
message C is similar to message B, but contains additional fields.
● Configure the message mapping in the following way:
For the source message, upload both resource files for messages A' and B. For the target message, upload
the resource file for message C.
You can proceed in the same way as described for the demo example in Creating the Custom Mapping
[page 442].
To ensure that the standard integration flow and the pre- and post-exit integration flows are processed in conjunction as expected, make sure that the ProcessDirect adapter addresses match correctly.
The ProcessDirect adapter address of the pre-exit integration flow must match the ProcessDirect adapter
address in the pre-exit of the standard integration flow.
The same applies for the ProcessDirect adapter address of the post-exit integration flow and the corresponding
ProcessDirect adapter address in the post-exit of the standard integration flow.
The content transport mechanism enables you to reuse content between multiple tenants by exporting it from
one tenant and importing it on another.
Let us consider an example where you have two tenants, test and production. You design and test an
integration flow on the test landscape and it works as expected. Instead of redesigning the integration flow on
the production landscape, you can use the same integration content that you have tested in the test landscape
in the production landscape.
There are currently certain limitations when working in the Cloud Foundry environment. For more
information on the limitations, see SAP Note 2752867 .
● Transport using CTS+: In this option, with a single click, you can transport integration content from one
landscape to another through CTS+ system. For more information, see Content Transport Using CTS+
[page 460].
● Transport using Transport Management Service: In this option, with a single click, you can transport
integration content from one landscape to another through Transport Management Service. For more
information, see Content Transport Using Transport Management Service [page 461].
● MTAR Download: In this option, you download an MTAR file from the tenant whose integration content you want to export and manually upload this MTAR file to a CTS+ system. For more information, see Content Transport using MTAR Download [page 462].
● Manual export and import: In this option, you use the Export feature in your workspace to export integration content from the source tenant and manually import it into the target tenant. For more information, see Content Transport using Manual Export and Import [page 463].
Note
● If the configuration or the transport results in an error, the error codes are displayed along with the
error message.
● The externalized parameters/configured values of the integration flows will not be overwritten during a
content re-transport.
Related Information
All the tasks mentioned here are one-time activities. It is recommended that the tenant administrator performs
these tasks to facilitate content transport.
Remember
There are currently certain limitations when working in the Cloud Foundry environment. For more
information on the limitations, see SAP Note 2752867 .
Enable the Solutions Lifecycle Management service in your account. Ensure that you perform this step in all the accounts where you want to use content transport. This is a one-time activity. You can do this by performing the following steps:
The content transport mechanism gives you the flexibility of either exporting integration content directly to a CTS+ system or exporting the content as an MTAR file to a local file system. To use either of the modes, you need to set the Transport Mode in the settings. You can do this by navigating to Settings in the Web UI and choosing the Transport tab. Based on your requirement, select CTS+, Transport Management Service, or MTAR Download from the dropdown list.
Remember
● Please note that this step needs to be performed on the tenant from which you want to export content.
For example, if you are exporting content from your Test tenant, you need to perform this step only in
the Test tenant.
● Transport Settings will be available for you only if you have the role AuthGroup.Administrator assigned
to your account. Please assign the role to your user if you do not see the Transport Settings.
Tip
If you do not see the Settings option, contact your tenant administrator to perform this step.
Next Steps
You have to create HTTP destinations to facilitate connections between your tenant, the Solutions Lifecycle Management service, and the CTS+/Transport Management Service systems. If you want to use the:
● MTAR Download option, create an HTTP destination in Solutions Lifecycle Management for your tenant management node.
● CTS+ option, create two HTTPS destinations in Solutions Lifecycle Management, one for your tenant management node (Cloud Integration application) and one for the CTS+ system.
● Transport Management Service option, create three HTTPS destinations in Solutions Lifecycle Management: one for your tenant management node (Cloud Integration application), another one for getting OAuth tokens, and a third one for the Transport Service back end.
You must create a destination in Solutions Lifecycle Management as the first step to enable content transport.
Prerequisites
Remember
There are currently certain limitations when working in the Cloud Foundry environment. For more
information on the limitations, see SAP Note 2752867 .
You must enable the Solutions Lifecycle Management service. For information on how to do this, see Enabling
Content Transport [page 457].
Context
You should create a destination in the Solutions Lifecycle Management service to enable import of content into
your target tenant.
Procedure
Tip
You see the page header as Service: Solutions Lifecycle Management - Overview.
Type: HTTP
Description: You can provide a description for your reference. This field is optional.
URL: Provide the URL of the system that you want to create a destination to.
Remember
Ensure that the user whose credentials are used in the destination is a member of the SAP Cloud Platform Integration account from which you want to transport content. The user must also have the roles AuthGroup.IntegrationDeveloper and IntegrationContent.Transport.
5. Choose Save.
Prerequisites
Remember
There are currently certain limitations when working in the Cloud Foundry environment. For more
information on the limitations, see SAP Note 2752867 .
● You have set the Transport Mode as CTS+ in the tenant Transport settings. For more information, see
Enabling Content Transport [page 457].
● You have created two HTTP destinations on Solutions Lifecycle Management: one for the tenant management node (for more information, see Creating HTTP Destination in Solutions Lifecycle Management [page 458]) and one for the CTS+ system via the Cloud Connector (for more information, see Integration with Transport Management Tools).
Context
SAP Cloud Platform Integration provides an option to transport integration content directly to CTS+ system.
You can then transport this content from the CTS+ system to your target SAP Cloud Platform Integration
tenant. Here's how you can transport content to CTS+ directly:
Procedure
Remember
You are not allowed to use the character "_" in the Transport Comments field.
You see a prompt with the Transport ID. The integration package will be transported to the CTS+ system.
4. To import the content in the target system, follow the steps mentioned.
Prerequisites
Remember
There are currently certain limitations when working in the Cloud Foundry environment. For more
information on the limitations, see SAP Note 2752867 .
Note
Transport Management Service has been moved from Beta version to GA version.
● You have set the Transport Mode as Transport Management Service in the tenant Transport settings. For
more information, see Enabling Content Transport [page 457].
● You have created two HTTP destinations on Solutions Lifecycle Management, one for the tenant
management node and one for the Transport Management Service system. For more information, see
Creating HTTP Destination in Solutions Lifecycle Management [page 458].
● For more information on integration with Transport Management tools, see: Integration with Transport
Management Tools
Context
SAP Cloud Platform Integration provides an option to transport integration content directly to Transport
Management Service system. You can then transport this content from the Transport Management Service
system to your target SAP Cloud Platform Integration tenant.
For more detailed information with screenshots on working with Transport Management Service, see Cloud
Integration – Using Transport Management Service for a Simple Transport Landscape on SAP Community.
Here's how you can transport content to Transport Management Service directly:
Procedure
You are not allowed to use the character "_" in the Transport Comments field.
You see a prompt with the Transport ID. The integration package will be transported to the Transport Management Service system.
4. To import the content in the target system, follow the steps mentioned in official documentation for
Transport Management Service .
Prerequisites
Remember
There are currently certain limitations when working in the Cloud Foundry environment. For more
information on the limitations, see SAP Note 2752867 .
● You have logged into the SAP Cloud Platform Integration web application source and target tenants.
Access the (Design) menu.
● You have selected MTAR Download as your Transport Mode. For more information, see Enabling Content
Transport [page 457].
Context
You can use the MTAR Download option to download MTAR file(s) of the integration content to be transported.
Procedure
Remember
You are not allowed to use the character "_" in the Transport Comments field.
Prerequisites
You have logged into the SAP Cloud Platform Integration web application source and target tenants. Access the
(Design) tab (workspace).
Context
One of the options to transport content from one tenant to another is to use the Export and Import options for your integration package. The application exports the integration package to your local file system in the form of a .zip file. You can import the same file in the target tenant using the Import option.
Procedure
SAP Cloud Platform Integration allows you to transport integration packages from a source tenant to a target
tenant in order to re-use the content of the integration package (integration flows, value mappings, OData
services) on the target tenant.
However, having transported an integration package to a target tenant does not mean that the contained integration flows run without any further actions.
For example, you need to make sure that security material and keystore content that is used or referenced by the integration flow is also available on the target tenant. If that is not the case, the integration flow will fail.
This topic provides an overview of how to handle the different kinds of artifacts to make sure that integration content can also be executed without errors on the target tenant.
● Integration flow, value mapping, OData service (Design view): Can be transported as part of an integration package (with one of the chosen transport options, such as manual export/import, CTS+, or the SAP Cloud Platform Transport Service).
● Integration flow resources, such as schemas and mappings (Design view): Are transported along with the integration flow.
● Public key or certificate (Operations view/Keystore): Download the public part of the key pair (certificate, certificate chain, root certificate, signing request) from the source tenant and import it to the target tenant.
● Private/public key pair (Operations view/Keystore): You need to newly create the key pair and upload it to the keystore of the target tenant.
● User Credentials artifact (Operations view/Security Material): Cannot be reused; newly create the artifact on the target tenant (and use the same artifact name in order not to break any integration flow references to the artifact).
● Secure Parameter artifact (Operations view/Security Material): Cannot be reused; newly create the artifact on the target tenant (and use the same artifact name in order not to break any integration flow references to the artifact).
● OAuth2 Credentials (Operations view/Security Material): Cannot be reused; newly create the artifact on the target tenant (and use the same artifact name in order not to break any integration flow references to the artifact).
● Known Hosts (Operations view/Security Material): Newly upload the known hosts file to the target tenant.
● PGP Public Keyring (Operations view/Security Material): Newly upload the keyring file to the target tenant.
● PGP Secret Keyring (Operations view/Security Material): Newly upload the keyring file to the target tenant.
● JDBC Data Source (Operations view/JDBC Data Sources): Cannot be reused; newly create the artifact on the target tenant (and use the same artifact name in order not to break any integration flow references to the artifact).
● Partner Directory content: You need to migrate the Partner Directory content using the OData API.
If you plan to automate certain manual steps described in this topic, you can use the SAP Cloud Platform Integration OData API, for example, for the following tasks:
● Reading a security artifact from the source tenant and adding it to the target tenant
● Exporting a certificate from the source tenant and importing it to the target tenant
● Generating a key pair
● Creating a certificate-to-user mapping
SAP Cloud Platform Integration enables you to fetch content directly from the ES Repository. This enables you to reuse the integration content that you already have in your ES Repository and avoid the overhead of creating that content again in the Cloud Integration web application.
To enable this content import feature, you have to configure the connection settings for the ES Repository. Since this is an on-premise system, you need to connect to it via the Cloud Connector.
For more information on Cloud Connector, see SAP Cloud Platform Connectivity Cloud Connector
documentation.
You can also check out this blog on SAP Cloud Platform Integration community: Importing Message Mapping
from ES Repository in SAP Cloud Platform Integration .
Prerequisites
Context
You have to configure connectivity to the ES Repository from SAP Cloud Platform Integration in the Settings tab of your application. You need to have the tenant administrator role to access this tab. If you do not see the icon in your application, contact your tenant administrator to obtain this role or to configure connectivity to the ES Repository.
Procedure
1. Choose Settings.
2. Choose the ES Repository tab.
You see fields that enable you to configure connectivity to the ES Repository.
3. Choose Edit. Provide values in the fields based on the descriptions in the table:
Field Description
Address Enter the URL configured in the Cloud Connector that connects to the ES Repository.
Credential Name Enter the credential name that you have deployed in the
operations view of this application.
Location ID Enter the location ID that you have specified in your cloud
connector.
4. Choose Save.
Results
Prerequisites
You have configured a connection to the ES Repository in the Settings tab. For more information, see Configuring Connectivity to ES Repository [page 467].
You have opened the integration flow in which you want to add the integration content and you are editing it.
Context
After you have configured the connection to ES Repository, you can import content from it in the Resources tab
of the integration flow editor. Currently, you can import:
● Message mapping
● Operation mapping
● WSDL
Restriction
Procedure
2. Choose Add > Mapping > Message Mapping/Operation Mapping, depending on the resource you want to import.
Remember
There are currently certain limitations when working in the Cloud Foundry environment. For more
information on the limitations, see SAP Note 2752867 .
This feature enables you to define an association between fields of messages with different structures. For example, consider the record Employee, in which we need to update the employee identification number. In the sender system, the field name is Employee ID. However, in the receiver system, the same field is called ID. The table illustrates how the same fields can have different identifiers in the source and target systems.
Employee User
Employee ID ID
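To make the difference concrete, here is a minimal sketch (the XML element names and the sample value are illustrative assumptions) of how the same value appears in both systems:

```xml
<!-- Sender system: record Employee -->
<Employee>
  <EmployeeID>100234</EmployeeID>
</Employee>

<!-- Receiver system: the same field is called ID in record User -->
<User>
  <ID>100234</ID>
</User>
```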
Caution
If you delete a message mapping step and the mapping definition resource associated with it is not used in
any other message mapping step, then the resource also gets deleted.
Use
You use message mapping to define an association between fields of messages with different structures. This enables the Cloud Integration system to recognize and update the relevant fields in the target systems.
Remember
There are currently certain limitations when working in the Cloud Foundry environment. For more
information on the limitations, see SAP Note 2752867 .
Let us consider the example of replicating employee details from system A to system B. Each employee record in system A contains name, date of birth, address, and phone number. System B provides similar functionality. However, the field names are different in system B. In system A, the employee date of birth is stored in the DOB field, whereas in system B it is stored in the birthday field. In such
scenarios, you need an artifact that can define what the equivalent fields are in system A and system B
respectively. Message mapping can help you achieve that. Using a graphical editor, you can define a table or a
map (hence the name mapping) which the system uses as a reference to identify equivalent fields in system A
and system B.
In the same scenario, let us assume that the date of birth in system A is in YYYY-MM-DD format. You want to
change the format to DD-MM-YYYY, the format in system B. In this case, you can use a mapping function that
will transform the data into the format that you want, which in this case is DD-MM-YYYY.
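For illustration (the element names and the sample value are assumptions), the mapping function changes only the representation of the value:

```xml
<!-- System A: date of birth in YYYY-MM-DD format -->
<DOB>1990-05-31</DOB>

<!-- System B: the same date after the mapping function, in DD-MM-YYYY format -->
<birthday>31-05-1990</birthday>
```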
The mapping editor provides some standard functions like Arithmetic, Boolean, Constants, Conversions and
Date.
Note
1. Simulate: The mapping simulate option enables you to test the entire mapping structure. The system will
show if the mapping contains any errors, giving you a chance to fix these errors before deploying the
integration flow. Once you complete the mapping, you can choose Simulate to run a simulation of the mapping.
2. Display Queue: The display queue option enables you to test the mapping of a specific node. In the Mapping
expression area, provide a Test Input File and choose (Display Queue) to display the simulated output for the
provided test input file.
Note
● Even if the integration flow is not in edit mode, you can run the simulate and display queue tests. You can therefore perform the tests for configure-only content as well.
● The input XML file uploaded for simulation can also be used for the display queue test, and vice versa.
Prerequisites
You have added a message mapping step in the integration flow and defined the message paths.
Context
Remember
There are currently certain limitations when working in the Cloud Foundry environment. For more
information on the limitations, see SAP Note 2752867 .
In this procedure, you need to understand the different aspects of creating a message mapping step and the
associated mapping definition (mmap) file.
Procedure
1. If you want to use an existing mapping definition (mmap file), select the mapping step, choose Processing > Select, and select the mapping definition file.
2. If you want to create a new mapping definition (mmap file), perform the following substeps:
a. Select the mapping step and choose (Create).
b. Specify a name for the mmap file and choose Create.
c. Add source and target messages.
External references in WSDL and schema files are now supported in message mapping resources.
Sample Code
d. Drag fields from the source to the required field in the target to create a mapping.
e. If you want to perform an operation, add the required function in the Mapping Expression screen area.
3. You can create your own custom mapping function. Here's how:
a. In the Mapping Expression screen area, choose .
b. Enter the name of the script, which will be the name of the custom function and choose OK.
Results
After following the steps mentioned above, the setup looks similar to this screenshot.
The highlighted sections briefly explain the various features and behaviors of elements in the mapping viewer.
● This button maps all the entities of the selected node from the source structure to the target structure.
Note
The order of mappings in the spreadsheet is not guaranteed to match the order in the mapping editor.
You can also upload a message mapping file from another integration flow. To learn more about adding resources, see Manage Resources of an Integration Flow [page 912].
Prerequisites
You have ensured that your message mapping is complete, that is, your sources are mapped to your targets.
Context
When you create message mappings, you need to check whether they function as expected. By simulating a mapping, you can verify that it is correct and works as intended.
Note
Message Mapping Simulation is supported only for the following product profiles:
Procedure
1. Launch SAP Cloud Platform Integration web application by accessing the URL provided by SAP.
Note
Browsers that support the application are Internet Explorer 10, Google Chrome, and Safari.
2. Choose .
Note
You can edit the integration flow only if you choose Edit.
The input XML file data is displayed on the source message table.
9. Choose Test.
The output XML file data is displayed on the target message table.
10. If you want to download the output test XML, choose Download.
Note
Prerequisites
Context
You use the value mapping artifact to represent multiple values for a single object. For example, a product in Company A is referred to by its first three letters as 'IDE'. The same product is referred to in Company B by the product code '0100IDE'. When Company A sends a message to Company B, it needs to take care of the difference in the representations of the same product. So Company A defines an integration flow with a mapping element that contains a reference to the value mapping definition. You create such value mapping groups in a Value Mapping artifact.
1. Choose the integration package where you want to add the Value Mapping artifact and choose Edit.
Operation mapping is used to relate an outbound service interface operation with an inbound service interface operation. You can also relate IDoc and RFC interfaces with entities of the same type or with service interface operations.
Note
You can only assign an operation mapping to an integration flow; creating an operation mapping is not supported.
Note
To know more about importing mapping content into an integration flow, see Importing Mapping Content from ES Repository [page 468].
Prerequisites
● You have created a value mapping artifact. For more information, see Creating Value Mapping [page 476].
● You are editing the value mapping artifact.
Context
You create a value mapping artifact to act as a bidirectional lookup table. Value mapping offers the distinct advantage of bidirectional lookup capabilities, which are used quite often in productive mapping scenarios. You can also import CSV files exported from process integration (PI) systems into your value mapping.
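Conceptually, a value mapping group behaves like a two-way lookup table keyed by agency and identifier. The following is a minimal sketch of that behavior, not product code; the class and the agency/identifier names are illustrative only (the product values 'IDE' and '0100IDE' come from the example above):

```python
# Minimal sketch of a bidirectional value-mapping lookup table.
# Class name and structure are illustrative, not a product API.
class ValueMapping:
    def __init__(self):
        self.forward = {}   # (agency, identifier, value) -> target value
        self.backward = {}  # (agency, identifier, value) -> source value

    def add(self, src_agency, src_id, src_value, tgt_agency, tgt_id, tgt_value):
        self.forward[(src_agency, src_id, src_value)] = tgt_value
        self.backward[(tgt_agency, tgt_id, tgt_value)] = src_value

    def map(self, agency, identifier, value):
        # Look up in either direction, hence "bidirectional".
        return (self.forward.get((agency, identifier, value))
                or self.backward.get((agency, identifier, value)))

vm = ValueMapping()
vm.add("CompanyA", "ProductName", "IDE", "CompanyB", "ProductCode", "0100IDE")
print(vm.map("CompanyA", "ProductName", "IDE"))      # 0100IDE
print(vm.map("CompanyB", "ProductCode", "0100IDE"))  # IDE
```

The same table answers lookups from either company, which is what makes a single value mapping artifact reusable in both message directions.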
Procedure
1. If you want to import a CSV file that contains value mappings, choose Import and select the CSV file you
want to import.
2. In the Bi-Directional mapping section, choose Add to add a bi-directional mapping.
3. In the Value mappings for section, choose Add to add a value mapping.
There are scenarios in which the values in the Source Agency are unique but the values in the Target Agency are repetitive for different source values. In such cases, default values are generated and auto-assigned by the application. If you want to modify the default value, choose the Default Values tab and select the Target Value to set as the default for the source agency.
4. Provide values for the agency and identifier in the respective fields.
5. Choose to remove a bi-directional or value mapping.
You can choose Delete All to remove all bi-directional and value mappings from the respective sections.
6. Choose Save to keep the changes.
7. Choose Save as version to retain a copy of the current artifact.
You can view the version history of an artifact by choosing the current version mentioned along with it.
8. Choose Cancel to revert the changes.
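The default-value behavior described in step 3 can be sketched as follows. This is a simplified illustration: the agency values and the first-mapping-wins rule are assumptions for the sketch, not the product's documented algorithm.

```python
# Sketch: several source values map to the same target value, so the
# reverse (target -> source) lookup needs one default per target value.
# Here the first mapping seen becomes the auto-assigned default.
pairs = [
    ("Receiver_DE", "DE"),
    ("Receiver_DE2", "DE"),   # repeated target value "DE"
    ("Receiver_US", "US"),
]
defaults = {}
for source, target in pairs:
    defaults.setdefault(target, source)  # keep the first source seen

print(defaults["DE"])  # auto-assigned default; changeable in the Default Values tab
print(defaults["US"])
```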
Note
If you edit a web-edited value mapping in Eclipse, you get a default value for 1:N, M:1, and M:N mappings. SAP recommends that each group contain only one agency, identifier, and value pair.
Some of the function names in the Web UI differ from the ones in Eclipse.
Deploying a value mapping with configuration details copied from another deployed value mapping is not recommended.
SAP Cloud Platform Integration supports importing CSV files exported from process integration (PI) systems into value mapping. This allows you to reuse the value mappings from your PI system and save time.
The CSV files that can be imported to the value mapping should adhere to the same formatting guidelines as
the CSV exported from PI systems. Here's a typical example of a valid CSV file that can be imported into value
mapping:
Sample Code
,Country1|Receiver1,Country1|Code1
Receiver_US,N75_AAEX_FileSystem_XiPattern2|,US
Receiver_DE,N75_AAEX_FileSystem_XiPattern1|,DE
Receiver_DE,N75_AAEX_FileSystem_XiPattern3|,UK
From the example above, we can infer the following guidelines for a CSV file that can be imported in value
mapping:
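As a rough sketch, the sample above can be read with Python's standard csv module. This only demonstrates the comma-separated, pipe-qualified layout visible in the sample; the full column semantics are defined by the PI export format.

```python
import csv
import io

# Parse the sample PI-exported value-mapping CSV shown above.
# The header row carries agency|identifier pairs; the first column is empty.
sample = """\
,Country1|Receiver1,Country1|Code1
Receiver_US,N75_AAEX_FileSystem_XiPattern2|,US
Receiver_DE,N75_AAEX_FileSystem_XiPattern1|,DE
Receiver_DE,N75_AAEX_FileSystem_XiPattern3|,UK
"""
reader = csv.reader(io.StringIO(sample))
header = next(reader)
agencies = [col.split("|") for col in header[1:]]  # split agency|identifier
rows = list(reader)

print(agencies[0])  # ['Country1', 'Receiver1']
print(rows[0][2])   # US
```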
You perform this task to assign XSLT mapping that is available in your local workspace.
Context
Note
You can now leverage XSLT 3.0 capabilities through XSLT mapping version 1.2. Some of the capabilities are:
The XSLT Mapping step version 1.2 also allows invocation of the Java function call from the XSLT resource.
Procedure
1. Click Edit.
2. In the palette, choose XSLT Mapping.
3. In the General tab, enter a name in the Name field.
4. In the Processing tab, select either Integration Flow or Header for the mapping Source field.
a. If you select Integration Flow as the source, click Select to choose the .xslt or .xsl file from the Resource folder or the file system.
Note
○ To view or modify the content of the XSLT mapping in the editor, click the resource name in the Resource field.
○ For XSLT mapping, you can select the output format of the message, either string or bytes, from the Output Format field.
b. If you select Header as the source, enter a name for the header in the Header Name field.
Note
To assign an XSLT mapping from the partner directory, the valid format for the header name value is pd:<Partner ID>:<Parameter ID>:<Parameter Type>, where the parameter type is either Binary or String. For example, a correct header name is pd:SenderABC:SenderXSD:Binary.
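The header-name format above can be sketched as a simple string pattern. The example values SenderABC and SenderXSD come from the note above; the helper function itself is hypothetical, not a product API:

```python
# Hypothetical helper building a pd:<Partner ID>:<Parameter ID>:<Parameter Type>
# header name for a partner-directory XSLT lookup.
def pd_header_name(partner_id: str, parameter_id: str, parameter_type: str) -> str:
    # Only the two parameter types named in the documentation are valid.
    if parameter_type not in ("Binary", "String"):
        raise ValueError("parameter type must be Binary or String")
    return f"pd:{partner_id}:{parameter_id}:{parameter_type}"

print(pd_header_name("SenderABC", "SenderXSD", "Binary"))  # pd:SenderABC:SenderXSD:Binary
```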
New features are introduced through new versions of the components. To consume a new feature, you must migrate to the new version.
Prerequisites
Context
Once you migrate, old fields are retained with the same values and new fields are populated with default values. If no default value is present for a mandatory field, you must add a value.
Note
If you want to view the older version, save the integration flow as a version before migrating to the new version.
Caution
This note is relevant when the integration flow contains an older adapter version that does not yet support HTTP session handling:
If you want to migrate an integration flow to a new version that supports HTTP session handling, and the original integration flow (to be migrated) contains an earlier version of a receiver adapter that doesn't yet support HTTP session handling, you might run into a version conflict. In that case, the following error is displayed:
The following adapter types can be affected by this problem: HTTP, SOAP, or IDoc receiver adapter.
● Re-model the adapter in the integration flow migrated to the new version (as proposed in the error message). This means that you need to re-create the related communication channel.
● Revert to the earlier integration flow version. To do that, make sure that you save the original integration flow as a version prior to migrating it to the new version.
For additional information on version migration of integration flow components, you can check this SAP
community blog: Versioning & Migration of Components of an Integration Flow in SAP Cloud Platform
Integration’s Web Application .
Note
The migration option is available only if the selected component is not already at its highest version.
Note
You can also revert to the previous version after migrating, by executing the following steps:
1. Go to the package view.
2. Click the Draft version.
3. Click the timer icon to revert to the previous version.
With Runtime Configuration you can specify general properties of the integration flow.
Context
Procedure
1. Open the integration flow and select the graphical area outside the integration flow model.
2. Choose the Runtime Configuration tab.
3. Specify the following properties.
With the property Product Profile you choose the target runtime environment for the integration content
designed with the application.
A product profile defines a set of capabilities for Cloud integration content design supported by a specific
target integration platform. For example, a specific product profile supports the configuration of a specific
set of adapter types and integration flow steps.
Note
xmlns:ns0=https://hcischemas.netweaver.neo.com/hciflow
/ns0:HCIMessage/SenderID='Sender01'
Message Processing Log Configure the log level to display in the Monitoring editor.
Note
If you want to enter several headers with similar
names, use wildcards to make the entering faster.
○ None: Session handling is switched off.
○ On Exchange: Each exchange corresponds to a single session (use this option for stateful services).
○ On Integration Flow: Only one session is used across the whole integration flow (only use this option for stateless services).
Note
Once an HTTP session has been initialized, there is usually no further authentication for the duration of the session (one of the advantages of using sessions). This means that all further HTTP requests on that server are processed in the context of the user that was logged on when the session was initialized. If, however, this behavior does not meet your requirements (for example, the user is dynamic and can change from request to request), you can select either an exchange session scope (if the user remains the same for at least the processing of a single message) or no session.
Note
SuccessFactors (OData V4) and SuccessFactors (REST) adapters do not support HTTP session handling.
Related Information
Prerequisites
Context
The SAP Cloud Platform Integration Web application allows you to configure an integration flow individually or multiple integration flows at once. You can configure the runtime and error configuration, and also provide a description of the integration flow.
Note
If you want the timer or SuccessFactors sender adapter scheduler to be available in quick configuration, you need to externalize these parameters while creating the integration content.
If you want the SuccessFactors sender adapter scheduler to be available in quick configuration for existing integration flows, externalize the scheduler parameters and republish the integration package.
You must click the participant's name of the sender and receiver elements to view the header and property information. Also, you must drag and drop the message flow over the participant's name to assign communication channels.
Restriction
You cannot use the quick configure option for integration flows in the Monitor and Discover tab pages. You can only configure integration flows in your customer workspace (the Design tab page).
Procedure
1. In the Design tab page, select the integration package that contains the integration flow you want to configure.
2. Choose Package content.
You see an overview of all the artifacts available in the selected integration package.
3. In the Actions column for the integration flow you want to configure, choose and Configure.
4. Select the relevant details of the externalized components in the Sender, Receiver, or More tab.
5. To view and modify the externalized values for the integration flow configuration, integration flow steps, a local integration process, or an integration process:
a. Select Type of component.
Note
You can also view all parameters of the component using All Parameters option.
Note
You can use error configuration to handle errors when message processing fails at runtime. Select an error handling strategy based on the descriptions below:
Option Description
Raise Exception (Deprecated) If a message exchange could not be processed and there is an IDoc or SOAP (SAP RM) channel in the integration flow, an exception is raised to the sender.
Related Information
Prerequisites
Context
Remember
This component or some of its features might not be available in the Cloud Foundry environment. For more
information on the limitations, see SAP Note 2752867 .
SAP standard integration packages offer a feature where you can configure multiple integration flows that
connect to the same system at once. You need to enter the configuration details only once in the mass
configuration screen. All integration flows that connect to that system instance are updated with the details.
This eliminates the need for individually entering the configuration details in each integration flow.
For example, consider a scenario in which all the integration flows connect to the same instance of SAP ERP.
The configuration details like Host, Port and Client are the same for all integration flows. Once you have
specified these details in the mass configuration screen, the application updates these details in all the relevant
integration flows. This simplifies the task of configuring multiple integration flows that share common system
properties.
Remember
● The system consolidates field names into a single section only if they match exactly across all the integration flows you select for configuration. These field names are the sender's and receiver's system and authentication details.
● You can also configure authentication details during mass configuration.
● Configuring multiple integration flows is only supported for integration flows with SOAP and IDoc adapters, versions 1.0 and 1.1.
Procedure
2. Choose Design .
3. Select the package that contains the integration flows that you want to configure and choose Package
content.
Note
You can also download the artifacts to your local folder by choosing Download instead of Configure.
The system displays the externalized system properties pertaining to the selected integration flows that
you want to configure. If there are common properties, the application displays them as a single entry.
6. If you want to save the configuration, choose Save.
7. If you want to deploy the integration flows, choose Deploy.
Context
Use the Externalize feature to declare a parameter as a variable and reuse it across multiple components in the
same integration flow. You can define multiple parameters for each field, but you can only assign a single value
for each parameter. If a value contains common strings that can be reused, then you can split the value and
assign the strings to different parameters.
The Externalization Parameters value field of an integration flow is defined as follows:
1. Default value: The parameter value defined by editing it in the integration flow is called the parameter's default value. It's a predefined value that can be modified by the integration developer.
2. Configured value: The value configured from the configuration view is the configured value of a parameter.
Note
● Don't use the following characters while defining a parameter in an integration flow:
○ &
○ <
○ >
○ "
○ '
Example
Assume that in an integration flow you're configuring a communication channel with an HTTPS sender adapter, and you externalize the Address field by defining a parameter and value. In the externalization dialog box, for the Address field, you define a parameter as {{HostPort}} and its value as https://localhost:8080/dir. You've now declared a variable for the Address parameter that can be reused in different components in the same integration flow.
{{Host}}: localhost
{{Port}}: 8080
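The {{...}} substitution behaves like plain template expansion. A minimal sketch using the Host/Port split from the example above; the resolve helper is illustrative, not a product API:

```python
import re

# Expand {{parameter}} placeholders, as in the externalized Address field.
def resolve(template: str, params: dict) -> str:
    return re.sub(r"\{\{(\w+)\}\}", lambda m: params[m.group(1)], template)

params = {"Host": "localhost", "Port": "8080"}
print(resolve("https://{{Host}}:{{Port}}/dir", params))  # https://localhost:8080/dir
```

Splitting the single {{HostPort}} parameter into {{Host}} and {{Port}} lets each string be reused independently across components.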
Note
You can't reuse the same parameter name more than once for the same field, and defining multiple parameters for the same field or column isn't supported for tables.
To create a new parameter in the Externalization Editor, perform the following steps:
Example
{{Parameter_1}}
4. Choose the <Define Value> tag, and provide a default value for the parameter in the dialog box.
Note
You can only view configured value at this step. To configure the value of a parameter choose Configure.
Note
You can remove the externalization of a field by manually removing the parameter.
To create a new parameter for a cell in the table, perform the following steps:
1. In the relevant table cell, enter the parameter in curly brackets and press Tab .
2. Choose the <Define Value> tag, and provide a default value for the parameter in the dialog box.
3. Choose OK to save the changes.
Note
○ If you need to remove the externalization of a parameter, click the delete icon.
○ You aren't allowed to split a value into multiple strings.
Example
○ {{header_message}}
○ {{header_queryresponse}}
4. Enter the value of the parameter under the Parameter Value field in the table that appears on the right side
of the editor.
Note
The table allows you to define the values for multiple parameters at the same time.
Note
You can click Preview under the property sheet to view the resolved value of the parameter. The Preview button is enabled only after externalizing a text area attribute.
To create a new parameter in the property sheet of an integration component, perform the
following steps:
1. Select an integration flow component.
2. In the relevant component field define a parameter in curly brackets, and press Enter .
Example
{{parameter_1}}
3. Choose the <Define Value> tag, and provide a default value for the parameter in the dialog box.
4. Choose OK to save the changes.
Example
{{Parameter}}.
Note
○ Modifying the parameter value may change the configuration of other integration flow components that use the same parameter.
○ You can use only a valid value format. For example, a string value isn't supported in an integer field.
Note
○ You can update a value from the property sheet and Externalization editor for any externalized
parameter.
○ You can only remove the externalization of a field by manually deleting the parameter from the
Externalization editor.
Note
When the integration flow is validated, deployed, and executed, the configured value always takes precedence over the default value of the parameter. The default value in the integration flow editor can be updated by the integration developer; however, it won't take precedence over the configured value.
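The precedence rule can be sketched as a one-line resolution; the function and sentinel names below are illustrative, not product internals:

```python
# Sketch of configured-vs-default precedence for an externalized parameter.
NO_VALUE_CONFIGURED = None  # stands for a parameter never set in the Configure view

def effective_value(default, configured=NO_VALUE_CONFIGURED):
    # A configured value, when present, always wins over the default.
    return default if configured is NO_VALUE_CONFIGURED else configured

print(effective_value("https://localhost:8080/dir"))                            # default used
print(effective_value("https://localhost:8080/dir", "https://example.com/dir"))  # configured wins
```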
Types of Controls
Checkbox, drop down, help service, string, scheduler, text area, integer, and individual cells of a table are the
supported controls in the Externalization.
Checkbox You can define a new parameter for the Checkbox control in the Externalization view. Once you provide a name for the parameter and update the value, the checkbox control is externalized.
Note
If the parameter isn't configured from the Configure view, the configured value is flagged as <No Value Configured>.
Dropdown You can define a new parameter for the Dropdown control in the Externalization view. Once you tab out of the parameter column, a token is created on the dropdown control. It indicates that the control is externalized.
Text area control You can define a new parameter for the Text area control in the Externalization view. You can also modify the default value of the parameter. After configuring the parameter from the Configure view, the parameter update dialog shows both the default and configured values.
Table cell You can define a new parameter for a Table cell in the Externalization view. You can add or edit the default value of the parameter by choosing the token in the parameter update dialog.
Help Service You can select or browse the resource using the Help Service control.
Note
If the help service control is externalized, the Select button of the corresponding field is disabled. Browsing the resource is only available if the control isn't externalized.
Scheduler You can define a new parameter for the Scheduler control in the Externalization view. Select the Configure view to modify the Scheduler control. You can find the details of the configured value by clicking the Show button.
SAP Cloud Platform Integration Web application allows you to configure integration flow components in an
editor.
General Provide a name for the integration flow artifact and give a
brief description about the flow.
Runtime Configuration You can specify general properties of the integration flow.
For more information, see Specify the Runtime Configura-
tion [page 482].
Error Configuration Define how to handle errors when message processing fails
at runtime. For more information, see Define Error Configu-
ration [page 912].
Problems See all errors and warnings associated with integration components and resources. For more information, see Problems View [page 916].
SAP Cloud Platform Integration provides a new version of integration flow editor in the web application. The
new editor comes with a host of enhancements such as:
You will notice that the new editor provides an improved look and feel when you are viewing or editing
integration flows.
Zoom Capabilities
You can easily zoom in and out of the editor with your mouse scroll or the '+' and '-' action buttons on the top
right of the editor. This feature is particularly helpful when you have to edit large and complex integration flows.
All integration flow steps and adapters provide quick action buttons that help you to connect, delete, view
information, or more such quick actions. This is very helpful in quickly building an integration flow. For example,
when you wanted to delete an adapter in the old WebUI, you had to select the channel and then select the
(Delete) icon in the palette. This process was time consuming and not very convenient. This process just takes
two clicks with the new editor: you select the channel and choose the icon.
Quick Buttons
In the old editor, the quick buttons to connect, delete, or see technical information appeared when you hovered the mouse pointer over a component; in the new editor, you need to select the component to access them.
When you have a large integration flow, you will need a visualization that will help you see the entire integration
flow and also navigate to a specific area in that flow. The overview mode provides you that option. By choosing
the dedicated button on the bottom right of the screen, you get a bird's eye view of the integration flow.
The new editor also has some constraints to help you ensure that you do not make a mistake while modeling an
integration flow. For example, the editor will not allow you to add integration flow steps outside the integration
process.
The editor also prevents you from adding multiple incoming messages to integration flow steps. Currently, it is
not supported for Request Reply, Content Enricher and Send steps.
Auto-Save
Consider a scenario where you have exited the integration editor while modeling an integration flow, or encountered a session timeout. In both cases, you have not saved the changes made to the integration flow. An unsaved integration flow artifact is identified by the Unsaved Changes text appearing under the name of the artifact.
In such scenarios, the Auto-Save functionality helps you recover the unsaved version of the integration flow. The next time you open the specific integration flow in the CPI Web UI, a pop-up appears asking if you would like to recover the unsaved integration flow. If you choose Recover, the application restores the integration flow you were working on before. If you do not want to keep that version, choose Discard.
Note
The Auto-save also helps you to recover the unsaved version of your script or an XSLT resource.
This feature prevents you from losing work; by default, the application saves your work every 60 seconds.
You can copy and paste adapter configurations within an integration flow.
For example, to copy and paste the configuration between two receiver adapters, navigate to the Design tab and, under your integration package, open an artifact in Edit mode. Select one of the receiver adapters and choose the Copy button to copy its configuration details. Choose another receiver adapter and click the Paste button. The configuration details are copied into this receiver adapter.
Note
● The paste function overwrites any existing configuration on the adapter, including its type.
● Copy and paste only works between two sender adapters or two receiver adapters. Copying sender adapter details to a receiver adapter, or vice versa, is not supported.
● The copy-paste feature is available only for adapters within the communication channel.
Context-sensitive help provides a new way of accessing the help information of a particular adapter or flow step, reducing the time spent searching for information.
Consider a scenario where you need to know more about how to define a script for message processing. Select the Script shape in the editor and click the Help icon in the property sheet. This opens the context-specific information in a new window or tab.
Note
● Context-sensitive help can be accessed in both the read-only and editable modes of an integration flow. This feature is available only for integration flow artifacts, not for OData artifacts.
● This feature can also be accessed in the Discover and Monitor views.
Context
Procedure
If the Authentication Type option is still displayed for the sender participant, you might have created this integration flow shape some time ago, or you have selected a product profile where inbound authorization is still performed per sender participant.
In that case, either proceed with the following step or create a new sender participant shape (and, in that case, continue configuring the authorization option in the associated sender channel).
More information: Adapter and Integration Flow Step Versions [page 405]
4. When you specify the Authentication Type as part of the Sender, consider the following.
○ Role-based Authentication
Select this option if you want to configure one of the following use cases:
○ Basic authentication
○ Client certificate authentication with certificate-to-user mapping
○ Client Certificate Authentication
Select this option if you want to configure the use case in which the permissions of the sender are checked on the tenant by evaluating the distinguished name (DN) of the client certificate (sent by the sender).
Choose Add… to browse and add an authorized client certificate or enter the Subject DN and Issuer DN
manually.
Which option you choose depends on the combination of authentication and authorization options you want to configure for inbound calls.
Prerequisites
Context
You must create a communication channel between SAP Cloud Platform Integration and the sender/receiver
system to facilitate communication between them.
Note
You must click the participant's name of the sender and receiver elements to view the header and property information. Also, you must drag and drop the message flow over the participant's name to assign communication channels.
You use this procedure if you want to change the adapter assigned to the communication channel in integration
flows.
Procedure
1. Access SAP Cloud Platform Integration web application by launching the URL provided by SAP.
2. Choose Design .
3. Select the integration package that contains the integration flow, or create a new one.
4. Select the integration flow and choose Edit.
Note
In the case of OData service artifacts in integration packages, you have to edit the OData service
artifact in order to edit the required integration flow.
5. If you want to define sender channel, choose Sender and drag the pointer to Start.
Note
In the case of integration flows in OData service artifacts, you cannot change the OData sender
adapter.
6. If you want to define receiver channel, choose End and drag the pointer to Receiver.
Note
In the case of integration flows in OData service artifacts, you can save the integration flow and deploy
the OData service.
You should configure the adapters assigned to communication channels before deploying the integration flow.
Adapter
Feature Description
AMQP Enables an SAP Cloud Platform tenant to consume messages from queues or topic subscriptions in
an external messaging system.
Sender adapter
Supported message protocol: AMQP (Advanced Message Queuing Protocol) 1.0
AMQP Enables an SAP Cloud Platform tenant to send messages to queues or topics in an external messaging
system.
Receiver adapter
Supported message protocol: AMQP (Advanced Message Queuing Protocol) 1.0
Ariba Connects an SAP Cloud Platform tenant to the Ariba network. Using this adapter, SAP and non-SAP cloud applications can receive business-specific documents in commerce eXtensible Markup Language (cXML) format from the Ariba network.
Sender adapter
The sender adapter allows you to define a schedule for polling data from Ariba.
Ariba Connects an SAP Cloud Platform tenant to the Ariba network. Using this adapter, SAP and non-SAP cloud applications can send business-specific documents in commerce eXtensible Markup Language (cXML) format to the Ariba network.
Receiver adapter
AS2 Enables an SAP Cloud Platform tenant to exchange business-specific documents with a partner
through the Applicability Statement 2 (AS2) protocol.
Sender adapter
A license for SAP Cloud Platform Enterprise Edition is required to use this feature.
Sender adapter: Can return an electronic receipt to the sender of the AS2 message (in the form of a
Message Disposition Notification (MDN))
AS2 Enables an SAP Cloud Platform tenant to exchange business-specific documents with a partner
through the Applicability Statement 2 (AS2) protocol.
Receiver adapter
A license for SAP Cloud Platform Enterprise Edition is required to use this feature.
AS4 Enables an SAP Cloud Platform tenant to securely process incoming AS4 messages using Web Services. The AS4 sender adapter is based on the ebMS specification that supports the ebMS handler conformance profile.
Sender adapter
AS4 Enables an SAP Cloud Platform tenant to establish a connection between any two message service handlers (MSHs) for exchanging business documents. The AS4 receiver adapter uses the Light Client conformance policy and supports only message pushing for the sending MSH and selective message pulling for the receiving MSH.
Receiver adapter
Receiver adapter:
● Supports one-way/push message exchange pattern (MEP) that involves the transfer of business
documents from a sending MSH to a receiving MSH.
● Supports the one-way/selective-pull message exchange pattern (MEP) that involves the receiving MSH initiating a selective pull request to the sending MSH. The sending MSH responds by sending the specific user message.
● Supports storing and verification of receipts.
ELSTER Enables an SAP Cloud Platform tenant to send a tax document to the ELSTER server.
Receiver adapter ELSTER (acronym for the German term Elektronische Steuererklärung) is used in German fiscal management to process tax declarations exchanged over the Internet.
The adapter supports the following operations: Getting the version of the ERiC (ELSTER Rich Client)
library, validating a tax document, and sending a tax document.
Facebook Enables an SAP Cloud Platform tenant to access and extract information from Facebook based on certain criteria such as keywords or user data.
Receiver adapter
Using OAuth, the SAP Cloud Platform tenant can access resources on Facebook on behalf of a Facebook user.
HTTPS Establishes an HTTPS connection between an SAP Cloud Platform tenant and a sender system.
Sender adapter
HTTP Establishes an HTTP connection between an SAP Cloud Platform tenant and a receiver system.
● Supports HTTP 1.1 only (the target system must support chunked transfer encoding and may not rely on the existence of the HTTP Content-Length header)
● Supports the following methods: DELETE, GET, HEAD, POST, PUT, TRACE
The method can also be determined dynamically by reading a value from a message header or property at runtime.
IDoc Allows an SAP Cloud Platform tenant to exchange Intermediate Document (IDoc) messages with a
sender system that supports communication via SOAP Web services.
Sender adapter
A size limit for the inbound message can be configured for the sender adapter.
IDoc Allows an SAP Cloud Platform tenant to exchange Intermediate Document (IDoc) messages with a receiver system that supports communication via SOAP Web services.
Receiver adapter
JDBC Allows an SAP Cloud Platform tenant to connect to a JDBC (Java Database Connectivity) database
and to execute SQL commands on the database.
Receiver adapter
JMS The sender adapter consumes messages from a queue. The messages are processed concurrently.
Sender adapter
To prevent situations where the JMS adapter tries again and again to process a failed (large) message, you can store messages (where the processing stopped unexpectedly) in a dead-letter queue after two retries.
Certain constraints apply with regard to the number and capacity of involved queues, as well as for the
headers and exchange properties defined in the integration flow before the message is saved to the
queue (as described in the product documentation).
JMS The receiver adapter stores messages and schedules them for processing in a queue. The messages are processed concurrently.
Receiver adapter
LDAP Connects an SAP Cloud Platform tenant to a Lightweight Directory Access Protocol (LDAP) directory
service (through TCP/IP protocol).
Receiver adapter
Supported operations: Modify distinguished name (DN), Insert
SAP Cloud Connector is required to connect to an LDAP service. The LDAP adapter supports version
2.9 or higher of the SAP Cloud Connector.
Mail Enables an SAP Cloud Platform tenant to read e-mails from an e-mail server.
Sender adapter To authenticate against the e-mail server, you can send the user name and password in plain text or
encrypted (the latter only if the e-mail server supports this option).
You can protect inbound e-mails at the transport layer with IMAPS, POP3S, and STARTTLS.
For more information on possible threats when processing e-mail content with the Mail adapter, see
the product documentation.
Mail Enables an SAP Cloud Platform tenant to send e-mails to an e-mail server.
Receiver adapter To authenticate against the e-mail server, you can send the user name and password in plain text or
encrypted (the latter only if the e-mail server supports this option).
● You can protect outbound e-mails at the transport layer with STARTTLS or SMTPS.
● You can encrypt outbound e-mails using S/MIME (supported content encryption algorithms:
AES/CBC/PKCS5Padding, DESede/CBC/PKCS5Padding).
OData Connects an SAP Cloud Platform tenant to systems using the Open Data (OData) protocol in either
ATOM or JSON format (only synchronous communication is supported).
Sender adapter
Supported versions: OData version 2.0
OData Connects an SAP Cloud Platform tenant to systems using the Open Data (OData) protocol.
ODC Connects an SAP Cloud Platform tenant to SAP Gateway OData Channel (through transport protocol
HTTPS).
Receiver adapter
Supported operations: Create (POST), Delete (DELETE), Merge (MERGE), Query (GET), Read (GET),
Update (PUT)
OpenConnectors Connects an SAP Cloud Platform tenant to more than 150 non-SAP Cloud applications that are supported by SAP Cloud Platform Open Connectors.
Receiver adapter
● Uses APIs to fetch data from specific third-party applications.
● Is designed to handle large volumes of incoming data.
● Supports messages in both JSON and XML format, for request and response calls.
● Allows you to define specific values for variables.
ProcessDirect Connects an integration flow with another integration flow deployed on the same tenant.
Sender adapter An integration flow with a ProcessDirect sender adapter (as consumer) consumes data from another
integration flow.
ProcessDirect Connects an integration flow with another integration flow deployed on the same tenant.
Receiver adapter An integration flow with a ProcessDirect receiver adapter (as producer) sends data to another integration flow.
RFC Connects an SAP Cloud Platform tenant to a remote receiver system using Remote Function Call
(RFC).
Receiver adapter
RFC is the standard interface used for integrating on-premise ABAP systems to the systems hosted
on the cloud using SAP Cloud Connector.
SFTP Connects an SAP Cloud Platform tenant to a remote system using the SSH File Transfer protocol to read files from the system. SSH File Transfer protocol is also referred to as Secure File Transfer protocol (or SFTP).
Sender adapter
Supported versions:
SFTP Connects an SAP Cloud Platform tenant to a remote system using the SSH File Transfer protocol to write files to the system. SSH File Transfer protocol is also referred to as Secure File Transfer protocol (or SFTP).
Receiver adapter
Supported versions:
SOAP SOAP 1.x Exchanges messages with a sender system that supports Simple Object Access Protocol (SOAP) 1.1
or SOAP 1.2.
Sender adapter
The message exchange patterns supported by the sender adapter are one-way messaging or request-reply.
A size limit for the inbound message can be configured for the sender adapter.
SOAP SOAP 1.x Exchanges messages with a receiver system that supports Simple Object Access Protocol (SOAP) 1.1
or SOAP 1.2.
Receiver adapter
The adapter supports Web services Security (WS-Security).
SOAP SAP RM Exchanges messages with a sender system based on the SOAP communication protocol and SAP Reliable Messaging (SAP RM) as the message protocol. SAP RM is a simplified communication protocol for asynchronous Web service communication that does not require the use of Web Service Reliable Messaging standards.
Sender adapter
A size limit for the inbound message can be configured for the sender adapter.
SOAP SAP RM Exchanges messages with a receiver system based on the SOAP communication protocol and SAP Reliable Messaging (SAP RM) as the message protocol. SAP RM is a simplified communication protocol for asynchronous Web service communication that does not require the use of Web Service Reliable Messaging standards.
Receiver adapter
SuccessFactors REST Connects an SAP Cloud Platform tenant to a SuccessFactors sender system using the REST message protocol.
SuccessFactors REST Connects an SAP Cloud Platform tenant to a SuccessFactors receiver system using the REST message protocol.
Receiver adapter The adapter supports the following operations: GET, POST
SuccessFactors SOAP Connects an SAP Cloud Platform tenant to SOAP-based Web services of a SuccessFactors sender system (synchronous or asynchronous communication).
SuccessFactors SOAP Connects an SAP Cloud Platform tenant to SOAP-based Web services of a SuccessFactors receiver system (synchronous or asynchronous communication).
Receiver adapter The adapter supports the following operations: Insert, Query, Update, Upsert
SuccessFactors OData V2 Connects an SAP Cloud Platform tenant to a SuccessFactors system using OData V2.
Receiver adapter
Features of OData version 2.0 supported by the adapter:
● Operations: GET (get a single entity as an entry document), PUT (update an existing entry with an entry document), POST (create a new entry from an entry document), MERGE (incremental update of an existing entry that does not replace all the contents of an entry), UPSERT (combination of Update or Insert)
● Query options: $expand, $skip, and $top
● Server-side pagination
● Client-side pagination
● Pagination enhancement: Data retrieved in chunks and sent to Cloud Integration
● Deep insert: Creates a structure of related entities in one request
● Authentication options: Basic authentication
● Reference links: Link two entities using the <link> tag
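The query options listed above ($expand, $skip, $top) are appended to the request URL as OData system query options. The following sketch, using only the Python standard library, shows how such a query URL is composed; the host and entity set names are illustrative, not actual SuccessFactors endpoints.

```python
from urllib.parse import urlencode

# Sketch: composing an OData V2 query URL with the $expand, $skip, and
# $top system query options supported by the adapter. Base URL and
# entity set are hypothetical examples.
def odata_query(base: str, entity: str, expand=None, skip=None, top=None) -> str:
    options = {}
    if expand:
        options["$expand"] = ",".join(expand)       # expand related entities
    if skip is not None:
        options["$skip"] = str(skip)                # client-side pagination offset
    if top is not None:
        options["$top"] = str(top)                  # page size
    query = urlencode(options, safe="$,")           # keep $ and , unescaped
    return f"{base}/{entity}" + (f"?{query}" if query else "")

url = odata_query("https://api.example.com/odata/v2",
                  "PerPersonal", expand=["personNav"], skip=20, top=10)
print(url)
```

Combining $skip and $top in this way is the usual client-side pagination pattern; server-side pagination instead returns a continuation link with each response page.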
SuccessFactors Connects an SAP Cloud Platform tenant to a SuccessFactors system using OData V4.
OData V4
Features of OData version 4.0 supported by the adapter:
Receiver adapter
● Operations: GET, POST, PUT, DELETE
● Navigation
● Primitive types supported according to OData V4 specification
● Structural types supported for create/update operations: Edm.ComplexType, Edm.EnumType, Collection(Edm.PrimitiveType), and Collection(Edm.ComplexType)
Twitter Enables an SAP Cloud Platform tenant to access Twitter and read or post tweets.
Receiver adapter Using OAuth, the SAP Cloud Platform tenant can access resources on Twitter on behalf of a Twitter
user.
XI Connects an SAP Cloud Platform tenant to a remote sender system that can process the XI message
protocol.
Sender adapter
XI Connects an SAP Cloud Platform tenant to a remote receiver system that can process the XI message
protocol.
Receiver adapter
In addition to these adapters, SAP OEM partners provide further adapters to improve the connectivity options:
Feature Description
Amazon WS Adapter by Advantco The Amazon Web Services adapter helps customers reduce the implementation time by connecting to Amazon SQS, SWF, S3, and SNS. It is included within the SAP Cloud Platform Integration subscription, and customers can download it from the SAP Software Download Center.
Salesforce Adapter by Advantco The Salesforce adapter helps customers reduce the implementation time by connecting to the Salesforce application. It is included within the SAP Cloud Platform Integration subscription, and customers can download it from the SAP Software Download Center.
Microsoft Dynamics CRM Adapter by Advantco The Microsoft Dynamics CRM adapter helps customers reduce the implementation time by connecting to the MS Dynamics CRM system. It is included within the SAP Cloud Platform Integration subscription, and customers can download it from the SAP Software Download Center.
If you use the quick configuration option for an integration flow, you can configure only a few adapter parameters. Refer to the relevant adapter configuration details for information on those parameters.
If you are developing an OData service, you can configure a SOAP, OData or HTTP adapter assigned to the
receiver channel and the OData adapter assigned to the sender channel.
Note
In HTTP communication spanning multiple components (for example, from a sender, through the load balancer, to a Cloud Integration tenant, and from there to a receiver), the actual timeout period is influenced by the timeout settings of each of the individual components interconnected between sender and receiver (more precisely, of those components that can control the Transmission Control Protocol (TCP) session).
● When considering inbound communication (through HTTP-based sender adapters), note that in particular the load balancer has its own timeout setting that influences the overall timeout. For the inbound side, SAP Cloud Integration supports no communication that waits for longer than 10 minutes.
● When considering outbound communication, note that you can configure a dedicated timeout in the involved HTTP-based receiver channel. However, this timeout setting has no influence on the TCP timeout when the receiver, or any additional component interconnected between the Cloud Integration tenant and the receiver, has a lower timeout. For example, assume that you have configured a receiver channel timeout of 10 minutes and there is another component involved with a timeout of 5 minutes. If nothing is transferred for a certain time, the connection is closed after the 5th minute.
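The note above boils down to a simple rule: the effective timeout of the whole chain is bounded by the smallest timeout of any component on the path. The following sketch makes that explicit; the component names and values are purely illustrative.

```python
# Sketch: the effective idle timeout of an HTTP chain is the minimum
# of the timeouts of all interconnected components (sender, load
# balancer, tenant, receiver). Values are illustrative, in seconds.
def effective_timeout(component_timeouts: dict) -> int:
    """Return the timeout after which an idle connection is closed."""
    return min(component_timeouts.values())

chain = {
    "receiver_channel": 600,        # configured 10-minute channel timeout
    "intermediate_component": 300,  # another component with a 5-minute timeout
}
print(effective_timeout(chain))  # the 5-minute component closes the idle connection first
```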
Related Information
In many integration scenarios, messages or events have to be exchanged between applications or systems via
messaging systems. With the Advanced Message Queuing Protocol (AMQP) adapter, SAP Cloud Platform
Integration can be used as a provider or a consumer of such messages or events. Cloud Integration can
connect to external messaging systems using the AMQP protocol, consume messages or events using the
AMQP sender adapter, or store messages or events in the message broker using the AMQP receiver adapter.
Note
Customer-specific headers with the prefix JMS aren't allowed. These headers aren't forwarded by the messaging systems.
Queues, topics, and messages can only be created and monitored by using tools provided by the messaging system provider. Those monitors aren't integrated into Cloud Integration. In Cloud Integration, the integration flows using the AMQP adapter are monitored, and the messages are sent to or consumed from the messaging system.
Related Information
You use the Advanced Message Queuing Protocol (AMQP) sender adapter to consume messages in SAP Cloud
Platform Integration from queues or topic subscriptions in an external messaging system.
Note
Queues, topics, and messages can only be monitored using tools provided by the messaging system
provider. Those monitors are not integrated into Cloud Integration. In Cloud Integration, the integration
flows using the AMQP adapter are monitored and the messages are sent to or consumed from the
messaging system.
Note
To be able to connect to queues or topics, you have to create queues and/or topics in the message broker.
This needs to be done on the messaging system, with the configuration tools provided by the messaging
system. In some messaging systems, you need to configure a Lock Duration to make sure that a message is not consumed more than once. This timeout must be longer than the expected processing time of the message; otherwise, duplicate messages can occur.
Once you have created a sender channel and selected the AMQP sender adapter, you can configure the
following attributes. See Integration Flow Editor for SAP Cloud Platform Integration [page 495].
The following values are displayed in the General tab after a channel has been established. If you want to
change the configurations, you need to configure a new channel.
General
Parameter Description
Transport Protocol
● TCP
● WebSocket
Select the Connection tab and provide values in the fields as follows:
Connection
Parameter Description
Proxy Type The type of proxy that you’re using to connect to the target
system.
Path (only if WebSocket is selected as the Transport Protocol in the General tab) Specify the access path of the messaging system.
Connect with TLS Select if TLS has to be used for the connection.
Location ID (only if On-Premise is selected for Proxy Type) To connect to an SAP Cloud Connector instance associated
with your account, enter the location ID that you defined for
this instance in the destination configuration on the cloud
side.
Authentication
● SASL
● OAuth2 Client Credentials
● None
Note
OAuth2 Client Credentials are only available for the
WebSocket transport protocol.
Credential Name (only if SASL or OAuth2 Client Credentials is selected for Authentication) Specify the alias of the deployed credentials.
Select the Processing tab and provide values in the fields as follows.
Processing
Parameter Description
Queue Name Specify the name of the queue or topic subscription to consume from.
Number of Concurrent Processes Specify the number of processes used for parallel message processing. Note that these processes are started on each worker node.
Note
The maximum number of parallel processes cannot exceed 99. The default is set to 1.
Related Information
AMQP.org
https://blogs.sap.com/2019/11/20/cloud-integration-connecting-to-external-messaging-systems-using-the-amqp-adapter/
https://blogs.sap.com/2020/01/17/cloud-integration-how-to-connect-to-an-on-premise-amqp-server-via-cloud-connector/
Note
To be able to connect to queues or topics, you have to create queues and/or topics in the message broker.
This needs to be done on the messaging system with the configuration tools provided by the messaging
system.
Note
Queues, topics, and messages can only be monitored using tools provided by the messaging system provider. Those monitors are not integrated into Cloud Integration. In SAP Cloud Platform Integration, the integration flows using the AMQP adapter are monitored, and the messages are sent to or consumed from the messaging system.
Once you have created a receiver channel and selected the AMQP receiver adapter, you can configure the
following attributes. See Integration Flow Editor for SAP Cloud Platform Integration [page 495].
The following values are displayed in the General tab after a channel has been established. To change the
configurations, you need to configure a new channel.
General
Parameter Description
Transport Protocol
● TCP
● WebSocket
Select the Connection tab and provide values in the fields as follows.
Connection
Parameter Description
Proxy Type The type of proxy that you’re using to connect to the target
system.
Path (only if WebSocket is selected as the Transport Protocol in the General tab) Specify the access path of the messaging system.
Connect with TLS Select if TLS has to be used for the connection.
Location ID (only if On-Premise is selected for Proxy Type) To connect to an SAP Cloud Connector instance associated
with your account, enter the location ID that you defined for
this instance in the destination configuration on the cloud
side.
Authentication
● SASL
● OAuth2 Client Credentials
● None
Note
OAuth2 Client Credentials are only available for WebSocket.
Credential Name (only if SASL or OAuth2 Client Credentials is selected for Authentication) Specify the alias of the deployed credentials.
Select the Processing tab and provide values in the fields as follows.
Processing
Parameter Description
Expiration Period (in s) Specify the Time to Live (TTL) for the message. If nothing is
specified, the setting for the queue or topic subscription in
the messaging system applies.
● Persistent
● Non-Persistent
Related Information
AMQP.org
https://blogs.sap.com/2019/11/20/cloud-integration-connecting-to-external-messaging-systems-using-the-amqp-adapter/
https://blogs.sap.com/2020/01/17/cloud-integration-how-to-connect-to-an-on-premise-amqp-server-via-cloud-connector/
You use this procedure to configure a sender and receiver channel of an integration flow with the Ariba Network
adapter. These channels enable the SAP and non-SAP cloud applications to send and receive business-specific
documents in cXML format to and from the Ariba Network. Examples of business documents are purchase
orders and invoices.
Restriction
An integration flow that you deploy in SAP Cloud Platform Integration is deployed on multiple IFLMAP worker nodes. Polling is triggered from only one of the worker nodes. The message monitoring currently displays the process status from the worker nodes where the Scheduler is not started. As a result, the message monitor displays messages with a processing time of less than a few milliseconds, for which the schedule was not triggered. These entries contain firenow=true in the log. You can ignore these entries.
Related Information
The Ariba Sender Adapter connects an SAP Cloud Platform tenant to the Ariba Network. Using this adapter,
SAP and non-SAP cloud applications can receive business-specific documents in commerce eXtensible
Markup Language (cXML) format from the Ariba network. The sender adapter allows you to define a schedule
for polling data from Ariba.
Once you have created a sender channel and selected the Ariba sender adapter, you can configure the
following attributes. See Integration Flow Editor for SAP Cloud Platform Integration [page 495].
Select the General tab and provide values in the fields as follows.
General
Parameter Description
Select the Processing tab and provide values in the fields as follows.
Processing
Parameter Description
Ariba Network ID Enter the ID associated with the Ariba Network. The default
value is set to AN01000000001.
cXML version The default value provided by SAP is 1.2.025. If you enter a version, it must be higher than 1.2.018.
User Agent Enter the user agent details. The convention is a textual
string representing the client system conducting the cXML
conversation. It must consist of the software company name
and the product name.
Select the Connection tab and provide values in the fields as follows.
Connection
Parameter Description
Ariba Network URL Specify the URL to which the cXML requests are posted, or
from where the cXMLs are polled.
Connection Mode Select one of the options based on the description given below:
Account Type Select one of the options based on the description given below:
Request Type Select one of the options based on the request types of buyer/supplier that you want to poll.
Use Quote Message to poll the quote request for the Buyer account type.
Maximum Messages Enter the number of messages to be polled from the Ariba Network for the above-selected Request Type. The maximum allowed value is 200.
System ID Provide the System ID for receiving the requests from a specific ERP system belonging to a supplier. If the system ID is not provided, the adapter fetches all requests from every ERP system belonging to that supplier.
Note
If the buyer's account is enabled with multi-ERP, it is mandatory to provide the system ID for the multi-ERP systems belonging to the buyer.
Authentication Domain Select one of the options based on the description given below:
Authentication Select one of the options based on the description given below:
● Shared Key: If you have set the shared key in your Ariba
account.
● Client Certificate: If you have configured your certificate
from a trusted certificate authority in the Ariba account.
Credential Name Enter a name. This name is treated as an alias to the secure
store where the user credentials are deployed. This value
should be set according to the Authentication selected
above. If you have selected Client Certificate, then enter the
alias details in the Private Key Alias field.
Select the Scheduler tab and provide values in the fields as follows.
Scheduler
Parameter Description
Schedule on Day Specify a date and time or interval for executing the data polling.
Schedule to Recur Specify a recurring pattern for consistently running the polling process. It can be scheduled on a daily, weekly, or monthly basis.
Note
If the specified date is not applicable in a particular month, the data polling is not executed in that month. For example, if the 30th day is selected, polling is not executed in February, as the 30th is not a valid day for February.
On Date (only if Schedule on Day is selected) The specific date on which the data polling process has to be initiated to fetch data from the Ariba system.
On Time The time at which the data polling cycle has to be initiated. For example, if you want the data polling to be started at 4:10 p.m., enter 16:10. Note that the time must be entered in 24-hour format.
Every xx minutes between HH hours and HH hours The connector fetches data from the Ariba system every 'xx' minutes between HH hours and HH hours.
Note
If you want the polling to run for the entire day, enter 1 and 59.
Time Zone Select the time zone you want to use as a reference for scheduling the data polling cycle.
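The "Every xx minutes between HH hours and HH hours" option can be pictured as a simple window check: a poll is due only when the current wall-clock time lies inside the configured hour window. The following sketch illustrates that logic; the window boundaries are illustrative values, not defaults of the adapter.

```python
from datetime import time

# Sketch of the "Every xx minutes between HH hours and HH hours"
# scheduler option: decide whether a given wall-clock time lies inside
# the polling window. Window boundaries (9 and 18) are illustrative.
def in_polling_window(now: time, start_hour: int, end_hour: int) -> bool:
    """True if `now` falls within [start_hour, end_hour) on the same day."""
    return start_hour <= now.hour < end_hour

print(in_polling_window(time(16, 10), 9, 18))  # 16:10 is inside the 09-18 window
print(in_polling_window(time(20, 0), 9, 18))   # 20:00 is outside the window
```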
The Ariba Receiver Adapter connects an SAP Cloud Platform tenant to the Ariba network. Using this adapter, SAP and non-SAP cloud applications can send business-specific documents in commerce eXtensible Markup Language (cXML) format to the Ariba network.
Once you have created a receiver channel and selected the Ariba receiver adapter, you can configure the
following attributes. See Integration Flow Editor for SAP Cloud Platform Integration [page 495].
Select the General tab and provide values in the fields as follows.
Select the Processing tab and provide values in the fields as follows.
Processing
Parameter Description
Ariba Network ID Enter the ID associated with the Ariba Network. The default
value is set to AN01000000001.
cXML Version The default value provided by SAP is 1.2.025. If you enter a version, it must be higher than 1.2.018.
User Agent Enter the user agent details. The convention is a textual
string representing the client system conducting the cXML
conversation. It must consist of the software company name
and the product name.
Select the Connection tab and provide values in the fields as follows.
Connection
Parameter Description
Ariba Network URL Specify the URL to which the cXML requests are posted, or
from where the cXMLs are polled.
Connection Mode Select one of the options based on the description given below:
Account Type Select one of the options based on the description given below:
Authentication Domain Select one of the options based on the description given below:
Authentication Select one of the options based on the description given below:
● Shared Key: If you have set the shared key in your Ariba
account.
● Client Certificate: If you have configured your certificate
from a trusted certificate authority in the Ariba account.
Credential Name Enter a name. This name is treated as an alias to the secure
store where the user credentials are deployed. This value
should be set according to the Authentication selected
above. If you have selected Client Certificate, then enter the
alias details in the Private Key Alias field.
You use the AS2 adapter to exchange business documents with your partner using the AS2 protocol. You can
use this adapter to encrypt/decrypt, compress/decompress, and sign/verify the documents.
Note
If you (the tenant admin) want to provision the message broker to use AS2 adapter scenarios, you must have the Enterprise Edition license. You have to set up a cluster to use the message broker. For more information, see the product documentation.
Note
There are certain limitations with regard to the usage of JMS resources.
Caution
Do not use this adapter type together with Data Store Operations steps, Aggregator steps, or global
variables, as this can cause issues related to transactional behavior.
If you deploy an integration flow in SAP Cloud Platform Integration, it is deployed on multiple IFLMAP worker nodes. Polling is triggered from only one of these nodes. The message monitor displays the process status for the worker nodes in which the Scheduler has not started. As a result, the message monitor displays messages with a processing time of less than a few milliseconds, for which the scheduler was not triggered. These entries contain firenow=true in the log. You can ignore these entries.
Note
When you deploy an integration flow with the AS2/AS2 MDN adapter, you can see the endpoint information of this integration flow in the Manage Integration Content section of the operations view.
Related Information
Remember
This component or some of its features might not be available in the Cloud Foundry environment. For more
information on the limitations, see SAP Note 2752867 .
Note
In the following cases certain features might not be available for your current integration flow:
● A feature for a particular adapter or step was released after you created the corresponding shape in
your integration flow.
● You are using a product profile other than the one expected.
More information: Adapter and Integration Flow Step Versions [page 405]
Note
● If you are configuring the sender channel to receive AS2 messages, select the AS2 message protocol.
● If you are configuring the sender channel to receive asynchronous AS2 MDN, select the AS2 MDN
message protocol.
● If you want to call the AS2 sender channel, then use the pattern http://<host>:<port>/as2/as2;
to call the AS2 MDN sender channel, use http://<host>:<port>/as2/mdn .
● The JMS queue name contains the name of the AS2 sender channel. To make troubleshooting easier, we recommend that you provide a meaningful name for the AS2 sender channel.
Once you have created a sender channel and selected the AS2 adapter, you can configure the following
attributes. See Integration Flow Editor for SAP Cloud Platform Integration [page 495].
The General tab shows general information such as the adapter type, its direction (sender or receiver), the
transport protocol, and the message protocol.
Processing
Parameter Description
Message ID Left Part Specify the left side of the AS2 message ID. A regular expression or '.*' is allowed.
Message ID Right Part Specify the right side of the AS2 message ID. A regular expression or '.*' is allowed.
Partner AS2 ID Specify your partner's AS2 ID. A regular expression or '.*' is allowed.
Own AS2 ID Specify your own AS2 ID. A regular expression or '.*' is allowed.
Message Subject Specify the AS2 message subject. A regular expression or '.*' is allowed.
Number of Concurrent Processes Define how many processes can run in parallel for each worker node. The value depends on the number of worker nodes, the number of queues on the tenant, and the incoming load, and must be less than 99.
User Role (only if you select User Role for Authorization) The user role that you are using for inbound authorization. Choose Select to get a list of all the available user roles for your tenant and select the one that you want to use.
Caution
The role name must not contain any umlaut characters
(for example, ä).
Client Certificate Authorization (only if you select Client Certificate for Authorization) The client certificates that you are using for inbound authorization. Choose Add to add a new row and then choose Select to select a certificate stored locally on your computer. You can also delete certificates from the list.
Message Settings
Mandatory File Name Select to check whether the incoming AS2 message contains a file name. If it does not, a negative MDN is sent as per the request of the AS2 sender.
Duplicate Message ID Select to ensure that an AS2 message with the same message ID is not processed more than once.
Duplicate File Name Select to ensure that an AS2 message with the same file name is not processed more than once.
Note
● Ensure that the combination of Message ID Left Part, Message ID Right Part, Partner AS2 ID, Own AS2
ID, and Message Subject parameters is unique across all AS2 sender channels.
● If you use regular expressions for the above-mentioned AS2 sender parameters, then you must ensure
that the regular expression configuration is unique across the endpoints.
● The runtime identifies the relevant channel and integration flow for the incoming AS2 sender message
based on the above-mentioned parameters.
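Since the runtime selects the channel by matching these five patterns against the incoming message, the matching can be sketched as a full-match of every configured regular expression. This is an illustrative sketch only; the configuration keys and message attribute names are hypothetical, and the actual runtime matching logic is internal to the adapter.

```python
import re

# Sketch: identifying the AS2 sender channel for an incoming message by
# matching the configured patterns (regular expressions or '.*') against
# the message attributes. All field names and values are illustrative.
CHANNEL_CONFIG = {
    "message_id_left": r".*",
    "message_id_right": r"partner\.example\.com",
    "partner_as2_id": r"PARTNER_.*",
    "own_as2_id": r"MYCOMPANY",
    "message_subject": r".*",
}

def channel_matches(config: dict, message: dict) -> bool:
    """True only if every configured pattern fully matches its message attribute."""
    return all(re.fullmatch(pattern, message[field]) is not None
               for field, pattern in config.items())

msg = {
    "message_id_left": "12345",
    "message_id_right": "partner.example.com",
    "partner_as2_id": "PARTNER_001",
    "own_as2_id": "MYCOMPANY",
    "message_subject": "Invoice 42",
}
print(channel_matches(CHANNEL_CONFIG, msg))  # True
```

Because every pattern must match, the combination of the five values has to be unique across channels, which is exactly the uniqueness requirement stated in the note above.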
Security
Parameter Description
Decrypt Message Select this checkbox to ensure that the message is decrypted.
You can also set the value of this attribute dynamically using
the header SAP_AS2_Inbound_Decrypt_Message.
● true
● false
Note
If the header value is set, it takes precedence over the
actual value configured in the channel.
Private Key Alias Specify the private key alias to decrypt the AS2 message.
Verify Signature Select this checkbox to ensure that the signature is verified using one of the following options:
● Not Required
● Trusted Certificate: Used to verify the signature.
● Trusted Root Certificate: The trust chain begins with the use of the public alias as an intermediate certificate to verify the inbound certificate. After successful verification, the inbound certificate is used to verify the signature.
Note
If Trusted Root Certificate is selected, specify the public key alias of the immediate root certificate of the incoming message.
You can also set the value of this attribute dynamically using
the header SAP_AS2_Inbound_Verify_Signature.
Note
If the header value is set, it takes precedence over the
actual value configured in the channel.
Public Key Alias (only if you select Verify Signature) Specify the public key alias to verify the signature of the AS2 message.
MDN
Parameter Description
Private Key Alias for Signature Specify the private key alias to sign the MDN on the partner's request.
Proxy Type Select the type of proxy you want to use to connect asynchronously to an AS2 sender system.
Note
If you select the On-Premise option, the following restrictions apply to other parameter values:
○ Do not use an HTTPS address for Location ID, as it leads to errors when performing consistency checks or during deployment.
○ Do not use Certificate-Based Authentication as the Authentication Type, as it leads to errors when performing consistency checks or during deployment.
Note
If you select the On-Premise option and use the SAP Cloud Connector to connect to your on-premise system, the Location ID field of the adapter refers to a virtual address that has to be configured in the SAP Cloud Connector settings.
Authentication Type
● None
● Basic Authentication
The tenant authenticates itself against the receiver using user credentials (user name and password). It is a prerequisite that the user credentials are specified in a Basic Authentication artifact and deployed on the related tenant.
Timeout (in ms) Specify how long in milliseconds the client has to accept the
asynchronous MDN. Enter the value '0' if you want the client
to wait indefinitely.
Select the Delivery Assurance tab and provide values in the fields as follows.
Parameter Description
Quality of Service Defines how the message from the AS2 sending partner is processed by the tenant.
● Exactly Once
The AS2 message is temporarily stored in the JMS queue and an MDN is transmitted to the sending partner. If an error occurs during processing, the message is retried from the JMS queue.
● Best Effort
This option allows you to transmit customized MDN acknowledgments to the AS2 sending partner. The AS2 messages are processed immediately and the MDN is only transmitted to the sending partner if the processing is successful.
By introducing a Script into the integration flow, you can customize the original MDN found in the exchange property SAP_AS2_MDN. For more information, see Define Script [page 823].
Retry Interval (in min) Define how many minutes to wait before retrying message delivery.
Exponential Backoff Select this checkbox to double the retry interval after each unsuccessful retry.
Maximum Retry Interval (in min) Specify the maximum amount of time to wait before retrying message delivery.
Dead-Letter Queue Select this checkbox to store messages that cannot be processed successfully after the second retry following a tenant crash. This helps you to analyze and resolve the cause of failure.
Encrypt Message During Persistence Select this option to encrypt the message in the data store.
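The interplay of Retry Interval, Exponential Backoff, and Maximum Retry Interval can be sketched as follows. This is a simplified model of the retry schedule, not the adapter's actual implementation:

```python
def retry_intervals(initial_min, exponential, max_min, attempts):
    """Yield the wait time (in minutes) before each retry.
    With exponential backoff, the interval doubles after every
    unsuccessful retry, capped at the configured maximum."""
    interval = initial_min
    for _ in range(attempts):
        yield min(interval, max_min)
        if exponential:
            interval = min(interval * 2, max_min)

# Retry Interval = 5 min, Exponential Backoff on, Maximum = 30 min:
print(list(retry_intervals(5, True, 30, 5)))  # [5, 10, 20, 30, 30]
```

Without Exponential Backoff, every wait equals the configured Retry Interval.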
Parameter Description
The parameters in Maximum Message Size allow you to set a maximum size limit for processing inbound messages. All inbound messages that exceed the configured limit are rejected from further processing and the sender receives an error message.
Note
The minimum allowable size limit is 1 MB.
Body Size (in MB) Define the allowable size limit for processing the message body.
Attachments Size (in MB) Define the allowable size limit for processing the attachment.
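A minimal model of the size check described above; the function and error texts are illustrative, not the adapter's internal code:

```python
def check_message_size(body_bytes, attachment_bytes,
                       body_limit_mb, attachment_limit_mb):
    """Reject an inbound message whose body or attachment exceeds
    the configured limits; the sender receives an error message."""
    mb = 1024 * 1024
    if len(body_bytes) > body_limit_mb * mb:
        raise ValueError("message body exceeds configured size limit")
    if len(attachment_bytes) > attachment_limit_mb * mb:
        raise ValueError("attachment exceeds configured size limit")
    return True

# A small message passes a 1 MB limit on both parts:
check_message_size(b"payload", b"attachment", 1, 1)
```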
● The AS2 sender passes the following headers to the integration flow for message processing:
○ AS2PartnerID
○ AS2OwnID
○ AS2MessageSubject
○ AS2Filename
○ AS2MessageID
○ AS2PartnerEmail
○ AS2MessageContentType
● The AS2 MDN sender passes the following headers to the integration flow for message processing:
○ AS2PartnerID
○ AS2OwnID
○ AS2MessageID
○ AS2MessageContentType
○ AS2OriginalMessageID
● Use the following attributes to reference the values that are associated with MPL:
○ AS2 MDN sender adapter attributes:
○ AdapterId
○ adapterMessageId
○ SAP_MplCorrelationId
○ MDNStatus
○ Message Id
○ ErrorDescription
For example:
{AdapterId=AS2 Sender,
adapterMessageId=<define_AS2-147665710-6@endionAS2_CPIAS2>,
ReceiverAS2Name=HCIAS2,
MessageDirection=Inbound,
MDNType=Sending,
MDNStatus=Success,
MPL ID=AFgsPspcD-eYhvHFdfOZYKydBmzw,
MDNRequested=Synchronous,
SenderAS2Name=endionAS2,
AS2MessageID=<define_AS2-147665710-6@endionAS2_HCIAS2>}
{AdapterId=AS2 Sender,
adapterMessageId=<define_AS2-14798922282-7@gibsonAS2_HCIAS2>,
ReceiverAS2Name=HCIAS2,
MessageDirection=Inbound,
MDNType=Sending,
MDNStatus=Success,
MPL ID=AFgsQ0_3KdRx-UiOjcwGruy6Xw4V,
MDNRequested=Asynchronous,
SenderAS2Name=gibsonAS2,
AS2MessageID=<define_AS2-14798922282-7@gibsonAS2_HCIAS2>}
Remember
This component or some of its features might not be available in the Cloud Foundry environment. For more
information on the limitations, see SAP Note 2752867 .
Note
In the following cases, certain features might not be available for your current integration flow:
● A feature for a particular adapter or step was released after you created the corresponding shape in
your integration flow.
● You are using a product profile other than the one expected.
More information: Adapter and Integration Flow Step Versions [page 405]
Once you have created a receiver channel and selected the AS2 receiver adapter, you can configure the
following attributes. See Integration Flow Editor for SAP Cloud Platform Integration [page 495].
The General tab shows general information such as the adapter type, its direction (sender or receiver), the
transport protocol, and the message protocol.
Connection
Parameter Description
URL Parameters Pattern Define query parameters that are attached to the end of the recipient URL.
Proxy Type Select the type of proxy you want to use for connecting to the receiver system.
● Internet
● On-Premise
Location ID (only if On-Premise is selected for Proxy Type.) Enter the location ID to identify a specific Cloud Connector that is unique to your subaccount.
● None
● Basic authentication
● Client Certificate (only if Internet is selected for Proxy Type.)
Private Key Alias (only if you select Client Certificate.) Enter the private key alias that enables the system to fetch the private key from the keystore for authentication.
Credential Name (only if you select Basic authentication.) Provide the name of the User Credentials artifact that contains the credentials for basic authentication.
Timeout (in ms) Specify the maximum time (in milliseconds) the adapter waits for receiving a response before terminating the connection.
Select the Processing tab and provide values in the fields as follows.
Processing
Parameter Description
File Name Specify the AS2 file name. If no file name is specified, the default file name <Own AS2 ID>_File is used. Simple expressions, ${header.<header-name>}, or ${property.<property-name>} are allowed.
Message ID Left Part Specify the left side of the AS2 message ID. Regular expression or '.*' is allowed.
Message ID Right Part Specify the right side of the AS2 message ID. Regular expression or '.*' is allowed.
Own AS2 ID Specify your own AS2 ID. Regular expression or '.*' is allowed.
Partner AS2 ID Specify your partner's AS2 ID. Regular expression or '.*' is allowed.
Message Subject Specify the AS2 message subject. Regular expression or '.*' is allowed.
Own E-mail Address Specify your own e-mail ID. Simple expressions, ${header.<header-name>}, or ${property.<property-name>} are allowed.
Content Type Specify the content type of the outgoing message. For example: application/edi-x12. Simple expressions, ${header.<header-name>}, or ${property.<property-name>} are allowed.
You can also set the value of this attribute dynamically using
the header SAP_AS2_Outbound_Content_Type.
Note
If the header value is set, it takes precedence over the
actual value configured in the channel.
Custom Headers Pattern Specify a regular expression to pick message headers and add them as AS2 custom headers.
For example, to pick all headers whose names start with EDI, specify the expression EDI.*.
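The selection semantics of the Custom Headers Pattern can be illustrated as follows, assuming the pattern must match the whole header name (the header names here are hypothetical):

```python
import re

def pick_custom_headers(headers, pattern):
    """Return the message headers whose names match the configured
    Custom Headers Pattern; these are added as AS2 custom headers."""
    regex = re.compile(pattern)
    return {name: value for name, value in headers.items()
            if regex.fullmatch(name)}

headers = {"EDIType": "X12", "EDIVersion": "004010", "SapMessageId": "42"}
print(pick_custom_headers(headers, "EDI.*"))
# {'EDIType': 'X12', 'EDIVersion': '004010'}
```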
Select the Security tab and provide values in the fields as follows.
Security
Parameter Description
Compress Message Select this checkbox to ensure that the outgoing AS2 message is compressed.
You can also set the value of this attribute dynamically using the header SAP_AS2_Outbound_Compress_Message.
● true
● false
Note
If the header value is set, it takes precedence over the
actual value configured in the channel.
Sign Message Select this checkbox to ensure that the outgoing AS2 message is signed.
You can also set the value of this attribute dynamically using the header SAP_AS2_Outbound_Sign_Message.
● true
● false
Note
If the header value is set, it takes precedence over the
actual value configured in the channel.
(only if you select Sign Message.) You can also set the value of this attribute dynamically using the header SAP_AS2_Outbound_Signing_Algorithm.
● SHA1
● SHA224
● SHA256
● SHA384
● SHA512
● MD5
Note
If the header value is set, it takes precedence over the
actual value configured in the channel.
Private Key Alias (only if you select Sign Message.) Specify the private key alias to sign the AS2 message. Simple expressions, ${header.<header-name>}, or ${property.<property-name>} are allowed.
You can also set the value of this attribute dynamically using the header SAP_AS2_Outbound_Signing_Private_Key_Alias.
Encrypt Message Select this checkbox to ensure that the message is encrypted.
You can also set the value of this attribute dynamically using the header SAP_AS2_Outbound_Encrypt_Message.
● true
● false
Note
If the header value is set, it takes precedence over the
actual value configured in the channel.
(only if you select Encrypt Message.) You can also set the value of this attribute dynamically using the header SAP_AS2_Outbound_Encryption_Algorithm.
● 3DES
● AES128
● AES192
● AES256
● RC2
Note
If the header value is set, it takes precedence over the
actual value configured in the channel.
Public Key Alias (only if you select Encrypt Message.) Specify the public key alias to encrypt the AS2 message. Simple expressions, ${header.<header-name>}, or ${property.<property-name>} are allowed. The header or property can contain a public key alias or X509 certificate.
(only if you select Encrypt Message and select RC2 in the Algorithm field.) You can also set the value of this attribute dynamically using the header SAP_AS2_Outbound_Encryption_Key_Length.
Note
If the header value is set, it takes precedence over the actual value configured in the channel.
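Several parameters above accept simple expressions such as ${header.<header-name>} or ${property.<property-name>}. The following sketch shows how such placeholders resolve against the message exchange; it is an illustration of the semantics only, since the actual evaluation is performed by the Camel Simple language:

```python
import re

def resolve_simple(expr, headers, properties):
    """Resolve ${header.name} and ${property.name} placeholders
    against the exchange headers and properties (illustrative)."""
    def substitute(match):
        scope, name = match.group(1), match.group(2)
        source = headers if scope == "header" else properties
        return str(source.get(name, ""))
    return re.sub(r"\$\{(header|property)\.([^}]+)\}", substitute, expr)

# A File Name expression built from a (hypothetical) header value:
print(resolve_simple("${header.CamelFileName}_out",
                     {"CamelFileName": "order42"}, {}))
# order42_out
```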
MDN
Parameter Description
You can also set the value of this attribute dynamically using
the header SAP_AS2_Outbound_Mdn_Type.
Note
If the header value is set, it takes precedence over the
actual value configured in the channel.
Target URL (only if you select the Asynchronous MDN type.) Specify the URL where the AS2 MDN will be received from the partner. Simple expressions, ${header.<header-name>}, or ${property.<property-name>} are allowed.
Request Signing (only if you select the Asynchronous or Synchronous MDN type.) Enable this option to request the partner to sign the AS2 MDN.
You can also set the value of this attribute dynamically using the header SAP_AS2_Outbound_Mdn_Request_Signing.
(only if you enable the Request Signing option.) You can also set the value of this attribute dynamically using the header SAP_AS2_Outbound_Mdn_Signing_Algorithm.
Note
If the header value is set, it takes precedence over the
actual value configured in the channel.
Verify Signature (only if you select the Synchronous MDN type.) You can enable this option to verify the signature of the AS2 MDN.
You can also set the value of this attribute dynamically using the header SAP_AS2_Outbound_Mdn_Verify_Signature.
Public Key Alias (only if you select Verify Signature.) Specify the public key alias to verify the MDN signature. Simple expressions, ${header.<header-name>}, or ${property.<property-name>} are allowed. The header or property can contain a public key alias or X509 certificate.
Request MIC Enable this option if you want to request an integrity check.
You can also set the value of this attribute dynamically using
the header SAP_AS2_Outbound_Mdn_Request_Mic.
Verify MIC (only if you select the Synchronous MDN type.) Enable this option to verify the MIC of the AS2 MDN.
You can also enable this option if you enable the Request MIC option and want to verify the integrity of the message.
You can also set the value of this attribute dynamically using the header SAP_AS2_Outbound_Mdn_Verify_Mic.
Note
● You can configure the AS2 receiver channel for the Request-Reply integration flow element. If you request synchronous MDN, the adapter sets the received MDN response as the message payload.
● If you request synchronous MDN in the receiver channel, you may receive a positive or negative MDN. In both cases, the status of the message on the Message Monitoring tab is COMPLETED. You can process the MDN message on your own and take the required action for positive or negative MDN after the AS2 call for synchronous MDN.
● In an MDN message, positive MDN is represented as follows:
Sample Code
● If MDN signature validation fails or an incorrect message integrity check (MIC) is received, the status of
the message is FAILED.
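Since the message status is COMPLETED for both positive and negative MDN, the integration flow itself must inspect the MDN and react. A hypothetical dispatch on the MDN outcome; the return values are placeholders for whatever follow-up steps your scenario defines:

```python
def handle_mdn(mdn_status):
    """Illustrative branch on the MDN outcome; monitoring shows
    COMPLETED either way, so the flow must decide what to do here."""
    if mdn_status == "Success":
        return "continue processing"
    return "trigger error handling"

print(handle_mdn("Success"))  # continue processing
print(handle_mdn("Failed"))   # trigger error handling
```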
You use the AS4 message exchange protocol to securely process incoming business documents using Web services.
The SAP Applicability Statement 4 (AS4) sender adapter is based on the ebMS specification that supports the ebMS handler conformance profile. For more information, see AS4 Profile of ebMS 3.0 Version 1.0.
Let's look at how a message is processed: the incoming AS4 message consists of a SOAP body and the business payload as an attachment. During processing, the payload is extracted from the attachment and allocated to the related Camel routes. To get a more detailed understanding of the functionality of the AS4 adapter, explained with a scenario, read the blog on Cloud Integration – Working with AS4 adapter.
Note
It is expected that the incoming AS4 message has an empty SOAP body, otherwise processing-related
errors can occur.
Configure the sender channel with the AS4 adapter as a receiving Message Service Handler (MSH).
Connection
Parameter Description
Address Specify the relative endpoint of the receiving MSH. For example, /orders.
Note
When you specify the endpoint address /path, a sender can also call the integration flow through the endpoint address /path/<any string> (for example, /path/test/).
User Role Select a role to authorize the user to access the receiving
MSH endpoint.
Processing
Parameter Description
Initiator Party: Party ID Specify the ID of the initiating partner. For example,
APP_10000000100.
Initiator Party: Party Type Specify a party type to identify the initiating partner. For example, urn:fdc:peppol.eu:2017:identifiers:ap
Initiator Party: Role Specify the role of the initiating partner. For example, http://docs.oasis-open.org/ebxml-msg/ebms/v3.0/ns/core/200704/initiator
Responder Party: Party ID Specify the ID of the responding partner. For example,
APP_10000000200.
Responder Party: Party Type Specify a party type to identify the responding partner. For example, urn:fdc:peppol.eu:2017:identifiers:ap
Responder Party: Role Specify the role of the responding partner. For example, http://docs.oasis-open.org/ebxml-msg/ebms/v3.0/ns/core/200704/responder
Service Type Specify the process identifier schema of the business document. For example, cenbii-procid-ublui.
Security
Parameter Description
Verify Signature Select the checkbox to ensure that the signature is verified
using one of the following options:
● Not Required
● Trusted Certificate: Used to verify the signature.
● Trusted Root Certificate: The trust chain begins with the use of the public alias as an intermediate certificate to verify the inbound certificate. After successful verification, the inbound certificate is used to verify the signature.
Note
If Trusted Root Certificate is selected, mention the immediate root certificate of the public key alias of the incoming message.
Public Key Alias (only if you select Trusted Certificate.) Define the public key alias or aliases to verify the signature of the AS4 message.
Decrypt Message Select the option appropriate for your requirements to decrypt an AS4 message.
Receipt
Parameter Description
Private Key Alias Specify the private key alias to sign the AS4 message.
Signature Algorithm Use the relevant algorithm to sign the AS4 message.
Conditions
Parameter Description
The parameters in Maximum Message Size allow you to set a maximum size limit for processing inbound messages. Any
inbound message that exceeds the configured limit is rejected from further processing and the sender receives an error
message.
Body Size (in MB) Define the size limit for processing the message body.
Attachment Size (in MB) Define the size limit for processing the attachment.
Provides basic insights on how the AS4 messaging protocol enables message exchange between message
service handlers (MSHs).
Use
The SAP Applicability Statement 4 (AS4) receiver adapter implements a secure, reliable, and payload-agnostic protocol. It uses Web services to transmit business-to-business documents. For more information on the AS4 conformance profile defined by the OASIS standard, see AS4 Profile of ebMS 3.0 Version 1.0.
The SAP AS4 receiver adapter uses the Light Client conformance profile to address the functional
requirements of e-commerce and e-governance services. The profile only supports message pushing for
sending MSH and selective message pulling for receiving MSH. The adapter uses secure SAML tokens for
authentication and authorization between two MSHs.
The AS4 receiver adapter uses the following message exchange patterns (MEPs) for exchanging business
documents:
● One-way/push: In this pattern, the AS4 receiver adapter is the sending MSH (initiator) that transfers business documents to the receiving MSH. The initiator receives an acknowledgment as part of the HTTP response.
● One-way/selective pull: In this pattern, the AS4 receiver adapter is the receiving MSH (initiator) and sends a selective pull request to the sending MSH. The sending MSH serves the pull request by identifying the specific user message using the message ID provided by the initiator.
Visit the blog to understand how to integrate the Business-to-Authority (B2A) Manager of SAP with the ATO (Australian Taxation Office).
Related Information
Configure the AS4 receiver channel as a sending Message Service Handler (MSH) to send business
documents.
Prerequisites
● You must deploy the private key pair in the Cloud Integration keystore for signing the AS4 message.
● You must have configured an integration flow in the editor. For more information, see Integration Flow
Editor for SAP Cloud Platform Integration [page 495].
Context
Connection
Select the Connection tab and provide values in the fields as follows.
Parameter Description
● None
● Basic Authentication
● Client Certificate
● SAML Authentication
You can also set the value of this attribute dynamically using
the header
SAP_AS4_Outbound_Authentication_Type.
● saml
● basic
● clientCert
● none
SAML Endpoint URL Provide the specific endpoint URL to support SAML-based
authentication that allows access to the sending MSH.
Private Key Alias Determine the private key alias for SAML authentication.
Timeout (in sec.) Provide a connection timeout period (in seconds) to define how long the receiving MSH waits for the AS4 message to be received by the sending MSH.
Processing
Parameter Description
Initiator Party: Party Type Provide the type of the sending MSH. For example: http://abr.gov.au/PartyIdType/ABN
Initiator Party: Role Define the role of the sending MSH. For example: http://sbr.gov.au/ato/Role/Business
Responder Party: Party Type Provide the type of the receiving MSH. For example: http://abr.gov.au/PartyIdType/ABN
Responder Party: Role Define the role of the receiving MSH. For example: http://sbr.gov.au/agency
Message Partition Channel Specify the partner channel details to enable the partitioned
transfer of AS4 messages between AS4 exchange partners.
Action Define the type of action that the user message is intended
to invoke. For example: Submit.002.00
Attachment Name Define the name for the payload attached to the AS4 message.
Additional Properties Define a key and value to modify an existing parameter in the
property sheet. For example, if you want to modify the MSH
details, you must define a key and its value.
Select the Security tab and provide values in the fields as follows.
Security
Parameter Description
Sign and Encrypt Message Used to sign and encrypt the payload.
You can also set the value of this attribute dynamically by using the header SAP_AS4_Outbound_Security_Type.
● sign
● signAndEncrypt
● none
You can also set the value of this attribute dynamically by using the header SAP_AS4_Outbound_Sign_Message.
● true
● false
Private Key Alias for Signing Specify an alias for the tenant private key that is to be used to sign the message. The tenant private key is used to sign the request message (that is sent to the WS provider (receiver)). The tenant private key has to be part of the tenant keystore.
Signature Algorithm Use the relevant algorithm to sign the AS4 message.
You can also set the value of this attribute dynamically by using the header SAP_AS4_Outbound_Signing_Algorithm.
● SHA256/RSA
● SHA384/RSA
● SHA512/RSA
Public Key Alias for Encryption (only if you select Sign and Encrypt Message) Specify an alias for the public key that is to be used to encrypt the message.
The receiver (WS provider) public key is used to encrypt the request message (that is sent to the receiver). This key has to be part of the tenant keystore.
You can also set the value of this attribute dynamically by using the header SAP_AS4_Outbound_Encryption_Cert. Use this header to set the certificate value to an X509 certificate object.
● 3DES
● AES128
● AES256
Select the Receipt tab and provide values in the fields as follows.
Parameter Description
Save Incoming Receipt Saves the incoming receipt in the Message Store for 90 days. You can refer to these receipts for auditing purposes.
You can also set the value of this attribute dynamically using
the header SAP_AS4_Outbound_Save_Receipt.
● true
● false
Verify Receipt Signature Verifies the incoming receipt signature against the public key
alias.
You can also set the value of this attribute dynamically using
the header SAP_AS4_Outbound_Verify_Receipt.
● true
● false
Note
You can use the SAP_AS4_Outbound_Verify_Receipt_Cert header to set the certificate value to an X509 certificate object.
Public Key Alias Enter an alias name to select a public key and corresponding
certificate.
Note
Set the value, provided by ATO, to the SAP_AS4_Outbound_ATO_SAML_AppliesTo header for the AppliesTo parameter to fetch the SAML token from Vanguard.
Related Information
Configure the AS4 receiver channel as a receiving MSH to exchange business documents.
Prerequisites
● You must deploy the public certificate in the Cloud Integration keystore for verification of the business
response.
● You must have configured an integration flow in the editor. For more information, see Integration Flow
Editor for SAP Cloud Platform Integration [page 495].
Context
Use the ebMS3 Pull message protocol to receive an AS4 message (User Message).
Connection
Parameter Description
Agreement Define the operation mode agreed on by the MSHs for a specific type of business transaction. This specifies the type of the message exchange pattern.
SAML Endpoint URL Provide the specific endpoint URL to support SAML-based
authentication that allows access to the sending MSH.
Private Key Alias Determine the private key alias for SAML authentication.
Timeout (in sec.) Provide a connection timeout period (in seconds) to define how long the sending MSH waits for the AS4 message to be received by the receiving MSH.
Parameter Description
You can also set the value of this attribute dynamically by using the header SAP_AS4_Inbound_Sign_Message.
● true
● false
Private Key Alias Specify the private key alias to sign the AS4 message.
Signature Algorithm Use the relevant algorithm to sign the AS4 message.
(only if Sign Message is enabled) You can also set the value of this attribute dynamically by using the header SAP_AS4_Inbound_Signing_Algorithm.
● sha256rsa
● sha384rsa
● sha512rsa
You can also set the value of this attribute dynamically by using the header SAP_AS4_Inbound_Verify_Sign.
● true
● false
Public Key Alias (only if Verify Signature is enabled) Provide the public key alias to verify the signature of the AS4 message.
Note
Set the value, provided by ATO, to the SAP_AS4_Outbound_ATO_SAML_AppliesTo header for the AppliesTo parameter to fetch the SAML token from Vanguard.
Related Information
This adapter enables an SAP Cloud Platform tenant to send a tax document to the ELSTER server.
ELSTER (acronym for the German term Elektronische Steuererklärung) is used by the German fiscal
management to process tax declarations exchanged over the Internet.
To enable a client to send tax data to German tax authorities, those organizations provide the ERiC (ELSTER Rich Client) library for sending tax documents. The ELSTER adapter is designed to comply with the requirements of this library and, therefore, enables Cloud Integration to connect as a client to the ELSTER server.
Note
Using this adapter only makes sense in the context of a standard integration scenario (predefined by SAP or an SAP partner) that includes communication with German tax authorities.
The input payload for the ELSTER adapter is expected to be a complete, valid payload (tax document) including the transfer header. Note, however, that the XML document can have an arbitrary encoding (if this is properly defined in the XML preamble). The adapter ensures that the payload is converted to the encoding the ELSTER server supports (currently ISO-8859-15; in future versions this will change to UTF-8).
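The re-encoding step can be sketched as follows, assuming a UTF-8 input document with a hypothetical element name; the real adapter performs this conversion transparently:

```python
# A tax document arriving as UTF-8 bytes; the adapter converts it to
# ISO-8859-15, the encoding the ELSTER server currently supports.
incoming = ('<?xml version="1.0" encoding="UTF-8"?>'
            '<Antrag>Müller</Antrag>').encode("utf-8")

text = incoming.decode("utf-8")  # decode using the declared encoding
converted = text.replace('encoding="UTF-8"',
                         'encoding="ISO-8859-15"').encode("iso-8859-15")
# 'ü' is now the single byte 0xFC, as Latin-9 requires
```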
The output payload (sent by Cloud Integration through the ELSTER receiver adapter) will be validated by the
ELSTER server.
The inclusion of the transfer header implies that only applications that are registered with the German tax
authorities and have a valid vendor ID can actually send messages through the ELSTER adapter.
Note
This software collects personal data according to Article 4, Number 1 and Article 9, Paragraph 1 of the
German General Data Protection Regulation (Datenschutzgrundverordnung, DSGVO). In addition to data
that is required for the assessment of taxes, this software also collects data related to the kind of operating
system used by the user and transfers it to the fiscal authorities. This information ensures the proper
processing of the data and avoids errors in the process.
This data is used by the fiscal authorities according to Article 6, Paragraph 1, Letter e in connection with
Paragraph 3, Subparagraph 1, Letter b DSGVO in connection with federal and state tax regulations and
exclusively for the purposes mentioned.
Headers
The validate and send operation of the ELSTER receiver adapter sets a header (SAP_ERiCResponse) that
contains a technical status created by the ERiC library.
Once you have created a receiver channel and selected the Elster Receiver Adapter, you can configure the
following attributes. See Integration Flow Editor for SAP Cloud Platform Integration [page 495].
General
Parameter Description
Select the Connection tab and provide values in the fields as follows.
Connection
Parameter Description
● Get Version
Gets the versions of the ERiC library provided by the
server.
The response contains the major, minor, and micro
ERiC version (for example, 29.6.2).
● Validate
Validates the tax document.
Validation of a tax document without sending it only requires the document type (Data Type). Key aliases (Private Key Alias for Encryption and Private Key Alias for Signing) are not required in that case.
● Validate and Send
Validates the tax document and sends it to the ELSTER server. If the server cannot accept the document (for example, because it is incorrectly formatted) or if the server is down, an error message is provided. In such a case, check the message processing log and, if you need more information, the default trace.
Data Type Indicates the type of the document provided as payload. Information about the type is required by the ELSTER server to determine the method to be applied by the tax authority.
Private Key Alias for Encryption Alias for the key to be used for message encryption.
Private Key Alias for Signing Alias for the key pair (private part) to be used for message signing. Note that the X.509 key pair needs to be uploaded to the tenant keystore.
You use the Facebook receiver adapter to extract information from Facebook (which is the receiver platform) based on certain criteria such as keywords or user data. For example, you can use this feature in social marketing activities to perform social media data analysis based on Facebook content.
Note
● Facebook applications that access content of public pages need to request the Page Public Content Access feature and require review by Facebook.
● A user can only query their own comments. Other users' comments are unavailable due to privacy concerns.
The connection works as follows: the tenant logs on to Facebook based on an OAuth authentication mechanism and searches for information based on criteria as configured in the adapter at design time. OAuth allows the tenant to access someone else's resources (of a specific Facebook user) on behalf of the tenant. As illustrated in the figure, the tenant (through the Facebook receiver adapter) calls the Facebook API to access resources of a specific Facebook user. For more information on the Facebook API, go to: https://developers.facebook.com/.
You can also use headers to provide values in Connection settings of Facebook adapter. You can use both
exchange headers (see Dynamic Parameters (Example) [page 14] for more information) and Apache Camel
headers (see Facebook Component in Apache Camel for more information ).
Once you have created a receiver channel and selected the Facebook receiver adapter, you can configure the
following attributes. See Integration Flow Editor for SAP Cloud Platform Integration [page 495].
Select the General tab and provide values in the fields as follows.
General
Parameter Description
Select the Connection tab and provide values in the fields as follows.
Tip
You can obtain the values required for the Facebook adapter configuration in the Facebook for Developers
page.
Log in to Facebook for Developers. Choose My Apps > <Your Facebook App> > Roles > Test Users. On this page, you will find details such as the User/Page ID.
To get the Post ID, fetch the posts using Get Posts; the response contains the Post ID.
Endpoint To access Facebook content, you can choose among the following general options.
● Get Posts
Allows you to fetch specific Facebook posts.
● Get Post Comments
Allows you to fetch specific Facebook post comments.
● Get Users
Allows you to fetch details of a specific user.
● Get Feeds
Allows you to fetch feeds of a specific user or a page.
User/Page ID Specifies the Facebook user from whose account the information is to be extracted.
Timeout (ms) Specifies a timeout (in milliseconds) after which the connection to the Facebook platform should be terminated.
Application Secret Alias An alias by which the shared secret is identified (that is used to define the token of the consumer (tenant)).
Access Token An alias by which the access token for the Facebook user is identified.
● Internet
● Manual
The authorization is based on shared secret technology. This method relies on the fact that all parties of a communication share a piece of data that is known only to the parties involved. Using OAuth in the context of this adapter, the consumer (that calls the API of the receiver platform on behalf of a specific user of this platform) identifies itself using its Consumer Key and Consumer Secret, while the context to the user itself is defined by an Access Token and an Access Token Secret. These artifacts are to be generated for the receiver platform app (consumer) and should be configured so that they never expire. This adapter only supports consumer key/secret and access token key/secret artifacts that do not expire.
To finish the configuration of a scenario using this adapter, the generated consumer key/secret and access token key/secret artifacts are to be deployed as a Secure Parameter artifact on the related tenant. To do this, use the Integration Operations feature, position the cursor on the tenant, and choose Deploy Artifact .... As the artifact type, choose Secure Parameter.
You use the HTTP adapter to communicate with receiver systems using the HTTP message protocol.
The HTTP adapter supports only HTTP 1.1. This means that the target system must support chunked transfer
encoding and may not rely on the existence of the HTTP Content-Length header.
Note
If you want to dynamically override the configuration of the adapter, you can set the following headers
before calling the HTTP adapter:
● CamelHttpUri
Overrides the existing URI set directly in the endpoint.
This header can be used to dynamically change the URI to be called.
● CamelHttpQuery
Refers to the query string that is contained in the request URL.
In the context of a receiver adapter, this header can be used to dynamically change the URI to be called.
For example, CamelHttpQuery=abcd=1234.
● Content-Type
HTTP content type that fits to the body of the request.
The content type is composed of two parts: a type and a subtype. For example, image/jpeg (where
image is the type and jpeg is the subtype).
Examples:
○ text/plain for unformatted text
○ text/html for text formatted with HTML syntax
○ image/jpeg for a jpeg image file
○ application/json for data in JSON format to be processed by an application that requires this
format
More information on the available types: https://www.w3.org/Protocols/rfc1341/4_Content-Type.html
The list of available content types is maintained by the Internet Assigned Numbers Authority (IANA).
For more information, see http://www.iana.org/assignments/media-types/media-types.xhtml .
Note
If transferring text/* content types, you can also specify the character encoding in the HTTP
header using the charset parameter.
The default character encoding that will be applied for text/* content types depends on the HTTP
version: us-ascii for HTTP 1.0 and iso-8859-1 for HTTP 1.1.
Text data in string format is converted using UTF-8 by default during message processing. If you
want to override this behavior, you can use the Content Modifier step and specify the
CamelCharsetName Exchange property.
If you use a Content Modifier step and you want to send iso-8859-1-encoded data to a receiver,
make sure that you specify the CamelCharsetName Exchange property (either header or property)
as iso-8859-1. For the Content-Type HTTP header, use text/plain; charset=iso-8859-1.
● Content-Encoding
HTTP content encoding that indicates the encoding used during message transport (for example, gzip
for GZIP file compression).
This information is used by the receiver to retrieve the media type that is referenced by the
content-type header.
If this header is not specified, the default value identity (no compression) is used.
More information: https://tools.ietf.org/html/rfc2616 (section 14.11)
The list of available content codings is maintained by the Internet Assigned Numbers Authority (IANA).
For more information, see http://www.iana.org/assignments/http-parameters/http-parameters.xhtml#content-coding .
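To make the interplay of the Content-Type charset parameter and the Content-Encoding header concrete, the following Python sketch (illustrative only, not adapter code) builds an iso-8859-1-encoded, gzip-compressed request body together with the matching headers:

```python
import gzip

text = "Grüße aus München"            # contains iso-8859-1 characters
body = text.encode("iso-8859-1")      # charset named in Content-Type
compressed = gzip.compress(body)      # content coding named in Content-Encoding

headers = {
    "Content-Type": "text/plain; charset=iso-8859-1",
    "Content-Encoding": "gzip",
}

# The receiver reverses the steps: decompress first, then decode with the charset
restored = gzip.decompress(compressed).decode("iso-8859-1")
```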
Note
Adapter tracing is supported for the HTTP adapter. For more information, see .
Once you have created a receiver channel and selected the HTTP receiver adapter, you can configure the
following attributes. See Integration Flow Editor for SAP Cloud Platform Integration [page 495].
Select the General tab and provide values in the fields as follows.
General
Parameter Description
Select the Adapter Specific tab and provide values in the fields as follows.
Parameter Description
Address URL of the target system that you are connecting to, for example, https://mysystem.com
● throwExceptionOnFailure
● bridgeEndpoint
● transferException
● client
● clientConfig
● binding
● sslContextParameters
● bufferSize
Query Query string that you want to send with the HTTP request
Note
If you want to send parameters in the query string of the
HTTP adapter, these parameters must be coded in a
URL-compatible way. Individual parameter-value pairs
must be separated with an ”&” and there must be an “=”
between the name of a parameter and its value.
Example 1)
parameter1=123, parameter2=abc
Example 2)
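The URL-compatible encoding required for the query string can be produced with any standard URL library; as a sketch in Python:

```python
from urllib.parse import urlencode

params = {"parameter1": "123", "parameter2": "abc"}
# urlencode separates pairs with "&" and names from values with "="
query = urlencode(params)               # parameter1=123&parameter2=abc

# Reserved characters in values are percent-encoded automatically
encoded = urlencode({"name": "a b&c"})  # name=a+b%26c
```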
Proxy Type The type of proxy that you are using to connect to the target
system:
Note
If you select the On-Premise option, the following
restrictions apply to other parameter values:
○ Do not use an HTTPS address for Address, as it
leads to errors when performing consistency
checks or during deployment.
○ Do not use the option Client Certificate for the
Authentication parameter, as it leads to errors
when performing consistency checks or during
deployment.
Note
If you select the On-Premise option and use the
SAP Cloud Connector to connect to your on-prem
ise system, the Address field of the adapter referen
ces a virtual address, which has to be configured in
the SAP Cloud Connector settings.
● POST
Requests that the receiver accepts the data enclosed in
the request body.
● DELETE
Requests that the origin server delete the resource
identified by the Request-URI.
● Dynamic
The method is determined dynamically by reading a
value from a message header or property, such as
${header.abc} or ${property.abc}, during runtime.
● GET
Sends a GET request to the receiver.
● HEAD
Sends a HEAD request which is similar to a GET request
but does not return a message body.
● PUT
Updates or creates the enclosed data on the receiver
side.
● TRACE
Sends a TRACE request to the receiver that sends back
the message to the caller.
Send Body (only if you select for Method the option GET, DELETE,
HEAD, or Dynamic) Select this checkbox if you want to send the body
of the message with the request. For the methods GET, DELETE, and
HEAD, the body is not sent by default because some HTTP servers do
not support this function.
Authentication Defines how the tenant (as the HTTP client) will authenticate
itself against the receiver.
● None
● Basic
The tenant authenticates itself against the receiver using
user credentials (user name and password).
It is a prerequisite that user credentials are specified in
a Basic Authentication artifact and deployed on the
related tenant.
● Client Certificate
The tenant authenticates itself against the receiver using
a client certificate.
It is a prerequisite that the required key pair is installed
and added to a keystore. This keystore has to be deployed
on the related tenant. The receiver side has to be
configured appropriately.
Note
You can externalize all attributes related to the configuration
of the authentication option. This includes the
attributes with which you specify the authentication
option as such, as well as all attributes with which you
specify further security artifacts that are required for
any configurable authentication option (Private Key
Alias or Credential Name).
● Principal Propagation
The tenant authenticates itself against the receiver by
forwarding the principal of the inbound user to the
cloud connector, and from there to the back end of the
relevant on-premise system.
Note
This authentication method can only be used with the
following sender adapters: HTTP, SOAP, IDoc.
Note
The token for principal propagation expires after 30
minutes.
Note
In the following cases certain features might not be
available for your current integration flow:
○ A feature for a particular adapter or step was
released after you created the corresponding
shape in your integration flow.
○ You are using a product profile other than the
one expected.
Credential Name (only if you select for Authentication the option Basic,
OAuth2 Client Credentials, or OAuth2 SAML Bearer Assertion) Identifies
the User Credential artifact that contains the credentials (user name and
password) for the Basic authentication. For OAuth2 SAML Bearer Assertion
type authentication, provide the OAuth2 Credential artifact name. For more
information, see .
Private Key Alias (only if you select Client Certificate Authentication)
Enter the private key alias that enables the system to fetch the private
key from the keystore for authentication.
Restriction
The values true and false are not supported for this field.
Timeout (in ms) Maximum time that the tenant waits for a response before
terminating message processing.
Note
In the case of integration flows in OData service artifacts, you can save the integration flow and deploy the
OData service.
Related Information
Context
You use the HTTPS sender adapter to receive messages from sender systems using the HTTPS message protocol.
In the following cases certain features might not be available for your current integration flow:
● A feature for a particular adapter or step was released after you created the corresponding shape in
your integration flow.
● You are using a product profile other than the one expected.
More information: Adapter and Integration Flow Step Versions [page 405]
Supported Header:
● SapAuthenticatedUserName
Contains the user name of the client that calls the integration flow.
If the sender channel is configured to use client certificate authentication, no such header is set (as it is not
available in this case).
The following HTTPS request headers for the sample HTTPS endpoint https://
test.bsn.neo.ondemand.com/http/hello?abcd=1234 are added to exchange headers for further
processing in integration flow:
● CamelHttpUrl
Refers to the complete URL called, without query parameters.
For example, CamelHttpUrl=https://test.bsn.neo.ondemand.com/http/hello.
● CamelHttpQuery
Refers to the query string that is contained in the request URL.
In the context of a receiver adapter, this header can be used to dynamically change the URI to be called.
For example, CamelHttpQuery=abcd=1234.
● CamelHttpMethod
Refers to the incoming method names used to make the request. These methods are GET, POST, PUT,
DELETE, and so on.
● CamelServletContextPath
Refers to the path specified in the address field of the channel.
For example, if the address in the channel is /abcd/1234, then CamelServletContextPath is /abcd/1234.
Note
● Adapter tracing is supported for the HTTPS adapter. For more information, see .
● When you deploy an integration flow with HTTPS sender adapter, you can see the endpoint information
of this integration flow in Manage Integration Content section of operations view.
Connection
Parameter Description
Note
● Use the following pattern: http://<host>:<port>/http . This should be
appended by the unique address specified in the channel.
● The field value supports these characters: ~, -, ., $, and *.
● The Address field should start with '/' and can contain alphanumeric
values, '_' and '/'. For example, a valid address is /test/123.
● In the example mentioned above, you can use ~ only for the address
part that succeeds /test/.
● You can use $ only at the beginning of the address, after /.
● You can use * only at the extreme end of the address, and no characters
are allowed after *. A * can only be preceded by /.
● You cannot begin the address with ., -, or ~. An alphanumeric value
or _ must succeed these characters.
● If you use /*, any URI containing the prefix preceding the /* is
supported. For example, if the address is /Customer/*, then the supported
URIs are http://<host>:<port>/http/Customer/<Any-url>.
● URIs are case-insensitive, so http://<host>:<port>/http/test and
http://<host>:<port>/http/Test are treated as the same address.
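The wildcard and case-insensitivity rules above can be sketched as follows (an illustrative matcher, not the actual runtime logic; it covers only the /* and case rules):

```python
def matches(endpoint_address: str, request_path: str) -> bool:
    """Return True if the request path is served by the endpoint address."""
    endpoint = endpoint_address.lower()   # URIs are case-insensitive
    path = request_path.lower()
    if endpoint.endswith("/*"):
        # /Customer/* supports any URI with the /Customer/ prefix
        return path.startswith(endpoint[:-1])
    return path == endpoint
```

For example, matches("/Customer/*", "/customer/orders/42") and matches("/test/123", "/Test/123") both hold, while matches("/Customer/*", "/supplier/1") does not.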
Note
When you specify the endpoint address /path, a
sender can also call the integration flow through the
endpoint address /path/<any string> (for exam
ple, /path/test/).
Note
● During an inbound HTTPS communication, if the
sender adapter receives a GET or HEAD request to
fetch the CSRF token value and you have enabled
CSRF Protected, then the adapter returns the CSRF
token and stops processing the message further.
● Include X-CSRF-Token in the HTTP header field for
all modifying requests; these requests are validated
during runtime. If the validation fails, the server
returns the “HTTP 403 Forbidden” status code.
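The CSRF handshake described in this note can be sketched server-side as follows (illustrative only; the actual runtime's endpoint and token handling differ):

```python
import secrets

class CsrfProtectedEndpoint:
    """Illustrative sketch of the CSRF token fetch/validate handshake."""

    def __init__(self):
        self._token = None

    def handle(self, method, headers):
        # A GET or HEAD request with X-CSRF-Token: Fetch returns the token
        # and stops processing the message further.
        if method in ("GET", "HEAD") and headers.get("X-CSRF-Token") == "Fetch":
            self._token = secrets.token_hex(16)
            return 200, {"X-CSRF-Token": self._token}
        # Modifying requests are validated against the previously fetched token.
        if method in ("POST", "PUT", "DELETE"):
            if self._token and headers.get("X-CSRF-Token") == self._token:
                return 200, {}
            return 403, {}  # HTTP 403 Forbidden on failed validation
        return 200, {}

endpoint = CsrfProtectedEndpoint()
status, reply = endpoint.handle("GET", {"X-CSRF-Token": "Fetch"})
token = reply["X-CSRF-Token"]
```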
Conditions
Configure the size limit
Parameter Description
The parameters under Maximum Message Size allow you to set a maximum size limit for processing inbound messages. All
inbound messages that exceed the configured limit are rejected and the sender receives an error message.
Note
The minimum allowable size limit is 1 MB.
Body Size (in MB) Define the allowable size limit for processing the message
body.
Note
● Additional incoming request headers and URL parameters can be added to exchange headers for
further processing in the integration flow. You must define these headers and parameters in the
Allowed Headers list at integration flow level.
● Once the integration flow processing completes, the HTTPS sender adapter returns the headers and
body to the end user and sets the response code. You can use a Content Modifier element to send back
a specific HTTP response and customize the response.
● Address URLs for HTTP endpoints must be unique across integration flows. If an address is not
unique, the integration flow does not start.
● The adapter returns the following HTTP response codes:
○ 200 - Processing is successful
○ 503 - Service is not available
○ 500 - Exception during integration flow processing
Also, you can set the header CamelHttpResponseCode to customize the response code.
● You can invoke the HTTP endpoints using the syntax <Base URI>/http/<Value of address field>. You
can get the Base URI value from the Services tab in the Properties view of a worker node.
At least one integration flow with a SOAP endpoint must be deployed to view details in the Services tab.
● You should use a Script element to customize which headers are sent in response to the HTTP call. It
is recommended that you remove internal headers and send back only the required headers.
● If an exception occurs during an HTTPS call, due to which the message is not processed, and you
have selected Return Exception to Sender, then the exception is sent back to the sender. For more
information, see Define Error Configuration [page 912].
● If an exception occurs during the HTTPS call and you have not selected Return Exception to Sender,
the adapter returns a message and the MPL ID explaining the exception, rather than displaying the
stack trace.
The IDoc adapter enables the SAP Cloud Platform tenant to exchange Intermediate Document (IDoc)
messages with systems that support communication via SOAP Web services.
Related Information
The IDoc sender adapter enables the SAP Cloud Platform tenant to receive Intermediate Document (IDoc)
messages from a sender.
Note
In the following cases certain features might not be available for your current integration flow:
● A feature for a particular adapter or step was released after you created the corresponding shape in
your integration flow.
● You are using a product profile other than the one expected.
More information: Adapter and Integration Flow Step Versions [page 405]
Supported Headers
● SapAuthenticatedUserName
Contains the user name of the client that calls the integration flow.
If the sender channel is configured to use client certificate authentication, no such header is set (as it is not
available in this case).
The following specific headers are set by the IDoc sender adapter and can be used in the subsequent steps of
the integration flow.
● SapIDocType
● SapIDocTransferId
● SapIDocDbId
More information: Headers and Exchange Properties Provided by the Integration Framework [page 900]
Once you have created a sender channel and selected the IDoc adapter, you can configure the following
attributes.
The General tab shows general information such as the adapter type, its direction (sender or receiver), the
transport protocol, and the message protocol.
Address Relative endpoint address on which Cloud Integration can be reached by incoming
requests, for example, /GetEmployeeDetails.
Note
When you specify the endpoint address /path, a sender can also call the integra
tion flow through the endpoint address /path/<any string> (for example, /
path/test/).
Be aware of the following implication: When you additionally deploy an integration
flow with the endpoint address /path/test/, a sender using the /path/test
endpoint address will now call the newly deployed integration flow with the endpoint
address /path/test/. When you then undeploy the integration flow with the endpoint
address /path/test, the sender again calls the integration flow with the endpoint
address /path (the original behavior). Therefore, be careful when reusing paths of
services. It is better to use completely separate endpoints for services.
Depending on your choice, you can also specify one of the following properties:
Note
Note the following:
○ You can also type in a role name. This has the same result as selecting the
role from the value help: Whether the inbound request is authenticated
depends on the correct user-to-role assignment defined in SAP Cloud
Platform Cockpit.
○ When you externalize the user role, the value help for roles is offered in the
integration flow configuration as well.
○ If you have selected a product profile for SAP Process Orchestration, the
value help will only show the default role ESBMessaging.send.
Conditions
Parameter Description
Maximum Message Size This parameter allows you to configure a maximum size for
inbound messages (smallest value for a size limit is 1 MB). All
inbound messages that exceed the specified size (per integration
flow and on the runtime node where the integration flow is
deployed) are blocked.
● Body Size
● Attachment Size
Related Information
Headers and Exchange Properties Provided by the Integration Framework [page 900]
The IDoc receiver adapter enables the SAP Cloud Platform tenant to send Intermediate Document (IDoc)
messages to a receiver.
Remember
This component or some of its features might not be available in the Cloud Foundry environment. For more
information on the limitations, see SAP Note 2752867 .
Note
In the following cases certain features might not be available for your current integration flow:
● A feature for a particular adapter or step was released after you created the corresponding shape in
your integration flow.
● You are using a product profile other than the one expected.
More information: Adapter and Integration Flow Step Versions [page 405]
● SOAPAction Header
This header is part of the Web service specification.
The IDoc receiver adapter sends a request and gets an XML response. This response is standardized. It has the
same structure for all IDocs, with the exception of the type.
The following specific headers are set by the IDoc receiver adapter.
● SapIDocType
● SapIDocTransferId
● SapIDocDbId
More information: Headers and Exchange Properties Provided by the Integration Framework [page 900]
Once you have created a receiver channel and selected the IDoc adapter, you can configure the following
attributes.
The General tab shows general information such as the adapter type, its direction (sender or receiver), the
transport protocol, and the message protocol.
Connection
Parameters Description
Address Endpoint address on which Cloud Integration posts the outbound message,
for example http://<host>:<port>/payment.
You can dynamically configure this field by entering an expression such as
${header.a} or ${property.a}, depending on whether you want to
use a header or an Exchange property for dynamic configuration.
The endpoint URL that is actually used at runtime is displayed in the mes
sage processing log (MPL) in the message monitoring application (MPL
property RealDestinationUrl). Note that you can manually configure
the endpoint URL using the Address attribute of the adapter. However, there
are several ways to dynamically override the value of this attribute (for ex
ample, by using the Camel header CamelHttpUri).
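The ${header.a} and ${property.a} placeholders follow Camel's Simple expression syntax. A minimal Python sketch of how such a placeholder could be resolved against message headers and Exchange properties (illustrative only; the actual resolution is performed by the Camel runtime):

```python
import re

def resolve(expression, headers, properties):
    """Replace ${header.x} / ${property.x} placeholders with their values."""
    def lookup(match):
        scope, name = match.group(1), match.group(2)
        source = headers if scope == "header" else properties
        return str(source[name])
    return re.sub(r"\$\{(header|property)\.([^}]+)\}", lookup, expression)

# Hypothetical header value holding the target endpoint
address = resolve("${header.a}", {"a": "http://backend.example.com/payment"}, {})
```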
Proxy Type The type of proxy that you are using to connect to the target system:
Note
If you select the On-Premise option, the following restrictions apply
to other parameter values:
○ Do not use an HTTPS address for Address, as it leads to errors
when performing consistency checks or during deployment.
○ Do not use the option Client Certificate for the Authentication
parameter, as it leads to errors when performing consistency
checks or during deployment.
Note
If you select the On-Premise option and use the SAP Cloud Con
nector to connect to your on-premise system, the Address field of
the adapter references a virtual address, which has to be config-
ured in the SAP Cloud Connector settings.
● If you select Manual, you can manually specify Proxy Host and Proxy
Port (using the corresponding entry fields).
Furthermore, with the parameter URL to WSDL you can specify a Web
Service Definition Language (WSDL) file defining the WS provider end
point (of the receiver). You can specify the WSDL by either uploading a
WSDL file from your computer (option Upload from File System) or by
selecting an integration flow resource (which needs to be uploaded in
advance to the Resources view of the integration flow).
This option is only available if you have chosen a Process Orchestration
product profile.
Location ID (only in case On-Premise is selected for Proxy Type) To connect to a
cloud connector instance associated with your account, enter the location ID that
you defined for this instance in the destination configuration on the cloud side.
You can also enter an expression such as ${header.headername} or
${property.propertyname} to dynamically read the value from a header or a
property.
Application/x-sap.doc
Text/XML
● Basic
The tenant authenticates itself against the receiver using user creden
tials (user name and password).
It is a prerequisite that user credentials are specified in a User
Credentials artifact and deployed on the related tenant. Enter the name
of this artifact in the Credential Name field of the adapter.
● Client Certificate
The tenant authenticates itself against the receiver using a client certif
icate.
This option is only available if you have selected Internet for the Proxy
Type parameter.
It is a prerequisite that the required key pair is installed and added to a
keystore. This keystore has to be deployed on the related tenant. The
receiver side has to be configured appropriately.
● None
● Principal Propagation
The tenant authenticates itself against the receiver by forwarding the
principal of the inbound user to the cloud connector, and from there to
the back end of the relevant on-premise system
Note
This authentication method can only be used with the following
sender adapters: HTTP, SOAP, IDoc, AS2.
Note
Note that the token for principal propagation expires after 30 minutes.
Note
You can externalize all attributes related to the configuration of the au
thentication option. This includes the attributes with which you specify
the authentication option as such, as well as all attributes with which
you specify further security artifacts that are required for any configu-
rable authentication option (Private Key Alias or Credential Name).
The reason for this is the following: If you have externalized the
Authentication parameter and only the Private Key Alias parameter (but
not Credential Name), all authentication options in the integration flow
configuration dialog (Basic, Client Certificate, and None) are selectable
in a dropdown list. However, if you now select Basic from the dropdown
list, no Credential Name can be configured.
Credential Name (only available if you have selected Basic for the Authentication
parameter) Name of the User Credentials artifact that contains the credentials
for basic authentication.
You can dynamically configure the Credential Name field of the adapter by
using a Simple Expression (see http://camel.apache.org/simple.html ).
For example, you can dynamically define the Credential Name of the receiver
adapter by referencing a message header ${header.MyCredentialName} or a
message property ${property.MyCredentialName}.
Private Key Alias (only available if you have selected Client Certificate for the
Authentication parameter) Specifies an alias to indicate a specific key pair to
be used for the authentication step.
You can dynamically configure the Private Key Alias parameter by specifying
either a header or a property name in one of the following ways:
${header.headername} or ${property.propertyname}.
Be aware that in some cases this feature can have a negative impact on
performance.
Timeout Specifies the time (in milliseconds) that the client will wait for a response
before the connection is interrupted.
Compress Message Enables the WS endpoint to send compressed request messages to the WS
Provider and to indicate to the WS Provider that it can handle compressed
response messages.
Allow Chunking Used for enabling HTTP chunking of data while sending messages.
Return HTTP Response Code as Header When selected, writes the HTTP response code received in the response
message from the called receiver system into the header
CamelHttpResponseCode.
Note
You can use this header, for example, to analyze the message processing
run (when level Trace has been enabled for monitoring). Furthermore,
you can use this header to define error handling steps after the
integration flow has called the IDoc SOAP receiver.
You cannot use the header to change the return code, since the return
code is defined in the adapter and cannot be changed.
Clean-up Request Headers Select this option to clean up the adapter-specific headers after the
receiver call.
Related Information
Headers and Exchange Properties Provided by the Integration Framework [page 900]
The JDBC (Java Database Connectivity) adapter enables you to connect integration flows with HANA or ASE
databases hosted in the customer's (subscriber's) global account.
Remember
This component or some of its features might not be available in the Cloud Foundry environment. For more
information on the limitations, see SAP Note 2752867 .
Note
In the following cases certain features might not be available for your current integration flow:
● A feature for a particular adapter or step was released after you created the corresponding shape in
your integration flow.
● You are using a product profile other than the one expected.
More information: Adapter and Integration Flow Step Versions [page 405]
The JDBC receiver adapter helps the SAP Cloud Platform Integration tenant to connect to a JDBC (Java
Database Connectivity) database and to execute SQL commands on the database. Use it to perform
The JDBC receiver adapter uses XML SQL Format message protocol. For more information on modifying or
structuring the content of the message payload, see Defining XML Documents for Message Protocol XML SQL
Format. To try out a simple demo, visit the blog .
Note
● The UPSERT operation is currently not supported for the XML SQL format. It is recommended to use a
stored procedure to update an existing record. If the record does not exist, then use INSERT.
● Batch messages are not processed by the JDBC adapter. If the adapter receives a batch message, then
only the first record is processed; the remaining records are ignored. If you want to process the entire
batch message, use a Splitter before the JDBC adapter to do the INSERT, or use a stored procedure.
● You must deploy the public certificate in the Cloud Integration keystore for verification of the business
response.
● You must have configured an integration flow in the editor. For more information, see Integration Flow
Editor for SAP Cloud Platform Integration [page 495].
● JDBC receiver adapter supports externalization. To externalize the parameters of this adapter choose
Externalize and follow the steps mentioned in Externalize Parameters of an Integration Flow [page
489].
Source Code
Connection
Configure the connection details as per the description
Field Description
JDBC Data Source Alias Enter the data source name. For more information, see .
Note
● Deployment of a JDBC Data Source fails if you have
a heterogeneous cluster setup; that is, the cluster
setup must be configured with the same product
profile, and a combination of different product
profiles is not supported.
● You can use the same JDBC Data Source in different
integration flows configured with the JDBC adapter.
But you cannot use the same JDBC Data Source to
configure multiple JDBC adapters within an integration
flow. You need to deploy two JDBC Data Source
artifacts with different names for the same database
and use them in the integration flows having
multiple JDBC adapters.
Connection Timeout (in s) Provide a connection timeout, in seconds, to define how long
the adapter waits for a server response before the connection
retry is terminated.
Query/Response Timeout (in s) Provide a query timeout, in seconds, to define the waiting
duration for receiving a query response. After the elapsed
time, the adapter stops waiting for a response.
You configure the JMS adapter to enable asynchronous messaging using message queues.
The JMS messaging instance that is used in asynchronous messaging scenarios with the JMS or AS2 adapter
has limited resources. Cloud Platform Integration Enterprise Edition sets a limit on the queues, storage, and
connections that you can use in the messaging instance.
For more information about JMS resource and size limits, visit the following blog: Cloud Integration – JMS
Resource and Size Limits .
You can also increase the JMS resources and assign more Enterprise units containing one queue and a
dedicated set of JMS resources.
You can find a step-by-step guide on how to activate Enterprise Messaging here:
It is also possible to activate JMS resources on Cloud Integration tenants without having the Cloud Platform
Integration Enterprise Edition. The resource limits for 5 Enterprise Messaging Units are set in the same way as
for the Cloud Platform Integration Enterprise Edition. For more information on how to activate, increase, or
manage Enterprise Messaging, please visit: Cloud Integration - Activating and Managing Enterprise Messaging
Capabilities
There are also technical restrictions on the size of the headers and exchange properties that can be stored in
the JMS queue.
The following size limits apply when saving messages to JMS queues:
● There are no size limits for the payload. The message is split internally when it is put into the queue.
● There are no size limits for attachments. The message and the attachment are split internally when put
into the queue.
● Headers and exchange properties defined in the integration flow before the message is saved to the queue
must not exceed 4 MB in total.
Note
Message queues that are no longer used (in deployed integration flows) are deleted automatically by a
system job, which runs daily. The Message Queues monitor is adapted accordingly so that deleted message
queues are no longer displayed in this case.
Unused message queues are only deleted automatically if they don’t contain any messages. Message
queues that still contain messages but are no longer required in the Message Queues monitor have to be
deleted manually.
There are certain limitations related to transactional behavior when using this adapter type together with
Data Store Operations steps, Aggregator steps, or global variables. For more information, visit the following
blog: Cloud Integration – How to configure Transaction Handling in Integration Flow .
You configure the JMS receiver and sender adapters to enable asynchronous messaging using message
queues. The incoming message is stored temporarily in a queue and scheduled for processing from that
queue. The processing of messages from the queue is not serialized.
Note
The JMS adapter stores only simple data type headers (primitive data types or strings).
Supported Headers: Headers and Exchange Properties Provided by the Integration Framework [page 900]
Related Information
The JMS Sender Adapter enables asynchronous messaging by using message queues. The sender adapter
stores incoming messages permanently and schedules them for processing in a queue. The messages are
processed concurrently.
Certain constraints apply with regard to the number and capacity of involved queues, as well as for the headers
and exchange properties defined in the integration flow before the message is saved to the queue. See JMS
Adapter [page 581]
Once you have created a sender channel and selected the JMS Sender Adapter, you can configure the following
attributes. See Integration Flow Editor for SAP Cloud Platform Integration [page 495].
Select the General tab and provide values in the fields as follows.
General
Parameter Description
Select the Connection tab and provide values in the fields as follows.
Processing Details Queue Name Enter the name of the message queue.
Retry Handling Retry Interval (in m) Enter a value for the amount of time to
wait before retrying message delivery.
Maximum Retry Interval (in m)* (only configurable when Exponential Backoff is selected) Enter a value
for the maximum amount of time to wait before retrying message delivery. The minimum value is 10
minutes. The default value is set to 60 minutes.
Note
For high-load scenarios, or if you are sure that only small messages will be processed in your
scenario, you should deselect the checkbox to improve the performance. But be aware that there is a
risk of an outage, for example, if you run out of memory.
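The interaction between the Retry Interval and Maximum Retry Interval parameters under exponential backoff can be sketched as follows. This is an illustrative calculation only, not adapter code; the exact doubling behavior is an assumption based on the parameter descriptions above:

```python
def retry_schedule(retry_interval_min, max_retry_interval_min, attempts):
    """Illustrative exponential backoff: the wait time doubles after each
    failed delivery attempt and is capped at the maximum retry interval."""
    waits = []
    wait = retry_interval_min
    for _ in range(attempts):
        waits.append(min(wait, max_retry_interval_min))
        wait *= 2
    return waits

# With a 1-minute retry interval and the default 60-minute maximum:
print(retry_schedule(1, 60, 8))  # [1, 2, 4, 8, 16, 32, 60, 60]
```

Without exponential backoff, the adapter simply waits the fixed Retry Interval between attempts.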
Related Information
The JMS Receiver Adapter enables asynchronous messaging by using message queues.
Certain constraints apply with regard to the number and capacity of involved queues, as well as for the headers
and exchange properties defined in the integration flow before the message is saved to the queue. See JMS
Adapter [page 581]
Once you have created a receiver channel and selected the JMS Receiver Adapter, you can configure the
following attributes. See Integration Flow Editor for SAP Cloud Platform Integration [page 495].
Select the General tab and provide values in the fields as follows.
General
Parameter Description
Select the Processing tab and provide values in the fields as follows.
Processing Details Queue Name Enter the name of the message queue.
Retention Threshold for Alerting (in d) Enter the time period (in days) by which
the messages have to be fetched. The default value is 2.
Expiration Period (in d)* Enter the number of days after which the stored messages are deleted
(default is 90).
Transfer Exchange Properties You can select this option to also transfer the exchange properties to
the JMS queue.
The JMS messaging instance that is used in asynchronous messaging scenarios with the JMS or AS2 adapters
has limited resources. This topic shows how to deal with this limitation.
To check the currently used resources, go to the Monitor application and open the Message Queues tile (under
Manage Stores). The JMS resources are shown at the top of the page.
Processing JMS messages always requires a consumer (JMS sender adapter) or provider (JMS receiver
adapter) and a transaction. In critical resource situations (as analyzed by the Manage Queues monitor), you
can optimize the usage of transactions based on the following calculations.
Consumers Minimum: No. of runtime nodes * No. of JMS queues. Maximum: No. of runtime nodes * No. of
JMS queues * No. of concurrent processes.
Providers The number of providers for a tenant cannot be calculated (it depends on the sender
system).
Transactions Minimum: Min. no. of consumers + No. of providers. Maximum: Max. no. of consumers + No.
of providers.
Notes
● No. of JMS queues: Note that when the first integration flow that uses a JMS queue is deployed, a queue is
created.
● To find out the number of runtime nodes or the number of tenant management nodes, open the tile
Message Queues (in the Monitor section under Manage Stores) and click Details in the information box
below the header.
● Note that transactions are distributed dynamically to providers and consumers.
For more information, read the SAP Community blog: Cloud Integration – JMS Resource and Size Limits .
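As a rough sketch, the consumer and transaction formulas above can be expressed as follows. This is an illustrative calculation only; the node, queue, process, and provider counts are invented example values:

```python
def jms_consumers(runtime_nodes, jms_queues, concurrent_processes):
    """Return the (minimum, maximum) number of JMS consumers
    per the formulas: nodes * queues and nodes * queues * processes."""
    return (runtime_nodes * jms_queues,
            runtime_nodes * jms_queues * concurrent_processes)

def jms_transactions(min_consumers, max_consumers, providers):
    """Return the (minimum, maximum) number of transactions:
    consumers plus providers."""
    return (min_consumers + providers, max_consumers + providers)

# Example: 2 runtime nodes, 5 queues, 3 concurrent processes, 4 providers.
min_c, max_c = jms_consumers(2, 5, 3)     # (10, 30)
print(jms_transactions(min_c, max_c, 4))  # (14, 34)
```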
The Lightweight Directory Access Protocol (LDAP) Receiver Adapter enables you to communicate with
systems that expose data through LDAP service.
In case you have input messages in different formats, you need to use a mapping step to create a
target payload that the LDAP receiver adapter can recognize. You can use this schema as a template
for the target in the mapping step.
Note
● You cannot update multiple records in a single processing cycle. You can only perform a given
operation on one record at a time.
● You can update an attribute that has multiple values.
Remember
You must use SAP Cloud Connector for connecting to an LDAP service using the LDAP adapter. The LDAP
adapter supports version 2.9 or higher of the Cloud Connector.
For more information on using the SAP Cloud Connector, see SAP Cloud Platform Connector.
Once you have created a receiver channel and selected the LDAP Receiver Adapter, you can configure the
following attributes. See Integration Flow Editor for SAP Cloud Platform Integration [page 495].
Select the General tab and provide values in the fields as follows.
Select the Connection tab and provide values in the fields as follows.
Connection
Parameter Description
Address Enter the URL of the LDAP directory service that you are
connecting to.
Proxy Type Select the proxy type that you want to use. Currently, only On Premise is supported.
Authentication Select the authentication type that you want to use. Currently, only Simple is
supported.
Credential Name Enter the credential name you have deployed in the tenant.
Select the Processing tab and provide values in the fields as follows.
Processing
Parameter Description
Operation Select the operation that you want to perform. The supported operations are:
Input Type Select the type of input that you are providing (applicable
only for Insert operation).
Scope Define the extent of the search below the Base DN. The available values are:
● Object
● One Level
● Subtree
Search Filter Specify the search criteria applied to each entry within the scope.
Example
● (&(mail=*.gmail.com)(objectClass=user))
● (sAMAccountType=805306368)
Output Type Select the type of output format when the data is returned.
Example
cn, sAMAccountName, sn, givenName
Size Limit Define an integer value to set the maximum number of entries that are returned.
Timeout (in min) Define an integer value to set the maximum time the server should wait before
returning the results.
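Search filters like the examples above follow the standard LDAP prefix notation (RFC 4515). Purely as an illustration of how such a filter string is composed — the helper function below is not part of the adapter; you simply type the finished filter into the Search Filter field — consider:

```python
def ldap_and_filter(**criteria):
    """Compose an LDAP AND filter in prefix notation.
    A single criterion yields (attr=value); several criteria
    are wrapped in (&...), e.g. (&(mail=*.gmail.com)(objectClass=user))."""
    parts = ["({0}={1})".format(attr, value) for attr, value in criteria.items()]
    return parts[0] if len(parts) == 1 else "(&" + "".join(parts) + ")"

print(ldap_and_filter(mail="*.gmail.com", objectClass="user"))
# (&(mail=*.gmail.com)(objectClass=user))
print(ldap_and_filter(sAMAccountType="805306368"))
# (sAMAccountType=805306368)
```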
The LDAP adapter supports input via JNDI attributes. If you choose this as the input type, you use a
script step to assign values to attributes that are then passed to the LDAP service.
importClass(com.sap.gateway.ip.core.customdev.util.Message);
importClass(java.util.HashMap);
importClass(javax.naming.directory.Attribute);
importClass(javax.naming.directory.BasicAttribute);
importClass(javax.naming.directory.BasicAttributes);
importClass(javax.naming.directory.Attributes);
In the script, the values for the attributes givenName, displayName, and telephoneNumber are
declared before they are passed to the LDAP adapter. Similarly, you can also create a script where
these values are fetched dynamically at runtime.
The example schema contains a set of attributes for a given record. In case you want the schema to contain
additional attributes, you can manually edit the schema before using it in the mapping step.
For example, if you want to add a field, telephone number, you can add an element in the schema under the
sequence element.
Let us consider a scenario where you want to add an attribute to the message (payload) that you are sending to
the LDAP service. For example, you want to add a password attribute. Due to security concerns, you should
encode the password before you add it.
You can achieve this by adding a Script step after the mapping step in the integration flow. Here's an example of
the script that you can use in the Script step:
import com.sap.gateway.ip.core.customdev.util.Message;
import javax.naming.directory.Attribute;
import javax.naming.directory.Attributes;
import javax.naming.directory.BasicAttribute;
import javax.naming.directory.BasicAttributes;

def Message processData(Message message) {
    Attributes attributes = new BasicAttributes();
    // Active Directory expects the password value enclosed in double quotes
    // and encoded as UTF-16LE before it is written to the unicodePwd attribute.
    String quotedPassword = '"' + "Initial@1" + '"';
    byte[] unicodePasswordByteArray = quotedPassword.getBytes("UTF-16LE");
    attributes.put(new BasicAttribute("unicodePwd", unicodePasswordByteArray));
    // The LDAP receiver adapter picks up additional attributes from this header.
    message.setHeader("SAP_LDAPAttributes", attributes);
    return message;
}
You can use the Modify operation to change the DN of an LDAP record. You can do this by adding the tag
<DistinguishedName_Previous> to the input payload with the old DN. Specify the modified DN in
<DistinguishedName> tag and perform the Modify operation.
Here's an example that shows a sample input payload for modifying the DN of an LDAP record:
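A minimal sketch of such a payload is shown below. The record structure and DN values are invented for illustration; only the <DistinguishedName_Previous> and <DistinguishedName> tags are taken from this documentation:

```xml
<!-- Illustrative only: input payload for the Modify operation -->
<LDAPRecord>
    <!-- old DN of the record -->
    <DistinguishedName_Previous>cn=JohnDoe,ou=Users,dc=example,dc=com</DistinguishedName_Previous>
    <!-- new DN to assign -->
    <DistinguishedName>cn=JohnDoe,ou=Admins,dc=example,dc=com</DistinguishedName>
</LDAPRecord>
```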
The mail adapter allows you to connect the tenant to an email server. The sender mail adapter can
download e-mails and access the e-mail body content as well as attachments. The receiver mail
adapter allows you to send encrypted messages by e-mail.
Related Information
You use the mail sender adapter to download e-mails from mailboxes using the IMAP or POP3 protocol,
to access the content of the e-mail body, and to access e-mail attachments.
If you have configured a mail sender adapter, message processing is performed as follows at runtime:
According to the Scheduler settings in the mail sender adapter, the tenant sends requests to an email server
(think of this as the sender system), but the data flow is in the opposite direction, from the mail server to the
tenant. In other words, the tenant reads files from the mail server (a process that is also referred to as polling).
Note
In the following cases certain features might not be available for your current integration flow:
● A feature for a particular adapter or step was released after you created the corresponding shape in
your integration flow.
● You are using a product profile other than the one expected.
More information: Adapter and Integration Flow Step Versions [page 405]
Caution
To take the necessary precautions in order to avoid any unwanted behavior of your scenario (in
particular, security risks), check out the following topic:
Once you have created a sender channel and selected the mail sender adapter, you can configure the following
attributes. See Integration Flow Editor for SAP Cloud Platform Integration [page 495].
Select the General tab and provide values in the fields as follows.
Select the Connection tab and provide values in the fields as follows.
Connection
Parameter Description
Address Specifies the host name or address of the IMAP server, for
example, mail.domain.com.
Use one of the following open ports for external mail servers:
Proxy Type The type of proxy that you’re using to connect to the target
system.
Timeout (in ms) Specifies the network timeout for the connection attempt to
the server. The default value is 30000.
Location ID (only if On-Premise is selected for Proxy Type) To connect to an SAP Cloud Connector instance associated
with your account, enter the location ID that you defined for
this instance in the destination configuration on the cloud
side.
● Off
No encryption is initiated by the client.
Note
If your on-premise mail server requires SMTPS, select Off for Protection. The SSL connection then
needs to be configured in SAP Cloud Connector.
Credential Name Specifies the name of the User Credentials artifact that contains the user name and
password (used to authenticate at the email account).
Processing
Parameter Description
Selection (only if IMAP4 has been selected as Transport Protocol) Specify which mails will be
processed (all mails or only unread ones).
Max. Messages per Poll Defines the maximum number of messages that will be read from the email
server in one polling step.
Note
If Post-Processing is set to Mark as Read and the poll
strategy is set to poll for all mails (Selection: All), then
already processed mails will be processed again at every
polling interval.
Select the Scheduler tab and provide values in the fields as follows.
Scheduler
Schedule on Day (mails are to be polled on a specific day) On Date Specify the date on which you
want the operation to be executed.
Time Zone Select the time zone that you want the
scheduler to use as a reference for the
date and time settings.
Example: With the configuration shown in the figure below, the integration flow will be activated
every week on Monday to poll emails on this day every hour, between 00:00 and 24:00 (Greenwich Time
Zone).
The Run Once option has been removed in the newest version of the adapter. Default values for the interval
under Schedule on Day and Schedule to Recur have been changed so that the scheduler runs every 10 seconds
between 00:00 and 24:00.
Additional Notes
Related Information
When using the mail sender adapter, be aware of the following in order to avoid any unwanted results.
Security Risks
Unlike with other adapters, if you are using the sender mail adapter, the Cloud Integration system cannot
authenticate the sender of an e-mail.
Therefore, if someone is sending you malware, for example, it is not possible to identify and block this sender in
the Cloud Integration system.
To minimize this danger, you can use the authentication mechanism of your mailbox. Bear in mind, however,
that this mechanism might not be sufficient to protect against such attacks.
The mailbox settings for downloading e-mails can interfere with the settings in the sender mail adapter.
For example: When using the POP3 protocol, the post-processing setting Delete/Remove might not work properly.
In this case, try to configure the correct behavior in the mailbox.
You use the mail receiver adapter to send encrypted messages by e-mail.
Note
In the following cases certain features might not be available for your current integration flow:
● A feature for a particular adapter or step was released after you created the corresponding shape in
your integration flow.
● You are using a product profile other than the one expected.
More information: Adapter and Integration Flow Step Versions [page 405]
Note
For an example of how to configure the mail receiver adapter in a dedicated demo integration
scenario, check out the following topic:
If you want to learn how to use the mail receiver adapter to send signed and/or encrypted mails, this blog is
for you: Cloud Integration - Sending Signed and/or Encrypted Mails in Mail Receiver Adapter
Once you have created a receiver channel and selected the mail receiver adapter, you can configure the
following attributes.
Select the General tab and provide values in the fields as follows.
Select the Connection tab and provide values in the fields as follows.
Connection
Parameter Description
Address Specifies the host name and (optionally) a port number of the SMTP server.
Use one of the following open ports for external mail servers:
Timeout (in ms) Specifies the network timeout for the connection attempt to the server.
The default value is 30000. The timeout should be more than 0, but less than five minutes.
Proxy Type The type of proxy that you’re using to connect to the target system.
Location ID (only if On-Premise is selected for Proxy Type) To connect to a cloud connector
instance associated with your account, enter the location ID that you defined for this instance in
the destination configuration on the cloud side.
● Off
No encryption is initiated, whether the server requests it or not.
Note
If your on-premise mail server requires SMTPS, select Off for Protection. The
SSL connection then needs to be configured in SAP Cloud Connector.
● STARTTLS Mandatory
If the server supports STARTTLS, the client initiates encryption using TLS. If the
server doesn’t support this option, the connection fails.
● STARTTLS Optional
If the server supports the STARTTLS command, the connection is upgraded to Transport Layer Security
encryption. This works with the normal port 25. If the server doesn't support this option, client
and server remain connected but communicate without encryption.
● SMTPS (only when None has been selected for Proxy Type)
The TCP connection to the server is encrypted using SSL/TLS. This usually requires
an SSL proxy on the server side and access to the port it runs on.
Authentication Specifies which mechanism is used to authenticate against the server with a user
name and password combination. Possible values are:
● None
No authentication is attempted. No credential can be chosen.
● Plain User Name/Password
The user name and password are sent in plain text. You should only use this option together with
SSL or TLS, as otherwise an attacker could obtain the password.
● Encrypted User/Password
The user name and password are hashed before being sent to the server. This authentication
mechanism (CRAM-MD5 and DIGEST-MD5) is secure even without encryption.
Credential Name Specifies the name of a deployed credential to use for authentication.
To If you want to configure multiple mail receivers, use a comma (,) to separate the addresses.
Cc If you want to configure multiple mail receivers, use a comma (,) to separate the addresses.
Bcc If you want to configure multiple mail receivers, use a comma (,) to separate the addresses.
Body MIME Type Specifies the type of the message body. This type determines how the message is
displayed by different user agents.
Body Encoding Specifies the character encoding (character set) of the message body. The content of
the input message will be converted to this encoding, and any character that is not available will
be replaced with a question mark ('?'). To ensure that data is passed unmodified, select a Unicode
encoding, for example, UTF-8.
MIME Type (under Attachments) The Multipurpose Internet Mail Extensions (MIME) type specifies the
data format of the e-mail.
● Text/Plain
● Text/CSV
● Text/HTML
● Application/XML
● Application/JSON
● Application/Octet-Stream
Source Specifies the source of the data. This can be either Body, meaning the body of the input
message, or Header, meaning a header of the input message.
Header Name If the source is Header, this parameter specifies the name of the header that is attached.
Add Message Attachments Select this option to add all attachments contained in the message exchange
to the e-mail.
The parameters From, To, Cc, Bcc, Subject, and Mail Body, as well as the attachment name, can be
set dynamically at runtime from message headers or content.
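The replacement behavior described for the Body Encoding parameter above corresponds to standard character-set conversion, illustrated here in Python (not adapter code; the sample text is invented):

```python
# Characters that do not exist in the target encoding are replaced with '?'.
# Here, the German umlaut and sharp s are not representable in ASCII.
body = "Grüße aus Walldorf"
encoded = body.encode("ascii", errors="replace")
print(encoded)  # b'Gr??e aus Walldorf'
```

With a Unicode encoding such as UTF-8, no character is lost, which is why the documentation recommends it.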
Select the Security tab and provide values in the fields as follows.
Security
Parameter Description
Signature and Encryption Type This parameter configures encryption and signature
schemes used for sending e-mails. The message body and
attachments are encrypted with the selected scheme and
can only be decrypted by the intended recipients.
● None
● S/MIME Encryption
● S/MIME Signature
● S/MIME Signature and Encryption
Content Encryption Algorithm Specifies the symmetric (block) cipher. DESede should only be chosen
if the destination system or mail client doesn't support AES.
Secret Key Length Specifies the key size of the previously chosen symmetric cipher. To increase the
security, choose the maximum key size supported by the destination.
Receiver Public Key Alias Specifies an alias for the public key that is to be used to encrypt the
message. This key has to be part of the tenant keystore.
Send Clear Text Signed Message Sends the signed message as clear text, so that recipients who don't
have S/MIME security are able to read the message.
Private Key Alias Specifies an alias for the private key that is to be used to decrypt the message.
This key has to be part of the tenant keystore. The alias can be dynamically read from a header or
property using ${header.alias}.
Signature Algorithm Specifies the algorithm used to sign the content using the private key.
Related Information
With the mail sender and receiver adapters, you have several options to protect the communication.
The OData adapter allows you to communicate with an OData service using the OData protocol. You use
messages in ATOM or JSON format for communication. This OData adapter uses the OData V2 message
protocol.
In the sender channel, the OData adapter listens for incoming requests in either ATOM or JSON format.
In the receiver channel, the OData adapter sends the OData request in the format you choose (ATOM or JSON)
to the OData service provider.
Note
In the case of OData service artifacts, the OData adapter in the sender channel is not editable. It
is prepopulated with the data you provided when binding OData objects to a data source.
Tip
If your input payload contains nodes without data, the output also contains empty strings. If you want to
avoid empty strings in the output, ensure that the input payload does not contain any empty nodes.
Related Information
The OData sender adapter supports externalization. To externalize the parameters of this adapter choose
Externalize and follow the steps mentioned in Externalize Parameters of an Integration Flow [page 489].
Remember
This component or some of its features might not be available in the Cloud Foundry environment. For more
information on the limitations, see SAP Note 2752867 .
Note
In the following cases certain features might not be available for your current integration flow:
● A feature for a particular adapter or step was released after you created the corresponding shape in
your integration flow.
● You are using a product profile other than the one expected.
More information: Adapter and Integration Flow Step Versions [page 405]
Once you have created a sender channel and selected the OData sender adapter, you can configure the
following attributes. See Integration Flow Editor for SAP Cloud Platform Integration [page 495].
Select the General tab and provide values in the fields as follows.
Select the Adapter Specific tab and provide values in the fields as follows.
Adapter Specific
Parameter Description
User Role (only if you select Authorization as User Role) The user role that you are using for
authorization. Choose Select to get a list of all the available user roles for your tenant and
select the one that you want to use.
Client Certificate (only if you select Authorization as Client Certificate) The client certificate
that you are using for authorization. Choose Add to add a new row and then choose Select to add the
Subject DN and Issuer DN.
EDMX Select the EDMX file that contains the OData service definition.
Operation Select the operation that you want to perform on the selected Entity Set in the OData
service.
Entity Set Entity set in the OData service that you want to perform the operation on.
Note
The adapter does not support empty response payloads. If used, invoking the integration flow will
result in an error.
Remember
This component or some of its features might not be available in the Cloud Foundry environment. For more
information on the limitations, see SAP Note 2752867 .
The OData receiver adapter supports externalization. To externalize the parameters of this adapter,
choose Externalize and follow the steps mentioned in Externalize Parameters of an Integration Flow
[page 489].
Select the General tab and provide values in the fields as follows.
General
Parameter Description
Select the Connection tab and provide values in the fields as follows.
Connection
Parameter Description
Address URL of the OData V2 service that you want to connect to.
Proxy Type The type of proxy you want to use for establishing a connection with the OData V2
service.
● None
● Basic
● Principal Propagation
● Client Certificate
Credential Name Credential name of the credentials that you have deployed in
Security Material section of (Operations View).
(only if you choose Authentication as Basic, OAuth2 Client
Credentials, or OAuth2 SAML Bearer Assertion).
Private Key Alias Enter the private key alias that enables the system to fetch
the private key from keystore for authentication.
(only if you select Client Certificate Authentication).
Restriction
The values true and false are not supported for this
field.
CSRF Protected Keep this option selected (default setting). It ensures that your integration flow
is protected against Cross-Site-Request-Forgery (CSRF), a kind of attack where a malicious party
can perform harmful actions by masquerading as the logged-in user.
Select the Processing tab and provide values in the fields as follows.
Parameter Description
Operation Details The operation that you want to perform on the selected OData entity or resource.
Note
For non-GET operations, if the server responds with an HTTP 2xx series code, the same response code
is accepted for message processing.
Resource Path Choose Select to launch the Model Operation wizard. Refer to the table Model
Operation for the fields and descriptions of the model operation wizard.
Note
● You can also specify the Resource Path parameter dynamically using a header or a property (with
an expression such as ${header.resourcePath}, for example). Note, however, that XPath expressions
are not supported for this parameter.
● Navigation entities in the resource path are not supported.
Note
When you use this adapter with the Update or Merge operation, use the following format when
specifying the value of the Resource Path parameter: EntitySetName(key=value) or
EntitySetName(key), for example, Product(productId='001').
Query Options Query options that you are passing as a part of the URI to
the OData V2 service.
(enabled for Query(GET) and Read(GET) operations.)
Fields (enabled for Create(POST), Merge(MERGE), Patch(PATCH), and Update(PUT) operations) Fields in
the entity that you are performing the operation on. If you use the Model Operation wizard, the
input for this field is provided by the wizard.
Enable Batch Processing Select to perform multiple operations in one request to the
OData V2 service in the $batch mode.
Custom Query Options Additional query options that are not available in the Model
Operation wizard.
Content Type (available only for the Merge(MERGE) and Patch(PATCH) operations) Type of content that
you are sending to the OData V2 backend service. The adapter supports the following content types:
● Atom (default): Select this option to convert an XML payload to ATOM XML format.
● JSON: Select this option to convert an XML payload to JSON format.
Content Type Encoding Encoding type used for sending content to OData service.
Note
It is recommended to leave the value Empty.
Note
You should select the Process in Pages checkbox if you
are using the adapter in a Local Integration Process that
is invoked by a Looping Process Call step.
Note
You can pass custom HTTP headers to the OData receiver if you have defined the header in a Content
Modifier or Script element and the element is placed before the OData receiver adapter in the
integration flow.
Timeout (in min) Maximum time the adapter should wait for receiving a response from the OData V2
service.
HEADER DETAILS Request Headers: Provide a pipe (|) separated list of the HTTP request headers that
have to be sent to the OData backend.
Metadata Details The OData receiver adapter makes a $metadata call before the actual endpoint call.
Not all headers or query parameters are passed to the $metadata call. If your service needs certain
headers (for example, the header apikey, a customer authorization header used to invoke API
endpoints) or parameters, you can provide them in the request headers and query parameters.
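The Request Headers field described above expects a pipe-separated list. As a small illustration (the header names are invented examples, not required values), such a value can be split as follows:

```python
# A pipe-separated header list as entered in the adapter configuration.
request_headers = "Authorization|apikey|Content-Language"
headers = [h.strip() for h in request_headers.split("|")]
print(headers)  # ['Authorization', 'apikey', 'Content-Language']
```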
Modeling Operation
This adapter provides a wizard for modeling operations easily. It is recommended that you use this wizard to
ensure that the operation does not contain any errors. The wizard can also fetch the Externalized parameters
that are maintained under the Connection details of the OData V2 outbound adapter.
1. Connect to System: In this step, you provide the details required for connecting to the web service that you
are accessing.
2. Select Entity and Define Operation: In this step, you select the operation you want to perform
and the entity on which you want to perform it. After selecting the entity, you also select the
fields and the filtering and sorting conditions.
3. Configure Filter & Sorting: This step is available only for data fetch operations, where you can define the
order in which the records are fetched in the response payload and filter for the fields that you require.
Connect to System
Parameter Description
If you choose Local EDMX File, you select the service definition EDMX file, which contains all the
details that you specified manually when you selected Remote.
Local EDMX File (only if you select Connection Source as Local EDMX File) Choose Select to select
the EDMX service schema. You can also manually upload it from your local file system.
Address URL of the service that you want to access. If you are connecting to an on-premise system,
enter the Virtual Host in your Cloud Connector installation.
Proxy Type Type of proxy that you want to use to connect to the service.
Parameter Description
● Complex types
● Collection of complex types
● Simple types
● Collection of simple types
● Void
Sub-Levels Sub-levels of the entity that you want to access. For example, if you want to access the
field Description in the entity
Select Entity Entity that you want to perform the operation on.
Fields Fields associated with the entity that you want to perform
the operation on.
Skip Specifies the number 'n' of entries to be skipped; the rest of the entries are fetched.
This step is available only for data fetch operations, Query(GET) and Read(GET).
Parameter Description
Filter By Select the field that you want to use as reference for filtering, choose the operation
(for example, Less Than or Equal), and provide a value.
Note
The IN operation is available under filtering when editing
the query manually. This operation is not available in the
Query Modelling wizard.
Example
https://<hostname>/odata/v2/User?$filter=userId
in 'ctse1','mhuang1','flynch1'
Sort By Select the field that you want to use as sorting parameter
and choose Ascending or Descending order.
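The filter and sort settings defined in the wizard are ultimately sent as ordinary OData system query options. As an illustration (the host name, entity, and field values are invented), composing such a query string by hand could look like this:

```python
from urllib.parse import urlencode, quote

# Query options equivalent to "Filter By userId, Sort By lastName ascending".
options = {
    "$filter": "userId eq 'ctse1'",
    "$orderby": "lastName asc",
    "$top": "5",
}
# quote (rather than the default quote_plus) encodes spaces as %20,
# which is what OData URIs conventionally use.
query = urlencode(options, quote_via=quote)
print("https://example.com/odata/v2/User?" + query)
```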
You configure the OData V4 receiver adapter by understanding the adapter parameters.
The OData V4 receiver adapter supports externalization. To externalize the parameters of this adapter choose
Externalize and follow the steps mentioned in Externalize Parameters of an Integration Flow [page 489].
Once you have created a receiver channel and selected the OData V4 Receiver Adapter, you can configure the
following attributes. See Integration Flow Editor for SAP Cloud Platform Integration [page 495].
Select the General tab and provide values in the fields as follows.
Select the Connection tab and provide values in the fields as follows.
Connection
Parameter Description
Address Service root URI of the OData V4 service that you want to
connect to.
Proxy Type The type of proxy you want to use for establishing a connection with the OData V4
service. Currently, you can choose between:
● Internet
● On-Premise
● None
● Basic
Credential Name Credential name of the credentials that you have deployed in
Security Material section of (Operations View).
(only if Basic or OAuth2 Client Credentials is selected for
Authentication).
Allow Chunking Select this option if you want to allow the chunking of the
data.
CSRF Protected Keep this option selected (default setting). It ensures that your integration flow
is protected against Cross-Site-Request-Forgery (CSRF), a kind of attack where a malicious party
can perform harmful actions by masquerading as the logged-in user.
Select the Processing tab and provide values in the fields as follows.
Parameter Description
● Create(POST)
● Query(GET)
● Update(PUT)
Query Options Query options that you are passing as a part of the URI to
the OData V4 service.
(enabled for Query(GET) operation).
Request Headers Provide a pipe (|) separated list of the HTTP request headers that have to be sent
to the OData backend.
Response Headers Provide a pipe (|) separated list of HTTP response headers. The received header
values will then be converted to message/exchange headers.
The ODC adapter enables you to communicate with systems that expose data through the OData Channel
for SAP Gateway.
Once you have created a receiver channel and selected the ODC Receiver Adapter, you can configure the
following attributes. See Integration Flow Editor for SAP Cloud Platform Integration [page 495].
Select the Connection tab and provide values in the fields as follows.
Connection
Parameter Description
Address Enter the URL of the SAP NetWeaver Gateway OData Channel that you are accessing.
Client Enter the backend system client you want to connect to.
Authentication Select the authentication method you want to use for connecting to the system.
● Basic Authentication
● Deployed Credentials
Credential Name Enter the alias you used while deploying basic authentication
credentials.
Select the Processing tab and provide values in the fields as follows.
Processing
Parameter Description
Operation Select the operation that you want to perform from the drop
down list.
Resource Path Enter the path to the resource that you want to perform the
operation on.
1. Choose .
2. Provide values in the fields based on the descriptions in the table and choose Connect.
Field Description
Authentication Basic Authentication or Deployed Credentials.
You use the SAP Open Connectors receiver adapter in integration flows to communicate with more than 150
non-SAP cloud applications that are supported by the SAP Cloud Platform Open Connectors service.
The Open Connectors adapter connects with non-SAP cloud applications through REST APIs and is not Cross-Site Request Forgery (CSRF) protected. You can set the content type to either JSON or XML format while processing the request or response for a transaction. The adapter also allows you to make calls using common HTTP methods on RESTful API services for outbound transactions.
Create an instance of the connector before you configure the adapter. If you have not yet enabled the SAP
Cloud Platform Open Connectors service, then read Enable SAP Cloud Platform Open Connectors in trial for
more information about how to enable the service using SAP Cloud Platform.
After you have enabled the service, create an instance for a third-party application from the Connectors
catalog. This blog walks you through the steps for authenticating the connection and testing the APIs for the
specific third-party application. For more information, see SAP Cloud Platform Open Connectors.
Note
HTTP requests to the REST APIs are protected with HTTP custom authentication using your organization
secret, user secret, and instance token.
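As a sketch of how these three secrets could be combined into the custom authentication header (the exact header format shown here is an assumption; verify it against your connector instance):

```python
# Build the custom Authorization header from the organization secret,
# user secret, and instance token (format is an assumption, not confirmed).
def auth_header(user_secret, org_secret, instance_token):
    return ("Authorization",
            f"User {user_secret}, Organization {org_secret}, "
            f"Element {instance_token}")

name, value = auth_header("u123", "o456", "t789")
print(f"{name}: {value}")
```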
Once you have created a receiver channel and selected the Open Connectors Receiver Adapter, you can
configure the following attributes. See Integration Flow Editor for SAP Cloud Platform Integration [page 495].
Select the General tab and provide values in the fields as follows.
General
Parameter Description
Select the Connection tab and provide values in the fields as follows.
Parameter Description
Base URI Specify the URI that depends on your region and environment, which helps you establish a connection with the Open Connectors service.
Example
https://api.openconnectors.ext.int.sap.hana.ondemand.com/elements/api-v2
Credential Name Provide the alias value for identifying the connector credentials. This helps the adapter authenticate and communicate with the connector. Choose Select to see and add the relevant Open Connectors security artifact from the list.
Note
● Do not use the $ character while defining query parameters.
● Make sure you provide the Base URI before selecting the resources.
Method Specify the HTTP method to retrieve the data from the connector. Find the suitable HTTP method for the action performed by the API:
● POST
● GET
● PATCH
● PUT
● DELETE
Note
For XML request format, the HTTP methods are read from the source.
Request Format Specify the content format of the incoming message. You
can select one of the following request formats:
● XML
● JSON
Note
You must include <SwaggerParser> as the root element for every XML request as shown in the sample code below.
Sample Code
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<SwaggerParser>
    <basePath>/elements/api-v2</basePath>
    <host>..............</host>
    <Resources>
        <Resource>
            <name>...........</name>
            <Operations>
                <Operation>
                    <methodType>.......</methodType>
                    <parameters>
                        <parameter>
                            ............
                            ............
                        </parameter>
                    </parameters>
                </Operation>
            </Operations>
        </Resource>
    </Resources>
</SwaggerParser>
Response Format Specify the content format for the outgoing message. For example, if you select XML format, then the response is converted to XML and dispatched.
Note
Do not use either JSON to XML or XML to JSON converters in the integration flow for transforming a response message.
Query Parameters for Resource Define the value of a variable you want to modify. For example, you have defined a variable /name/{varname1}{varname2} in the resource field. Now use the Query Parameters for Resource field to define the values for the {varname1} and {varname2} variables. Use commas to separate more than one variable, such as varname1=hello world, varname2=hello SAP.
Note
Defining variable values in the ${property.value1} or ${header.value1} format is not currently supported.
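The variable substitution described above can be sketched as follows (an illustrative sketch of the documented behavior, not the adapter's implementation; the variable names are the ones from the example):

```python
# Replace {placeholders} in the resource path with values from the
# comma-separated "Query Parameters for Resource" field.
def resolve_resource(path, params):
    # params example: "varname1=hello world, varname2=hello SAP"
    for pair in params.split(","):
        name, _, value = pair.strip().partition("=")
        path = path.replace("{" + name + "}", value.strip())
    return path

print(resolve_resource("/name/{varname1}{varname2}",
                       "varname1=hello world, varname2=hello SAP"))
# → /name/hello worldhello SAP
```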
Note
Enable this option only if you are using the receiver
adapter in a Local Integration Process invoked by a
Looping Process Call.
Timeout (in ms) Specify the maximum amount of time the system waits for a
response before terminating the connection.
Related Links
SAP Blog How to create a sample integration scenario using Open Connectors Adapter .
Use the ProcessDirect adapter (sender and receiver) to establish fast and direct communication between integration flows by reducing latency and network overhead, provided both of them are available within the same tenant.
Prerequisites
● The ProcessDirect adapter supports N:1 cardinality, where N (producers) → 1 (consumer).
Multiple producers can connect to a single consumer, but the reverse is not possible. The cardinality restriction is also valid across integration projects. If two consumers with the same endpoint are deployed, the start of the second consumer fails.
● The Address mentioned in the ProcessDirect configuration settings must match for the producer and consumer integration flows at any given point.
Context
ProcessDirect Receiver Adapter: If the ProcessDirect adapter is used to send data to other integration flows,
the integration flow is considered as a producer integration flow. In this case, the integration flow has a receiver
ProcessDirect adapter.
ProcessDirect Sender adapter: If the ProcessDirect adapter is used to consume data from other integration
flows, the integration flow is considered as a consumer integration flow. In this case, the integration flow has a
sender ProcessDirect adapter.
● Decompose large integration flows: You can split a large integration flow into smaller integration flows
and maintain the in-process exchange without network latency.
● Customize the standard content: SAP Cloud Platform Integration enables integration developers to
customize their integration flows without modifying them entirely. The platform provides plugin
touchpoints where integration developers can add custom content. This custom content is currently
connected using HTTP or SOAP adapters. You can also use the ProcessDirect adapter to connect the
plugin touchpoints at a lower network latency.
● Allow multiple integration developers to work on the same integration flow: In some scenarios,
integration flows can be large (100 steps or more), and if they are owned or developed by just one
integration developer this can lead to an overreliance on one person. The ProcessDirect adapter helps you
to split a large integration flow into smaller parts that can be owned and managed by multiple integration
developers independently. This allows several people to work on different parts of the same integration
flow simultaneously.
● Reuse of integration flows by multiple integration processes spanning multiple integration projects:
Enables the reuse of integration flows such as error handling, logging, extension mapping, and retry
handling across different integration projects and Camel context. Integration developers therefore only
need to define repetitive logic flows once and can then call them from any integration flow using the
ProcessDirect adapter.
● Dynamic endpoint: Enables the producer integration flow to route the message dynamically to different consumer integration flow endpoints. The producer integration flow looks up the address in the headers, body, or properties of the exchange, and the corresponding value is then resolved to the endpoint to which the exchange is to be routed.
● Multiple MPLs: MPL logs are interlinked using correlation IDs.
● Transaction processing: Transactions are supported on integration flows using the ProcessDirect adapter.
However, the scope of the transaction is isolated and restricted to a single integration flow.
● Header propagation: Header propagation is not supported across producer and consumer integration
flows unless configured in <Allowed Header(s)> in the integration flow's runtime configuration.
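The dynamic endpoint lookup described above can be sketched as follows (an illustrative sketch under the assumption that the address is a simple expression over the exchange's headers or properties; the exchange structure shown is not the actual runtime API):

```python
import re

# Resolve a simple expression such as ${header.address} against a
# simplified exchange (headers/properties as plain dicts).
def resolve_address(address, exchange):
    def repl(match):
        scope, name = match.group(1), match.group(2)
        return str(exchange[scope][name])
    return re.sub(r"\$\{(header|property)\.(\w+)\}", repl, address)

exchange = {"header": {"address": "/localiprequiresnew"}, "property": {}}
print(resolve_address("${header.address}", exchange))
# → /localiprequiresnew
```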
Tip
The ProcessDirect adapter improves network latency because message propagation across integration flows does not involve the load balancer. However, we recommend that you consider memory utilization in scenarios involving heavy payloads; in such scenarios, you can alternatively use the HTTP adapter, because the behavior will be the same.
Restriction
Related Information
You use the ProcessDirect sender adapter to establish fast and direct communication between integration flows by reducing latency and network overhead, provided both of them are available within the same tenant.
Once you have created a sender channel and selected the ProcessDirect sender adapter, you can configure the
following attributes. See Integration Flow Editor for SAP Cloud Platform Integration [page 495].
Select the General tab and provide values in the fields as follows.
General
Parameter Description
Select the Connection tab and provide values in the fields as follows.
Parameter Description
Address URL of the target system that you are connecting to. For example, /localiprequiresnew.
Note
It may or may not start with "/". It can contain alphanumeric characters and special characters such as underscore "_" or hyphen "-". You can also use simple expressions, for example, ${header.address}.
Remember
● If the consumer has an <Escalated End Event> in the <Exception Sub-Process>, then in case of an exception in the consumer, the MPL status for the producer varies based on the following cases:
○ If the producer integration flow starts with <Timer>, the MPL status for the consumer will be Escalated and for the producer, it will be Completed.
○ If the producer integration flow starts with an <HTTP> sender, the MPL status for the consumer will be Escalated and for the producer, it will be Failed.
● The combination of <Iterating Splitter> and <Aggregator> in the producer integration flow might generate an extra MPL (Aggregator MPL) due to the default behavior of the Aggregator.
● The <Send> component is incompatible with the ProcessDirect adapter, as the adapter does not support asynchronous message exchange and expects a response.
To learn more about the adapter, see blog on ProcessDirect Adapter in SAP Community .
You use the ProcessDirect receiver adapter to establish fast and direct communication between integration flows by reducing latency and network overhead, provided both of them are available within the same tenant.
Once you have created a receiver channel and selected the ProcessDirect receiver adapter, you can configure
the following attributes. See Integration Flow Editor for SAP Cloud Platform Integration [page 495].
Select the General tab and provide values in the fields as follows.
General
Parameter Description
Select the Connection tab and provide values in the fields as follows.
Parameter Description
Address URL of the target system that you are connecting to. For example, /localiprequiresnew.
Note
It may or may not start with "/". It can contain alphanumeric characters and special characters such as underscore "_" or hyphen "-". You can also use simple expressions, for example, ${header.address}.
Remember
● If the consumer has an <Escalated End Event> in the <Exception Sub-Process>, then in case of an exception in the consumer, the MPL status for the producer varies based on the following cases:
○ If the producer integration flow starts with <Timer>, the MPL status for the consumer will be Escalated and for the producer, it will be Completed.
○ If the producer integration flow starts with an <HTTP> sender, the MPL status for the consumer will be Escalated and for the producer, it will be Failed.
● The combination of <Iterating Splitter> and <Aggregator> in the producer integration flow might generate an extra MPL (Aggregator MPL) due to the default behavior of the Aggregator.
● The <Send> component is incompatible with the ProcessDirect adapter, as the adapter does not support asynchronous message exchange and expects a response.
To learn more about the adapter, see blog on ProcessDirect Adapter in SAP Community .
You can use Remote Function Call (RFC) to integrate on-premise ABAP systems with the systems hosted on
the cloud using the Cloud connector.
Note
Before you can deploy your integration flow, you need to have defined the required RFC destination for your application. To create destinations, you need either the administrator or developer role in the SAP Cloud Platform cockpit. For more information on how to create RFC destinations, see Creating an RFC Destination [page 629]. You also need the remote function module XSD file. For more information on how to generate the XSD file, see Generating XSD/WSDL for Function Modules Using ESR (Process Integration) [page 629].
Note
In the following cases certain features might not be available for your current integration flow:
● A feature for a particular adapter or step was released after you created the corresponding shape in
your integration flow.
More information: Adapter and Integration Flow Step Versions [page 405]
RFC executes the function call using synchronous communication, which means that both systems must be
available at the time the call is made. When the call is made for the function module using the RFC interface,
the calling program must specify the parameters of the connection in the form of an RFC destination. RFC
destinations provide the configuration needed to communicate with an on-premise ABAP system through an
RFC interface.
The RFC destination configuration settings are used by the SAP Java Connector (SAP JCo) to establish and manage the connection.
Remember
Once you have created a receiver channel and selected the RFC Receiver Adapter, you can configure the
following attributes. See Integration Flow Editor for SAP Cloud Platform Integration [page 495].
Select the General tab and provide values in the fields as follows.
General
Parameter Description
Select the Connection tab and provide values in the fields as follows.
Connection
Parameter Description
Send Confirm Transaction (Applicable to BAPI functions) By default, this option is disabled. You can enable this option if you want to support BAPI functions that require BAPI_TRANSACTION_COMMIT to be invoked implicitly by the RFC receiver adapter.
Note
Ensure the following ABAP functions are whitelisted in the Cloud connector before using this option:
● BAPI_TRANSACTION_COMMIT
● BAPI_TRANSACTION_ROLLBACK
Caution
If you enable this option for non-BAPI functions, BAPI_TRANSACTION_COMMIT can still be invoked, which is redundant and hence may impact the RFC function execution time.
Create New Connection If you enable this option, the adapter creates a new RFC connection in the backend system every time a new call is made to the target system.
Remember
This option must be selected if you are using principal
propagation. If you do not select this option, principal
propagation will not work as expected.
Note
You can create dynamic destinations by using expressions (header, property) in the Content Modifier. Select the Content Modifier in the integration flow. Then go to Message Header in the Content Modifier properties and assign the destination name as the value of the header. Select your RFC adapter and assign the dynamic destination by using the expression ${header.<header name>} or ${property.<property name>}. For example, ${header.abc} or ${property.abc}, where abc is the name of the header or property that holds the destination name.
Related Information
Generating XSD/WSDL for Function Modules Using ESR (Process Integration) [page 629]
Creating an RFC Destination [page 629]
Generate an XSD/WSDL file for a function module using the Enterprise Services Repository (ESR).
Procedure
Related Information
Create an RFC destination by adding necessary properties before using it in the integration flow of RFC adapter.
Context
Remember
There are currently certain limitations when working in the Cloud Foundry environment. For more
information on the limitations, see SAP Note 2752867 .
1. Log into the SAP Cloud Platform cockpit and choose Connectivity Destinations .
2. Choose New Destination and enter destination name.
3. Enter the details for the parameters <Name>, <URL>, <User>, <Password>, and select <Type> as RFC.
4. Add the following properties (first four properties are mandatory) and assign appropriate values:
○ jco.client.ashost
Note
The value provided for application server host (ashost) property is the access control value created
in the Cloud connector.
○ jco.client.client
○ jco.client.lang
○ jco.client.sysnr
○ (optional) jco.destination.pool_capacity
○ jco.destination.auth_type
Note
If you are using principal propagation for authentication, for the property jco.destination.auth_type,
specify the value as PrincipalPropagation. Do not specify any additional credentials in the
destination.
5. Select Save.
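Putting the steps above together, such a destination could look as follows (a sketch with placeholder values, using principal propagation; your host, client, and system number will differ):

```
Name=RFC_ERP_DEST
Type=RFC
jco.client.ashost=virtual-erp-host
jco.client.client=100
jco.client.lang=EN
jco.client.sysnr=00
jco.destination.pool_capacity=5
jco.destination.auth_type=PrincipalPropagation
```

Note that with auth_type set to PrincipalPropagation, no user or password is maintained in the destination, as described in the note above.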
The SFTP adapter connects an SAP Cloud Platform tenant to a remote system using the SSH File Transfer Protocol to write files to the system. Unlike standard FTP, the SFTP adapter uses a certificate and keystore to authenticate the file transfer. The SFTP adapter achieves secure transfer by encrypting sensitive information before transmitting it on the network. It uses the SSH protocol to transfer files.
Note
If you want to dynamically override the configuration of the adapter, you can set the following header before
calling the SFTP adapter:
● CamelFileName
Overrides the existing file and directory name that is set directly in the endpoint.
This header can be used to dynamically change the name of the file and directory to be called.
The following examples show the header CamelFileName, read via XPath from the payload, or set using an
expression:
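As an illustrative sketch (not SAP's original sample), a dynamic file name could be derived from the payload with an XPath-style lookup; the payload structure and naming scheme here are assumptions:

```python
import xml.etree.ElementTree as ET

# Derive a value for the CamelFileName header from the message payload
# (element names and the file naming pattern are illustrative).
payload = "<order><id>4711</id></order>"
order_id = ET.fromstring(payload).findtext("id")
camel_file_name = f"order_{order_id}.xml"
print(camel_file_name)  # → order_4711.xml
```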
CI-PI currently supports the following ciphers for SSH (SFTP) communication: blowfish-cbc, 3des-cbc, aes128-cbc, aes192-cbc, aes256-cbc, aes128-ctr, aes192-ctr, aes256-ctr, 3des-ctr, arcfour, arcfour128, arcfour256.
Caution
The ciphers listed above can change in the future. New ciphers can be added and existing ones can be
removed in case of security weaknesses. In such cases, you will have to change the ciphers on the SFTP
server and reconfigure the integration flows that contain SFTP adapter. SAP will inform customers, if
necessary.
Caution
If you select the Run Once option in the Scheduler, you see messages triggered from all the integration flows with this setting after a software update. After the latest software is installed on a cluster, the cluster is restarted. This results in the integration flows being deployed again, and you see messages from these integration flows with the Run Once setting.
Related Information
The SFTP sender adapter connects an SAP Cloud Platform tenant to a remote system using the SSH File
Transfer protocol to read files from the system. SSH File Transfer protocol is also referred to as Secure File
Transfer protocol (or SFTP).
If you have configured a sender SFTP adapter, message processing is performed as follows at runtime: The
tenant sends a request to an SFTP server (think of this as the sender system), but the data flow is in the
opposite direction, from the SFTP server to the tenant. In other words, the tenant reads files from the SFTP
server (a process that is also referred to as polling).
Note
In the following cases certain features might not be available for your current integration flow:
● A feature for a particular adapter or step was released after you created the corresponding shape in
your integration flow.
● You are using a product profile other than the one expected.
More information: Adapter and Integration Flow Step Versions [page 405]
Once you have created a sender channel and selected the SFTP sender adapter, you can configure the
following attributes. See Integration Flow Editor for SAP Cloud Platform Integration [page 495].
Select the General tab and provide values in the fields as follows.
General
Parameter Description
Select the Source tab and provide values in the fields as follows.
Parameters Description
Directory Use the relative path to read the file from a directory, for example, <dir>/<subdir>.
Note
If you do not enter a file name and the parameter remains blank, all the files in the
specified directory are read.
Note
Usage of file name pattern:
Expressions, such as ab*, a.*, *a*, ?b, and so on, are supported.
Examples:
If you specify file*.txt as the File Name, the following files are polled by the adapter: file1.txt, file2.txt, as well as file.txt and file1234.txt, and so on.
If you specify file?.txt as the File Name, the following files are polled by the adapter: file1.txt, file2.txt, and so on, but not the files file.txt or file1234.txt.
Although you can configure this feature, it is not supported when using the corresponding integration content with the SAP Process Orchestration (SAP PO) runtime in releases lower than SAP PO 7.5 SP5.
Caution
Files with file names longer than 100 characters will be processed with the following limitations:
● If two files with names longer than 100 characters are available for processing, only one of these files will be processed at a time. This means that both files will be processed, but not in parallel. This is also the case if two runtime nodes are available. If the node fails multiple times while processing a file with a file name longer than 100 characters, none of the files sharing the first 100 characters with that file can be processed without manual intervention from the administrator.
● The option Keep File and Mark as Processed in Idempotent Repository (for sender channels under Processing) will not work for these files.
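The file name patterns described above behave like shell globs; Python's fnmatch module mirrors the documented examples (a sketch for illustration, not the adapter's matcher):

```python
import fnmatch

# "*" matches any run of characters; "?" matches exactly one character,
# matching the File Name pattern examples above.
files = ["file.txt", "file1.txt", "file2.txt", "file1234.txt"]
print(fnmatch.filter(files, "file*.txt"))  # all four names match
print(fnmatch.filter(files, "file?.txt"))  # only file1.txt and file2.txt
```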
Address Host name or IP address of the SFTP server and an optional port, for example,
wdfd00213123:22.
Proxy Type The type of proxy that you are using to connect to the target system.
For more information on how to use the On-Premise option to connect to an on-premise SFTP server, check out the SAP Community blog Cloud Integration – How to Connect to an On-Premise sftp server via Cloud Connector .
Location ID (only if On-Premise is selected for Proxy Type) To connect to an SAP Cloud Connector instance associated with your account, enter the location ID that you defined for this instance in the destination configuration on the cloud side.
Authentication Select how the SFTP server authenticates the calling component:
● User Name/Password
The SFTP server authenticates the calling component based on the user name and password. To make this configuration setting work, you need to define the user name and password in a User Credential artifact and deploy the artifact on the tenant.
● Public Key
The SFTP server authenticates the calling component based on a public key.
Credential Name Name of the User Credential artifact that contains the user name and password.
(only if you have selected User Name/Password for Authentication)
(only if you have selected Public Key for Authentication) Make sure that the user name contains no other characters than A-z, 0-9, _ (underscore), - (hyphen), / (slash), ? (question mark), @ (at), ! (exclamation mark), $ (dollar sign), ' (apostrophe), ( ) (brackets), * (asterisk), + (plus sign), , (comma), ; (semicolon), = (equality sign), . (dot), or ~ (tilde). Otherwise, an attempt for anonymous login is made, which results in an error.
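The allowed-character rule above can be checked up front with a small validation sketch (illustrative only; the regular expression encodes the documented list, with A-z taken as the ASCII letters):

```python
import re

# Validate an SFTP user name against the characters the adapter accepts;
# anything outside this set risks an anonymous login attempt and an error.
ALLOWED = re.compile(r"^[A-Za-z0-9_\-/?@!$'()*+,;=.~]+$")

print(bool(ALLOWED.match("sftp_user-01")))  # True
print(bool(ALLOWED.match("user name")))     # False: space not allowed
```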
Timeout (in ms) Maximum time to wait for the SFTP server to be contacted while establishing the connection or performing a read operation.
The timeout should be more than 0, but less than five minutes.
Maximum Reconnect Attempts Maximum number of attempts allowed to reconnect to the SFTP server.
Default value: 3
Reconnect Delay (in ms) How long the system waits before attempting to reconnect to the SFTP server.
Automatically Disconnect Disconnect from the SFTP server after each message processing.
Select the Processing tab and provide values in the fields as follows.
Parameters Description
Read Lock Strategy Prevents files that are in the process of being written from being read from the SFTP server. The endpoint waits until it has an exclusive read lock on a file before reading it.
● None: Does not use a read lock, which means that the endpoint can immediately read the file. None is the simplest option if the SFTP server guarantees that a file only becomes visible on the server once it is completely written.
● Rename: Renames the file before reading. The Rename option allows clients to rename files on the SFTP server.
● Content Change: Monitors changes in the file length/modification timestamp to determine if the write operation on the file is complete and the file is ready to be read. The Content Change option waits for at least one second until there are no more file changes. Therefore, if you select this option, files cannot be read as quickly as with the other two options.
● Done File Expected: Uses a specific file to signal that the file to be processed is ready for consumption.
If you have selected this option, enter the name of the done file. The done file signals that the file to be processed is ready for consumption. This file must be in the same folder as the file to be processed. Placeholders are allowed. Default: ${file:name}.done.
Sorting Select the type of sorting to use to poll files from the SFTP
server:
Lock Timeout (in min) Specify how long to wait before trying to process the file
again in the event of a cluster outage. If it takes a very long
time to process the scenario, you may need to increase the
timeout to avoid parallel processing of the same file. This
value should be higher than the processing time required for
the number of messages specified in Max. Messages per Poll.
Default: 15
Change Directories Stepwise Select this option to change directory levels one at a time.
Include Subdirectories Selecting this option allows you to look for files in all the subdirectories of the directory.
Post-Processing Allows you to specify how files are to be handled after processing.
You can select one of the following options from the dropdown list:
Note
If you specify an absolute file path, it may occur
that the file cannot be stored correctly at runtime.
Idempotent Repository You can select one of the following idempotent repository options:
(only available if you have selected Keep File and Mark as Processed in Idempotent Repository for Post-Processing)
● In Memory: Keeps the file names in memory. Files are read again from the SFTP server when the runtime node is restarted. It is not recommended to use the In Memory option if multiple runtime nodes are used. In this case, the other nodes would pick the file and process it because the memory is specific to the runtime node.
● Database (default): Stores the file names in a database to synchronize between multiple worker nodes and to prevent the files from being read again when the runtime node is restarted. File name entries are deleted by default after 90 days.
Note
The idempotent repository uses the username,
host name, and file name as key values to identify
files uniquely across integration flows of a tenant.
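The key-based deduplication described in the note above can be sketched as follows (an illustrative sketch of the concept, not the actual repository implementation):

```python
# The idempotent repository identifies a file by (user, host, file name),
# so a file already seen under that key is not processed again.
processed = set()

def should_process(user, host, file_name):
    key = (user, host, file_name)
    if key in processed:
        return False
    processed.add(key)
    return True

print(should_process("it_user", "sftp.example.com", "orders.csv"))  # True
print(should_process("it_user", "sftp.example.com", "orders.csv"))  # False
```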
Retry Threshold for Alerting If the number of attempts to retry polling of a message from the SFTP server exceeds this threshold value, an alert is raised. The default value 0 indicates that no alert is raised.
Note
If two or more sender channels are configured with the SFTP connector, the value for the Retry Threshold for Alerting parameter should be the same.
Select the Advanced tab and provide values in the fields as follows.
Field Description
Buffer Size Write file content using the specified buffer size (in bytes).
Flatten File Names Flatten the file path by removing the directory levels so that only the file names are considered and they are written under a single directory.
Max. Messages per Poll (for sender channel only) Maximum number of messages to gather in each poll. Consider how long it will take to process this number of messages, and make sure that you set a higher value for Lock Timeout (in min).
Note
If you are using the sender SFTP adapter in combination with an Aggregator step and you expect a high message load, consider the following recommendation:
Set the value for Max. Messages per Poll to a small number larger than 0 (for example, 20). This ensures proper logging of the message processing status at runtime.
Prevent Directory Traversal If the file contains any backward path traversals such as \..\ or /../.., this carries a potential risk of directory traversal. In such a case, message processing is stopped with an error. The unique message ID is logged in the message processing log.
Note
We recommend that you specify the Directory and File
Name fields to avoid any security risks. If you provide
these fields, the header is not considered.
Select the Scheduler tab and provide values in the fields as follows.
Note
SFTP polling is supported in the following way: The same file can be polled by multiple endpoints
configured to use the SFTP channel. This means that you can now deploy an integration flow with a
configured SFTP channel on multiple runtime nodes (which might be necessary to meet failover
requirements) without the risk of creating duplicates by polling the same file multiple times. Note that to
enable the new option, integration flows (configured to use SFTP channels) that were developed prior to
the introduction of this feature have to be regenerated.
https://blogs.sap.com/2018/11/16/cloud-integration-how-to-connect-to-an-on-premise-sftp-server-via-cloud-connector/
The SFTP receiver adapter connects an SAP Cloud Platform tenant to a remote system using the SSH File
Transfer protocol to write files to the system. SSH File Transfer protocol is also referred to as Secure File
Transfer protocol (or SFTP).
Note
In the following cases certain features might not be available for your current integration flow:
● A feature for a particular adapter or step was released after you created the corresponding shape in
your integration flow.
More information: Adapter and Integration Flow Step Versions [page 405]
Once you have created a receiver channel and selected the SFTP receiver adapter, you can configure the
following attributes. See Integration Flow Editor for SAP Cloud Platform Integration [page 495].
Select the General tab and provide values in the fields as follows.
General
Parameter Description
Select the Target tab and provide values in the fields as follows.
Parameters Description
Directory Use the relative path to write the file to a directory, for example <dir>/<subdir>.
Note
If you do not enter a file name and the parameter remains blank, the content of the CamelFileName header is used as the file name. If this header is not specified, the Exchange ID is used as the file name.
Expressions, such as ab*, a.*, *a*, and so on, are not supported.
myfile20151201170800.xml
Note
Be aware of the following behavior if you have configured the file name dynamically: If you have selected the Append Timestamp option, the timestamp overrides the file name defined dynamically via the header (CamelFileName).
Caution
If files are processed in very quick succession, the Append Timestamp option might not guarantee unique file names.
Proxy Type The type of proxy that you are using to connect to the target
system.
Location ID (only if On-Premise is selected for Proxy Type) To connect to a cloud connector instance associated with your account, enter the location ID that you defined for this instance in the destination configuration on the cloud side.
● User Name/Password
SFTP server authenticates the calling component based on the user name and password. To make this configuration setting work, you need to define the user name and password in a User Credential artifact and deploy the artifact on the tenant.
● Public Key
SFTP server authenticates the calling component based
on a public key.
Credential Name Name of the User Credential artifact that contains the user
name and password.
(only if you have selected User Name/Password for
Authentication)
(only if you have selected Public Key for Authentication) Make sure that the user name contains no characters other than A-z, 0-9, _ (underscore), - (hyphen), / (slash), ? (question mark), @ (at), ! (exclamation mark), $ (dollar sign), ' (apostrophe), ( ) (brackets), * (asterisk), + (plus sign), , (comma), ; (semicolon), = (equals sign), . (dot), or ~ (tilde). Otherwise, an anonymous login attempt is made, which results in an error.
Timeout (in ms) Maximum time to wait for the SFTP server to be contacted while establishing a connection or performing a read operation.
The timeout should be more than 0, but less than five minutes.
Default value: 3
Reconnect Delay (in ms) How long the system waits before attempting to reconnect
to the SFTP server.
Automatically Disconnect Disconnect from the SFTP server after each message processing.
Select the Processing tab and provide values in the fields as follows.
Processing
Parameters Description
Change Directories Stepwise Select this option to change directory levels one at a time.
Handling for Existing Files Define how existing files should be treated.
Select the Advanced tab and provide values in the fields as follows.
Field Description
Buffer Size Write file content using the specified buffer size (in bytes).
Flatten File Names Flatten the file path by removing the directory levels so that only the file names are considered and they are written under a single directory.
Max. Messages per Poll (for sender channel only) Maximum number of messages to gather in each poll. Consider how long it will take to process this number of messages, and make sure that you set a higher value for Lock Timeout (in min).
Note
If you are using the sender SFTP adapter in combination with an Aggregator step and you expect a high message load, consider the following recommendation:
Set the value for Max. Messages per Poll to a small number larger than 0 (for example, 20). This ensures proper logging of the message processing status at runtime.
Prevent Directory Traversal If the file name contains any backward path traversals such as \..\ or /../.. , this carries a potential risk of directory traversal. In such a case, message processing is stopped with an error, and the unique message ID is logged in the message processing log.
Note
We recommend that you specify the Directory and File
Name fields to avoid any security risks. If you provide
these fields, the header is not considered.
Note
SFTP polling is supported in the following way: The same file can be polled by multiple endpoints
configured to use the SFTP channel. This means that you can now deploy an integration flow with a
configured SFTP channel on multiple runtime nodes (which might be necessary to meet failover
requirements) without the risk of creating duplicates by polling the same file multiple times. Note that to
enable the new option, integration flows (configured to use SFTP channels) that were developed prior to
the introduction of this feature have to be regenerated.
Related Information
https://blogs.sap.com/2018/11/16/cloud-integration-how-to-connect-to-an-on-premise-sftp-server-via-cloud-connector/
You use this procedure to configure a communication channel with the SOAP (SAP RM) adapter.
Note
In the following cases certain features might not be available for your current integration flow:
● A feature for a particular adapter or step was released after you created the corresponding shape in
your integration flow.
● You are using a product profile other than the one expected.
More information: Adapter and Integration Flow Step Versions [page 405]
The SOAP (SAP RM) Sender Adapter exchanges messages with a sender system based on the SOAP
communication protocol and SAP Reliable Messaging (SAP RM) as the message protocol. SAP RM is a
simplified communication protocol for asynchronous Web service communication that does not require the
use of Web Service Reliable Messaging standards. A size limit for the inbound message can be configured for
the sender adapter.
You have the option to set SOAP headers using Groovy script (for example, using the Script step).
● SapAuthenticatedUserName
Contains the user name of the client that calls the integration flow.
If the sender channel is configured to use client certificate authentication, no such header is set (as it is not
available in this case).
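Reading this header in a Script step could look like the following minimal Groovy sketch. It assumes the standard SAP Cloud Platform Integration script signature (com.sap.gateway.ip.core.customdev.util.Message); the property name callingUser and the fallback value are made up for this example.

```groovy
import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    // SapAuthenticatedUserName is set by the sender adapter for basic
    // authentication; it is absent for client certificate authentication.
    def user = message.getHeaders().get("SapAuthenticatedUserName")
    // "callingUser" is an arbitrary property name chosen for this example.
    message.setProperty("callingUser", user ?: "anonymous")
    return message
}
```

The property can then be referenced later in the integration flow, for example as ${property.callingUser} in a Content Modifier.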
Once you have created a sender channel and selected the SOAP (SAP RM) sender adapter, you can configure
the following attributes. See Integration Flow Editor for SAP Cloud Platform Integration [page 495].
Select the Connection tab and provide values in the fields as follows.
Parameters and Values of Sender SOAP (SAP RM) Adapter - Connection Details
Parameters Description
Address Relative endpoint address at which the ESB listens to the incoming requests, for example, /HCM/GetEmployeeDetails.
Note
When you specify the endpoint address /path, a sender can also call the integration flow through the endpoint address /path/<any string> (for example, /path/test/).
Be aware of the following implication: If you additionally deploy an integration flow with endpoint address /path/test/, a sender using the /path/test endpoint address will now call the newly deployed integration flow with the endpoint address /path/test/. If you then undeploy the integration flow with endpoint address /path/test, the sender again calls the integration flow with endpoint address /path (original behavior). Therefore, be careful when reusing paths of services. It is better to use completely separate endpoints for services.
URL to WSDL URL to the WSDL defining the WS provider endpoint (of the receiver). You can specify the WSDL by selecting a source to browse for a WSDL either from an On-Premise ES Repository or your local workspace.
In the Resources view, you can upload an individual WSDL file or an archive file (file ending with .zip) that contains multiple WSDLs or XSDs, or both. For example, you can upload a WSDL that contains an imported XSD referenced by an xsd:import statement. This means that if you want to upload a WSDL and dependent resources, you need to add the parent file along with its dependencies in a single archive (.zip file).
You can download the WSDL by using the Integration Operations user interface (in the Properties view, Services tab, under the integration flow-specific endpoint). For newly deployed integration flows, the WSDL that is generated by the download corresponds to the endpoint configuration in the integration flow.
The WSDL download does not work for WSDLs with external references because these WSDLs cannot be parsed.
Processing Settings This feature corresponds to an older version of this adapter. It can be shown either because you have selected a product profile other than SAP Cloud Platform Integration, or (in case you have selected the SAP Cloud Platform Integration product profile) because you continue editing an integration flow that has already existed for a certain time. If you still want to use this feature, you have the following options:
When you use the up-to-date adapter version, the processing setting Robust is implicitly activated.
Note
Note the following:
○ You can also type in a role name. This has the same result as selecting the role from the value help: Whether the inbound request is authenticated depends on the correct user-to-role assignment defined in SAP Cloud Platform Cockpit.
○ When you externalize the user role, the value help for roles is offered in the integration flow configuration as well.
○ If you have selected a product profile for SAP Process Orchestration, the value help will only show the default role ESBMessaging.send.
Select the Conditions tab and provide values in the fields as follows.
Conditions
Parameter Description
Maximum Message Size This parameter allows you to configure a maximum size for inbound messages (the smallest value for a size limit is 1 MB). All inbound messages that exceed the specified size (per integration flow and on the runtime node where the integration flow is deployed) are blocked.
● Body Size
● Attachment Size
Note
For Exactly-Once handling, the sender SOAP (SAP RM) adapter saves the protocol-specific message ID in the header SapMessageIdEx. If this header is set, the SOAP (SAP RM) receiver uses the content of this header as the message ID for outbound communication. Usually, this is the desired behavior and enables the receiver to identify any duplicates. However, if the sender system is also the receiver system, or several variants of the message are sent to the same system (for example, in an external call or multicast), the receiver system will incorrectly identify these messages as duplicates. In this case, the header SapMessageIdEx must be deleted (for example, using a script) or overwritten with a newly generated message ID. This deactivates Exactly-Once processing (that is, duplicates are no longer recognized by the protocol).
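A Script step that overwrites the header, as described above, could look like the following Groovy sketch (assuming the standard SAP Cloud Platform Integration script signature; whether to remove or overwrite depends on your scenario):

```groovy
import com.sap.gateway.ip.core.customdev.util.Message
import java.util.UUID

def Message processData(Message message) {
    // Overwrite the protocol-specific message ID so the receiver does not
    // wrongly detect the outbound message as a duplicate.
    // Alternatively, delete it: message.getHeaders().remove("SapMessageIdEx")
    message.setHeader("SapMessageIdEx", UUID.randomUUID().toString())
    return message
}
```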
If you want to set SOAP headers via the Camel header, the following table shows which Camel header corresponds to which SOAP header.
SOAP Header Camel Header
QualityOfService SapPlainSoapQoS
ExactlyOnce ExactlyOnce
ExactlyOnceInOrder ExactlyOnceInOrder
QueueId SapPlainSoapQueueId
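For example, the quality of service and queue ID could be set in a Script step as follows (a sketch assuming the standard script signature; the queue name QUEUE_A is a placeholder):

```groovy
import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    // Request in-order exactly-once processing (see the table above).
    message.setHeader("SapPlainSoapQoS", "ExactlyOnceInOrder")
    // Placeholder queue name for this example.
    message.setHeader("SapPlainSoapQueueId", "QUEUE_A")
    return message
}
```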
Related Information
Exchanges messages with a receiver system based on the SOAP communication protocol and SAP Reliable
Messaging (SAP RM) as the message protocol. SAP RM is a simplified communication protocol for
asynchronous Web service communication that does not require the use of Web Service Reliable Messaging
standards.
You have the option to set SOAP headers using Groovy script (for example, using the Script step).
● SOAPAction Header
This header is part of the Web service specification.
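Setting this header in a Script step could look like the following Groovy sketch (assuming the standard script signature; the action URI is a made-up placeholder and must match what the receiver service expects):

```groovy
import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    // Hypothetical SOAP action URI; replace with the receiver's action.
    message.setHeader("SOAPAction", "http://example.com/GetEmployeeDetails")
    return message
}
```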
Once you have created a receiver channel and selected the SOAP (SAP RM) receiver adapter, you can configure
the following attributes. See Integration Flow Editor for SAP Cloud Platform Integration [page 495].
Select the Connection tab and provide values in the fields as follows.
Address Endpoint address at which the ESB posts the outgoing message, for example, http://<host>:<port>/payment.
You can dynamically configure the address field of the SOAP (SAP RM) adapter. When you specify the address field of the adapter as ${header.a} or ${property.a}, at runtime the value of header a or exchange property a (as contained in the incoming message) is written into the Camel header CamelDestinationOverrideUrl and is used at runtime as the address to which the message is sent.
Even if the CamelDestinationOverrideUrl header has been set by another process step (for example, a Content Modifier), its value is overwritten.
The endpoint URL that is actually used at runtime is displayed in the message processing log (MPL) in the message monitoring application (MPL property RealDestinationUrl). Note that you can manually configure the endpoint URL using the Address attribute of the adapter. However, there are several ways to dynamically override the value of this attribute (for example, by using the Camel header CamelHttpUri).
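Setting the override header in a Script step could look like this minimal Groovy sketch (standard script signature assumed; the target URL is a placeholder and would typically be derived from the message content):

```groovy
import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    // Placeholder target URL; overrides the statically configured Address.
    message.setHeader("CamelDestinationOverrideUrl",
                      "https://example.com/payment")
    return message
}
```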
Proxy Type The type of proxy that you are using to connect to the target system.
Location ID (only if On-Premise is selected for Proxy Type) To connect to a cloud connector instance associated with your account, enter the location ID that you defined for this instance in the destination configuration on the cloud side. You can also enter ${header.headername} or ${property.propertyname} to dynamically read the value from a header or a property.
URL to WSDL URL to the WSDL defining the WS provider endpoint (of the receiver). You can specify the WSDL by selecting a source to browse for a WSDL either from an On-Premise ES Repository or your local workspace.
In the Resources view, you can upload an individual WSDL file or an archive file (file ending with .zip) that contains multiple WSDLs or XSDs, or both. For example, you can upload a WSDL that contains an imported XSD referenced by an xsd:import statement. This means that if you want to upload a WSDL and dependent resources, you need to add the parent file along with its dependencies in a single archive (.zip file).
Endpoint Name of the selected port of a selected service (that you provide in the Service Name
field) contained in the referenced WSDL.
Note
Using the same port names across receivers is not supported in older versions of the
receiver adapters. To use the same port names, you need to create a copy of the
WSDL and use it.
Operation Name Name of the operation of the selected service (that you provide in the Service Name field)
contained in the referenced WSDL.
Private Key Alias Allows you to enter the private key alias name that gets the private key from the keystore
and authenticates you to the receiver in an HTTPS communication.
Note
If you have selected the Connect using Basic Authentication option, this field is not
visible.
You can dynamically configure the Private Key Alias property by specifying either a header or a property name in one of the following ways: ${header.headername} or ${property.propertyname}.
Be aware that in some cases this feature can have a negative impact on performance.
Compress Message Enables the WS endpoint to send compressed request messages to the WS provider and
to indicate to the WS provider that it can handle compressed response messages.
Allow Chunking Used for enabling HTTP chunking of data while sending messages.
Return HTTP Response Code as Header When selected, writes the HTTP response code received in the response message from the called receiver system into the header CamelHttpResponseCode.
Note
You can use this header, for example, to analyze the message processing run (when
level Trace has been enabled for monitoring). Furthermore, you can use this header to
define error handling steps after the integration flow has called the SOAP (SAP RM)
receiver.
You cannot use the header to change the return code since the return code is defined
in the adapter and cannot be changed.
Clean Up Request Headers Select this option to clean up the adapter-specific headers after the receiver call.
Request Timeout Specifies the time (in milliseconds) that the client will wait for a response before the connection is interrupted.
Note that the timeout setting has no influence on the Transmission Control Protocol (TCP) timeout if the receiver or any additional component interconnected between the Cloud Integration tenant and the receiver has a lower timeout. For example, consider that you have configured a receiver channel timeout of 10 minutes and there is another component involved with a timeout of 5 minutes. If nothing is transferred for a period of time, the connection will be closed after the fifth minute. In HTTP communication spanning multiple components (for example, from a sender, through the load balancer, to a Cloud Integration tenant, and from there to a receiver), the actual timeout period is influenced by the timeout settings of each of the individual components that are interconnected between the sender and receiver (to be more exact, of those components that can control the TCP session). The component or device with the lowest number set for the idle session timeout determines the timeout that will be used.
● Basic
The tenant authenticates itself against the receiver using user credentials (user name and password).
It is a prerequisite that user credentials are specified in a Basic Authentication artifact and deployed on the related tenant.
● Client Certificate
The tenant authenticates itself against the receiver using a client certificate.
It is a prerequisite that the required key pair is installed and added to a keystore. This keystore has to be deployed on the related tenant. The receiver side has to be configured appropriately.
● None
● Principal Propagation
The tenant authenticates itself against the receiver by forwarding the principal of the inbound user to the cloud connector, and from there to the back end of the relevant on-premise system.
Note
This authentication method can only be used with the following sender adapters:
HTTP, AS2, SOAP, IDOC
Note
Please note that the token for principal propagation expires after 30 minutes.
If it takes longer than 30 minutes to process the data between the sender and receiver channel, the token for principal propagation expires, which leads to errors in message processing.
Note
In the following cases certain features might not be available for your current integration flow:
○ A feature for a particular adapter or step was released after you created the corresponding shape in your integration flow.
○ You are using a product profile other than the one expected.
More information: Adapter and Integration Flow Step Versions [page 405]
You can dynamically configure the Credential Name field of the adapter by using a Simple Expression (see http://camel.apache.org/simple.html ). For example, you can dynamically define the Credential Name of the receiver adapter by referencing a message header ${header.MyCredentialName} or a message property ${property.MyCredentialName}.
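The referenced header could, for instance, be filled earlier in the flow by a Script step like the following sketch (standard script signature assumed; the header name MyCredentialName matches the Simple expression above, and the credential alias is a placeholder that must match a deployed User Credential artifact):

```groovy
import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    // Placeholder alias of a deployed User Credential artifact.
    message.setHeader("MyCredentialName", "MyDeployedUserCredential")
    return message
}
```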
With this adapter, the tenant can exchange messages with another system that supports Simple Object Access
Protocol (SOAP) 1.1 or SOAP 1.2.
Related Information
The SOAP (SOAP 1.x) sender adapter enables an SAP Cloud Platform tenant to exchange messages with a sender system that supports Simple Object Access Protocol (SOAP) 1.1 or SOAP 1.2.
Remember
There are currently certain limitations when working in the Cloud Foundry environment. For more
information on the limitations, refer to SAP Note 2752867 .
Note
In the following cases certain features might not be available for your current integration flow:
● A feature for a particular adapter or step was released after you created the corresponding shape in
your integration flow.
● You are using a product profile other than the one expected.
More information: Adapter and Integration Flow Step Versions [page 405]
You have the option to set SOAP headers using Groovy script (for example, using the Script step).
● SapAuthenticatedUserName
Contains the user name of the client that calls the integration flow.
If the sender channel is configured to use client certificate authentication, no such header is set (as it is not
available in this case).
Select the General tab and provide values in the fields as follows.
General
Parameter Description
Select the Connection tab and provide values in the fields as follows.
Connection
Parameter Description
Address Relative endpoint address on which the integration runtime expects incoming requests, for example, /HCM/GetEmployeeDetails.
Note
When you specify the endpoint address /path, a sender can also call the integration flow through the endpoint address /path/<any string> (for example, /path/test/).
Be aware of the following implication: If you additionally deploy an integration flow with endpoint address /path/test/, a sender using the /path/test endpoint address will now call the newly deployed integration flow with the endpoint address /path/test/. If you then undeploy the integration flow with endpoint address /path/test, the sender again calls the integration flow with endpoint address /path (original behavior). Therefore, be careful when reusing paths of services. It is better to use completely separate endpoints for services.
● Manual: You configure the service behavior manually using the parameters shown below.
● WSDL: The service behavior is defined via a WSDL configuration.
Use WS-Addressing (only if Service Definition: Manual is selected) Select this option to accept addressing information from message information headers during runtime.
Message Exchange Pattern (only if Service Definition: Manual is selected) Specifies the kind of messages that are processed by the adapter.
● Request-Reply: The adapter processes both request and response.
Tip
When using this option, the response code can accidentally be overwritten by a called receiver. Assume, for example, that the integration flow contains a SOAP sender adapter (with a Request-Reply pattern) and an HTTP receiver adapter. Let's furthermore assume that the HTTP receiver returns an HTTP response code 202 (as it has accepted the call). In this case, the SOAP sender adapter also returns HTTP response code 202 in the reply instead of 200 (OK). To overcome this situation, you have to remove the header CamelHttpResponseCode before the message reply is sent back to the sender.
● One-Way
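The header removal mentioned in the tip can be done, for example, in a Script step placed before the end of the flow (a minimal Groovy sketch assuming the standard script signature):

```groovy
import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    // Drop the receiver's response code so the SOAP sender adapter
    // replies with 200 (OK) instead of, for example, 202.
    message.getHeaders().remove("CamelHttpResponseCode")
    return message
}
```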
URL to WSDL (only if WSDL is selected as Service Definition) URL to the WSDL defining the WS provider endpoint (of the receiver). You can specify the WSDL by selecting a source to browse for a WSDL either from an On-Premise ES Repository or your local workspace.
In the Resources view, you can upload an individual WSDL file or an archive file (file ending with .zip) that contains multiple WSDLs or XSDs, or both. For example, you can upload a WSDL that contains an imported XSD referenced by an xsd:import statement. This means that if you want to upload a WSDL and dependent resources, you need to add the parent file along with its dependencies in a single archive (.zip file).
Note
● If you specify a WSDL, you also have to specify the name of the selected service and the name of the port selected for this service. These fields must have a namespace prefix.
Expected format: <namespace>:<service_name>
Example: p1:MyService
● Don't use WSDLs with blanks.
We recommend that you don't use blanks in WSDL names or directories, as this can lead to runtime issues.
You can download the WSDL by using the Integration Operations user interface (in the Properties view, Services tab, under the integration flow-specific endpoint). For newly deployed integration flows, the WSDL that is generated by the download corresponds to the endpoint configuration in the integration flow.
The WSDL download does not work for WSDLs with external references because these WSDLs can't be parsed.
For more information on how to work with WSDL resources, refer to the following blog:
Cloud Integration – Usage of WSDLs in the SOAP Adapter
Endpoint (only if Service Definition: WSDL is selected) Name of the selected endpoint of a selected service (that you provide in the Service Name field) contained in the referenced WSDL.
Processing Settings (only if one of the following options is selected)
● WS Standard: Message is executed with the WS standard processing mechanism. Errors are not returned to the consumer.
● Robust: The WSDL provider invokes the service synchronously and processing errors are returned to the consumer.
Depending on your choice, you can also specify one of the following properties:
Note
Note the following:
○ You can also type in a role name. This has the same result as selecting the role from the value help: Whether the inbound request is authenticated depends on the correct user-to-role assignment defined in SAP Cloud Platform Cockpit.
○ When you externalize the user role, the value help for roles is offered in the integration flow configuration as well.
○ If you have selected a product profile for SAP Process Orchestration, the value help will only show the default role ESBMessaging.send.
Select the WS-Security tab and provide values in the fields as follows.
See WS-Security Configuration for the Sender SOAP 1.x Adapter [page 86] for more information.
Select the Conditions tab and provide values in the fields as follows.
Conditions
Parameter Description
Maximum Message Size This parameter allows you to configure a maximum size for inbound messages (the smallest value for a size limit is 1 MB). All inbound messages that exceed the specified size (per integration flow and on the runtime node where the integration flow is deployed) are blocked.
● Body Size
● Attachment Size
Related Information
https://blogs.sap.com/2018/01/25/cloud-integration-soap-adapter-web-service-security/
https://blogs.sap.com/2018/01/24/cloud-integration-wss-between-cloud-integration-and-sap-po-soap-adapter/
WS-Security Configuration for the Sender SOAP 1.x Adapter [page 86]
You use a sender channel to configure how inbound messages are to be treated at the tenant’s side of the
communication.
● How the tenant verifies the payload of an incoming message (signed by the sender)
The sender SOAP 1.x adapter allows the following combination of message-level security options:
● Verifying a payload
● Verifying and decrypting a payload
For a detailed description of the SOAP adapter WS-Security parameters, check out Configure the SOAP (SOAP
1.x) Sender Adapter [page 655] (under WS-Security).
Note
With the following steps, you can easily modify and extend the basic integration flow that was introduced in
the Getting Started section of this documentation.
You can easily modify the integration flow described in the Getting Started section by adding a SOAP client as
sender.
The figure shows the integration flow model that you get as a result of this exercise.
1. The SOAP client (represented by the Sender shape) sends a Simple Object Access Protocol (SOAP) message to SAP Cloud Platform Integration through a SOAP (SOAP 1.x) sender channel.
The SOAP message contains a product identifier.
2. The Content Modifier creates a message header (which we also call productIdentifier) and writes the
actual value of the productIdentifier element into it. This header will be used in the subsequent step.
3. The Request Reply step passes the message to an external data source and retrieves data (about
products) from there.
The external data source (which is represented by the lower WebShop shape) supports the Open Data Protocol (OData). For our scenario, we use the ESPM WebShop, which is based on the Enterprise Sales and Procurement Model (ESPM) provided by SAP. The demo application can be accessed at the following address: https://espmrefapps.hana.ondemand.com/espm-cloud-web/webshop/
4. An OData receiver channel is used for the connection to the OData source. To query for exactly one
product (for the product identifier provided with the inbound message), the header that was created in the
preceding Content Modifier is used.
5. The OData service provides the details of one specific product, which is identified by the actual value of the
productIdentifier field (provided with the inbound SOAP message).
6. Finally, the result of the request is forwarded to an e-mail account using the Mail receiver adapter (the e-mail server is represented by the right Mail shape in the integration flow model).
Related Information
Create a SOAP channel to define how the sender calls the integration flow.
1. Click the Sender shape. The context icons for the Sender appear.
If you click the information icon, the version of the integration flow component is displayed.
Do not confuse the version of an individual integration flow component with the software version of
SAP Cloud Platform Integration. An integration flow component gets a new version each time a new
feature is added to it by SAP. Let's imagine a situation where you started modeling an integration flow
some time ago and now want to continue working on it. Let's assume that SAP has updated the
software in the meantime. A new version of an integration flow step or shape that you have used is now
available, containing a new feature. You can continue to use the old component version, but if you want
to use the new feature you need to update to the new version.
2. Click the arrow icon and drag and drop the cursor on the Start event.
The list of available adapter types is displayed in a dialog.
Tip
Selecting User Role does not mean that you are determining the usage of basic authentication.
User Role authorization only means that the permissions of the sender of the message are checked
based on roles (which are assigned to the user that is associated with the sender).
However, note that if you select this authorization option, you can also configure other inbound
authentication scenarios. You can, for example, configure client certificate authentication for the
sender and add an additional step (certificate-to-user mapping) to map the certificate to a user
whose permissions are checked (based on the roles assigned to this user).
For productive scenarios, we recommend using client certificate authentication with certificate-to-
user mapping. However, to simplify the setup of this integration flow, we propose that you choose
basic authentication - simply because it is much easier to configure the sender in this case.
You can select other roles for inbound authentication (if you have defined these roles for the runtime
node in SAP Cloud Platform Cockpit), but you don't use this option in this scenario.
You can use a SOAP client of your choice to send a SOAP message to SAP Cloud Platform Integration.
● The authentication is defined in such a way that the SOAP client is authorized to send a request to SAP
Cloud Platform Integration and process the integration flow.
In the SOAP adapter settings for User Role-based authorization, make sure that you specify the same
credentials for the user associated with the SOAP client as for the user who is authorized to process the
integration flow (in our example, this is the user who is assigned the role ESBMessaging.send).
● As the address for the SOAP request, enter the endpoint address that is displayed for the integration flow
artifact (in the Monitor section of the Web UI under Manage Integration Content).
● To define the message structure for our example integration flow, you can use a WSDL file with the content
from the info box below. Simply copy and paste the content into a text editor, save the file with
extension .wsdl, and import this WSDL file into your SOAP client (if this is supported).
Send the SOAP request with a dedicated value for the productIdentifier (for example, HT-1080).
Send another SOAP message with another productIdentifier (for example, HT-2001), and you receive details of
another product.
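If your SOAP client of choice is the command line, the request could, for instance, be sent with curl. The host, endpoint path, credentials, and request.xml file below are placeholders; use the endpoint address displayed under Manage Integration Content and the user assigned the ESBMessaging.send role.

```shell
# Placeholder values - replace with your tenant's endpoint and credentials.
curl -u myuser:mypassword \
  -H "Content-Type: text/xml;charset=UTF-8" \
  --data @request.xml \
  "https://<tenant-host>/cxf/<your-endpoint-path>"
```

The body of request.xml is the SOAP envelope containing the productIdentifier element (for example, with value HT-1080).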
Sample Code
The SOAP (SOAP 1.x) receiver adapter enables an SAP Cloud Platform tenant to exchange messages with a receiver system that supports Simple Object Access Protocol (SOAP) 1.1 or SOAP 1.2.
Remember
This component or some of its features might not be available in the Cloud Foundry environment. For more
information on the limitations, see SAP Note 2752867 .
Note
In the following cases certain features might not be available for your current integration flow:
● A feature for a particular adapter or step was released after you created the corresponding shape in
your integration flow.
● You are using a product profile other than the one expected.
More information: Adapter and Integration Flow Step Versions [page 405]
You have the option to set SOAP headers using Groovy script (for example, using the Script step).
● SOAPAction Header
This header is part of the Web service specification.
Caution
Note that messages that contain RPC-style bindings can only be processed by the SOAP receiver channel if
no WSDL file is provided (that is, if you leave the URL to WSDL field empty). We recommend that you use
A WSDL binding describes how the service is bound to a message protocol. For the processing of SOAP
messages, you can choose from the following binding types: Document/Literal, RPC/Literal, and RPC/Encoded.
RPC stands for Remote Procedure Call. For more information on these options and the meaning of literal
and encoded, see http://www.w3.org/TR/2001/NOTE-wsdl-20010315 .
Once you have created a receiver channel and selected the SOAP (SOAP 1.x) receiver Adapter, you can
configure the following attributes. See Integration Flow Editor for SAP Cloud Platform Integration [page 495].
Select the General tab and provide values in the fields as follows.
General
Parameter Description
Select the Connection tab and provide values in the fields as follows.
Connection
Parameter Description
Address Endpoint address at which the ESB Bus posts the outgoing message, for example, http://<host>:<port>/payment.
You can dynamically configure the address field of the SOAP (SOAP 1.x) adapter.
Also, if the CamelDestinationOverrideUrl header has been set by another process step
(for example, a Content Modifier), its value is overwritten.
The endpoint URL that is used at runtime is displayed in the message processing log
(MPL) in the message monitoring application (MPL property RealDestinationUrl).
Note that you can manually configure the endpoint URL using the Address attribute of the
adapter. However, there are several ways to dynamically override the value of this
attribute (for example, by using the Camel header CamelDestinationOverrideUrl).
Proxy Type The type of proxy that you are using to connect to the target system:
Note
If you select the On-Premise option, the following restrictions apply to other parameter values:
○ Do not use an HTTPS address for Address, as it leads to errors when performing consistency checks or during deployment.
○ Do not use the option Client Certificate for the Authentication parameter, as it leads to errors when performing consistency checks or during deployment.
Note
If you select the On-Premise option and use the SAP Cloud Connector to connect
to your on-premise system, the Address field of the adapter references a virtual
address, which has to be configured in the SAP Cloud Connector settings.
● If you select Manual, you can manually specify Proxy Host and Proxy Port (using the corresponding entry fields).
Furthermore, with the parameter URL to WSDL you can specify a Web Service Definition Language (WSDL) file defining the WS provider endpoint (of the receiver). You can specify the WSDL by either uploading a WSDL file from your computer (option Upload from File System) or by selecting an integration flow resource (which needs to be uploaded in advance to the Resources view of the integration flow).
This option is only available if you have chosen a Process Orchestration product profile.
Location ID (only available if you have selected On-Premise for Proxy Type) To connect to a Cloud Connector instance associated with your account, enter the location ID that you defined for this instance in the destination configuration on the cloud side. You can also enter ${header.headername} or ${property.propertyname} to dynamically read the value from a header or a property.
URL to WSDL URL to the WSDL defining the WS provider endpoint (of the receiver). You can specify the WSDL by selecting a source to browse for a WSDL either from an On-Premise ES Repository or your local workspace.
In the Resources view, you can upload an individual WSDL file or an archive file (file ending with .zip) that contains multiple WSDLs or XSDs, or both. For example, you can upload a WSDL that contains an imported XSD referenced by an xsd:import statement. This means that if you want to upload a WSDL and dependent resources, you need to add the parent file along with its dependencies in a single archive (.zip file).
Note
● If you specify a WSDL, you also have to specify the name of the selected service and the name of the port selected for this service. These fields must have a namespace prefix.
Expected format: <namespace>:<service_name>
Example: p1:MyService
● Don't use WSDLs with blanks: Avoid blanks in WSDL names or directories, as they can lead to runtime issues.
For more information on how to work with WSDL resources, see the following blog: Cloud
Integration – Usage of WSDLs in the SOAP Adapter
Service Name Name of the selected service contained in the referenced WSDL
Port Name Name of the selected port of a selected service (that you provide in the Service Name
field) contained in the referenced WSDL.
Note
Using the same port names across receivers isn't supported in older versions of the
receiver adapters. To use the same port names, you need to create a copy of the
WSDL and use it.
Operation Name Name of the operation of a selected service (that you provide in the Service Name field)
contained in the referenced WSDL.
Connect Without Client Authentication This feature corresponds to the Authentication setting None and is shown when you use an older version of this adapter. It is shown either because you have selected a product profile other than SAP Cloud Platform Integration or (if you have selected the SAP Cloud Platform Integration product profile) because you are editing an integration flow that has already existed for some time.
Select this option to connect the tenant anonymously to the receiver system.
Select this option if your server allows connections without authentication at the transport level.
Connect Using Basic Authentication This feature corresponds to the Authentication setting Basic and is shown when you use an older version of this adapter. It is shown either because you have selected a product profile other than SAP Cloud Platform Integration or (if you have selected the SAP Cloud Platform Integration product profile) because you are editing an integration flow that has already existed for some time.
Select this option to allow the tenant to connect to the receiver system using the deployed basic authentication credentials.
Credential Name: Enter the credential name of the username-password pair specified during the deployment of basic authentication credentials on the cluster.
● Basic
The tenant authenticates itself against the receiver using user credentials (user name and password).
It is a prerequisite that user credentials are specified in a User Credentials artifact and deployed on the related tenant. Enter the name of this artifact in the Credential Name field of the adapter.
● Client Certificate
The tenant authenticates itself against the receiver using a client certificate.
This option is only available if you have selected Internet for the Proxy Type parameter.
It is a prerequisite that the required key pair is installed and added to a keystore. This keystore has to be deployed on the related tenant. The receiver side has to be configured appropriately.
● None
● Principal Propagation
The tenant authenticates itself against the receiver by forwarding the principal of the inbound user to the cloud connector, and from there to the back end of the relevant on-premise system.
Note
This authentication method can only be used with the following sender adapters:
HTTP, SOAP, IDoc, AS2.
Note
The token for principal propagation expires after 30 minutes.
If it takes longer than 30 minutes to process the data between the sender and receiver channel, the token for principal propagation expires, which leads to errors in message processing.
Note
You can externalize all attributes related to the configuration of the authentication option. This includes the attributes with which you specify the authentication option as such, as well as all attributes with which you specify further security artifacts that are required for any configurable authentication option (Private Key Alias or Credential Name).
● Externalize all attributes related to the configuration of all options, for example, Authentication and Credential Name and Private Key Alias.
● Externalize only one of the following attributes: Private Key Alias or Credential Name.
Avoid incomplete externalization, for example, only externalizing the attribute for the Authentication parameter but not the related Credential Name parameter. In such cases, the integration flow configuration (based on the externalized parameters) cannot work properly.
The reason for this is the following: If you have externalized the Authentication parameter and only the Private Key Alias parameter (but not Credential Name), all authentication options in the integration flow configuration dialog (Basic, Client Certificate, and None) are selectable in a dropdown list. However, if you now select Basic from the dropdown list, no Credential Name can be configured.
Credential Name (only available if you have selected Basic for the Authentication parameter) Name of the User Credentials artifact that contains the credentials for basic authentication.
You can dynamically configure the Credential Name field of the adapter by using a Simple Expression (see http://camel.apache.org/simple.html). For example, you can dynamically define the Credential Name of the receiver adapter by referencing a message header ${header.MyCredentialName} or a message property ${property.MyCredentialName}.
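Such a header can be set earlier in the flow, for example in a Script step. A minimal Groovy sketch, assuming the header name MyCredentialName from the expression above and a placeholder artifact name:

```groovy
import com.sap.gateway.ip.core.customdev.util.Message

// Script step earlier in the flow: choose the credential artifact at runtime.
def Message processData(Message message) {
    // "ReceiverSystem_BasicAuth" is a placeholder; it must match the name of a
    // deployed User Credentials artifact on the tenant.
    message.setHeader("MyCredentialName", "ReceiverSystem_BasicAuth")
    return message
}
```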
Private Key Alias (only available if you have selected Client Certificate for the Authentication parameter) Specifies an alias to indicate a specific key pair to be used for the authentication step.
You can dynamically configure the Private Key Alias parameter by specifying either a header or a property name in one of the following ways: ${header.headername} or ${property.propertyname}.
Timeout (in ms) Specifies the time (in milliseconds) that the client waits for a response before the connection is interrupted.
Note that the timeout setting has no influence on the Transmission Control Protocol (TCP) timeout if the receiver or any additional component interconnected between the Cloud Integration tenant and the receiver has a lower timeout. For example, consider that you have configured a receiver channel timeout of 10 minutes and there is another component involved with a timeout of 5 minutes. If nothing is transferred for a period of time, the connection will be closed after the fifth minute. In HTTP communication spanning multiple components (for example, from a sender, through the load balancer, to a Cloud Integration tenant, and from there to a receiver), the actual timeout period is influenced by each of the timeout settings of the individual components that are interconnected between the sender and receiver (to be more exact, of those components that can control the TCP session). The component or device with the lowest number set for the idle session timeout will determine the timeout that will be used.
Compress Message Enables the WS endpoint to send compressed request messages to the WS Provider and
to indicate to the WS Provider that it can handle compressed response messages.
Allow Chunking Used for enabling HTTP chunking of data while sending messages.
Return HTTP Response Code as Header When selected, writes the HTTP response code received in the response message from the called receiver system into the header CamelHttpResponseCode.
Note
You can use this header, for example, to analyze the message processing run (when level Trace has been enabled for monitoring). Furthermore, you can use this header to define error handling steps after the integration flow has called the SOAP receiver.
Caution
It is recommended that you model the integration flow in such a way that the header CamelHttpResponseCode is deleted after it has been evaluated. The reason is that this header can have an impact on the communication with a sender system if one of the following sender adapters is used in the same integration flow: SOAP 1.x, XI, IDoc, SOAP SAP RM. In such a case, the value of the header CamelHttpResponseCode also determines the response code used in the connection with the sender system, which is in most cases not the desired behavior.
Furthermore, note that if the SOAP 1.x receiver channel uses a WSDL with a one-way operation, the header CamelHttpResponseCode is not set (even if the feature Return HTTP Response Code as Header is activated).
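The recommended cleanup can be done in a Script step after the receiver call. A minimal Groovy sketch, assuming the CPI Script step environment (the property name used for the stored code is a placeholder):

```groovy
import com.sap.gateway.ip.core.customdev.util.Message

// Script step after the SOAP receiver call: evaluate the response code,
// then remove the header so it cannot influence the sender connection.
def Message processData(Message message) {
    def code = message.getHeaders().get("CamelHttpResponseCode")
    if (code != null) {
        // Keep the evaluated value in a property for later steps if needed;
        // "lastSoapResponseCode" is a placeholder property name.
        message.setProperty("lastSoapResponseCode", code.toString())
    }
    // Delete the header after evaluation, as recommended in the Caution above.
    message.getHeaders().remove("CamelHttpResponseCode")
    return message
}
```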
Clean Up Request Headers Select this option to clean up the adapter-specific headers after the receiver call.
Select the WS-Security tab and provide values in the fields as follows.
See WS-Security Configuration for the Receiver SOAP 1.x Adapter [page 86] for more information.
Related Information
https://blogs.sap.com/2018/01/25/cloud-integration-soap-adapter-web-service-security/
https://blogs.sap.com/2018/01/24/cloud-integration-wss-between-cloud-integration-and-sap-po-soap-
adapter/
WS-Security Configuration for the Receiver SOAP 1.x Adapter [page 86]
With a receiver channel you configure the outbound communication at the tenant’s side of the communication.
● How the tenant signs the payload of a message (to be verified by the receiver)
● How the tenant encrypts the payload of a message (to be decrypted by the receiver)
The receiver SOAP 1.x adapter allows you to configure the following combinations of message security methods:
● Signing a payload
● Signing and encrypting a payload
Signing and encryption (and verifying and decryption) are based on a specific set of keys, as illustrated in the figures. Moreover, for the message exchange, specific communication rules apply, as agreed between the administrators of the Web service client and Web service provider (for example, whether certificates are to be sent with the message).
There are two options for how these security and communication settings can be specified:
For a detailed description of the SOAP adapter WS-Security parameters, check out Configure the SOAP (SOAP
1.x) Receiver Adapter [page 667] (under WS-Security).
The SuccessFactors adapter enables you to communicate with the SuccessFactors system. You use the OData
V2 message protocol to connect to the OData V2 Web services of the SuccessFactors system.
The OData V2 message protocol for the SuccessFactors adapter is available only in the receiver channel. For
more information, see Configure the SuccessFactors OData V2 Receiver Adapter [page 677].
Tip
● If your input payload contains nodes without data, the output will also contain empty strings. If you
want to avoid empty strings in the output, ensure that the input payload does not contain any empty
nodes.
● You can use headers and exchange properties in the adapter settings to dynamically pass values at
runtime. For more information, see Headers and Exchange Properties Provided by the Integration
Framework [page 900].
Whenever the SuccessFactors OData V2 adapter is used, the following headers with the specified values will be
sent to the SuccessFactors back end as HTTP request headers:
1. X-SF-Correlation-Id : MPL ID
2. X-SF-Client-Tenant-Id : Client/Tenant ID
3. X-SF-Process-Name : iFlow Name
4. X-Agent-Name : SAP Cloud Platform Integration
Related Information
Configure the SuccessFactors OData V2 receiver adapter by understanding the adapter parameters.
The SuccessFactors OData V2 receiver adapter supports externalization. To externalize the parameters of this adapter, choose Externalize and follow the steps mentioned in Externalize Parameters of an Integration Flow [page 489].
Remember
You must enable HTTP Session Reuse, either On Exchange level or On Integration Flow level.
For more information refer to Specify the Runtime Configuration [page 482].
Parameter Description
Connection
Parameter Description
Address URL of the SuccessFactors data center that you are connecting to. This field is populated based on what you select in SFSF Data Center. You can browse and select the SuccessFactors data center URL by using the Select option.
Proxy Type Type of proxy you want to use for connecting to the SuccessFactors OData V2 service.
● Internet
● On-Premise
● None
● Basic
● Client Certificate (only if you selected Internet for Proxy Type)
● Principal Propagation
● OAuth2 Client Credentials (only if you selected Internet for Proxy Type)
● OAuth2 SAML Bearer Assertion (only if you selected Internet for Proxy Type)
Credential Name Name of the credentials that you have deployed in Security Material (Operations view).
(only if Basic, OAuth2 Client Credentials, or OAuth2 SAML Bearer Assertion is selected for Authentication)
Private Key Alias Specifies an alias to indicate a specific key pair to be used
for the authentication step.
Processing
Parameter Description
● Complex types
● Collection of complex types
● Simple types
● Collection of simple types
● Void
Resource Path Path of the resource (entity) that you want to perform the operation on.
Note
Navigation entities in the resource path are not supported.
Fields The fields in the entity that you want to modify. You can add this using the Model Operation Wizard [page 681]. (Only for PUT and POST operations.)
Query Options Additional options that you want to add to the query, like $top, or how to order the results using orderby. You can add this using the Model Operation Wizard. (Only for GET operations.)
Custom Query Options Query options that are specific to the SuccessFactors OData V2 service, like purge.
Pagination Allows you to set the type of pagination for your query results.
Remember
For more information, see Pagination.
Note
It is recommended to leave the value Empty.
Retry on Failure This option enables you to mitigate intermittent network issues. By selecting this, you enable the integration flow to retry connecting to the SuccessFactors OData V2 service in case of network issues. The system retries the connection every 3 minutes, for a maximum of 5 times. The retry happens in the following scenarios:
● Upsert: For a 412 inner error code along with HTTP response code 200
● Query operations: Retry for the 502, 503, and 504 HTTP response codes, corresponding to Create, Update, and Delete operations respectively.
Remember
This option is not enabled for Content Enricher.
Timeout (in min) Maximum time the system waits for a response before terminating the connection.
HEADER DETAILS Request Headers: Provide the | (pipe) separated value list of HTTP request headers that have to be sent to the OData backend.
This adapter provides a wizard for modeling operations easily. It is recommended that you use this wizard to
ensure that the operation does not contain any errors. The wizard can also fetch the Externalized parameters
that are maintained under the Connection details of the OData V2 outbound adapter.
1. Connect to System: In this step, you provide the details required for connecting to the web service that you
are accessing.
2. Select Entity and Define Operation: In this step, you select the operation you want to perform and the entity on which you want to perform the operation. After selecting the entity, you also select the fields, filtering and sorting conditions.
3. Configure Filter & Sorting: This step is available only for data fetch operations, where you can define the
order in which the records are fetched in the response payload and filter for the fields that you require.
Connect to System
Parameter Description
If you choose Local EDMX File, you select the service definition EDMX file which contains all the details that you specified manually when you selected Remote.
Local EDMX File Choose Select to select the EDMX service schema. You can also manually upload it from your local file system. (Only if you select Connection Source as Local EDMX File.)
Address URL of the service that you want to access. If you are connecting to an on-premise system, enter the Virtual Host in your Cloud Connector installation.
Proxy Type Type of proxy that you want to use to connect to the service.
Parameter Description
● Complex types
● Collection of complex types
● Simple types
● Collection of simple types
● Void
Sub-Levels Sub-levels of the entity that you want to access. For exam
ple, if you want to access the field Description in the entity
Select Entity Entity that you want to perform the operation on.
Fields Fields associated with the entity that you want to perform
the operation on.
Skip Specifies the top 'n' number of entries to be skipped; the rest of the entries are fetched.
This step is available only for data fetch operations, Query(GET) and Read(GET).
Parameter Description
Filter By Select the field that you want to use as reference for filtering,
choose the operation (ex: Less Than or Equal), and provide a
value.
Note
The IN operation is available under filtering when editing
the query manually. This operation is not available in the
Query Modelling wizard.
Example
https://<hostname>/odata/v2/User?$filter=userId in 'ctse1','mhuang1','flynch1'
Sort By Select the field that you want to use as sorting parameter
and choose Ascending or Descending order.
The SuccessFactors receiver adapter enables you to communicate with the SuccessFactors system. You use
the OData V4 message protocol to connect to the OData V4-based Web services of the SuccessFactors
system.
Tip
● If your input payload contains nodes without data, the output also contains empty strings. If you want
to avoid empty strings in the output, ensure that the input payload does not contain any empty nodes.
● You can use headers and exchange properties in the adapter settings to dynamically pass values during
runtime. More information: Headers and Exchange Properties Provided by the Integration Framework
[page 900].
Once you have created a receiver channel and selected the SuccessFactors (OData V4) receiver adapter, you
can configure the following attributes. See Integration Flow Editor for SAP Cloud Platform Integration [page
495].
Select the General tab and provide values in the fields as follows.
Select the Connection tab and provide values in the fields as follows.
Connection
Parameters Description
Address URL of the SuccessFactors data center that you want to connect to.
Credential Name Credential name that you have used while deploying credentials on the tenant.
Proxy Type Type of proxy you want to use to connect to the SuccessFactors system.
Select the Processing tab and provide values in the fields as follows.
Processing
Parameters Description
Operation Select the operation that you want to perform from the dropdown list.
● Query(GET)
● Create(POST)
● Update(PUT)
Resource Path Provide the resource path of the entity that you want to access.
Query Options Query options that you want to send to the OData V4 service with operation details. (Relevant for the Query(GET) operation only.)
The SuccessFactors adapter enables you to communicate with the SuccessFactors system. You use the REST
message protocol to connect to the REST-based Web services of the SuccessFactors system.
Related Information
The SuccessFactors (REST) sender adapter connects an SAP Cloud Platform tenant to a SuccessFactors
sender system using the REST message protocol.
Once you have created a sender channel and selected the SuccessFactors (REST) sender adapter, you can
configure the following attributes. See Integration Flow Editor for SAP Cloud Platform Integration [page 495].
Select the General tab and provide values in the fields as follows.
General
Parameter Description
Select the Processing tab and provide values in the fields as follows.
Processing
Parameter Description
Proxy Type Type of proxy you want to use to connect to the SuccessFactors system.
If you choose Manual, you need to enter values for the fields Proxy Host and Proxy Port.
Proxy Host is the name of the proxy host you are using.
Note
You can only perform GET operations on the sender
channel.
Note
See the relevant LMS API documentation for more details.
Select the Scheduler tab and provide values in the fields as follows.
Scheduler
Schedule on Day On Date Specify the date on which you want the
operation to be executed.
Time Zone Select the time zone that you want the
scheduler to use as a reference for the
date and time settings.
The SuccessFactors (REST) receiver adapter connects an SAP Cloud Platform tenant to a SuccessFactors
receiver system using the REST message protocol.
Once you have created a receiver channel and selected the SuccessFactors (REST) receiver adapter, you can
configure the following attributes. See Integration Flow Editor for SAP Cloud Platform Integration [page 495].
Select the General tab and provide values in the fields as follows.
General
Parameter Description
Select the Processing tab and provide values in the fields as follows.
Processing
Parameter Description
Proxy Type Type of proxy you want to use to connect to the SuccessFactors system.
If you choose Manual, you need to enter values for the fields Proxy Host and Proxy Port.
Proxy Host is the name of the proxy host you are using.
Note
You can only perform GET operations on the sender
channel.
Note
See the relevant LMS API documentation for more details.
The SuccessFactors adapter enables you to communicate with the SuccessFactors system. You use the SOAP
message protocol to connect to the SOAP-based Web services of the SuccessFactors system.
Note
You can now pass filter conditions via a header or property while performing an asynchronous or ad hoc
operation.
Restriction
If you deploy an integration flow in Cloud Integration, it deploys in multiple tenants. Polling is triggered from only one of these tenants. The message monitor displays the process status for the tenants where the Scheduler has not started. As a result, the message monitor displays messages with a processing time of less than a few milliseconds, where the scheduler was not triggered. These entries contain firenow=true in the log. You can ignore these entries.
Remember
You must enable HTTP Session Reuse with either the On Exchange or On Integration Flow level.
For more information, see Specify the Runtime Configuration [page 482].
Related Information
The SuccessFactors (SOAP) sender adapter connects an SAP Cloud Platform tenant to SOAP-based Web
services of a SuccessFactors sender system (synchronous or asynchronous communication).
Remember
You must enable HTTP Session Reuse with either the On Exchange or On Integration Flow level.
For more information, see Specify the Runtime Configuration [page 482].
Once you have created a sender channel and selected the SuccessFactors (SOAP) sender adapter, you can
configure the following attributes. See Integration Flow Editor for SAP Cloud Platform Integration [page 495].
Select the General tab and provide values in the fields as follows.
General
Parameter Description
Select the Processing tab and provide values in the fields as follows.
Processing
Parameter Description
Address URL of the SuccessFactors data center that you want to connect to. You can browse to and select the SuccessFactors data center URL by using the Select option.
Address Suffix The system automatically populates this field with /sfapi/v1/soap as you have selected the SOAP message protocol.
Credential Name Credential name for your credentials that have been deployed on the tenant.
Proxy Type Type of proxy you want to use to connect to the SuccessFactors system.
If you choose Manual, you need to enter values for the fields Proxy Host and Proxy Port.
Proxy Host is the name of the proxy host you are using.
Operation Operation that you want to perform on the entity that you
are accessing on the SuccessFactors system.
Timeout (in min) Maximum time the system waits for a response.
Select the Scheduler tab and provide values in the fields as follows.
Scheduler
Schedule on Day On Date Specify the date on which you want the
operation to be executed.
Time Zone Select the time zone that you want the
scheduler to use as a reference for the
date and time settings.
If you want to select or change the entity, and modify the query, follow the steps in Modifying
SuccessFactors SOAP Entity and Operation [page 694].
Note
Whenever the SuccessFactors SOAP adapter is used, the following headers with the specified values will be
sent to the SuccessFactors back end as HTTP request headers:
1. X-SF-Correlation-Id: MPL ID
2. X-SF-Client-Tenant-Id: Client/Tenant ID
3. X-SF-Process-Name: iFlow Name
4. X-Agent-Name: SAP Cloud Platform Integration
The SuccessFactors (SOAP) receiver adapter connects an SAP Cloud Platform tenant to SOAP-based Web services of a SuccessFactors receiver system (synchronous or asynchronous communication).
Remember
You must enable HTTP Session Reuse with either the On Exchange or On Integration Flow level.
For more information, see Specify the Runtime Configuration [page 482].
Once you have created a receiver channel and selected the SuccessFactors (SOAP) receiver adapter, you can
configure the following attributes. See Integration Flow Editor for SAP Cloud Platform Integration [page 495].
Select the General tab and provide values in the fields as follows.
General
Parameter Description
Select the Connection tab and provide values in the fields as follows.
Connection
Parameter Description
Address URL of the SuccessFactors data center that you want to connect to. You can browse to and select the SuccessFactors data center URL by using the Select option.
Address Suffix The system automatically populates this field with /sfapi/v1/soap as you have selected the SOAP message protocol.
Credential Name Credential name for your credentials that have been deployed on the tenant.
Proxy Type Type of proxy you want to use to connect to the SuccessFactors system.
If you choose Manual, you need to enter values for the fields Proxy Host and Proxy Port.
Proxy Host is the name of the proxy host you are using.
Select the Processing tab and provide values in the fields as follows.
Processing
Parameter Description
Operation Operation that you want to perform on the entity that you
are accessing on the SuccessFactors system.
Process in Pages (only if you are using the SuccessFactors SOAP adapter in the receiver channel of a Local Integration Process) You cannot use the Process in Pages option with the Query operation if the Process Call step is used in a Multicast branch.
Note
● By selecting Process in Pages, you enable the adapter to process messages in batches. The size of a message batch is defined by the value that you specify in the Page Size field.
● After fetching the first page in the local integration process, if the processing of this page by subsequent flow steps results in a SuccessFactors idle session timeout, then the SuccessFactors connector internally uses the startRow API to fetch the next page.
Tip
In the Process Call step that calls the Local Integration Process, ensure that you enable looping and set Expression Type as Non-XML, Condition Expression as ${property.SAP_SuccessFactorsHasMoreRecords.<receiver.name>} contains 'true', and Maximum Number of Iterations as 999.
Note
If you want to select or change the entity, and modify the query, follow the steps in Modifying
SuccessFactors SOAP Entity and Operation [page 694].
Note
Whenever the SuccessFactors SOAP adapter is used, the following headers with the specified values will be
sent to the SuccessFactors back end as HTTP request headers:
1. X-SF-Correlation-Id: MPL ID
2. X-SF-Client-Tenant-Id: Client/Tenant ID
3. X-SF-Process-Name: iFlow Name
4. X-Agent-Name: SAP Cloud Platform Integration
Prerequisites
● You are configuring the communication channel that is assigned with the SuccessFactors SOAP adapter.
● The TrustedCAs from the service provider you are connecting to have been added to the system keystore
associated with your HCI account.
Context
The SuccessFactors SOAP adapter enables you to modify the entity that you are accessing and the operation
that you are performing on the entity. You can use this adapter to perform the following operations:
● Query
● Insert
● Update
● Upsert
You use the following procedure to change the entity and modify the operation for the SuccessFactors SOAP
adapter.
Procedure
Fields Description
System Name of the SuccessFactors data center that you are connecting to
Address URL of the SuccessFactors data center that you are connecting to
3. If you have added the system with authentication details, perform the following substeps:
a. In the System field, choose the system you want to connect to from the dropdown list.
b. In the Password field, provide the appropriate password.
c. Choose Connect.
4. In the Entity Selection dialog, select the entity that you want to access.
5. In Operation dropdown list, choose the operation that you want to perform.
Note
You can only perform the Query operation in the sender channel.
6. In the Fields dropdown list, select the fields that you want to perform the operation on.
Note
You can select multiple fields by selecting their respective checkboxes. Choose the operation you want
to perform.
7. If you want to specify filter conditions, choose the icon, select the operation you want to use, and
perform the following substeps:
a. In the Filter By field, enter the fields that you want to filter.
b. In the Operator dropdown list, select the operator that you want to use to define the filter condition.
c. In the Input Type field, select the type of input you want to provide for defining the filter condition.
d. In the Value field, provide the value of the input for the filter condition.
e. Select AND or OR based on how you want this filter condition to be evaluated when the operation is
executed.
f. Choose Add to add more than one filter condition.
8. If you want to specify sorting conditions, choose the icon and perform the following substeps:
a. In the Field field, select the field you want to include in the sorting condition.
b. If you want the sorting to be done in descending order, select the checkbox in the Desc column.
c. Choose Add to add more than one sorting condition.
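The way the selected fields, filter conditions, and sorting conditions combine into a single query can be sketched as below. The SQL-like output syntax is purely illustrative (the adapter generates SuccessFactors SOAP queries internally); the tuple shapes for filters and sorting are assumptions made for this example:

```python
def build_query(entity, fields, filters=(), sort=()):
    """Compose a query string from the dialog inputs described above.
    filters: (field, operator, value, AND/OR connector) tuples;
    sort: (field, descending?) tuples."""
    query = "SELECT {} FROM {}".format(", ".join(fields), entity)
    clauses = []
    for i, (field, op, value, connector) in enumerate(filters):
        clause = "{} {} '{}'".format(field, op, value)
        # The AND/OR connector joins this condition to the previous one.
        clauses.append(clause if i == 0 else "{} {}".format(connector, clause))
    if clauses:
        query += " WHERE " + " ".join(clauses)
    if sort:
        query += " ORDER BY " + ", ".join(
            "{} DESC".format(f) if desc else f for f, desc in sort)
    return query
```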
Results
The system displays a message that an XSD file has been created. You can use this XSD file for mapping steps.
You use the Twitter receiver adapter to extract information from the Twitter platform based on certain criteria, such as keywords or user data. For example, you can use this feature to send, search for, and receive Twitter feeds.
The connection works as follows: the tenant logs on to Twitter based on an OAuth authentication mechanism and searches for information based on criteria as configured in the adapter at design time. OAuth allows the tenant to access someone else's resources (of a specific Twitter user) on behalf of the tenant. As illustrated in the figure, the tenant (through the Twitter receiver adapter) calls the Twitter API to access resources of a specific Twitter user. Currently, the Twitter adapter can only be used as a receiver adapter. For more information on the Twitter API, go to: https://dev.twitter.com/ .
Once you have created a receiver channel and selected the Twitter Receiver Adapter, you can configure the
following attributes. See Integration Flow Editor for SAP Cloud Platform Integration [page 495].
Select the General tab and provide values in the fields as follows.
General
Parameter Description
Select the Connection tab and provide values in the fields as follows.
Connection
Parameter Description
Endpoint To access Twitter content, you can choose among the following general options.
● Send Tweet
Allows you to send content to a specific user timeline.
● Search
Allows you to do a search on Twitter content by specifying keywords.
● Send Direct Message
Allows you to send messages to Twitter (write access, direct message).
User Specifies the Twitter user from which account the information is to be extracted.
Page Size Specifies the maximum number of results (tweets) per page.
Number of Pages Specifies the number of pages which you want the tenant to consume.
Max. Characters Retrieved from Tweet (only if you select Search endpoint)
Select the maximum number of characters that are fetched from a tweet.
Note
● If you select 140 and the tweet contains more than 140 characters, then the tweet is truncated and the tweet URL is appended.
● If you select 280 and the tweet contains more than 140 characters, then the adapter fetches the entire tweet, for regular tweets as well as retweets.
Keywords (only if you select Search endpoint)
Use commas to separate different keywords, or provide a valid Twitter Search API query (for more information, go to https://dev.twitter.com/rest/public/search ).
Consumer Key An alias by which the consumer (tenant) that requests Twitter resources is identified
Consumer Secret An alias by which the shared secret is identified (that is used to define the token of the consumer (tenant))
Access Token An alias by which the access token for the Twitter user is identified
In order to make authorized calls to the Twitter API, your application must first obtain an OAuth access token on behalf of a Twitter user.
Access Token Secret An alias by which the shared secret is identified that is used to define the token of the Twitter user
Proxy Type The type of proxy that you want to use to establish the connection to the Twitter platform.
● Internet
● Manual: If you select this option, you can manually specify Proxy Host and Proxy Port
(using the corresponding entry fields).
The authorization is based on shared secret technology. This method relies on the fact that all parties of a communication share a piece of data that is known only to the parties involved. Using OAuth in the context of this adapter, the consumer (that calls the API of the receiver platform on behalf of a specific user of this platform) identifies itself using its Consumer Key and Consumer Secret, while the context to the user itself is defined by an Access Token and an Access Token Secret. These artifacts are to be generated for the receiver platform app (consumer) and should be configured such that they never expire. This adapter only supports consumer key/secret and access token key/secret artifacts that do not expire.
To finish the configuration of a scenario using this adapter, the generated consumer key/secret and access token key/secret artifacts are to be deployed as Secure Parameter artifacts on the related tenant. To do this, use the Integration Operations feature, position the cursor on the tenant, and choose Deploy Artifact. As artifact type, choose Secure Parameter.
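The shared-secret idea behind this authorization can be illustrated with a simplified HMAC-SHA1 signature computation in the style of OAuth 1.0a (which the Twitter API uses). This is a sketch only: a real OAuth 1.0a request additionally needs oauth_nonce, timestamp, and token protocol parameters and the full percent-encoding rules of RFC 5849.

```python
import base64
import hashlib
import hmac
import urllib.parse

def oauth1_signature(method, url, params, consumer_secret, token_secret):
    """Sign a request with a key derived from the consumer secret and
    the access token secret -- the two shared secrets described above."""
    q = lambda s: urllib.parse.quote(str(s), safe="")
    # Normalize the request parameters into a sorted, encoded string.
    normalized = "&".join("{}={}".format(q(k), q(v))
                          for k, v in sorted(params.items()))
    base_string = "&".join([method.upper(), q(url), q(normalized)])
    # The signing key combines both shared secrets.
    signing_key = "{}&{}".format(q(consumer_secret), q(token_secret))
    digest = hmac.new(signing_key.encode(), base_string.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

Only a party holding both secrets can produce a valid signature, which is why the consumer key/secret and access token key/secret pairs must be kept in secure storage (here: Secure Parameter artifacts) rather than in the integration flow itself.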
The XI adapter connects an SAP Cloud Platform tenant to a remote system that can process the XI message
protocol.
Related Information
The XI sender adapter allows you to connect a tenant to a local Integration Engine in a sender system.
Remember
This component or some of its features might not be available in the Cloud Foundry environment. For more
information on the limitations, see SAP Note 2752867 .
Note
In the following cases certain features might not be available for your current integration flow:
● A feature for a particular adapter or step was released after you created the corresponding shape in
your integration flow.
● You are using a product profile other than the one expected.
More information: Adapter and Integration Flow Step Versions [page 405]
Use
With this adapter you can connect an on-premise back-end system to SAP Cloud Platform Integration. The XI sender adapter communicates (receives messages) over the XI 3.0 protocol.
● Avoid changing the temporary storage location for operative scenarios (or do this very carefully).
Such an action can result in data loss. The reason is that outdated messages (which will not be retried any more) can still be stored there, even after you have changed the Temporary Storage attribute in the meantime. When you plan to change this attribute, make sure there are no messages left in the originally configured temporary storage.
● Avoid changing the (sender or receiver) participant or channel name in the integration flow.
The name of the configured Temporary Storage is generated based on these names. If you change these names in the integration flow model and deploy the integration flow again, a temporary storage is generated with the new name. However, there can still be messages in the old storage, and these messages will not be retried any more after the new storage has been created.
When you have created a sender channel (with XI adapter selected), you can configure the following attributes.
Connections Tab
Sender: Connection
Parameter Description
Address Address under which a sender system can reach the tenant
Note
When you specify the endpoint address /path, a sender can also call the integration flow through the endpoint address /path/<any string> (for example, /path/test/).
Be aware of the following related implication: When you additionally deploy an integration flow with endpoint address /path/test/, a sender using the /path/test endpoint address will now call the newly deployed integration flow with the endpoint address /path/test/. When you then undeploy the integration flow with endpoint address /path/test, the sender again calls the integration flow with endpoint address /path (original behavior). Therefore, be careful when reusing paths of services. It is better to use completely separate endpoints for services.
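The dispatch behavior in the note above amounts to longest-prefix matching over the deployed endpoint paths. The following sketch models it (this is an illustration of the described behavior, not the runtime's actual routing code):

```python
def resolve_endpoint(deployed_paths, request_path):
    """Return the deployed path that serves request_path: the longest
    deployed prefix matches, and /path also answers /path/<anything>."""
    path = request_path.rstrip("/")
    candidates = [p.rstrip("/") for p in deployed_paths
                  if path == p.rstrip("/")
                  or path.startswith(p.rstrip("/") + "/")]
    return max(candidates, key=len) if candidates else None
```

With only /path deployed, a call to /path/test resolves to /path; once /path/test is also deployed, the same call resolves to /path/test instead, which is exactly the implication the note warns about.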
● Client Certificate: Sender authorization is checked on the tenant by evaluating the subject/issuer distinguished name (DN) of the certificate (sent together with the inbound request). You can use this option together with the following authentication option: Client-certificate authentication (without certificate-to-user mapping).
● User Role: Sender authorization is checked based on roles defined on the tenant for the user associated with the inbound request. You can use this option together with the following authentication options:
○ Basic authentication (using the credentials of the user)
The authorizations for the user are checked based on user-to-role assignments defined on the tenant.
○ Client-certificate authentication and certificate-to-user mapping
The authorizations for the user derived from the certificate-to-user mapping are checked based on user-to-role assignments defined on the tenant.
Depending on your choice, you can also specify one of the following properties:
Note
Note the following:
○ You can also type in a role name. This has the same result as selecting the role from the value help: Whether the inbound request is authenticated depends on the correct user-to-role assignment defined in SAP Cloud Platform Cockpit.
○ When you externalize the user role, the value help for roles is offered in the integration flow configuration as well.
○ If you have selected a product profile for SAP Process Orchestration, the value help will only show the default role ESBMessaging.send.
Communication Party Specify the communication party for the response. Per default no party is set.
Communication Component Specify the communication component for the response. The default is DefaultXIService.
Quality of Service Defines how the message (from the sender) is processed by the tenant.
● Best Effort
The message is sent synchronously; this means that the tenant waits for a response before it continues processing.
No temporary storage of the message needs to be configured, as message request and response are processed immediately one after the other.
● At Least Once
The message is sent asynchronously. This means that the tenant does not wait for a response before continuing processing.
This option guarantees that the message is processed at least once on the tenant. If a message with an identical message ID is received multiple times from a sender, all of them will be processed.
If you choose this option, the message needs to be temporarily stored on the tenant (in the storage configured under Temporary Storage). As soon as the message is stored there, the sender receives a successfully received status message. If an error occurs, the message is retried from the temporary storage.
● Exactly Once (only possible if the option Data Store is selected as Temporary Storage)
The message is sent asynchronously. This means that the tenant does not wait for a response before continuing processing.
This option guarantees that the message is processed exactly once on the tenant. If a message with an identical message ID is received multiple times from a sender, only the first one will be processed. The subsequent messages can be identified as duplicates (based on the value of the message header SapMessageIdEx, see below) and will not be processed.
Note
For Exactly Once handling, the sender XI adapter saves the protocol-specific message ID in the header SapMessageIdEx. If this header is set, the XI receiver uses the content of this header as the message ID for outbound communication. Usually, this is the desired behavior and enables the receiver to identify any duplicates. However, if the sender system is also the receiver system, or several variants of the message are sent to the same system (for example, in an external call or multicast), the receiver system will incorrectly identify these messages as duplicates. In this case, the header SapMessageIdEx must be deleted (for example, using a content modifier) or overwritten with a newly generated message ID. This deactivates Exactly Once processing (that is, duplicates are no longer recognized by the protocol).
If you choose this option, the message needs to be temporarily stored on the tenant (in the storage configured under Temporary Storage). As soon as the message is stored there, the sender receives a successfully received status message. If an error occurs, the message is retried from the temporary storage.
Temporary Storage (only if Exactly Once is selected as Quality of Service)
Temporary storage location for messages that are processed asynchronously. Messages for which processing runs into an error can be retried from the temporary storage.
You can choose among the following storage types:
● Data Store
The message is temporarily stored in the database of the tenant (in case of an error). In case of successful message processing, the message is immediately removed from the data store.
You can monitor the content of the data store in the Monitor section of the Web UI under Manage Stores in the Data Stores tile.
Note
The data store name is automatically generated and contains the following
parts:
Below the data store name, you find a reference to the associated integra
tion flow in the following form: <integration flow name>/XI
● JMS Queue
The message is stored temporarily in a JMS queue on the configured message broker.
If possible, use this option, as it comes with better performance.
You can monitor the content of the queue in the Monitor section of the Web UI under Manage Stores in the Message Queues tile.
Note
The name of the JMS queue is automatically generated and contains the
following parts:
Note
This option is only available if you have an Enterprise Edition license.
Lock Timeout (in min) Enter a value for the retry timeout of the in-progress repository.
Retry Interval (in min) Enter a value for the amount of time to wait before retrying message delivery.
Exponential Backoff Select this option to double the retry interval after each unsuccessful retry.
Maximum Retry Interval (in min) (only configurable when Exponential Backoff is selected)
You can set an upper limit on that value to avoid an endless increase of the retry interval. The default value is 60 minutes. The minimum value is 10 minutes.
Dead-Letter Queue (only if JMS Queue is selected as Temporary Storage)
Select this option to place the message in the dead-letter queue if it cannot be processed after three retries caused by an out-of-memory error. Processing of the message is then stopped.
In such cases, a lock entry is created, which you can view and release in the Monitor section of the Web UI under Managing Locks.
Use this option to avoid out-of-memory situations (caused in many cases by large messages).
For more information, read the SAP Community blog Cloud Integration – Configure Dead Letter Handling in JMS Adapter .
Encrypt Message during Persistence (only if you have selected Exactly Once as Quality of Service)
Select this option in case the messages should be stored in an encrypted way during certain processing steps.
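The Exactly Once duplicate check described above can be sketched as follows. This is an illustration of the behavior, not the tenant's implementation: `processed_ids` stands in for the tenant's temporary storage, and the message is modeled as a plain dict carrying the SapMessageIdEx header.

```python
def process_exactly_once(message, processed_ids, handler):
    """Skip a message whose SapMessageIdEx was already seen; otherwise
    record its ID and hand the message to the processing step."""
    msg_id = message.get("SapMessageIdEx")
    if msg_id is not None:
        if msg_id in processed_ids:
            # Identical message ID received again: identified as duplicate,
            # not processed (Exactly Once behavior).
            return "duplicate"
        processed_ids.add(msg_id)
    # No ID, or first occurrence: process the message.
    handler(message)
    return "processed"
```

Deleting or overwriting SapMessageIdEx (as the note above describes for multicast or same-system scenarios) corresponds to the `msg_id is None` branch here: without a stable ID, duplicates can no longer be recognized.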
When you have selected Exactly Once as Quality of Service, you can use certain headers to specify that after a defined number of message retries, message processing is changed in a specific way. For example, you can configure the integration flow so that after 5 retries the message is routed to a specific receiver (who will then receive an alert email). You can do this by using one of the headers mentioned below in a dynamic expression.
Which header you can use depends on the chosen kind of temporary storage.
● If as Temporary Storage you have chosen the option Data Store, you can use header
SAP_DataStoreRetries.
● If as Temporary Storage you have chosen the option JMS Queue, you can use header SAPJMSRetries.
Tip
Example
When as Temporary Storage you have chosen the option Data Store, you can use the following expression in
the route that is supposed to forward the message to the receiver of the alert email:
In this example, the message is routed to the related receiver after 5 retries.
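The routing decision in this example can be sketched as below. This is a hypothetical illustration of the logic (the function and receiver names are invented); in the integration flow itself you express the same check as a routing condition on the retry-count header.

```python
def pick_route(headers, threshold=5):
    """Route to the alert receiver once the retry count reaches the
    threshold. The header name depends on the temporary storage:
    SAP_DataStoreRetries for Data Store, SAPJMSRetries for JMS Queue."""
    retries = int(headers.get("SAP_DataStoreRetries", 0))
    return "alert_receiver" if retries >= threshold else "default_receiver"
```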
Conditions
Parameter Description
Maximum Message Size This parameter allows you to configure a maximum size for inbound messages (the smallest value for a size limit is 1 MB). All inbound messages that exceed the specified size (per integration flow and on the runtime node where the integration flow is deployed) are blocked.
● Body Size
● Attachment Size
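The size gate described above can be sketched as follows; the function is a hypothetical illustration of the check, with body and attachments modeled as byte strings:

```python
def accept_inbound(body, attachments, max_body_mb, max_attachment_mb):
    """Accept an inbound message only if body and attachments stay
    within the configured limits; the smallest valid limit is 1 MB."""
    mb = 1024 * 1024
    if min(max_body_mb, max_attachment_mb) < 1:
        raise ValueError("smallest value for a size limit is 1 MB")
    return (len(body) <= max_body_mb * mb
            and len(attachments) <= max_attachment_mb * mb)
```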
Related Information
Headers and Exchange Properties Provided by the Integration Framework [page 900]
https://blogs.sap.com/2018/06/04/cloud-integration-configuring-scenario-using-the-xi-sender-adapter/
https://blogs.sap.com/2018/12/04/cloud-integration-configuring-scenario-with-xi-sender-handling-multiple-
interfaces/
https://blogs.sap.com/2018/08/15/cloud-integration-configuring-explicit-retry-in-exception-sub-process-for-
xi-adapter-scenarios/
The XI receiver adapter allows you to connect a tenant to a local Integration Engine in a receiver system. The
adapter supports communication over the XI 3.0 protocol.
Remember
This component or some of its features might not be available in the Cloud Foundry environment. For more
information on the limitations, see SAP Note 2752867 .
Note
In the following cases certain features might not be available for your current integration flow:
● A feature for a particular adapter or step was released after you created the corresponding shape in
your integration flow.
More information: Adapter and Integration Flow Step Versions [page 405]
● Avoid changing the temporary storage location for operative scenarios (or do this very carefully).
Such an action can result in data loss. The reason is that outdated messages (which will not be retried any more) can still be stored there, even after you have changed the Temporary Storage attribute in the meantime. When you plan to change this attribute, make sure there are no messages left in the originally configured temporary storage.
● Avoid changing (sender or receiver) participant or channel name in the integration flow.
The name of the configured Temporary Storage is generated based on these names. If you change these
names in the integration flow model and deploy the integration flow again, a temporary storage is
generated with the new name. However, there can still be messages in the old storage. These messages will
not be retried any more after the new storage has been created.
● Apply the right transaction handling.
JMS queues and data stores support different transactional processing. As there are additional implications of each transactional processing option with other integration flow steps, we strongly recommend that you follow these rules:
○ If you select JMS Queue as Temporary Storage in an XI receiver adapter and the XI adapter is used in a sequential multicast or splitter pattern, you have to select Required for JMS as Transaction Handling.
○ If you select Data Store as Temporary Storage in an XI receiver adapter and the XI adapter is used in a sequential multicast or splitter pattern, you have to select Required for JDBC as Transaction Handling.
○ For the XI sender adapter, no transaction handler is required.
○ For the XI receiver adapter, no transaction handler is required if the XI adapter is not used in a sequential multicast or in a split scenario.
○ There is no distributed transaction support. Therefore, you cannot combine JMS and JDBC transactions. As a consequence, transactional behavior cannot be guaranteed in scenarios using the XI receiver adapter with JMS storage in multicast scenarios together with flow steps that need a JDBC transaction handler (like, for example, Data Store or Write Variables).
● The XI adapter can be used in an Exactly Once scenario (as Quality of Service) in the Send step.
● The XI adapter can be used in a Best Effort scenario (as Quality of Service) in the Request-Reply step.
When you have created a receiver channel (with XI adapter selected), you can configure the following
attributes.
Receiver: Connection
Parameter Description
Address Address under which the local integration engine of the receiver system can be called
https://<host name>:<port>/sap/xi/engine?type=receiver&sap-
client=<client>
Note
You can find out the constituents (HTTPS port) of the URL by choosing transaction SMICM in the receiver system.
The endpoint URL that has actually been used at runtime is displayed in the message processing log (MPL) in the message monitoring application (MPL property RealDestinationUrl).
Proxy Type The type of proxy that you are using to connect to the target system.
Note
If you select the On-Premise option, the following restrictions apply to other parameter values:
○ Do not use an HTTPS address for Address, as it leads to errors when performing consistency checks or during deployment.
○ Do not use the option Client Certificate for the Authentication parameter, as it leads to errors when performing consistency checks or during deployment.
Note
If you select the On-Premise option and use the SAP Cloud Connector to connect to your on-premise system, the Address field of the adapter references a virtual address, which has to be configured in the SAP Cloud Connector settings.
● If you select Manual, you can manually specify Proxy Host and Proxy Port (using the corresponding entry fields).
Furthermore, with the parameter URL to WSDL you can specify a Web Service Definition Language (WSDL) file defining the WS provider endpoint (of the receiver). You can specify the WSDL by either uploading a WSDL file from your computer (option Upload from File System) or by selecting an integration flow resource (which needs to be uploaded in advance to the Resources view of the integration flow). This option is only available if you have chosen a Process Orchestration product profile.
Location ID (only in case On-Premise is selected for Proxy Type)
To connect to an SAP Cloud Connector instance associated with your account, enter the location ID that you defined for this instance in the destination configuration on the cloud side. You can also enter ${header.headername} or ${property.propertyname} to dynamically read the value from a header or a property.
● None
No authentication is configured.
● Basic Authentication
The tenant authenticates itself against the receiver based on user credentials (user and password).
Note that when this authentication option is selected, the required security artifact (User Credentials)
has to be deployed on the tenant.
● Certificate-Based Authentication
The tenant authenticates itself against the receiver based on X.509 certificates.
Note that when this authentication option is selected, the required security artifact (Keystore) has to be
deployed on the tenant.
● Principal Propagation
The tenant authenticates itself against the receiver by forwarding the principal of the inbound user to the cloud connector, and from there to the back end of the relevant on-premise system. You can only use principal propagation if you have selected Best Effort as the Quality of Service.
Note
This authentication method can only be used with the following sender adapters: HTTP, SOAP, IDoc, XI sender (and Quality of Service Best Effort).
Note
Please note that the token for principal propagation expires after 30 minutes.
If it takes longer than 30 minutes to process the data between the sender and receiver channel, the
token for principal propagation expires, which leads to errors in message processing.
For special use cases, this authentication method can also be used with the AS2 adapter.
Credential Name Name of the User Credentials artifact that needs to be deployed separately on the tenant (it contains the user name and password for the user to be authenticated).
Private Key Alias Optional entry to specify the alias of the private key to be used for authentication. If you leave this field empty, the system checks at runtime for any valid key pair in the tenant keystore.
Compress Message Enables the tenant to send compressed request messages to the receiver (which acts as WS provider) and to indicate to the receiver that it can handle compressed response messages.
Return HTTP Response Code as Header When selected, writes the HTTP response code received in the response message from the called receiver system into the header CamelHttpResponseCode. This feature is disabled by default.
Note
You can use this header, for example, to analyze the message processing run (when level Trace has been enabled for monitoring). Furthermore, you can use this header to define error handling steps after the integration flow has called the XI receiver.
You cannot use the header to change the return code, since the return code is defined in the adapter and cannot be changed.
Clean Up Request Headers Select this option to clean up the adapter-specific headers after the receiver call.
XI Identifiers for Sender
An XI message contains specific header elements that are used to address the sender or a receiver of the message, such as communication party, communication component, and service interface. For more information on the concepts behind these entities, see the documentation of SAP Process Integration at https://help.sap.com/viewer/product/SAP_NETWEAVER_PI/ALL/.
● Communication Party
Specifies the Communication Party header value of the request message sent to the receiver system. A communication party typically represents a larger unit involved in an integration scenario. You usually use a communication party to address a company.
This parameter can be configured dynamically.
● Communication Component
Specifies the Communication Component header value of the request message sent to the receiver system. You typically use a communication component to address a business system as the sender or receiver of messages.
This parameter can be configured dynamically.
XI Identifiers for Receiver
An XI message contains specific header elements that are used to address the sender or a receiver of the message, such as communication party, communication component, and service interface. For more information on the concepts behind these entities, see the documentation of SAP Process Integration at https://help.sap.com/viewer/product/SAP_NETWEAVER_PI/ALL/.
● Communication Party
Specifies the Communication Party header value of the response message received from the receiver system. A communication party typically represents a larger unit involved in an integration scenario. You usually use a communication party to address a company.
This parameter can be configured dynamically.
● Communication Component
Specifies the Communication Component header value of the response message received from the receiver system. You typically use a communication component to address a business system as the sender or receiver of messages.
This parameter can be configured dynamically.
Note
To get this information, go to the receiver system and choose transaction SLDCHECK. In section LCR_GET_OWN_BUSINESS_SYSTEM, you find the business system ID (which typically has the form <SID>_<client>).
● Service Interface and Service Interface Namespace
These attributes specify the service interface that determines the data structure of the response message received from the receiver system. The receiver interface is described according to how interfaces are defined in the Enterprise Services Repository, that is, with a name and a namespace.
This parameter can be configured dynamically.
XI Message ID Determination
Select this option to specify how the XI message ID shall be defined. You can choose among the following options:
● Generate (default)
Generates a new XI message ID.
● Reuse
Takes over the message ID passed with the header SapMessageIdEx. If the header is not available at runtime, a new message ID is generated.
● Map
Maps a source message ID to the new XI message ID. For more information on how to use this option in an end-to-end scenario, check out the SAP Community blog Cloud Integration – Configuring ID Mapping in XI Receiver Adapter .
Quality of Service
Defines how the message (from the tenant) is expected to be processed by the receiver. There are the following options:
● Best Effort
The message is sent synchronously; this means that the tenant waits for a response before it continues processing. No temporary storage of the message needs to be configured, as message request and response are processed immediately one after the other.
● Exactly Once
The message is sent asynchronously. This means that the tenant does not wait for a response before continuing processing. It is expected that the receiver guarantees that the message is processed exactly once.
If you choose this option, the message needs to be temporarily stored on the tenant (in the storage configured under Temporary Storage). As soon as the message is stored there, the sender receives a successfully received status message. If an error occurs, the message is retried from the temporary storage.
Temporary Storage
Temporary storage location for messages that are processed asynchronously. Messages for which processing runs into an error can be retried from the temporary storage.
You can choose among the following storage types:
● Data Store
The message is temporarily stored in the database of the tenant (in case of an error). In case of successful message processing, the message is immediately removed from the data store.
You can monitor the content of the data store in the Monitor section of the Web UI under Manage Stores in the Data Stores tile.
Note
The data store name is automatically generated. This name is subject to a length restriction and must be no more than 40 characters (including the underscore). If the data store name exceeds this limit, you must shorten the participant name or channel name accordingly.
Below the data store name, you find a reference to the associated integration flow in the following form: <integration flow name>/XI
● JMS Queue
The message is stored temporarily in a JMS queue on the configured message broker. If possible, use this option, as it comes with better performance.
You can monitor the content of the queue in the Monitor section of the Web UI under Manage Stores in the Message Queues tile.
Note
The name of the JMS queue is automatically generated and contains the following parts:
Note
This option is only available if you have an Enterprise Edition license.
Retry Enter a value for the amount of time to wait before retrying message delivery (in case of an error).
Interval (in
min)
Exponential Select this option to double the retry interval after each unsuccessful retry.
Backoff
Maximum You can set an upper limit on that value to avoid an endless increase of the retry interval. The default value is
Retry
60 minutes. The minimum value is set to 10 minutes.
Interval (in
min)
(only con
figurable
when
Exponential
Backoff is
selected)
Dead-Letter Queue (only if JMS Queue is selected as Temporary Storage): Select this option to place the message in the dead-letter queue if it cannot be processed after three retries caused by an out-of-memory error. Processing of the message is then stopped.
In such cases, a lock entry is created which you can view and release in the Monitor section of the Web UI under Managing Locks.
Use this option to avoid out-of-memory situations (caused in many cases by large messages).
For more information, read the SAP Community blog Cloud Integration – Configure Dead Letter Handling in JMS Adapter.
Encrypt Message during Persistence (only if you have selected Exactly Once as Quality of Service): Select this option if the messages should be stored encrypted during certain processing steps. It is recommended to choose this option if the message can contain sensitive data. Note that this setting might decrease the performance of the integration scenario.
If you have selected Exactly Once as Quality of Service, you can use certain headers to change message processing after a defined number of retries. For example, you can configure the integration flow so that after 5 retries the message is routed to a specific receiver (who then receives an alert email). You can do this by using one of the following headers in a dynamic expression.
Which header you can use depends on the chosen kind of temporary storage.
● If you have chosen Data Store as Temporary Storage, you can use the header SAP_DataStoreRetries.
● If you have chosen JMS Queue as Temporary Storage, you can use the header SAPJMSRetries.
Example
If you have chosen Data Store as Temporary Storage, you can use the following expression in the route that is supposed to forward the message to the receiver of the alert email:
In this example, the message is routed to the related receiver after 5 retries.
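The expression itself is not reproduced in this extract. A plausible sketch, assuming Camel's simple-expression syntax and the 5-retry threshold from the example (the exact form in the original document may differ):

```
${header.SAP_DataStoreRetries} > 5
```

With JMS Queue as temporary storage, the analogous condition would use SAPJMSRetries instead.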
Related Information
Headers and Exchange Properties Provided by the Integration Framework [page 900]
https://blogs.sap.com/2018/06/04/cloud-integration-configuring-scenario-using-the-xi-receiver-adapter/
https://blogs.sap.com/2018/06/04/cloud-integration-configuring-scenario-using-the-xi-sender-adapter/
https://blogs.sap.com/2018/08/15/cloud-integration-configuring-explicit-retry-in-exception-sub-process-for-
xi-adapter-scenarios/
https://blogs.sap.com/2018/09/21/cloud-integration-usage-of-xi-adapter-in-send-and-request-reply-step/
https://blogs.sap.com/2018/11/16/cloud-integration-configuring-id-mapping-in-xi-receiver-adapter/
Context
The following describes how using an End Message event affects the message status (shown in the message processing log).
Note
To catch any exceptions thrown in the integration process and handle them, you can use an Exception
Subprocess.
If an exception occurs during the processing sequence and has been handled in an Exception Subprocess, the message status displayed in the message processing log is Completed.
If you want to configure your integration flow so that the message status displayed in the message processing log is Failed (even if an exception that occurs during the processing sequence has been handled successfully in an Exception Subprocess), you have the following options:
Procedure
Context
You use a Terminate Message event if you want to stop a message from further processing. This can be the
case if, for example, you have defined specific values on the payload. If the payload doesn't match those values,
the process is terminated.
Note
The message status displayed in the message processing log is Completed, because the message has been terminated successfully.
If you want to configure your integration flow so that the message status displayed in the message processing log is Failed (even if an exception that occurs during the processing sequence has been handled successfully in an Exception Subprocess), you have the following options:
This blog provides more information on the different message events that you can use in an integration flow:
Message Events in Integration Flows
Related Information
Context
Procedure
Receiver not found: Receiver could not be found because the URL points to a non-existent resource (for example, HTTP 404 error).
Not authenticated to invoke receiver: Receiver could not be called because authentication has failed (for example, HTTP 401 error).
Receiver tries to redirect: Receiver could not be reached (HTTP 302 error).
Internal server error in receiver: Internal server error occurred in the receiver system (for example, HTTP 500 error).
Others – not further qualified: Escalation category has not been further qualified.
This topic describes the behavior of integration processes containing an escalation end event.
The Escalation Event does not abort the integration flow processing as a whole but only throws the Escalation Event.
If an Escalation End Event is used in the main integration process, it throws an Escalated event and the processing stops afterwards, as there are no further steps to execute.
If an integration process is calling a local integration process and an escalation event has been defined in the
local integration process, the Escalation Event is thrown, and the processing continues in the calling flow. If
there are still steps to process in the main flow, the processing continues normally at this level and the main
flow message status will be Completed.
Note
In case of a synchronous exchange pattern, you might expect the process to raise an exception, but no exception is raised and the message is marked as Escalated. This handling was explicitly chosen, since the event is usually defined in the exception subprocess and the sender expects a valid response in a synchronous exchange pattern.
Note
In case of an asynchronous exchange pattern no exception will be raised and the message status will be
Escalated.
Related Information
You can configure an integration flow to automatically start and run on a particular schedule.
Context
If you want to configure a process to automatically start and run on a particular schedule, you can use this
procedure to set the date and time on which the process should run once or repetitively. The day and time
combinations allow you to configure the schedule the process requires. For example, you can set the trigger
just once on a specific date or repetitively on that date, or you can periodically trigger the timer every day,
specific days in a week or specific days in a month along with the options of specific or repetitive time.
A Timer Start event and a Start Message event (sender channel) must not be modelled in the same integration
flow. This would result in an error ERROR: Multiple 'Start Events' not allowed within an
Integration Process pool.
Note
When you use a timer with the Run Once option enabled, the message is triggered only when the integration flow is deployed. If you want to trigger the message with an integration flow that you have already deployed, you have to undeploy the integration flow and deploy it again. If you restart the integration flow bundle, the message will not be triggered.
When you deploy or undeploy an integration flow with a Timer scheduler, the system automatically releases all the scheduler locks.
Note
When you deploy small integration flows with Timer (for example, an integration flow with timer, content
modifier and mail adapter), due to extremely fast processing times, multiple schedules are triggered.
If the Timer is configured to trigger message processing at periodic intervals and the processing is not
completed before the next scheduled interval, then the Timer skips the following interval.
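The skip behavior described in the note can be modeled as follows. This is an illustrative Python sketch, not CPI code; the function name and timing model are ours.

```python
def fired_intervals(period, processing_time, horizon):
    """Sketch of the Timer skip behavior: a scheduled interval is skipped
    while the previous run is still processing."""
    fired, busy_until, t = [], 0, 0
    while t < horizon:
        if t >= busy_until:              # previous run finished: fire the timer
            fired.append(t)
            busy_until = t + processing_time
        # otherwise this scheduled interval is skipped
        t += period
    return fired

# period 10, each run takes 15: every second scheduled tick is skipped
print(fired_intervals(10, 15, 60))  # [0, 20, 40]
```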
Procedure
Run Once: NA
Schedule on Day: Select if you want the message processing to be triggered on a specific date and time.
● On Date: Select the date on which you want the message processing to be triggered.
● On Time: Select the time at which you want the message processing to be triggered.
Note
If you configure the Timer to trigger message processing at a specific time and date, then once this message processing is completed, the integration flow will be in an error state. You can see this in the Manage Integration Content view in the Monitor tab. This is because the configured time and date are in the past and the integration flow cannot process any further messages.
Message routers enable you to define the message path. You can also perform operations like splitting the
message based on configured conditions and routing the split messages to different message paths.
Note
The Throw Exception option is selected to set the behavior of the router to respond to error occurrence.
Prerequisites
● You have logged in to your customer workspace in the SAP Cloud Platform Integration web application.
● You are editing the integration flow in the editor.
Context
Note
In the following cases certain features might not be available for your current integration flow:
● A feature for a particular adapter or step was released after you created the corresponding shape in
your integration flow.
● You are using a product profile other than the one expected.
More information: Adapter and Integration Flow Step Versions [page 405]
The General Splitter splits a composite message comprising N messages into N individual messages, each containing one message with the enveloping elements of the composite message. We use the term enveloping elements to refer to the elements above and including the split point. Note that elements which follow the split point in the original message (on the same level) aren't counted as enveloping elements. They will not be part of the resulting messages.
If you use a Splitter step in a local integration process, the following limitations apply:
● You can use Splitter and Gather steps together, but you must close each Splitter step with a Gather step.
● You cannot use a Splitter without a child element.
● If a Splitter is used in combination with Gather, the message returned to the main process is the message
at the end of the local process.
Procedure
Field Description
Expression Type: Specify the expression type you want to use to define the split point (the element in the message structure below which the message is to be split).
Note
The defined Stop on Exception handling takes priority over the end event defined in an exception subprocess with respect to continuing with the next split. This means that the next split is only executed if Stop on Exception is not set.
○ XPath
The splitter argument is specified by an XPath expression.
○ Line Break
If the input message is a non-XML file, it is split according to the line breaks in the input message. Empty lines in input messages are ignored and no empty messages are created.
(Enabled only if you select XPath in the Expression Type field) You can specify the absolute or relative path.
Note
Only the following types of XPath expressions are supported:
○ |
○ +
○ *
○ >
○ <
○ >=
○ <=
○ [
○ ]
○ @
Note
When addressing elements with namespaces in the Splitter step, you have to specify them in a specific way in the XPath expression, and the namespace declaration in the incoming message must follow a certain convention:
<root xmlns:n0="http://myspace.com"></root>
Caution
You cannot split by values of message elements. For example, for the message
<customerList>
<customers>
<customerNumber>0001</customerNumber>
<customerName>Paul Smith</customerName>
</customers>
<customers>
<customerNumber>0002</customerNumber>
<customerName>Seena Kumar</customerName>
</customers>
</customerList>
the following expression is not supported:
/customerList/customers/customerNumber=0001
Note
Additional instructions for the usage of XPath expressions for specific cases are provided under Using XPath Expressions in the Splitter Step [page 726].
Note
Note the following behavior if the specified XPath expression does not match the inbound payload: if no split point is found for the specified XPath, no outbound payload will be generated from the Splitter step and the integration flow will end with status Completed.
Grouping: The size of the groups into which the composite message is to be split.
Parallel Processing: Select this checkbox if you want to enable (parallel) processing of all the split messages at once.
Number of Concurrent Processes (Enabled only if Parallel Processing is selected): If you have selected Parallel Processing, the split messages are processed concurrently in threads. Define how many concurrent processes to use in the splitter. The default is 10. The maximum value allowed is 50.
Timeout (in s) (Enabled only if Parallel Processing is selected): Maximum time in seconds that the system will wait for processing to complete before it is aborted. The default value is 300 seconds.
Caution
Note that after the specified timeout the splitter processing stops without an exception.
Note
If Stop on Exception is set, the exception subprocess has no effect on the exception. The processing is stopped, and the message is set to Failed.
Next Steps
When a message is split (as configured in a Splitter step of an integration flow), the Camel headers listed below are generated every time the runtime finishes splitting an Exchange. You have several options for accessing these Camel headers at runtime.
● CamelSplitIndex
Provides a counter for split items that increases for each Exchange that is split (starts from 0).
● CamelSplitSize
Provides the total number of split items (if you are using stream-based splitting, this header is only
provided for the last item, in other words, for the completed Exchange).
● CamelSplitComplete
Indicates whether an Exchange is the last split.
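The three headers can be illustrated with a small Python model. This is not CPI code; the function is a hypothetical stand-in for what the runtime attaches to each split Exchange.

```python
def split_with_headers(items):
    """Sketch of the Camel headers set per split: CamelSplitIndex (0-based
    counter), CamelSplitSize (total number of split items), and
    CamelSplitComplete (True only for the last split)."""
    size = len(items)
    return [
        {"body": item,
         "CamelSplitIndex": i,
         "CamelSplitSize": size,
         "CamelSplitComplete": i == size - 1}
        for i, item in enumerate(items)
    ]

exchanges = split_with_headers(["a", "b", "c"])
print(exchanges[0]["CamelSplitIndex"])      # 0
print(exchanges[-1]["CamelSplitComplete"])  # True
```

Note that with stream-based splitting, the real runtime only provides CamelSplitSize for the last item; this sketch sets it on every split for simplicity.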
Related Information
Relative paths are only supported if the split point is on the same level and under the same root path.
If a relative path points to an element that appears under different parent elements in the inbound message, the General Splitter works in the following way.
In this case, the addressed element always occurs under the same parent element.
Sample Code
<root>
<shop>
<customerReview>
<id>3</id>
<id>4</id>
</customerReview>
</shop>
</root>
Specifying the relative XPath expression //id in the General Splitter leads to the following result messages:
Sample Code
<root>
<shop>
<customerReview>
<id>3</id>
</customerReview>
</shop>
</root>
<root>
<shop>
<customerReview>
<id>4</id>
</customerReview>
</shop>
</root>
In this case, the same element occurs under different parent elements in the inbound message.
In the example inbound message, element id occurs under two different parent elements (products and
customerReview):
Sample Code
<root>
<shop>
<product>
<id>1</id>
<id>2</id>
</product>
<customerReview>
<id>3</id>
<id>4</id>
</customerReview>
</shop>
</root>
Sample Code
<root>
<shop>
<product>
<id>1</id>
<id>2</id>
</product>
<customerReview>
<id>3</id>
</customerReview>
</shop>
</root>
Sample Code
<root>
<shop>
<product>
<id>1</id>
<id>2</id>
</product>
<customerReview>
<id>4</id>
</customerReview>
</shop>
</root>
The resulting messages are split according to the element that is addressed together with its parent element in the relative XPath (//customerReview/id).
However, if you specify the relative XPath expression //id in the General Splitter, you get four result messages:
Sample Code
<root>
<shop>
<product>
<id>1</id>
</product>
</shop>
</root>
Sample Code
<root>
<shop>
<product>
<id>2</id>
</product>
</shop>
</root>
<root>
<shop>
<product>
<id>3</id>
</product>
</shop>
</root>
Sample Code
<root>
<shop>
<product>
<id>4</id>
</product>
</shop>
</root>
In this case, the message is split according to the addressed element, but in certain result messages the split point occurs under an unexpected parent element: although the inbound message parts <id>3</id> and <id>4</id> are under the parent element customerReview, the General Splitter puts them under the parent element product. This result might not correspond to what you expect.
Prerequisites
● You have logged in to your customer workspace in the SAP Cloud Platform Integration web tooling.
● You are editing the integration flow in the editor.
Context
You perform this task when you have to specify conditions based on which the messages are routed to a
receiver or an interface during runtime. If the message contains XML payload, you form expressions using the
XPath-supported operators. If the message contains non-XML payload, you form expressions using the
operators shown in the table below:
= ${header.SenderId} = '1'
!= ${header.SenderId} != '1'
in ${header.SenderId} in '1,2'
Example
A condition with expression ${header.SenderId} regex '1.*' routes all the messages that have a
Sender ID starting with 1.
Note
● You can define a condition based on a property or an exception that may occur.
● If the condition ${property.SenderId} = '1' is true, then the Router routes the message along the route defined for the sender whose Sender ID is 1.
● If the condition ${exception.message} contains 'java.lang.Exception' is true, then the Router routes the message to a particular receiver; otherwise it routes it to another receiver.
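The regex condition from the example above can be illustrated in Python. This is an illustrative sketch, not CPI code; the receiver names are hypothetical, and Python's re.fullmatch stands in for the regex operator's full-match semantics.

```python
import re

def route(headers):
    """Sketch of a non-XML router condition: a condition like
    ${header.SenderId} regex '1.*' matches every message whose
    SenderId starts with '1'."""
    sender_id = headers.get("SenderId", "")
    if re.fullmatch(r"1.*", sender_id):
        return "Receiver_A"     # route whose condition matched (hypothetical name)
    return "Default_Route"      # default route (hypothetical name)

print(route({"SenderId": "123"}))  # Receiver_A
print(route({"SenderId": "42"}))   # Default_Route
```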
Procedure
If you select Throw Exception, the default route must terminate with Terminate Message.
Recommendation
We recommend that you ensure that the routing branches of a router are configured with the same
type of condition, either XML or non-XML, and not a combination of both. At runtime, the specified
conditions are executed in the same order. If the conditions are a combination of both non-XML
and XML type, the evaluation fails.
e. If you want to set the selected route as the default, so that its associated receiver handles the error
situation if no receiver is found, select the Default Route option.
Note
Only the route that you last selected as Default Route is considered the default route.
Prerequisites
● You have logged in to your customer workspace in the SAP Cloud Platform Integration web tooling.
● You are editing the integration flow in the editor.
Context
The Iterating Splitter splits a composite message into a series of smaller messages without copying the
enveloping elements of the composite message.
If you use a Splitter step in a local integration process, the following limitations apply:
● You can use Splitter and Gather steps together, but you must close each Splitter step with a Gather step.
● You cannot use a Splitter without a child element.
● If a Splitter is used in combination with Gather, the message returned to the main process is the message
at the end of the local process.
Procedure
Field Description
Expression Type: Specify the expression type you want to use to define the split point (the element in the message structure below which the message is to be split).
Note
The defined Stop on Exception handling takes priority over the end event defined in an exception subprocess with respect to continuing with the next split. This means that the next split is only executed if Stop on Exception is not set.
○ XPath
The splitter argument is specified by an XPath expression.
○ Token
The splitter argument is specified by a keyword.
○ Line Break
If the input message is a non-XML file, it is split according to the line breaks in the input message. Empty lines in input messages are ignored and no empty messages are created.
Token (Enabled only if you select Token in the Expression Type field): The keyword or token to be used as a reference for splitting the composite message.
(Enabled only if you select XPath in the Expression Type field) You can specify the absolute or relative path.
Note
Only the following types of XPath expressions are supported:
○ |
○ +
○ *
○ >
○ <
○ >=
○ <=
○ [
○ ]
○ @
Note
When addressing elements with namespaces in the Splitter step, you have to specify them in a specific way in the XPath expression, and the namespace declaration in the incoming message must follow a certain convention:
<root xmlns:n0="http://myspace.com"></root>
Caution
You cannot split by values of message elements. For example, for the message
<customerList>
<customers>
<customerNumber>0001</customerNumber>
<customerName>Paul Smith</customerName>
</customers>
<customers>
<customerNumber>0002</customerNumber>
<customerName>Seena Kumar</customerName>
</customers>
</customerList>
the following expression is not supported:
/customerList/customers/customerNumber=0001
Grouping: The size of the groups into which the composite message is to be split.
Parallel Processing: Select this checkbox if you want to enable processing of all the split messages at once.
Number of Concurrent Processes (Enabled only if Parallel Processing is selected): If you have selected Parallel Processing, the split messages are processed concurrently in threads. Define how many concurrent processes to use in the splitter. The default is 10. The maximum value allowed is 50.
Timeout (in s) (Enabled only if Parallel Processing is selected): Maximum time in seconds that the system will wait for processing to complete before it is aborted. The default value is 300 seconds.
Caution
Note that after the specified timeout the splitter processing stops without an exception.
Note
If Stop on Exception is set, the exception sub-process has no effect on the exception. The processing is stopped, and the message is set to Failed.
Next Steps
When a message is split (as configured in a Splitter step of an integration flow), the Camel headers listed below are generated every time the runtime finishes splitting an Exchange. You have several options for accessing these Camel headers at runtime. For example, suppose that you are configuring an integration flow with a Splitter step before an SFTP receiver adapter. If you enter the string split_${exchangeId}_Index${header.CamelSplitIndex} for File Name, the file name of the generated file on the SFTP server contains the Camel header CamelSplitIndex — in other words, the index of the split Exchange induced by the Splitter step.
● CamelSplitIndex
Provides a counter for split items that increases for each Exchange that is split (starts from 0).
● CamelSplitSize
Provides the total number of split items (if you are using stream-based splitting, this header is only
provided for the last item, in other words, for the completed Exchange).
● CamelSplitComplete
Indicates whether an Exchange is the last split.
Prerequisites
● You have logged into your customer workspace in SAP Cloud Platform Integration web tooling.
● You are editing the integration flow in the editor.
Context
You use a PKCS#7/CMS Splitter if you want to break down a PKCS7 Signed Data message into two files: one
containing the signature and one containing the content.
Procedure
Field Description
Payload File Name: Name of the file that will contain the payload after the splitting step.
Signature File Name: Name of the file (extension .sig) that will contain the signature after the splitting step.
Wrap by Content Info: Select this option if you want to wrap PKCS#7 signed data containing the signature into PKCS#7 content.
Payload First: Select this option if you want the payload to be the first message returned.
BASE64 Payload: Select this option if you want to encode the payload with the base64 encoding scheme after splitting.
BASE64 Signature: Select this option if you want to encode the signature using the base64 encoding scheme after splitting.
Prerequisites
You have logged in to SAP Cloud Platform Integration and are editing the integration flow.
Context
You can use the Multicast step to send copies of the same message to multiple routes. You can send copies to all routes at once using Parallel Multicast, or in an order of your choice using Sequential Multicast. This allows you to perform multiple operations on the same message in a single integration process. Without Multicast, you would need multiple integration processes to perform this task.
Procedure
Prerequisites
Remember
This component or some of its features might not be available in the Cloud Foundry environment. For more
information on the limitations, see SAP Note 2752867 .
Context
You use the EDI splitter to split inbound bulk EDI messages, and configure the splitter to validate and
acknowledge the inbound messages. If you choose to acknowledge the EDI message, then the splitter
transmits a functional acknowledgement after processing the bulk EDI message. A bulk EDI message can
contain one or more EDI formats, such as EDIFACT, EANCOM, and ASC-X12. You can configure the EDI splitter
to process different EDI formats depending on the business requirements of the trading partners.
Field Description
General
Processing
Timeout (in sec): Set the time limit in seconds for the EDI splitter to process individual split messages. If there are any processes still pending once the time has elapsed, the splitter terminates the processes and updates the MPL status.
EDIFACT
Example
Consider a scenario where you receive a bulk EDI message containing five purchase orders. In Interchange mode, if a single EDI message fails, the entire interchange is rejected. However, in Message mode, if a single EDI message fails, only the invalid message is rejected and the valid messages are dispatched for further processing.
Note
During runtime, only XSDs from Integration Content Advisor (ICA) are supported.
Process Invalid Messages: This feature is available only for the Message option in transaction mode. If you select this option, you must use a router after the splitter to process the split messages.
Note
This header name is fetched from the Camel header. The header is added in a script element, which is placed before the converter element. You can add a value for this header in the script element.
Note
Configure a router and apply the routing condition ${header.SAP_EDI_ACKNOWLEDGEMENT} = 'true' to route the functional acknowledgement.
Note
In case of a rules violation, you see the acknowledgement in a specific format. Here's how the acknowledgement is formatted:
Include UNA Segment: The trading partner uses the UNA segment in the CONTRL message to define special characters, such as separators and indicators. This option enables the splitter to include special characters in the CONTRL message. If not selected, the UNA segment is not included in the CONTRL message.
X12
Note
○ During runtime, only XSDs from Integration Content Advisor (ICA) are supported.
○ If you wish to remove an XSD file from the project, select the relevant XSD file and choose Remove.
Note
This header name is fetched from the Camel header. The header is added in a script element, which is placed before the converter element. You can add a value for this header in the script element.
Note
Configure a router and apply the routing condition ${header.EDI_ACKNOWLEDGEMENT} = 'true' to process the functional acknowledgement.
Exclude AK3 and AK4: Notifies the splitter to exclude the AK3 and AK4 segments from the functional acknowledgement message. However, it retains the details of the AK1, AK2, AK5, and AK9 segments in the functional acknowledgement.
The error codes for UN-EDIFACT interchange and message levels are given below:
Interchange Level
Note
If you select Interchange transaction mode, the splitter treats the entire EDI interchange as a single
entity, and includes interchange errors in the acknowledgement.
Message Level
8 Invalid date.
The following segments are part of the message payload of EANCOM, and the table below lists the headers and values for the given payload.
UNB+UNOC:3+4006501000002:14+5790000016839:14+100818:0028+0650+++++XXXXX'
UNH+1+INVOIC:D:96A:EN:EAN008'
SAP_EDI_Document_Standard EANCOM
SAP_EDI_Sender_ID 4006501000002
SAP_EDI_Sender_ID_Qualifier 14
SAP_EDI_Receiver_ID 5790000016839
SAP_EDI_Receiver_ID_Qualifier 14
SAP_EDI_Interchange_Control_Number 0650
SAP_EDI_Message_Type INVOIC
SAP_EDI_Message_Version D
SAP_EDI_Message_Release 96A
SAP_EDI_Message_Controlling_Agency EN
SAP_EDI_Message_Association_Assign_Code EAN008
The element creates Camel headers and populates them with the respective extracted values. For example, if the following segments are part of the message payload of EDIFACT, then the respective headers and values are given in the table below.
UNB+UNOC:3+4006501000002:14+5790000016839:14+100818:0028+0650+++++XXXXX'
UNH+1+INVOIC:D:96A:UN'
SAP_EDI_Document_Standard UN-EDIFACT
SAP_EDI_Sender_ID 4006501000002
SAP_EDI_Sender_ID_Qualifier 14
SAP_EDI_Receiver_ID 5790000016839
SAP_EDI_Receiver_ID_Qualifier 14
SAP_EDI_Interchange_Control_Number 0650
SAP_EDI_Message_Type INVOIC
SAP_EDI_Message_Version D
SAP_EDI_Message_Release 96A
SAP_EDI_Message_Controlling_Agency UN
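The header extraction shown in the tables above can be sketched for the UNB segment. This is illustrative Python, not the actual splitter implementation; the field positions follow the example payload above ('+' as element separator, ':' as component separator), and the function name is ours.

```python
def parse_unb(segment):
    """Sketch of deriving sender/receiver headers from an EDIFACT UNB segment."""
    elements = segment.rstrip("'").split("+")
    sender_id, sender_qual = elements[2].split(":")
    receiver_id, receiver_qual = elements[3].split(":")
    return {
        "SAP_EDI_Sender_ID": sender_id,
        "SAP_EDI_Sender_ID_Qualifier": sender_qual,
        "SAP_EDI_Receiver_ID": receiver_id,
        "SAP_EDI_Receiver_ID_Qualifier": receiver_qual,
        "SAP_EDI_Interchange_Control_Number": elements[5],
    }

unb = "UNB+UNOC:3+4006501000002:14+5790000016839:14+100818:0028+0650+++++XXXXX'"
print(parse_unb(unb)["SAP_EDI_Sender_ID"])  # 4006501000002
```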
If the following segments are part of the message payload of ASC-X12, then the respective headers and values are given in the table below.
GS*IN*GSRESNDR*GSRERCVR*20030709*0816*12345*X*004010~
810*0001~
SAP_EDI_Document_Standard ASC-X12
SAP_EDI_Sender_ID WWRESNDR
SAP_EDI_Sender_ID_Qualifier ZZ
SAP_EDI_Receiver_ID WWRERCVR
SAP_EDI_Receiver_ID_Qualifier ZZ
SAP_EDI_Interchange_Control_Number 000046668
SAP_EDI_Message_Type 810
SAP_EDI_Message_Version 004010
SAP_EDI_GS_Sender_ID GSRESNDR
SAP_EDI_Receiver_ID GSRERCVR
SAP_EDI_GS_Control_Number 12345
SAP_GS_Functional_Id_Code IN
SAP_GS_Responsible_Agency_Code X
SAP_ISA_Acknowledgment_Requested 0
SAP_ISA_Auth_Information_Qualifier 00
SAP_ISA_Control_Standards_Identifier
SAP_ISA_Security_Information_Qualifier 00
SAP_ISA_Usage_Indicator P
SAP_ISA_Version_Number 004010
SAP_MessageProcessingLogID
SAP_ST_Control_Number
The two splitter types General Splitter and Iterative Splitter behave differently in their handling of the
enveloping elements of the input message.
The following figures illustrate the behavior of both splitter types. In both cases an input message comprising a
dedicated number of items is split into individual messages.
General Splitter
The General Splitter splits a composite message comprising N messages into N individual messages, each containing one message with the enveloping elements of the composite message. We use the term enveloping elements to refer to the elements above and including the split point. Note that elements which follow the split point in the original message (on the same level) aren't counted as enveloping elements. They will not be part of the resulting messages.
Caution
Note that if there are elements in the original message that follow the one indicated as the split point (on the same level), the General Splitter generates result messages where these elements are missing. In the following example (for the sake of simplicity with only two instead of three split messages), the split point is set to element C, which is followed by element E. As shown in the figure, element E is missing in each result message.
Iterating Splitter
The Iterating Splitter splits a composite message into a series of messages without copying the enveloping
elements of the composite message.
Related Information
If you expect a non-XML file as the input message and, as a result, select Line Break in the Expression Type field, the General Splitter and Iterative Splitter process the message differently, as shown in the following example.
Sample Code
The General Splitter would transform this into the following messages:
The Iterative Splitter would transform this into the following messages:
Assume you expect an XML message and want to address elements with Expression Type set to XPath. This is an example of an input message representing items ordered by a certain customer.
<customer xmlns="http://myCustomer.com">
<customerNumber>0001</customerNumber>
<customerName>Paul Smith</customerName>
<order id="100">
<items>
<item>Shirt</item>
<item>Shoe</item>
</items>
</order>
<order id="101">
<items>
<item>Shoe</item>
<item>Car</item>
<item>Watch</item>
</items>
</order>
</customer>
To split this message, you need to define a namespace mapping (of a prefix and a namespace) for the
integration flow (under Runtime Configuration), for example: xmlns:n1=http://myCustomer.com.
When configuring the General Splitter and addressing the message element with an XPath expression, make sure that you refer to the namespace prefix.
/n1:customer/n1:order
Namespaces are inherited from parent to child element. This is why you need to indicate the namespace
explicitly in the XPath expression.
Then the message is split with the General Splitter into the following two chunks:
<customer xmlns="http://myCustomer.com">
<customerNumber>0001</customerNumber>
<customerName>Paul Smith</customerName>
<order id="100">
<items>
<item>Shirt</item>
<item>Shoe</item>
</items>
</order>
</customer>
<customer xmlns="http://myCustomer.com">
<customerNumber>0001</customerNumber>
<customerName>Paul Smith</customerName>
<order id="101">
<items>
<item>Shoe</item>
<item>Car</item>
<item>Watch</item>
</items>
</order>
</customer>
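To make the General Splitter behavior concrete, here is a minimal Python sketch (illustrative only, not the CPI implementation) that reproduces the example above: each result message keeps the enveloping elements and exactly one order. The prefix n1 is mapped to the same namespace as in the Runtime Configuration step.

```python
import copy
import xml.etree.ElementTree as ET

NS = {"n1": "http://myCustomer.com"}

SOURCE = """<customer xmlns="http://myCustomer.com">
<customerNumber>0001</customerNumber>
<customerName>Paul Smith</customerName>
<order id="100"><items><item>Shirt</item><item>Shoe</item></items></order>
<order id="101"><items><item>Shoe</item><item>Car</item><item>Watch</item></items></order>
</customer>"""

def general_split(xml_text, split_path):
    """One result message per split element; the enveloping elements are copied."""
    root = ET.fromstring(xml_text)
    chunks = []
    for order in root.findall(split_path, NS):
        envelope = copy.deepcopy(root)
        for candidate in envelope.findall(split_path, NS):
            if candidate.get("id") != order.get("id"):
                envelope.remove(candidate)   # keep only the current order
        chunks.append(ET.tostring(envelope, encoding="unicode"))
    return chunks

chunks = general_split(SOURCE, "n1:order")
```

Each of the two chunks contains the customer envelope (number and name) plus a single order, matching the two result documents shown above.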
The following is an example of an input message representing orders from different customers.
<customerList xmlns="http://myCustomer.com">
<customers>
<customerNumber>0001</customerNumber>
<customerName>Paul Smith</customerName>
</customers>
<customers>
<customerNumber>0002</customerNumber>
<customerName>Seena Kumar</customerName>
</customers>
</customerList>
To split this message, you need to define a namespace mapping (of a prefix and a namespace) for the
integration flow (under Runtime Configuration), for example: xmlns:n2=http://myCustomer.com.
/n2:customerList/n2:customers
Then the message is split with the Iterating Splitter into the following two chunks:
<customers>
<customerNumber>0001</customerNumber>
<customerName>Paul Smith</customerName>
</customers>
<customers>
<customerNumber>0002</customerNumber>
<customerName>Seena Kumar</customerName>
</customers>
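The contrast with the General Splitter can be sketched in the same way (again illustrative only): the Iterating Splitter emits each matched element as a standalone message and does not copy the enveloping customerList element.

```python
import xml.etree.ElementTree as ET

NS = {"n2": "http://myCustomer.com"}

SOURCE = """<customerList xmlns="http://myCustomer.com">
<customers><customerNumber>0001</customerNumber><customerName>Paul Smith</customerName></customers>
<customers><customerNumber>0002</customerNumber><customerName>Seena Kumar</customerName></customers>
</customerList>"""

def iterating_split(xml_text, split_path):
    """One result message per split element; the envelope is not copied."""
    root = ET.fromstring(xml_text)
    return [ET.tostring(el, encoding="unicode") for el in root.findall(split_path, NS)]

parts = iterating_split(SOURCE, "n2:customers")
```

Each part is a bare customers element, matching the two chunks shown above.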
Prerequisites
● You have accessed the customer workspace in SAP Cloud Platform Integration web application.
● You are editing the integration flow in the editor.
Context
The Gather step enables you to merge messages from more than one route in an integration process. You
define conditions based on the type of messages that you are gathering using the Gather step. You can choose
to gather:
The Join element enables you to bring together the messages from different routes before combining them into
a single message. You use this in combination with the Gather element. Join only brings together the messages
from different routes without affecting the content of messages.
Remember
● If you want to combine messages that are transmitted to more than one route by Multicast, you need to
use Join before using Gather.
● If you want to combine messages that are split using Splitter, you use only Gather.
Based on this, you choose the strategy to combine the messages:
● For XML messages of the same format, you can combine them without any conditions (multimapping format) or
specify the XPath to the node at which the messages have to be combined.
● For XML messages of different formats, you can only combine the messages.
● For plain text messages, you can only specify concatenation as the combine strategy.
● Specify a valid XPath expression that includes namespace prefixes if the incoming payload contains namespace
declarations, including default namespace declarations.
If your incoming payload contains namespace declarations, including a default namespace, ensure that you
specify the XPath with namespace prefixes. Also ensure that the namespace prefix mapping is defined in the
Namespace Mapping field of the integration flow's Runtime Configuration.
Sample Code
<root xmlns="http:defaultnamespace.com">
<f:table xmlns:f="http://www.w3schools.com/furniture">
<f:name>African Coffee Table</f:name>
<f:width>80</f:width>
<f:length>120</f:length>
</f:table>
<table>
<name>African Coffee Table</name>
<width>80</width>
<length>120</length>
</table>
</root>
f and d are the prefixes defined in the Namespace Mapping field of Runtime Configuration and mapped to the
namespaces http://www.w3schools.com/furniture and http:defaultnamespace.com respectively. Examples of
valid XPaths for the above XML are:
● //f:table
● /d:root/f:table
● /d:root/d:table
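The prefix-qualified XPaths above can be checked with a short Python sketch (illustrative only; the prefixes d and f map to the same namespaces as in the Namespace Mapping field). Note that the unprefixed table element lives in the default namespace, so it is addressed as d:table.

```python
import xml.etree.ElementTree as ET

NS = {"d": "http:defaultnamespace.com",
      "f": "http://www.w3schools.com/furniture"}

DOC = """<root xmlns="http:defaultnamespace.com">
<f:table xmlns:f="http://www.w3schools.com/furniture">
<f:name>African Coffee Table</f:name>
</f:table>
<table><name>African Coffee Table</name></table>
</root>"""

root = ET.fromstring(DOC)
f_tables = root.findall(".//f:table", NS)   # corresponds to //f:table
d_tables = root.findall("d:table", NS)      # corresponds to /d:root/d:table
```

Both expressions resolve to exactly one element each, because the two table elements are in different namespaces.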
Note
You can only edit the properties of the Gather step and the integration flow elements that support
editing. The modeling palette is not available for integration flows containing a Gather step.
Procedure
Value Description
XML (Same Format) If messages from different routes are of the same format
XML (Different Format) If messages from different routes are of different formats
Plain Text If messages from different routes are in plain text format
Note
If you are using a mapping step to map the output of this strategy, you can have the source XSD on the
left-hand side (LHS) and specify the Occurrence as 0..Unbounded.
Combine at XPath: Combine the incoming messages at the specified XPath.
Combine from Source (XPath): XPath of the node that you are using as reference in the source message to
retrieve the information.
Note
If you are using a mapping step to map the output of this strategy, you can have the source XSD on the
left-hand side (LHS) and specify the Occurrence as 0..Unbounded.
This flow element does not work directly with Router. It is recommended to model the flow using Local
Integration Process.
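The combine strategies described above can be sketched as follows (a simplified illustration, not CPI internals): plain text messages are concatenated, while XML messages of the same format are merged by appending the children found at a given node from every route into the first message.

```python
import xml.etree.ElementTree as ET

def combine_plain_text(messages):
    """Plain text: concatenation is the only available combine strategy."""
    return "".join(messages)

def combine_xml_at_node(messages, node_tag):
    """XML (same format): merge the children of the given node from every
    message into the first message's node."""
    base = ET.fromstring(messages[0])
    target = base.find(node_tag)
    for other in messages[1:]:
        for child in list(ET.fromstring(other).find(node_tag)):
            target.append(child)
    return ET.tostring(base, encoding="unicode")

routes = ["<orders><items><item>Shirt</item></items></orders>",
          "<orders><items><item>Car</item></items></orders>"]
merged = combine_xml_at_node(routes, "items")
```

The merged document contains both item elements under a single items node, which is the kind of result the Gather step produces when combining same-format XML at an XPath.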
You can use the Aggregator step to combine multiple incoming messages into a single message.
Prerequisites
Caution
Usage of an Aggregator step in a Local Integration Process or Exception Subprocess is not supported.
Note
When you use the Aggregator step in combination with a polling SFTP sender adapter and you expect a
high message load, consider the following recommendation:
For the involved SFTP sender adapter, set the value for Maximum Messages per Poll (under Advanced
Parameters) to a small number larger than 0 (for example, 20). That way, you ensure proper message
processing status logging at runtime.
Procedure
Attribute Description
Aggregation Algorithm ○ Combine in Sequence
The aggregated message contains all correlated incoming
messages, and the original messages are put into a
sequence.
○ Combine
The aggregated message contains all correlated incoming
messages.
Message Sequence Expression (XPath)
(only if for Aggregation Algorithm the option Combine in
Sequence has been selected) Enter an XPath expression for the message element
based on which a sequence is defined. You can use only
numbers to define a sequence. Each sequence starts with 1.
Last Message Condition (XPath) Define the condition (XPath = value) to identify the
last message of an aggregate.
Note that the header attribute can only have one of the
following values:
○ timeout
Processing of the aggregate has been finished because
the configured Completion Timeout has been reached.
○ predicate
Processing of the aggregate has been finished because
the Completion Condition has been fulfilled.
Completion Timeout Defines the maximum time between two messages before
aggregation is stopped (period of inactivity).
Note that the header attribute can only have one of the
following values:
○ timeout
Processing of the aggregate has been finished because
the configured Completion Timeout has been reached.
○ predicate
Processing of the aggregate has been finished because
the Completion Condition has been fulfilled.
Data Store Name Enter the name of the transient data store where the
aggregated message is to be stored. The name should begin
with a letter and use characters (aA-zZ, 0-9, - _ . ~ ).
Note that only local data stores (that apply only to the
integration flow) can be used. Global data stores cannot be
used for this purpose.
Note
The Integration Operations feature provides a Data
Store Viewer that allows you to monitor your transient
data stores.
Next Steps
● CamelAggregatedCompletedBy
This header is relevant for use cases with message aggregation.
The header attribute can only have one of the following values:
○ timeout
Processing of the aggregate has been stopped because the configured Completion Timeout has been
reached.
○ predicate
Processing of the aggregate has finished because the Completion Condition has been met.
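The two completion outcomes can be sketched in a few lines of Python (an assumed, simplified model, not the Camel aggregator itself): an aggregate either completes because the last-message condition matched (predicate) or because the inactivity timeout fired, and the outcome is what ends up in CamelAggregatedCompletedBy.

```python
def aggregate(incoming, is_last):
    """Collect correlated messages and report how the aggregate completed.
    `incoming` yields messages; None stands for the inactivity timeout firing."""
    collected = []
    for msg in incoming:
        if msg is None:               # Completion Timeout reached
            return collected, "timeout"
        collected.append(msg)
        if is_last(msg):              # Last Message Condition fulfilled
            return collected, "predicate"
    return collected, "timeout"

msgs, completed_by = aggregate(iter(["m1", "m2", "LAST"]), lambda m: m == "LAST")
```

With the sample stream above the aggregate completes by predicate; if the stream went quiet before a last message arrived, the same aggregate would complete by timeout.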
Here's a list of the message transformers available in SAP Cloud Platform Integration and links to their
documentation.
EDI Extractor
Prerequisites
Context
The CSV to XML converter converts content in CSV format into XML format.
Procedure
Field Description
XML Schema Choose Browse. Select the XML schema you want to use.
Path to Target Element in XSD XPath in the XML schema file where the content from the
CSV file has to be placed.
Record Marker in CSV The record in the CSV file that indicates the entry which the
converter has to consider as the starting point of the
content.
Note
If you do not provide a value for this field, all the records
in the CSV file are considered for conversion.
Field Separator in CSV Select the character from the dropdown list that is used as
the field separator in the CSV file.
Tip
If you want to use a field separator that is not available
in the dropdown list, manually enter the character.
Exclude First Line Header Select this checkbox if you want to exclude the header
information in the first line of the CSV file from conversion.
Note
If the checkbox is not selected, then the attributes of
the CSV file are mapped according to the order of
occurrence in the XSD.
4. Once the option Exclude First Line Header is enabled, you get the option to order the XML elements in the
output based on the CSV field sequence or the XSD elements.
Example
Sample Code
name, city, id
name1,city1,id1
name2,city2,id2
Sample Code
<EmployeeList>
<Employee>
<id>id1</id>
<name>name1</name>
<city>city1</city>
</Employee>
<Employee>
<id>id2</id>
<name>name2</name>
<city>city2</city>
</Employee>
</EmployeeList>
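The first example above (header line present, elements emitted in XSD order rather than CSV order) can be sketched in Python. This is illustrative only; the XSD_ORDER list stands in for the element order that the converter reads from the XML schema.

```python
import csv
import io
import xml.etree.ElementTree as ET

CSV_DATA = "name,city,id\nname1,city1,id1\nname2,city2,id2\n"
XSD_ORDER = ["id", "name", "city"]   # element order taken from the schema

def csv_to_xml(text):
    rows = list(csv.reader(io.StringIO(text)))
    header = [h.strip() for h in rows[0]]
    root = ET.Element("EmployeeList")
    for row in rows[1:]:
        record = dict(zip(header, row))
        emp = ET.SubElement(root, "Employee")
        for field in XSD_ORDER:          # emit in XSD order, not CSV order
            ET.SubElement(emp, field).text = record[field]
    return ET.tostring(root, encoding="unicode")

xml_out = csv_to_xml(CSV_DATA)
```

Because the header names are matched against the schema, id comes first in the output even though it is last in the CSV, which is exactly the reordering shown in the sample output above.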
Example
Sample Code
name,city,id
name1,city1,id1
name2,city2,id2
Output
Sample Code
<EmployeeList>
<Employee>
<id>name1</id>
<name>city1</name>
<city>id1</city>
</Employee>
<Employee>
<id>name2</id>
<name>city2</name>
<city>id2</city>
</Employee>
</EmployeeList>
Prerequisites
Context
The XML to CSV converter converts content in XML format into CSV format.
Note that for the schema of the source XML document, only simple types are supported.
Procedure
Field Description
Path to Source Element in XSD Path to the source element in the XSD file
Field Separator in CSV Select the character that you want to use as the field
separator in the CSV file from the dropdown list
Tip
If you want to use a field separator that is not available
in the dropdown list, manually enter the character.
Include Field Name as Headers Select this checkbox if you want to use the field names as
the headers in the CSV file
Include Parent Element Select this checkbox if you want to include the parent
element of the XML file in the CSV file
Include Attribute Values Select this checkbox if you want to include attribute
values in the CSV file
Note
○ The CSV format will contain the values in the same order of occurrence of the header and row tags in
the XML.
○ When optional fields exist in the XML, the ordering is maintained in the order in which a tag is
encountered.
○ If there are namespaces declared in the schema file, the same have to be declared at the integration
flow level, and the namespace prefixes can be used in the XPath expression.
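The reverse direction can be sketched as well (illustrative only, simple types without attributes): each record element is flattened to one CSV row, with the field names optionally emitted as a header line, mirroring the Include Field Name as Headers option.

```python
import xml.etree.ElementTree as ET

DOC = """<EmployeeList>
<Employee><id>id1</id><name>name1</name></Employee>
<Employee><id>id2</id><name>name2</name></Employee>
</EmployeeList>"""

def xml_to_csv(xml_text, record_tag, sep=",", header=True):
    root = ET.fromstring(xml_text)
    records = root.findall(record_tag)
    fields = [child.tag for child in records[0]]   # order of occurrence
    lines = [sep.join(fields)] if header else []
    for rec in records:
        lines.append(sep.join(rec.findtext(f, "") for f in fields))
    return "\n".join(lines)

csv_out = xml_to_csv(DOC, "Employee")
```

The field order follows the order of occurrence of the tags, consistent with the first note above.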
The JSON to XML converter enables you to transform messages in JSON format to XML format.
Prerequisites
● You are familiar with the conversion rules for JSON to XML conversion.
For more information see Conversion Rules for JSON to XML Conversion [page 775].
● You are familiar with the limitations for JSON to XML conversion.
For more information see Limitations for JSON to XML Conversion [page 775].
Procedure
1. Launch SAP Cloud Platform Integration web application by accessing the URL provided by SAP.
2. Choose the Design tab.
3. Select the required integration package.
4. In the Package content view, choose the integration flow you want to edit.
5. In the Integration Package Editor page, choose Edit.
6. In the palette, choose Transformation > Converter > JSON to XML Converter.
7. Insert the converter at the desired position in your integration process.
The properties tab page for the JSON to XML Converter opens.
8. Define the parameters for your conversion, see the table below.
Processing
JSON Prefix (only if the option Use Namespace Mapping is selected): Enter the mapping of the JSON prefix
to the XML namespace. The JSON namespace/prefix must begin with a letter and can contain aA-zZ and 0-9.
Name (only if the option Add XML Root Element is selected): Enter the name of the XML root element. The
name must comply with the NCName rules.
Namespace Mapping (only if the option Add XML Root Element is selected): Enter the namespace of the XML
root element that you have configured in the integration flow.
9. If you want to continue editing the integration package without exiting, choose Save.
10. If you want to retain your changes as a variant, choose Save as version to retain a copy of the current
artifact.
11. If you want to discard your changes, choose Cancel before saving them.
Related Information
The conversion from JSON format to XML format follows these rules:
● The value of a complex element (having attributes for example) is represented by a separate member
"$":"value".
● An element with multiple child elements of the same name is represented by an array. This also holds if
between the children with the same name other children with another name reside.
Example: <root><childA>A1</childA><childB>B</childB><childA>A2</childA></root> is
transformed in the non-streaming case to {"root":{"childA":["A1","A2"],"childB":"B"}},
which means that the order of the children is not preserved.
In the streaming case, the result is: {"root":{"childA":["A1"],"childB":"B","childA":
["A2"]}}, which means that an invalid JSON document is created because the member "childA"
appears twice.
● An element with no characters or child elements is represented by "element" : ""
● An attribute is represented as JSON member whose name is the concatenation of '@' , the JSON prefix
corresponding to the XML namespace, JSON delimiter, and the attribute name. If the attribute has no
namespace, no prefix and JSON delimiter are added to the JSON name.
● An element is represented as JSON member whose name is the concatenation of JSON prefix
corresponding to the XML namespace, JSON delimiter, and the element name. If the element has no
namespace, no prefix and JSON delimiter is added to the JSON name.
● A member with JSON null value is transformed to empty element; example: "C":null is converted to
<C/>.
● Conversion of "@attr":null to XML is not supported (you get a NullPointerException; since cluster
version 1.21, a JsonXmlException).
● The result XML document is encoded in UTF-8 and gets the XML header "<?xml version='1.0'
encoding='UTF-8'?>".
● The content of the XML-Namespace-to-JSON-Prefix map is transformed to namespace prefix declarations
on the root element.
● If for a JSON prefix no XML Namespace is defined, then the full member name with the prefix and JSON
delimiter is set as element name: "p.A" leads to <p.A>.
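Two of the rules above can be illustrated with a short Python sketch (illustrative only; the prefix-to-namespace map and the '.' delimiter are assumptions standing in for the converter's configuration): a JSON null becomes an empty element, and a prefixed member name is mapped back to a namespace-qualified element name.

```python
import json
import xml.etree.ElementTree as ET

# assumed prefix-to-namespace map, JSON delimiter '.'
PREFIX_TO_NS = {"p": "http://example.com/p"}

def member_to_element(name, value):
    """Build an XML element from one JSON member, per the rules above."""
    prefix, _, local = name.partition(".")
    if local and prefix in PREFIX_TO_NS:
        el = ET.Element("{%s}%s" % (PREFIX_TO_NS[prefix], local))
    else:
        el = ET.Element(name)       # no mapping: full member name as element
    if value is not None:           # "C": null  ->  empty element <C/>
        el.text = str(value)
    return el

c = member_to_element("C", json.loads('{"C": null}')["C"])
pa = member_to_element("p.A", "v")
```

Member "C" with a null value yields an empty C element, while "p.A" becomes an element A in the mapped namespace.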
Related Information
Tip
If you need to convert a JSON file that doesn't fulfil this requirement, you can do the following:
Add a content modifier before the JSON to XML converter that changes the message body. In the entry
field in tab Message Body, enter:
{"root": ${in.body}}
● If no XML namespace is defined for a JSON prefix, the full member name with the prefix and JSON
delimiter is set as the element name: "p:A" -> <p:A>.
Related Information
Whether or not you select the option Use Namespace Mapping leads to different transformation results.
{"abc:A":{"xyz:B":{"@xyz:attr1":"1","@attr2":"2","$":"valueB"},"C":
["valueC1","valueC2"],"D":"","E":"valueE"}}
abc http://com.sap/abc
xyz http://com.sap/xyz
No Namespace Mapping
{"abc.A":{"xyz.B":{"@xyz.attr1":"1","@attr2":"2","$":"valueB"},"C":
["valueC1","valueC2"],"D":"","E":"valueE"}}
The XML to JSON converter enables you to transform messages in XML format to JSON format.
Prerequisites
● You are familiar with the conversion rules for XML to JSON conversion.
For more information see Conversion Rules for XML to JSON Conversion [page 229].
● You are familiar with the limitations for XML to JSON conversion.
For more information see Limitations for XML-to-JSON Conversion [page 230].
● You are familiar with streaming for XML to JSON conversion.
For more information see How Streaming in the XML-to-JSON Converter Works [page 231].
Procedure
1. Launch the SAP Cloud Platform Integration web application by accessing the URL provided by SAP.
2. Choose the Design tab.
3. Select the required integration package.
4. In the Package content view, choose the integration flow you want to edit.
5. In the Integration Package Editor page, choose Edit.
Option Description
JSON Output Encoding Enter the JSON output encoding. The default value is from
header or property. If you select from header or property, the converter tries
to read the encoding from the message header or the exchange property
CamelCharsetName. If there is no value defined, UTF-8 is used.
XML Namespace (only if the option Namespace Mapping is selected)
JSON Prefix Separator (only if the option Namespace Mapping is selected) Enter the JSON prefix separator
to be used to separate the JSON prefix from the local part. The value used must
not be used in the JSON prefix or local name.
Suppress JSON Root Element Choose this option to create the JSON message without
the root element tag.
Suppress JSON Root Element Choose this option to create the JSON message without
the root element tag.
With streaming you can specify whether the whole XML document or only specified XML elements are to
be presented by JSON arrays.
Note
A JSON object is an unordered set of name/value
pairs that begins with { and ends with }. Each name
is followed by : and the name/value pairs are
separated by ,.
10. If you want to continue editing the integration package without exiting, choose Save.
11. If you want to retain your changes as a variant, choose Save as version to retain a copy of the current
artifact.
12. If you want to discard your changes, choose Cancel before saving them.
To ensure a successful conversion from XML format to JSON format, familiarize yourself with the
conversion rules.
The conversion from XML format to JSON format follows these rules:
● An element is represented as JSON member whose name is the concatenation of JSON prefix
corresponding to the XML namespace, JSON delimiter, and the element name. If the element has no
namespace, no prefix and JSON delimiter is added to the JSON name.
● An attribute is represented as JSON member whose name is the concatenation of '@' , the JSON prefix
corresponding to the XML namespace, JSON delimiter, and the attribute name. If the attribute has no
namespace, no prefix and JSON delimiter are added to the JSON name.
● An element with no characters or child elements is represented by "element" : ""
● An element with multiple child elements of the same name is represented by an array. This also holds if
between the children with the same name other children with another name reside.
Example: <root><childA>A1</childA><childB>B</childB><childA>A2</childA></root> is
transformed in the non-streaming case to {"root":{"childA":["A1","A2"],"childB":"B"}},
which means that the order of the children is not preserved.
In the streaming case, the result is: {"root":{"childA":["A1"],"childB":"B","childA":
["A2"]}}, which means that an invalid JSON document is created because the member "childA"
appears twice.
● The value of a complex element (having attributes for example) is represented by a separate member
"$":"value".
● Elements with mixed content (for example, <A>mixed1_value<B>valueB</B>mixed2_value</A>) are
not supported. Currently you get wrong results: {"A":
{"B":"valueB","$":"mixed1_valuemixed2_value"}} in the non-streaming case, or
{"A":{"$":"mixed1_value","B":"valueB","$":"mixed2_value"}} in the streaming case.
● All element/attribute values are transformed to JSON string.
● No namespace declaration is written into the JSON document.
● Tabs, spaces, new lines between elements and attributes are ignored. However, a white space value of an
element with simple type is represented in JSON; example <A> </A> is represented by "A":" ".
● If you have an element with namespace but without XML prefix whose namespace is not contained in the
XML-namespace-to-Json-prefix map, then you get an exception: <A xmlns="http://test"/> leads to
IllegalStateException Invalid JSON namespace: http://test.
This is not the case if you choose the streaming option. With streaming the namespace is just ignored: <A
xmlns="http://test"/>v</A> leads to {"A":"v"}.
● If you have an element with a namespace and an XML prefix whose namespace is not contained in the XML-
namespace-to-JSON-prefix map, then the XML prefix is used as the JSON prefix: <ns:A xmlns:ns="http://
test"/> leads to "ns.A":"" (if the JSON delimiter is '.').
Related Information
To ensure a successful conversion from XML to JSON format, you have to know the limitations of this
conversion.
● The XML element and attribute names must not contain any delimiter characters, because the delimiter is
used in JSON to separate the prefix from the element name.
● Elements with mixed content are not supported.
● XML comments (<!-- comment -->) are not represented in the JSON document; they are ignored.
● DTD declarations are not represented in the JSON document; they are ignored.
● XML processing instructions are not represented in the JSON document; they are ignored.
● No conversion to JSON primitive types for XML to JSON. All XML element/attribute values are transformed
to a JSON string.
● Entity references (except the predefined entity references &amp;, &lt;, &gt;, &quot;, and &apos;) are not
represented in the JSON document; they are ignored.
● If a sibling with another name resides between XML sibling nodes with the same name, then the order of
the siblings is not kept in JSON in the non-streaming case, because siblings with the same name are
represented by one array. Example: <root><childA>A1</childA><childB>B</
childB><childA>A2</childA></root> leads to {"root":{"childA":
["A1","A2"],"childB":"B"}}.
In the streaming case this leads to an invalid JSON document: {"root":{"childA":
["A1"],"childB":"B","childA":["A2"]}}.
● If you have an element with a namespace but no XML prefix whose namespace is not contained in the XML-
namespace-to-JSON-prefix map, you get an exception: <A xmlns="http://test"/> -> IllegalStateException
Invalid JSON namespace: http://test.
If you choose the streaming option, the namespace is ignored: <A xmlns="http://test"/>v</A> leads
to {"A":"v"}.
Related Information
The individual tags of an XML document are processed consecutively, irrespective of where in the overall
structure the tag occurs and how often (multiplicity). This means that during the streaming process the
converter cannot know if an element occurs in the structure more than once. In other words, during the
streaming process the object model that reflects the overall structure of the XML document (and, therefore,
also all information that can only be derived from the object model, like the multiplicity of elements) is not in
place. This is different to the non-streaming case, where the converter can calculate the multiplicity of the XML
elements from the object model of the complete XML document. The multiplicity is needed to create a correct
JSON document. Elements whose multiplicity is greater than one must be transformed to a JSON member
with an array. For example, you may think that for the XML document <root><B>b1</B><B>b2</B></root>,
you create the JSON document {"root":{"B":"b1","B":"b2"}}. However, this JSON document is invalid,
because the member name "B" occurs twice on the same hierarchy level.
To ensure nevertheless a conversion that creates correct JSON documents during streaming, you need to
either manually provide the information about which XML elements are multiple in advance, or decide that
every XML element is converted to a JSON array (when configuring the converter in the Integration Designer).
To illustrate this behavior, let’s consider how the following simple XML structure has to be converted to JSON:
<root>
<A>a</A>
<B>b1</B>
<B>b2</B>
<C>c</C>
</root>
Without streaming, the converter would produce the following JSON structure:
{"root":{"A":"a","B":["b1","b2"],"C":"c"}}
As expected, the XML element root/B would transform into a JSON member with an array as value, where the
array has two values (b1 and b2) – according to the multiplicity of root/B. Note that a JSON array is indicated
by the following type of brackets: [ … ].
With streaming with all elements to JSON arrays, the converter would produce the following JSON structure:
{"root":[{"A":["a"],"B":["b1","b2"],"C":["c"]}]}
All XML elements are transformed into members with a JSON array as value.
With streaming and specific elements as arrays (where root/A and root/B are specified), the converter would
produce the following JSON structure:
{"root":{"A":["a"],"B":["b1","b2"],"C":"c"}}
An array is produced only for the XML elements root/A and root/B, but not for root/C.
With streaming and specific elements as arrays (where only root/A is specified), the converter would produce
the following invalid JSON structure:
{"root":{"A":["a"],"B":"b1",”B”:"b2","C":"c"}}
Whether or not you select the option Use Namespace Mapping leads to different transformation results.
Example
http://com.sap/abc abc
http://com.sap/xyz xyz
{"abc:A":{"xyz:B":{"@xyz:attr1":"1","@attr2":"2","$":"valueB"},"C":
["valueC1","valueC2"],"D":"","E":"valueE"}}
Note
Example
{"A":{"B":{"@attr1":"1","@attr2":"2","$":"valueB"},"C":
["valueC1","valueC2"],"D":"","E":"valueE"}}
Examples and Special Cases of JSON Message without Root Element Tag
Example
The following example shows the transformation from XML to JSON (option Suppress JSON Root Element is
selected).
<root>
<A>a</A>
<B>b</B>
<C>c</C>
<D>d</D>
</root>
{
"A": "a",
"B": "b",
"C": "c",
"D": "d"
Special Cases
<root>v</root>
"v"
<root></root>
""
Input XML Message: Root Element with Simple Value (options Suppress JSON Root Element, Streaming All are
selected)
<root><A>a</A><B><C>c</C></B></root>
{"A":["a"],"B":[{"C":["c"]}]}
with dropRootElement=false
{"root":{["A":["a"],"B":[{"C":["c"]}]]}}
The EDI to XML converter enables you to transform single incoming EDI messages from EDI to XML format.
Prerequisites
Remember
This component or some of its features might not be available in the Cloud Foundry environment. For more
information on the limitations, see SAP Note 2752867 .
Use the procedure here to convert EDIFACT and ASC-X12 format into XML format.
Procedure
1. Launch SAP Cloud Platform Integration web application by accessing the URL provided by SAP.
2. Choose the Design tab.
3. Select the required integration package.
4. In the Package content view, choose the integration flow you want to edit.
5. In the Integration Package Editor page, choose Edit.
6. In the graphical editor of the integration flow, choose the EDI to XML Converter element.
7. Define the parameters to convert the EDI data format to XML data format.
Field Description
Note
○ You can add XSD files to the integration flow. For more details, refer to the topic Validating
Message Payload against XML Schema in the developer's guide.
○ The file name of the XML schema for EDIFACT should have the following format:
UN-EDIFACT_ORDERS_D96A.xsd
The file name comprises the following three parts, separated by '_':
○ First part "UN-EDIFACT" refers to the EDI standard with organization name. This value
is fixed and cannot be customised.
○ Second part "ORDERS" refers to the message type.
○ Third part "D96A" refers to the version.
○ The file name of the XML schema for ASC-X12 should have the following format:
ASC-X12_810_004010.xsd
The file name comprises the following three parts, separated by '_':
○ First part "ASC-X12" refers to the ASC-X12 standard with organization name. This value
is fixed and cannot be customised.
○ Second part "810" refers to the message type.
○ Third part "004010" refers to the version.
○ The above mentioned values should match the schema content.
○ During runtime, only XSDs from Integration Content Advisor (ICA) are supported.
Note
This header name is fetched from the Camel header. The header is added in a script element, which is
added before the converter element. You can add a value for this header in the script element.
Exclude Interchange and Group envelopes If selected, the converter excludes the interchange and group
envelopes found in an EDI document.
8. If you want to continue editing the integration package without exiting, choose Save.
9. Choose Save as version to retain a copy of the current artifact.
10. If you want to discard your changes, choose Cancel before saving them.
Note
○ Any EDIFACT message is an interchange. An interchange can have multiple groups, and each
group consists of message types. For EDIFACT messages, the EDI elements in SAP Cloud Platform
Integration support only 1 message type per interchange and do not support any group segment
(GS) per interchange segment.
○ Any ASC-X12 message is an interchange. An interchange can have multiple groups, and each
group consists of transaction sets. For ASC-X12 messages, the EDI elements in SAP Cloud Platform
Integration support only 1 group segment (GS) per interchange segment and only 1 transaction set
(ST) per group segment.
○ SAP Cloud Platform Integration does not support repetition characters. A repetition character is a
single character that separates the instances of a repeating data element. For example, ^ (caret
sign) is a repetition character.
Note
The SAP_EDI_Document_Number header contains the document number for the single incoming EDI file.
Example
● The following example shows the transformation from EDI to XML format of EDIFACT message.
UNA:+.? '
UNB+UNOC:3+SENDERABC:14:XXXX+ReceiverXYZ:14:YYYYY+150831:1530+1+HELLO WORLD++A
+++1'
UNH+1+ORDERS:D:96A:UN++2'
BGM+220+MY_ID+9+NA'
DTM+137:201507311500:203'
DTM+2:201508010930:203'
CNT+16:10'
UNT+5+1'
UNZ+1+1'
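The segment structure of the EDIFACT sample above can be illustrated with a short Python sketch (illustrative only, not the CPI parser): segments end at the ' terminator, and the ? release character escapes the next character. Handling of the UNA service string advice itself is omitted.

```python
def split_segments(edi, terminator="'", release="?"):
    """Split an EDI stream into segments at the terminator, honoring the
    release (escape) character declared in UNA."""
    segments, buf, i = [], "", 0
    while i < len(edi):
        ch = edi[i]
        if ch == release and i + 1 < len(edi):
            buf += edi[i + 1]        # escaped character is kept literally
            i += 2
        elif ch == terminator:
            segments.append(buf)
            buf = ""
            i += 1
        else:
            buf += ch
            i += 1
    return [s for s in segments if s]

segs = split_segments("UNH+1+ORDERS:D:96A:UN++2'BGM+220+MY?'ID'")
```

The escaped ?' sequence in the second segment survives as a literal apostrophe instead of ending the segment early.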
● The following example shows the transformation from EDI to XML format of ASC-X12 message.
Input sample ASC-X12 EDI Message
Sample Code
The XML to EDI converter transforms a message in XML format to EDI format. You can convert XML into
EDIFACT and ASC-X12 format.
Context
Remember
This component or some of its features might not be available in the Cloud Foundry environment. For more
information on the limitations, see SAP Note 2752867 .
Procedure
1. Launch SAP Cloud Platform Integration web application by accessing the URL provided by SAP.
2. Choose the Design tab.
3. Select the required integration package.
4. In the Package content view, choose the integration flow you want to edit.
5. In the Integration Package Editor page, choose Edit.
6. In the graphical editor of the integration flow, choose the XML to EDI Converter element.
7. Define the parameters to convert the XML data format to EDI data format.
Note
○ You can add XSD files to the
integration flow. For more de
tails, please refer to the topic
Validating Message Payload
against XML Schema.
○ The file name of the xml
schema for ASC-X12 should
have the format, ASC-
X12_810_004010.xsd. It con
tains three parts separated
by _:
○ First part ASC-X12 refers
to the ASC-X12 standard
with organization name.
This value is fixed and
cannot be customised.
○ Second part 810 refers
to the message type.
○ Third part 004010 refers
to the version.
○ The aforementioned values
should match with the
schema content.
Note
This header name is fetched from
Camel header. The header is
added in script element. This
script element is added before
the converter element. You can
add value for this header in the
script element.
Note
You can also manually specify the
custom separator.
Note
○ You can add XSD files to the integration flow. For more details, refer to the topic Validating Message Payload against XML Schema in the developer's guide.
○ The file name of the XML schema for ASC-X12 should have the format ASC-X12_810_004010.xsd. It contains three parts separated by _:
○ The first part, ASC-X12, refers to the ASC-X12 standard with the organization name. This value is fixed and cannot be customised.
○ The second part, 810, refers to the message type.
○ The third part, 004010, refers to the version.
○ The aforementioned values should match the schema content.
○ During runtime, only XSDs from the Integration Content Advisor (ICA) are supported.
8. If you want to continue editing the integration package without exiting, choose Save.
9. Choose Save as version to retain a copy of the current artifact.
10. If you want to cancel editing the package, choose Cancel before saving it.
Note
○ Any EDIFACT message is an interchange. An interchange can have multiple groups, and each group consists of message types. For EDIFACT messages, the EDI elements in SAP Cloud Platform
Note
The SAP_EDI_Document_Number header contains the document number for a single incoming EDI file.
○ The following example shows the transformation of an EDIFACT message from XML to EDI format.
Input sample EDIFACT XML Message
UNA:+.? '
UNB+UNOC:3+SENDERABC:14:XXXX+ReceiverXYZ:14:YYYYY+150831:1530+1+HELLO WORLD++A+++1'
UNH+1+ORDERS:D:96A:UN++2'
BGM+220+MY_ID+9+NA'
DTM+137:201507311500:203'
DTM+2:201508010930:203'
CNT+16:10'
UNT+5+1'
UNZ+1+1'
○ The following example shows the transformation of an ASC-X12 message from XML to EDI format.
Input sample ASC-X12 XML Message
Example
The following segments are part of the message payload of an EANCOM message, and the table below lists the headers and values for the given payload.
UNH+1+INVOIC:D:96A:EN:EAN008'
SAP_EDI_Document_Standard EANCOM
SAP_EDI_Sender_ID 4006501000002
SAP_EDI_Sender_ID_Qualifier 14
SAP_EDI_Receiver_ID 5790000016839
SAP_EDI_Receiver_ID_Qualifier 14
SAP_EDI_Interchange_Control_Number 0650
SAP_EDI_Message_Type INVOIC
SAP_EDI_Message_Version D
SAP_EDI_Message_Release 96A
SAP_EDI_Message_Controlling_Agency EN
SAP_EDI_Message_Association_Assign_Code EAN008
The element creates Camel headers and populates them with the respective extracted values. For example, if the following segments are part of the message payload of an EDIFACT message, the respective headers and values are given in the table below.
UNB+UNOC:3+4006501000002:14+5790000016839:14+100818:0028+0650+++++XXXXX'
UNH+1+INVOIC:D:96A:UN'
SAP_EDI_Document_Standard UN-EDIFACT
SAP_EDI_Sender_ID 4006501000002
SAP_EDI_Sender_ID_Qualifier 14
SAP_EDI_Receiver_ID 5790000016839
SAP_EDI_Receiver_ID_Qualifier 14
SAP_EDI_Interchange_Control_Number 0650
SAP_EDI_Message_Type INVOIC
SAP_EDI_Message_Version D
SAP_EDI_Message_Release 96A
SAP_EDI_Message_Controlling_Agency UN
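The extraction described above can be sketched outside the platform. The following Python helper is illustrative only (it is not the platform's implementation); it splits the UNB and UNH segments on the EDIFACT separators and maps the interchange fields to the header names listed in the table:

```python
def extract_edifact_headers(unb_segment, unh_segment):
    """Split UNB/UNH segments on '+' and ':' and map the interchange
    fields to the Camel header names shown in the table above."""
    unb = unb_segment.rstrip("'").split("+")
    unh = unh_segment.rstrip("'").split("+")
    sender = unb[2].split(":")      # e.g. 4006501000002:14
    receiver = unb[3].split(":")
    msg_id = unh[2].split(":")      # e.g. INVOIC:D:96A:UN
    return {
        "SAP_EDI_Document_Standard": "UN-EDIFACT",
        "SAP_EDI_Sender_ID": sender[0],
        "SAP_EDI_Sender_ID_Qualifier": sender[1],
        "SAP_EDI_Receiver_ID": receiver[0],
        "SAP_EDI_Receiver_ID_Qualifier": receiver[1],
        "SAP_EDI_Interchange_Control_Number": unb[5],
        "SAP_EDI_Message_Type": msg_id[0],
        "SAP_EDI_Message_Version": msg_id[1],
        "SAP_EDI_Message_Release": msg_id[2],
        "SAP_EDI_Message_Controlling_Agency": msg_id[3],
    }

headers = extract_edifact_headers(
    "UNB+UNOC:3+4006501000002:14+5790000016839:14+100818:0028+0650+++++XXXXX'",
    "UNH+1+INVOIC:D:96A:UN'",
)
```

Running this against the sample segments yields exactly the header/value pairs from the table.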
If the following segments are part of the message payload of an ASC-X12 message, the respective headers and values are given in the table below.
GS*IN*GSRESNDR*GSRERCVR*20030709*0816*12345*X*004010~
810*0001~
SAP_EDI_Document_Standard ASC-X12
SAP_EDI_Sender_ID WWRESNDR
SAP_EDI_Sender_ID_Qualifier ZZ
SAP_EDI_Receiver_ID WWRERCVR
SAP_EDI_Receiver_ID_Qualifier ZZ
SAP_EDI_Interchange_Control_Number 000046668
SAP_EDI_Message_Type 810
SAP_EDI_Message_Version 004010
SAP_EDI_GS_Sender_ID GSRESNDR
SAP_EDI_Receiver_ID GSRERCVR
SAP_EDI_GS_Control_Number 12345
SAP_GS_Functional_Id_Code IN
SAP_GS_Responsible_Agency_Code X
SAP_ISA_Acknowledgment_Requested 0
SAP_ISA_Auth_Information_Qualifier 00
SAP_ISA_Control_Standards_Identifier
SAP_ISA_Security_Information_Qualifier 00
SAP_ISA_Usage_Indicator P
SAP_ISA_Version_Number 004010
SAP_MessageProcessingLogID
SAP_ST_Control_Number
Prerequisites
● You have logged into your customer workspace in SAP Cloud Platform Integration web application.
● You are editing the integration flow in the editor.
Context
You use the Content Modifier step to modify the content of an incoming message by providing additional information in the header or body of the message.
The Content Modifier allows you to modify a message by changing the content of the data containers that are
involved in message processing (message header, message body, or message exchange). Depending on which
container you want to modify, select one of the tabs Message Header, Message Body, or Exchange Property. If
modifying the Message Body, you can enter the data you want to add to the message body in an editor. If
modifying the Message Header or the Exchange Property, you can define how to access the content of the
incoming message (which is then used to change the selected data container). For example, you can select the
Type XPath to specify an XPath expression that addresses a particular element in the incoming message, which
will be used to change the message header.
Note that data written to the message header during a processing step (for example, in a Content Modifier or Script step) will also be part of the outbound message addressed to a receiver system (whereas properties will remain within the integration flow and are not handed over to receivers).
Note
In the following cases certain features might not be available for your current integration flow:
● A feature for a particular adapter or step was released after you created the corresponding shape in
your integration flow.
● You are using a product profile other than the one expected.
More information: Adapter and Integration Flow Step Versions [page 405]
Procedure
1. If the Content Modifier step is present in the integration flow, select it to edit the properties.
2. If you want to add a Content Modifier step to the integration flow, perform the following substeps:
a. In the palette, choose (Message Transformers) and then choose (Content Modifier).
b. Place the Content Modifier step in the integration process.
3. Go to the Message Header tab or choose Exchange Property (depending on whether you want to modify a
message header or write data to the exchange property).
Note
The name defined for an Exchange Property is case-sensitive. This is a known behavior of the Camel framework.
Attribute Description
Action You can specify whether the Content Modifier should create or delete the header or property defined by the table row.
Type Indicates the kind of data you want to use to change the content of the selected header or property data container.
○ Constant
Allows you to write a constant value to the header or property data container.
Special characters like {} and [] are not allowed.
○ Header
Allows you to specify the name of a Camel header. You can then use this header to dynamically define properties in subsequent steps.
For example, if you specify the header name CamelSplitIndex, the Camel header of the same name, which counts the actual number of splits in a message split scenario, is accessed from the incoming message.
In a subsequent step, you use the following expression to refer to this header (for dynamic configuration): ${header.CamelSplitIndex}.
To enter a header, select header in the Type column and in the Value column choose Select to browse for predefined headers.
Note
During outbound communication, headers will be handed over to all message receivers and integration flow steps, whereas properties will remain within the integration flow and will not be handed over to receivers.
Note
The Select Header dialog shows you a list of headers that you have specified.
○ XPath
Allows you to retrieve data from the incoming message using XML Path Language (XPath). For example, if you have selected this type, you can specify the following Value to point to an element CustomerNumber in the incoming message: /Order/Customer/CustomerNumber.
Note
○ If the XPath contains a namespace prefix, specify the association between the namespace and the prefix in the Runtime Configuration of the integration flow.
For example, let's assume that the following XPath expression is used: /ns1:order/ns1:customer/ns1:ID
In this case, you need to specify a namespace mapping in the runtime configuration as follows: xmlns:ns1=http://mycompany.com/order
○ The step has been leveraged to use the capabilities of XPath 3.1 Enterprise Edition (EE). To read more about the new features of XPath 3.1, visit the Saxon documentation published by Saxonica.
○ Expression
Allows you to enter a Camel simple expression (see http://camel.apache.org/simple.html).
For example, you can use the expression ${exchangeId} to add the exchange ID (a unique identifier of the message exchange) to a data container.
○ Property
Allows you to specify an exchange property.
To enter a property, select property in the Type column and define a value based on the selected value type.
Note
During outbound communication, headers will be handed over to all message receivers and integration flow steps, whereas properties will remain within the integration flow and will not be handed over to receivers.
○ External Parameter
This option is not available in the WebUI, but for compatibility reasons we still support content using this field.
Example
○ If you want to increase the value in the message header, define the name of the Number Range.
○ If you do not want to increase the value in the message header, then define the value as <name of the Number Range>:${header.headername} or <name of the Number Range>:<correlation ID>.
Note
Make sure you use the same correlation ID for all future processing.
The Data Type column is used only for the types XPath and Expression. The data type can belong to any Java class. For example, if you are addressing a string-type element, enter java.lang.String as the Java data type. For more information about supported data types, see http://camel.apache.org/simple.html.
6. Go to the Message Body tab, and define the fields as shown below.
Attribute Description
Remember
Ensure that you set a placeholder for the existing
header information.
Note
○ If you add a Content Modifier step without a header, body, and property, you cannot trace the
element.
○ The Data Type column is mainly used for the XPath type. The data type can belong to any Java
class. An example of a data type for an XPath is java.lang.String.
<order>
<book>
<BookID>A1000</BookID>
<Count>5</Count>
</book>
</order>
In the Body tab of the Content Modifier step, you specify the content expected in the outgoing message. Keep a placeholder for the header information to modify the content as shown below:
<invoice>
<vendor>${header.vendor}</vendor>
${in.body}
<deliverydate>${header.date}</deliverydate>
</invoice>
In the Header tab of the Content Modifier step, you enter the constant values for the vendor and the delivery date. The resulting output message looks like this:
<invoice>
<vendor>ABC Corp</vendor>
<order>
<book>
<BookID>A1000</BookID>
<Count>5</Count>
</book>
</order>
<deliverydate>25062013</deliverydate>
</invoice>
Related Information
An integration flow is a BPMN (Business Process Model and Notation)-like model that allows you to specify
how a message is to be processed on a tenant.
The only prerequisite for a message that is to be processed by the Camel framework is that it comprises the
following elements:
● Headers
Contain information related to the message, for example, information for addressing the message sender.
● Attachments
Contain optional data that is to be attached to the message.
● Body
Contains the payload (usually with the business-related data) to be transferred in the message.
For as long as a message is being processed, a data container (referred to as Exchange) is available. This
container is used to store additional data besides the message that is to be processed. An Exchange can be
seen as an abstraction of a message exchange process as it is executed by the Camel framework. An Exchange
is identified uniquely by an Exchange ID. In the Properties area of the Exchange, additional data can be stored
temporarily during message processing. This data is available for the runtime during the whole duration of the
message exchange.
You can use the Content Modifier step to modify a message by adding additional data to it.
More precisely, this step type allows you to modify the content of the following three data containers during
message processing:
● Message Header
You can add headers to the message, and edit and delete headers.
● Message Body
You can modify the message body part.
● Exchange Property
You can write data to the message exchange, and edit and delete the properties.
For example, you can retrieve the value of a particular element of the payload of an inbound message and write
this value to the header of the message (to make it available for subsequent processing steps).
You need to specify additional parameters in the Content Modifier step to tell the integration runtime how
exactly to access the data from the incoming message (which is to be written to one of the three data
containers above).
Here's a simple example to show how this works: Let's say you want to write the value of the element
CustomerNumber from an inbound XML message to the message header, to make it available for subsequent
process steps.
On the Message Header tab of the Content Modifier, add a new entry. Specify XPath as the Type (because you
want to address the CustomerNumber element in an incoming XML message). For Value, enter the exact XPath
expression that is to be used to address this element (for example, /Order/Customer/CustomerNumber). In an
additional field, you now need to specify the data format expected for the content of the CustomerNumber
element. To express that this is a String element, you need to specify a valid Java data type, which is
java.lang.String in this case. For Name, enter the desired name of the message header (which should
contain the CustomerNumber value), for example, CustomerNo.
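The effect of that configuration can be sketched outside the platform. The following Python snippet only mimics what the runtime does (the header map stands in for the Camel message header, and the customer number value is made up for illustration):

```python
import xml.etree.ElementTree as ET

# Hypothetical inbound message body with the CustomerNumber element
message_body = """
<Order>
  <Customer>
    <CustomerNumber>10001234</CustomerNumber>
  </Customer>
</Order>
"""

# Evaluate the XPath /Order/Customer/CustomerNumber against the body;
# the parsed root element is already <Order>, so the path is relative.
root = ET.fromstring(message_body)
value = root.findtext("./Customer/CustomerNumber")

# Write the string result to the header named CustomerNo
# (declared as java.lang.String in the real step).
headers = {"CustomerNo": value}
```

Subsequent steps could then refer to the value with ${header.CustomerNo}.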
Example
The following example shows how to modify both the header and body data container of a message using the
Content Modifier step.
<order>
<book>
<BookID>A1000</BookID>
<Count>5</Count>
</book>
</order>
On the Header tab of the Content Modifier, enter the following to write constant values to the message header:
On the Body tab, keep placeholders for the header information specified in the first Content Modifier step ($
{header.vendor} and ${header.date}) to modify the content as shown below. Additionally, use a
placeholder ${in.body} for the incoming message.
<invoice>
<vendor>${header.vendor}</vendor>
${in.body}
<deliverydate>${header.date}</deliverydate>
</invoice>
<invoice>
<vendor>ABC Corp</vendor>
<order>
<book>
<BookID>A1000</BookID>
<Count>5</Count>
</book>
</order>
<deliverydate>25062013</deliverydate>
</invoice>
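The placeholder resolution shown above can be imitated with plain string substitution. This Python sketch only mimics what the Camel simple language does with ${header.*} and ${in.body}; it is not the platform's implementation:

```python
def resolve(template, headers, body):
    """Replace ${in.body} and ${header.<name>} placeholders,
    mimicking the Camel simple expressions used in the Body tab."""
    out = template.replace("${in.body}", body)
    for name, value in headers.items():
        out = out.replace("${header.%s}" % name, value)
    return out

# Values from the example above
headers = {"vendor": "ABC Corp", "date": "25062013"}
body = "<order><book><BookID>A1000</BookID><Count>5</Count></book></order>"
template = ("<invoice><vendor>${header.vendor}</vendor>"
            "${in.body}"
            "<deliverydate>${header.date}</deliverydate></invoice>")

result = resolve(template, headers, body)
```

The result contains the constant header values and the original order embedded in the invoice, as in the example output.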
The integration runtime supports the following two kinds of (internal) data representations: binary data and
string (sequence of characters). Conversions between these representations may cause issues that can result
in erroneous message processing.
This topic lists the main measures to help you avoid any such issues.
The Content Modifier step provides the following options for configuring how data is to be encoded (based on
the CamelCharsetName element):
General Recommendations
As far as applicable for your scenario, try to implement the following measures:
Recommendations
● Use UTF-8 character encoding only
Try to use UTF-8 encoding for all binary character representations throughout the scenario. You do not have to set any CamelCharsetName property or header (as UTF-8 is the default), or any XML declaration (as UTF-8 is also the default for XML). Make sure that all mappings use UTF-8 for output encoding. If you do not define an output encoding, UTF-8 is used as the default.
The challenge with this solution is that it requires all your communication partners to send and receive data in UTF-8 character encoding (which, although not uncommon, is unfortunately not always the case).
● Use a fixed character encoding throughout the whole integration flow
If your communication partners require a character encoding for communication other than UTF-8 (for example, ISO-8859-15), and this character encoding is the same for all communication partners, you can set up your integration flow to use that character encoding. Perform the following steps:
○ Set the CamelCharsetName exchange property to your character set.
○ Make sure that all your mappings have the required output. Make sure that you do not remove the XML declaration for binary XML representation, as this could cause the parsing of binary XML content to fail.
The challenge with this solution is that if you have more than one communication partner, it is rather unlikely that they all agree on a character encoding different from UTF-8.
● Avoid binary-to-string conversions
If you are working with XML data and you are communicating with different communication partners using different character encodings, one way to avoid character encoding issues is to avoid the serialized string representation of XML documents altogether. In this case, the content of the CamelCharsetName is irrelevant as no string-to-binary conversion occurs (only XML parsing and marshaling).
If using this solution, the integration flow developer must not modify a message body that contains an XML document with a content modifier or a string-based script. The filter and XSLT mapping steps both have the option to provide string output. This option should only be used if the result is a non-XML string, not an XML document or fragment.
The integration developer must therefore exercise caution and due care if applying these measures.
Related Information
When configuring integration flows, you are usually dealing with the following kinds of data representations:
Binary Data
There is always some loss of data when binary data is converted into strings. You should therefore avoid
converting pure binary data to strings.
Text Data
Text data can be represented in binary or string form. In order to convert the string representation to the binary representation and the other way round, the desired (or existing) binary encoding needs to be known.
The integration framework uses the following scheme to determine the type of character encoding:
1. If the header CamelCharsetName is set, the value of this header is used as the character encoding.
2. If the exchange property CamelCharsetName is set, the value of this exchange property is used as the
character encoding.
There is no universal way to determine the correct encoding for the data. However, for some communication
protocols (such as mail or HTTP) this information may be transferred via the protocol header.
The integration developer needs to make sure that the character encoding that is defined by the header or
exchange property and the binary content encoding are in sync. Otherwise, subtle, hard-to-solve issues can
occur. These issues often aren't revealed by testing, as they usually only occur with non-ASCII characters.
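This failure mode is easy to reproduce. The following Python illustration (ISO-8859-15 is just one possible partner encoding; the sample text is made up) shows why tests with ASCII-only data pass while non-ASCII characters are damaged:

```python
text = "Bestellmenge: 5 Stück"          # contains a non-ASCII character

# The partner sends the payload in ISO-8859-15,
# but the flow wrongly assumes UTF-8.
payload = text.encode("iso-8859-15")

# ASCII-only content survives the wrong assumption unnoticed ...
assert "Order: 5".encode("iso-8859-15").decode("utf-8") == "Order: 5"

# ... but the non-ASCII character is damaged (here: replaced).
damaged = payload.decode("utf-8", errors="replace")

# With the charset kept in sync (as CamelCharsetName should be),
# the text survives intact.
intact = payload.decode("iso-8859-15")
```

This is exactly why such defects tend to surface only in production, once real non-ASCII data flows through the scenario.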
XML documents usually contain information about the character encoding in serialized form within the XML
declaration (at the beginning of the XML document).
However, note that this XML declaration is irrelevant if string representation is used for the document. If the
XML document is parsed from or marshaled (serialized) to binary representation, the charset definition (and
the byte order marker (BOM)) are used to determine the content encoding.
Caution
The conversion between the parsed and the serialized form works fine without any configuration and has a
mechanism to determine and use the correct character set for the conversion. However, issues can occur
when converting directly between the two serialized forms (binary and string). This conversion is not XML-
specific and uses the scheme for text data. This conversion will actually damage the content of the
document if the character encoding used in Camel does not match the character encoding used in the XML
document.
Conversions between binary and string (highlighted in the figure) generally require attention from the
integration developer because there is no generic way to determine the character encoding of the binary text
or XML content (and the integration developer must be sure to set the correct value there).
XSLT sheets can change the content encoding of the XML document if they convert XML to XML. In this case,
the output encoding will either be defined by the XSLT or it will be UTF-8, even if the input document has a
different encoding. If direct binary-to-string conversion takes place before and after the XSLT, the integration
developer has to make sure that the CamelCharsetName property or header is defined appropriately.
You use this task to encode messages using an encoding scheme to secure any sensitive message content
during transfer over the network.
Procedure
○ Base64 Encode
Encodes the message content using base64.
○ GZIP Compress: Compresses the message content using GNU zip (GZIP).
○ ZIP Compress: Compresses the message content using zip (only zip archives with a single entry are supported).
○ MIME Multipart Encode: Transforms the message content into a MIME multipart message.
If you want to send a message with attachments, but the protocol (for example, HTTP or SFTP) does
not support attachments, you can send the message as a MIME multipart instead.
Note
Note that SAP Cloud Platform Integration does not support the processing of MIME multipart
messages that contain multiple attachments with the same file name.
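For the compression options, the effect on the payload can be sketched with Python's standard library. This is an illustration outside the platform; the entry name in the zip archive is hypothetical, and the single-entry restriction mentioned above is mirrored here:

```python
import gzip
import io
import zipfile

body = b"<message>Input for encoder</message>"

# GZIP Compress: the whole message body becomes one gzip stream.
gz = gzip.compress(body)
assert gzip.decompress(gz) == body

# ZIP Compress: an archive with exactly one entry,
# as the step requires.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("payload", body)

with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
    assert zf.namelist() == ["payload"]
    assert zf.read("payload") == body
```

A matching decoder step on the receiver side simply reverses the operation.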
A message with the header Content-Type "text/plain" is sent to a MIME multipart encoder. The add multipart
headers inline functionality is activated.
Sample Code
Message Body
Message-ID: <...>
MIME-Version: 1.0
Content-Type: multipart/related;
boundary="----=_Part_0_1134128170.1447659361365"
------=_Part_0_1134128170.1447659361365
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
Body text
------=_Part_0_1134128170.1447659361365
Content-Type: application/binary
Content-Transfer-Encoding: binary
Content-Disposition: attachment; filename="Attachment File Name"
[binary content]
------=_Part_0_1134128170.1447659361365
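A multipart/related message like the sample above can be assembled with Python's email package. This only illustrates the structure; it is not how the encoder is implemented, the boundary is generated automatically, and note that the email package applies base64 transfer encoding to the binary part where the platform sample uses binary:

```python
from email.mime.application import MIMEApplication
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# multipart/related, matching the Content-Type in the sample
msg = MIMEMultipart("related")

# First part: the message body as text/plain
msg.attach(MIMEText("Body text", "plain"))

# Second part: a binary attachment with a
# Content-Disposition header carrying the file name
part = MIMEApplication(b"\x00\x01\x02", "binary")
part.add_header("Content-Disposition", "attachment",
                filename="Attachment File Name")
msg.attach(part)

encoded = msg.as_string()
```

The serialized result shows the same building blocks as the sample: a multipart/related Content-Type with boundary, a text/plain part, and an application/binary attachment part.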
<message>
Input for encoder
</message>
If you select Base64 Encode, the output message would look like this:
PG1lc3NhZ2U+DQoJSW5wdXQgZm9yIGVuY29kZXINCjwvbWVzc2FnZT4NCg==
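The encoder output above can be reproduced with any standard Base64 implementation, for instance in Python. The byte-exact input includes the CRLF line endings and the tab indent visible in the sample:

```python
import base64

# Raw message body exactly as in the sample
# (CRLF line endings, tab-indented content line)
body = b"<message>\r\n\tInput for encoder\r\n</message>\r\n"

encoded = base64.b64encode(body).decode("ascii")
# encoded == "PG1lc3NhZ2U+DQoJSW5wdXQgZm9yIGVuY29kZXINCjwvbWVzc2FnZT4NCg=="
```

Decoding the string with base64.b64decode restores the original body, which is what the corresponding Base64 Decode step does.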
A Multipurpose Internet Mail Extensions (MIME) multipart message allows you to combine different kinds of
content in one message (for example, plain text and attachments).
To mention a use case, if you want to send a message with attachments, but the protocol (for example, HTTP or
SFTP) does not support attachments, you can send the message as a MIME multipart instead.
With a multipart subtype you can specify how the different content types are combined as MIME multipart
message. The property Multipart Subtype in the Encoder step allows you to specify the Content-Type property
An input message for the MIME Multipart Encoder step doesn’t have to be composed in a specific way.
An inbound message for a MIME Multipart Decoder step has to be a MIME multipart message. The multipart headers can either be stored as Camel headers or as part of the message body.
You have the option to dynamically (based on the content of the processed message) add custom headers to a MIME multipart message. To enable this option, activate Add Multipart Header Inline. In that case, the option Include Headers is displayed.
You can now enter regular expressions for the headers which are to be added dynamically. With such regular
expressions (regex), you can define placeholders for the custom headers:
Tip
Example:
When you enter for Include Headers the string (x-.*|myAdditionalHeader), all headers that start with
x- and the header myAdditionalHeader are added dynamically.
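The Include Headers expression behaves like a whole-name regular expression match against the header names. A Python sketch of the selection logic (the header set here is made up for illustration):

```python
import re

# The expression from the example above
include = re.compile(r"(x-.*|myAdditionalHeader)")

headers = ["x-trace-id", "myAdditionalHeader", "Content-Type"]

# Only names matched completely by the expression are added inline.
added = [h for h in headers if include.fullmatch(h)]
# added == ["x-trace-id", "myAdditionalHeader"]
```

Headers whose names do not match the expression (Content-Type in this sketch) are left out of the inline multipart headers.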
The following table summarizes how the Encoder step transforms the message depending on whether you
select or deselect the option Add Multipart Header Inline.
Selected The Encoder transforms the inbound message into a new message where the message body (of the resulting message) is a MIME multipart message with headers.
Body and attachments (if available) of the inbound message will be added as separate parts of the multipart message. The attachments are removed from the resulting message.
Note that the message will always be transformed into a MIME multipart message, regardless of whether it contains attachments or not.
Deselected
● The inbound message has attachments: The Encoder transforms the message body and attachments of the inbound message into a MIME multipart message. The headers of the multipart message will be added as Camel headers. The message body will be replaced by the rest of the message.
● The inbound message has no attachments: The Encoder does not change the inbound message.
The following figures illustrate how the property Add Multipart Header Inline influences the processing of the
message.
The following table summarizes how the Decoder step transforms the message depending on whether you
select or deselect the option Multipart Headers Inline.
Selected The Decoder transforms the first part of the multipart message into the message body of the resulting message, and the following parts (if available) will be transformed into attachments of the resulting message.
In case the inbound message is, other than expected, no MIME multipart message with inline headers, the complete message body is interpreted as a preamble of the MIME multipart, and the resulting message is empty.
Deselected
● If the inbound message either doesn't contain the multipart headers as Camel headers or the Content-Type is not a multipart type, the Decoder step doesn't change the inbound message.
● In all other cases, the headers of the inbound message will be used as headers of the multipart message (and deleted). The message body of the resulting message will be built up out of those parts which are contained in the message body (and, if available, out of the attachments).
You use this task to decode the message received over the network to retrieve original data.
Procedure
Note
If this option is not selected and the Content-Type Camel header is not set to a multipart type, no decoding takes place.
If this option is selected and the body of the message is not a MIME multipart (with MIME headers in the body), the message is handled as a MIME comment and the body is empty afterwards.
Note
Note that SAP Cloud Platform Integration does not support the processing of MIME multipart
messages that contain multiple attachments with the same file name.
Example
Let us suppose that an input message to the decoder is a message encoded in Base64 that looks like this:
PG1lc3NhZ2U+DQoJSW5wdXQgZm9yIGVuY29kZXINCjwvbWVzc2FnZT4NCg==
After decoding, the output message looks like this:
<message>
Input for encoder
</message>
Prerequisites
Context
You use Filter to extract information from an incoming message. In other words, you filter out parts of the
message that you do not want and extract only the data that you want.
Note
The Filter (1.1 version) step has been leveraged to use the capabilities of XPath 3.1 Enterprise Edition (EE),
for extracting information from an incoming XML message. To read more about the new features of XPath
3.1 visit the saxon documentation published by Saxonica.
<Message>
<orders>
<order>
<clientId>I0001</clientId>
<count>100</count>
</order>
<order>
<clientId>I0002</clientId>
<count>10</count>
</order>
</orders>
</Message>
Let us assume that you are interested only in the count. You use the Filter and specify the XPath /Message/orders/order/count/text().
The output of the content filter would be the data in the count fields of the message. The output in this example with the specified XPath is 10010 (the concatenation of the two count values, 100 and 10).
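The 10010 result can be reproduced with any XPath processor, here with Python's ElementTree (its limited XPath subset is enough for this path; the filter keeps only the text nodes of the matched count elements):

```python
import xml.etree.ElementTree as ET

message = """
<Message>
  <orders>
    <order><clientId>I0001</clientId><count>100</count></order>
    <order><clientId>I0002</clientId><count>10</count></order>
  </orders>
</Message>
"""

root = ET.fromstring(message)
# /Message/orders/order/count/text() — the parsed root is <Message>,
# so the remaining path is evaluated relative to it.
counts = [e.text for e in root.findall("./orders/order/count")]
result = "".join(counts)   # "10010"
```

Because the text() step discards the element markup, the two values are emitted back to back, which is why the output is the concatenated string rather than a structured document.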
Procedure
Field Description
Name Specify a name for the filter step. By default, the value is Filter.
XPath Expression Specify the XPath to the message node that contains the information to be extracted.
4. In the Value Type dropdown list, select the value type based on the description in the table.
Context
In detail, the Message Digest integration flow step transforms a message into a canonical XML document. From
this document, a digest (hash value) is calculated and written to the message header.
Note
Canonicalization transforms an XML document into a form (the canonical form) that makes it possible to compare it with other XML documents. With the Message Digest integration flow step, you can apply canonicalization to a message (or to parts of a message), calculate a digest out of the transformed message, and add the digest to the message header.
In simple terms, canonicalization skips non-significant elements from an XML document. To give some examples, the following changes are applied during the canonicalization of an XML document: unification of quotation marks and blanks, or encoding of empty elements as start/end pairs.
Procedure
Option Description
Filter (XPath) If you only want to transform part of the message, enter an XPath expression to specify the part (optional attribute).
Digest Algorithm Select the hash algorithm to be used to calculate the digest.
You can choose between the following hash algorithms:
○ SHA-1
○ SHA-256
○ SHA-384
○ SHA-512
○ MD5
Target Header Name Enter the name of the target header element which is to contain the hash value (digest).
This is a mandatory attribute.
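The canonicalization effect described above (for example, empty elements rewritten as start/end pairs) can be observed with Python's built-in C14N support; writing the digest to a header is mimicked with a dict, and the header name SHA256Digest is purely hypothetical:

```python
import hashlib
import xml.etree.ElementTree as ET

# Two logically equal documents that differ only in serialization:
# one uses an empty-element tag, the other a start/end pair.
doc_a = '<order id="1"/>'
doc_b = '<order id="1"></order>'

canon_a = ET.canonicalize(xml_data=doc_a)
canon_b = ET.canonicalize(xml_data=doc_b)

# After canonicalization both digests agree, so the value written
# to the target header is stable across equivalent serializations.
headers = {
    "SHA256Digest": hashlib.sha256(canon_a.encode("utf-8")).hexdigest()
}
```

Without canonicalization, hashing the raw serialized bytes of doc_a and doc_b would produce two different digests even though the documents carry identical information.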
Prerequisites
Version Details
Software Version
Context
You use this task to execute a custom JavaScript or Groovy script for message processing. SAP Cloud Platform Integration provides a Java API to support this use case.
Note
Note that data written into the message header at a certain processing step (for example, in a Content
Modifier or Script step) will also be part of the outbound message addressed to a receiver system. Because
of this, it is important to consider the following restriction regarding the header size if you use an HTTP-
based receiver adapter: If the message header exceeds 8 KB, it might lead to situations that the receiver
cannot accept the inbound call (relevant for all HTTP-based receiver adapters).
Note
Cloud Integration supports the XML Document Object Model (DOM) to process XML documents.
Note
Any application that parses XML data is prone to the risk of XML External Entity (XXE) Processing attacks.
To overcome this issue, you should take the following measures to protect integration flows that contain
Script steps (using Groovy script or Java Script) against XXE Processing attacks:
Note
Cloud Integration framework supports conversion of payload into the following formats:
○ String
○ InputStream
○ byte[]
To convert the payload into String or InputStream use the following fullyQualifiedClassName:
○ java.lang.String
○ java.io.InputStream
To convert the payload into byte[] use the following:
○ For Groovy script:
var body = message.getBody(java.lang.reflect.Array.newInstance(java.lang.Byte.TYPE, 0).getClass());
Note
Depending on the actual Camel message content, the getBody(<type>) can still use wrong
encoding.
Note
○ You should not add or modify a property or header name starting with sap.
○ If no Script Function is specified in the script flow step, processData function is called by default.
Caution
When converting parts of the message object, like the body or even headers or properties, into a string (as String) or into a byte array (as byte[]), consider that copies of the existing data are created, which requires extra memory. This resource consumption may even exceed the memory size of the original object if string conversion is executed.
Depending on the size of the object, byte[] or string conversion can endanger the worker node with an out-of-memory (OOM) failure. Decide consciously which parts of the message object should be converted.
5. Click on OK.
6. Specify the script step name in Name.
7. In Script Function, specify a custom function that takes the message object as its argument.
The script must contain this function; it is executed at runtime.
8. Save the changes.
Note
○ In the properties section, you can also choose Select to browse another script.
○ You can add external jar(s) using Resource view. You can then invoke functions from these external
jar(s) in the script.
Example
If you want to access security artifacts such as the secure store and the keystore that are deployed using
the deployment wizard, refer to the table below.
import com.sap.it.api.ITApiFactory;
import com.sap.it.api.securestore.SecureStoreService;
import com.sap.it.api.securestore.UserCredential;
import com.sap.it.api.securestore.exception.SecureStoreException;
Sample Code
import com.sap.it.api.ITApiFactory;
import com.sap.it.api.keystore.KeystoreService;
import com.sap.it.api.keystore.exception.KeystoreException;
Sample Code
import com.sap.it.spi.ITApiHandler;
import com.sap.it.api.mapping.ValueMappingApi;
messageLog.setStringProperty("Information","The transformed locale value is : " + mappedValue);
}
}
catch(Exception e)
{
messageLog.setStringProperty("Exception",e.toString())
return null;
}
return message;
}
import com.sap.it.api.ITApiFactory;
import com.sap.it.api.nrc.NumberRangeConfigurationService;
import com.sap.it.api.nrc.exception.NumberRangeConfigException;
Sample Code
def service = ITApiFactory.getApi(NumberRangeConfigurationService.class, null);
Next Steps
The following additional Java interfaces are available for the message processing log (MPL), which you can
address in the Script step (using either Groovy script or JavaScript):
The interface MessageLogFactory can be used through the variable messageLogFactory in order to
retrieve an instance of MessageLog.
You can use the following methods in an instance of MessageLog in order to set a property of a given type in
the message processing log:
You can use the following method in an instance of MessageLog in order to add a string as an attachment to
the message processing log:
You can use the following method in an instance of MessageLog in order to add, set, get, or remove headers
of an attachment in the message processing log:
You can use the following method in an instance of MessageLog in order to add or set attachment objects as
a map in the message processing log:
Use method addAttachmentAsString to add a longer, structured document to the message processing log
(MPL). Use method setStringProperty only for short strings (containing one or a few words).
If the value "null" is specified for the parameter mediaType, then the value "text/plain" is assumed as
the media type.
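The mediaType default can be sketched with a hypothetical, simplified stand-in for the MessageLog attachment API (the class and its shape below are assumptions for illustration only; the real interface is part of the Cloud Integration script API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical stand-in for the MessageLog attachment API, illustrating only
// the documented default: a null mediaType is treated as "text/plain".
class MessageLogSketch {
    static final class Attachment {
        final String content;
        final String mediaType;
        Attachment(String content, String mediaType) {
            this.content = content;
            this.mediaType = mediaType;
        }
    }

    private final Map<String, Attachment> attachments = new LinkedHashMap<>();

    void addAttachmentAsString(String name, String content, String mediaType) {
        // Fall back to "text/plain" when no media type is specified.
        attachments.put(name, new Attachment(content, mediaType == null ? "text/plain" : mediaType));
    }

    Attachment getAttachment(String name) {
        return attachments.get(name);
    }
}
```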
As an example, the following code lines allow you to set a string property:
Groovy:
def messageLog = messageLogFactory.getMessageLog(message);
messageLog.setStringProperty("Greeting", "Hello World!");
Caution
Note that the properties provided by the script step are displayed in alphabetical order in the resulting
message processing log (MPL). That means that the sequence of properties in the MPL does not
necessarily reflect the sequence applied in the script.
The method addCustomHeaderProperty allows you to add user-defined attributes (name/value pairs) to
the message processing log. These attributes are then persisted and can be used for message search.
You can use Groovy programming language (Groovy script) to set SOAP headers.
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Header>
<AuthHeader soap:actor="actor_test" soap:mustUnderstand="1" xmlns="http://www.Test.com/">
<ClientId>username</ClientId>
<Password>password</Password>
</AuthHeader>
</soap:Header>
<soap:Body>
<test:TestMessage xmlns:test="http://hci.sap.com/ifl/test">
<MessageContent>customer1</MessageContent>
</test:TestMessage>
</soap:Body>
</soap:Envelope>
import com.sap.gateway.ip.core.customdev.util.Message;
import java.util.ArrayList;
import java.util.List;
import javax.xml.namespace.QName;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.apache.cxf.binding.soap.SoapHeader;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import com.sap.it.api.ITApiFactory;
import com.sap.it.api.securestore.SecureStoreService;
import com.sap.it.api.securestore.UserCredential;
def Message processData(Message message) {
// First fetch user name and password, which must be entered into the SOAP header, from the secure store service.
// This is not necessary if your SOAP header does not require data from the secure store service.
def service = ITApiFactory.getApi(SecureStoreService.class, null);
def credential = service.getUserCredential("partner1_credential_alias");
if (credential == null){
return message;
}
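The DOM part of this scenario can be reproduced with plain JAXP. The sketch below builds only the AuthHeader element from the sample envelope above (the org.apache.cxf SoapHeader wrapping and the secure store lookup are omitted; element and attribute names are taken from the sample):

```java
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

class AuthHeaderBuilder {
    static final String SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/";
    static final String HEADER_NS = "http://www.Test.com/";

    // Builds the AuthHeader DOM element; user name and password would come
    // from the secure store service, as in the Groovy snippet above.
    static Element build(String user, String password) {
        try {
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            dbf.setNamespaceAware(true);
            DocumentBuilder db = dbf.newDocumentBuilder();
            Document doc = db.newDocument();
            Element auth = doc.createElementNS(HEADER_NS, "AuthHeader");
            auth.setAttributeNS(SOAP_NS, "soap:actor", "actor_test");
            auth.setAttributeNS(SOAP_NS, "soap:mustUnderstand", "1");
            Element clientId = doc.createElementNS(HEADER_NS, "ClientId");
            clientId.setTextContent(user);
            Element pwd = doc.createElementNS(HEADER_NS, "Password");
            pwd.setTextContent(password);
            auth.appendChild(clientId);
            auth.appendChild(pwd);
            return auth;
        } catch (javax.xml.parsers.ParserConfigurationException e) {
            throw new IllegalStateException(e);
        }
    }
}
```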
Script example to identify any exceptions that arise when sending messages using the HTTP receiver.
You can use the Groovy programming language (Groovy script) to identify any exceptions that arise when
sending messages using the HTTP receiver.
import com.sap.gateway.ip.core.customdev.util.Message;
def Message processData(Message message) {
def messageLog = messageLogFactory.getMessageLog(message);
// the exception raised during message processing is stored in this exchange property
def ex = message.getProperties().get("CamelExceptionCaught");
if (ex != null) {
if (ex.getClass().getCanonicalName().equals("org.apache.camel.component.ahc.AhcOperationFailedException")) {
// attach the http error response to the message processing log
messageLog.addAttachmentAsString("http.ResponseBody", ex.getResponseBody(), "text/plain");
// copy the http error response to an iflow's property
message.setProperty("http.ResponseBody", ex.getResponseBody());
// copy the http error response to the message body
message.setBody(ex.getResponseBody());
// copy the value of the http error code (i.e. 500) to a property
message.setProperty("http.StatusCode", ex.getStatusCode());
// copy the value of the http error text (i.e. "Internal Server Error") to a property
message.setProperty("http.StatusText", ex.getStatusText());
}
}
return message;
}
You can use the Script step to access Partner Directory content.
The following code snippets show the interface. An example of how to access the Partner Directory is given below.
/**
* Looks-up the partner ID for the triple scheme, agency, alternative
* partner ID. The partner ID is needed for the other methods in this class.
*
/**
* Looks-up the alternative partner ID for the triple scheme, agency,
* partner ID.
*
* @param agency
* issuing agency of the external partner ID
* @param scheme
* identification scheme
* @param partnerId
* partner ID
* @return alternative partner ID, or <code>null</code>, if no alternative
* partner ID exists for the specified triple
* @throws PartnerDirectoryException
* if an error occurs
* @throws IllegalArgumentException
* if an input argument is <code>null</code> or empty
*
*/
String getAlternativePartnerId(String agency, String scheme, String
partnerId) throws PartnerDirectoryException;
/**
* Returns the partner specific value of a partner directory parameter.
*
* @param parameterId
* ID of the Partner Directory parameter
* @param partnerId
* ID of the partner
* @param type
* the type of the parameter, supported types are
* <code>java.lang.String.class</code> for String parameters,
* <code>com.sap.it.api.pd.BinaryData.class</code> for Binary
* Data parameters
* @return value of the parameter, or <code>null</code> if the parameter
* does not exist in the Partner Directory
* @throws PartnerDirectoryException
* if an error occurs
* @throws IllegalArgumentException
* if an input argument is <code>null</code> or the given
* <code>type</code> is not a supported type or if
* <tt>parametrId</tt> or <tt>partnerId</tt> is empty
*
*/
<T> T getParameter(String parameterId, String partnerId, Class<T> type)
throws PartnerDirectoryException;
/**
* Returns the partner ID to which an authorized user is assigned, or
* <code>null</code> if the user does not exist.
*
* @param authorizedUser
* authorized User
* @throws PartnerDirectoryException
* if an error occurs
* @throws IllegalArgumentException
* if the input argument is <code>null</code> or empty
*/
String getPartnerIdOfAuthorizedUser(String authorizedUser) throws
PartnerDirectoryException;
/**
* Returns the authorized users of a partner.
*
* The authorized user names will be returned in lower-case characters
* (Locale.ENGLISH).
*
* @param partnerId
* @return list of users or empty list if no authorized user exists
* @throws PartnerDirectoryException
* if an error occurs
* @throws IllegalArgumentException
* if the input argument is <code>null</code> or empty
*/
List<String> getAuthorizedUsers(String partnerId) throws
PartnerDirectoryException;
/**
* Container to store binary data (as byte array) and the associated
* content type for Partner Directory Binary Parameters.
*/
public class BinaryData {
private final byte[] data;
private final String contentType;
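Only the two fields of BinaryData appear in the excerpt above. A plausible completion looks like this (the constructor and accessors are assumptions for illustration, not the published API):

```java
// Possible completion of the BinaryData container: an immutable holder for
// binary parameter data plus its content type. Constructor and getters are
// assumptions; only the two fields appear in the excerpt above.
class BinaryData {
    private final byte[] data;
    private final String contentType;

    BinaryData(byte[] data, String contentType) {
        this.data = data;
        this.contentType = contentType;
    }

    byte[] getData() {
        return data;
    }

    String getContentType() {
        return contentType;
    }
}
```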
Example
The following example checks that the partner ID associated with the logged-in user is consistent with the
partner ID that can be derived from the message content.
It is assumed that the alternative partner ID of the message has been set beforehand in the property
AlternativePartnerID.
import com.sap.gateway.ip.core.customdev.util.Message;
import com.sap.it.api.pd.PartnerDirectoryService;
import com.sap.it.api.pd.BinaryData;
import java.util.Map;
import java.security.cert.X509Certificate;
import com.sap.it.api.ITApiFactory;
import javax.security.auth.Subject;
import java.security.Principal;
import java.util.Set;
def Message processData(Message message) {
// alternative partner ID from property which has been filled before from the message
map = message.getProperties();
def apid = map.get("AlternativePartnerID");
if (apid == null || apid.isEmpty()){
return message;
}
Security elements in the integration flow enable you to encrypt, decrypt, sign, and verify messages.
Encryption ensures that a message can only be read by the intended recipient, while digital signatures
ensure the authenticity and non-repudiation of messages during message exchange.
Simple Signer makes it easy to sign messages to ensure authenticity and data integrity when sending a
message to participants on the cloud.
Context
You work with the Simple Signer to make the identity of the sender known to the receiver or receivers and thus
ensure the authenticity of the messages. This task guarantees the identity of the sender by signing the
messages with a private key using a signature algorithm.
● You have logged into your customer workspace in SAP Cloud Platform Integration web tooling.
● You are editing the integration flow in the editor.
Option Description
4. Define the following parameters in the Processing tab to sign the incoming message with one or more
signatures.
Option Description
Private Key Alias Enter an alias to select the private key from the keystore.
Signature Algorithm Select a signature algorithm for the RSA or DSA private
key type.
Signature Header Name Enter the name of the message header where the signature value in Base64 format is stored.
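What the Simple Signer does conceptually can be sketched with the JDK's java.security API (a simplified stand-in, not the actual implementation; the algorithm name SHA256withRSA and the key handling are assumptions): the body is signed with the private key and the Base64-encoded signature value is placed in a header.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;
import java.util.Base64;

class SimpleSignerSketch {
    // Signs the message body and returns the Base64-encoded signature value,
    // as it would be stored in the configured signature header.
    static String sign(PrivateKey key, byte[] body) {
        try {
            Signature sig = Signature.getInstance("SHA256withRSA");
            sig.initSign(key);
            sig.update(body);
            return Base64.getEncoder().encodeToString(sig.sign());
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Counterpart used by the receiver to check authenticity and integrity.
    static boolean verify(PublicKey key, byte[] body, String base64Signature) {
        try {
            Signature sig = Signature.getInstance("SHA256withRSA");
            sig.initVerify(key);
            sig.update(body);
            return sig.verify(Base64.getDecoder().decode(base64Signature));
        } catch (Exception e) {
            return false;
        }
    }

    static KeyPair newKeyPair() {
        try {
            return KeyPairGenerator.getInstance("RSA").generateKeyPair();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        KeyPair kp = newKeyPair();
        byte[] body = "message body".getBytes();
        String header = sign(kp.getPrivate(), body);
        System.out.println(verify(kp.getPublic(), body, header)); // prints true
    }
}
```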
Context
You work with the PKCS#7/CMS signer to make your identity known to the participants and thus ensure the
authenticity of the messages you are sending on the cloud. This task guarantees your identity by signing the
messages with one or more private keys using a signature algorithm.
● You have logged into your customer workspace in SAP Cloud Platform Integration web tooling.
● You are editing the integration flow in the editor.
Parameter Description
4. Define the following parameter in the Processing tab to sign the incoming message with one or more
signatures.
Parameter Description
Block Size (in bytes) Enter the size of the data that is to be encoded.
Include Content in Signed Data You can choose to include the original content that is to be signed in the
SignedData element. This SignedData element is written to the message body.
You also have the option to keep the original content in the
message body and to include the signed data elsewhere:
Up to version 1.2 of the PKCS#7/CMS Signer you can
choose to include the signed data in the
SapCmsSignedData header. From version 1.3 of the
PKCS#7/CMS Signer onwards, you can include the signed
data in the SapCmsSignedData property.
Encode Signed Data with Base64 You can also Base64-encode the signed data in either the
message body or the message header, to further protect it
during message exchange.
Note
When you Base64-encode the signed data, you encode either the message header or body, depending on where
the signed data is placed. When verifying the message, make sure you specify which part of the message
(header or body) was Base64-encoded.
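Base64 here is a transport encoding, not secrecy protection. A plain-Java sketch (using java.util.Base64) of the encode step the signer applies and the decode step the verifier must mirror on the same message part:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

class SignedDataEncoding {
    // Signer side: Base64-encode the signed data placed in the body or header.
    static String encode(byte[] signedData) {
        return Base64.getEncoder().encodeToString(signedData);
    }

    // Verifier side: decode the same part (header or body) before verification.
    static byte[] decode(String transported) {
        return Base64.getDecoder().decode(transported);
    }

    public static void main(String[] args) {
        byte[] signedData = "example signed data".getBytes(StandardCharsets.UTF_8);
        String wire = encode(signedData);
        System.out.println(new String(decode(wire), StandardCharsets.UTF_8));
    }
}
```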
Signer Parameters For each private key alias, define the following parameters:
You sign a message with an XML digital signature to ensure authenticity and data integrity while sending an
XML resource to the participants on the cloud.
Context
Procedure
1. In the palette, choose (Security Elements), then Signer > XML Digital Signer.
2. Place XML Digital Signer in the integration process and define the message path.
3. Define the following parameter in the General tab.
Parameter Description
4. Define the following parameters in the Processing tab to create an XML digital signature for the incoming
message.
Private Key Alias Enter an alias for selecting a private key from the keystore. You can also enter
${header.headername} or ${property.propertyname} to read the name dynamically from a header or exchange
property.
Signature Algorithm Signature algorithm for the RSA or DSA private key type
Digest Algorithm Digest algorithm that is used to calculate a digest from the canonicalized XML document
XML Schema file path (only if the option Detached XML Signatures is selected) Choose Browse and select the
file path to the XML schema file that is used to validate the incoming XML document. This file has to be in
the package source.main.resources.xsd
Signatures for Elements (only if the option Detached XML Signatures is selected) Choose Add to enter the
XPath to the attribute of type ID, in order to identify the element to be signed. Example:
/nsx:Document/SubDocument/@Id
Parent Node (only if the option Enveloped XML Signature is selected for the attribute Signature Type)
Specify how the parent element of the Signature element is to be specified. You have the following options:
Parent Node Name (only if the option Enveloped XML Signature is selected for the attribute Signature Type
and Specified by Name and Namespace is selected for Parent Node) A local name of the parent element of the
Signature element. This attribute is only relevant for the Enveloped XML Signature case. The Signature
element is added at the end of the children of the parent.
Parent Node Namespace (only if the option Enveloped XML Signature is selected for the attribute Signature
Type and Specified by Name and Namespace is selected for Parent Node) Namespace of the parent element of
the Signature element. This attribute is only relevant for the Enveloped XML Signature case. A null value
is also allowed, to support no namespaces. An empty value is not allowed.
XPath Expression (only if the option Enveloped XML Signature is selected for the attribute Signature Type
and Specified by XPath expression is selected for Parent Node) Enter an XPath expression for the parent
node to be specified. This attribute is only relevant for the Enveloped XML Signature case.
Key Info Content Specifies which signing key information will be included in the KeyInfo element of the XML
signature. You can select a combination of the following attribute values:
○ X.509 Certificate
X509Certificate element containing the X.509 certificate of the signer key
Note
The KeyInfo element might not contain the whole certificate chain, but only the certificate chain that is
assigned to the key pair entry.
○ Issuer Distinguished Name and Serial Number
X509IssuerSerial element containing the issuer distinguished name and the serial number of the X.509
certificate of the signer key
○ Key Value
KeyValue element containing the modulus and exponent of the public key
Note
You can use any combination of these four attribute values.
Sign Key Info With this attribute you can specify a reference to the KeyInfo element.
For more information about the various attributes, see the following:
http://www.w3.org/TR/xmldsig-core/
Property Description
Canonicalization Method for SignedInfo Specify the canonicalization method to be used to transform the
SignedInfo element that contains the digest (from the canonicalized XML document).
Transform Method for Payload Specify the transform method to be used to transform the
inbound message body before it is signed.
○ CamelXmlSignatureTransformMethods
You can use this header to specify transformation methods in a comma-separated list. This header
overwrites the value of the option Transform Method for Payload.
Example
Sample Code
Example of this use case: The XML signature verifier of the receiving system expects an XML
signature as shown in the following code snippet.
The signature is a detached signature, because the signature element is a sibling of the signed
element B. However, the receiving system requires the enveloped-signature transform method to
be specified in the Transforms list. To ensure this, you have to configure a detached signature in the
XML Signer step, then add a Content Modifier step before the XML Signer step, where you specify
the header "CamelXmlSignatureTransformMethods" with the constant value “http://www.w3.org/
2000/09/xmldsig#enveloped-signature,http://www.w3.org/TR/2001/REC-xml-c14n-20010315".
For more information about the various methods, see the following:
http://www.w3.org/2000/09/xmldsig#enveloped-signature,http://www.w3.org/TR/2001/REC-xml-
c14n-20010315
http://www.w3.org/2000/09/xmldsig#enveloped-signature
http://www.w3.org/TR/2001/REC-xml-c14n-20010315
http://www.w3.org/TR/2001/REC-xml-c14n-20010315#WithComments
http://www.w3.org/2001/10/xml-exc-c14n#
http://www.w3.org/2001/10/xml-exc-c14n#WithComments
6. On the Advanced tab page, under XML Document Parameters, specify the following parameters.
Property Description
Reference Type Enter the value of the type attribute of the content reference.
Output Encoding Select an encoding scheme for the output XML document.
Exclude XML Declaration Specify whether the XML declaration header shall be omitted in the output XML
message.
Disallow DOCTYPE Declaration Specify whether DTD DOCTYPE declarations shall be disallowed in the incoming
XML message.
Related Information
● Enveloping XML Signature: The input message body is signed and embedded within the signature. This
means that the message body is wrapped by the Object element, where Object is a child element of the
Signature element.
Example
A template of the enveloping signature is shown below and describes the structure supported by XML
signature implementation. ("?" denotes zero or one occurrence; the brackets [] denote variables whose
values can vary.)
<Signature>
<SignedInfo>
<CanonicalizationMethod>
<SignatureMethod>
<Reference URI="#[generated object_id]"
type="[optional_type_value]">
<Transform Algorithm="canonicalization method">
<DigestMethod>
<DigestValue>
</Reference>
(<Reference URI="#[generated keyinfo_id]">
<Transform Algorithm="http://www.w3.org/TR/2001/REC-xml-
c14n-20010315"/>
<DigestMethod>
<DigestValue>
</Reference>)?
</SignedInfo>
<SignatureValue>
(<KeyInfo Id="[generated keyinfo_id]">)?
<!-- The Id attribute is only added if there exists a reference -->
<Object Id="[generated object_id]"/>
</Signature>
Example
A template of the enveloped signature is shown below and describes the structure supported by XML
signature implementation. ("?" denotes zero or one occurrence; the brackets [] denote variables whose
values can vary):
<[parent element]>
...
<Signature>
<SignedInfo>
<CanonicalizationMethod>
<SignatureMethod>
<Reference URI="" type="[optional_type_value]">
<Transform Algorithm="http://www.w3.org/2000/09/
xmldsig#enveloped-signature"/>
<Transform Algorithm=[canonicalization method]/>
<DigestMethod>
<DigestValue>
</Reference>
(<Reference URI="#[generated keyinfo_Id]">
<Transform Algorithm="http://www.w3.org/TR/2001/REC-xml-
c14n-20010315"/>
<DigestMethod>
<DigestValue>
</Reference>)?
</SignedInfo>
<SignatureValue>
(<KeyInfo Id="[generated keyinfo_id]">)?
<!-- The Id attribute is only added if there exists a reference -->
</Signature>
</[parent element]>
● Detached XML Signature: The digital signature is a sibling of the signed element. There can be several
XML signatures in one XML document.
You can sign several elements of the message body. The elements to be signed must have an attribute of
type ID. The ID type of the attribute must be defined in the XML schema that is specified during the
configuration.
Additionally, you specify a list of XPath expressions pointing to attributes of type ID. These attributes
determine the elements to be signed. For each element, a signature is created as a sibling of the element.
The elements are signed with the same private key. Elements with a higher hierarchy level are signed first.
This can result in nested signatures.
Example
A template of the detached signature is shown below and describes the structure supported by XML
signature implementation. ("?" denotes zero or one occurrence; the brackets [] denote variables whose
values can vary):
Sample Code
Note that the following elements are generated and cannot be configured with the Integration Designer:
● Key Info ID
● Object ID
The sender canonicalizes the XML resource to be signed, based on the specified transform algorithm. Using a
digest algorithm on the canonicalized XML resource, a digest value is obtained. This digest value is included
within the 'Reference' element of the 'SignedInfo' block. Then, a digest algorithm, as specified in the signature
algorithm, is used on the canonicalized SignedInfo. The obtained digest value is encrypted using the sender's
private key.
Note
Canonicalization transforms the XML document to a standardized format, for example, canonicalization
removes white spaces within tags, uses particular character encoding, sorts namespace references and
eliminates redundant ones, removes XML and DOCTYPE declarations, and transforms relative URIs into
absolute URIs. The representation of the XML data is used to determine if two XML documents are
identical. Even a slight variation in white spaces results in a different digest for an XML document.
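Why canonicalization matters for digests can be seen with a small Java sketch: two logically identical XML snippets that differ only in whitespace produce different SHA-256 digests unless they are first canonicalized (canonicalization itself is omitted here; only the digest sensitivity is shown).

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;

class DigestSensitivity {
    // Digests an XML string with SHA-256.
    static byte[] sha256(String xml) {
        try {
            return MessageDigest.getInstance("SHA-256").digest(xml.getBytes(StandardCharsets.UTF_8));
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // Same logical document, one extra space inside a tag.
        byte[] a = sha256("<doc><item>1</item></doc>");
        byte[] b = sha256("<doc ><item>1</item></doc>");
        System.out.println(Arrays.equals(a, b)); // prints false
    }
}
```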
Prerequisites
● You have logged into your customer workspace in SAP Cloud Platform Integration web application.
● You are editing the integration flow in the editor.
You use the PGP Encryptor to encrypt, or sign and encrypt, the payload using the OpenPGP standard.
Procedure
Parameter Description
4. In the Processing tab provide values in the fields based on the following description.
Parameter Description
Signatures Select this option if you want to sign the payload with a
signature.
Content Encryption Algorithm In the dropdown list, select the algorithm you want to use
to encrypt the payload.
Note
The length of the secret key depends on the encryption algorithm that you choose.
Compression Algorithm Select the algorithm you want to use to compress the payload.
Integrity Protected Data Packet Select if you want to create an Encrypted Integrity Protected Data Packet.
This is a specific format where an additional hash value is calculated (using the SHA-1 algorithm) and
added to the message.
Encryption User ID of Key(s) from Public Keyring You can specify the encryption key user IDs (or parts of
them). Based on this, the system looks for the public key in the PGP public keyring.
Context
You use the PGP Decryptor to decrypt a message using OpenPGP standards.
To decrypt a message, the decryptor step requires a private key that must be deployed on the tenant (as part
of a PGP Secret Keyring). The PGP Secret Keyring can contain multiple private keys.
To make sure that the right private key is used for decryption, the encrypted message (processed by the
decryptor) contains a reference that enables the system to uniquely identify the right key.
Procedure
Parameter Description
5. If you have selected Optional or Required in the Signatures field, choose Add to specify the signer user ID.
Note
Context
You perform this task to protect the message content from being altered while it is being sent to other
participants on the cloud, by encrypting the content. In the integration flow model, you configure the Encryptor
by providing information on the public key alias, content encryption algorithm, and secret key length. The
encryptor uses one or more receiver public key aliases to find the public key in the keystore. The encryption
process uses a symmetric key of specified length for content encryption. The symmetric key is encrypted by
the public recipient key with the cipher. The encryption is determined by the type of Content Encryption
Algorithm that you select. The encrypted content and the receiver information containing the symmetric
encryption key are placed in the message body.
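The hybrid scheme described above (a random symmetric content key, wrapped with the recipient's public key) can be sketched with the JDK's javax.crypto API. This is a conceptual stand-in, not the PKCS#7/CMS format itself; the algorithm choices and the two-part result are assumptions for illustration.

```java
import java.security.Key;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

class HybridEncryptionSketch {
    // Result: [0] = content key wrapped with the recipient's public key,
    //         [1] = content encrypted with the symmetric key.
    static byte[][] encrypt(PublicKey recipient, byte[] content) {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(128); // secret key length, as configured in the encryptor
            SecretKey contentKey = kg.generateKey();

            Cipher aes = Cipher.getInstance("AES");
            aes.init(Cipher.ENCRYPT_MODE, contentKey);
            byte[] encryptedContent = aes.doFinal(content);

            Cipher rsa = Cipher.getInstance("RSA");
            rsa.init(Cipher.WRAP_MODE, recipient);
            return new byte[][] { rsa.wrap(contentKey), encryptedContent };
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    static byte[] decrypt(PrivateKey recipient, byte[] wrappedKey, byte[] encryptedContent) {
        try {
            Cipher rsa = Cipher.getInstance("RSA");
            rsa.init(Cipher.UNWRAP_MODE, recipient);
            Key contentKey = rsa.unwrap(wrappedKey, "AES", Cipher.SECRET_KEY);
            Cipher aes = Cipher.getInstance("AES");
            aes.init(Cipher.DECRYPT_MODE, contentKey);
            return aes.doFinal(encryptedContent);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    static KeyPair newKeyPair() {
        try {
            return KeyPairGenerator.getInstance("RSA").generateKeyPair();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

The design point is that only the short symmetric key is expensive public-key encrypted, while the arbitrarily large content is encrypted with the fast symmetric cipher.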
In addition to encrypting the message content, you can also sign the content to make your identity known to
the participants and thus ensure the authenticity of the messages you are sending. This task guarantees your
identity by signing the messages with one or more private keys using a signature algorithm.
● You have logged into your customer workspace in SAP Cloud Platform Integration web tooling.
● You are editing the integration flow in the editor.
Block Size (in bytes) Enter the size of the data that is to be encoded.
Encode Body with Base64 Select this option if the message body is to be Base64-encoded.
Content Encryption Algorithm Specify the algorithm that is to be used to encrypt the payload.
6. Define the parameters for the signing process (only if you selected Signed and Enveloped Data for
Signatures).
Context
You use the PKCS#7 decryptor to decrypt messages from a participant on the cloud. You can also verify the
authenticity of a signed message by verifying the signature of SignedAndEnvelopedData object.
To decrypt a message, the decryptor requires a private key that must be deployed on the tenant (as part of a
key pair). Note that the tenant keystore can contain multiple key pairs. To make sure that the right private key is
used for decryption, the encrypted message (processed by the decryptor) contains a reference that enables
the system to uniquely identify the right private key. In particular, for the PKCS#7/CMS decryptor, the message
contains an issuer distinguished name and an issuer-specific serial number that uniquely identify the
certificate for the corresponding public key.
Procedure
Parameter Description
Body is Base64 Encoded Select if you expect the body of the payload to be Base64-encoded.
5. Choose Add to enter the public key alias of the expected senders.
Note
You perform this task to ensure that the signed message received over the cloud is authentic.
Context
In the integration flow model, you configure the Verifier by providing information about the public key alias, and
whether the message header or body is Base64-encoded, depending on where the Signed Data is placed. For
example, consider the following two cases:
● If the Signed Data contains the original content, then in the Signature Verifier you provide the Signed Data
in the message body.
● If the Signed Data does not contain the original content, then in the Signature Verifier you provide the
Signed Data in the header SapCmsSignedData and the original content in the message body.
The Verifier uses the public key alias to obtain the public keys of type DSA or RSA that are used to decrypt the
message digest. In this way the authenticity of the participant who signed the message is verified. If the
verification is not successful, the Signature Verifier informs the user by raising an exception.
Under Public Key Alias you can enter one or multiple public key aliases for the Verifier.
In general, an alias is a reference to an entry in a keystore. A keystore can contain multiple public keys. You
can use a public key alias to refer to and select a specific public key from a keystore.
You can use this attribute to support the following use cases:
● Management of the certificate lifecycle. Certificates have a certain validity period. Using the Public Key
Alias attribute in the Verifier step, you can enter both an alias of an existing certificate (which will expire
within a certain time period) and an alias for a new certificate (which does not necessarily have to exist
already in the keystore). In this way, the Verifier is configured to verify messages signed by either the old or
the new certificate. As soon as the new certificate has been installed and imported into the keystore, the
Verifier refers to the new certificate. In this way, certificates can be renewed without any downtime.
● You can use different aliases to support different signing senders with the same Verifier step. Using the
Public Key Alias attribute, you can specify a list of signing senders.
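The certificate-renewal behavior can be sketched as "try each configured key in turn". This is a simplified model (the actual Verifier works on CMS structures and keystore aliases, not raw signatures; SHA256withRSA is an assumed algorithm):

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;
import java.util.List;

class MultiAliasVerifier {
    // Accepts the message if any of the configured public keys (e.g. the old
    // and the new certificate) verifies the signature, enabling certificate
    // renewal without downtime.
    static boolean verifyWithAny(List<PublicKey> keys, byte[] data, byte[] signature) {
        for (PublicKey key : keys) {
            try {
                Signature sig = Signature.getInstance("SHA256withRSA");
                sig.initVerify(key);
                sig.update(data);
                if (sig.verify(signature)) {
                    return true;
                }
            } catch (Exception e) {
                // signature malformed for this key; try the next alias
            }
        }
        return false;
    }

    static KeyPair newKeyPair() {
        try {
            return KeyPairGenerator.getInstance("RSA").generateKeyPair();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    static byte[] sign(PrivateKey key, byte[] data) {
        try {
            Signature sig = Signature.getInstance("SHA256withRSA");
            sig.initSign(key);
            sig.update(data);
            return sig.sign();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```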
Note
Exceptions that occur during runtime are displayed in the Message Processing Log view of the Integration
Operations perspective.
Procedure
Parameter Description
4. Enter the following details in the Processing tab to verify the signatures of the incoming message.
Parameter Description
Header is Base64 Encoded Select this option to verify if the Signed Data encoded in Base64 is included in
the header.
Body is Base64 Encoded Select this option to verify if the Signed Data encoded in Base64 is included in
the message body.
Public Key Alias Enter an alias name to select a public key and corresponding certificate from the
keystore.
Context
The XML Signature Verifier validates the XML signature contained in the incoming message body and returns
the content which was signed in the outgoing message body.
Note
For enveloping and enveloped XML signatures the incoming message body must contain only one XML
signature.
The Verifier supports enveloping, enveloped, and detached XML signatures. In the enveloping XML signature
case, one reference whose URI references the only allowed 'Object' element via ID, and an optional
reference to the optional KeyInfo element via ID, are supported.
In the validation process, a public key is required and it is fetched from the worker node keystore. On receiving
the XML message, the Verifier canonicalizes the data identified by the 'Reference' element and then digests it
to give a digest value. The digest value is compared against the digest value available under the 'Reference'
element of the incoming message. This helps to ensure that the target elements were not tampered with.
Then, the digest value of the canonicalized 'SignedInfo' is calculated. The resulting bytes are verified with the
signature on the 'SignedInfo' element, using the sender's public key. If both the signature on the 'SignedInfo'
element and each of the 'Reference' digest values verify correctly, then the XML signature is valid.
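The core of reference validation described above, digest the canonicalized data and compare it with the transported DigestValue, reduces to a few lines of Java (SHA-256 is assumed as the digest algorithm; the canonicalization step is omitted):

```java
import java.security.MessageDigest;

class ReferenceCheck {
    // Digests the canonicalized data and compares it, in constant time, with
    // the DigestValue carried in the Reference element of the signature.
    static boolean referenceValid(byte[] canonicalizedData, byte[] digestFromReference) {
        try {
            byte[] actual = MessageDigest.getInstance("SHA-256").digest(canonicalizedData);
            return MessageDigest.isEqual(actual, digestFromReference);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    static byte[] digest(byte[] data) {
        try {
            return MessageDigest.getInstance("SHA-256").digest(data);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

MessageDigest.isEqual is used deliberately: it compares in constant time, avoiding timing side channels.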
Note
Procedure
Parameter Description
4. Enter the following parameters in the Processing tab to verify XML digital signature for the incoming
message.
Parameter Description
Template for the enveloping XML signature:
<Signature>
<SignedInfo>
<CanonicalizationMethod>
<SignatureMethod>
<Reference URI="#[object_id]">
(<Transform Algorithm=[canonicalization method]/>)?
<DigestMethod>
<DigestValue>
</Reference>
(<Reference URI="#[keyinfo_id]">
(<Transform Algorithm=[canonicalization method]/>)?
<DigestMethod>
<DigestValue>
</Reference>)?
</SignedInfo>
<SignatureValue>
(<KeyInfo (Id="[keyinfo_id]")?>)?
<Object Id="[object_id]"/>
</Signature>
Template for the enveloped XML signature:
<[parent]>
<Signature>
<SignedInfo>
<CanonicalizationMethod>
<SignatureMethod>
<Reference URI="">
<Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
(<Transform Algorithm="[canonicalization method]"/>)?
<DigestMethod>
<DigestValue>
</Reference>
(<Reference URI="#[keyinfo_id]">
(<Transform Algorithm="[canonicalization method]"/>)?
<DigestMethod>
<DigestValue>
</Reference>)?
</SignedInfo>
<SignatureValue>
(<KeyInfo (Id="[keyinfo_id]")?>)?
</Signature>
</[parent]>
Template for the detached XML signature:
(<[signed element] Id="[id_value]">
<!-- signed element must have an attribute of type ID -->
...
</[signed element]>
<other sibling/>* <!-- between the signed element and the corresponding signature element, there can be other siblings -->
<Signature>
<SignedInfo>
<CanonicalizationMethod>
<SignatureMethod>
<Reference URI="#[id_value]" type="[optional_type_value]">
<!-- reference URI contains the ID attribute value of the signed element -->
<Transform Algorithm=[canonicalization method]/>
<DigestMethod>
<DigestValue>
</Reference>
(<Reference URI="#[generated_keyinfo_Id]">
<Transform Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315"/>
<DigestMethod>
<DigestValue>
</Reference>)?
</SignedInfo>
<SignatureValue>
(<KeyInfo Id="[generated_keyinfo_id]">)?
</Signature>)+
Check for Key Info Element Select this option to check that the XML Signature contains a KeyInfo element.
Note
If multiple public key aliases are specified (using the Public Key Alias attribute), this option is mandatory
(to make sure that the public key can be derived from the KeyInfo).
Disallow DOCTYPE Declaration Select this option to disallow a DTD DOCTYPE declaration in the incoming XML
message.
Public Key Alias Enter an alias name to select a public key and corresponding certificate.
Using the Public Key Alias, you can enter one or multiple public key aliases for the Verifier.
Note
In general, an alias is a reference to an entry in a keystore. A keystore can contain multiple public keys. You
can use a public key alias to refer to and select a specific public key from a keystore.
You can use the transient data store to temporarily store messages.
Context
The transient data store (data store for short) supports four types of operations: Select, Get, Write, and Delete.
If you use a Write operation, you can store the messages in the data store by configuring the data store
name and a unique Entry ID.
You can also specify the maximum number of messages you fetch in each poll.
This component stores data on your tenant (using SAP ASE Platform Edition). Note that there is an overall disk
space limit of 32 GB.
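The four operations can be pictured with a minimal in-memory model (a sketch only, assuming the semantics described in the following sections; the real data store is persisted on the tenant database, and the class and method names here are hypothetical):

```python
class TransientDataStore:
    """Toy model of a transient data store keyed by (store name, entry ID)."""

    def __init__(self):
        self._entries = {}

    def write(self, store: str, entry_id: str, payload: str, overwrite: bool = False):
        key = (store, entry_id)
        if key in self._entries and not overwrite:
            # Mirrors the DuplicateEntryException raised on a duplicate write
            raise KeyError(f"DuplicateEntryException: {entry_id}")
        self._entries[key] = payload

    def get(self, store: str, entry_id: str):
        """Fetch one specific entry by its ID."""
        return self._entries.get((store, entry_id))

    def select(self, store: str, max_results: int = 1):
        """Fetch up to max_results entries; order is non-deterministic
        in the real store."""
        found = [v for (s, _), v in self._entries.items() if s == store]
        return found[:max_results]

    def delete(self, store: str, entry_id: str):
        self._entries.pop((store, entry_id), None)
```

Writing twice with the same Entry ID fails unless overwriting is explicitly enabled, matching the Overwrite Existing Message option described later.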
Related Information
Simple Integration Flow Using Data Store Write and Delete Operations [page 870]
This step selects entries from a transient data store and creates a bulk message containing the data store
entries.
Context
A data store operations step has to be triggered explicitly, for example, by a Timer event.
Note
A data store can be created during message processing using the following options:
Caution
This step selects messages from the data store and provides as output a bulk message containing the entries,
up to the number specified by the parameter Number of Polled Messages.
Procedure
Attribute Description
Data Store Name Specifies the name of the data store (no white spaces).
You can dynamically define the data store name based on a header or exchange
property. Use the format ${header.headername} to dynamically read the
name from a header, or ${property.propertyname} to read it from an
exchange property.
The maximum length allowed for the data store name is 40 characters. If you enter
a longer string, a validation error is raised. Note that this length restriction applies
to the value that is used for this parameter at runtime. Therefore, if you configure
this parameter dynamically, make sure that the expected header or property value
does not exceed this length restriction. Otherwise, a runtime error will be raised.
Visibility Defines whether the data store is shared by all integration flows (deployed on the
tenant) or only by one specific integration flow.
○ Global: Data store is shared across all integration flows deployed on the
tenant.
○ Integration Flow: Data store is used by one integration flow.
Number of Polled Messages Specifies the maximum number of messages to be fetched from the data store
within one poll (default is 1).
You can also configure this property in such a way that the number of fetched
messages per poll is dynamically evaluated at runtime based on the incoming
message. To do this, you can enter the following kind of expressions:
Delete on Completion Select this option to delete a message from the data store after having
successfully processed the message.
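The dynamic name resolution and the 40-character limit described above can be sketched as follows (the function name is hypothetical; the `${header.x}`/`${property.x}` syntax is as documented):

```python
import re

MAX_NAME_LENGTH = 40  # runtime limit for the data store name

def resolve_data_store_name(configured: str, headers: dict, properties: dict) -> str:
    """Resolve ${header.x} / ${property.y} placeholders in the configured
    data store name, then enforce the documented length restriction."""
    def substitute(match):
        kind, key = match.group(1), match.group(2)
        source = headers if kind == "header" else properties
        return str(source.get(key, ""))

    name = re.sub(r"\$\{(header|property)\.([^}]+)\}", substitute, configured)
    if len(name) > MAX_NAME_LENGTH:
        # Mirrors the runtime error raised when the resolved value is too long
        raise ValueError(f"Data store name exceeds {MAX_NAME_LENGTH} characters")
    return name

print(resolve_data_store_name("${header.storeName}", {"storeName": "Orders"}, {}))  # Orders
```

Note that the limit applies to the resolved runtime value, not the configured expression, which is why a dynamically supplied header value can still cause a runtime error.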
Results
This step provides an output message with a structure as defined by the following DTD content.
Sample Code
Example structure for the case when two messages are retrieved from the data store:
Sample Code
There is no in-order processing of data store entries. In other words, the sequence of data store entries
selected with a Select operation is random and non-deterministic. If you have restricted the number of
entries to be fetched (with the Number of Polled Messages attribute), the selection of retrieved entries is also
random.
Next Steps
To find out how a data store Select operation works, you can enhance the sample integration flow
Timer-Initiated Scenario with a Mail Receiver with a few clicks.
● Add a data store Write step after the Request Reply step.
● Add a data store Select step after the data store Write step.
Sample Code
In the data store Select step, specify a value bigger than 1 for the parameter Number of Polled Messages. Then,
in the first content modifier (on the Body tab), specify a different value for the element productIdentifier
than for the first message processing run. When you deploy and run the integration flow again, the resulting
message (after the integration flow has been processed) should look like this:
Sample Code
You can monitor the content of the data store in the Monitor section. Under Manage Stores, choose the Data
Stores tile. For more information, see .
Context
A data store operations step has to be triggered explicitly, for example, by a Timer event.
This component stores data on your tenant (using SAP ASE Platform Edition). Note that there is an overall disk
space limit of 32 GB.
Procedure
Attribute Description
Data Store Name Specifies the name of the data store (no white spaces).
You can dynamically define the data store name based on a header or exchange
property. Use the format ${header.headername} to dynamically read the
name from a header, or ${property.propertyname} to read it from an
exchange property.
The maximum length allowed for the data store name is 40 characters. If you enter
a longer string, a validation error is raised. Note that this length restriction applies
to the value that is used for this parameter at runtime. Therefore, if you configure
this parameter dynamically, make sure that the expected header or property value
does not exceed this length restriction. Otherwise, a runtime error will be raised.
Visibility Defines whether the data store is shared by all integration flows (deployed on the
tenant) or only by one specific integration flow.
○ Global: Data store is shared across all integration flows deployed on the
tenant.
○ Integration Flow: Data store is used by one integration flow.
Entry ID Specify an entry ID that will be stored together with the message content.
Details for the entry ID are read from the incoming message. You can enter the
following kinds of expressions:
If the Entry ID is not defined, the data store Write component uses the value of the
SapDataStoreId header as the entry ID. If this header is not defined, the data
store component generates an entry ID and sets the SapDataStoreId header
with the generated value.
Tip
If you like the system to generate an entry ID for the data store operation,
remove the header SapDataStoreId before the data store write step and
leave the Entry ID field in the data store empty.
Retention Threshold for Alerting in (d) Time period (in days) within which the messages have to be fetched
before an alert is raised.
Raising an alert means that the corresponding entry in the data store monitor gets
the status Overdue (indicated with red color).
You can open the data store monitor by selecting the Data Stores tile in the
Operations view of the Web UI under Manage Stores.
Expiration Period in (d) Number of days after which the stored messages are deleted (default is 90 days,
maximum possible value is 180 days).
The minimum value of Expiration Period should be at least twice that of Retention
Threshold for Alerting.
Encrypt Stored Message Select this option to encrypt the message in the data store.
Overwrite Existing Message Select this option to overwrite an existing message in the data store.
Trying to overwrite an existing entry without having this option selected will result
in a DuplicateEntryException.
Include Message Headers Select this option to store message headers in addition to the payload.
Note
Camel* and SAP_* headers will not be stored.
Note
Only select this option if you want to read the message afterwards by
retrieving it with Get operations.
Caution
Consider that all headers will be stored, which can take up a lot of space.
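The Entry ID fallback described above (explicit value, then the SapDataStoreId header, then a generated ID) can be sketched like this (the function name is hypothetical; the header name and precedence follow the description above):

```python
import uuid

def resolve_entry_id(configured_entry_id, headers: dict) -> str:
    """Entry ID precedence for a data store Write step:
    explicit Entry ID > SapDataStoreId header > generated ID.
    When an ID is generated, the SapDataStoreId header is set with it."""
    if configured_entry_id:
        return configured_entry_id
    if headers.get("SapDataStoreId"):
        return headers["SapDataStoreId"]
    generated = str(uuid.uuid4())
    headers["SapDataStoreId"] = generated  # header carries the generated value
    return generated
```

This is also why the Tip above recommends removing the SapDataStoreId header before the Write step if you want the system to generate an ID.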
Next Steps
Be aware of the following fact when you plan to execute a data store Select operation in a subsequent
integration flow step: When retrieving a message body from a data store in a subsequent step, only XML is
supported.
The only option to read non-XML data is provided by the data store Get operation.
You can monitor the content of the data store in the Monitor section. Under Manage Stores, choose the Data
Stores tile. For more information, see .
If a Datastore Write step fails because the entry already exists (duplicate key exception), this exception
cannot be handled by an Exception subprocess. The reason is that the database transaction is rolled back
even if an Exception subprocess is used.
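The retention and expiration constraints stated in the parameter table (expiration of at most 180 days, and at least twice the retention threshold) can be expressed as a small validation sketch (hypothetical function name):

```python
def validate_retention(expiration_days: int, retention_days: int) -> None:
    """Check the documented data store constraints:
    - Expiration Period has a maximum of 180 days
    - Expiration Period should be at least twice the
      Retention Threshold for Alerting."""
    if expiration_days > 180:
        raise ValueError("Expiration Period must not exceed 180 days")
    if expiration_days < 2 * retention_days:
        raise ValueError("Expiration Period should be at least twice the "
                         "Retention Threshold for Alerting")

validate_retention(90, 45)  # default values: valid
```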
This step gets a specific entry from the transient data store.
Context
A data store operations step has to be triggered explicitly, for example, by a Timer event.
Note
A data store can be created during message processing using the following options:
Caution
Procedure
Attribute Description
Data Store Name Specifies the name of the data store (no white spaces).
You can dynamically define the data store name based on a header or exchange
property. Use the format ${header.headername} to dynamically read the
The maximum length allowed for the data store name is 40 characters. If you enter
a longer string, a validation error is raised. Note that this length restriction applies
to the value that is used for this parameter at runtime. Therefore, if you configure
this parameter dynamically, make sure that the expected header or property value
does not exceed this length restriction. Otherwise, a runtime error will be raised.
Visibility Defines whether the data store is shared by all integration flows (deployed on the
tenant) or only by one specific integration flow.
○ Global: Data store is shared across all integration flows deployed on the
tenant.
○ Integration Flow: Data store is used by one integration flow.
Entry ID Specify an entry ID that will be stored together with the message content.
Details for the entry ID are read from the incoming message. You can enter the
following kinds of expressions to dynamically specify the entry ID:
You can also set the header SapDataStoreId to specify the Entry ID, and the
corresponding entry will be read at runtime.
Delete on Completion Select this option to delete a message from the data store after having
successfully processed the message.
Throw Exception on Missing Entry Select this option to throw an exception if the entry with the specified Entry
ID does not exist in the data store.
Remember
If you disable this option, the header SAP_DatastoreEntryFound is set
to false and no exception is thrown, even if the Entry ID does not exist.
You can monitor the content of the data store in the Monitor section. Under Manage Stores, choose the Data
Stores tile. For more information, see .
Context
A data store operations step has to be triggered explicitly, for example, by a Timer event.
Note
A data store can be created during message processing using the following options:
Procedure
Attribute Description
Data Store Name Specifies the name of the data store (no white spaces).
You can dynamically define the data store name based on a header or exchange
property. Use the format ${header.headername} to dynamically read the
name from a header, or ${property.propertyname} to read it from an
exchange property.
The maximum length allowed for the data store name is 40 characters. If you enter
a longer string, a validation error is raised. Note that this length restriction applies
to the value that is used for this parameter at runtime. Therefore, if you configure
this parameter dynamically, make sure that the expected header or property value
does not exceed this length restriction. Otherwise, a runtime error will be raised.
Visibility Defines whether the data store is shared by all integration flows (deployed on the
tenant) or only by one specific integration flow.
○ Global: Data store is shared across all integration flows deployed on the
tenant.
○ Integration Flow: Data store is used by one integration flow.
Entry ID Specify an entry ID that will be stored together with the message content.
Details for the entry ID are read from the incoming message. You can enter the
following kinds of expressions to dynamically specify the entry ID:
You can also set the header SapDataStoreId to specify the Entry ID, and the
corresponding entry will be deleted at runtime.
Next Steps
You can monitor the content of the data store in the Monitor section. Under Manage Stores, choose the Data
Stores tile. For more information, see .
The following integration flow reads product data (for a given productIdentifier) from a product catalog. In a
further step, the integration flow creates a data store entry with a data store Entry ID. The value of this
parameter is defined dynamically at runtime (based on the productIdentifier given with the inbound request).
In a final step, the integration flow deletes the entry from the data store again in case the price (for the product
identified by the productIdentifier given in the actual request) is lower than a specified number.
● In the first Content Modifier, in tab Message Header, create a header with the following parameters:
Parameter Value
Name productIdentifier
Type XPath
Value //productIdentifier
The HTTP client calling the integration flow provides the following message body with the request (example
body for productIdentifier HT-1007):
<root><productIdentifier>HT-1007</productIdentifier></root>
The Content Modifier takes the value of the productIdentifier field provided with the actual HTTP
request and stores it as a property.
● Configure the OData receiver channel so that it connects to the ESPM WebShop, which is based on the
Enterprise Sales and Procurement Model (ESPM) provided by SAP.
Specify the following settings:
In tab Connection, as Address enter the following URL: https://espmrefapps.hana.ondemand.com/espm-cloud-web/espm.svc
In tab Processing, in field Query Options, enter:
$select=ProductId,Category,CategoryName,CurrencyCode,DimensionDepth,DimensionHeight,DimensionUnit,DimensionWidth,LongDescription,Name,Price,QuantityUnit,ShortDescription,SupplierId,Weight&$filter=ProductId eq '${header.productIdentifier}'
Note that this query reads product data for the productId, which is dynamically provided at runtime
through the expression ${header.productIdentifier}.
● In the second Content Modifier, in tab Exchange Property, define a property with the following parameters:
Parameter Value
Name Price
Type XPath
Value //Price
Parameter Value
Entry ID ${xpath./Products/Product/ProductId}
● The Router step leads to two routes. The default route ends with a Message End event. The other route
contains a further Data Store Delete step. For the upper route, specify the following condition:
Parameter Value
● In the upper route, add a Data Store Delete step with the following parameters:
Parameter Value
Entry ID ${xpath./Products/Product/ProductId/
text()}
When you run this scenario the first time with a productIdentifier value that retrieves a product with a price
larger than or equal to 100, a data store is created with an entry whose Entry ID is given by the productIdentifier.
To check the content of the data store, open the Operations view of the Web UI and under Manage Stores click
the Data Stores tile. To inspect its content, select the data store with the specified name (in our example,
WebshopCustomerReviews).
When you run the scenario again with a productIdentifier value that retrieves a product with a price larger than
or equal to 100, a second entry is added to the data store.
Tip
To select suitable productIds for your message processing runs, you can check the product catalog page of
the ESPM WebShop application at:
https://espmrefapps.hana.ondemand.com/espm-cloud-web/webshop
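The query-option string used in the OData channel above can be assembled programmatically, substituting the runtime value where the `${header.productIdentifier}` expression appears (a sketch only; the function name is hypothetical, and whether the adapter URL-encodes the filter is an assumption here):

```python
from urllib.parse import quote

SELECT_FIELDS = ("ProductId,Category,CategoryName,CurrencyCode,DimensionDepth,"
                 "DimensionHeight,DimensionUnit,DimensionWidth,LongDescription,"
                 "Name,Price,QuantityUnit,ShortDescription,SupplierId,Weight")

def build_product_query(product_identifier: str) -> str:
    """Build the $select/$filter query options for one product, with the
    runtime productIdentifier substituted into the $filter expression."""
    filter_expr = f"ProductId eq '{product_identifier}'"
    return f"$select={SELECT_FIELDS}&$filter={quote(filter_expr)}"

print(build_product_query("HT-1007"))
```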
Context
You define variables and specify values to use them in integration flows to support message flow execution. You
can also use these variables across multiple integration flows.
This component stores data on your tenant (using SAP ASE Platform Edition). Note that there is an overall disk
space limit of 32 GB.
Procedure
1. If a Write Variables element is present in the integration flow, choose it to define variables.
2. If you want to add a Write Variables element to the integration flow, perform the following substeps:
a. In the palette, choose (Persistence), then Write Variables.
b. Place the Write Variables element in the integration process and define the message path.
3. On the General tab, you can change the name of the Write Variables element.
4. On the Processing tab, choose Add to add a new variable.
5. In the Name column, enter the variable name.
The variable name can either be a constant or an expression of type ${header.source}. For example, a valid
value is Variable1 or ${header.source}.
6. In the Type column, select a value from the dropdown list based on the descriptions in the following table.
Variable Types
Note
○ You need to enter a valid Java data type when you define a variable of type XPath.
○ The step supports the capabilities of XPath 3.1 Enterprise Edition (EE). To read more about the new
features of XPath 3.1, visit the Saxon documentation published by Saxonica.
7. In the Value column, either enter a value for the variable or choose Select to browse predefined headers.
Note
Ensure that the value matches the type of variable you are defining.
8. If you want the variable to be used in other integration flows, select the checkbox in the Global Scope
column.
9. Save or deploy the configuration.
Store a message payload so that you can access the stored message and analyze it at a later point in time.
Context
In an integration flow, you add a process step for persistence to store a message at a specific point in the
process.
At runtime, information such as message GUID, timestamp, and payload are stored for the messages at the
persistence process steps.
Note
A message is stored on the runtime node for 90 days. After this time, the message is automatically deleted.
Note
The file name and the Content-type field of the payload have a character limit of 120 characters.
This component stores data on your tenant (using SAP ASE Platform Edition). Note that there is an overall disk
space limit of 32 GB.
Procedure
Option Description
Step ID Provide a unique Step ID, which can be a descriptive name or a step number. For example, for a
persistence step configured after a mapping step it could be MessageStoredAfterMapping.
To access and analyze the stored messages, you can use the OData API. For more information, see .
The XML validator validates the message payload in XML format against the configured XML schema.
Prerequisites
You have the XML schema (XSD files) added in the .src.main.resources.xsd location of your integration flow
project. If you do not have the specified location in your project, you need to create one first and then add the
XSD files.
Context
You use this procedure to assign XML schema (XSD files) to validate the message payload in a process step.
The validator checks the message payload against the configured XML schema and reports discrepancies in
the message payload. If the validation fails, the Cloud Integration system stops the whole message processing
by default.
Note
The XML Validator 2.0 version allows you to validate XML files against an XML schema. It supports
validation of XSD 1.1 specification along with XSD 1.0 specification.
Procedure
1. In the palette, choose (List of Validators), and then choose XML Validator.
2. In the Name field, enter an appropriate validator flow step name.
3. In the XML Schema field, select Browse.
4. Choose an XSD file that you want to use to validate the format.
Note
○ You can have references to other XSDs within the same project. XSDs residing outside the project
cannot be referenced.
○ You can enter a value less than 5000 for the maxOccurs attribute in the input XSD. You can also enter
unbounded, if you do not want to check for maximum occurrence but would like to support any number
of nodes.
5. If you want to continue the processing even if the system encounters an error while validating, select the
checkbox Prevent Exception on Failure.
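The maxOccurs rule from the note in step 4 can be checked with a small sketch that scans an XSD for offending values (the function name is hypothetical; only the documented rule, numeric values below 5000 or the literal unbounded, is encoded):

```python
import xml.etree.ElementTree as ET

def check_max_occurs(xsd_text: str, limit: int = 5000) -> list:
    """Return the maxOccurs values in an XSD that violate the rule:
    numeric values must be below the limit; 'unbounded' is allowed."""
    root = ET.fromstring(xsd_text)
    violations = []
    for elem in root.iter():
        value = elem.get("maxOccurs")
        if value and value != "unbounded" and int(value) >= limit:
            violations.append(value)
    return violations

xsd = """<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="items">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="item" type="xs:string" maxOccurs="9999"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>"""
print(check_max_occurs(xsd))  # ['9999']
```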
Note
You can define various steps that execute a call into a remote (external) component or into a subprocess of the
integration flow.
Related Information
Prerequisites
● You have accessed your customer workspace in SAP Cloud Platform Integration web application.
● You are editing the integration flow in integration flow editor.
Context
The Content Enricher merges the content of a lookup payload with the original message in the course of an
integration process. This converts the two separate messages into a single enhanced payload. This feature
enables you to make external calls during the course of an integration process to obtain additional data, if any.
Consider the first message in the integration flow as the original message and the message obtained by making
an external call during the integration process as the lookup message. You can choose between two strategies
to enrich these two payloads as a single message:
● Combine
● Enrich
Original Message
<EmployeeList>
<Employee>
<id>111</id>
<name>Santosh</name>
<external_id>ext_111</external_id>
</Employee>
<Employee>
<id>22</id>
<name>Geeta</name>
<external_id>ext_222</external_id>
</Employee>
</EmployeeList>
Lookup Message
<EmergencyContacts>
<contact>
<c_id>1</c_id>
<c_code>ext_111</c_code>
<isEmergency>0</isEmergency>
<phone>9999</phone>
<street>1st street</street>
<city>Gulbarga</city>
</contact>
<contact>
<c_id>2</c_id>
<c_code>ext_111</c_code>
<isEmergency>1</isEmergency>
If you use Combine as the aggregation strategy, the enriched message appears in the following format.
Enriched Message
<multimap:messages xmlns:multimap="http://sap.com/xi/XI/SplitAndMerge">
<message1>
<EmployeeList>
<Employee>
<id>111</id>
<name>Santosh</name>
<external_id>ext_111</external_id>
</Employee>
<Employee>
<id>22</id>
<name>Geeta</name>
<external_id>ext_222</external_id>
</Employee>
</EmployeeList>
</message1>
<message2>
<EmergencyContacts>
<contact>
<c_id>1</c_id>
<c_code>ext_111</c_code>
<isEmergency>0</isEmergency>
<phone>9999</phone>
<street>1st street</street>
<city>Gulbarga</city>
</contact>
<contact>
<c_id>2</c_id>
<c_code>ext_111</c_code>
<isEmergency>1</isEmergency>
<phone>1010</phone>
<street>23rd Cross</street>
<city>Chitapur</city>
</contact>
<contact>
<c_id>3</c_id>
<c_code>ext_333</c_code>
<isEmergency>1</isEmergency>
<phone>007</phone>
<street></street>
<city>Raichur</city>
</contact>
</EmergencyContacts>
</message2>
</multimap:messages>
Enrich offers you control on how you can merge the original and lookup message. In this example, we consider
the node <ext_111> as the reference to enrich the original message with the lookup message.
<EmployeeList>
<Employee>
<id>111</id>
<name>Santosh</name>
<external_id>ext_111</external_id>
<contact>
<c_id>1</c_id>
<c_code>ext_111</c_code>
<isEmergency>0</isEmergency>
<phone>9999</phone>
<street>1st street</street>
<city>Gulbarga</city>
</contact>
<contact>
<c_id>2</c_id>
<c_code>ext_111</c_code>
<isEmergency>1</isEmergency>
<phone>1010</phone>
<street>23rd Cross</street>
<city>Chitapur</city>
</contact>
</Employee>
<Employee>
<id>22</id>
<name>Geeta</name>
<external_id>ext_222</external_id>
</Employee>
</EmployeeList>
In the enriched message, you can see the content of the lookup message merged into the matching <Employee> node.
Remember
If the lookup message contains more than one entry of the key element, the Content Enricher enhances the
enriched message with all the entries referred to by the key element in the lookup message. In the above
example, the lookup message contains the key element ext_111 in two places. You can see that the enriched
message contains both <contact> entries that the key element refers to.
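The Enrich strategy shown above can be sketched with a small merge routine over the two XML messages (illustrative only; the function and parameter names are hypothetical, and the matching rule, appending every lookup record whose key matches, follows the Remember note above):

```python
import xml.etree.ElementTree as ET

def enrich(original_xml: str, lookup_xml: str,
           original_key: str, lookup_key: str) -> str:
    """For every record in the original message whose key element matches
    a lookup record's key element, append the lookup record under the
    original node. All matching lookup records are appended."""
    original = ET.fromstring(original_xml)
    lookup = ET.fromstring(lookup_xml)
    for record in original:
        key = record.findtext(original_key)
        for lookup_record in lookup:
            if lookup_record.findtext(lookup_key) == key:
                record.append(lookup_record)
    return ET.tostring(original, encoding="unicode")

original_msg = ("<EmployeeList><Employee><id>111</id>"
                "<external_id>ext_111</external_id></Employee></EmployeeList>")
lookup_msg = ("<EmergencyContacts>"
              "<contact><c_code>ext_111</c_code><phone>9999</phone></contact>"
              "<contact><c_code>ext_111</c_code><phone>1010</phone></contact>"
              "</EmergencyContacts>")
result = enrich(original_msg, lookup_msg, "external_id", "c_code")
print(result.count("<contact>"))  # 2 - both matching contacts are appended
```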
Note
In such cases, the Content Enricher will not enrich the message.
1. If you want to add Content Enricher to the integration process, perform the following substeps.
Value Description
Enrich You can define the path to node and key element based on
which the original message is enriched with the lookup
message.
4. If you have selected Enrich as the Content Enrichment Type, provide values in fields based on description in
table.
Original Message Path to Node Path to the reference node in the original message
You use a send step type to configure a service call to a receiver system for scenarios and adapters where no
reply is expected.
Context
Using this step only makes sense in combination with one of the following adapter types (for the channel
between the send step and the receiver):
● AS2 adapter
● JMS adapter
● Mail adapter
● SOAP SAP RM adapter
● SFTP adapter
Procedure
1. In the palette, click Call and choose External Call > Send, then drop it inside the Integration Process pool.
2. In the palette, click Participants, select Receiver, and drop it outside the Integration Process pool.
3. Create a connection between the send step and the receiver and configure the channel (selecting one of
the above mentioned adapter types).
You use this step to call an external receiver system in a synchronous step and get back a response.
Context
Certain integration scenarios might require that the Cloud Integration tenant communicates with an external
service, retrieves data from it, and further processes the data. In such cases, you can use the request reply step
as an exit to connect to the external service.
For example, in your integration scenario the tenant needs to retrieve data on electronic products from a
product catalog (the external service) and to further process the data. If the external service exposes the data
through a REST API, you can use the request reply step to connect to the service via the HTTP or OData
receiver adapter.
Caution
The request reply step does not work with all available Cloud Integration adapter types.
Adapters that can be used with the request reply step are the following ones (to mention examples):
For a detailed instruction on how to use this step in a simple example scenario, check out .
Procedure
You can invoke a local integration process from the main integration process by using a local call.
Related Information
Context
You can use this step type to invoke a local integration process from the main integration process.
You use local integration processes to keep the size of a process model at a manageable scale. That way, you
can break down the main integration process into smaller fragments (represented by local integration
processes). You combine these fragments to achieve the complete message processing design of your
integration flow.
Procedure
All local integration processes are listed that are modeled for this integration flow.
4. Save and deploy the integration flow.
Related Information
Context
Field Description
Expression Type Specify the kind of expression you want to enter in the
Condition Expression field.
○ XML
For XPath expressions, for example: //customerName = 'Smith'
○ Non-XML
For Camel Simple Expression Language
Note
Examples:
Max. Number of Iterations Maximum number of iterations that the loop can perform
before it stops (99999 iterations maximum).
Note
The local loop process refers to a while loop. The sub process will run as long as the loop condition is
fulfilled.
Every morning, an account owner wants to check all transactions performed on his account. He calls a
specific web service and has defined this request:
<accountinfo>
<accountID>12345</accountID>
<action>Transaction_of_last_Day</action>
</accountinfo>
<accountinforesponse>
<transaction>
<id>1</id>
...
</transaction>
...
<transaction>
<id>55</id>
...
</transaction>
<hasMore>true</hasMore>
</accountinforesponse>
The account owner has to call the web service again and again until there are no more transactions
available and he gets the response:
<hasMore>false</hasMore>
To simplify the call, he can use the loop embedded in the HCP Integration Service. He needs to define a
while condition as an XPath expression, such as /accountinforesponse[hasMore = 'true'].
As long as data is available, the call continues. The subprocess inducing the loop uses the ServiceCall
step in Request-Reply mode to call the web service. As soon as the web service returns the response
<hasMore>false</hasMore>, the processing exits the loop and continues with the next step. The last
response of the web service is the new payload that is taken as the message body into the next step.
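The looping behavior described in this example can be sketched as follows (a simplification, assuming the response exposes the hasMore flag; the function names are hypothetical):

```python
def looping_call(call_service, max_iterations: int = 99999):
    """Sketch of the looping process call: repeat the request-reply call
    while the response still reports hasMore=true, up to the configured
    maximum number of iterations. The last response becomes the new
    message payload."""
    response = None
    for _ in range(max_iterations):
        response = call_service()
        if response.get("hasMore") != "true":
            break  # loop condition no longer fulfilled
    return response

# call_service stands in for the Request-Reply web service call
pages = iter([{"hasMore": "true"}, {"hasMore": "true"}, {"hasMore": "false"}])
print(looping_call(lambda: next(pages)))  # {'hasMore': 'false'}
```

The iteration cap mirrors the Max. Number of Iterations parameter, which stops the loop even if the condition never becomes false.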
Related Information
Context
You use this element to catch any exceptions thrown in the integration process and handle them.
2. To add an exception subprocess to the integration flow, choose Process Exception Subprocess from
the palette. The subprocess can be dropped into the integration process and should not be connected to
any of the elements of the integration flow.
3. Select the exception subprocess.
a. In the property sheet specify a name.
4. Start the process with Error Start event always.
5. End the process with either End Message or Error End or Escalation event.
Note
○ You can use an End Message event to wrap the exception in a fault message and send it back to the
sender in the payload.
○ You can use an Error End event to throw the exception to default exception handlers.
6. You can also add other flow elements between the start and end events.
Note
○ For example, you can choose Add Service Call from the context menu of a connection within the
pool. This enables you to call another system to handle the exception.
○ The following elements are not supported within an Exception Subprocess:
○ Another Exception Subprocess
○ Integration Process
○ Local Integration Process
○ Sender
○ Receiver
○ Start Message
○ Terminate Message
○ Timer Start
○ Start Event
○ End Event
○ Router
○ If a Datastore Write step fails because the entry already exists (duplicate key exception), this
exception cannot be handled by an Exception subprocess. The reason is that the database
transaction is rolled back even if an Exception subprocess is used.
Note
○ The message processing log will be in an error state even if a user catches an exception and
performs additional processing on it.
○ You can get more details on the exception using ${exception.message} or ${exception.stacktrace}.
○ You cannot catch exceptions of a local integration process in the main integration process.
You use the local integration process to simplify your integration process. You can break down the main
integration process into smaller fragments by using local integration processes. You combine these fragments
to achieve your main integration process.
Prerequisites
● You have accessed the customer workspace in SAP Cloud Platform Integration web application.
● You are editing the integration flow containing local integration process element.
Context
You use the Local Integration Process to define an integration process that is specific to the integration flow in
which it is created. You can use this integration process with the Process Call step.
Restriction
You cannot use the following integration flow steps within the Local Integration Process step:
Procedure
To know more about Transaction Handling, see Defining Transaction Handling [page 890].
4. If you want to configure the elements inside the local integration process, refer to the documentation
relevant to those elements.
Note
There should not be any empty element in the Local Integration Process.
5. If you want to add the local integration process to a process call element, perform the following substeps:
You can configure transaction handling on integration process or local integration process level.
Context
Transactional processing means that the message (as defined by the steps contained in a process) is
processed within one transaction.
For example, consider a process with a Data Store Write operation. If transaction handling is activated, the Data
Store entry is only committed if the whole process is executed successfully. In an error case, the transaction is
rolled back and the Data Store entry is not written. If transaction handling is deactivated, the Data Store entry
is committed directly when the integration flow step is executed. In an error case, the Data Store entry is
nevertheless persisted (and not removed or rolled back).
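The difference between the two modes can be illustrated with a small in-memory Java model. This is only a sketch of the commit semantics described above, not the actual Data Store implementation; all names are invented:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative in-memory model of the two modes described above: with
// transaction handling, writes are only committed if the whole process
// succeeds; without it, each write is committed immediately per step.
class TransactionSketch {
    final Map<String, String> dataStore = new HashMap<>();

    // Executes a sequence of Data Store writes; 'fail' simulates an error
    // occurring after the write steps but before the process completes.
    void runProcess(boolean transactional, List<String[]> writes, boolean fail) {
        Map<String, String> pending = new HashMap<>();
        for (String[] w : writes) {
            if (transactional) {
                pending.put(w[0], w[1]);    // buffered until commit
            } else {
                dataStore.put(w[0], w[1]);  // committed per step
            }
        }
        if (fail && transactional) {
            pending.clear();                // rollback: nothing persisted
        }
        dataStore.putAll(pending);          // commit surviving writes
    }

    public static void main(String[] args) {
        TransactionSketch tx = new TransactionSketch();
        tx.runProcess(true, List.of(new String[]{"order", "42"}), true);
        System.out.println(tx.dataStore); // empty: the write was rolled back
    }
}
```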
Tip
In an early version of the (local) Integration Process, transaction handling was enabled implicitly if
persistence-related steps were modeled in the integration flow (for example, a Data Store Get step).
However, you could not explicitly configure transactional behavior for this integration flow element.
If your integration flow still contains the old version of a (local) Integration Process shape, migrate it to a
new version to be able to disable transactional behavior.
With the new version of the (local) Integration Process, you can configure transaction handling explicitly.
Caution
Choosing JDBC transaction handling is mandatory to ensure that aggregations are executed consistently.
● Data Store operations (Write): Required for JDBC (recommended but not mandatory). If you choose Not Required, the related database operation is committed for each single step and no end-to-end transaction handling is implemented.
● JMS sender adapter together with one JMS receiver adapter (this also applies to scenarios that include the AS2 adapter): In general, no transaction handling is required.
Note
These adapters do not require JMS transaction handling. Their retry handling works independently from the selected transaction handler.
● JMS sender adapter together with JDBC resources (Data Store, Aggregator, Write variables) (this also applies to scenarios that include the AS2 adapter): Required for JDBC.
Note
We recommend that you do not use transactional JMS resources and JDBC resources in parallel.
● Several JMS receiver adapters together with a JMS sender adapter: Required for JMS (mandatory). This setting is mandatory to ensure that the data is consistently updated in the JMS queue.
Note
Distributed transactions between JMS and JDBC resources are not supported.
For more details on parallel processing, check the documentation for the General or Iterating Splitter and
Parallel Multicast.
Let us assume that you want to configure a message multicast and the integration flow also contains a Data
Store operations step. In this case, you can choose one of the following options to overcome the mentioned
limitation:
Procedure
1. Depending on whether you want to configure transaction handling for an integration process or a local
integration process, select the header of the corresponding shape in the integration flow modeling area.
2. Specify the details for transactional processing:
To configure a transactional process for an Integration Process, select one of the following options for the
Transaction Handling property:
● Required for JDBC: You can specify that Java Database Connectivity (JDBC) transactional database processing is applied (to ensure that the process is accomplished within one transaction).
Caution
The maximum timeout setting is 12 hours.
● Required for JMS: You can specify that Java Message Service (JMS) transactional database processing is applied (to ensure that the process is accomplished within one transaction).
Caution
The maximum timeout setting is 12 hours.
To configure a transactional process for a local Integration Process, select one of the following options for
the Transaction Handling property:
● Required for JMS: You can specify that Java Message Service (JMS) transactional database processing is applied.
Caution
The maximum timeout setting is 12 hours.
● Required for JDBC: You can specify that Java Database Connectivity (JDBC) transactional database processing is applied (to ensure that the process is accomplished within one transaction).
Caution
The maximum timeout setting is 12 hours.
Related Information
For certain integration flow parameters you can enter a reference to a header or property instead of a fixed value. At runtime, the actual value of the header or property in the incoming message is then used as the parameter value. This is referred to as dynamic configuration.
Prerequisites
Context
The following kinds of headers and properties can be used to dynamically configure a certain parameter.
Procedure
1. Open the integration flow in edit mode and select the component where you want to dynamically configure a parameter.
2. Specify the parameter value by entering an expression of one of the following forms:
${header.<name of header>}
or
${property.<name of property>}
Example:
In a content modifier step that is placed before the component where you dynamically configure a
parameter, you have defined the following header (which contains the customer number of an inbound
message): customerNo.
For the parameter (to be defined dynamically) enter the following expression:
${header.customerNo}
At runtime, the actual value for the header customerNo (derived from the inbound message) is used as
parameter value.
3. Finish the configuration of the integration flow.
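The resolution of such expressions can be sketched as follows. This Java snippet is only an illustration; the actual runtime evaluates Camel simple expressions, which support far more than plain header and property lookups:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of how an expression such as ${header.customerNo} could be
// resolved against the message at runtime. Illustrative only; the real
// runtime uses the Camel simple expression language.
class DynamicParameterSketch {
    private static final Pattern REF =
            Pattern.compile("\\$\\{(header|property)\\.([A-Za-z0-9_]+)\\}");

    static String resolve(String value, Map<String, String> headers,
                          Map<String, String> properties) {
        Matcher m = REF.matcher(value);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            // Pick the lookup source based on the ${header...}/${property...} prefix.
            Map<String, String> source =
                    "header".equals(m.group(1)) ? headers : properties;
            String replacement = source.getOrDefault(m.group(2), "");
            m.appendReplacement(out, Matcher.quoteReplacement(replacement));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        Map<String, String> headers = Map.of("customerNo", "10021");
        // The parameter value "${header.customerNo}" resolves to "10021".
        System.out.println(resolve("${header.customerNo}", headers, Map.of()));
    }
}
```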
Related Information
You can define placeholders for parameters of certain adapters or step types. The values of these parameters
will then dynamically be set based on the content of the processed message.
To set an attribute to be dynamically filled by a message header attribute, enter a variable in the form ${header.attr} in the corresponding field for the attribute of the corresponding step or adapter.
At runtime, the value of the header attribute (attr) of the processed message is written into the field for the corresponding attribute of the outbound email.
For example, assume that you dynamically define the email Subject of the mail adapter as shown in the figure below by the variable ${header.attr}.
At runtime, a message is received whose header contains a header attribute attr with the value value1. The mail adapter will then dynamically set the subject of the outbound email with the entry value1.
Note that the mail adapter processes message content either already contained in the inbound mail (from a
sender system) or as modified by content modifier steps on its way between sender and mail adapter.
As shown in the figure, we assume that the inbound message contains a header header1 with value value1.
Let us assume that you want to define the Subject attribute of the mail receiver adapter dynamically via this header. To do that, specify the Subject field by the following entry:
${header.header1}
As a result, the mail adapter dynamically writes value value1 of header header1 (from inbound message) into
the subject of the outbound email.
Related Information
The following tables list all parameters that can be configured dynamically (for the various adapter types and
integration flow steps).
Adapters
● SOAP 1.x sender
○ Private Key Alias for Response Signing (WS Security)
○ Public Key Alias for Response Encryption (WS Security)
● XI receiver
○ Address
○ Credential Name
○ Private Key Alias
○ XI-specific identifiers (Communication Party (for sender and receiver), Communication Component (for sender and receiver), Service Interface (for receiver), and Service Interface Namespace (for receiver))
The integration framework gives you options to evaluate certain parameters at runtime, which allows you to define sophisticated ways of controlling message processing. There are two different kinds of parameters:
● Message header
This is transferred as part of the message header.
When you use an HTTP-based receiver adapter, these parameters are converted to HTTP headers and transferred as such to the receiver.
Note
Note that data written to the message header during a processing step (for example, in a Content
Modifier or Script step) will also be part of the outbound message addressed to a receiver system
(whereas properties will remain within the integration flow and will not be handed over to receivers).
Because of this, it is important to consider the following header size restriction if you are using an
HTTP-based receiver adapter: If the message header exceeds a certain value, the receiver may not be
able to accept the inbound call (this applies to all HTTP-based receiver adapters). The limiting value
depends on the characteristics of the receiver system, but typically ranges between 4 and 16 KB. To
overcome this issue, you can use a subsequent Content Modifier step to delete all headers that are not
supposed to be part of the outbound message.
● Exchange property
For as long as a message is being processed, a data container (referred to as Exchange) is available. This
container is used to store additional data besides the message that is to be processed. An Exchange can be
seen as an abstraction of a message exchange process as it is executed by the Camel framework. An
Exchange is identified uniquely by an Exchange ID. In the Properties area of the Exchange, additional data
can be stored temporarily during message processing. This data is available for the runtime during the
whole duration of the message exchange.
When you use an HTTP-based receiver adapter, Exchange properties are not converted to an HTTP header
for transfer to the receiver.
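As a rough illustration of this distinction, the following Java sketch models an Exchange whose headers are handed to an HTTP receiver while its properties stay internal. The class and method names are invented for this example; this is not the Camel Exchange API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Illustrative model of the rule described above: headers travel with
// the outbound message, Exchange properties stay inside the flow.
class ExchangeSketch {
    final Map<String, String> headers = new HashMap<>();
    final Map<String, String> properties = new HashMap<>();

    // Headers are converted to HTTP headers for an HTTP-based receiver;
    // properties are deliberately not included.
    Map<String, String> outboundHttpHeaders() {
        return new HashMap<>(headers);
    }

    // Mimics a Content Modifier step that deletes all headers not meant
    // for the receiver (e.g. to stay below the receiver's size limit).
    void keepOnlyHeaders(Set<String> allowList) {
        headers.keySet().retainAll(allowList);
    }

    public static void main(String[] args) {
        ExchangeSketch ex = new ExchangeSketch();
        ex.headers.put("customerNo", "10021");
        ex.headers.put("internalTraceId", "abc");
        ex.properties.put("routingState", "step3");
        ex.keepOnlyHeaders(Set.of("customerNo"));
        System.out.println(ex.outboundHttpHeaders()); // only customerNo remains
    }
}
```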
You can use the Content Modifier to modify the content of the message header and the Exchange property (as
well as of the message body) at one or more steps during message processing.
Remember
Do not modify headers or properties prefixed with SAP unless otherwise specified in the document. Modifying them can result in runtime issues during message processing.
You can use the message header and the Exchange property to configure various sophisticated ways of
controlling message processing.
When configuring an integration flow using the modeling user interface, you can define placeholders for attributes of certain adapters or step types. The value that is actually used for message processing is then set dynamically, based on the content of the processed message. Another option to derive such data from a message at runtime is to access a certain element in the message payload.
payload.
The following headers and Exchange properties are supported by the integration framework.
Note
A subset of these parameters is provided by the associated Open Source components, such as Apache
Camel.
In the list below, each parameter is given as: Name (Type: Header or Property, Related Component): Description.
● Archived-At (Header, Mail adapter): Specifies a link to the archived form of an e-mail.
● CamelAggregatedCompletedBy (Header, Aggregator): Is relevant for use cases with message aggregation. The header attribute can only have one of the following values:
○ timeout: Processing of the aggregate has been stopped because the configured Completion Timeout has been reached.
○ predicate: Processing of the aggregate has finished because the Completion Condition has been met.
● CamelCharsetName (Property, Encoder): Specifies the character encoding to be applied for message processing.
● CamelFileName (Header, SFTP receiver adapter): Overrides the existing file and directory name that is set directly in the endpoint. You can use this header to dynamically change the name of the file and directory to be called. If you do not enter a file name in the SFTP receiver adapter, the content of the CamelFileName header (if set) is used as file name. If this header is not specified, the Exchange ID is used as file name.
● CamelHttpMethod (Header, HTTPS sender adapter): Refers to the HTTP method name of the incoming request (GET, POST, PUT, DELETE, and so on). The HTTPS sender adapter sets this header.
● CamelHttpQuery (Header, HTTPS sender adapter, HTTP receiver adapter): Refers to the query string that is contained in the request URL (for example, CamelHttpQuery=abcd=1234). The HTTPS sender adapter sets this header. In the context of a receiver adapter, this header can be used to dynamically change the URI to be called.
● CamelHttpResponseCode (Header, HTTPS sender adapter): You can use this header to manually set the HTTP response status code.
● CamelHttpUri (Header, HTTP receiver adapter): Overrides the existing URI set directly in the endpoint. You can use this header to dynamically change the URI to be called.
● CamelHttpUrl (Header, HTTPS sender adapter): Refers to the complete URL called, without query parameters. For example, CamelHttpUrl=https://test.bsn.neo.ondemand.com/http/hello.
● CamelServletContextPath (Header, HTTPS sender adapter): Refers to the path specified in the address field of the channel. For example, if the address in the channel is /abcd/1234, then CamelServletContextPath is /abcd/1234.
● CamelSplitIndex (Property, Splitter): Provides a counter for split items that increases for each Exchange that is split (starts from 0).
● CamelSplitSize (Property, Splitter): Provides the total number of split items (if you are using stream-based splitting, this header is only provided for the last item, in other words, for the completed Exchange).
Example
Sample Code
Example of this use case: The XML signature verifier of the receiving system expects an XML signature as shown in the following code snippet.
<ds:Signature Id="_6bf13099-0568-4d76-8649-faf5dcb313c0">
<ds:SignedInfo>
...
<ds:Reference URI="#IDforB">
<ds:Transforms>
<ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature" />
<ds:Transform Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315" />
</ds:Transforms>
...
</ds:Reference>
</ds:SignedInfo>
<ds:SignatureValue>aUDFmiG71</ds:SignatureValue>
</ds:Signature>
● CamelXmlSignatureXAdESQualifyingPropertiesId (Header, XML Signer): Specifies the Id attribute value of the QualifyingProperties element.
● CamelXmlSignatureXAdESSignedDataObjectPropertiesId (Header, XML Signer): Specifies the Id attribute value of the SignedDataObjectProperties element.
● CamelXmlSignatureXAdESSignedSignaturePropertiesId (Header, XML Signer): Specifies the Id attribute value of the SignedSignatureProperties element.
● CamelXmlSignatureXAdESDataObjectFormatEncoding (Header, XML Signer): Specifies the value of the Encoding element of the DataObjectFormat element.
● Cc (Header, Mail receiver adapter): Additional e-mail address that the message is sent to.
● Content-Encoding (Header, HTTP receiver adapter): The encoding used during message transport (for example, gzip for GZIP file compression). This information is used by the receiver to retrieve the media type that is referenced by the content-type header. If this header is not specified, the default value identity (no compression) is used.
● Content-Type (Header, HTTP receiver adapter, Mail receiver adapter): HTTP content type that fits to the body of the request. The content type is composed of two parts: a type and a subtype. For example, image/jpeg (where image is the type and jpeg is the subtype).
Note
If transferring text/* content types, you can also specify the character encoding in the HTTP header using the charset parameter.
● Date (Header, Mail adapter): The date and time when the e-mail was sent.
● From (Header, Mail adapter): Email address that the message comes from.
● JMSTimestamp (Header, JMS Consumer): Time when a JMS message was created.
● Message-ID (Header, Mail adapter): ID that the mail system assigned to the e-mail when it was first created.
● Reply-to (Header, Mail adapter): Message ID of the message that this e-mail is a reply to.
● SAP_ApplicationID (Header): When you monitor the messages at runtime, you can search for all messages whose defined SAP_ApplicationID has a specific value (displayed as the MessageID attribute in the Message Monitoring editor).
Note
Only the first 120 characters are displayed.
As Type, select the XPath expression that points to the message element that is to be used as the application ID.
● SapAuthenticatedUserName (Header, SOAP sender adapter, XI sender adapter): User name of the client that calls the integration flow. If the sender channel is configured to use client certificate authentication, no such header is set (as it is not available in this case).
● SAP_CorrelateMPLs (Property): Specifies whether message processing logs (MPLs) are to be correlated with each other using a correlation ID.
You can use this header to specify that the behavior of the integration flow changes depending on the number of retries that are actually performed. For example, you can use this header to define that after a certain number of retries the message is routed to a specific receiver (for example, to send an alert message to a recipient). You can use this header in case you configure the XI adapter with Data Store as temporary storage.
● SAP_ERiCResponse (Header, ELSTER receiver adapter): The ELSTER receiver adapter sets this header. It contains a technical status created by the ERiC (ELSTER Rich Client) library.
● SAP_ErrorModelStepID (Property): You can use this property to set a Model Step ID for an integration flow step. This identifier is required to relate to an integration flow step in error handling.
● SapIDocType (Header, IDoc sender adapter, IDoc receiver adapter): This header contains the IDoc type from the sending system (for example, WPDTAX01).
● SapIDocTransferId (Header, IDoc sender adapter): This header contains the incoming IDoc number from the sending system (for example, 0000000000166099).
● SapIDocDbId (Header, IDoc sender adapter, IDoc receiver adapter): The IDoc receiver adapter sends a request and gets an XML response. The adapter parses the XML response and generates this header from it. The header contains the IDoc number from the receiver system (for example, 0000000000160816).
● SapPlainSoapQueueId (Header, IDoc sender adapter): Only relevant if the receiver channel is SAPRM. The header contains the QueueID from the receiver system.
● SAP_MAIL_ENCRYPTION_DETAILS_DECRYPTION_ALIAS (String) (Property, Mail sender adapter): The alias used for decryption of an encrypted mail.
● SAP_MAIL_ENCRYPTION_DETAILS_DECRYPTION_OK (boolean) (Property, Mail sender adapter): The received mail was successfully decrypted (not set, true, false).
● SAP_MAIL_ENCRYPTION_DETAILS_ERROR_MESSAGES (String) (Property, Mail sender adapter): There is an error message if the mail could not be decrypted.
● SAP_MAIL_SIGNATURE_DETAILS_ERROR_MESSAGES (Array of String) (Property, Mail sender adapter): The error message for a failed verification.
● SAP_MessageProcessingLogID (Property): Points to the message processing log for the respective Exchange. You can use this property to read the ID of the message processing log (no write access supported).
● SAP_MessageProcessingLogCustomStatus (Header): You can use this property to set an alphanumeric custom status (at most 40 characters) for the current message processing log. The value is transferred as CustomStatus attribute to the root part of the message processing log and then stored in the message processing log header table.
● SAP_MessageProcessingLogLevel (Header): Log level under which the message processing log for the corresponding message exchange is written. Allowed values are INFO, NONE, DEBUG (case-insensitive).
● SAP_ReceiverOverwrite (Property): Defines the handling of the message header SAP_Receiver. If set to true, the header SAP_Receiver will be overwritten with the new value in case a value is assigned to the SAP_Receiver header. If set to false, the new value is added to the already existing header content. The content is stored as a comma-separated list.
Note
Example configuration:
Name: SAP_ReceiverOverwrite
Type: Constant
Value: True
● SapDataStoreId (Header, Data Store): Entry ID used/set by the Data Store component.
● SapDataStoreMaxResults (Header, Data Store): Used to dynamically overwrite the configured number of polled messages in case of a Data Store SELECT operation.
● SAPJMSAlerttime (Header, JMS Consumer): Specifies the time when an alert needs to be sent. You can use this header to specify that the behavior of the integration flow changes depending on the number of retries that are actually performed. For example, you can configure a scenario where a mail is sent to an administrator with the message as an attachment and the integration flow is terminated successfully after a specified number of retries. You can use this header in case you configure the XI adapter with JMS Queue as temporary storage.
● SapMessageId (Header, SOAP (SAP RM) sender adapter, XI adapter): SAPRM and XI protocol header for the message identifier.
● SapMessageIdEx (Header, SOAP (SAP RM) sender adapter, XI adapter): SAPRM and XI protocol header for the message identifier.
● Sender (Header, Mail adapter): Specifies the actual sender (acting on behalf of the e-mail address stated in the From header).
● SOAPAction (Header, SOAP adapter): This header is part of the Web service specification.
● Subject (Header, Mail adapter): Specifies the subject of the e-mail message.
● To (Header, Mail adapter): Specifies the e-mail address that the message is sent to.
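As an illustration, the SAP_ReceiverOverwrite semantics described earlier in this list can be modeled in a few lines of Java. Only the header names come from the reference above; the class and method names are assumptions:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the SAP_ReceiverOverwrite semantics: if true, an assigned
// value replaces the SAP_Receiver header; if false, the value is
// appended to the existing comma-separated list.
class ReceiverHeaderSketch {
    static void assignReceiver(Map<String, String> headers, String value,
                               boolean overwrite) {
        String current = headers.get("SAP_Receiver");
        if (overwrite || current == null || current.isEmpty()) {
            headers.put("SAP_Receiver", value);
        } else {
            headers.put("SAP_Receiver", current + "," + value);
        }
    }

    public static void main(String[] args) {
        Map<String, String> headers = new HashMap<>();
        assignReceiver(headers, "ReceiverA", true);
        assignReceiver(headers, "ReceiverB", false);
        System.out.println(headers.get("SAP_Receiver")); // ReceiverA,ReceiverB
    }
}
```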
Related Information
You can define how to handle errors when message processing fails at runtime.
Context
Error messages can include sensitive data about systems. Therefore, error messages aren't directly returned to
the sender by default. Instead, an error template is returned. With this template, you can access the error
message after authenticating against SAP Cloud Platform Integration. You can enable this setting if you want to
provide the error message to the sender without any indirection.
Procedure
● Select Return Exception to Sender: When a message can't be processed, an exception is sent back to the sender.
● Deselect Return Exception to Sender (default setting): When a message can't be processed, no error handling strategy is used.
Use the Resources view to manage the different resources associated with integration content. The term "resources" refers to a collection of different categories of file types. A table in the Resources view displays files grouped by category, with their filenames sorted alphabetically. Expanding a resource category shows the files within that category.
Resource files with a link allow you to view or modify the content in a file-specific editor. You can modify the file content only when the integration content is in edit mode.
Note
● .xsl
● .gy, .groovy
● WSDL: .wsdl
● EDMX: .edmx, .xml
Adding Resources
You cannot add multiple archive (*.zip) files, or an archive file along with other resource files. Only resources with the following extensions and dependencies are uploaded:
Note
The resource upload described above is supported for Remote APIs as well.
Note
MMAP files can be added from ESR. To know more, see: Importing Mapping Content from ES
Repository [page 468]
You can also copy MMAP files from other integration flows. While adding an MMAP file, the dependent resources get copied along with the file. You see a summary of the list of dependent resources in a dialog box before adding the file.
6. In the Resources table, select one or more dependent files and choose OK to upload the files.
You can perform the following actions for managing resource files:
● (Delete): Removes the relevant resource file from the integration flow. Before deleting, make sure that the selected file is not being referred to in other integration flows.
● (Download): Downloads the respective resource file to your local file system.
Use this view to compare the default and configured values of the integration flow. The advantage of this view is that you see a consolidated view of all externalized parameter values.
Context
The integration developer uses this externalized parameter view to compare the default and configured values
for quality assurance.
Default values of parameters can be updated from the Externalize Parameter view, and the configured values are displayed in a read-only fashion.
Note
Modifying the default value of a parameter replaces the current value at all locations. Configured values always take precedence over the default value. Select the Configure option if any changes are required to the configured value.
This view offers two more options. You can use the Filter by Name/Value option to filter externalized parameter values either by name or by value.
You can use the Remove Unused option to remove unused externalized parameters. This option allows you to permanently remove the parameters that aren't consumed by any components of an integration flow.
For more information, see Externalize Parameters of an Integration Flow [page 489].
Procedure
Use this view to see all errors and warnings associated with integration components and resources.
The Problems view displays all issues related to integration components and resources that occurred during design time. This helps you identify and resolve issues that would have an impact at runtime.
The first column in the table displays the Severity status of an issue. In the second column, there is a
Description of the issue. The third column highlights the Location of the integration component or resource
where the issue was encountered. When you click the location ID, the application redirects you to the affected
configuration of the integration component or to the affected resource.
Example
Suppose a location is displayed as SFTP [Receiver_1]. Here the issue is related to an SFTP receiver adapter, identified by the name and the ID displayed in brackets. Some components may have only the ID displayed. Note that IDs are not displayed for resources.
Note
Content validation issues that occur when resources are edited in their respective editors are not propagated to the integration flow.
At design time of an integration project, how a message is to be processed is specified by a number of integration flow steps (for example, a content modifier, encryptor, or routing step). When an integration flow is being processed at runtime, errors can occur at several individual steps within the process flow (referred to as processing steps to differentiate them from the integration flow steps as modeled at design time).
Integration flow steps can specify message processing at different levels of complexity. Therefore, in general an integration flow step (design time) can result in multiple processing steps at runtime.
Each processing step gets a unique Step ID which is displayed in the message processing log.
For example, a content modifier step in an integration flow can (at runtime) be related to multiple processing
steps: The content modifier step can be configured in a way that one processing step changes the message
header, one the message body, and another one an exchange property.
To relate a modeled integration flow step (like the content modifier mentioned above) to an error occurring at a certain processing step at runtime, you can use an identifier which is referred to as the Model Step ID.
To allow an integration flow developer to relate to a certain integration flow step during error handling, the
runtime provides the Model Step ID of the integration flow step where the error occurred as Exchange Property
SAP_ErrorModelStepID. The content of the property can then be evaluated in the error handling.
Note
You can use the property for instance in a condition definition of a Router step to choose a different error
handling strategy depending on the step where the error occurred.
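Such a Router-style condition can be sketched in Java as follows. The step IDs and branch names here are invented for illustration; only the SAP_ErrorModelStepID property name comes from the text above:

```java
import java.util.Map;

// Sketch of a Router-style condition on SAP_ErrorModelStepID: pick an
// error-handling branch depending on the step where the error occurred.
// The step IDs and branch names are made-up examples.
class ErrorRoutingSketch {
    static String chooseBranch(Map<String, String> exchangeProperties) {
        String stepId = exchangeProperties.getOrDefault("SAP_ErrorModelStepID", "");
        switch (stepId) {
            case "ContentModifier_1":
                return "retryBranch";      // e.g. a transient mapping issue
            case "Encryptor_1":
                return "alertBranch";      // e.g. notify an administrator
            default:
                return "defaultErrorBranch";
        }
    }

    public static void main(String[] args) {
        Map<String, String> props = Map.of("SAP_ErrorModelStepID", "Encryptor_1");
        System.out.println(chooseBranch(props)); // alertBranch
    }
}
```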
Context
Procedure
1. In the Integration Package Editor, in the Artifacts section, select the required data flow from the list.
2. Choose Deploy.
3. In the Settings dialog box that appears, enter the following details:
Once you enter the above mentioned details, the application forms the Data Service URL.
4. Choose OK.
5. In the LOG ON screen that appears, enter the organization name.
6. Choose Log On.
The Data Integration application opens its Projects tab.
7. In the dialog box that appears, enter the following details as required:
○ Name
○ Source
Note
To switch to the SAP Cloud Platform Integration Web Catalog view from Data Integration using the Back button in the browser, perform the relevant substep:
○ If you use Internet Explorer, from the context menu of the Back button, select Cloud Integration.
○ If you use Google Chrome, click the Back button twice.
○ If you use Mozilla Firefox, from the context menu of the Back button, select Cloud Integration.
If you want to switch to the SAP Cloud Integration Web Catalog view from Data Integration using the Forward button in the browser, then by default it displays the Projects tab page without the Settings option. This behavior remains the same for Internet Explorer, Google Chrome, and Mozilla Firefox.
If you do not have authorization to access the Data Integration application then you get the following
error messages:
○ Error message for Google Chrome and Mozilla Firefox:
User is not included in the organization. Contact your security administrator for assistance.
○ Error message for Internet Explorer:
HTTP 404: Not found error
When you edit an integration package or an artifact, it is locked and cannot be edited by other users simultaneously. If you close your browser without saving your changes, or if there is a session timeout, the package or artifact remains locked until you unlock it.
1. Launch the SAP Cloud Platform Integration application by accessing the URL provided by SAP.
2. Go to the Design section, and select the locked integration package or artifact.
3. Choose Edit.
4. Select Yes when you are prompted to close the previous session.
The package or artifact is now unlocked and you can continue editing it.
Note
If the browser is closed accidentally or the session times out before you could save your changes, the unsaved changes will be lost.
If the integration package or artifact you want to edit is locked by another user, you see Locked by:
<username>. You cannot edit it. Ask the user who has locked the resource to release it by logging out of the
session.
Hyperlinks
Some links are classified by an icon and/or a mouseover text. These links provide additional information.
About the icons:
● Links with the icon : You are entering a Web site that is not hosted by SAP. By using such links, you agree (unless expressly stated otherwise in your
agreements with SAP) to this:
● The content of the linked-to site is not SAP documentation. You may not infer any product claims against SAP based on this information.
● SAP does not agree or disagree with the content on the linked-to site, nor does SAP warrant the availability and correctness. SAP shall not be liable for any
damages caused by the use of such content unless damages have been caused by SAP's gross negligence or willful misconduct.
● Links with the icon : You are leaving the documentation for that particular SAP product or service and are entering an SAP-hosted Web site. By using such links, you agree that (unless expressly stated otherwise in your agreements with SAP) you may not infer any product claims against SAP based on this information.
Example Code
Any software coding and/or code snippets are examples. They are not for productive use. The example code is only intended to better explain and visualize the syntax
and phrasing rules. SAP does not warrant the correctness and completeness of the example code. SAP shall not be liable for errors or damages caused by the use of
example code unless damages have been caused by SAP's gross negligence or willful misconduct.
Gender-Related Language
We try not to use gender-specific word forms and formulations. As appropriate for context and readability, SAP may use masculine word forms to refer to all genders.
SAP and other SAP products and services mentioned herein as well as
their respective logos are trademarks or registered trademarks of SAP
SE (or an SAP affiliate company) in Germany and other countries. All
other product and service names mentioned are the trademarks of their
respective companies.