ASAP Videos Notes


Message retry: synchronous messages cannot be retried

Synchronous: when a request goes from the sender to the receiver, the receiver sends a response back to the sender.
Asynchronous: the sender does not wait for a response; the message is delivered independently, which allows the middleware to store and retry it.

Setup Scenario With Asynchronous Decoupling

To configure the decoupling of inbound and outbound message processing, you need to configure two processes: one process to receive the inbound message and store it in the JMS queue, and a second process to trigger the message from the JMS queue to the receiver backend. The blog describes the configuration using two separate integration flows.

Configure the Integration Flow Receiving the Message

The first integration flow will be configured to receive the message via any inbound adapter. In this
sample setup we use the HTTP adapter.

Configure the JMS Receiver Channel

Create the integration flow with the inbound channel required by your scenario, and use the JMS adapter as the outbound adapter. You only have to configure the Queue Name for the JMS queue that will be used to store the messages. It is important to set an Expiration Period that suits your scenario. The default value is 90 days, which means that any messages not delivered within this period will be deleted from the queue.
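
The Expiration Period behaves like a JMS message time-to-live. As a minimal sketch of that semantics using the plain JMS 2.0 API (the queue name and connection factory are illustrative; in CPI you set all of this in the channel UI, not in code):

import javax.jms.*;

public class ExpirationSketch {
    // Illustrative only: CPI configures queue name and expiration in the
    // JMS receiver channel; this shows the equivalent JMS time-to-live.
    static void send(ConnectionFactory factory, String payload) {
        try (JMSContext ctx = factory.createContext()) {
            Queue queue = ctx.createQueue("MyScenarioQueue");   // hypothetical queue name
            long ninetyDaysMs = 90L * 24 * 60 * 60 * 1000;      // default Expiration Period
            ctx.createProducer()
               .setTimeToLive(ninetyDaysMs)   // undelivered messages expire after this
               .send(queue, payload);
        }
    }
}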
With the 10 December 2017 release, you have the option to configure whether exchange properties are forwarded to the JMS queue. Before, the properties were always forwarded along with the message, which caused severe issues in some scenarios. The default is that properties are not forwarded, because forwarding could lead to runtime issues if the properties are too large. The size restrictions are described in detail in the blog 'JMS Resource and Size Limits'.

This option is available in adapter version 1.1.


Configure Transaction Handling

You need to configure the correct transaction manager in the integration process for transactional end-to-end processing. Our process does not contain any JDBC resources (data store, variables, aggregator) and has only one direct JMS receiver without a splitter or sequential multicast, so we don't need a specific transaction handler. Select the integration process and switch to the Processing tab. In the Transaction Handling dropdown, select Not Required.
More details about the different transaction handling options and existing limitations are described in
the blog 'How to Configure Transaction Handling in Integration Flow'.

Deploy the Integration Flow

Now you can deploy the integration flow. During deployment, the specified queue will be created
automatically in the message broker. You can check that the queue has been created in the Queue
Monitor available in the operations view in the Manage Storages section. The monitor is described in
more detail in the Monitoring section below.

If a message broker has not yet been provisioned, the deployment will end with an error. In Manage
Integration Content the error details for your integration flow tell you that no messaging host is
available.

Configure the Integration Flow Doing the Retry

To consume the messages from the JMS queue, you configure a second integration flow with a JMS
sender channel and the outbound adapter needed for your scenario. In this sample configuration we use
the HTTP adapter.

Configure the JMS Sender Channel

Create the integration flow with the outbound channel required by your scenario, and use the JMS
adapter as the inbound adapter. You only have to configure the Queue Name for the JMS queue that
you want to poll the messages from. Use the same queue name used in the receiving integration flow.

Number of Concurrent Processes is set to 1 by default and should only be increased if more parallel processing is needed. Because the JMS resources are limited, you need to keep the number of parallel processes as low as possible. Especially if large messages are processed in the scenario, increasing the parallelism may lead to out-of-memory problems.

In the JMS sender channel you also have to configure the retry handling. Set the Retry Interval as required for your scenario (the default value is 1 minute). We also recommend using Exponential Backoff, which means that the retry interval is doubled after each unsuccessful retry. This setting avoids calling the receiver backend every minute if it is unavailable for a longer period. The Maximum Retry Interval defines the maximum time between two retries if exponential backoff is used. The recommendation is to keep the 60 minutes or even increase it if this is acceptable for the scenario.
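
As a quick illustration of the backoff arithmetic (my sketch, not CPI code; the adapter computes this internally): with a 1-minute retry interval, exponential backoff, and a 60-minute maximum, the waits are 1, 2, 4, 8, 16, 32, 60, 60, ... minutes.

public class BackoffSketch {
    // Schedule implied by: Retry Interval = 1 min, Exponential Backoff = on,
    // Maximum Retry Interval = 60 min.
    static long nextRetryMinutes(int attempt, long baseMin, long maxMin) {
        long interval = baseMin << Math.min(attempt, 20); // double per attempt, capped shift
        return Math.min(interval, maxMin);
    }

    public static void main(String[] args) {
        for (int attempt = 0; attempt < 8; attempt++) {
            System.out.println("retry " + (attempt + 1) + " after "
                + nextRetryMinutes(attempt, 1, 60) + " min"); // 1, 2, 4, 8, 16, 32, 60, 60
        }
    }
}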

Note that the JMS sender does not guarantee the order in which the messages are consumed. If several processes are configured to consume messages and/or multiple worker nodes process messages, the messages are processed in parallel. Furthermore, if a message is in retry because of an error, it is taken out of processing until the retry interval is reached.

Configure Transaction Handling

If you are using JMS Sender, it is not necessary to set the transaction handling to Required for
JMS, because the retry handling works independently of the selected transaction handler. You should
select the transaction manager as required by the scenario that you have configured in the integration
flow.

In the simple example shown, no JDBC or JMS resources that need a transaction handler are used, so we select Not Required.
There are some limitations on the transaction handling configuration and on which flow steps can be used in which combinations. This is described in detail in the blog 'How to Configure Transaction Handling in Integration Flow'.

Optional: Explicit Retry Configuration

In the JMS sender channel, you can configure the time intervals between retries, but you cannot configure that processing ends after a specific number of retries. If required, you can configure this in an exception subprocess that calls a local process for retry handling, using the header SAPJMSRetries set by the JMS sender adapter.

Note that the header SAPJMSRetries may not indicate the exact number of retries in clusters with multiple worker nodes. It may happen that a retry is triggered on multiple worker nodes shortly one after the other before the message is moved to the error queue and the counter in the header is increased.

To configure this, add a Local Integration Process to the integration flow. In this process you configure
the specific retry handling. In this sample, we check the number of retries executed and after 6
unsuccessful retries, we end the processing and trigger a mail to alert someone to the problem.

In the local integration process, configure a Router after the start event, add an Error End Event, and configure the router as shown in the picture. The header SAPJMSRetries set by the JMS adapter is used to decide whether message processing continues, or whether it ends and a mail is triggered. If message processing is to continue, an error is raised by the process so that the message stays in the JMS queue and goes into retry status.
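
The router condition itself is just flow configuration (typically a Camel Simple expression on the SAPJMSRetries header; the exact expression depends on your setup). As a sketch of the decision logic only, with an assumed threshold:

import java.util.Map;

public class RetryRouterSketch {
    // Mirrors the router in the local process: stop after maxRetries,
    // otherwise raise an error so the message stays in the JMS queue.
    static boolean shouldStopAndAlert(Map<String, Object> headers, int maxRetries) {
        Object value = headers.get("SAPJMSRetries");   // set by the JMS sender adapter
        int retries = value == null ? 0 : Integer.parseInt(value.toString());
        return retries > maxRetries;   // true -> end processing and trigger the mail
    }
}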
We set the transaction handling to From Calling Process in the local process, and as there are neither JMS nor JDBC transacted resources configured in the local process, we take over the Not Required setting from the main process.

Furthermore, you need to add an Exception Subprocess in the main process and, within it, add a Local Process Call where you select the configured local process. Add a Receiver for the mail to be sent. Connect the Message End Event of the exception subprocess to the receiver using a Mail adapter receiver channel. In this channel, you configure the mail address the mail is to be sent to and the details to be sent.

With this configuration, the message is removed from the JMS queue after six unsuccessful retries, so that no further retries are executed. A mail is triggered to notify the appropriate person to take manual action.

If you use the explicit retry configuration, you are free to configure retries as required for your scenario. For example, you can notify someone after 3 retries but not stop the processing until 10 retries have been made, to allow enough time for manual action. Or, you can send the message after 3 retries via an alternative receiver channel.

In the middle layer (CPI or PI), we cannot resend the message, say, from a web page. Suppose you get an order request from Flipkart and at that time SAP is down: the request reaches PI, but the message from PI to SAP fails. Say tomorrow SAP is up and we resend the message from PI; the order will be created in SAP, but the order confirmation will not go back to Flipkart, because the synchronous connection to the sender is gone.

We cannot resend a failed synchronous message, since the sender expects a response to the request it sent. If the request message fails at some stage before reaching the receiver, the sender gets the error log as the response, and the message ends up in a canceled state with restart not possible.

The only possible way for synchronous messages is to retrigger them from the sender system. Usually synchronous messages can't be resent; you would need to change the status of the message and then resend it, but the better way is to resend the message from the source system. Synchronous messages can never be reprocessed.

Use Case

Scenario: A customer places an order via a web application that triggers a synchronous API call to an ERP
system for order processing.

 Initial Request: The customer submits their order, and the web application sends a synchronous
message to the ERP to confirm the order.
 Processing Time: The ERP takes time to process the order and respond with a confirmation.
 Failure Scenario: If the web application times out waiting for the ERP’s response, it may attempt to resend the order (see the sketch after this list). However, this could result in:
o Duplicate orders in the ERP.
o An inconsistent state where the customer receives multiple confirmations or faces issues due to overlapping transactions.
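
One common mitigation (my addition, not from the notes) is to have the client send an idempotency key with each order so the ERP can detect duplicates on resend. A minimal sketch with hypothetical names:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class OrderEndpointSketch {
    // Confirmations already issued, keyed by the client-supplied idempotency key.
    private final Map<String, String> processed = new ConcurrentHashMap<>();

    // A resend with the same key returns the original confirmation
    // instead of creating a duplicate order.
    public String placeOrder(String idempotencyKey, String orderPayload) {
        return processed.computeIfAbsent(idempotencyKey,
            key -> createOrderInErp(orderPayload));   // runs at most once per key
    }

    private String createOrderInErp(String orderPayload) {
        return "confirmation-for-" + orderPayload;    // placeholder for the real ERP call
    }
}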

Conclusion

To handle failures or the need for retries, asynchronous messaging patterns are generally recommended
in scenarios where responses can be delayed or where multiple attempts might be necessary. In the
case of synchronous interactions, proper error handling and user feedback mechanisms should be
implemented to manage the situation without resending the message.

------------------------------------------------------------------------------------------------------------------------
JMS:

Asynchronous Messaging with Retry Using the JMS Adapter

In many cases, integration scenarios have to be decoupled asynchronously between sender and receiver message processing to ensure that retries are done from the integration system rather than the sender system. This can be achieved, for example, by using JMS queues to temporarily store the messages in the Cloud Integration system if the receiver system cannot be reached, and to retry them from there. Follow the steps described below to activate the message broker and to set up the scenario. Use the monitoring tools described to check the processing and view the stored messages in the queue monitor.
What is the difference between JMS and Data Store in SAP CPI?
While a JMS sender can be configured to consume multiple messages in parallel, the Data Store sender will only pick up the next message after the previous message has been processed. With that approach, it is not possible to send multiple messages for the same interface in parallel to the receiver system.

Data Store, Variables, and JMS Queues: When to Use Which Option?

Cloud Integration supports various integration patterns. Certain integration patterns require a storage
location to read and write the content of a message during the execution of a scenario. When the
message content is needed for later processing steps or if the message content is evaluated in a later
phase, storage steps are required. Storing data is also required in asynchronous message patterns. In
this section, we discuss how to use the following storage options (in the context of different use cases):

 Global data store
 Global variable
 Java Messaging Service (JMS) message queue

These are the main characteristics of the data store and the JMS queues (for more technical information like the storage size, for example, see Using Data Storage Features When Designing Integration Flows).

Option: Data store/variable

The data store is used to persist messages on the database of your SAP Cloud Integration tenant. You can use the Data Store Operations integration flow steps (Write, Get, Select, and Delete) to do operations on the data store.

You can use the Data Store sender adapter to allow SAP Cloud Integration to consume messages from a data store. This feature helps you to enable asynchronous decoupling of inbound and outbound processing by using the data store as temporary storage. When configuring the Data Store sender adapter, you can specify parameters such as the poll interval that determines the time to wait before consuming messages from the data store. You can also define the retry behavior.

This feature allows you to set up scenarios with reliable messaging using asynchronous decoupling.

Option: JMS queue

You can use the JMS receiver adapter to store messages in a JMS queue. One important key characteristic of the Java Message Service (JMS) feature is the support of high-speed messaging with high throughput. This is why it offers the optimal solution for reliable messaging using asynchronous decoupling. JMS is a Java-based standard application programming interface (API) for sending and receiving messages. It enables efficient asynchronous communication between different components, based on a JMS message broker. The JMS message broker is a separate runtime component that ensures that messages in JMS queues are treated separately. The JMS adapter is used to store messages in the JMS queue and to consume messages from the JMS queue in the JMS message broker.

The processing sequence used by the JMS adapter is first-in, first-out (FIFO). However, because of parallel processing and retry handling, this does not mean that messages are processed in a guaranteed order.

The following questions help you to decide which option to use.


Use Cases for Data Store and Variables

Store and Pickup


Assume that none of the communication partners involved in the integration scenario can open its firewall for calls from outside. That means a communication partner needs to fetch data from the cloud.

In such a case, you can implement a scenario involving two integration flows in the following way.

1. Integration flow 1 stores a message (received from a connected sender component) in the data
store.
2. Integration flow 2 (initiated by an external call or a Data Store sender adapter) actively polls the
message from the data store.

3. Integration flow 2 gets the message from the data store.


In this scenario, the time when a message is picked up by integration flow 2 isn't known. It's recommended to design the scenario in such a way that integration flow 2 removes the message only after having received a confirmation from the receiver system.
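
A minimal sketch of that confirm-then-delete discipline, using an in-memory map in place of the data store (in CPI these are the Data Store Write/Get/Delete steps, configured rather than coded):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class StoreAndPickupSketch {
    private final Map<String, String> dataStore = new ConcurrentHashMap<>();

    // Integration flow 1: Data Store Write.
    void write(String entryId, String message) {
        dataStore.put(entryId, message);
    }

    // Integration flow 2: get, deliver, and delete only after confirmation.
    void pickUp(String entryId) {
        String message = dataStore.get(entryId);
        if (message == null) return;                  // nothing to pick up
        if (deliverToReceiver(message)) {
            dataStore.remove(entryId);                // Data Store Delete on success only
        }                                             // otherwise the entry stays for retry
    }

    private boolean deliverToReceiver(String message) {
        return true;                                  // placeholder for the real delivery
    }
}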

Store for Later Pick Up


Assume that your business case requires a message to be "parked" for a certain time (for example, because the associated backend isn't yet ready to process the message). For example, in an integration scenario a message is processed and finally sent to a receiver. However, if the receiver cannot be reached, the message is stored in the data store. At a later point in time (when the receiver is ready), the message is picked up and sent to the receiver.

To illustrate such a setup, let us imagine a component (Readiness Checker) that "tells" the integration
scenario whether processing can continue or not.

In this setup, Cloud Integration waits for a state change before continuing message processing at certain
steps. The actual state is requested from a remote component (Readiness Checker).

Both integration flows request a status (Ready or Not Ready) from the Readiness checker and only
continue processing the message in the Ready case.

In case the status is Not Ready, integration flow 1 stores the message in the data store (to be picked up
later by integration flow 2).

Integration flow 2 repeatedly requests the actual status from the Readiness Checker and continues to do so (controlled by a scheduler) until the Readiness Checker provides the status Ready. Only then does it continue message processing: it takes the message from the data store and carries on. In the figure, we assume that the status Ready is provided as response after the 2nd request. The numbered steps below walk through this; a sketch of the polling loop follows them.

1. Integration flow 1 requests the actual state from the Readiness Checker.

In case the status Ready is provided as response, integration flow 1 continues processing as
designed for the success case, and we're done.

2. The Readiness Checker provides the status Not Ready as response.


3. Integration flow 1 stores ("parks") the message in the data store.
4. Integration flow 2 requests the actual state from the Readiness Checker for the first time.
5. The Readiness Checker provides the status Not Ready as response (let's assume this).
6. After a certain time period, integration flow 2 requests the actual state from the Readiness
Checker again.
7. Let us assume that now the Readiness Checker provides the status Ready as response.
8. Integration flow 2, in this case, picks up the message from the data store.
9. Integration flow 2 reads the message from the data store (and continues processing).
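
A sketch of integration flow 2's polling loop, under the assumption that the Readiness Checker is a simple remote status call (names are illustrative):

public class ReadinessPollingSketch {
    enum Status { READY, NOT_READY }

    // Integration flow 2: ask the Readiness Checker on a schedule and only
    // pick up the parked message once the status is READY.
    void pollAndProcess(long pollIntervalMs) throws InterruptedException {
        while (requestStatus() == Status.NOT_READY) {
            Thread.sleep(pollIntervalMs);             // the scheduler's wait between requests
        }
        String message = readFromDataStore();         // pick up the parked message
        continueProcessing(message);
    }

    private Status requestStatus() { return Status.READY; }          // placeholder call
    private String readFromDataStore() { return "parked-message"; }  // placeholder read
    private void continueProcessing(String message) { }              // rest of the flow
}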

Store Data Once, Read Multiple Times


Assume that in your integration scenario multiple remote systems (consumers) need to access data from another remote component (provider). Furthermore, you want to avoid all consumers performing individual requests to get the same data from the provider.

In such a case, you can request the information from the provider once (with 1 integration flow) and
store the information as a global variable (using the Write Variables step). Global variables are stored in
the same database as Data Store content (see Using Data Storage Features When Designing Integration
Flows).

The consumers can access the information from the global variable (through additional integration
flows) instead of performing separate calls to the provider.
The figure shows a setup where information from the WebShop (which is the provider component) is
requested by multiple consumers.

1. Integration flow 1 requests the information from the provider.

As an example, think of a scenario where a product price is read on a daily basis from the WebShop by an integration flow.

2. As a response, integration flow 1 gets the information.


3. Integration flow 1 stores the information as global variable.

Each consumer (consumer A or consumer B in our example setup) can access the information from the
global variable instead of calling the provider again.
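
A sketch of the write-once, read-many pattern, with a static reference standing in for the global variable (the Write Variables step and the consumer flows are configuration in CPI; names here are illustrative):

import java.util.concurrent.atomic.AtomicReference;

public class GlobalVariableSketch {
    // Stands in for the global variable written by the Write Variables step.
    private static final AtomicReference<String> PRODUCT_PRICE = new AtomicReference<>();

    // Integration flow 1: fetch from the provider once (e.g. daily) and store it.
    static void refreshFromProvider() {
        PRODUCT_PRICE.set(callWebShop());   // one provider call serves all consumers
    }

    // Consumers A and B: read the stored value instead of calling the provider.
    static String readPrice() {
        return PRODUCT_PRICE.get();
    }

    private static String callWebShop() {
        return "19.99";                     // placeholder for the WebShop request
    }
}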

Asynchronous Decoupling of Messages - Option 1


Some sender systems can't perform a retry if there's a failure in transferring the message.

To support such a use case, you can implement a scenario like the one depicted in the figure.
In the scenario, a global data store is used.

1. Integration flow 1 receives the message from the sender and stores it in the data store (using
the Data Store Write step).
2. Integration flow 2 (using the Data Store sender adapter) actively polls the message from the
data store. Alternatively, this step can also be triggered by a Timer step with a scheduler.
3. Integration flow 2 reads the message from the data store.

4. Integration flow 2 sends the message to the receiver.

In this scenario, integration flow 2 must remove the message (using a Data Store Delete step) only once the message has been transferred successfully to the receiver.

Asynchronous Decoupling of Messages - Option 2


An alternative usage of the data store to asynchronously decouple inbound from outbound processing is shown in the following figure.
In this scenario, integration flow 1 stores the message in the data store only if it can't be delivered to the receiver (failure case). This measure guarantees that the "expensive" operation that involves database persistence is only performed when necessary.

Integration flow 2 checks regularly for failed messages that need to be resent to the receiver. In detail, the following steps are processed (a sketch of the failure-path logic follows the steps).

In the scenario, a global data store is used.

1. Integration flow 1 tries to send the message to the receiver, but this step fails.
2. Integration flow 1 stores the message in the data store.

To handle exceptions, integration flow 1 is designed in such a way that an exception subprocess
takes over this step (see Handle Exceptions).

3. Integration flow 2 (using the Data Store sender adapter) actively polls the message from the
data store. Alternatively, this step can also be triggered by a Timer step with a scheduler.
4. Integration flow 2 reads the message from the data store.

5. Integration flow 2 sends the message to the receiver.
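
A sketch of the failure path in integration flow 1 (the try/catch stands in for the exception subprocess; names are illustrative):

public class StoreOnFailureSketch {
    // Persist only in the failure case, so the "expensive" database write
    // happens only when delivery to the receiver fails.
    void process(String entryId, String message) {
        try {
            sendToReceiver(message);                  // normal path, no persistence
        } catch (Exception deliveryFailed) {
            writeToDataStore(entryId, message);       // exception subprocess: Data Store Write
        }
    }

    private void sendToReceiver(String message) throws Exception { } // placeholder send
    private void writeToDataStore(String entryId, String message) { } // placeholder write
}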

Data Store Sender Adapter

This adapter enables Cloud Integration to consume messages from a data store. This feature helps you
to enable asynchronous decoupling of inbound and outbound processing by using the data store as
temporary storage.
To understand the concept of asynchronous decoupling, assume that a sender sends a message to Cloud Integration (inbound processing). If there's an error in outbound processing (for example, a receiver can't be reached temporarily), the middleware (Cloud Integration) retries message processing independently. There's no need for the sender to trigger a reprocessing of the message as soon as the error situation is fixed; the sender relies on the middleware to do that. To support this scenario, Cloud Integration stores the message received from the sender in the data store. To implement this step, you can design a dedicated integration flow that receives the sender's message and uses a Data Store Write step to store it in the data store; see: Define Data Store Write Operations.

Furthermore, you model outbound processing in an integration flow that initially consumes the message from the data store (using the Data Store sender adapter). The outbound integration flow retries the message from the data store for as long as the error situation lasts.

The following figure depicts the described example setup.

You can also model the steps that write into the data store and those that consume messages from it in the same integration flow.

If multiple worker nodes are set up, there's no parallel processing of the same data store entry by these
multiple worker nodes.
To use this adapter type in an integration flow, connect a Sender with a Start Message shape and select
adapter type Data Store.

Once you've created a Data Store sender channel, you can configure the following attributes.

Go to the General tab to configure the following adapter parameters.

General

Name: Enter the name of the channel.

Go to the Connection tab to configure the following adapter parameters.

Connection

Data Store Name: Specifies the name of the data store (no white spaces). The maximum length allowed for the data store name is 40 characters. If you enter a longer string, a validation error is raised. Note that this length restriction applies to the value that is used for this parameter at runtime.

Visibility: Defines whether the data store is shared by all integration flows (deployed on the tenant) or only by one specific integration flow.

 Global: Data store is shared across all integration flows deployed on the tenant.
 Integration Flow (default setting): Only a single integration flow uses the data store. A data store configured with this setting is also referred to as a local data store.

For more information and guidelines on how to use this parameter, see: Anticipate Message Throughput When Choosing a Storage Option.

Poll Interval (in s): Specify the poll interval in seconds to wait before consuming messages from the data store. The default is 10 seconds, the minimum is 1 second, and the maximum is 300 seconds. The adapter continuously consumes messages from the data store if the data store contains entries that are ready to be processed. The poll interval only becomes effective as soon as the data store doesn't contain such entries anymore. From that point in time, the adapter waits for the time specified by the Poll Interval parameter and then tries again to consume messages from the data store.

Note: The smaller the poll interval (for example, 1 second or less), the more load is put on the data store.

Retry Interval (in min): Enter a value for the amount of time to wait before retrying message delivery. The default is 1 minute, the minimum is 1 minute, and the maximum is 1440 minutes (24 hours).

Exponential Backoff: Select this option to double the retry interval after each unsuccessful retry. By default, this option is deselected.

Maximum Retry Interval (in min): You can set an upper limit on the retry interval to avoid an endless increase. The default is 60 minutes, the minimum is 10 minutes, and the maximum is 1440 minutes (24 hours).

Lock Timeout (in min): Enter a value for the timeout of the in-progress repository. After this time, a message is retried in case of a cluster outage. The default is 10 minutes, the minimum is 1 minute, and the maximum is 300 minutes (5 hours).

Use Case for JMS

Using message queues (JMS queues), you can implement asynchronous decoupling of messages in an improved way compared to using the data store (see Use Cases for Data Store and Variables).
To connect to the JMS queue, use the JMS sender and receiver adapters as shown in the following process flow.

1. Integration flow 1 stores the message (received from the sender) in the JMS queue using the
JMS receiver adapter.
2. The message broker pushes the message to integration flow 2 (through the JMS sender
adapter).
3. Integration flow 2 sends the message to the receiver.

If there's a transfer failure, the message is retried automatically from the message broker.

Integration flow 2 can process multiple messages in parallel. This way, the performance in high volume
scenarios is improved.

[SAP CPI] – HOW TO USE JMS ADAPTER IN SAP CPI TO RESEND MESSAGE

 Take a short look at the diagram below.


In this case, the message from the sender is sent to a queue, called the message queue. From there, it is sent to SFTP. If there is any issue with the SFTP server, the message is held in the queue and resent.

Step 1 : Create the integration flow for the message queue.

To keep it simple, this integration flow includes just one sender (HTTPS) and one receiver (JMS).

Queue Name : the name of the queue in CPI


When we use POSTMAN to run this integration flow, we will see the message in Message Queues.
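
Any HTTP client works in place of POSTMAN; as a sketch with java.net.http (the endpoint URL, credentials, and payload are placeholders for your tenant):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class SendTestMessage {
    public static void main(String[] args) throws Exception {
        String endpoint = "https://my-tenant.example.com/http/queue-demo";  // placeholder URL
        String auth = Base64.getEncoder()
            .encodeToString("user:password".getBytes());                    // placeholder credentials
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(endpoint))
            .header("Authorization", "Basic " + auth)
            .POST(HttpRequest.BodyPublishers.ofString("<order><id>1</id></order>"))
            .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());  // 2xx means the message reached the queue
    }
}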

Step 2 : Create the integration flow that consumes the message queue from step 1

In this step, we create a new integration flow to consume the message queue. We just add a JMS sender adapter with the name equal to the name of the queue in step 1, and an SFTP adapter as the receiver. We simulate two cases:

 Case 1 : Cannot connect to SFTP. We will see that the message stays in the queue; the queue retries sending the message at short intervals.
 Case 2 : The SFTP connection is OK, so the message sent from POSTMAN reaches SFTP.

This is the integration flow using the JMS adapter as the sender.


Queue Name : the name of the message queue configured in step 1

Step 3 : Test cases

Case 1 : The message is sent to the queue, but the connection to SFTP fails. The message is held in the queue.

 POSTMAN to queue

 Queue to SFTP -> fail


The message is held in the queue.

Case 2 : The message is consumed from the queue and delivered to SFTP.


[SAP CPI] – WORKING WITH SIMULATION IN SAP CPI

#1. Integration Simulation in SAP CPI

In this first tip, I talk about how to run an integration flow in simulation mode. As an example, I have an integration flow that collects data from XML and inserts it into a database via the JDBC adapter.

I want to run the simulator to check whether my integration is right or wrong. This is the step by step I do.

 In read-only mode, click on the arrow from Start to the message mapping -> choose the start point.
 Click on the Start Simulation icon -> enter the body of the request. If your request includes headers or properties, also add them here.
 Click on the arrow where you want to stop the simulation -> choose the end point.

 After setting the start point and end point, go to the menu and choose Start Simulation.
 When the simulation runs successfully, we can click every envelope to view the message data output.

#2 XML & XSD

Because CPI mapping uses the XML and XSD formats, we have to know how to derive an XSD from an XML sample.

Go here to convert XML to XSD

#3. Integration flow simulation with an input ZIP file sent to SFTP

 We have a ZIP file which includes many files; the integration flow will split out every file with its file name and after that send them to SFTP.
 Create the integration flow

 Configuration of the ZIP Splitter


 Configuration of the Content Modifier, which gets the file names from the request

NOTES

${header.CamelFileNameOnly.replaceAll("[0-9A-Za-z]+\/","")}
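
The expression uses Java's String.replaceAll on the CamelFileNameOnly header to strip any leading folder segments. The same call in plain Java (the sample file name is illustrative):

public class FileNameSketch {
    public static void main(String[] args) {
        // Same regex as the expression above: removes "folder/" prefixes.
        String camelFileNameOnly = "archive2024/invoice.txt";
        String fileName = camelFileNameOnly.replaceAll("[0-9A-Za-z]+/", "");
        System.out.println(fileName);   // prints "invoice.txt"
    }
}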

 Set the start point on the arrow from the Sender to the ZIP Splitter


 Set the end point after the Content Modifier.

 Click on the start point and choose the ZIP file as the body


 Run the simulation. After it succeeds, we will see the list of files split from the ZIP file.

Summary

In this article, I shared step by step how to work with simulation in SAP CPI. It is useful for testing an integration flow and viewing the data at every step in the flow. Thanks for reading; if you have any questions, kindly leave a comment below.

[SAP CPI] – HOW TO CONFIGURE CPI TO CONNECT TO SFTP WITH A PRIVATE/PUBLIC KEY
Hi guys, in this article I share step by step how to configure the connection from SAP CPI to an SFTP server with a private/public key. For configuring the connection from CPI to SFTP using user credentials, kindly see this blog.

First, take a short look at this diagram


For secure SSH communication, a known hosts file has to be deployed in the Cloud Integration tenant containing the public host key of the SFTP server, so that the SFTP server will be trusted.

Furthermore, for public key authentication with the SFTP server, a private key has to be maintained in the Cloud Integration tenant keystore. User/password authentication can be used instead; in this case, user credentials have to be deployed in the Cloud Integration tenant. The recommended configuration option for secure communication is public key authentication.

After configuring the SFTP server, we will have the following information about it:

 User name
 Passphrase
 Host name
 Private key file (*.ppk)

Let’s go

Step 1 : Export the private key (*.ppk) into an SSH key

 Open WinSCP
 Choose Tools
 Choose the item Run PuTTYgen
 Choose the Load button to load the .ppk file
 Export to OpenSSH key

 Save this file to use in step 2


Step 2 : Download OpenSSL for Windows

 Go here to download OpenSSL


 Copy it to C:\OPENSSL

 Create a folder SSL and copy the file openssl.cnf into it

 In the OpenSSL folder, run CMD as administrator


Step 3 : Create an X.509 certificate from the SSH key created in step 1

openssl req -new -x509 -days 3650 -key SFTP_PrivateKey_Demo.pem -out SFTP_x509cert_Demo.pem

After this step, we have one *.pem file in the folder

Step 4 : Create a PKCS#12 key (.p12) from the X.509 certificate in step 3

openssl pkcs12 -export -in SFTP_x509cert_Demo.pem -inkey SFTP_PrivateKey_Demo.pem -out sftp_keystore_demo.p12

-in <X.509 certificate from step 3>
-inkey <private key from step 1>

Enter pass phrase for private key : this is the passphrase obtained from the administrator when the SFTP was configured with the PPK file.

Enter export password : this is a password we create ourselves, to be used in the step where we import the certificate into CPI.

After this step, we have the PKCS#12 file (*.p12) in the folder


Step 5 : Create the known hosts file in CPI

 Go to the Integration Suite application


 Go to the integration flow design
 Go to the Monitor item in the left menu
 Go to Connectivity Test

NOTE

If checking a host on-premise through SAP Cloud Connector, we must choose On-Premise as the Proxy Type.

 Create a text file, paste the Host Key into it, and give the file a name
 Add this known hosts file to CPI
Step 6 : Import the PKCS#12 (.p12) file from step 4 into SAP CPI as a Key Pair

NOTE

The password is the export password from step 4.


Step 7 : Test SFTP and the access permissions for the folder on SFTP

 Go to Connectivity Test in the SAP CPI monitor

(1) Public key


(2) User name to connect to SFTP

(3) Key pair from step 6

 Test the access rights to the folder
